OK, so Samsung, SK Hynix and Micron do not have the capacity to meet demand. Also, what little capacity they do have, they are allocating to HBM over commodity DRAM. Based on my limited knowledge, HBM cannot be easily repurposed for consumer electronics. Translation: Main Street is cooked for the next 3-4 years.
It doesn't stop there though. OpenAI is currently mired in a capital crunch. Their last round just about sucked all the dry powder out of the private markets. Folks are now starting to ask difficult questions about their burn rate and revenue. It is increasingly looking like they might not commit to the purchase order they made which kick-started this whole panic over RAM.
Soo ... how sure are we that the memory makers themselves are not going to be the ones holding the bag?
The Radeon VII came out in 2019 as a $700 consumer GPU with a 1TB/s HBM2 memory subsystem which is more than any consumer GPU you can get today, including the high-end ones afaik. At that point in time, there was a whole lineup of AMD GPUs with HBM going down into the midrange.
If they could make this stuff and sell it to regular people a decade ago for very palatable prices, why do they come up with the idea that this is the technology of the gods, unaffordable by mere mortals?
I have been wondering about this recently. The conventional wisdom was that if you wanted to keep costs down, you kept the memory bus as narrow as possible. I still remember the awful Radeon 9200 SE - a 64-bit data bus that strangled an already slow GPU.
Heck, I have a phone with a 16-bit memory bus, for instance. The high(ish) clock rate only partly makes up the difference.
But with general prices on all components going up, it might not be such a big factor any more.
HBM might make sense for higher-end products, which would free up conventional memory for the lower end that will never use the tech.
Vega was a card with decent perf/$ for the consumer, but from a pure technical point of view (perf/mm2, perf/BW, perf/W) it was a major failure. Both Vega and Fiji before it showed that excess memory BW alone is not sufficient to win.
There are a lot of cyclical businesses that make money every year. It requires careful management. Factories can produce at less than full capacity - but you had better design for that. You can make money even in the worst years without laying anyone off - but it requires careful attention to detail and not over-hiring in good times as if they will never end.
Running factories at (significantly) less than full capacity gets a bit harder when you've got some of the most expensive machines on earth in them, and production lines that'll be out of date in a couple of years.
But if you don't collude during times of feast you will have famine, and during times of famine you will have famine; in an economy based on feast/famine, you must sometimes feast or die.
All of the capital intensive businesses face this issue. Chemicals, Shipping, Semiconductors etc.
You get market signals that the demand is there, you acquire the necessary capital, you spend 5 years to build capacity, but guess what, 5 other market players did the same thing. So now you are doomed, because the market is flooded and you have low cash flow since you need to drop prices to compete for pennies.
Now you cannot find capital, so you don't invest, but guess what, neither did your competitors. So now demand is higher than supply. Your price per unit skyrockets, but you don't have enough capacity! Rinse and repeat.
Is the DRAM industry really capitalist? Focusing on just the Korean parties, it functions like a command economy. I would say the same about most high-end semiconductor manufacturing; TSMC, Intel, and ASML are being commanded and driven by nation-state-level decision making. Right now the command is to focus on high-wattage centralized AI systems at the expense of everything else.
No one at high levels is capitalist, in ideology or action. An ideological capitalist would be in favor of competition, but these people disdain it and collude regularly. The only 'capitalist' actions they take are by accident, the real goal is as much power/money as possible as fast as possible.
We don't even expect companies to plan long-term anymore, it's just moving wealth as fast as possible.
That isn't really a change, very few people could ever have been said to be ideological capitalists. (capitalist is not a word with a hard definition, but I'm considering it a different thing than the more modern pure libertarian zero-regulation ideology)
Forecasting demand 5 years into the future is intrinsically highly unreliable. It doesn’t matter if it is capitalism or a command economy. The bet is always going to be risky and someone will have to pay for that risk.
At least with capitalism you have many different people with different perspectives on the risk making independent bets. That mitigates the more extreme negative outcomes.
Because that does not happen exactly as you say for all players. The demand signals will be processed and long-term risk is balanced against short-term gain in a distributed fashion, so not everyone will do the same.
It's more optimal than planned economies until we have AI planned economies with realtime feedback, I guess.
Consumers get cheap goods during oversupply, the most inefficient companies get eliminated during the bust, and consolidation leads to economies of scale.
To add a more local hurdle as well, the Dutch power grid is at capacity and its managing company is now telling companies that planned to build a datacenter that they can't be connected to the grid until 2030, even though said companies already paid for and got guarantees about that connection.
That is, memory capacity is reserved for datacenters yet to be built, but this will do weird things if said datacenter construction is postponed or cancelled altogether.
That guarantee is not as much of a guarantee as stated in the media. You get a guarantee that it will be planned at a certain time (as in, looked at), not that it will be built. The cost of doing business is taking risks and mitigating them. There is a reason the nuclear plant in Borssele was built: an aluminium smelter. Maybe you should arrange for something similar as a datacenter (no politician will fall on a sword for that, but you can try). The (original) power draw is about the same, 80-100MW.
> the Dutch power grid is at capacity and its managing company is now telling companies that planned to build a datacenter that they can't be connected to the grid until 2030, even though said companies already paid for and got guarantees about that connection.
Are the Netherlands a large proportion of global datacenters?
Amsterdam hosts a major internet exchange. It's not a bad place to build a datacenter and there are many. Northern latitude brings free air cooling, but also additional distance to clients. Lots of peers at AMS-IX, but not a lot of oceanic cable landings (one with two paths to the US, but most of the submarine cables land nearby in Europe).
Yes. Amsterdam has one of the largest IXPs (AMS-IX) in Europe and is also one of the largest European markets for Internet Infrastructure services (i.e. hosting, DNS provision, domain name registration, etc.)
What the grid looks like in different countries is very different. The Dutch power grid is already almost 50% renewables, which is an inconvenience for adding capacity because that's around where you have to start really dealing with storage in order to add more.
In most other places the percentage is significantly less than that and then you can easily add more of the cheap-but-intermittent stuff because a cloudy day only requires you to make up a 10% shortfall instead of a 50% one, which existing hydro or natural gas plants can handle without new storage when there are more of them to begin with.
This just highlights what an utter failure and self-inflicted wound the green policies of Euro countries have been. Europe has already lost the AI race to the U.S. and China.
I am betting the pendulum swings faster to the other side, to excess capacity, as all of Altman's construction lies fall through and financiers wake up to the fact that they can't build the infrastructure that fast, nor make any profit on the infrastructure that does get built.
Do the memory makers not have a contract in place for an order this large? I assume that they aren't going to take "trust us bro" as good enough for several billion dollars in orders, and even if there is a way to cancel the order it won't be free. I would assume so at least, but I would like to hear from anyone who knows for certain.
I was interning at a company that made networking gear that was put out of business when their largest customer canceled an order within a week of the delivery date.
The customer ran out of money. In terms of where you stand in the line of creditors when you haven't even delivered the product to the customer, it's so far back that you can be assured you won't get your money.
If the memory makers got a deposit from OpenAI as part of this deal, that is likely to be the only money they will get for any undelivered memory, particularly if OpenAI runs out of capital.
I think I’m missing something. Financially, what bag would the memory makers be holding here? I don’t think I’m well informed regarding how these deals were structured.
Memory makers make capital investments (build new factories, convert physical production lines, etc.) to meet orders that have been placed for the next ~5 years.
OpenAI (or whoever) crashes and can't pay for the order, leaving the memory makers in a tough spot.
But wouldn't you rather HBM prices come down first? Memory makers will be fine. There is practically infinite demand.
Unless you get China-style rationing of compute per person worldwide.
The real issue is everyone wanting to upgrade to HBM, DDR5, and Gen 5 NVMe at the same time.
HBM is just normal DDR RAM that's been packaged with (much) wider-than-usual data buses. That's where the high bandwidth comes from, not from high clock rates or any other innovation or improvement in core specifications.
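A back-of-the-envelope illustration of that point, using approximate public specs for two cards that come up in this thread (figures are rounded and purely illustrative):

```python
# Peak memory bandwidth is roughly bus width x per-pin data rate.
def bandwidth_gb_s(bus_width_bits: int, gbps_per_pin: float) -> float:
    return bus_width_bits * gbps_per_pin / 8  # bits -> bytes

# Radeon VII: 4 HBM2 stacks x 1024-bit = 4096-bit bus at ~2.0 Gbps per pin.
print(bandwidth_gb_s(4096, 2.0))   # ~1024 GB/s (~1 TB/s) at modest clocks

# RTX 5090: 512-bit GDDR7 bus at ~28 Gbps per pin.
print(bandwidth_gb_s(512, 28.0))   # ~1792 GB/s, from much higher per-pin rates
```

Same formula either way; HBM just gets there with an extremely wide, relatively slow interface.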
The market is already saturated. Even if OpenAI doesn't buy what they reserved, other players will. SK Hynix's CEO said there is a 20% gap between supply and demand per year. And that doesn't account for the shock effect that will take place the moment prices normalize and everyone and their dog goes out and starts buying inventory to avoid the next crisis. I for one would certainly buy more than I currently need, just in case.
> Soo ... how sure are we that the memory makers themselves are not going to be the ones holding the bag?
We aren't. The remaining memory manufacturers fear getting caught in a "pork cycle" yet again - that is why there's only the three large ones left anyway.
Surely this can be solved with financial engineering. The memory makers build more capacity, but they finance it with something like floating-rate notes linked to an index of memory prices, or even catastrophe bonds or AT1s. Or more crudely, set up special purpose vehicles to build the extra capacity, and issue convertible bonds from those; if the memory market collapses, investors don't get paid, but they do get a memory factory.
If they don't expand capacity much, the only negative consequences I foresee for them are that they might lose spending discipline, and that systems will be set up to make do with a little less memory. Apart from that, it's just very high profits followed by more or less regular profits.
They could wind up losing all their business to China though.
China has memory makers who are creeping up through the stages of production maturity, and once they get there, there's no going back.
If the existing makers can't meet demand, such that Chinese exports get their foot in the door, they may find they never get ahead again on volume - that domestic market is huge, so the Chinese makers have scale, and the gaming market isn't going to care where the memory comes from because they can't get anything at the moment. That's all you'll need for enterprise to say "are we really afraid of memory in this business?"
Good point, it's a risk but so far the Chinese competition isn't up to par and it's unclear whether they'll be able to exploit the current window of opportunity.
> Soo ... how sure are we that the memory makers themselves are not going to be the ones holding the bag?
I hope they do, they did not have to agree to sell so much RAM to one customer. They’ve been caught colluding and price fixing more than once, I hope they take it in the shorts and new competitors arise or they go bankrupt and new management takes over the existing plants.
Don’t put all your eggs in the one basket is how the old saying goes.
Memory makers did get themselves into this situation by selling all wafers for empty promises and alienating everyone but OpenAI tbh. I do hope they end up holding the bag once again, cause after covid and the cartel thing they don't seem to ever learn their lesson on how to have the tiniest amount of integrity.
> Memory makers did get themselves into this situation by selling all wafers for empty promises and alienating everyone but OpenAI tbh.
Wasn't the problem here that OpenAI was negotiating with Samsung and SK Hynix at the same time without the other one knowing about it? People only realized the implications when they announced both deals at once.
Permanent public ownership of (very large stakes in) these companies doesn't seem like such a bad idea anymore, does it? It's what we used to have for most of the 20th century at least in Europe.
It's a business with huge up-front capital expenses and typically very low margins. Supply is scaling up slowly because it's hard, and if you overshoot, you go out of business.
Nobody is "allowing" this. It's a natural property of being both advanced technology and a commodity at the same time.
The strange deals on the entire future output are what was allowed. Try to do the same thing with onions and the government understands you are a criminal.
That is quite the amusing read but it seems like a poorly constructed law. It wasn't futures themselves that were the problem there. The duo engaged in blatant market manipulation and severely disrupted part of the food supply in the process.
It has the makings of a natural monopoly, except it's compounded by RAM cartels colluding to shut out the last of the competitors.
Recently they had a second price fixing lawsuit thrown out (in the US).
Now, with the state of things, I'm sure another lawsuit will arrive and be thrown out, because the government will do anything to keep the AI bubble rolling and a price-fixing suit will be a threat to national security, somehow. Obviously that's speculative and opinion, but to be clear, people are allowing it. There are, and more so were, things that could be done.
Allowed? We live in a neoliberal world where corporate monopolies / oligopolies aren’t even remotely regulated. If you try to do even the gentlest regulation of companies people scream about communism and totalitarianism. Unless the regulation serves the monopolies by making it harder to enter the market.
It started with Reagan, and even parties on the "left" in the West believe in it, with very few exceptions.
> We live in a neoliberal world where corporate monopolies / oligopolies aren’t even remotely regulated. If you try to do even the gentlest regulation of companies people scream about communism and totalitarianism. Unless the regulation serves the monopolies by making it harder to enter the market.
The thing that enables this is pretty obvious. The population is divided into two camps, the first of which holds the heuristic that regulations are "communism and totalitarianism" and this camp is used to prevent e.g. antitrust rules/enforcement. The second camp holds the heuristic that companies need to be aggressively "regulated" and this camp is used to create/sustain rules making it harder to enter the market.
The problem is that ordinary people don't have the resources to dive into the details of any given proposal but the companies do. So what we need is a simple heuristic for ordinary people to distinguish them: Make the majority of "regulations" apply only to companies with more than 20% market share. No one is allowed to dump industrial waste in the river but only dominant companies have bureaucratic reporting requirements etc. Allow private lawsuits against dominant companies for certain offenses but only government-initiated prosecutions against smaller ones, the latter preventing incumbents from miring new challengers in litigation and requiring proof beyond a reasonable doubt.
This even makes logical sense, because most of the rules are attempts to mitigate an uncompetitive market, so applying them to new entrants or markets with >5 competitors is more likely to be deleterious, i.e. drive further consolidation. Whereas if the market is already consolidated then the thicket of rules constrains the incumbents from abusing their dominance in the uncompetitive market while encouraging new entrants who are below the threshold.
Arguably a more efficient approach might just be to have a tax that adds on to the corporate tax rate incrementally for every percentage point of market share a company has above, say, 7-8%. Then dominant companies are incentivised to re-invest in improving their efficiency rather than just buying or squeezing out competitors. A more evenly spread market would then, as a result, be against regulations that make smaller market participants less competitive, as they'd all be in relatively less stable positions.
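A minimal sketch of what that kind of schedule could look like; the base rate, threshold, and per-point surcharge below are made-up illustration numbers, not anything proposed above:

```python
def effective_corporate_rate(market_share_pct: float,
                             base_rate: float = 0.21,
                             threshold_pct: float = 7.0,
                             surcharge_per_point: float = 0.005) -> float:
    # Flat base rate, plus a surcharge for every share point above the threshold.
    excess = max(0.0, market_share_pct - threshold_pct)
    return base_rate + excess * surcharge_per_point

for share in (5, 10, 25, 60):
    print(f"{share}% share -> {effective_corporate_rate(share):.1%} effective rate")
# 5% -> 21.0%, 10% -> 22.5%, 25% -> 30.0%, 60% -> 47.5%
```

Under a schedule like this, growing by buying up rivals keeps raising your tax bill, so consolidation stops being free.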
This will result in demand destruction, which will starve the enterprises, which will starve the hyperscalers. There's no situation where people not being able to afford hardware for 4 years results in the bubble not popping.
I'd expect unaffordable hardware to drive demand for thin clients connected to cloud services which is something that had already been happening gradually prior to this.
They won't be; prices are high because they are refusing to build capacity for demand that may have evaporated by the time they are done. They are holding back and building only enough so that when the bubble pops they will be fine.
You can't build capacity overnight, and even with that in mind, it's hard to say if it is sensible to increase capacity now that we are in an AI bubble. For all we know, the bubble might burst.
So the ML hate is weaponized in the form of memory-demand-collapse FUD, and the public at large has to pay through the nose for it... thanks, party poopers!
I don't think it's from the ML collapse FUD; it's most likely from the multiple times in the past when they overbuilt and it resulted in memory oversupply and price collapses. The 1985–1988, 1993–1994, 1998–2002 and post-pandemic oversupplies were all cases where shortages followed by over-corrections caused oversupply, financial losses due to low prices, and fewer surviving companies. I think they're taking their time and cautiously adding capacity in such a way that prices won't end up collapsing again. Regardless, the result is still that we the consumers have to pay more.
At this point the remaining memory companies are… the ones that didn’t die during an over-supply collapse, right? I guess there’s been a strong evolutionary pressure against giving consumers what we want, haha.
If they gradually increase production capacity then prices stay high for 10+ years (or for as long as it takes for demand to crash) because a gradual increase in production takes that long for them to add enough capacity for current demand.
If they add enough capacity to meet current demand quickly then if demand crashes they still have billions of dollars in loans used to build capacity for demand that no longer exists and then they go bankrupt.
The biggest problem is predicting future demand, because it often declines quickly rather than gradually.
do we have evidence of RAM manufacturers going bankrupt? do we have evidence that the increased capacities after the mentioned past shortages went unused or were operated at a loss?
I would expect that OpenAI gets as much money as they ask for for the next 10 years.
There’s virtually infinite capital: if needed, more can be reallocated from the federal government (funded with debt), from public companies (funded with people’s retirement funds), from people’s pockets via wealth redistribution upwards, from offshore investment.
They will be allowed to strangle any part of the supply chain they want.
China already has a well developed DRAM industry, as DRAM is somewhat easier than logic, and can tolerate a much higher defect rate. The industry will figure this out.
Another point is I often see the money argument - like country X has more money, so they can afford to do more and better R&D, make more stuff.
This stuff comes out of factories, that need to be built, the machinery procured, engineers trained and hired.
I think the article has a giant blind spot as far as China is concerned, considering they already have a mature enough memory ecosystem via YMTC that Apple was considering sourcing from them, as well as continued expansion of their DRAM and HBM fabs [1].
It feels like the memory cartel is once again trying to incentivise their various governments to cough up some more tax breaks/funding to cushion the AI buildout bet that they made, with the bubble seemingly about to pop.
In any case if they leave the consumer market underserved it should be no surprise if before that 2030 prediction we are all on cheaper YMTC memory modules.
I think you're massively overestimating how much money is really accessible here. The parent comment's right that all of the easily available VC & private equity investment is basically used up. OpenAI was struggling to sell $600M of private equity, and the big multi-billion-dollar investment packages had lots of conditions and non-cash components in them.
> more can be reallocated from the federal government (funded with debt)
While this is the most reliable funding, it's still not very accessible. OpenAI is a money pit, and their demands are growing quickly. The US government has started a bunch of very expensive spending. If OpenAI were to require yearly bundles of its recent "$120B" deal, that's 6% of the US's discretionary budget, or 12.5% of the non-military discretionary budget. (And the military is going to ask for a lot more money this year.) Even the idea of just issuing more debt is dubious, because they're going to want to do that to pay for the wars that are rapidly spiralling out of control.
None of this is saying that the US government can't or wouldn't pay for it, but it's non trivial and it's unclear how much Altman can threaten the US government "give me a trillion dollars or the economy explodes" without consequences.
Further deficit spending isn't without its risks for the US government either. Interest rates are already creeping up, and a careless explosion of the deficit may well trigger a debt crisis.
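For what it's worth, the rough arithmetic behind those percentages, using ballpark discretionary-budget figures that are my own approximations rather than anything official:

```python
# Back-of-the-envelope check of the "$120B vs discretionary budget" comparison.
# Budget figures below are rough approximations, for illustration only.
deal = 120e9                 # a "$120B"-scale yearly purchase commitment
discretionary = 1.8e12       # total US discretionary spending, approx.
defense = 0.85e12            # military share of that, approx.

print(f"{deal / discretionary:.1%} of total discretionary")            # ~6.7%
print(f"{deal / (discretionary - defense):.1%} of non-military part")  # ~12.6%
```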
> from public companies (funded with people’s retirement funds)
This would come at great cost. OpenAI would need to open up about its financial performance to go public itself. With its CFO being put on what is effectively administrative leave for pushing against going public, we can assume the financials are so catastrophic that an IPO might bomb and take the company down with it. Nobody's going to be investing privately in a company that has no public takers.
Getting money through other companies is also running into limits. Big Tech has deep pockets but they've already started slowing down, switching to debt to finance AI investment, and similarly are increasingly pressured by their own shareholders to show results.
> from people’s pockets via wealth redistribution upwards
The practical mechanism of this is "AI companies raise their prices". That might also just crash the bubble if demand evaporates. For all the hype, the productivity benefit hasn't really shown up in economy-wide aggregates. The moment AI becomes "expensive", all the casual users will drop it. And the non-casual users are likely to follow. The idea of "AI tokens" as a job perk is cute, but exceedingly few are going to accept lower salary in order to use AI at their job.
There's simply not much money to take out of people's pockets these days, with how high cost of living has gotten.
> from offshore investment.
This is a pretty good source of money. The wealthy Arabian oil states have very deep slush funds, extensively investing in AI to get ties to US businesses and in the hope of diversifying their resource economies.
The "no food in other countries" is because of failed/corrupt governments, not because people use AI to generate cat pictures in the West. The economy is not a "fixed pie" that needs to be allocated among people of the world.
Just look at Cuba, which could be a very rich country and one of the prime tourist destinations of the world.
I’m a bit of an optimist. I think this will smack the hands of developers who don’t manage RAM well and future apps will necessarily be more memory-efficient.
The point is being able to write it once with web developers instead of writing it a minimum of twice (Windows and macOS) with much harder to hire native UI developers.
And HTML/CSS/JS are far more powerful for designing than any of SwiftUI/IB on Apple, Jetpack/XML on Android, or WPF/WinUI on Windows, leaving aside that this is what designers, design platforms and AI models already work best with. Even if all the major OSes converged on one solution, it still wouldn't compete on ergonomics or declarative power for designing.
Lol SwiftUI/Jetpack/WPF aren’t design tools, they’re for writing native UI code. They’re simply not the right tool for building mockups.
I don’t see how design workflows matter in the conversation about cross-platform vs native and RAM efficiency since designers can always write their mockups in HTML/CSS/JS in isolation whenever they like and with any tool of their choice. You could even use purely GUI-based approaches like Figma or Sketch or any photo/vector editor, just tapping buttons and not writing a single line of web frontend code.
The point is you can be lazy and write the app in HTML and JS. Then you don't need to write C, even though C syntax is similar to JS syntax and most GUI apps won't require advanced C features if the GUI framework is generous enough.
Now that everyone who can't be bothered vibe-codes, and Electron apps are the over-evangelized norm... people will probably not even worry about writing JS, and Electron will be here to stay. The only way out is to evangelize something else.
Like how half the websites have giant in-your-face cookie banners and half have minimalist banners. The experience will still suck for the end user because the dev doesn't care and neither do the business leaders.
But the point isn't that they're more different than alike. The point is that learning C is not really that hard; it's just that corporations don't want you building apps with a stack they don't control.
If a JS dev really wanted to, it wouldn't be a huge uphill climb to code a C app, because the syntax and concepts are similar enough.
Who cares about 300 MB, where is that going to move the needle for you? And if the alternative is a memory-unsafe language, then 300 MB is a price more than worth paying. Likewise if the alternative is the app never getting started, or being single-platform-only, because the available build systems suck too badly.
There ought to be a short one-liner that anyone can run to get easily installable "binaries" for their PyQt app for all major platforms. But there isn't, you have to dig up some blog post with 3 config files and a 10 argument incantation and follow it (and every blog post has a different one) when you just wanted to spend 10 minutes writing some code to solve your problem (which is how every good program gets started). So we're stuck with Electron.
If the alternative is memory-safe and easy to build, then maybe people will switch. But until it is it's irresponsible to even try to get them to do so.
Like what? Where else (that's a name brand platform and not, like, some obscure blog post's cobbled-together thing) can I start a project, push one button, and get binaries for all major platforms? Until you solve that people will keep using Electron.
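For context on how far we are from that: the closest thing today is probably something like PyInstaller, and it only builds for the platform you happen to be running on, which is exactly the gap being complained about. A rough sketch (the entry-point name is hypothetical):

```python
# Builds a single-file executable for the *current* platform only; you still
# have to repeat this on Windows, macOS, and Linux machines (or CI runners).
import PyInstaller.__main__

PyInstaller.__main__.run([
    "app.py",       # hypothetical entry point for a PyQt app
    "--onefile",    # bundle everything into one executable
    "--windowed",   # no console window on Windows/macOS
    "--name", "MyApp",
])
```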
In practice, you generally see the opposite. The "CPU" is in fact limited by memory throughput. (The exception is intense number crunching or similar compute-heavy code, where thermal and power limits come into play. But much of that code can be shifted to the GPU.)
RAM throughput and RAM footprint are only weakly related. The throughput is governed by the cache locality of access patterns. A program with a 50MB footprint could put more pressure on the RAM bus than one with a 5GB footprint.
My point is that reducing your RAM consumption is not the best approach to reducing your RAM throughput. It could be effective in some specific situations, but I would definitely not say that those situations are more common than the other ones.
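A quick way to see that access pattern, not footprint, is what actually stresses the memory system - a small sketch with NumPy (exact timings will vary by machine):

```python
import time
import numpy as np

data = np.arange(20_000_000, dtype=np.int64)    # same ~160 MB footprint in both tests
ordered = np.arange(data.size)                  # cache-friendly: walk the array in order
shuffled = np.random.permutation(data.size)     # cache-hostile: jump all over the array

for name, idx in (("sequential", ordered), ("random", shuffled)):
    t0 = time.perf_counter()
    total = int(data[idx].sum())                # gather, then reduce
    print(name, f"{time.perf_counter() - t0:.3f}s", total)
# The random walk is typically several times slower despite identical footprint and work.
```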
The tradeoff has almost exclusively been development time vs resource efficiency. Very few devs are graced with enough time to optimize something to the point of dealing with theoretical tradeoff balances of near optimal implementations.
That's fine, but I was responding to a comment that said that RAM prices would put pressure to optimise footprint. Optimising footprint could often lead to wasting more CPU, even if your starting point was optimising for neither.
My response was that I disagree with this conclusion that something like "pressure to optimize RAM implies another hardware tradeoff" is the primary thing which will give, not that I'm changing the premise.
Pressure to optimize can more often imply just setting aside work to make the program be nearer to being limited by algorithmic bounds rather than doing what was quickest to implement and not caring about any of it. Having the same amount of time, replacing bloated abstractions with something more lightweight overall usually nets more memory gains than trying to tune something heavy to use less RAM at the expense of more CPU.
Only if the software is optimised for either in the first place.
There's a ton of software out there where optimisation of both memory and CPU has been pushed to the side because development hours are more costly than a bit of extra resource usage.
Some of the algorithms are built deep into the runtime. E.g. languages that rely on malloc/free allocators (which require maintaining free lists) are making a pretty significant tradeoff of wasting CPU to save on RAM, as opposed to languages using moving collectors.
I'm a bit surprised the article makes no mention of Google's TurboQuant[0] introduced 26 days prior.
Given that TurboQuant results in a 6x reduction in memory usage for KV caches and up to 8x boost in speed, this optimization is already showing up in llama.cpp, enabling significantly bigger contexts without having to run a smaller model to fit it all in memory.
Some people thought it might significantly improve the RAM situation, though I remain a bit skeptical - the demand is probably still larger than the reduction turboquant brings.
TurboQuant is known across the industry to not be state of the art. There are superior schemes for KV quant at every bitrate. Eg, SpectralQuant: https://github.com/Dynamis-Labs/spectralquant among many, many papers.
> Given that TurboQuant results in a 6x reduction in memory usage for KV caches
All depends on baseline. The "6x" is by stylistic comparison to a BF16 KV cache; not a state of the art 8 or 4 bit KV cache scheme.
Current "TurboQuant" implementations are about 3.8X-4.9X on compression (w/ the higher end taking some significant hits of GSM8K performance) and with about 80-100% baseline speed (no improvement, regression): https://github.com/vllm-project/vllm/pull/38479
For those not paying attention, it's probably worth sending this and ongoing discussion for vLLM https://github.com/vllm-project/vllm/issues/38171 and llama.cpp through your summarizer of choice - TurboQuant is fine, but not a magic bullet. Personally, I've been experimenting with DMS and I think it has a lot more promise and can be stacked with various quantization schemes.
The biggest savings in KV cache, though, come from improved model architecture. Gemma 4's SWA/global hybrid saves up to 10X on KV cache; MLA/DSA does as well (the latter also helps solve global attention compute); and using linear/SSM layers saves even more.
None of these reduce memory demand (Jevons paradox, etc.), though. Looking at my coding tools, I'm using about 10-15B cached tokens/mo currently (up from 5-8B a couple of months ago), and while I think I'm probably above average on the curve, I don't consider myself doing anything especially crazy. This year, between mainstream developers and more and more agents, I don't think there's really any limit to the number of tokens that people will want to consume.
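For anyone wanting to sanity-check the baseline point above, the standard KV-cache size arithmetic looks like this; the model shape here is made up but roughly typical, not any specific model's real config:

```python
# KV cache bytes = 2 (K and V) * layers * kv_heads * head_dim * bytes/element * tokens
layers, kv_heads, head_dim = 48, 8, 128   # illustrative model shape
context = 128_000                         # tokens

def kv_cache_gb(bytes_per_element: float) -> float:
    return 2 * layers * kv_heads * head_dim * bytes_per_element * context / 1e9

print(f"BF16 cache: {kv_cache_gb(2.0):.1f} GB")   # ~25 GB
print(f"INT8 cache: {kv_cache_gb(1.0):.1f} GB")   # 2x smaller than BF16
print(f"INT4 cache: {kv_cache_gb(0.5):.1f} GB")   # 4x smaller; a "6x vs BF16" scheme
                                                  # is only ~1.5x vs an INT4 baseline
```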
The work going into local models seems to be targeting lower RAM/VRAM, which will definitely help.
For example Gemma 4 32B, which you can run on an off-the-shelf laptop, is around the same or even higher intelligence level as the SOTA models from 2 years ago (e.g. gpt-4o). Probably by the time memory prices come down we will have something as smart as Opus 4.7 that can be run locally.
Bigger models of course have more embedded knowledge, but just knowing that they should make a tool call to do a web search can bypass a lot of that.
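Rough weight-only memory math for a model in that class (my own back-of-the-envelope; it ignores KV cache and runtime overhead, so treat these as lower bounds):

```python
# Approximate RAM/VRAM needed just to hold the weights of an N-parameter model.
def weight_gb(params_billion: float, bits_per_weight: float) -> float:
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

for bits in (16, 8, 4):
    print(f"32B model at {bits}-bit: ~{weight_gb(32, bits):.0f} GB")
# 16-bit: ~64 GB, 8-bit: ~32 GB, 4-bit: ~16 GB - which is why 4-bit quants are
# what actually fits on a well-specced laptop.
```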
The net effect won’t be a memory use reduction to achieve the same thing. We’ll do more with the same amount of memory. Companies will increase the context windows of their offerings and people will use it.
I am not convinced that more context will be useful; practical use of current models at a 1M-token context window shows they get less effective as the window grows. Given that model progress is slowing as well, perhaps we end up reaching a balance of context size and competency sooner than expected.
Stuff in more code. Stuff in more system prompt. Stuff in raw utf8 characters instead of tokens to fix strawberries. Stuff in WAY more reasoning steps.
Given the current tech, I also doubt there will be practical uses, and I hope we'll see the opposite of what I wrote. But given the current industry, I fully trust them to somehow fill their hardware.
Market history shows us that when the cost of something goes down, we do more with the same amount, not the same thing with less. But I deeply hope to be wrong here and that the memory market will relax.
That will only increase the demand for RAM, as models will now be usable in scenarios that weren't feasible before, and the ceiling for model and context size is not even visible at this point.
I hate to mention Jevons paradox as it has become cliche by now, but this is a textbook case.
Of course, alternatively, the AI companies could go bust before finding profitability. Then, there’d be a ton of supply, prices would crash, and one or two of the current memory suppliers would go out of business. After that, the new Chinese memory companies might be producing at volume, and Renesas could be up and running.
At the moment, nothing is certain. Could this last? Sure. Could it not last? Yup.
I wonder if this might motivate people to write more memory-efficient software. I mean, we have so much memory, but even some trivial programs eat hundreds of megabytes of RAM.
>CXMT still trails Samsung, SK Hynix, and Micron by approximately three years in advanced DRAM node development, and yield rates on new production lines remain the variable that determines whether capacity targets translate into reliable supply. Liu notes that lines launched in the second half of 2026 are unlikely to change the global supply-demand balance until 2027.
The Verge article talks about demand exceeding supply in 2028. Your article suggests it'll take until 2029 before Chinese production catches up to current technology.
It'll help drive prices down in five years, but Chinese memory production won't be ready and efficient enough to prevent the shortages from continuing to grow.
If a shortage lasts years, it's not a shortage. "The market-clearing price of RAM in the face of expected sustained healthy demand should lead to a stable market for years."
Even if gaming is and will remain very popular for years, it and the desire to upgrade gaming rigs are still discretionary, with more price elasticity of demand than corporate uses for RAM at the dawn of the AI age. Gamers live on the margin of this market, where low prices will stimulate upgrades and high prices will lead to holding out. The complaints about price are real, but that segment of the market is some combination of less large and less important.
It's not merely "gaming vs data center". There are so many other places DRAM and NVM are needed - mobile, automotive, other consumer electronics... The current situation is that _all_ of that is deprived of the memory it needs. And much of this is critical to the real economy.
Why are you only talking about gamers? Apple, the most cautious planners in the whole industry, have straight up cancelled their 512GB RAM Mac Studio. Don't ask; they won't sell you one.
I have said for many years that OS developers need to focus more on optimisation. If it wasn't a chip shortage, it would be the ever-slowing progress on chip scaling.
But software optimisation helps all hardware, and that doesn't drive sales.
Linux, however, doesn't have to worry about that. Maybe it is finally the era of Haiku OS, as the ghost of BeOS rises!
I think RAM shortages would be the least of our problems…
Assuming China takes TSMC in one piece (unlikely without internal sabotage in the best case scenario), it would still probably take years before it produces another high end GPU or CPU.
We would probably be stuck with the existing inventory of equipment for a long time…
I am surprised we consider TSMC like a natural resource: isn't it really a combination of know-how and build-out according to that know-how? If smarts leave the country, perhaps this moves with them.
The risk with China taking over Taiwan is that they mostly expedite their own production research by a couple of years.
It kinda does resemble a natural resource though. The machines and technology in use at TSMC are so insanely complex, that there isn't a single person on earth who knows everything about how it works. TSMC functions only because of all of the pieces of the puzzle being together in the right place and arranged in just the right way. It's a very fragile balance that keeps it all running, and a major disruption could mean we get thrown back by a decade in chip-making technology.
> I am surprised we consider TSMC like a natural resource: isn't it really a combination of know-how and build-out according to that know-how?
Have you seen how many states and countries look enviously at Silicon Valley’s tech companies, China’s manufacturing dominance, or London’s financial sector and try to replicate them?
Turns out it’s way harder than you’d expect.
Hell, Intel can’t match TSMC despite decades of expertise, much greater fame, and regulators happy to change the law and hand out tens of billions in subsidies.
What you say is absolutely true, and is a serious problem—but the way our system operates does not allow us to correct for it.
Anyone trying to spin up a competitor to TSMC would have to first overcome a significant financial hurdle: the capital investment to build all the industrial equipment needed for fabrication.
Then they'd have to convince institutions to choose them over TSMC when they're unproven, and likely objectively worse than TSMC, given that they would not have its decades of experience and process optimization.
This would be mitigated somewhat if our institutions had common-sense rules in place requiring multiple vendors for every part of their supply chain—note, not just "multiple bids, leading to picking a single vendor" but "multiple vendors actively supplying them at all times". But our system prioritizes efficiency over resiliency.
A wealthy nation-state with a sufficiently motivated voter base could certainly build up a meaningful competitor to TSMC over the course of, say, a decade or two (or three...). But it would require sustained investment at all levels—and not just investment in the simple financial sense; it requires people investing their time in education and research. Dedicating their lives to making the best chips in the world. And the only reason that would work is that it defies our system, and chooses to invest in plants that won't be finished for years, and then pay for chips that they know are inferior in quality, because they're our chips, and paying for them when they're lower quality is the only way to get them to be the best chips in the world.
> A wealthy nation-state with a sufficiently motivated voter base could certainly build up a meaningful competitor to TSMC over the course of, say, a decade or two (or three...).
They've been burned before. The DRAM industry has a long history of booms and busts.
Demand increased, everyone built new fabs, then prices dropped and they couldn't pay off their investments. Many went out of business. It happened in the 80s, it happened in the 90s, it happened in the 2000s.
Now there's only three manufacturers left, and they know very well that demand for their product tends to be cyclical.
I've been in the industry for 30 years and I've worked at companies with fabs where demand was high and customers would only get 30% of what they ordered. Then just 2 years later our fab was only running at 50% capacity and losing money. It takes about $20 billion and 3-4 years to build a modern new fab. If you think that AI is a bubble, then do you want to be left with a shiny new factory and no products to sell because demand has collapsed?
The same thing everyone who's paying attention to the real world (and not the financial fantasy world) does: that OpenAI's purchase commitments are wildly unrealistic and unsustainable.
What's the losing scenario for them? They're basically a cartel, and you need RAM regardless. If they make less, it's still supply and demand, just not at the point that's most optimal for them. They've done that math, and figure this is the best risk/reward for them. Your goodwill or opinion doesn't matter to them, because you need them more than they need you.
I just checked my gaming PC I built a few years ago with 64GB of DDR5 RAM; it has actually gone up in value, which is generally unheard of.
Think I will scrap my PC and sell its parts.
I wonder if there are any niche companies building decent rigs with DDR3 and 5/6th generation Intel CPUs out there, it is cheap and might be a business opportunity?
I work at an e-waste recycling company. I have several dozen trays of RAM in my inventory, ~90% of it DDR3. DDR3 was selling as of a month ago, but I haven't tried to sell any RAM since. I'm looking forward to doing a huge one this week.
This is simple extrapolation from current demand, nothing more. And that's a borderline silly analysis because it assumes the AI bubble won't burst. The great misadventure in the Persian Gulf probably accelerates that because we're almost certainly going to be facing a recession.
Another thing I've been thinking about is what happens when the next generation of NVidia chips comes out? I suspect NVidia is going to delay this to milk the current demand but at some point you'll be able to buy something that's better than the H100 or B200 or whatever the current state-of-the-art for half the price. And what's that going to do to the trillions in AI DC investment?
I'm interested when the next bump in DRAM chip density is coming. That's going to change things although it seems like much of production has moved from consumer DRAM chips to HBM chips. So maybe that won't help at all.
I do think that companies will start seeing little to no return from the billions spent on AI, and that's going to be a problem. I also think that OpenAI's hundreds of billions in capital expenditure commitments are going to come crashing down, as there just isn't any even theoretical future revenue that can pay for all of that.
I'm personally hoping that one of the AI or data center companies is suddenly unable to pay for their bills and deflate the entire industry. Probably the only hope of things getting better before the 2030s.
I fear that the real reason we do have a shortage, I mean, the real reason for the demand, is AI companies scooping what they can so that their competitors, whether existing or incumbent, can’t get to it.
I fear the author and most commenters are not aware of the law of supply and demand. If there is demand for consumer RAM, there will be supply of consumer RAM. It just takes time and risk assessment to scale up operations.
We have RAM shortage now, we will have very cheap RAM tomorrow. It’s not like production is bottlenecked by raw materials. Chip companies just need to assess if the demand by AI companies will last so it’s better to scale up, or perhaps they should wait it out instead of oversupplying and cutting into their profits.
We're talking about advanced semiconductor manufacturing. It takes years and hundreds of millions to billions of dollars to scale up operations. That's something you don't do unless you know there's demand to sustain it in the future.
The Radeon VII also does 64-bit floating point, I think?
> 1TB/s HBM2 memory subsystem which is more than any consumer GPU you can get today
5090 has 1.8 TB/s?
I was gonna say, I still use an AMD Vega that uses HBM2.
That card only had 16GB of memory; its memory bandwidth was 1TB/s.
The Pro variant had 32GB, I had one in a 2019 Mac Pro
You're saying this in a world where AMD's highest end consumer GPU in 2026 is also limited to 16 GB.
RX7900 XTX has 24GB
this card is 4 years old, it's not on store shelves anymore.
Don’t the memory makers always get left holding the bag? I feel this has happened at least three times before.
DRAM and to a lesser degree storage are notorious for their feast and famine cycles
(Well that and collusion)
I don’t know if they realize that collusion lends itself to feast/famine.
Capitalists claim that this is optimal.
The book Capital Returns: Investing Through the Capital Cycle details this phenomenon, including historical cases.
If anything, it shows it's possible for you to arbitrage this and in doing so help "smooth out the cycle."
> Capitalists claim that this is optimal.
It's not optimal, it's pathological. Definitely better than starving under communist dictatorships though.
>Capitalists claim that this is optimal.
No this is literally a sign of an unstable system with too high of a gain K.
There is an alternative where legislation dampens this behavior but the short term profits will be lower. Hence the hawks don’t like it.
>legislation dampens this behavior
Potentially. Well meaning and thought out legislation still distorts the markets, possibly making things objectively worse.
This is a wild take.
Whether it's generally a reasonable place to build them doesn't answer the question about proportion. The Netherlands' share of global datacenters seems to be ~3%.
And things like a big IXP and a hosting market are practically irrelevant for AI data centers.
Is that relevant? The grid in every country is getting ridiculously stressed by datacenters.
>The grid in every country is getting ridiculously stressed by datacenters.
In every country? Citation needed.
The financiers backing this buildout can't risk not being involved in a company with even just a slight potential for AGI.
> Soo ... how sure are we that the memory makers themselves are not going to be the ones holding the bag?
The memory makers specifically did not scale up capacity to avoid being left holding the bag.
* several Billion dollars in orders.
I think I’m missing something. Financially, what bag would the memory makers be holding here? I don’t think I’m well informed regarding how these deals were structured.
Memory makers make capital investments (build new factories, convert physical production lines, etc.) to meet orders that have been placed for the next ~5 years.
OpenAI (or whoever) crashes and can't pay for the order leaving the memory makers in a tough spot.
But wouldn't you rather HBM prices come down first? Memory makers will be fine. There is practically infinite demand, unless you get China-style rationing of compute per person worldwide.
The real issue is everyone wanting to upgrade to HBM, DDR5, and Gen5 NVMe at the same time.
What kind of consumer electronics can you build with HBM? That's the startup you should be founding...
AMD has built some consumer GPUs in the recent past with HBM - RX Vega and Radeon VII (although I assume not all "HBM" is created equal).
Isn't their APU also capable of using HBM? There was an Intel-AMD hybrid chip that used it a while back too.
My Vega 56 still has ~400GB/s of memory bandwidth, which is still insane for how old the card is.
AMD's Hawaii architecture had 320GB/s on a 512-bit GDDR5 bus in 2013.
The Fiji XT architecture after it had 512GB/s on a 4096-bit HBM bus in 2015.
The Vega architecture did have 400GB/s or so in 2017, which was a bit of a downgrade.
HBM is essentially ordinary DRAM that's been stacked and packaged with (much) wider-than-usual data buses. That's where the high bandwidth comes from, not from high clock rates or any other innovation or improvement in core specifications.
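A rough back-of-envelope (in Python) makes the point; the bus widths and data rates below are illustrative ballpark figures, not exact part specs:

    # bandwidth in GB/s = (bus width in bits / 8) * transfers per second in GT/s
    def bandwidth_gb_s(bus_bits: int, data_rate_gt_s: float) -> float:
        return bus_bits / 8 * data_rate_gt_s

    print(bandwidth_gb_s(4096, 2.0))   # wide-but-slow HBM2-style stack: ~1024 GB/s
    print(bandwidth_gb_s(256, 16.0))   # narrow-but-fast GDDR6-style bus: ~512 GB/s

Same order of magnitude of bandwidth, reached from opposite ends of the width/clock tradeoff.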
Very few applications other than GPUs need HBM.
There's actually plenty of demand for LPDDR even in the AI datacenter, because HBM is quite wasteful of area for any given memory capacity.
Wafer area?
> Folks are now starting to ask difficult questions about their burn rate and revenue.
This view isn't correctly updated for the post-Claude Code and Codex world. There will clearly be sufficient demand.
Seriously? One release is all it took to turn the whole ship around?
The market is already stagnated. Even if OpenAI doesn't buy what they reserved, other players will. The SK Hynix CEO said there is a 20% gap between supply and demand per year. And that doesn't account for the shock effect that will take place the moment prices normalize and everyone and their dog goes out and starts buying inventory to avoid the next crisis. I for one would certainly buy more than I currently need, just in case.
I think: s/stagnated/saturated/
Edit: also, that demand pressure is going to be applied constantly; there isn’t going to be a shock, it’s just going to keep prices high longer.
> Soo ... how sure are we that the memory makers themselves are not going to be the ones holding the bag?
We aren't. The remaining memory manufacturers fear getting caught in a "pork cycle" yet again - that is why there's only the three large ones left anyway.
Surely this can be solved with financial engineering. The memory makers build more capacity, but they finance it with something like floating-rate notes linked to an index of memory prices, or even catastrophe bonds or AT1s. Or more crudely, set up special purpose vehicles to build the extra capacity, and issue convertible bonds from those; if the memory market collapses, investors don't get paid, but they do get a memory factory.
If they don't expand capacity much, the only negative consequences I foresee for them are that they might lose spending discipline, and that systems will be set up to make do with a little less memory. Apart from that, it's just very high profits followed by more or less regular profits.
They could wind up losing all their business to China though.
China has memory makers who are creeping up through the stages of production maturity, and once they get there, there's no going back.
If the existing makers can't meet demand and Chinese exports get their foot in the door, they may find they never get ahead again on volume - the Chinese domestic market is huge, so the new entrants have scale, and the gaming market isn't going to care where the memory comes from because it can't get anything at the moment. That's all it takes for enterprise buyers to ask "are we really afraid of whose memory we use in this business?"
Good point, it's a risk but so far the Chinese competition isn't up to par and it's unclear whether they'll be able to exploit the current window of opportunity.
> Soo ... how sure are we that the memory makers themselves are not going to be the ones holding the bag?
I hope they do, they did not have to agree to sell so much RAM to one customer. They’ve been caught colluding and price fixing more than once, I hope they take it in the shorts and new competitors arise or they go bankrupt and new management takes over the existing plants.
Don’t put all your eggs in the one basket is how the old saying goes.
Memory makers did get themselves into this situation by selling all wafers for empty promises and alienating everyone but OpenAI tbh. I do hope they end up holding the bag once again, because after COVID and the cartel thing they don't seem to ever learn their lesson on how to have the tiniest amount of integrity.
> Memory makers did get themselves into this situation by selling all wafers for empty promises and alienating everyone but OpenAI tbh.
Wasn't the problem here that OpenAI was negotiating with Samsung and SK Hynix at the same time without the other one knowing about it? People only realized the implications when they announced both deals at once.
While we're giving away bags, I'd like HDD manufacturers to get some too.
That wouldn't help; if another one goes bankrupt, that'll only make things worse.
Sounds like they're too big to fail, maybe we should bail them out to reinforce that they will get rewarded for making bad decisions.
Permanent public ownership of (very large stakes in) these companies doesn't seem like such a bad idea anymore, does it? It's what we used to have for most of the 20th century at least in Europe.
Good point. I think both AI companies and hardware makers should pay for the damage they caused to us here.
They act as a de-facto monopoly and milk us. Why is this allowed?
It's a business with huge up-front capital expenses and typically very low margins. Supply is scaling up slowly because it's hard, and if you overshoot, you go out of business.
Nobody is "allowing" this. It's a natural property of being both advanced technology and a commodity at the same time.
The strange deals on the entire future output are what was allowed. Try to do the same thing with onions and the government understands you are a criminal.
https://en.wikipedia.org/wiki/Onion_Futures_Act
That is quite the amusing read but it seems like a poorly constructed law. It wasn't futures themselves that were the problem there. The duo engaged in blatant market manipulation and severely disrupted part of the food supply in the process.
It has the makings of a natural monopoly, except it's compounded by RAM cartels colluding to shut out the last of the competitors.
Recently they had a second price fixing lawsuit thrown out (in the US).
Now with the state of things I'm sure another lawsuit will arrive and be thrown out, because the government will do anything to keep the AI bubble rolling, and a price-fixing suit will be a threat to national security, somehow. Obviously that's speculative and opinion, but to be clear, people are allowing it. There are, and even more so were, things that could be done.
Allowed? We live in a neoliberal world where corporate monopolies / oligopolies aren’t even remotely regulated. If you try to do even the gentlest regulation of companies people scream about communism and totalitarianism. Unless the regulation serves the monopolies by making it harder to enter the market.
It started with Reagan, and even parties on the "left" in the west believe in it, with very few exceptions.
> We live in a neoliberal world where corporate monopolies / oligopolies aren’t even remotely regulated. If you try to do even the gentlest regulation of companies people scream about communism and totalitarianism. Unless the regulation serves the monopolies by making it harder to enter the market.
The thing that enables this is pretty obvious. The population is divided into two camps, the first of which holds the heuristic that regulations are "communism and totalitarianism" and this camp is used to prevent e.g. antitrust rules/enforcement. The second camp holds the heuristic that companies need to be aggressively "regulated" and this camp is used to create/sustain rules making it harder to enter the market.
The problem is that ordinary people don't have the resources to dive into the details of any given proposal but the companies do. So what we need is a simple heuristic for ordinary people to distinguish them: Make the majority of "regulations" apply only to companies with more than 20% market share. No one is allowed to dump industrial waste in the river but only dominant companies have bureaucratic reporting requirements etc. Allow private lawsuits against dominant companies for certain offenses but only government-initiated prosecutions against smaller ones, the latter preventing incumbents from miring new challengers in litigation and requiring proof beyond a reasonable doubt.
This even makes logical sense, because most of the rules are attempts to mitigate an uncompetitive market, so applying them to new entrants or markets with >5 competitors is more likely to be deleterious, i.e. drive further consolidation. Whereas if the market is already consolidated then the thicket of rules constrains the incumbents from abusing their dominance in the uncompetitive market while encouraging new entrants who are below the threshold.
Arguably a more efficient approach might just be to have a tax that adds on to corporate tax incrementally for every % of market share a company has above, say, 7-8%. Then dominant companies are incentivised to re-invest in improving their efficiency rather than just buying/squeezing out competitors. A more evenly spread market would then, as a result, be against regulations that make smaller market participants less competitive, as they'd all be in relatively less stable positions.
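A toy sketch of that idea (the threshold and the rate per point are made up, purely to illustrate the mechanism):

    # Hypothetical surtax: extra points of corporate tax per point of market share above a threshold.
    def surtax_points(market_share_pct: float, threshold: float = 8.0, per_point: float = 0.5) -> float:
        return max(0.0, market_share_pct - threshold) * per_point

    print(surtax_points(6.0))    # 0.0  -> small player, no surtax
    print(surtax_points(40.0))   # 16.0 -> dominant player pays 16 extra points of tax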
Because for the last 60 years we've allowed big business to buy and hollow out our legal and education systems.
This will result in demand destruction, which will starve the enterprises, which will starve the hyperscalers. There's no situation where people not being able to afford hardware for 4 years results in the bubble not popping.
I'd expect unaffordable hardware to drive demand for thin clients connected to cloud services which is something that had already been happening gradually prior to this.
The people who fucked over consumers are left holding the bag that they sold us out over?
Oh no!
They won't be; prices are high because they are refusing to build capacity for demand that may have evaporated by the time they are done. They are holding back and building only enough so that when the bubble pops they will be fine.
You can't build capacity overnight, and even with that in mind, it's hard to say if it is sensible to increase capacity now that we are in an AI bubble. For all we know, the bubble might burst.
So the ML hate is weaponized in the form of memory-demand-collapse FUD, and the public at large has to pay through the nose for it... thanks, party poopers!
I don't think it's from the ML collapse FUD; it's most likely from the multiple times in the past when they overbuilt and it resulted in memory oversupply and price collapses: 1985-1988, 1993-1994, 1998-2002, and the post-pandemic oversupply. These were all cases where shortages followed by overcorrection caused oversupply, financial losses due to low prices, and fewer surviving companies. I think they're taking their time and cautiously adding capacity in such a way that prices won't end up collapsing again. Regardless, the result is still that we the consumers have to pay more.
At this point the remaining memory companies are… the ones that didn’t die during an over-supply collapse, right? I guess there’s been a strong evolutionary pressure against giving consumers what we want, haha.
It's not like all the RAM is passing through the same machine; they can gradually add machines, observe the change in demand, and smoothly match it.
If they gradually increase production capacity then prices stay high for 10+ years (or for as long as it takes for demand to crash) because a gradual increase in production takes that long for them to add enough capacity for current demand.
If they add enough capacity to meet current demand quickly then if demand crashes they still have billions of dollars in loans used to build capacity for demand that no longer exists and then they go bankrupt.
The biggest problem is predicting future demand, because it often declines quickly rather than gradually.
Do we have evidence of RAM manufacturers going bankrupt? Do we have evidence that the increased capacities after the mentioned past shortages went unused or were operated at a loss?
I would expect that OpenAI gets as much money as they ask for for the next 10 years.
There’s virtually infinite capital: if needed, more can be reallocated from the federal government (funded with debt), from public companies (funded with people’s retirement funds), from people’s pockets via wealth redistribution upwards, from offshore investment.
They will be allowed to strangle any part of the supply chain they want.
China already has a well developed DRAM industry, as DRAM is somewhat easier than logic, and can tolerate a much higher defect rate. The industry will figure this out.
Another point is I often see the money argument - like country X has more money, so they can afford to do more and better R&D, make more stuff.
This stuff comes out of factories, that need to be built, the machinery procured, engineers trained and hired.
If China capitalises on the big three focusing on data centre demand, the big three might have a very hard time post-bubble.
I think the article has a giant blind spot as far as China is concerned, considering they already have a mature enough memory ecosystem via YMTC that Apple was considering sourcing from them, as well as continued expansion of DRAM and HBM fabs [1]. It feels like the memory cartel is once again trying to incentivise their various governments to cough up some more tax breaks/funding to cushion the AI buildout bet they made, now that the bubble seems about to pop. In any case, if they leave the consumer market underserved, it should be no surprise if before that 2030 prediction we are all on cheaper YMTC memory modules.
[1]https://www.tomshardware.com/tech-industry/semiconductors/ym...
I'm guessing they become pets.com within the year. At least I hope.
Maybe if they had no competitors...
I think you're massively overestimating how much money is really accessible here. The parent comment's right that all of the easily available VC & private equity investment is basically used up. OpenAI was struggling to sell $600M of private equity, and the big multi-billion dollar investment packages had lots of conditions and non-cash components in them.
> more can be reallocated from the federal government (funded with debt)
While this is the most reliable funding, it's still not very accessible. OpenAI is a money pit, and their demands are growing quickly. The US government has started a bunch of very expensive spending. If OpenAI were to require yearly bundles of its recent "$120B" deal, that's 6% of the US' discretionary budget, or 12.5% of the non-military discretionary budget. (And the military is going to ask for a lot more money this year.) Even the idea of just issuing more debt is dubious, because they're going to want to do that to pay for the wars that are rapidly spiralling out of control.
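Back-of-envelope for those percentages (the budget totals are rough assumptions on my part, just to show the arithmetic):

    deal = 120e9                 # assumed yearly commitment
    discretionary = 1.9e12       # assumed total US discretionary budget
    non_military = 0.95e12       # assumed non-defense discretionary share
    print(f"{deal / discretionary:.1%}")   # ~6.3%
    print(f"{deal / non_military:.1%}")    # ~12.6%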
None of this is saying that the US government can't or wouldn't pay for it, but it's non-trivial, and it's unclear how much Altman can threaten the US government with "give me a trillion dollars or the economy explodes" without consequences.
Further deficit spending isn't without its risks for the US government either. Interest rates are already creeping up, and a careless explosion of the deficit may well trigger a debt crisis.
> from public companies (funded with people’s retirement funds)
This would be at great cost. OpenAI would need to open up about its financial performance to go public itself. With its CFO being put on what is effectively administrative leave for pushing against going public, we can assume the financials are so catastrophic an IPO might bomb and take the company down with it. Nobody's going to be investing privately in a company that has no public takers.
Getting money through other companies is also running into limits. Big Tech has deep pockets but they've already started slowing down, switching to debt to finance AI investment, and similarly are increasingly pressured by their own shareholders to show results.
> from people’s pockets via wealth redistribution upwards
The practical mechanism of this is "AI companies raise their prices". That might also just crash the bubble if demand evaporates. For all the hype, the productivity benefit hasn't really shown up in economy-wide aggregates. The moment AI becomes "expensive", all the casual users will drop it. And the non-casual users are likely to follow. The idea of "AI tokens" as a job perk is cute, but exceedingly few are going to accept lower salary in order to use AI at their job.
There's simply not much money to take out of people's pockets these days, with how high cost of living has gotten.
> from offshore investment.
This is a pretty good source of money. The wealthy Arabian oil states have very deep slush funds, extensively investing in AI to get ties to US businesses and in the hope of diversifying their resource economies.
...
...
"Was". Was a good source of money.
I'm genuinely curious to find out how many billions they get every year from now.
love that theres virtually infinite capital there. meanwhile in the rest of the world there is virtually no food.
The "no food in other countries" is because of failed/corrupt governments, not because people use AI to generate cat pictures in the West. The economy is not a "fixed pie" that needs to be allocated among people of the world.
Just look at Cuba, which could be a very rich country and one of the prime tourist destinations of the world.
Are you kidding? If we spent all that money on food you guys would just use it to bullshit all day and make funny pictures, while if we spend it on AI..
I’m a bit of an optimist. I think this will smack the hands of developers who don’t manage RAM well and future apps will necessarily be more memory-efficient.
The demand is being driven by inference though. I really don't think there will be much motivation.
oh that would be a dream
> I think this will smack the hands of developers who don’t manage RAM well
And hopefully kill Electron.
I have never seen the point of spinning up a 300+MB app just to display something that ought to need only 500KB to paint onto the screen.
The point is being able to write it once with web developers instead of writing it a minimum of twice (Windows and macOS) with much harder to hire native UI developers.
And HTML/CSS/JS are far more powerful for designing than any of SwiftUI/IB on Apple, Jetpack/XML on Android, or WPF/WinUI on Windows, leaving aside that this is what designers, design platforms and AI models already work best with. Even if all the major OSes converged on one solution, it still wouldn't compete on ergonomics or declarative power for designing.
Lol SwiftUI/Jetpack/WPF aren’t design tools, they’re for writing native UI code. They’re simply not the right tool for building mockups.
I don’t see how design workflows matter in the conversation about cross-platform vs native and RAM efficiency since designers can always write their mockups in HTML/CSS/JS in isolation whenever they like and with any tool of their choice. You could even use purely GUI-based approaches like Figma or Sketch or any photo/vector editor, just tapping buttons and not writing a single line of web frontend code.
You mean the point is to dump it all on the end user's machine, hogging its resources.
It's bad enough having to run one bloated browser, now we have to run multiples?
This is not the right path.
The point is you can be lazy and write the app in HTML and JS. Then you don't need to write C, even though C syntax is similar to JS syntax and most GUI apps won't require advanced C features if the GUI framework is generous enough.
Now that everyone who can't be bothered vibe-codes, and Electron apps are the over-evangelized norm… people will probably not even worry about writing JS, and Electron will be here to stay. The only way out is to evangelize something else.
Like how half the websites have giant in-your-face cookie banners and half have minimalist banners: the experience will still suck for the end user because the dev doesn't care and neither do the business leaders.
Syntax ain't the problem. The semantics of C and JS could not be more different.
But the point isn't that they're more different than alike. The point is that learning C is not really that hard; it's just that corporations don't want you building apps with a stack they don't control.
If a JS dev really wanted to, it wouldn't be a huge uphill climb to code a C app, because the syntax and concepts are similar enough.
What "advanced features" are there to speak of in C? What does the syntax of C being similar to JS matter?
This comment makes no sense.
Well, there's the whole C89 vs C99 thing. I'll let you figure the rest out, since it's a puzzle from your perspective.
Honestly C and JavaScript could hardly be more different, as languages.
About the only thing they share is curly braces.
You do need a couple framebuffers, but for the most part yeah...
Who cares about 300MB? Where is that going to move the needle for you? And if the alternative is a memory-unsafe language, then 300MB is a price more than worth paying. Likewise if the alternative is the app never getting started, or being single-platform-only, because the available build systems suck too badly.
There ought to be a short one-liner that anyone can run to get easily installable "binaries" for their PyQt app for all major platforms. But there isn't, you have to dig up some blog post with 3 config files and a 10 argument incantation and follow it (and every blog post has a different one) when you just wanted to spend 10 minutes writing some code to solve your problem (which is how every good program gets started). So we're stuck with Electron.
> And if the alternative is a memory-unsafe language
and if not?
> and if not?
If the alternative is memory-safe and easy to build, then maybe people will switch. But until it is it's irresponsible to even try to get them to do so.
Until? Just take what's out there - it's so easy to improve on Electron
Like what? Where else (that's a name brand platform and not, like, some obscure blog post's cobbled-together thing) can I start a project, push one button, and get binaries for all major platforms? Until you solve that people will keep using Electron.
Using a lot less RAM often implies using more CPU, so even with inflated RAM prices, it's not a good tradeoff (at least not in general).
In practice, you generally see the opposite. The "CPU" is in fact limited by memory throughput. (The exception is intense number crunching or similar compute-heavy code, where thermal and power limits come into play. But much of that code can be shifted to the GPU.)
RAM throughput and RAM footprint are only weakly related. The throughput is governed by the cache locality of access patterns. A program with a 50MB footprint could put more pressure on the RAM bus than one with a 5GB footprint.
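A minimal sketch of that point, assuming NumPy is available; it only illustrates that access pattern, not footprint, is what loads the memory subsystem:

    import time
    import numpy as np

    a = np.arange(50_000_000, dtype=np.int64)   # ~400 MB working set
    idx = np.random.permutation(a.size)         # random visiting order over the same data

    t0 = time.perf_counter(); a.sum();      t1 = time.perf_counter()
    t2 = time.perf_counter(); a[idx].sum(); t3 = time.perf_counter()
    print(f"sequential sum: {t1 - t0:.3f}s, random-order sum: {t3 - t2:.3f}s")

Same footprint, wildly different pressure on the caches and the memory bus.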
You're absolutely right? I don't really disagree with anything you're saying there, that's why I said "generally" and "in practice".
Reducing your RAM consumption is not the best approach to reducing your RAM throughput is my point. It could be effective in some specific situations, but I would definitely not say that those situations are more common than the other ones.
The tradeoff has almost exclusively been development time vs resource efficiency. Very few devs are graced with enough time to optimize something to the point of dealing with theoretical tradeoff balances of near optimal implementations.
That's fine, but I was responding to a comment that said that RAM prices would put pressure to optimise footprint. Optimising footprint could often lead to wasting more CPU, even if your starting point was optimising for neither.
My response was that I disagree with this conclusion that something like "pressure to optimize RAM implies another hardware tradeoff" is the primary thing which will give, not that I'm changing the premise.
Pressure to optimize can more often imply just setting aside work to make the program be nearer to being limited by algorithmic bounds rather than doing what was quickest to implement and not caring about any of it. Having the same amount of time, replacing bloated abstractions with something more lightweight overall usually nets more memory gains than trying to tune something heavy to use less RAM at the expense of more CPU.
Only if the software is optimised for either in the first place.
Ton of software out there where optimisation of both memory and cpu has been pushed to the side because development hours is more costly than a bit of extra resource usage.
You're thinking an algorithmic tradeoff, but this is an abstraction tradeoff.
Some of the algorithms are built deep into the runtime. E.g. languages that rely on malloc/free allocators (which require maintaining free lists) are making a pretty significant tradeoff of wasting CPU to save on RAM, as opposed to languages using moving collectors.
Free lists aren't expensive for most usage patterns. For cases where they are we've got stuff like arena allocators. Meanwhile GC is hardly cheap.
Of course memory safety has a quality all its own.
Hopefully not implying you need a GC for memory safety...
Or just using less Electron and writing less shit code.
I'm a bit surprised the article makes no mention of Google's TurboQuant[0] introduced 26 days prior.
Given that TurboQuant results in a 6x reduction in memory usage for KV caches and up to 8x boost in speed, this optimization is already showing up in llama.cpp, enabling significantly bigger contexts without having to run a smaller model to fit it all in memory.
Some people thought it might significantly improve the RAM situation, though I remain a bit skeptical - the demand is probably still larger than the reduction turboquant brings.
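For a sense of scale, a back-of-envelope KV-cache calculation with made-up model dimensions (the 6x figure is the one claimed above, not something I've verified):

    # KV cache bytes = 2 (K and V) * layers * kv_heads * head_dim * context length * bytes per element
    layers, kv_heads, head_dim, ctx = 60, 8, 128, 128_000   # illustrative dimensions
    bf16_bytes = 2
    kv = 2 * layers * kv_heads * head_dim * ctx * bf16_bytes
    print(kv / 2**30)        # ~29 GiB per user at BF16
    print(kv / 6 / 2**30)    # ~5 GiB at the claimed 6x reduction

And that's per concurrent user, which is why providers care so much about it.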
[0] https://news.ycombinator.com/item?id=47513475
TurboQuant is known across the industry to not be state of the art. There are superior schemes for KV quant at every bitrate. Eg, SpectralQuant: https://github.com/Dynamis-Labs/spectralquant among many, many papers.
> Given that TurboQuant results in a 6x reduction in memory usage for KV caches
All depends on baseline. The "6x" is by stylistic comparison to a BF16 KV cache; not a state of the art 8 or 4 bit KV cache scheme.
BTW, a number of corrections. The TurboQuant paper was submitted to Arxiv back in April 2025: https://arxiv.org/abs/2504.19874
Current "TurboQuant" implementations are about 3.8X-4.9X on compression (w/ the higher end taking some significant hits of GSM8K performance) and with about 80-100% baseline speed (no improvement, regression): https://github.com/vllm-project/vllm/pull/38479
For those not paying attention, it's probably worth sending this and ongoing discussion for vLLM https://github.com/vllm-project/vllm/issues/38171 and llama.cpp through your summarizer of choice - TurboQuant is fine, but not a magic bullet. Personally, I've been experimenting with DMS and I think it has a lot more promise and can be stacked with various quantization schemes.
The biggest savings in kvcache though is in improved model architecture. Gemma 4's SWA/global hybrid saves up to 10X kvcache, MLA/DSA (the latter that helps solve global attention compute) does as well, and using linear, SSM layers saves even more.
None of these reduce memory demand (Jevons paradox, etc.), though. Looking at my coding tools, I'm using about 10-15B cached tokens/mo currently (was 5-8B a couple months ago), and while I think I'm probably above average on the curve, I don't consider myself doing anything especially crazy. This year, between mainstream developers and more and more agents, I don't think there's really any limit to the number of tokens that people will want to consume.
The work going into local models seems to be targeting lower RAM/VRAM, which will definitely help.
For example Gemma 4 32B, which you can run on an off-the-shelf laptop, is around the same or even higher intelligence level as the SOTA models from 2 years ago (e.g. gpt-4o). Probably by the time memory prices come down we will have something as smart as Opus 4.7 that can be run locally.
Bigger models of course have more embedded knowledge, but just knowing that they should make a tool call to do a web search can bypass a lot of that.
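Rough weight-only math for why a 32B model can fit on a beefy laptop (quantization levels are illustrative; the KV cache and activations come on top):

    params = 32e9
    for bits in (16, 8, 4):
        gib = params * bits / 8 / 2**30
        print(f"{bits}-bit weights: ~{gib:.0f} GiB")   # ~60, ~30, ~15 GiB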
The net effect won’t be a memory use reduction to achieve the same thing. We’ll do more with the same amount of memory. Companies will increase the context windows of their offerings and people will use it.
That is the sad reality of the future of memory.
I am not convinced that more context will be useful; practical use of current models at a 1M context window shows they get less effective as the window grows. Given model progress is slowing as well, perhaps we end up reaching a balance of context size and competency sooner than expected.
Stuff in more code. Stuff in more system prompt. Stuff in raw utf8 characters instead of tokens to fix strawberries. Stuff in WAY more reasoning steps.
Given the current tech, I also doubt there will be practical uses and I hope we’ll see the opposite of what I wrote. But given the current industry, I fully trust them so somehow fill their hardware.
Market history shows us that when the cost of something goes down, we do more with the same amount, not the same thing with less. But I deeply hope to be wrong here and that the memory market will relax.
You still need to hold the model in memory. If you have, for example, 16 GB of RAM, the gains aren't that much.
That's not what consumes the most memory at scale. The KV caches are per-user.
You can still use as much memory, but fit more things into it, so I don’t think the current market hogs will let go easily.
that will only increase the demand for RAM as models will now be usable in scenarios that weren't feasible prior, and the ceiling for model and context size is not even visible at this point
I hate to mention Jevons paradox as it has become cliche by now, but this is a textbook such scenario
Of course, alternatively, the AI companies could go bust before finding profitability. Then, there’d be a ton of supply, prices would crash, and one or two of the current memory suppliers would go out of business. After that, the new Chinese memory companies might be producing at volume, and Renesas could be up and running.
At the moment, nothing is certain. Could this last? Sure. Could it not last? Yup.
I wonder if this might motivate people to write more memory-efficient software. I mean, we have so much memory, yet even some trivial programs eat hundreds of megabytes of RAM.
I've definitely done some vibe-coding with the explicit intent to reduce memory usage.
How efficient is AI at reducing RAM consumption?
I'm a bit surprised the article makes no mention of China's new memory companies. [0]
[0] https://techwireasia.com/2026/04/chinese-memory-chips-ymtc-c...
As the article states:
>CXMT still trails Samsung, SK Hynix, and Micron by approximately three years in advanced DRAM node development, and yield rates on new production lines remain the variable that determines whether capacity targets translate into reliable supply. Liu notes that lines launched in the second half of 2026 are unlikely to change the global supply-demand balance until 2027.
The Verge article talks about demand exceeding supply in 2028. Your article suggests it'll take until 2029 before Chinese production catches up to current technology.
It'll help drive prices down in five years, but the Chinese memory production won't be ready and efficient enough to prevent the shortages from continuing to grow.
If a shortage lasts years, it's not a shortage. "The market clearing price of RAM in the face of expected sustained healthy demand should lead to a stable market for years."
Even if gaming is and will remain very popular for years, it and the desire to upgrade gaming rigs are still discretionary activities with more price elasticity of demand than corporate uses for RAM at the dawn of the AI age. Gamers live on the margin of this market, where low prices will stimulate upgrades and high prices will lead to holding out. The complaints about price are real, but that segment of the market is some combination of less large and less important.
It's not merely "gaming vs data center". There are so many other places DRAM and NVM are needed - mobile, automotive, other consumer electronics, … The current situation is that _all_ of that is deprived of the memory that it needs. And much of this is critical to the real economy.
Why are you only talking about gamers? Apple, the most cautious planners in the whole industry, have straight up cancelled their 512GB RAM Mac Studio. Don't ask; they won't sell you one.
Everybody’s getting pinched, not just the gamers.
The issue is that supply is inelastic, so even as prices soar they can only make more so fast, and that won't get fixed until 2028.
The era of optimisation is finally here. I'm excited.
I've said for many years that OS developers need to focus more on optimisation. If it wasn't a chip shortage, it would be the ever-slowing progress of chip scaling.
But software optimisation helps all hardware, and that doesn't drive sales.
Linux users, however, don't have to worry about that. Maybe it is finally the era of Haiku OS, as the ghost of BeOS rises!
Wait until China invades Taiwan.. (ok, it's not too likely, but what if?)
I think RAM shortages would be the least of our problems…
Assuming China takes TSMC in one piece (unlikely without internal sabotage in the best case scenario), it would still probably take years before it produces another high end GPU or CPU.
We would probably be stuck with the existing inventory of equipment for a long time…
I am surprised we consider TSMC like a natural resource: isn't it really a combination of know-how and build-out according to that know-how? If smarts leave the country, perhaps this moves with them.
The risk with China taking over Taiwan is that they mostly expedite their own production research by a couple of years.
It kinda does resemble a natural resource though. The machines and technology in use at TSMC are so insanely complex, that there isn't a single person on earth who knows everything about how it works. TSMC functions only because of all of the pieces of the puzzle being together in the right place and arranged in just the right way. It's a very fragile balance that keeps it all running, and a major disruption could mean we get thrown back by a decade in chip-making technology.
> I am surprised we consider TSMC like a natural resource: isn't it really a combination of know-how and build-out according to that know-how?
Have you seen how many states and countries look enviously at Silicon Valley’s tech companies, China’s manufacturing dominance, or London’s financial sector and try to replicate them?
Turns out it’s way harder than you’d expect.
Hell, Intel can’t match TSMC despite decades of expertise, much greater fame, and regulators happy to change the law and hand out tens of billions in subsidies.
With you on the first two, but I haven't heard of London's financial sector being a big deal, what's going on there?
What you say is absolutely true, and is a serious problem—but the way our system operates does not allow us to correct for it.
Anyone trying to spin up a competitor to TSMC would have to first overcome a significant financial hurdle: the capital investment to build all the industrial equipment needed for fabrication.
Then they'd have to convince institutions to choose them over TSMC when they're unproven, and likely objectively worse than TSMC, given that they would not have its decades of experience and process optimization.
This would be mitigated somewhat if our institutions had common-sense rules in place requiring multiple vendors for every part of their supply chain—note, not just "multiple bids, leading to picking a single vendor" but "multiple vendors actively supplying them at all times". But our system prioritizes efficiency over resiliency.
A wealthy nation-state with a sufficiently motivated voter base could certainly build up a meaningful competitor to TSMC over the course of, say, a decade or two (or three...). But it would require sustained investment at all levels—and not just investment in the simple financial sense; it requires people investing their time in education and research. Dedicating their lives to making the best chips in the world. And the only reason that would work is that it defies our system, and chooses to invest in plants that won't be finished for years, and then pay for chips that they know are inferior in quality, because they're our chips, and paying for them when they're lower quality is the only way to get them to be the best chips in the world.
China is 10 years into what you describe, no?
> the way our system operates
They have the other system.
This bit, I mean:
> A wealthy nation-state with a sufficiently motivated voter base could certainly build up a meaningful competitor to TSMC over the course of, say, a decade or two (or three...).
the scientists will switch sides with minimal issues, like they did after WWII
It seems that RAM manufacturers are still reluctant to increase production. Do they know something that investors don't about long-term RAM demand?
They've been burned before. The DRAM industry has a long history of booms and busts.
Demand increased, everyone built new fabs, then prices dropped and they couldn't pay off their investments. Many went out of business. It happened in the 80s, it happened in the 90s, it happened in the 2000s.
Now there's only three manufacturers left, and they know very well that demand for their product tends to be cyclical.
The semiconductor industry has been a boom and bust industry for over 50 years.
https://imgur.com/a/cDLoeZm
I've been in the industry for 30 years and I've worked at companies with fabs where demand was high and customers would only get 30% of what they ordered. Then just 2 years later our fab was only running at 50% capacity and losing money. It takes about $20 billion and 3-4 years to build a modern new fab. If you think that AI is a bubble, then do you want to be left with a shiny new factory and no products to sell because demand has collapsed?
They know the same thing everyone who's paying attention to the real world (and not the financial fantasy world) knows: that OpenAI's purchase commitments are wildly unrealistic and unsustainable.
What's the lose scenario for them? They're basically a cartel, and you need RAM regardless. If they make less, there's still a cost/demand balance, just not the most optimal one for them. They've done that math and figure this is the best risk/reward for them. Your goodwill or opinion doesn't matter to them, because you need them more than they need you.
> They’re basically a cartel,
The lawsuits in the past prove that "basically" should really be "actually".
But thank god we were all able to generate some SVGs of pelicans, right guys?
Hilarious. The RAM in the PC I built 5 years ago will soon be worth more than I spent on building the whole PC.
I was about to give away my old PC, but I think it could be worth my hassle to sell it for the RAM now (64GB DDR4).
I bought a workstation with 3 TB of ram for FDTD simulations last year. Glad I got it then ...
I just checked my gaming PC I built a few years ago with 64GB of DDR5 RAM; it's actually gone up in value, which is generally unheard of.
Think I will scrap my PC and sell its parts.
I wonder if there are any niche companies building decent rigs with DDR3 and 5/6th generation Intel CPUs out there, it is cheap and might be a business opportunity?
I work at an e-waste recycling company. I have several dozen trays of RAM in my inventory, ~90% of it DDR3. DDR3 was selling as of a month ago, but I haven't tried to sell any RAM since. I'm looking forward to doing a huge one this week.
do you have an online storefront?
I am still running a DDR3-era 2nd-gen i7 with 32GB RAM. It is surprisingly comfortable, but I also don't push it too hard.
If only we had not allowed oligopolies to exist. Meanwhile, the EU is not in the race at all and the US has very few fabs.
People don't want to pay more every day so they can pay less in an emergency.
Thank god they shut down 3D XPoint.
A fabricated shortage to push through the US CHIPS Act and the US Chip Security Act.
I want those AI companies that drove the prices up, to pay an immediate back-tax to all of us.
I don't want to pay more because of AI companies driving the price up. That is milking.
This is simple extrapolation from current demand, nothing more. And that's a borderline silly analysis because it assumes the AI bubble won't burst. The great misadventure in the Persian Gulf probably accelerates that because we're almost certainly going to be facing a recession.
Another thing I've been thinking about is what happens when the next generation of NVidia chips comes out? I suspect NVidia is going to delay this to milk the current demand but at some point you'll be able to buy something that's better than the H100 or B200 or whatever the current state-of-the-art for half the price. And what's that going to do to the trillions in AI DC investment?
I'm interested when the next bump in DRAM chip density is coming. That's going to change things although it seems like much of production has moved from consumer DRAM chips to HBM chips. So maybe that won't help at all.
I do think that companies will start seeing little or no return from the billions spent on AI, and that's going to be a problem. I also think that the hundreds of billions of capital expenditure by OpenAI is going to come crashing down, as there just isn't any even theoretical future revenue that can pay for all that.
> something that's better than the current state-of-the-art for half the price. And what's that going to do to the trillions in AI DC investment?
They'll just spend whatever they were planning to spend and get more performance.
I'm personally hoping that one of the AI or data center companies is suddenly unable to pay for their bills and deflate the entire industry. Probably the only hope of things getting better before the 2030s.
That’s likely to happen if all the talks about OpenAI pulling out of their wafer deals are true.
I fear that the real reason we have a shortage, I mean, the real reason for the demand, is AI companies scooping up what they can so that their competitors, whether existing or prospective, can't get to it.
This was one of the theories behind the wafer buyout by OpenAI, indeed. It's a pretty efficient way to make everyone panic and cut competitors off from new hardware.
Was it debunked in any way (e.g. by OpenAI actually showing what they do with the wafers)?
Can't read the article due to a paywall.
https://archive.is/QGU89
here is the source: https://www.reuters.com/world/asia-pacific/south-koreas-sk-g...
Expect shortages across the board. RAM? That's the tip of the iceberg, think food and gas.
I fear the author and most commenters are not aware of the law of demand and supply. If there is demand for consumer RAM, there will be supply for consumer RAM. It just takes time and risk-assessment to scale up operations.
We have a RAM shortage now; we will have very cheap RAM tomorrow. It's not like production is bottlenecked by raw materials. Chip companies just need to assess whether the demand from AI companies will last, in which case it's better to scale up, or whether they should wait it out instead of oversupplying and cutting into their profits.
We're talking about advanced semiconductor manufacture. It takes years and 100s millions to billions of dollars to scale up operations. That's something you don't do unless you know there's demand to sustain it in future.
The law of supply and demand works in a perfect competition market.
There are two RAM suppliers...