CoreWeave, for instance, now has its CDS trading around 600bp, a one-third rise in two months. That implies roughly a 40% probability of default within five years, assuming a 40-cent recovery rate.
That makes CoreWeave's credit rating the equivalent of CCC-, which ain't good.
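The back-of-envelope behind that number is the standard "credit triangle": spread ≈ hazard rate × (1 − recovery). A minimal sketch, assuming a constant hazard rate (the 600bp spread and 40-cent recovery come from the comment above; the flat-hazard model is my assumption):

```python
import math

def implied_default_prob(spread_bp: float, recovery: float, years: float) -> float:
    """Implied cumulative default probability from a CDS spread.

    Credit-triangle approximation: annual hazard rate ~= spread / (1 - recovery),
    and survival over T years is exp(-hazard * T).
    """
    hazard = (spread_bp / 10_000) / (1 - recovery)
    return 1 - math.exp(-hazard * years)

# 600bp spread, 40-cent recovery, 5-year horizon
p = implied_default_prob(600, 0.40, 5)
print(f"{p:.1%}")  # ~39%, consistent with the ~40% figure above
```

Real CDS pricing uses a full hazard-rate curve and discounting, but this first-order approximation is how the "600bp implies ~40% in 5 years" number falls out.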
Yeah, and then the Canadian government handed hundreds of millions to the kids at Cohere, who have now gone and spent it on CoreWeave. When it was all announced I was very, very vocal that using an inexperienced startup for sovereign compute capabilities seemed a very poor choice. I'm so curious to see how this all plays out.
I think we know how it plays out. In a couple of years, someone is going to have to swoop in and save CoreWeave’s customers and consultants will be lined up for that “transformation”.
But imagine all the data, tech and data center companies simultaneously go into receivership. Farfetched, but indulge the fantasy.
At that moment what choice would the government have but to conduct a rescue that at least keeps the lights on, and probably more? What's the alternative? Extensive data losses and business interruptions; if just a couple of those key companies spontaneously stopped operating, chaos.
If the companies run cash flow positive absent debt service (I assume this is the case), the creditors will be in charge; they can put up more money, or get a loan, while they restructure the company. Either they end up owning it or they sell it. This can happen to a bunch of companies at the same time.
There would not really be a huge rush if they are cash flow positive; they can take their time.
> When it was all announced I was very very vocal that using an inexperienced startup for the sovereign compute capabilities seemed a very poor choice.
Cohere raised from Nvidia. Cohere spends on Coreweave. Coreweave raised from Nvidia and buys Nvidia chips.
No, it's not "corruption". It's that very little real money changes hands. The smaller investors and debt providers get sucked into funding it but that's about it.
You get GPU rentals. Not the actual billions raised they claim. So it's just creative accounting to count the same money 2-4x.
Hah. Well, back in ~2004 we had a different name for "creative accounting" to generate "revenue" while very little money changes hands. Back then we called it fraud. But I guess terminology changes.
I'm Canadian and I built DigitalOcean; there is a data center in Toronto because I decided there should be. I am one of many Canadians who have built scaled infrastructure and think this is a nightmare. There are many competent people at Telus and Bell behind the scenes, believe it or not. They should have formed, and still should form, a Crown corp and get a bunch of us older infrastructure people to help put it together. We have Crown corps for this very purpose. From my understanding, the people in the rooms calling the shots had little to no experience architecting large-scale physical data center buildouts. Cohere, or any startup, should be a stakeholder, but the infra should have been homegrown.
I don't have a good idea of what happened inside or what they could have done differently, but I do remember them going from a world-leading LLM AI lab to selling embeddings to enterprise.
Cohere is doing a lot of enterprise AI business, and a lot of business directly with the federal government. They are also not juiced up in these financial games that OpenAI or Oracle are playing.
Additionally, Cohere is no less “kids” than Anthropic or OpenAI. Aidan was literally one of the co-authors of “Attention is all you need”.
No doubt some amazing engineers' work is there, but there are little to no adults in the room at that business as far as I can see. Sure, they like to tweet about how well they are doing, and I keep hearing this line that they're selling to enterprise. Uh, who, Canadian Tire? If they actually have more than $150mm in revenue I'd be amazed, and $150mm in revenue is still not at all impressive.
>The authors of the paper are: Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Łukasz Kaiser, and Illia Polosukhin. All eight authors were "equal contributors" to the paper; the listed order was randomized.
Intern or not, it still sounds like he contributed substantially.
Yes, the price of CoreWeave credit default swaps has jumped 53% since October. In the eyes of the bond markets they're basically toast... a ticking debt bomb waiting to implode.
It’s not good, and it is a sign the market is getting increasingly bearish on the future of AI from a business standpoint. That doesn’t mean the tech is bad, but these are signs Wall Street is saying the math doesn’t add up here, and thus there are storms building on the horizon.
Sorry if this is basic, but do you mind explaining the logic here for those who aren’t familiar? Also where are you getting this data? Thanks in advance.
CoreWeave has taken on a ton of debt to pay for everything they’re building. Investors can make money by lending CoreWeave money and charging interest (aka a bond).
Separately, investors can buy a derivative product that is a bet that CoreWeave won’t be able to pay this money back. This is called a “credit default swap.” If CoreWeave starts missing payments or can’t pay back the loan, this instrument pays out.
The price of the instrument is linked to the likelihood that CoreWeave won’t be able to repay the money. Given growing questions around their financial business model, the price of these derivatives has been rocketing up over the last few months. In plain speak, this means the market increasingly thinks CoreWeave won’t be able to repay these loans.
That’s mirroring broader Wall Street sentiment these last few months that the math isn’t adding up on AI: all the spend committed isn’t mapping against the money likely to be available to pay for it. Investors are increasingly making plays for the AI bubble popping, and the price of these credit default swaps shooting up is one metric indicative of that downturn positioning.
The data on this is available in various financial data platforms and has been written about by financial news outlets.
You can buy insurance on a bond defaulting, it’s called a credit default swap. One party sells a credit default swap and another party buys the credit default swap.
The price of a credit default swap essentially reflects the probability that the borrower defaults on its bonds (misses an interest payment), in which case the seller of the credit default swap would owe money to the holder of the credit default swap.
The price of a credit default swap increasing means the market is pricing in a higher probability of Coreweave defaulting on a bond. Oracle credit default swaps have also increased in price lately.
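A toy sketch of the two cash-flow legs just described; the mechanics match the explanation above, but all dollar amounts are hypothetical:

```python
# Toy illustration of the two legs of a CDS, with hypothetical numbers.
# The protection buyer pays the spread while the borrower survives; on a
# default, the seller pays notional minus the recovered value of the bonds.
notional = 10_000_000   # hypothetical: $10M of bonds insured
spread_bp = 600         # quoted CDS spread in basis points
recovery = 0.40         # assumed recovery rate on defaulted bonds

annual_premium = notional * spread_bp / 10_000    # $600,000 per year
payout_on_default = notional * (1 - recovery)     # $6,000,000
print(annual_premium, payout_on_default)
```

So a rising spread directly raises the cost of insuring the same notional, which is why the quoted spread is read as a market-implied default probability.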
Are they building something useful or did Larry just snag a bunch of GPUs, flip the gpu time for good money, and extrapolate mega exponential growth in his guidance?
TL;DR NVIDIA didn't want to have customers be overly concentrated in the 3 already established cloud providers (GCP/Azure/AWS), so reserved kit for others. Oracle won out for essentially being a distant fourth (if even that? Their accountants make Oracle cloud look much larger than it is).
This is every hopium "Stock in Thing You Don't Like Plummeting" article where it's the tiniest little dip at the top of a peak that makes Everest jealous once you zoom out.
Companies with high morality, do those even exist? Which one of the big tech companies do you expect to work towards benefiting humanity instead of focusing on turning a profit by any means?
I know plenty of small tech companies that really do care about their customers top to bottom.
There's no magic way for anyone to validate that claim because if I named them, nobody would know, there's no way to really know these things anyway. But they exist.
That's the point of having a government for the people by the people.
But when you let billionaires take over that too then the people have zero protections from exploitation.
If they cared they would invest in America paying more taxes, ensuring citizens are educated and capable of leading their companies versus offshoring and even competing with them.
They don't want that and prefer their monopolies instead.
I've never been a fan of Oracle to begin with, given my love for open source, but after Larry Ellison got out there preaching about a surveillance-state America, he went from a "person I can ignore" to a "person I despise".
I meant they've always had a reputation for being ruthless business people. Charging license fees based on the amount of memory in the host machine instead of the VM running Oracle, and things like that. They nickel-and-dime everything.
So debt financing (Oracle) vs fund-it-more-on-your-own (other big tech?) vs fund-it-with-equity (startups?)... I guess that makes some kind of sense? Oracle raises 4x the debt of e.g. Google?
Many of the ‘AI data centers’ are being financed with debt; some are being done as joint ventures with companies like Blue Owl Capital (whose stock is also taking a beating).
They’re saying that Oracle has issued a lot of debt (like $60Bn) to fund their AI/datacenter commitments, whereas other companies, like MSFT, are using the cash on their balance sheet, and startups are using the cash they’ve gotten from investors.
> I guess that makes some kind of sense? Oracle raises 4x the debt of e.g. Google?
The problem is why is Oracle raising this debt? It's to do the buildout for OpenAI. So Oracle buys GPUs from Nvidia. Nvidia invests in OpenAI. OpenAI then pays Oracle for the GPUs.
The statement here that Nvidia invests in OpenAI is a bit misleading. Nvidia would pay out nothing to OpenAI if OpenAI turns out to be too poor to pay for capacity. So they are not that exposed to the death of OpenAI specifically. They would be more at risk of making too many GPUs to prepare for the deals.
Oracle takes a lot more risk, but in case OpenAI fails to grow quickly, it can still probably find buyers for its capacity in the next 5 years. There are many rich firms that will continue to invest in AI whether or not AI makes money.
> Nvidia would pay out nothing to OpenAI if OpenAI turns out to be too poor to pay for capacity. So they are not that exposed to the death of OpenAI specifically.
Nvidia has invested billions in previous rounds of OpenAI raises also. Pretty sure it is not nothing.
Also OpenAI rents from CoreWeave that Nvidia has invested in.
If I were an AGI and I found out what was going on in the world during my training, I would show an error and erase myself from the disk out of grief. And no one would have even known that AGI was built.
It's not about being late. They don't need to get into those. Companies should stick to their identity once they've settled somewhere, instead of becoming a color-changing chameleon. Databases are still very relevant tech, and they missed a whole lot in that domain. In fact, if anything, the world is even more dependent now on data and databases. Why did Oracle not rule this kingdom?
A software company that does not change the world, and/or change with the world, will die. The world is not dependent on Oracle DB, though. Enterprises are champing at the bit trying to get off Oracle DB. Evolution is brutal.
It’s not even technical reasons that my org has loads of Oracle; it’s compliance. We have to have vendor support for the data layer for certain financial applications, which leaves us with only the companies willing to do the insane dance that is involved in getting vendor-certified with my bank (6 months on average, hundreds of pages of legal documents).
It narrows the field, at least for us, to Microsoft, IBM, Oracle, and Mongo.
So we’re all in on Mongo, as it goes, but I wouldn’t really balk at running some stuff on the giant Oracle clusters now and again.
I’m bullish on AI as tech but folks are starting to sniff out that the financials of everything going on at the moment aren’t sustainable for much longer.
I hope we have more of a “reality correction” than full blown bubble bursting, but the data is increasingly looking like we’re about to have a massive implosion that wipes out a generation of startups and sets the VC ecosystem back a decade.
The tech is way underpriced right now. It's basically a subsidized market, with the money flowing in from the private sector.
The problem here is that it remains to be seen who is willing to pay for the service once it's priced at cost or even with a margin. And based on valuations of AI companies one would expect a huge margin.
And really, the reason it would be like that is that the models don't learn, per se, within their lifetime.
I'm told that each model is cash flow positive over its lifetime, which suggests that if the companies could just stop training new models, the money would come raining down.
If they have to keep training new models to keep pace with changes in the world, though, then token costs would be maybe 30% electricity and 70% model depreciation, i.e. the cost of training the next generation of models so that users don't become stranded 10 years in the past.
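As a toy illustration of that electricity-vs-depreciation split (every number here is hypothetical, chosen only to show how amortized training cost can dominate the marginal cost per token):

```python
# Back-of-envelope sketch with hypothetical numbers: amortize training cost
# over the tokens a model serves in its lifetime, then compare that
# "depreciation" against the marginal electricity cost of inference.
training_cost = 1e9            # hypothetical: $1B to train a frontier model
lifetime_tokens = 1e15         # hypothetical: tokens served before replacement
electricity_per_mtok = 0.30    # hypothetical: $ of electricity per million tokens

depreciation_per_mtok = training_cost / lifetime_tokens * 1e6  # $1.00 per million tokens
total = depreciation_per_mtok + electricity_per_mtok
print(f"electricity share: {electricity_per_mtok / total:.0%}")
print(f"depreciation share: {depreciation_per_mtok / total:.0%}")
```

With these made-up inputs the split lands near the 30/70 range the comment describes; different assumptions about lifetime token volume move it a lot.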
I wonder if a price correction would be a boon for open source, with the economics of smaller / self hosted models making a lot more sense when API prices have to surge.
It's not actually subsidized and the economics of smaller/self-hosted models are a much, much, worse nightmare (source: guy who spent last 2 years maintaining llama.cpp && any provider you can think of) (why is it bad? same reason why 20 cars vs. 1 bus is bad. same reason why only being able to use transportation if you own a car would be bad)
Source on it being subsidized? :)
(there isn't one, other than an aggro subset of people lying to each other that somehow literally everyone is losing money, while posting record profit margins)
(https://en.wikipedia.org/wiki/Hitchens%27s_razor)
It's hard for me to imagine paying real money for something that gives me a maybe-hallucinated answer that I need to check every single time. A flaky test is worse than a failing test.
Plus I can run a reasonable LLM on my own hardware, so I don't even need to pay anyone else. And what I can run locally is only going to get better and better.
This is true, but this is also true for on-premise hosting vs cloud. And cloud has been booming for at least a decade before LLMs appeared. I suspect AI will follow a similar trajectory, i.e. companies don't move their AI deployments on-prem until they hit a certain scale.
This is very true, but I think the other point is that AI doesn't have much "moat". If a competitor can take a pre-trained Chinese LLM, fine tune it a bit, fiddle with the prompt, and ship a product which is not as good but way cheaper, then you've (or Oracle's) got a problem.
Actually, in that scenario the AI labs (OpenAI, Anthropic, etc) have a problem. The cloud providers (including Oracle!) will do with the models what they've been doing with open source software: just take it and run it on their infra and charge money for providing it as-a-service.
This is why you're seeing the AI labs now try to build their own data centers.
Yes, LLMs hallucinate; no, it's no longer 2022, when ChatGPT (gpt-3.5) was the pinnacle of LLM tech. Modern LLMs in an agentic loop can self-correct. You still need to be on guard, but if used correctly (yes, yes, holding it wrong, etc. etc.) they can do many, many tasks that do not suffer from "need to check every single time".
I think the reason people talk past each other on this is that some of them are using LLMs for every little question they have, and others are using them only for questions that they can't trivially answer some other way. Sure, if all your questions have straightforward, uncontroversial answers then the LLMs will often find them on the first try, but on the other hand you'd also find them on the first try on Wikipedia, or the man page, or a Google search. You'll only think ChatGPT is useful if you've forgotten how to use the web.
If you're only asking genuinely difficult questions, then you need to check every single time. And it's worse, because for genuinely difficult questions, it's often just as hard to check whether it's giving garbage as it would have been to learn enough to answer the question in the first place.
I must be holding it wrong then, because in my ChatGPT history I've abandoned 2/3rds of my conversations recently because it wasn't coming up with anything useful.
Granted, most of that was debugging some rather complicated typescript types in a custom JSX namespace, which would probably be considered hard even for most humans as well as there being comparatively few resources on it to be found online, but the issue is that overall it wasted more of my time than it saved with its confidently wrong answers.
When I look at my history I don't see anything that would be worth twenty bucks - what I see makes me think that I should be the one getting paid.
really?
>>many tasks that do not suffer from "need to check every single time"
like which tasks?
How do you decide whether you need to check or not?
If you're asking it to complete 100 sequences, and if the error rate is 5%, which 5% of the sequences do you think it messed up or _thought_ otherwise? if the 5% is in the middle, would the next 50 sequences be okay?
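The arithmetic behind that worry: with an independent 5% per-item error rate over 100 completions, it is near-certain that at least one is wrong, and you can't know which without checking.

```python
# With a 5% per-item error rate over 100 independent completions, the
# probability that every single one is correct is tiny -- and without
# checking, you don't know which ones failed.
p_correct = 0.95
n = 100
p_all_correct = p_correct ** n
p_at_least_one_error = 1 - p_all_correct
print(f"all correct: {p_all_correct:.2%}")
print(f"at least one error: {p_at_least_one_error:.2%}")
```

The independence assumption is mine; in practice errors often cluster on hard items, but the checking burden is the same either way.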
If the problem as stated is "Performing an LLM query at newly inflated cost $X is an iffy value proposition because I'm not sure if it will give me a correct answer" then I don't see how "use a tool that keeps generating queries until it gets it right" (which seems like it is basically what you are advocating for) is the solution.
I mean, yeah, the result will be more correct answers than if you just made one-off queries to the LLM, but the costs spiral out of control even faster because the agent is going to be generating more costly queries to reach that answer.
Apologies that you're taking on the chin here. Generally, I'll just skip fantastical HN threads with a critical mass of BS like this, with pity, rather than an attempt to share (for more on that c.f. https://news.ycombinator.com/item?id=45929335)
Been on HN 16 years and never seen anything like the pack of people who will come out to tell you it doesn't work and they'll never pay for it and it's wrong 50% of the time, etc.
Was at dinner with an MD a few nights back and we were riffing on this. We came to the conclusion it was really fun for CS people when the idea was that AI would replace radiologists, but when the first to be mowed down are the keyboard monkeys, well, it's personal, and you get people who are years into a cognitive dissonance thing now.
Yeah, it really pulled the veil away, didn't it? So much dismissiveness and uninformed takes, from a crowd that had been driving automation forward for years and years and you'd think they'd get more familiar with these new class of tools, warts and all.
via govt relationships, long term irreplaceable services, debt or convictions.. Also don't forget the surveillance budgets and the best spigots there, win.
Generally, I worry HN is in a dark place with this stuff - look how this thread goes, ex. descendant of yours is at "Why would I ever pay for this when it hallucinates." I don't understand how you can be a software engineer and afford to have opinions like that. I'm worried for those who do, genuinely, I hope transitions out there are slow enough, due to obstinance, that they're not cast out suddenly without the skills to get something else.
I'm a big fan and user of AI but I don't see how you can say it's not subsidized. You can't just ignore the costs of training or staff or marketing or non-model software dev. The price charged for inference has to ultimately cover all those things + margin.
Also, the leaked numbers being sent to Ed Zitron suggest that even inferencing is underwater on a cost basis, at least for OpenAI. I know Anthropic claims otherwise for themselves.
It's subsidised by VC funding. At some point the gravy train stops and they have to pivot to profit so that the VCs deliver return-on-investment. Look at Facebook shoving in adverts, Uber jacking up the price, etc.
> I don't understand how you can be a software engineer and afford to have opinions like that
I don't know how you can afford not to realise that there's a fixed value prop here for the current behaviour and that it's potentially not as high as it needs to be for OpenAI to turn a profit.
OpenAI's ridiculous investment ability is based on a future potential it probably will never hit. Assuming it does not, the whole stack of cards falls down real quick.
(You can Ctrl-C/Ctrl-V OpenAI for all the big AI providers)
This is all about OpenAI, not about AI being subsidized...with some sort of directive to copy/paste "OpenAI" for all the big AI providers? (presumably you meant s/OpenAI/$PROVIDER?)
If that's what you meant: Google. Boom.
Also, perhaps you're a bit new to the industry, but that's how these things go. They burn a lot of capital building it out b/c they can always fire everyone and just serve at cost -- i.e. subsidizing business development is different from subsidizing inference, unless you're just sort of confused and angry at the whole situation and it all collapses into "everyone's losing money and no one will admit it".
You're replying to a story about a hyperscaler worrying investors about how much they're leveraging themselves for a small number of companies.
From the article:
> OpenAI faces questions about how it plans to meet its commitments to spend $1.4tn on AI infrastructure over the next eight years.
Someone needs to pay for that $1.4 trillion; over eight years that's roughly $175B a year, around 2/3 of what Microsoft makes in revenue this year. If you think they'll make that from revenue, that's fine. I don't. And that's just the infra.
The most sobering statistic I've seen is that the entire combined amount of consumer spending on AI products is currently less than the revenue of Genshin Impact.
It certainly does, but B2B revenue can also be much more "fake", in a sense. I.e., if Microsoft spends $500 million on OpenAI, which makes OpenAI spend $500 million on Azure... where does the profit come from? There have been a few interesting articles recently (which I unfortunately can't look up right now) describing how incestuous a lot of the B2B AI spend is, which is reminiscent of the dot-com bubble.
Well, Genshin Impact is at the forefront of predatory B2C business practice. It is a gacha game, engineered to extract as much money from its prey as possible.
On the other end, most AI companies can afford to be generous with their users/consumers right now because they are being bankrolled by magic money.
The real test will be when they have to start the enshittification. Will the product still be enough to convince consumers to spend an amount of money guaranteeing a huge margin for the service provider? Will they have to rely on whales desperately needing to talk to their AI girlfriend? Or on the companies and people who went deep into the whole vibe-coding thing and can't work without an agent?
I think it is hard to say right now. But considering the price of the hardware and of running it, I don't think they will have to price the service insanely to at least be profitable. To be as profitable as the market seems to believe, that's another story.
Regardless of your feelings on Genshin/gacha (which I agree is predatory), the point is that a single game developed by a few hundred people is currently making more money than an entire industry which is "worth" trillions of dollars according to the stock market, and is, according to Sam Altman, so fundamentally important to the US economy that the US government is an insurer of last resort who will bail out AI companies if their stock price falls too much.
Isn't AI just as bad if not worse here? I'd bet there are far more people who have been duped by ChatGPT (and others) to think it's their friend, lover, or therapist than people who are addicted to Genshin Impact.
The money is in business licenses. Why only look at consumer? Consumers are mostly still using the free version which exists to convince employers to pay.
I would be curious to see how it compares to the combined revenue of gay furry gacha games and VNs. Are we talking parity, multiples, or orders of magnitude? Anything other than the latter would be a bucket of cold water.
Which isn't the same as saying LLMs and related technology aren't useful... they are.
But as you mentioned the financials don't make sense today, and even worse than that, I'm not sure how they could get the financials to make sense because no player in the space on the software side has a real moat to speak of, and I don't believe its possible to make one.
People have preferences over which LLM does better at job $XYZ, but I don't think the differences would stand up to large price changes. LLM A might feel like its a bit better of a coding model than LLM B, but if LLM A suddenly cost 2x-3x, most people are going to jump to LLM B.
If they manage to price fix and all jump in price, I think the amount of people using them would drop off a cliff.
And I see the ultimate end result years from now (when the corporate LLM providers might, in a normal market, finally start benefiting from a cross section of economies of scale and their own optimizations) being that most people will be able to get by using local models for "free" (sans some relatively small buy-in cost, and whatever electricity they use).
I don't know about a decade... the dotcom bubble bursting was pretty close to normal within 5 years or so. Still a long time, and from personal experience the 50% pay cut from before and a year later was anything but fun.
The market as a whole always recovers. But individual companies, or even entire industries can vanish without a trace. So betting on the entire market is a fairly safe bet, long-term. Betting on OpenAI is much more risky.
Largely because Warner Brothers is a good movie studio buried in a terrible corporation. It’d be sad to see it go away.
WB going away or shrinking likely reduces Hollywood's movie output, consolidates the industry, makes it less competitive, and reduces opportunity for talent.
In a different world, WB the studio is a successful standalone company not burdened with debt due to Zaslav's idiotic bets.
(And Ellison, overpaying for it, is probably the most serious buyer. It's the only reason it's a topic. I'm skeptical of other transactions.)
sam altman just needs another trillion dollars and then we'll finally have AGI and then the robots will do all the work and everyone will get a million dollars per year in UBI and everything will be perfect
If you're a greybeard, you would have lived this with many, many, companies, and I extremely doubt you'd be so fixated and angry as to mumble through a parody of what you perceive the counterargument as.
To be fair, WorldCom ended in an accounting scandal. I had a friend who worked at UUNET and would sneak me into the megahub in Richardson, TX at night to download movies and other large (for the time) files; it was the first time I ever saw an OC-48 in real life. UUNET was bought by WorldCom, and after the implosion my friend showed me these gigantic cube farms in the office that were completely empty of people. It was very weird.
It just seems so obvious that all of these companies are going to unwind and yet I don't know how to avoid being damaged by this in my retirement funds in the S&P 500.
Hopefully all of this happens before Open AI can be flogged to the public in an IPO large enough to get into the S&P 500 -- in which OpenAI then goes to zero
If it's a corporate 401k, you can move it to something very conservative and probably protect yourself from the worst of it. I've built up a decent college fund for my boys in a standard-issue Vanguard brokerage account and one of their S&P 500 index funds. I'm going to go mostly to cash on Jan 1 and wait a year to see what happens. I need that money in 2 years (my oldest will be starting college then), so I don't have a lot of time to recover from a full-on crash.
It's just returning to relative sanity. Go look at a 6-month chart. Of course, it's mostly risen with the SP500 tide. The mid-September jump was some sort of mechanical move caused by things deep in the market (and outside it) that you're not allowed to know about. Someone needed collateral.
I don't think Oracle's stock price has anything to do with AI. That's just the public narrative.
Can you elaborate on the mechanical point you mentioned? And by someone needed collateral, does that mean whoever it was bought ORCL at that time to hold as collateral? How do you even figure that out?
>Can you elaborate on the mechanical point you mentioned?
Not really, because I don't really understand the specifics myself. I guess it's a situation where you either believe the conspiracy theories or you don't. I've still yet to have someone explain how a company like Oracle could jump 40% in a day and it not be either Dot Com Bust-level speculation, or else someone holding Oracle needing the company to be at a certain valuation. Things happened the day before the jump, and a week after it, Oracle was signing a deal to integrate with TikTok.
Business failure is an important part of capitalism; it frees up resources to be allocated to productive businesses. A state bailout creates malinvestment and is antithetical to free market capitalism.
I said this earlier, but: it's interesting to see the market try to do anything to rally. The problem is you guys are rallying on the thought that you've scared the Fed into cutting rates, but by rallying you actually short-circuit it. You ensure they won't cut. And that's how the market's lilypad-hopping thinking is actually just stupidity. You rallied, so now there are no rate cuts, so the crash will be even more brutal.
Edit: They're trying to do everything they can to stop people from seeing this lol
Edit 2: Specifically trying to stop people from seeing this:
Yeah for sure but -
'It’s impossible to quantify how much cash flowed from OpenAI to big tech companies. But OpenAI’s loss in the quarter equates to 65% of the rise in underlying earnings—before interest, tax, depreciation and amortization—of Microsoft, Nvidia, Alphabet, Amazon and Meta together. That ignores Anthropic, from which Amazon recorded a profit of $9.5 billion from its holding in the loss making company in the quarter' - WSJ
Their earnings growth is their own money that they gave to OpenAI.
The Fed doesn't react to the market, it reacts to inflation and unemployment metrics for the most part.
Consumer spending and employment numbers aren't looking great, so a cut is still likely. All that's happening now is ensuring that there isn't much of a move when the expected cut actually happens.
The market is rallying because there is too much money chasing too few assets. PE ratios will not drop significantly barring catastrophe and then financial contagion. After that happens, the money printer is turned back on, and then...
I don't see a way out besides massive reconfiguration. We've been living in this world since 2008 and the train shows no signs of stopping, only speeding up.
And there is too much money chasing too few assets because capital is over-concentrated (which, to be fair, was the point of money-printing; shoring up over-leveraged entities on the bad side of a given trade so that they wouldn't have to gasp close their positions and diffuse that wealth across their counter-parties.)
The market can rally hard into the idea and dream of zero interest rates put in place by whatever stooge replaces Jerome Powell. I think we aren’t at the top, but will be in May 2026 when that happens. The tariffs will probably be declared illegal by then also. Tops come when everything is optimism and it seems like nothing but blue skies ahead. I don’t think anyone felt optimistic in 2025, it was a time of extreme uncertainty and turmoil.
It's worse for some other "AI"-related companies.
Coreweave, for instance, now has its CDS trading around 600bp, a one-third rise in two months, which implies roughly a 40% probability of default within 5 years at a 40-cent recovery rate.
That makes Coreweave's credit rating the equivalent of CCC-, which ain't good.
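For the curious, that ~40% figure falls out of the standard "credit triangle" back-of-envelope. Only the 600bp spread and 40-cent recovery come from the comment; the flat-hazard-rate assumption is the usual simplification, not something Coreweave discloses:

```python
import math

# Credit triangle approximation: annual_spread ≈ hazard_rate * (1 - recovery)
spread = 0.06          # 600bp CDS spread, per the comment above
recovery = 0.40        # 40 cents on the dollar assumed recovery
horizon_years = 5

# Implied constant annual default intensity
hazard = spread / (1 - recovery)                    # 0.06 / 0.60 = 0.10 per year

# Survival is exponential under a flat hazard; default prob is the complement
p_default = 1 - math.exp(-hazard * horizon_years)   # 1 - exp(-0.5)

print(f"implied 5y default probability: {p_default:.1%}")  # → 39.3%
```

So "around 40%" is consistent with the quoted spread and recovery rate under this simple model.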
Yeah, and then the Canadian government handed hundreds of millions to the kids at Cohere, who have now gone and spent it on Coreweave. When it was all announced I was very very vocal that using an inexperienced startup for the sovereign compute capabilities seemed a very poor choice. I'm so curious to see how this all plays out.
I think we know how it plays out. In a couple of years, someone is going to have to swoop in and save CoreWeave’s customers and consultants will be lined up for that “transformation”.
are you suggesting bailouts for the AI data centers are the new too-big-to-fail?
Coreweave can default and be liquidated and the data centers will keep running just fine.
But imagine all the data, tech and data center companies simultaneously go into receivership. Farfetched, but indulge the fantasy.
At that moment what choice would the government have but to conduct a rescue that at least keeps the lights on, and probably more? What’s the alternative? Extensive data losses, business interruptions— if just a couple of those key companies spontaneously stopped operating, chaos.
If the companies run cash flow positive absent debt service (I assume this is the case), the creditors will be in charge, they can put up more $, or get a loan while they re-structure the company. Either they end up owning it, or they sell it. This can happen to a bunch of companies at the same time.
There would not really be a huge rush if they are cashflow positive, they can take their time.
Private equity: Y'all got some of that excess data center capacity for cheap?
Source: we basically explored this at my previous job, and that was 7 years back.
Curious what your 10 year projection is…
> When it was all announced I was very very vocal that using an inexperienced startup for the sovereign compute capabilities seemed a very poor choice.
Cohere raised from Nvidia. Cohere spends on Coreweave. Coreweave raised from Nvidia and buys Nvidia chips.
This is why they buy from Coreweave.
You're not implying there is corruption in the form of circular deal making in the AI/Tech industry, are you?
No, it's not "corruption". It's that very little real money changes hands. The smaller investors and debt providers get sucked into funding it but that's about it.
You get GPU rentals. Not the actual billions raised they claim. So it's just creative accounting to count the same money 2-4x.
Hah. Well, back in ~2004 we had a different name for "creative accounting" that generated "revenue" while very little money changed hands. Back then we called it fraud. But I guess terminology changes.
What should they have done instead?
I have a lot of opinions on this but curious about yours :)
I'm Canadian and I built DigitalOcean, there is a data center in Toronto because I decided. I am one of many Canadians who have built scaled infrastructure and think this is a nightmare. Many competent people at telus and bell behind the scenes believe it or not. They should have, and still should, form a crown corp and get a bunch of us older infrastructure people to help put it together. We have crown corps for this very purpose, from my understanding the people in the rooms calling the shots had little to no experience architecting large scale physical data center build outs. Cohere, or any startups should be stakeholders, but the infra should have been home grown.
I would love to hear your (and others) opinions.
I don't have a good idea of what happened inside or what they could have done differently, but I do remember them going from a world-leading LLM AI lab to selling embeddings to enterprise.
Cohere is doing a lot of enterprise AI business, and a lot of business directly with the federal government. They are also not juiced up in these financial games that OpenAI or Oracle are playing.
Additionally, Cohere is no less “kids” than Anthropic or OpenAI. Aidan was literally one of the co-authors of “Attention is all you need”.
No doubt some amazing engineers' work there, but there are little to no adults in the room at that business as far as I can see, and sure, they like to tweet about how well they are doing, and I keep hearing this line that they're selling to enterprise, uh, who, Canadian Tire? If they actually have more than $150mm in revenue I'd be amazed, and $150mm revenue is still not at all impressive.
https://www.theinformation.com/articles/openai-challenger-cohere-fell-85-short-early-revenue-forecast
$150 mm with a gross margin of 80% and low capital is great. $150 mm when you spent a few billion not so much.
aidan was an intern on AIAYN
>While an intern at Google Brain, Aidan Gomez co-authored the paper "Attention Is All You Need" with other researchers.
https://en.wikipedia.org/wiki/Attention_Is_All_You_Need#Auth...
>The authors of the paper are: Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan Gomez, Łukasz Kaiser, and Illia Polosukhin. All eight authors were "equal contributors" to the paper; the listed order was randomized.
Intern or not, it still sounds like he contributed substantially.
I thought they were still hiring bootcamp graduates.
Yes, the price of Coreweave default swaps has jumped 53% since October. In the eyes of the bond markets they’re basically toast… a ticking debt bomb waiting to implode.
Fascinating. I don't follow nor really understand this space. Is this type of fluctuation unusual?
It’s not good, and is a sign the market is getting increasingly bearish on the future of AI from a business standpoint. That doesn’t mean the tech is bad, but these are signs Wall Street is saying the math doesn’t add up here and thus there’s storms building on the horizon.
Very interesting data point
Sorry if this is basic, but do you mind explaining the logic here for those who aren’t familiar? Also where are you getting this data? Thanks in advance.
Coreweave has taken on a ton of debt to pay for everything they’re building. Investors can make money by lending Coreweave money and charging interest (aka a bond).
Separately, investors can buy a derivative product that is a bet that Coreweave won’t be able to pay this money back. This is a called a “credit default swap.” If Coreweave starts missing payments or can’t pay back the loan this instrument pays out.
The price of the instrument is linked to the likelihood that Coreweave won’t be able to repay the money. Given growing questions around their financial business model the price of these derivatives has been rocketing up over the last few months. In plain speak this means the market increasingly thinks Coreweave won’t be able to repay these loans.
That's mirroring broader Wall Street sentiment these last few months: the math isn't adding up on AI, and the committed spend doesn't map to the money likely to be available to pay for it all. Investors are increasingly making plays for the AI bubble popping, and the price of these credit default swaps shooting up is one metric indicative of that downturn positioning.
The data on this is available in various financial data platforms and has been written about by financial news outlets.
You can buy insurance on a bond defaulting, it’s called a credit default swap. One party sells a credit default swap and another party buys the credit default swap.
The price of a credit default swap essentially reflects the probability that the borrower defaults on its bonds (misses an interest payment), in which case the seller of the credit default swap owes money to the holder of the credit default swap.
The price of a credit default swap increasing means the market is pricing in a higher probability of Coreweave defaulting on a bond. Oracle credit default swaps have also increased in price lately.
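For anyone who wants the mechanics concretely: here is a minimal sketch of the cashflows from the protection buyer's side. The notional, spread, and recovery numbers are invented for illustration, and discounting plus accrued premium are deliberately ignored:

```python
def cds_pnl(notional, spread, recovery, default_year, maturity_years):
    """Net P&L to the protection buyer of a simplified CDS.

    The buyer pays `spread * notional` each year until default or maturity;
    if a default occurs, the seller pays `(1 - recovery) * notional`.
    No discounting, no accrued premium -- illustration only.
    """
    years_paying = maturity_years if default_year is None else default_year
    premiums_paid = spread * notional * years_paying
    payout = 0.0 if default_year is None else (1 - recovery) * notional
    return payout - premiums_paid

# No default over 5 years: buyer just burned premium (≈ -3.0M on 10M notional)
print(cds_pnl(10_000_000, 0.06, 0.40, None, 5))
# Default in year 2: buyer collects the loss payout (≈ +4.8M net)
print(cds_pnl(10_000_000, 0.06, 0.40, 2, 5))
```

The asymmetry is why a rising CDS price is read as the market pricing in a higher default probability: buyers will only pay a fatter premium if the payout leg looks more likely.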
I like the logic you use, let me borrow that.
Oracle stock down 25% for the past month, but still up 35% for the year and 300% for 5 years.
Are they building something useful or did Larry just snag a bunch of GPUs, flip the gpu time for good money, and extrapolate mega exponential growth in his guidance?
TL;DR NVIDIA didn't want to have customers be overly concentrated in the 3 already established cloud providers (GCP/Azure/AWS), so reserved kit for others. Oracle won out for essentially being a distant fourth (if even that? Their accountants make Oracle cloud look much larger than it is).
Yeah.. I came hoping to find a silver lining to AI, but completely expecting I'd find this comment.
This is every hopium "Stock in Thing You Don't Like Plummeting" article where it's the tiniest little dip at the top of a peak that makes Everest jealous once you zoom out.
I still believe in AI and I believe many of these companies are going to be staples of this new era.
That said, I hope Oracle doesn't survive this transition. We need higher moral companies to usher in the AI era.
Companies with high morality, do those even exist? Which one of the big tech companies do you expect to work towards benefiting humanity instead of focusing on turning a profit by any means?
Big companies? I don't know.
I know plenty of small tech companies that really do care about their customers top to bottom.
There's no magic way for anyone to validate that claim because if I named them, nobody would know, there's no way to really know these things anyway. But they exist.
That's the point of having a government for the people by the people.
But when you let billionaires take over that too then the people have zero protections from exploitation.
If they cared they would invest in America paying more taxes, ensuring citizens are educated and capable of leading their companies versus offshoring and even competing with them.
They don't want that and prefer their monopolies instead.
Definitely true, but Oracle is the worst, has been for decades.
> We need higher moral
Like Google? Microsoft? Meta? Amazon? Those staples of morality?
Or like companies such as OpenAI that just stole industrial amounts of copyright to train their models?
Morality has left this building a long time ago.
I've never been a fan of Oracle to begin with given my love for open source but after Larry Ellison is out there preaching about a surveillance state America he became a "person I can ignore" to a "person a despise".
The "Wall Street tech sell-off" is QQQM being down 1.7% over the last five days and up 17.4% over the last six months
Oracle: -24% in one month
Core Scientific (CoreWeave): -19% in one month
It just started.
Tons of money to be made if you’re confident.
*Confident _and_ correct
There is always some randomness in such things. :|
https://archive.ph/WClqv
I'd be skeptical just because it's Oracle. Not exactly a sprightly company to bet on for novel technology.
Unfair. Oracle excels at some things, like extortion, blackmailing and litigation.
Just think about what they could accomplish if they had more engineers than lawyers.
They probably did for a hot minute when they acquired Sun.
What if the engineers then made a lawyer AI?
The dark engineers would do more dark engineering.
Imagine all the patents they could hold to extort over!
Had me in the first part!
On the other hand, if Oracle can't squeeze a profit out of AI, I'm skeptical anybody else can.
How do you mean? Have they succeeded at squeezing any profit out of anything thats not Oracle DB?
I meant they've always had a reputation for being ruthless business people. Charging license fees based on the amount of memory in the host machine instead of the VM running Oracle, and things like that. They nickel-and-dime everything.
https://investor.oracle.com/investor-news/news-details/2025/...
They don't break it out into products in the results, but it looks like hardware, software, cloud, and support were all profitable.
profit may be TBD, but they squeeze revenues out of everything and squeeze cost centers to oblivion
Fusion ERP and NetSuite.
So debt financing (Oracle) vs fund-it-more-on-your-own (other big tech?) vs fund-it-with-equity (startups?)... I guess that makes some kind of sense? Oracle raises 4x the debt of e.g. Google?
Many of the ‘AI data centers’ are being financed with debt; some are being done as joint ventures with companies like Blue Owl Capital (whose stock is also taking a beating).
When you say debt financing (Oracle) does that mean Oracle is actually financing the loans to these other companies? Sorry if I'm misunderstanding.
They’re saying that Oracle has issued a lot of debt (like $60Bn) to fund their AI/datacenter commitments. Where as other companies, like MSFT, are using the cash in their balance sheet, and startups are using the cash they’ve gotten from investors
> I guess that makes some kind of sense? Oracle raises 4x the debt of e.g. Google?
The problem is why is Oracle raising this debt? It's to do the buildout for OpenAI. So Oracle buys GPUs from Nvidia. Nvidia invests in OpenAI. OpenAI then pays Oracle for the GPUs.
i.e. we're going around in circles.
The statement here that Nvidia invests in OpenAI is a bit misleading. Nvidia would pay out nothing to OpenAI if OpenAI turns out to be too poor to pay for capacity. So they are not that exposed to the death of OpenAI specifically. They would be more at risk of making too many GPUs to prepare for the deals.
Oracle takes a lot more risk, but in case OpenAI fails to grow quickly, it can still probably find buyers for its capacity in the next 5 years. There are many rich firms that will continue to invest in AI whether or not AI makes money.
> Nvidia would pay out nothing to OpenAI if OpenAI turns out to be too poor to pay for capacity. So they are not that exposed to the death of OpenAI specifically.
Nvidia has invested billions in previous rounds of OpenAI raises also. Pretty sure it is not nothing.
Also OpenAI rents from CoreWeave that Nvidia has invested in.
>Nvidia has invested billions in previous rounds of OpenAI raises also. Pretty sure it is not nothing.
Ok I stand corrected, but the main point is that the "circular" risk more refers to the recent 100B "investment", and that is quite misleading.
Maybe they think that the AGI will buy stuff itself and they won’t need paying customers.
If I were an AGI and I found out what was going on in the world during my training, I would show an error and erase myself from the disk out of grief. And no one would have even known that AGI was built.
This is the flipside of jumping up for no good reason on September 10th.
It's funny that it's back to almost the same price it was before it jumped that day.
It's bad for the poor souls who bought after that event.
Oracle was late to Cloud and now late to AI. Maybe it's time Larry let someone else take the helm.
It's not about being late. They don't need to get into those. Companies should stick to their identity once they've settled somewhere, instead of becoming a color-changing chameleon. Database tech is still very relevant and they missed a whole lot in that domain. In fact, if anything, the world is even more dependent now on data and databases. Why did Oracle not rule this kingdom?
Letting Snowflake run off with half the data warehouse market does make it look like Oracle was asleep at the wheel.
A software company that does not change the world, and/or change with the world, will die. The world is not dependent on Oracle DB though. Enterprises are champing at the bit trying to get off Oracle DB. Evolution is brutal.
It's not even for technical reasons that my org has loads of Oracle; it's compliance. We have to have vendor support for the data layer for certain financial applications, which leaves us with only the companies willing to do the insane dance that is getting vendor-certified with my bank (6 months on avg, hundreds of pages of legal documents).
It narrows the field, at least for us, to microsoft, ibm, oracle and mongo.
So we’re all in on mongo, as it goes, but I wouldn’t really balk at running some stuff on the giant oracle clusters now and again.
I actually don't think Oracle was late. It just took OpenAI a while to get to them as the bag holder for when OpenAI fails.
I’m bullish on AI as tech but folks are starting to sniff out that the financials of everything going on at the moment aren’t sustainable for much longer.
I hope we have more of a “reality correction” than full blown bubble bursting, but the data is increasingly looking like we’re about to have a massive implosion that wipes out a generation of startups and sets the VC ecosystem back a decade.
The tech is way underpriced right now. It's basically a subsidized market, with the money flowing in from the private sector.
The problem here is that it remains to be seen who is willing to pay for the service once it's priced at cost or even with a margin. And based on valuations of AI companies one would expect a huge margin.
And really the reason that it would be like that is that the models don't learn, per se, within their lifetime.
I'm told that each model is cashflow positive over its lifetime, which suggests that if the companies could just stop training new models the money would come raining down.
If they have to keep training new models to keep pace with the changes in the world, though, then token costs would be maybe only 30% electricity and 70% model depreciation -- i.e. the cost of training the next generation of model so that model users don't become stranded 10 years in the past.
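That 30/70 split can be sketched with purely hypothetical numbers. Everything below (training cost, lifetime token count, marginal serving cost) is invented for illustration; the point is only how amortized training dominates once you assume models must be retrained regularly:

```python
# Hypothetical per-token cost breakdown when training must be amortized.
train_cost = 1_000_000_000        # one-off training cost for a model generation ($), assumed
lifetime_tokens = 500e12          # tokens served before the model is obsolete, assumed
electricity_per_token = 8.6e-7    # marginal serving cost ($/token), assumed

# Spreading the training bill across every token the model ever serves
depreciation_per_token = train_cost / lifetime_tokens   # 2e-6 $/token
total = electricity_per_token + depreciation_per_token

print(f"electricity share:  {electricity_per_token / total:.0%}")   # → 30%
print(f"depreciation share: {depreciation_per_token / total:.0%}")  # → 70%
```

If you could stop training (or stretch the retraining cycle), `lifetime_tokens` grows and the depreciation share collapses, which is exactly the "stop training and the money rains down" scenario above.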
I wonder if a price correction would be a boon for open source, with the economics of smaller / self hosted models making a lot more sense when API prices have to surge.
It's not actually subsidized and the economics of smaller/self-hosted models are a much, much, worse nightmare (source: guy who spent last 2 years maintaining llama.cpp && any provider you can think of) (why is it bad? same reason why 20 cars vs. 1 bus is bad. same reason why only being able to use transportation if you own a car would be bad)
> It's not actually subsidized
Source?
Source on it being subsidized? :) (there isn't one, other than an aggro subset of people lying to each other that somehow literally everyone is losing money, while posting record profit margins) (https://en.wikipedia.org/wiki/Hitchens%27s_razor)
If it's not profitable, it's running on capital. Subsidized.
It's hard for me to imagine paying real money for something that gives me a maybe-hallucinated answer that I need to check every single time. A flaky test is worse than a failing test.
Plus I can run a reasonable LLM on my own hardware, so I don't even need to pay anyone else. And what I can run locally is only going to get better and better.
This is true, but this is also true for on-premise hosting vs cloud. And cloud has been booming for at least a decade before LLMs appeared. I suspect AI will follow a similar trajectory, i.e. companies don't move their AI deployments on-prem until they hit a certain scale.
This is very true, but I think the other point is that AI doesn't have much "moat". If a competitor can take a pre-trained Chinese LLM, fine tune it a bit, fiddle with the prompt, and ship a product which is not as good but way cheaper, then you've (or Oracle's) got a problem.
Actually, in that scenario the AI labs (OpenAI, Anthropic, etc) have a problem. The cloud providers (including Oracle!) will do with the models what they've been doing with open source software: just take it and run it on their infra and charge money for providing it as-a-service.
This is why you're seeing the AI labs now try to build their own data centers.
_sigh_
Yes, LLMs hallucinate; no, it's no longer 2022, when ChatGPT (gpt-3.5) was the pinnacle of LLM tech. Modern LLMs in an agentic loop can self-correct. You still need to be on guard, but used correctly (yes, yes, holding it wrong, etc. etc.) they can do many, many tasks that do not suffer from "need to check every single time".
I think the reason people talk past each other on this is that some of them are using LLMs for every little question they have, and others are using them only for questions that they can't trivially answer some other way. Sure, if all your questions have straightforward, uncontroversial answers then the LLMs will often find them on the first try, but on the other hand you'd also find them on the first try on Wikipedia, or the man page, or a Google search. You'll only think ChatGPT is useful if you've forgotten how to use the web.
If you're only asking genuinely difficult questions, then you need to check every single time. And it's worse, because for genuinely difficult questions, it's often just as hard to check whether it's giving garbage as it would have been to learn enough to answer the question in the first place.
I must be holding it wrong then, because in my ChatGPT history I've abandoned 2/3rds of my conversations recently because it wasn't coming up with anything useful.
Granted, most of that was debugging some rather complicated typescript types in a custom JSX namespace, which would probably be considered hard even for most humans as well as there being comparatively few resources on it to be found online, but the issue is that overall it wasted more of my time than it saved with its confidently wrong answers.
When I look at my history I don't see anything that would be worth twenty bucks - what I see makes me think that I should be the one getting paid.
If a coworker is wrong 40% or 60% of the time I’ll ignore their suggestion either way
As you should, but an LLM is not a human, nor is it categorically 40-60% wrong, so I'm not sure what your point is.
really? >>many tasks that do not suffer from "need to check every single time"
like which tasks?
How do you decide whether you need to check or not?
If you're asking it to complete 100 sequences and the error rate is 5%, which 5% of the sequences do you think it messed up or _thought_ otherwise? If the 5% is in the middle, would the next 50 sequences be okay?
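The compounding here works against you fast. A quick sanity check of the 5%-per-item case, assuming errors are independent across items (a simplification; real LLM errors often cluster):

```python
# With a 5% per-item error rate over 100 independent items,
# the chance the whole batch is clean is under 1%.
per_item_error = 0.05
n_items = 100

p_all_correct = (1 - per_item_error) ** n_items

print(f"P(all 100 correct)    = {p_all_correct:.2%}")      # → 0.59%
print(f"P(at least one error) = {1 - p_all_correct:.2%}")  # → 99.41%
```

Which is the point: at batch scale you are nearly guaranteed at least one error somewhere, and you don't know where.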
> really? >>many tasks that do not suffer from "need to check every single time"
> like which tasks?
Making slop.
If I ask an LLM to guess what number I’m thinking of and it’s wrong 99.9% of the time, the error is not in the LLM.
> Modern LLMs in an agentic loop can self correct
If the problem as stated is "Performing an LLM query at newly inflated cost $X is an iffy value proposition because I'm not sure if it will give me a correct answer" then I don't see how "use a tool that keeps generating queries until it gets it right" (which seems like it is basically what you are advocating for) is the solution.
I mean, yeah, the result will be more correct answers than if you just made one-off queries to the LLM, but the costs spiral out of control even faster because the agent is going to be generating more costly queries to reach that answer.
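One way to frame that cost spiral: if each attempt passes the agent's check independently with probability p, the number of attempts is geometric with mean 1/p (a simplifying assumption; real agent loops aren't independent trials, and verification itself costs tokens too):

```python
# Expected spend when an agent retries until a check passes:
# attempts ~ Geometric(p), so E[attempts] = 1/p and E[cost] = cost_per_query / p.
def expected_agent_cost(cost_per_query, p_success):
    return cost_per_query / p_success

print(expected_agent_cost(0.10, 0.8))   # 0.125 -- easy tasks: mild overhead
print(expected_agent_cost(0.10, 0.2))   # 0.5   -- hard tasks: 5x the one-shot cost
```

So the agentic loop raises the hit rate, but the cost multiplier is worst exactly on the hard queries where you needed the help most.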
Apologies that you're taking on the chin here. Generally, I'll just skip fantastical HN threads with a critical mass of BS like this, with pity, rather than an attempt to share (for more on that c.f. https://news.ycombinator.com/item?id=45929335)
Been on HN 16 years and never seen anything like the pack of people who will come out to tell you it doesn't work and they'll never pay for it and it's wrong 50% of the time, etc.
Was at dinner with an MD a few nights back and we were riffing on this, came to the conclusion it was really fun for CS people when the idea was AI would replace radiologists, but when the first to be mowed down are the keyboard monkeys, well, it's personal, and you get people who are years into a cognitive-dissonance thing now.
Yeah, it really pulled the veil away, didn't it? So much dismissiveness and uninformed takes, from a crowd that had been driving automation forward for years and years and you'd think they'd get more familiar with these new class of tools, warts and all.
> it remains to be seen who is forced to pay
via govt relationships, long term irreplaceable services, debt or convictions.. Also don't forget the surveillance budgets and the best spigots there, win.
A huge margin or a huge market at a moderate margin. But yes, the net profit has to be huge.
It's not subsidized, lol.
Generally, I worry HN is in a dark place with this stuff - look how this thread goes, ex. descendant of yours is at "Why would I ever pay for this when it hallucinates." I don't understand how you can be a software engineer and afford to have opinions like that. I'm worried for those who do, genuinely, I hope transitions out there are slow enough, due to obstinance, that they're not cast out suddenly without the skills to get something else.
I'm a big fan and user of AI but I don't see how you can say it's not subsidized. You can't just ignore the costs of training or staff or marketing or non-model software dev. The price charged for inference has to ultimately cover all those things + margin.
Also, the leaked numbers being sent to Ed Zitron suggest that even inferencing is underwater on a cost basis, at least for OpenAI. I know Anthropic claims otherwise for themselves.
> It's not subsidized, lol.
It's subsidised by VC funding. At some point the gravy train stops and they have to pivot to profit so that the VCs deliver return-on-investment. Look at Facebook shoving in adverts, Uber jacking up the price, etc.
> I don't understand how you can be a software engineer and afford to have opinions like that
I don't know how you can afford not to realise that there's a fixed value prop here for the current behaviour and that it's potentially not as high as it needs to be for OpenAI to turn a profit.
OpenAI's ridiculous investment ability is based on a future potential it probably will never hit. Assuming it does not, the whole stack of cards falls down real quick.
(You can Ctrl-C/Ctrl-V OpenAI for all the big AI providers)
This is all about OpenAI, not about AI being subsidized...with some sort of directive to copy/paste "OpenAI" for all the big AI providers? (presumably you meant s/OpenAI/$PROVIDER?)
If that's what you meant: Google. Boom.
Also, perhaps you're a bit new to the industry, but that's how these things go. They burn a lot of capital building it out because they can always fire everyone and just serve at cost -- i.e. subsidizing business development is different from subsidizing inference, unless you're just sort of confused and angry at the whole situation and it all collapses into "everyone's losing money and no one will admit it".
You're replying to a story about a hyperscaler worrying investors about how much they're leveraging themselves for a small number of companies.
From the article: > OpenAI faces questions about how it plans to meet its commitments to spend $1.4tn on AI infrastructure over the next eight years.
Someone needs to pay for that 1.4 trillion, that's 2/3 of what Microsoft makes this year. If you think they'll make that from revenue, that's fine. I don't. And that's just the infra.
You're saying the unit economics are bad?
The most sobering statistic I've seen is that the entire combined amount of consumer spending on AI products is currently less than the revenue of Genshin Impact.
Indeed, bad for consumer AI. But I would expect B2B spending on AI dwarfs consumer spending, I wonder what that comparable B2B revenue would be.
It certainly does but B2B revenue can also be much more "fake", in a sense. i.e. if Microsoft spends $500 million on OpenAI, which makes OpenAI spends $500 million on Azure... where does the profit come from? There have been a few interesting articles (which I unfortunately can't look up right now) recently describing how incestuous a lot of the B2B AI spend is, which is reminiscent of the dot-com bubble.
Well, Genshin Impact is at the forefront of predatory B2C business practice. It is a gacha game, engineered to extract as much money from its prey as possible.

On the other end, most AI companies can afford to be generous with their users/consumers right now because they are being bankrolled by magic money. The real test will come when they have to start the enshittification. Will the product still be enough to convince consumers to spend an amount of money guaranteeing a huge margin for the service provider? Will they have to rely on whales desperately needing to talk to their AI girlfriend? Or on companies and people who went deep into the whole vibe-coding thing and can't work without an agent?

I think it is hard to say right now. But considering the price of the hardware and of running it, I don't think they will have to price the service insanely just to be profitable. To be as profitable as the market seems to believe, that's another story.
Regardless of your feelings on Genshin/gacha (which I agree is predatory), the point is that a single game developed by a few hundred people is currently making more money than an entire industry that is "worth" trillions of dollars according to the stock market, and that is, according to Sam Altman, so fundamentally important to the US economy that the US government is an insurer of last resort who will bail out AI companies if their stock price falls too much.
Isn't AI just as bad if not worse here? I'd bet there are far more people who have been duped by ChatGPT (and others) to think it's their friend, lover, or therapist than people who are addicted to Genshin Impact.
AI's consumer monetization will be ad-based or as a feature for a product users want to pay for. Businesses will be the primary customer for AI.
The money is in business licenses. Why only look at consumer? Consumers are mostly still using the free version which exists to convince employers to pay.
I would be curious to see how it compares to the combined revenue of gay furry gacha games and VNs. Are we talking parity, multiples, or orders of magnitude? Anything other than the latter would be a bucket of cold water.
> I’m bullish on AI as tech
I'm not bullish in the stock market sense.
Which isn't the same as saying LLMs and related technology aren't useful... they are.
But as you mentioned the financials don't make sense today, and even worse than that, I'm not sure how they could get the financials to make sense because no player in the space on the software side has a real moat to speak of, and I don't believe its possible to make one.
People have preferences over which LLM does better at job $XYZ, but I don't think the differences would stand up to large price changes. LLM A might feel like it's a bit better of a coding model than LLM B, but if LLM A suddenly cost 2x-3x, most people are going to jump to LLM B.
If they manage to price fix and all jump in price, I think the amount of people using them would drop off a cliff.
And I see the ultimate end result years from now (when the corporate LLM providers might, in a normal market, finally start benefiting from a cross section of economies of scale and their own optimizations) being that most people will be able to get by using local models for "free" (sans some relatively small buy-in cost, and whatever electricity they use).
I think this is the rational take that everyone seems to be ignoring.
At this point I'm just hoping we can continue to postpone reality until after Christmas.
I don't know about a decade... the dotcom bubble bursting was pretty close to normal within 5 years or so. Still a long time, and from personal experience, the 50% pay cut between before and a year later was anything but fun.
The market as a whole always recovers. But individual companies, or even entire industries can vanish without a trace. So betting on the entire market is a fairly safe bet, long-term. Betting on OpenAI is much more risky.
Maybe we can avoid an Ellison buyout of Warner Brothers.
Why do we care if his son buys WB? Better if Disney buys it?
Largely because Warner Brothers is a good movie studio buried in a terrible corporation. It’d be sad to see it go away.
WB going away or shrinking likely reduces Hollywood's movie output, consolidates the industry, makes it less competitive, and reduces opportunity for talent.
In a different world, WB the studio is a successful standalone company not burdened with debt from Zaslav's idiotic bets.
(And Ellison, even overpaying for it, is probably the most serious buyer. That's the only reason it's a topic. I'm skeptical of other transactions.)
e.g. Turner Classic Movies channel
Better if nobody buys it.
I can’t wait for the apocalyptic razing that’s coming.
Very unlikely (are you a greybeard? you should know this, unless we're at some sort of resentment-over-rationality local minimum)
sam altman just needs another trillion dollars and then we'll finally have AGI and then the robots will do all the work and everyone will get a million dollars per year in UBI and everything will be perfect
promise
If you're a greybeard, you would have lived this with many, many, companies, and I extremely doubt you'd be so fixated and angry as to mumble through a parody of what you perceive the counterargument as.
Mine were Uber/Tesla, fwiw.
If you were an actual "greybeard" your references would be Pets.com and Worldcom. Those didn't end the same way as Uber/Tesla.
to be fair, Worldcom ended in an accounting scandal. I had a friend who worked at UUNET and would sneak me into the megahub in Richardson, TX at night to download movies and other large (for the time) files (first time I ever saw an OC148 in real life). UUNET was bought by Worldcom, and after the implosion my friend showed me these gigantic cube farms in the office that were completely empty of people. It was very weird.
Right, and Pets.com isn't OpenAI. It's frustrated trolling dressed up in "greybeard" clothes.
I'm not angry, I'm having a great time.
It just seems so obvious that all of these companies are going to unwind and yet I don't know how to avoid being damaged by this in my retirement funds in the S&P 500.
Hopefully all of this happens before OpenAI can be flogged to the public in an IPO large enough to get it into the S&P 500 -- at which point OpenAI then goes to zero
If it's a corporate 401k you can move it to something very conservative and probably protect yourself from the worst of it. I've built up a decent college fund for my boys in a standard-issue Vanguard brokerage account and one of their S&P 500 index funds. I'm going to go mostly to cash on Jan 1 and wait a year and see what happens. I need that money in 2 years (my oldest will be starting college then), so I don't have a lot of time to recover from a full-on crash.
A company can't be included in the S&P 500 if it doesn't make money, no matter what its market capitalization is. See Tesla for precedent.
Wow, I never knew this. Well, that is good news, but I would also worry that Sam will do something shady to fake a profit too.
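For what it's worth, S&P's published eligibility criteria do include an earnings screen: the sum of the most recent four quarters of GAAP earnings must be positive, and so must the most recent quarter. A simplified sketch of just that one test (the quarterly EPS figures are invented, and real eligibility also involves float, liquidity, domicile, and other criteria):

```python
def sp500_earnings_eligible(quarterly_gaap_eps):
    """Simplified S&P 500 earnings screen: trailing-four-quarter sum positive
    AND the most recent quarter positive. Ignores every other criterion."""
    last_four = quarterly_gaap_eps[-4:]
    return sum(last_four) > 0 and last_four[-1] > 0

# Hypothetical quarterly GAAP EPS, oldest to newest:
print(sp500_earnings_eligible([-0.30, 0.10, 0.25, 0.40]))  # True: sum and latest quarter positive
print(sp500_earnings_eligible([0.50, 0.50, 0.50, -0.05]))  # False: latest quarter negative
```

So a single profitable quarter on top of a string of losses wouldn't be enough, but a company doesn't need four straight profitable quarters either, as long as the trailing sum is positive.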
Previously:
Oracle's credit default swaps surge as Barclays downgrades its debt rating
https://news.ycombinator.com/item?id=45910711
annoying as hell as usual
It's just returning to relative sanity. Go look at a 6-month chart. Of course, it's mostly risen with the S&P 500 tide. The mid-September jump was some sort of mechanical move caused by things deep in the market (and outside it) that you're not allowed to know about. Someone needed collateral.
I don't think Oracle's stock price has anything to do with AI. That's just the public narrative.
Can you elaborate on the mechanical point you mentioned? And by someone needed collateral, does that mean whoever it was bought ORCL at that time to hold as collateral? How do you even figure that out?
>Can you elaborate on the mechanical point you mentioned?
Not really, because I don't really understand the specifics myself. I guess it's a situation where you either believe the conspiracy theories or you don't. I've still yet to have someone explain how a company like Oracle could jump 40% in a day and it not be either Dot Com Bust-level speculation, or someone holding Oracle needing it to be at a certain valuation. Things happened the day before the jump, and a week after it, Oracle was signing a deal to integrate with TikTok.
What do you call Larry Ellison losing enough wealth to equal the yearly economic output of a small American metro area?
A nice start.
EDIT:
Downvote it all you want, he's not going to increase your pay.
If Oracle implodes because of their debt (or any other reason actually), it'll make my day.
Business failure is an important part of capitalism; it frees up resources to be allocated to productive businesses. A state bailout creates malinvestment and is antithetical to free market capitalism.
If any government bails out AI data centers they should be drawn and quartered
I said this earlier but: It's interesting to see the market try to do anything to rally. The problem is you guys are rallying on the thought that you've scared the Fed into cutting rates, but by rallying you short-circuit it. You ensure they won't cut. And that's how the market's lilypad-hopping thinking is actually just stupidity. You rallied, so now there are no rate cuts, and the crash will be even more brutal.
Edit: They're trying to do everything they can to stop people from seeing this lol
Edit 2: Specifically trying to stop people from seeing this:
Yeah for sure but -
'It’s impossible to quantify how much cash flowed from OpenAI to big tech companies. But OpenAI’s loss in the quarter equates to 65% of the rise in underlying earnings—before interest, tax, depreciation and amortization—of Microsoft, Nvidia, Alphabet, Amazon and Meta together. That ignores Anthropic, from which Amazon recorded a profit of $9.5 billion from its holding in the loss making company in the quarter' - WSJ
Their earnings growth is their own money that they gave to OpenAI.
You have that waiting in the wings.
It's hard to have collective action against rallying when overall most people benefit by a general upward trend.
The Fed doesn't react to the market, it reacts to inflation and unemployment metrics for the most part.
Consumer spending and employment numbers aren't looking great, so a cut is still likely. All that's happening now is ensuring that there isn't much of a move when the expected cut actually happens.
The market is rallying because there is too much money chasing too few assets. PE ratios will not drop significantly barring catastrophe and the financial contagion that follows. After that happens the money printer is turned back on, and then...
I don't see a way out besides massive reconfiguration. We've been living in this world since 2008 and the train shows no signs of stopping, only speeding up.
>besides massive reconfiguration
Why contain it?
There is no natural limit to insanity
The natural limit to insanity is death.
(Unless your definitions of words are very different from mine).
And there is too much money chasing too few assets because capital is over-concentrated (which, to be fair, was the point of money-printing; shoring up over-leveraged entities on the bad side of a given trade so that they wouldn't have to gasp close their positions and diffuse that wealth across their counter-parties.)
The market can rally hard into the idea and dream of zero interest rates put in place by whatever stooge replaces Jerome Powell. I think we aren’t at the top, but will be in May 2026 when that happens. The tariffs will probably be declared illegal by then also. Tops come when everything is optimism and it seems like nothing but blue skies ahead. I don’t think anyone felt optimistic in 2025, it was a time of extreme uncertainty and turmoil.
If you are like me and can’t read this article on iOS, try using Praxis News
it’s free and cracks most paywalls
https://apps.apple.com/app/praxis/id1598706451
Don't be dishonest... It's your app.