This is remarkable if you consider how much it must wound Apple's pride to make this deal with their main rival in the smartphone software space, especially after all the fuss they made about "Apple Intelligence". It's a tacit admission that Google is just better at this kind of thing.
I’m not sure. It could be a way to save a ton of money. Look at the investments non-Apple tech companies are making in data centers & compute.
Maybe paying Google a billion a year is still a lot cheaper?
Apple famously tries to focus on only a few things.
Still, they will continue working on their own LLM and plug it in when ready.
Edit: compare to another comment about Wang-units of currency
Well, they would still be running the Google models in Apple DCs. I doubt this is a very cost-efficient deal for them.
> wound Apple's pride
Do businesses really "think" in as personified a manner as this? Isn't it just whatever the accounting resolves to as the optimal path?
The C-levels leading the companies might, and the tech CEOs in question have been at the helm long enough to build up some emotional attachment.
> that Google is just better at this kind of thing
That might be true, but Siri sucks so bad it doesn't matter. It uses GPT, yet the quality is at the level of OSS models.
This isn't good news.
It means that Apple's huge, expensive AI team has basically failed.
And it presumably means that Apple is willing to accept Google's practices for ML model training and use.
If true this is the deal of the century. Apple pay 1/14th of a Wang per year for a top tier model whereas Meta burn multiple Wangs a year in salary alone and get garbage.
For those equally confused: Meta bought 49% of Scale AI for 14.3 billion, purportedly to largely bring Alexandr Wang on board.
When is this not true?
It is cheaper to buy GPUs than to build up the capability to develop GPUs yourself.
Coincidentally, Google pays Apple over a billion dollars a year (est. at $1.5B) to be the default search engine in iOS. The story could just as well be re-titled.
Google closes their trade deficit to half a billion dollars per year.
It’s actually much more than that: $20 billion per year.
Mind blowing they couldn't get this to work. It's struck me lately that the models don't seem to matter anymore, they're all equally good.
The UX and integration with regular phone features are what make the tool shine, and by now there should be plenty of open-source models and know-how for Apple to create its own.
What is Google offering that Apple can't figure out on their own?
Maybe people don't use personal-assistant AI enough to justify the investment? My phone has probably 6 or 7 AI tools with talking features that I never explore.
I don't know; Gemini 2.5 has been the only model that hasn't consistently made fundamental mistakes with my project as I've worked on it over the last year. Claude 3.7, 4.0, and 4.5 are not nearly as good. I gave up on ChatGPT a couple of years ago, so I have no idea how its models perform now; they were bad when I quit using it.
I use all of them about equally, and I don't really want to argue the point; I've had this conversation with friends, and it really feels like it's becoming more about brand affiliation and preference. At the end of the day, they're random text generators: asking the same question with different seeds gives different results, and they're all mostly good.
Do you find that Gemini results are slightly different when you ask the same question multiple times? I found it to have the least consistently reproducible results compared to others I was trying to use.
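For what it's worth, that run-to-run variability is just sampling. Here's a minimal, self-contained sketch of why the same prompt diverges under different seeds; the toy token distribution is invented for illustration and no real model or API is involved:

    import kotlin.random.Random

    // Toy next-token distribution standing in for a model's output layer.
    val probs = linkedMapOf("good" to 0.5, "great" to 0.3, "fine" to 0.2)

    // Draw one token by inverse-CDF sampling from the distribution.
    fun sampleToken(rng: Random): String {
        val r = rng.nextDouble()
        var cum = 0.0
        for ((token, p) in probs) {
            cum += p
            if (r < cum) return token
        }
        return probs.keys.last()
    }

    fun main() {
        for (seed in 1L..3L) {
            val rng = Random(seed)  // a different seed yields a different draw sequence
            println("seed=$seed -> " + (1..5).joinToString(" ") { sampleToken(rng) })
        }
        // Greedy decoding (always take the argmax, i.e. temperature 0) would
        // print "good good good good good" every time.
    }

Hosted APIs expose the same knobs, usually as a temperature setting and sometimes an explicit seed; at temperature 0 the output collapses to the argmax and becomes largely reproducible, modulo backend nondeterminism.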
Today I asked my HomePod:
> "Hey Siri, when's the next Formula 1 race in Montreal?"
and she responded with the same infuriating answer I typically get:
> "Hmm, I found some interesting results on the web, I can show them to you if you ask again from your iPhone"
I don't care what pride Apple has to swallow, or if they have to lay off 10,000 people.
I just want my device ecosystem to be able to do what its competitors have been able to do for a decade, or what I've been able to build myself for the last 3 years: a working and useful voice assistant.
At this point I'm convinced Tim Cook could sit at a terminal himself and ship a better version of what Apple has in an afternoon.
... and has widely been regarded as a bad move.
The question is whether Apple will buy TPUs to run it too.
$1B for the software and $1B for the hardware, every few years.
I really hope Apple is working hard on improving on-device models for their use case so they can get out of this.
When I first read the headline, I thought they’d licensed a customized Gemma 3n for an on-device model.
Why Gemini? Is it just because of the existing closeness between the two companies, or is there a technical reason? I like Gemini the least because each query returns slightly different, hard-to-reproduce results... I find I like LibreChat the best and then just connect it to all the other LLMs (ChatGPT, Anthropic's Claude, etc.) from there.
Much like they pay the leaders in other specialties instead of becoming, e.g., an assembly company (Foxconn) or a search company (Google Search), they are not going to try to be a leader in large language models, at least.
Am I interpreting that correctly?
I can understand that to a degree but that means the future for Apple is as a technology integrator, not a fundamental technology company.
As I type that out I guess I’m realizing that has always been true.
Source: https://www.bloomberg.com/news/articles/2025-11-05/apple-pla... (https://news.ycombinator.com/item?id=45826664)
On-device models, please. My computer should work for me.
So now it does not matter which platform you choose for your smartphone: you cannot escape Google AI surveillance. Well, you can shut it off on the iPhone, I guess, but that means no more privacy-focused Apple Intelligence.
Next to all the money they poured into Liquid glAss, this will be the worst investment Apple has ever made.
OP's 9to5mac article states:
> Also under the agreement, Google's model will reportedly run on Apple's own servers, which in practice means that no user data will be shared with Google. Instead, they won't leave Apple's Private Cloud Compute structure.
Bloomberg states:
> The model will run on Apple's own Private Cloud Compute servers, ensuring that user data remains walled off from Google's infrastructure.
This assumes they'll make the data available to Google. With all the secure "Private Cloud Compute" stuff they advertised, there's a good chance it will not be shared.
Giving credit where it's due, I think Apple's Private Cloud Compute is really interesting architecture-wise. If I remember correctly, it includes ARM CPUs with a special realm capability to prevent certain types of attacks and minimize the amount of trust required.
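If the parent's recollection is right, the trust-minimizing piece is a remote-attestation pattern: the client releases data only to a server that proves, via a hardware-signed measurement, that it is running known-good software. A conceptual sketch follows; every name in it is hypothetical, and this is not Apple's actual PCC protocol:

    // Hypothetical attestation-gated request, sketching the general pattern only.
    data class Attestation(val measurement: String, val signature: ByteArray)

    fun trusted(att: Attestation, allowlist: Set<String>,
                verifySig: (Attestation) -> Boolean): Boolean =
        verifySig(att) && att.measurement in allowlist  // signed AND known-good build

    fun sendQuery(query: String, att: Attestation, allowlist: Set<String>,
                  verifySig: (Attestation) -> Boolean) {
        // Refuse to release user data unless the node proves its software stack.
        require(trusted(att, allowlist, verifySig)) { "untrusted node, not sending" }
        println("sending query (${query.length} chars) to attested node")
    }

    fun main() {
        val allowlist = setOf("sha256:known-good-build")
        val att = Attestation("sha256:known-good-build", ByteArray(0))
        sendQuery("when's the next F1 race?", att, allowlist) { true }  // stub verifier
    }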
If iOS opened up the ability to implement your own assistant like VoiceInteractionService on Android, you wouldn't have to worry about it. On Android, if you don't like Google providing the service, you can switch to OpenAI, Alexa, or even your own service.
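For reference, that Android extension point is small. A minimal Kotlin sketch of a custom assistant follows; the android.service.voice class names are real, but the manifest wiring (a service declared with the BIND_VOICE_INTERACTION permission) is omitted, and the backend hookup is left hypothetical:

    import android.content.Context
    import android.os.Bundle
    import android.service.voice.VoiceInteractionService
    import android.service.voice.VoiceInteractionSession
    import android.service.voice.VoiceInteractionSessionService

    // The service the OS binds to when this app is the user's chosen assistant.
    class MyAssistantService : VoiceInteractionService()

    // Spawns a session each time the user invokes the assistant.
    class MyAssistantSessions : VoiceInteractionSessionService() {
        override fun onNewSession(args: Bundle?): VoiceInteractionSession =
            MyAssistantSession(this)
    }

    class MyAssistantSession(context: Context) : VoiceInteractionSession(context) {
        override fun onShow(args: Bundle?, showFlags: Int) {
            super.onShow(args, showFlags)
            // Hand the query to any backend: OpenAI, a local model, or your own
            // server. The user, not the OS vendor, picks the provider.
        }
    }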
GrapheneOS.
Patiently waiting to see which Snapdragon will be supported. Hopefully something smallish.