Centralized proprietary software on proprietary platforms can always be opted into a special update that makes all the private keys deterministic, rendering end-to-end encryption useless against anyone with knowledge of that targeted backdoor.
Only FOSS can deliver verifiable E2EE, and all centralized, proprietary solutions like Zoom, WhatsApp, Instagram, etc. should end the security theater.
I applaud Meta for at least being honest about one product.
While I agree reproducible builds are a huge part of the answer, if you get your builds from Google Play or the App Store you have no idea if anyone has reproduced the particular build that was served to your device.
A solution to this would be independent reproducible builds like F-Droid does, but Moxie rejected this, citing that it would cause them to lose control of the platform and the install metrics Google and Apple provide. I always thought that was a weird position for a privacy tool.
Instagram wasn't set up this way. If you install it on a new phone or open it in-browser, you aren't expected to give it a recovery key to get your DMs back. They did add E2EE to FB Messenger, and it was very clunky, besides not being secure at all (a 6-digit numeric PIN).
I'm not sure if this meets the bar for substantive and thoughtful discussion, but this kind of corporate cowardice, enforced by unelected bureaucrats standing at the bully pulpit is only going to get worse as the noose tightens on the open web.
The combination of hardware attestation and walled garden "app stores" is the end goal of most policymakers in this area, and it happens to suit the monopolists in Google and Apple and Facebook down to the ground.
Perhaps a timely reminder that things do not always get better over time, and that we may have lived past the high point of secure communications in our lifetime.
These decisions are made whilst America is falling to fascism. Meta may not intend for the abolition of E2E encryption to make fascist crackdown on free speech easier, but that is the reality of what abolishing E2E encryption does.
The "walled garden" isn't even that big a concern anymore. The Gestapo reading every single digital communication you have, and showing up to harass, threaten, or simply disappear you into a camp if the infraction is severe enough: that is what's at stake here.
This is getting pushed in the EU too. The crucial thing to understand about where the road is headed: the goal inverts who does the illicit content scanning. The idea is that the government gives a list of illegal activity/speech to the platforms (such as anti-ICE activism or speaking in support of Palestinians), and then the platforms scan (presumably with LLMs) all public and private messages, delete matches, and forward them to the user's federal and local law enforcement. This is why E2EE is under attack everywhere: it makes that impossible.
Do people expect that Instagram can't read their Instagram private messages? I don't think people expect that. And E2EE is not nearly as cheap as the HN crowd likes to pretend—how do those devices get those keys if not through a central service? Especially if one of them is a web browser?
We're like 20 years past that, at the very least 10 years with Snowden.
The people have spoken, caring about your communications not being spied on puts you in the minority. I mean like, just look at social media. You have tons of people who not only don't care about being spied on, they actively document everything for more views.
The answer to most every question you're asking is just "public key cryptography." It's kind of disheartening to me that such basic 1990s tech, as implemented by Phil Zimmermann, is now obscure enough to merit questions like this.
Both parties exchange public keys through the central service. Only the possessor of the respective private key (kept on-device, ideally in a Secure Enclave) can decrypt messages encrypted to the public key. The process can also work in reverse, encrypting with the private key so that anyone holding the public key can decrypt it and thereby verify the sender: this is called "signing."
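To make that concrete, here's a toy textbook-RSA sketch with the classic tiny demo numbers (p=61, q=53). These parameters are purely illustrative; real systems use 2048+ bit keys with proper padding (OAEP/PSS) rather than raw modular exponentiation:

```python
# Toy textbook RSA -- illustration only; real systems use large keys + padding.
p, q = 61, 53
n = p * q                      # public modulus
phi = (p - 1) * (q - 1)
e = 17                         # public exponent
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

m = 65                         # "message" encoded as a number < n

# Encryption: anyone with the public key (n, e) can encrypt...
c = pow(m, e, n)
# ...but only the private-key holder can decrypt.
assert pow(c, d, n) == m

# Signing is the reverse: only the private-key holder can produce s,
# and anyone with the public key can check it.
s = pow(m, d, n)
assert pow(s, e, n) == m
```

The asymmetry is the whole point: the central service can relay `(n, e)` and ciphertexts all day without ever learning `d`.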
No, it's not at all this simple. This is why so many "e2ee" apps like Telegram are bogus, they ended up prioritizing UX over security because there are many places where you can't pick both.
Webs of trust based on OOB key verification and signing, or centralized trust authorities are the two primary models I’m aware of.
I’ve always been enamored of the idea of DNS as a back end protocol to enable the former largely decentralized solution.
Bob looks up Alice and receives her key from Alice’s namespace within the DNS hierarchy, along with her trust claims. David then looks up Alice’s key within her namespace, sees a reference to endorsement by Bob, and can validate this by querying Bob’s namespace. David can also issue non-authoritative queries about Alice’s key to Bob’s DNS servers, ensuring that there is no mismatch between the query response received by Bob and the one received by David.
If Mallory manages to compromise Alice’s DNS, but not Bob’s, the result is a mismatch in query responses that both Bob and David can thus detect.
At scale, a MITM compromising a system like this would be difficult without compromise of a large number of independent namespaces, increasing the likelihood of detection via the non-authoritative queries.
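A hedged sketch of the cross-query idea above, modeling each party's "namespace" as a plain dict (all names and keys here are hypothetical placeholders, not a real DNS API):

```python
# Simulate the cross-query mismatch detection described above.
alice_ns = {"alice": "KEY_A"}   # Alice's authoritative namespace
bob_ns = {"alice": "KEY_A"}     # Bob's cached/endorsed copy of Alice's key

def lookup(ns, name):
    """Stand-in for a DNS query against a namespace."""
    return ns.get(name)

# David queries Alice's namespace authoritatively...
key_from_alice = lookup(alice_ns, "alice")
# ...and issues a non-authoritative query to Bob's namespace.
key_from_bob = lookup(bob_ns, "alice")

# Matching responses: no tampering detected.
assert key_from_alice == key_from_bob

# If Mallory compromises only Alice's DNS, the mismatch is detectable.
alice_ns["alice"] = "KEY_MALLORY"
assert lookup(alice_ns, "alice") != lookup(bob_ns, "alice")
```

The scaling argument follows directly: Mallory has to rewrite every endorsing namespace consistently, and each extra non-authoritative query is another chance to catch a mismatch.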
The missing component in this arrangement is cryptographic security of DNS itself, which I cynically suspect is why the DNSSEC working group was composed of the usual suspects and eventually produced a protocol without query encryption. It could still be layered on via a protocol extension, however.
And how does one verify that the public key received belongs to the intended party, rather than a mitm?
If the answer is blind trust in a third party that runs the messaging service then I suspect that you can guess what the people asking those questions are really asking.
Diffie-Hellman-Merkle key exchange is vulnerable to attacker-in-the-middle attacks.
Eve could just do key negotiation with Alice and separately do key negotiation with Bob. You have to use a slightly more complicated cryptographic protocol to avoid this issue.
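Here's a sketch of exactly that attack, using unauthenticated Diffie-Hellman with toy parameters (p=23, g=5). Real deployments use large primes or elliptic curves, and the fix is authenticating the exchange (signatures, out-of-band fingerprints), not bigger numbers:

```python
# Unauthenticated Diffie-Hellman with toy parameters, showing the MITM.
p, g = 23, 5

a, b, e = 6, 15, 9          # Alice's, Bob's, and Eve's secret exponents
A, B, E = pow(g, a, p), pow(g, b, p), pow(g, e, p)

# Honest exchange: both sides derive the same shared secret.
assert pow(B, a, p) == pow(A, b, p)

# MITM: Eve intercepts each public value and substitutes her own, E.
# Alice unknowingly negotiates a key with Eve...
key_alice_eve = pow(E, a, p)
assert key_alice_eve == pow(A, e, p)   # Eve derives the same key
# ...and Eve separately negotiates a different key with Bob.
key_bob_eve = pow(E, b, p)
assert key_bob_eve == pow(B, e, p)

# Eve now decrypts and re-encrypts traffic in both directions; neither
# Alice nor Bob sees anything wrong without authentication.
```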
How would the keys get stored in the user's private browsing window? Do they lose all chat history when they log in on a private browsing window and then close it?
I don't know the technical details of that for sure, but I think the answer is that keys and chat history are stored on-device only; for example you lose your WhatsApp history if you don't restore a backup when moving to a new phone.
If a messaging app is showing you message history in a private browsing window then perhaps the encryption key for that history is derived from your password or something like that; that can be done locally so that all the server ever sees is encrypted data.
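A sketch of that password-derivation idea, so the server only ever sees ciphertext. The parameters are illustrative; real apps use a vetted KDF (Argon2, scrypt, or PBKDF2 with a high iteration count) plus a per-user random salt stored alongside the encrypted blob:

```python
# Derive an encryption key locally from a password, entirely client-side.
import hashlib
import os

password = b"correct horse battery staple"
salt = os.urandom(16)               # random; stored with the ciphertext

key = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
assert len(key) == 32               # 256-bit key, never sent to the server

# The same password + salt always yields the same key, so the client can
# re-derive it after logging in from a fresh (private) browser session.
key2 = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)
assert key == key2
```

The trade-off is exactly the one raised above: lose the password (or the salt) and the history is gone for good.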
What if you log into the app and then log out of the app and then log into the app again? Should you be able to see your messages?
E2EE is a fail-secure design. In case of any doubt it deletes your private messages. When applied to this case I don't think the downside of constantly losing all your messages outweighs the upside of Facebook pretending they don't have a copy of all of them.
Are you asking for technical details about E2EE in messaging apps, or simply making the point that you don't like it? If you don't like it, then fine, you do you, however I would point out that we all accept some inconvenience in our lives as a trade off for improved security; the lock on my front door is inconvenient but I'd rather have it than not.
As to whether or not Meta have been lying about it: that would be on-brand for them, but then what are they turning off, if so? Or maybe the whole thing is theatre and I should just disconnect from the internet altogether? I don't see the value in speculating about that.
> And how does one verify that the public key received belongs to the intended party, rather than a mitm?
Fingerprints. Again, this is like Crypto 101. Not saying that as a personal attack of any kind, I just remain incredulous that what used to be entry level knowledge in “our thing” has evidently become so obscure.
You shouldn't be talking down like this, you're wrong about it. Alice and Bob need to exchange keys beforehand in some trusted out-of-band way. There's no protocol that solves this if Eve can be in the middle. I'm not sure what you mean by fingerprints, but if you describe a protocol, I can describe the mitm attack.
Bob and Alice are setting up their e2e channel, and because they have some extra level of concern about snooping, they telephone each other and read off some form of hash of the public key to each other.
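That spoken-fingerprint step can be sketched like this: hash the raw public key bytes and format the digest into short groups that are easy to read aloud and compare (the key bytes below are hypothetical placeholders):

```python
# Compute a human-comparable fingerprint of a public key.
import hashlib

def fingerprint(pubkey_bytes: bytes) -> str:
    digest = hashlib.sha256(pubkey_bytes).hexdigest()
    # Group into 4-char chunks ("89ab cdef ...") for reading over the phone.
    return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

alice_key = b"\x04" + b"\x01" * 64      # placeholder uncompressed EC point
tampered = b"\x04" + b"\x02" * 64       # what a MITM might substitute

# SHA-256 is 64 hex chars, so the spoken fingerprint is 16 groups.
assert len(fingerprint(alice_key).split()) == 16

# If Eve swapped the key in transit, the spoken fingerprints won't match.
assert fingerprint(alice_key) != fingerprint(tampered)
```

This only helps if the phone call itself isn't MITM'd, which is the point made elsewhere in the thread.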
A more complex variant would be something like what PGP implemented, where Bob and Alice could both sign each other's keys after this exchange, ensuring that someone who hadn't met Bob but did trust Alice could inherit trust in Bob's Alice-signed key.
You’ve stated unequivocally that I’m wrong, so now, please show your homework.
This is a very frustrating exchange. You guys are saying the same thing. For key exchange to be secure against an attacker who can MITM the channel you're securing, either the public keys or at least their respective fingerprints need to be exchanged out of band, over some channel the same attacker cannot also MITM. For a sophisticated enough targeted attack, a telephone isn't that.
The way military radios handle this is hardware key loaders that have seeds pre-synced in factory, in person. Every day in the field, a unit comms person takes the key loader and loads new keys onto everyone's radios. The key loaders themselves are reseeded and resynced during maintenance periods between campaigns or exercises. They're physically accounted for on every movement and twice a day when not moving, and if they ever can't be found, all messages from any device they loaded keys onto are considered compromised.
Anyone trying to overthrow a government or run a criminal empire or whatever is going to have to take measures at least this drastic. Or quit LARPing and accept that nation state attackers can probably slide into your Instagram DMs, which are probably being sent to people you don't know, and if they're hot and actually answering you, 90% chance they're a honeypot anyway.
Web of trust or centralized trust are the main answers here.
Compromise of the secret key is a whole other issue - revocation.
MITM of a key can be solved pretty well via web of trust techniques.
Apologies if the dialog is frustrating to read! As a “recovering cypherpunk”, I find these sorts of discussions animating, as long as they’re polite and technically focused! Much love!
Throwing this on the "brainstorm if we had an ideal legislative world" pile: Stealing a user's private key should be a felony, even if it hasn't (yet) been abused for anything.
The tricky part is keeping it from being "permitted" by a crappy contract of adhesion. Banning it entirely would make it very difficult to buy/sell backup services...
I’m not sure why you think so? If the service provider claims E2EE but intentionally provides a defective version of it, that’s a pretty clear-cut violation of the federal statute, which, based on its language, afaik contains no exception for defects cajoled into existence by government pressure short of a clear statute mandating them, which does not exist afaik.
How is it a worse experience? It's ridiculously simple: The app sends a public key to the person you're talking to. The end user doesn't even need to notice it. What am I missing here?
> Our messaging system has long been designed to balance user privacy with the ability to respond to scams, harassment, and other safety concerns when users report them or when required by law
TikTok about why they won’t put e2e for private messages.
I guess it’s reasonable to give up privacy to save the children; TikTok cares so much about our kids’ safety and wellbeing!
HN isn't a place for serious thought or internal critique. No one (all bots at this point?) will critically engage past the most surface-level reddit-tier argument.
I'm sorry, but encryption like HTTPS has been around since 1995. Every software engineer knows they should use encryption to protect PII.
What am I supposed to be doing on behalf of meta? To prove I am not a bot with a smooth brain? Present their argument? Their argument is they want to sell as much data as possible about adults and children to the department of war. They want to double dip and use all user data as a means to produce advertisements, algorithms, and user experiences that negatively impact children's health so they can profit.
If you want me to argue for meta on their behalf to help them find reasons to forward their goals of exploiting their user base, I won't. The exercise has negative value.
No, because it's terrible. There's no need to break encryption to allow you to report a user. You'd just report via a copy of an excerpt of the conversation and leave the rest of your communication private. If the user can't tamper with the extraction of that excerpt, you can trust it is correct. You could even extract hashes from both the reporting party and the reported party and compare them with zero knowledge of the actual conversation.
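A hedged sketch of that hash-comparison idea: both clients keep per-message hashes, and when one party reports an excerpt, the server checks the other client's hashes for the same messages without ever seeing the rest of the conversation. (Real deployed schemes like Facebook's "message franking" use MACs committed at send time; this dict-and-hash version is a simplified illustration with made-up message content.)

```python
# Verify a reported excerpt matches both parties' records, via hashes only.
import hashlib

def msg_hash(sender: str, seq: int, plaintext: str) -> str:
    """Bind a message hash to its sender and sequence number."""
    data = f"{sender}|{seq}|{plaintext}".encode()
    return hashlib.sha256(data).hexdigest()

# Reporter submits the plaintext excerpt plus its hashes.
reported = [("mallory", 7, "scam message"), ("mallory", 8, "more scam")]
reporter_hashes = [msg_hash(*m) for m in reported]

# Reported party's client reveals only its hashes for those sequence numbers.
other_side_hashes = [msg_hash("mallory", 7, "scam message"),
                     msg_hash("mallory", 8, "more scam")]

# Hashes match: the excerpt wasn't fabricated, and the server never saw
# any message outside the reported excerpt.
assert reporter_hashes == other_side_hashes

# A fabricated excerpt produces a mismatch.
fake = msg_hash("mallory", 7, "totally innocent message")
assert fake != other_side_hashes[0]
```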
More sophisticated HNers can chime in with zero-knowledge proofs and whatnot to show that their argument is DOA.
You have to contrive a complex system involving zero-knowledge proofs and a lot more work, rather than just being able to see the message asking for a child to dance in their underwear directly on the server, and that makes their argument DOA?
Hashing a handful of strings and comparing them is incredibly simple. Not sure where you think the complexity lies. It is in fact so trivial I'm confident Claude Code can one-shot this.
But I'll humor the notion and say that, yes, I'd rather not let them see me dancing in my underwear unless I've explicitly decided to share it.
Siri fell behind due to how good Apple’s privacy is.
Everyone made fun of them for protecting them.
This is exactly the opposite of that, where Mark is throwing you and your children under the bus again because he’s unoriginal and doesn’t know how to make money any other way than by getting all up in your business, statistically.
Same. The fact they're shoving AI into it and expanding it to providers who don't have privacy as a guiding principle is a key reason I'm sitting on a 14 Pro still, and why I'm exploring local alternatives with Home Assistant.
Besides, we just need to set verbal timers and control music. We don't need a full-blown verbal Oracle.
Home Assistant is indeed quite nice and relatively simple to set up with the Docker images provided by the team. Device setup on iOS was a little inconsistent, but has been rock solid for over a year. Check out Homebridge as well. I run both.
I ought to take a break from my Docker Compose work and get back to migrating off Homekit and into Home Assistant. The Home Assistant Yellow has been a real champ thus far, and once it’s set I can then tie the Unfolded Circle 3 into it for better control.
What value do you get out of Home Assistant you don't from HomeBridge? I use HomeBridge for a few devices, my Windmill AC, some Govee lights, and previously my Ikea smart lights (Tradfri, but now Dirigera supports HomeKit).
Not everything in life is a threat model, y’know; oftentimes it’s just personal preference.
I prefer to read reference material and do research instead of asking chatbots, for instance, because it helps the material stick better and enables me to make broader connections to disparate pieces of knowledge.
I also prefer technology to be narrow in scope and function, so I can spend more time enjoying life and less time troubleshooting why some needless complexity has failed again. This extends to voice assistants that consistently fumble on accents and grammar when asking for more complex queries, and often want to send data out of my LAN to some random server I have no control over just to process something that could be done on any of the myriad of GPUs and CPUs in my home instead.
Despite the EULA, TOS, and Privacy Policies governing these interactions, I intrinsically don’t trust a relationship that requires revalidation of those policies every time an update is pushed, whose changes fail to be summarized, and which force me into hostile relationships with the vendors. I also generally believe that as live services, there is no sufficient incentive for security or privacy but ample incentive for data mining and prolonged/frequent interactivity. Repeated incidents of supposedly “anonymous” and “private” conversations or data being inappropriately disclosed or compromised do not help lend any sense of security to said services, at least to me. Then you consider the wider economic environment prioritizing immediate gains over sustainable business practices, and my own personal preference for building and nurturing long-term infrastructure to solve my problems on a consistent basis, and it’s less a threat model and more just incompatibility between my personal needs and corporate goals.
What is your concern about prompts to go OpenAI? Apple has a contract with OpenAI that explicitly prevents them from logging, storing, training, or making any use of your prompts other than to satisfy the specific current request. Apple has some good lawyers and I’m sure that the teeth are prominent in that contract.
Not to mention the fact that the default settings are to ask the user before sending anything to ChatGPT, and you can selectively disable just the ChatGPT integration while leaving Apple Intelligence enabled.
No. "You have to unlock your iphone first" is such a hindrance to using Siri for anything more than setting timers and alarms. If you're doing anything that involves gloves or a mask or getting your hands messy, like in a kitchen or something, it is just so frustrating. How about making a toggle so I can choose to be slightly less locked down for Siri, and I take full responsibility if I get hacked because of it.
Yeah. I've offset that by then long-pressing on any particularly sensitive apps I don't want to risk other people potentially being able to control via Siri and turning on "Require FaceID" just for those apps. Which then blocks Siri from being able to interact with them too.
Building new features on top of E2EE is genuinely hard, and I've seen many companies struggle to keep innovating while staying strictly E2EE.
Having seen multiple leading messaging/VoIP stacks from inside, the amount of engineering spent to work around various limitations of E2EE in real prod scenarios is insane, and even for simple every-day-use features metrics don't compare to the metrics of the same feature running without E2EE.
Then a more reasonable response is: “we cannot as effectively monetize all of the data in our advertising platform disguised as another tool entirely unless we disable E2EE and we need to be able to allow not only ourselves but others to invade your privacy even more than we already do because it’s technologically difficult to do so when we encrypt your communications.”
it doesn't necessarily have to be tied to monetization & privacy directly.
It may just be that the ROI doesn't make sense: very few users out there truly care about (or even understand) E2EE; for quite a few users it creates inconvenience and support incidents (harder to move from device to device, forget your passphrase and lose your history, new joiners to a group chat can't see previous history, etc.); it requires significant additional engineering effort just to maintain; and many new features ship much slower because of it...
It doesn't have to be, but that's not really an argument for claiming it isn't. Considering how deeply embedded privacy violation is in Meta's corporate DNA, is there any reason other than hilariously naïve and inexplicably charitable, hypothetical speculation to believe this is not motivated by more privacy violation for profit, just like literally every single thing Meta has done in the entire history of the company? No? Didn't think so.
Not if it is one of the companies famous for going the extra mile of being "evil" / a corporate shithole stealing every thought you ever have to better sell you stuff you don't need.
From talking to people from Meta, they don’t believe in E2EE because “it’s decrypted on the other end” which they take as “becomes insecure in exactly the way we’ve designed the sausage factory”
They’re a bit of a self-fulfilling prophecy for why trying to secure information near them is a futile effort.
I don't buy that. They could have done more with it despite the constraints. There's been a big lack of interest from Apple for a long time. Just like every few years they introduce a completely new Mac Pro with all the fanfare and then completely lose interest and let it wither and die for 5 years.
> "Siri, do [extremely similar to X] thing" "I don't know what you mean"
It’s funny. Even a team of interns could’ve mapped more synonyms, right?
And Apple Intelligence, when it uses ChatGPT, wouldn’t be quite as horrible if Apple had paid for better tokens instead of quantizing into oblivion… I think.
Two savings for Apple resulting in subpar experiences.
I’m less impressed with Zuck every time I hear something new about him.
Apple has made incredible progress in the last 20 years, but almost none of that has been a brand new product. It has all been evolving the existing products, building the world’s best supply chain, and taking incredible market share from Windows. To be clear, AirPods are a much bigger market than Nike shoes. Those, plus Apple Watch, iPad and Vision Pro are new in the last 20 years.
In the past 20 years, the Facebook website has evolved, but all of the other major investments by the company have been acquisitions: Instagram, WhatsApp, Oculus. Diem (or whatever that proprietary cryptocurrency was called) and the Metaverse were massive failures. I don’t know what to credit Meta for in the AI era except some of the Llama tooling and some open-weights LLMs. CZI is doing cool things, but that’s Zuck’s private science company, not part of Meta.
Profiting off of a genocide is a first in the social media world, pretty groundbreaking. Especially when you find out that workers and management knew about it as it was ongoing but did nothing in response. Truly such innovative people; surely they deserve all the money and none of the responsibility. It is a meritocracy after all.
Not a fan of Zuck (quite the opposite), but he isn't wrong here. Apple indeed didn't come up with anything new. Their PR stunts each year are more and more laughable as time goes by.
Neither did Meta, but that's a different discussion
The iPhone and Apple Watch don't count in your book? They are both younger than 20 years. I think the iPhone changed the mobile market. I don't think it was for the better but it was 'new'. It might also be just where I live but I see more Apple Watches than any other digital watches. Maybe not as 'new' but they certainly changed the watch market as well, for better or worse.
I can't think of a single new thing Facebook did in 20 years that stuck. Metaverse?
Apple's response to the UK gov asking to see users' iCloud data says enough about where their priorities lie [1]. They do something far worse in China [2].
Don't fool yourself into believing Apple cares about your privacy. They care about money.
The UK public can still vote for governments that don’t demand backdoors into citizens’ private data. Instead, over the past century they’ve turned their country into an ineffectual nanny state of shrinking global relevance, while a fading aristocratic and old money class desperately cling to influence over a population that no longer cares about the old titles and prestige of having attended some ‘old boys’ boarding school nobody outside of GB has ever heard of.
Signal is one example. Their values are simply not compatible with what the Chinese government wants (local data storage, key access, etc.). Instead of complying and putting their users' privacy at risk, they accepted the ban.
Google, out of all companies, also decided to partially walk away from the Chinese market in 2010 over censorship concerns [1].
Nobody is forcing Apple to do business in China, or the UK. They actively choose to do so, and because of that also put themselves in a position where they have to comply with these laws, presumably because it makes them more money.
Signal responds to warrants with all the data they keep.
ProtonMail / ProtonVPN responds to the vast majority of warrants with the data they keep.
Apple iCloud always responded to iCloud warrants with whatever data they had (e.g. if the user didn’t enable encryption). They shouldn’t have removed end-to-end encryption for the UK, but they have thousands of employees in that country and millions of customers.
Sometimes it’s not the company that is the problem, but the country / legislators.
Android was designed to prevent Windows from dominating mobile:
I literally helped create Android to prevent Microsoft from controlling the phone the way they did the PC - stifling innovation. So it's always funny for me to hear Gates whine about losing mobile to Android.
If you're charitable (as you should be), then a reasonable assumption is that they probably know what happens on a dairy farm, and that that's actually their point.
“But I want my freedom, I want to install whatever I want, bad Apple for locking down my devices away from me.”
They stay rather secure because of all these measures. But they’ll get dismantled, too. Because idiots push idiots in power to weaken Apple’s stance. Useful idiots is the right term.
That's a generous view. The dystopian fascist view is that he's aligning with the surveillance state's interests, and Instagram is seen as a breeding ground for anti-American activities.
I'm not sure of the value of end-to-end encryption for proprietary application chats. For emails and SMS messages, your messages are sent between multiple servers on the open internet, which opens you up to spying, but end to end encryption on instagram is only protecting your chats from Meta.
I find the end to end encryption on Facebook to be detrimental to ease of use, because you always have to use a pin code, etc for the web interface.
If you don't trust meta with your chats, you probably shouldn't be using their application to begin with.
FB Messenger was nice and simple before they added the clunky E2EE feature, and it's not even secure because it's just 6 digits of entropy.
WhatsApp e2ee is solid. It's painful if you have multiple devices, but it was designed for people to use on just one phone in the first place, not necessarily caring about chat history.
> but end to end encryption on instagram is only protecting your chats from Meta.
No. It protects your chats from Meta and all governments of the countries where Meta operates.
In fact, I expect Instagram to become more reachable globally now, because these relaxed communication standards would be welcomed by oppressive governments: they can now retrieve messages as they please, for whatever purpose they deem fit.
Actually, by doing E2E encryption, Meta can say to the authorities that Meta doesn't see any messages and cannot be blamed for anything: "we cannot snoop on users' conversations." And that's generally a good thing.
The authority holds Meta responsible anyway; they don't care about the implementation detail. They want to catch a pedo, and Meta is unable to produce evidence that helps them. Everyone else will yell at Meta for helping pedos.
You can substitute "pedo" with any other heinous crime e.g. terrorism.
And this is how we arrive at the current situation.
What form of accountability are you suggesting is even being leveraged here? No law could force Meta to backdoor its encryption, afaik. Public pressure would be unlikely to work.
Is Meta afraid of anything real, or is this just blame shifting via ungrounded speculation?
They can because Meta has chosen to implement E2E encryption. They could have chosen not to. All within their control.
Australia already has this law in place: a company must hand over users' conversations. A company cannot excuse itself by saying it implemented E2E to prevent itself from reading users' messages. Source: https://www.bbc.com/news/world-australia-46463029
The UK has a proposal to ban encryption this year. It is still being discussed.
> Public pressure would be unlikely to work
Public pressure works to a certain degree. Do you think a product manager at Meta would want to be labeled as "protecting pedos"?
> Public pressure works to a certain degree. Do you think a product manager at Meta would want to be labeled as "protecting pedos"?
I think that Meta can afford as much PR as they would need to out-message this sort of BS, again if they were inclined to protect user privacy in the first place. Look at Apple.
the entire point of encryption is that you don't trust the channel you communicate through, that's what it was invented for, communication across adversarial channels. Distrust is the only condition under which you need encryption.
In addition, from a practical POV, if anything the reverse is the case. Email encryption is LARP security because plain text is the default, it leaks metadata, and its interfaces make it trivial for people to leak entire conversations. If there's one technology where you should just assume your messages are public, it's email, before someone copy-pastes or wrongly forwards your encrypted communication to fifty other people.
Private message encryption makes sense because it's now a default, information exchanged is usually personal, and the problem isn't just Meta but law enforcement extorting your data out of their hands, which encryption in the real world has prevented a few times now already.
The executives don't want anyone else to be able to use the messages in a malicious way, so they decide to cut it at the sources of the messages i.e. e2e encryption.
This is like corporate emails being deleted after 6 months: when an authority asks for emails from the last year, they can say they don't have them.
Now the authority can ask for the emails not to be deleted at all but then that will be a different battle the authority has to fight.
Corporate emails often don't involve pedos/terrorism, so there's much less push to retain corporate emails forever.
It's too bad we fell so hard for centralization. In an alternate universe, messaging on the Internet could have been:
1. Alice's device has a publicly routable IP address with a domain name like alice.home.her.isp
2. Bob's device has the same qualities: bob.mobile.his.isp
Then Alice can just open her chat app up, add bob@bob.mobile.his.isp and off they go. I mean we had UNIX's "talk" for how long but instead of evolving/securing/fixing it, we blew it! And now we have all these companies 1. coming up with their own incompatible protocols and 2. inserting their stupid centralized servers as intermediaries. And now every chat message we send over the Internet has to be received and re-sent through a handful of amoral corporations.
Why is this any better? It doesn't solve any of the identity and end-to-end encryption problems centralized messengers do; it just changes the underlying connectivity model, which is the least interesting part of the system.
>In an alternate universe, messaging on the Internet could have been:
I don't think so, and I think the very reason is because the people who opt for these decentralized solutions never really sit down and try and design a product, they just want decentralization for the sake of decentralization. For them decentralization is the product. Your alternate universe evaporates when you ask the question "what happens if Alice's device is offline?".
If you squint, the exact system you are describing is e-mail, which became effectively centralized long before we had tech megacorps.
The issue with email is that too little is specified; too much is instead just left up to providers and clients.
This specification vacuum forces centralization because the only way to build essential usability features is to own both the mail server and the mail client.
If email had evolved to move with the times such that basic QoL features were part of the spec rather than proprietary extensions, then it could have stayed decentralized.
Contrast with what happened on the web. Yes it's imperfect but there is a standard that evolves to move with the times and there are multiple implementations of that standard.
Decentralized cooperation and the associated protocols are a lot harder than just inserting messages into a MySQL database and then displaying those messages.
If people spent half the time they do wishing for decentralized messaging working on the actual problems with it, we would have decentralized messaging.
If people spent 5 minutes googling decentralized alternatives to stuff they would realize they don’t need to build anything; just pick something and use it.
I worked at Instagram during this (not on the E2EE team, but I saw enough of it to know it was a mess).
I think the reason for dropping it is more a technical and user-experience issue than a 'desire' issue or lack of company will. From my understanding, Zuck wanted this. The implementation was a mess, and folks have different expectations about how messages appear on every platform. Having messages disappear between devices/web, having to back up encryption keys, etc... it was just a terrible user experience. Even employees disliked this feature.
This was not something users actually asked for, but more a feature built to head off all the types of legal issues created when folks use the platform.
At some point I counted 64 'leads' just to make this happen. Each lead had a certain area or surface/views, which means we are talking about hundreds of folks involved in making this happen (across FB and IG).
It was a boondoggle, and it was something users didn't ask for.
PS: I know many here on HN really care about this, but the average user was not willing to put up with the degradation of the user experience in order to make this happen.
All workarounds require weakening E2E, which makes it pointless.
Ultimately, if you want truly E2E messaging, you will have to use a platform specifically made for it. IG/FB are just not it.
Even Telegram doesn't have it enabled by default, unless you specify it.
I don't know all the details because I'm not a cryptologist, but Wire messenger seemed to have solved this in a way that wasn't annoying. I haven't used it since they pivoted, so can't speak much to its implementation, but I remember it working seamlessly across devices.
I think people feel this is the beginning of the end.
Meta is part of the reason Signal's E2EE spread and E2EE became ubiquitous in general.
Many governments have also turned against E2EE, and I suspect it's gone from a shield, where you could say "we can't really help you get that data," to a source of constant pressure.
Yeah, fine, I see that, but their entire business model revolves around exposing people’s otherwise private lives, and they are making a lot of money doing so.
It’s like using a web browser distributed by an ad company whose business model is all about tracking folks.
This is a reductive and frankly insulting response. People want to be able to communicate with other people, and they will use the tools that allow them to do it. Sometimes the person I want to reach is on Instagram; that’s how I can reach out to them. I believe privacy ought to be a human right. I don’t just limit myself to tools used by a very narrow swath of arrogant nerds. Instead, we fight for privacy.
Centralized proprietary software on proprietary platforms can always be opted into a special update that makes all the private keys deterministic, rendering end-to-end encryption useless against anyone with knowledge of that targeted backdoor.
Only FOSS can deliver verifiable E2EE, and all centralized and proprietary solutions like Zoom, WhatsApp, Instagram, etc. should end the security theater.
I applaud Meta for at least being honest about one product.
Centralized FOSS software can do the same thing and remove encryption. Open source is not a requirement for security.
With reproducible builds like Signal does you can be sure the app you've downloaded matches the source code that's been audited:
https://github.com/signalapp/Signal-Android/blob/main/reprod...
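The verification step reproducible builds enable is essentially a digest comparison. Here is a minimal Python sketch of that final check; the byte strings stand in for real APK files, and this is not Signal's actual tooling (the linked repo documents their real Docker-based process):

```python
import hashlib

def apk_digest(data: bytes) -> str:
    """Return the hex SHA-256 digest of raw APK bytes."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical stand-ins for the APK pulled off the device and the one
# built locally from the audited source tree.
device_apk = b"...apk contents served to the device..."
local_apk = b"...apk contents from a local reproducible build..."

if apk_digest(device_apk) == apk_digest(local_apk):
    print("served build matches the audited source")
else:
    print("MISMATCH: served build differs from the audited source")
```

The whole guarantee rests on the local build being bit-for-bit identical, which is why toolchain pinning is the hard part, not the comparison itself.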
While I agree reproducible builds are a huge part of the answer, if you get your builds from Google Play or the App Store you have no idea if anyone has reproduced the particular build that was served to your device.
A solution to this would be independent reproducible builds like F-Droid does, but Moxie rejected this citing it would cause them to lose control of the platform and install metrics Google and Apple provide. Always thought that was a weird position for a privacy tool.
Personally I would be more concerned about a vulnerability or backdoor in Intel SGX
There's no guarantee, but if the build is mass-served it's at least possible to find out. With closed-source apps you may not even know.
So what? The centralized owner owns the code repo too, so such a restriction doesn't stop anything.
Even if Instagram was open source, Meta could remove the E2E chat feature.
If it was open source people could fork.
FOSS is however a prerequisite to Kerckhoff's principle https://en.wikipedia.org/wiki/Kerckhoffs%27s_principle
Unlike with the proprietary stuff, there isn't a strong built-in incentive to remove it.
One incentive is that it makes for a simpler user experience.
It's absurd that you're actually taking the position you are
> 'Very few people were opting in to end-to-end encrypted messaging in DMs,' Meta says.
Then why didn't you make it on by default, like Signal and WhatsApp? :-)
Because either you have:
1. An E2E system where the provider has de facto access to the encrypted data, or
2. You shift key management to the users and let them risk data loss.
Either way:
a. The provider can release an app version at any time that accesses the data on the client side, and
b. Most of your users cannot differentiate between E2EE and SSL/TLS, nor are they interested in doing so, nor do they care.
Instagram wasn't set up this way. If you install it on a new phone or open it in-browser, you aren't expected to give it a recovery key to get your DMs back. They did add e2ee for FB Messenger, and it was very clunky besides not being secure at all (6-digit numeric pin).
I never even knew they had E2E available, so they cannot have been too serious about getting people to opt in.
A shame that they now have to shut it off because people didn't use something they didn't know existed /s
I'm not sure if this meets the bar for substantive and thoughtful discussion, but this kind of corporate cowardice, enforced by unelected bureaucrats standing at the bully pulpit is only going to get worse as the noose tightens on the open web.
The combination of hardware attestation and walled garden "app stores" is the end goal of most policymakers in this area, and it happens to suit the monopolists in Google and Apple and Facebook down to the ground.
Perhaps a timely reminder that things do not always get better over time, and that we may have lived past the high point of secure communications in our lifetime.
Hardware attestation really sounds like one of the worst things that could've happened to computers.
It's not just "the death of the open web".
These decisions are made whilst America is falling to fascism. Meta may not intend for the abolition of E2E encryption to make fascist crackdown on free speech easier, but that is the reality of what abolishing E2E encryption does.
DHS is already subpoenaing tech companies for the information about users who criticize ICE. https://www.nytimes.com/2026/02/13/technology/dhs-anti-ice-s...
The "walled garden" isn't even that big a concern anymore. What's at stake here is the gestapo reading every single digital communication you have and showing up to harass, threaten, or simply disappear you into a camp if the infraction is severe enough.
This is getting pushed in the EU too. The crucial thing to understand about where the road is headed: the goal is to invert who does the illicit-content scanning. The idea is that the government gives the platforms a list of illegal activity/speech (such as anti-ICE activism or speaking in support of Palestinians), and the platforms then scan (presumably with LLMs) all public and private messages, delete them, and forward them to the user's federal and local law enforcement. This is why E2E is under attack everywhere: it makes that impossible.
Do people expect that Instagram can't read their Instagram private messages? I don't think people expect that. And E2EE is not nearly as cheap as the HN crowd likes to pretend: how do those devices get those keys if not through a central service? Especially if one of them is a web browser?
>Do people expect that Instagram can't read their Instagram private messages? I don't think people expect that.
A deeper question is why we reached a point where people can't reasonably expect their communication to not be spied on.
People, or at least Americans, didn’t care in 2013 when the Snowden revelations happened. We’ve been at that point for over a decade now.
Considering the average person thinks that opening websites in incognito means no one knows they visited them, I would agree.
"Be wary of malicious software that tracks your keystrokes in exchange for free smileys"
PRISM?
We're like 20 years past that, or at the very least 10 years with Snowden.
The people have spoken, caring about your communications not being spied on puts you in the minority. I mean like, just look at social media. You have tons of people who not only don't care about being spied on, they actively document everything for more views.
I would expect any message facilitated by a company's software, and going through that same company's servers to be compromised.
Exactly. E2EE comes with UX consequences that you can't just bolt on later. There might be something to be outraged about, but this alone isn't it.
The answer to almost every question you’re asking is just "public key cryptography". It’s kind of disheartening to me that such basic 1990s tech, as implemented by Phil Zimmermann, is now obscure enough to merit questions like this.
Both parties exchange public keys through the central service. Only the possessor of the respective (on device, Secure Enclave ideally) private keys can decrypt the messages encrypted to the public key. The process can also work in reverse, encrypting with the private key so only holders of the public key can decrypt: this is called “signing”.
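To make the mechanics the parent describes concrete, here is toy RSA in a few lines of Python. The tiny primes make it completely insecure; this is purely an illustration of "encrypt with the public key, decrypt with the private key, and the reverse is signing":

```python
# Toy RSA with tiny textbook primes -- illustration only, never real security.
p, q = 61, 53
n = p * q                # public modulus
phi = (p - 1) * (q - 1)
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (modular inverse, Python 3.8+)

def encrypt(m: int) -> int:
    return pow(m, e, n)   # anyone holding the public key (n, e) can do this

def decrypt(c: int) -> int:
    return pow(c, d, n)   # only the private-key holder can do this

def sign(m: int) -> int:
    return pow(m, d, n)   # "encrypting with the private key"

def verify(sig: int) -> int:
    return pow(sig, e, n) # anyone with the public key can check the signature

msg = 42
assert decrypt(encrypt(msg)) == msg
assert verify(sign(msg)) == msg
```

Real systems layer padding, hybrid symmetric encryption, and key-distribution infrastructure on top, which is where the hard problems discussed further down this thread actually live.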
No, it's not at all this simple. This is why so many "e2ee" apps like Telegram are bogus, they ended up prioritizing UX over security because there are many places where you can't pick both.
Webs of trust based on OOB key verification and signing, or centralized trust authorities are the two primary models I’m aware of.
I’ve always been enamored of the idea of DNS as a back end protocol to enable the former largely decentralized solution.
Bob looks up Alice and receives her key from Alice’s namespace within the DNS hierarchy, along with her trust claims. David then looks up Alice’s key within her namespace, sees a reference to endorsement by Bob, and can validate this by querying Bob’s namespace. David can also issue non-authoritative queries about Alice’s key to Bob’s DNS servers, ensuring that there is no mismatch between the query response received by Bob and the one received by David.
If Mallory manages to compromise Alice’s DNS, but not Bob’s, the result is a mismatch in query responses that both Bob and David can thus detect.
At scale, a MITM compromising a system like this would be difficult without compromise of a large number of independent namespaces, increasing the likelihood of detection via the non-authoritative queries.
The missing component in this arrangement is cryptographic security of DNS, which I cynically suspect is why the DNSsec working group was comprised of the usual suspects and eventually produced a protocol without query encryption. It could still be layered on by a protocol extension, however.
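The cross-query mismatch detection described above can be modeled in a few lines of Python; the dict-based "namespaces" and key strings are purely illustrative stand-ins for real DNS zones:

```python
# Each party's DNS namespace is modeled as a dict: name -> published key.
alice_ns = {"alice": "alice-key-v1"}          # authoritative for Alice
bob_ns = {"alice": "alice-key-v1"}            # Bob's endorsed copy of Alice's key

def cross_check(name: str, authoritative: dict, witness: dict) -> bool:
    """Compare the authoritative answer with a witness's non-authoritative copy."""
    return authoritative.get(name) == witness.get(name)

# Normal case: David's authoritative query and his query to Bob agree.
assert cross_check("alice", alice_ns, bob_ns)

# Mallory compromises Alice's namespace but not Bob's:
alice_ns["alice"] = "mallory-key"
assert not cross_check("alice", alice_ns, bob_ns)   # mismatch detected
```

The more independent witnesses David queries, the more namespaces Mallory must compromise simultaneously to stay hidden, which is the scaling argument above.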
And how does one verify that the public key received belongs to the intended party, rather than a mitm?
If the answer is blind trust in a third party that runs the messaging service then I suspect that you can guess what the people asking those questions are really asking.
https://en.wikipedia.org/wiki/Diffie%E2%80%93Hellman_key_exc...
If Meta are turning it off then I guess it's reasonable to assume that there is something to turn off.
Diffie-Hellman-Merkle key exchange is vulnerable to attacker-in-the-middle attacks.
Eve could just do key negotiation with Alice and separately do key negotiation with Bob. You have to use a slightly more complicated cryptographic protocol to avoid this issue.
The only way to avoid this issue is if Alice and Bob can talk out-of-band. There's no protocol that fixes this.
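The attack being discussed can be demonstrated with textbook Diffie-Hellman over a toy prime; this is purely illustrative Python, not a real parameter set:

```python
import secrets

# Textbook Diffie-Hellman over a toy group -- illustration only.
p, g = 23, 5

def keypair():
    priv = secrets.randbelow(p - 2) + 1
    return priv, pow(g, priv, p)

a_priv, a_pub = keypair()   # Alice
b_priv, b_pub = keypair()   # Bob
e_priv, e_pub = keypair()   # Eve, sitting in the middle

# Eve intercepts both public values and substitutes her own, so she ends up
# sharing one secret with Alice and a separate one with Bob.
alice_side = pow(e_pub, a_priv, p)   # what Alice computes (thinking it's Bob)
eve_to_alice = pow(a_pub, e_priv, p)
assert alice_side == eve_to_alice

bob_side = pow(e_pub, b_priv, p)     # what Bob computes (thinking it's Alice)
eve_to_bob = pow(b_pub, e_priv, p)
assert bob_side == eve_to_bob
# Nothing in the math tells Alice or Bob anything went wrong; only an
# out-of-band comparison of key material can expose Eve.
```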
True, but the out-of-band secure channel could just be something like DNS: automated and constantly subject to distributed monitoring for deltas.
How would the keys get stored in the user's private browsing window? Do they lose all chat history when they log in on a private browsing window and then close it?
I don't know the technical details of that for sure, but I think the answer is that keys and chat history are stored on-device only; for example you lose your WhatsApp history if you don't restore a backup when moving to a new phone.
If a messaging app is showing you message history in a private browsing window then perhaps the encryption key for that history is derived from your password or something like that; that can be done locally so that all the server ever sees is encrypted data.
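A password-derived key of the kind speculated about above might look like this, using stdlib PBKDF2; the password, salt handling, and iteration count are illustrative assumptions, not any specific app's scheme:

```python
import hashlib
import os

def derive_key(password: str, salt: bytes) -> bytes:
    """Derive a 32-byte encryption key from a password, entirely client-side."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)

salt = os.urandom(16)    # stored alongside the (encrypted) history, not secret
key = derive_key("correct horse battery staple", salt)

# Same password + same salt -> same key, so history can be re-opened locally;
# the server only ever sees ciphertext, never the password or the key.
assert derive_key("correct horse battery staple", salt) == key
assert derive_key("wrong password", salt) != key
```

The flip side is exactly the UX cost debated in this thread: forget the password and nothing, server included, can recover the key.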
If you log in to the app on one phone and then in a web browser should you still be able to see your messages in the web browser?
Sorry do you mean, that's how it works now, or, that's how you think it should work? Are you talking about Instagram or WA or something else?
edit: misread your message; if you have two sessions active at the same time, then yes I would expect both sessions to receive the same messages.
What if you log into the app and then log out of the app and then log into the app again? Should you be able to see your messages?
E2EE is a fail-secure design. In case of any doubt it deletes your private messages. When applied to this case, I don't think the upside of Facebook pretending they don't have a copy of all your messages outweighs the downside of constantly losing them.
Are you asking for technical details about E2EE in messaging apps, or simply making the point that you don't like it? If you don't like it, then fine, you do you, however I would point out that we all accept some inconvenience in our lives as a trade off for improved security; the lock on my front door is inconvenient but I'd rather have it than not.
As to whether or not Meta have been lying about it, then that would be on-brand for them, but then what are they turning off if so? Or maybe the whole thing is theatre, and I should better disconnect from the internet altogether? I don't see the value in speculating about that.
I'm asking you about how you want the world to work.
Well then, I think E2EE is a good thing and I'll take the minor inconveniences.
> And how does one verify that the public key received belongs to the intended party, rather than a mitm?
Fingerprints. Again, this is like Crypto 101. Not saying that as a personal attack of any kind, I just remain incredulous that what used to be entry level knowledge in “our thing” has evidently become so obscure.
You shouldn't be talking down like this, you're wrong about it. Alice and Bob need to exchange keys beforehand in some trusted out-of-band way. There's no protocol that solves this if Eve can be in the middle. I'm not sure what you mean by fingerprints, but if you describe a protocol, I can describe the mitm attack.
You’re not sure what key fingerprints are?
Bob and Alice are setting up their e2e channel, and because they have some extra level of concern about snooping, they telephone each other and read off some form of hash of the public key to each other.
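The "read a hash of the public key over the phone" step can be sketched in Python; the key bytes and the 16-hex-character truncation are arbitrary illustrative choices, not any standard's fingerprint format:

```python
import hashlib

def fingerprint(public_key_bytes: bytes) -> str:
    """Short, human-comparable digest of a public key, grouped for reading aloud."""
    digest = hashlib.sha256(public_key_bytes).hexdigest()[:16]
    return " ".join(digest[i:i + 4] for i in range(0, 16, 4))

# Hypothetical key material; in practice this is the raw public key blob.
alice_key = b"-----hypothetical alice public key bytes-----"
spoken = fingerprint(alice_key)
# Both parties compute this locally and compare the values over the phone.
```

As the sibling comment notes, this only defeats a MITM who cannot also intercept the phone call, so the out-of-band channel is doing the real security work.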
A more complex variant would be something like what PGP implemented, where Bob and Alice could both sign each other's keys after this exchange, ensuring that someone who hadn’t met Bob but did trust Alice could inherit trust in Bob’s Alice-signed key.
You’ve stated unequivocally that I’m wrong, so now, please show your homework.
This is a very frustrating exchange. You guys are saying the same thing. For key exchange to be secure against an attacker who can MITM the channel you're securing, either the public keys or at least their respective fingerprints need to be exchanged out of band, over some channel the same attacker cannot also MITM. For a sophisticated enough targeted attack, a telephone isn't that.
The way military radios handle this is hardware key loaders that have seeds pre-synced at the factory, in person. Every day in the field, a unit comms person takes the key loader and loads new keys onto everyone's radios. The key loaders themselves are reseeded and resynced during maintenance periods between campaigns or exercises. They're physically accounted for on every movement and twice a day when not moving, and if they ever can't be found, all messages from any device they loaded keys onto are considered compromised.
Anyone trying to overthrow a government or run a criminal empire or whatever is going to have to take measures at least this drastic. Or quit LARPing and accept that nation state attackers can probably slide into your Instagram DMs, which are probably being sent to people you don't know, and if they're hot and actually answering you, 90% chance they're a honeypot anyway.
Web of trust or centralized trust are the main answers here.
Compromise of the secret key is a whole other issue - revocation.
MITM of a key can be solved pretty well via web of trust techniques.
Apologies if the dialog is frustrating to read! As a “recovering cypherpunk”, I find these sorts of discussions animating, as long as they’re polite and technically focused! Much love!
The fly in the ointment is that they control the software and updates to that closed software so can short circuit that with appropriate pressure.
Throwing this on the "brainstorm if we had an ideal legislative world" pile: Stealing a user's private key should be a felony, even if it hasn't (yet) been abused for anything.
The tricky part is keeping it from being "permitted" by a crappy contract of adhesion. Banning it entirely would make it very difficult to buy/sell backup services...
That would seem to constitute Honest Services Fraud under federal law, if they promised E2E then sabotaged it intentionally…
Not in the case of mandated back doors and warrants.
I’m not sure why you think so. If the service provider claims E2E but intentionally provides a defective version of it, that's a pretty clear-cut violation of the federal statute, which AFAIK, based on the statute’s language, contains no exceptions for defects cajoled into being by government pressure, short of a clear statute mandating them, which does not exist AFAIK.
Ok, so drop all pretense then and blatantly scavenge through private conversations? Then take whatever from there and maybe sell it to highest bidder?
People here like it, but end-to-end encryption is an objectively worse user experience for people that don't care about that feature
To me part of using E2EE is not because I have something to hide but because I have friends who work in human rights that do have something to hide.
I think of it not as a consumer feature but rather quite foundational for a working democracy.
I would argue that WhatsApp's e2ee user experience is pretty decent, and didn't get worse when they introduced encryption.
But then again, their technical model has always been "fat client, dumb server" from the start.
How is it a worse experience? It's ridiculously simple: The app sends a public key to the person you're talking to. The end user doesn't even need to notice it. What am I missing here?
I honestly can't tell if this is sarcasm
Well except for the people here who love IRC. They must not like e2ee.
How is it "objectively worse"?
I guess because it is harder to back up and recover and harder to sync between devices. Signal has made a lot of progress on those though.
Main thing that comes to mind is things like this:
- If you lose your phone you lose your messages
- If you forget your password you lose your messages
- If you switch phone you often lose messages
- If you get added to a group you can't see the previous messages
> - If you forget your password you lose your messages
This shouldn't be the case, right? The password isn't related to the key the messages are encrypted with.
It should be, otherwise where is that key coming from?
> Our messaging system has long been designed to balance user privacy with the ability to respond to scams, harassment, and other safety concerns when users report them or when required by law
That's TikTok on why they won’t add E2E for private messages.
I guess it’s reasonable to give up privacy to save the children, TikTok cares so much about our kids safety and wellbeing !
This is awful. They are doing this so they can literally advertise to kids. I bet their DBs aren't encrypted at rest either. Complete foolishness.
Can you steelman TikTok's argument?
HN isn't a place for serious thought nor internal critique. No one (all bots at this point?) will critically engage past the most surface level reddit tier argument.
I'm sorry, but encryption like HTTPS has been around since 1995. Every software engineer knows they should use encryption to protect PII.
What am I supposed to be doing on behalf of Meta? Prove I am not a bot with a smooth brain? Present their argument? Their argument is that they want to sell as much data as possible about adults and children to the Department of War. They want to double-dip and use all user data to produce advertisements, algorithms, and user experiences that negatively impact children's health so they can profit.
https://www.abc.net.au/news/2026-03-26/meta-and-google-found...
If you want me to argue for meta on their behalf to help them find reasons to forward their goals of exploiting their user base, I won't. The exercise has negative value.
No, because it's terrible. There's no need to break encryption to allow you to report a user. You'd just report via a copy of an excerpt of the conversation and leave the rest of your communication private. If the user can't tamper with the extraction of that excerpt, you can trust it is correct. You could even extract hashes from both the reporting party and the reported party and compare them with zero knowledge of the actual conversation.
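One way the hash-comparison idea above could work, as a rough Python sketch; the keyed-hash design, key handling, and message strings are all hypothetical, not any real platform's reporting scheme:

```python
import hashlib
import hmac

def transcript_hashes(messages: list[str], conversation_key: bytes) -> list[str]:
    """Each client keeps a keyed hash per message; the server stores only these."""
    return [hmac.new(conversation_key, m.encode(), hashlib.sha256).hexdigest()
            for m in messages]

key = b"hypothetical shared conversation key"
sent = ["hi", "send me that photo", "ok"]
server_side = transcript_hashes(sent, key)   # server never sees plaintext

# On report, the reporter uploads the plaintext excerpt; the service checks it
# against the stored hash, showing the excerpt wasn't fabricated, without ever
# having had access to the rest of the conversation.
reported = "send me that photo"
assert hmac.new(key, reported.encode(), hashlib.sha256).hexdigest() in server_side
assert hmac.new(key, "fabricated".encode(), hashlib.sha256).hexdigest() not in server_side
```

This is the simple end of the design space; the zero-knowledge variants mentioned below remove even the per-message hash metadata.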
More sophisticated HNers can chime in with zero-knowledge proofs and whatnot to show that their argument is DOA.
You have to contrive a complex system involving zero-knowledge proofs and a lot more work, rather than just being able to see the message asking for a child to dance in their underwear directly on the server, and that makes their argument DOA?
Hashing a handful of strings and comparing them is incredibly simple. Not sure where you think the complexity lies. It is in fact so trivial I'm confident Claude Code can one-shot this.
But I'll humor the notion and say that, yes, I'd rather not let them see me dancing in my underwear unless I've explicitly decided to share it.
It's pretty easy to do, though, and it has the benefit of both allowing privacy and being able to check if someone is actually trying to abuse kids.
No E2E is understandable if you're the Chinese government, but isn't TikTok now run by a US company (at least in America)?
Put simply:
I’ve talked to Apple engineers.
Siri fell behind due to how good Apple’s privacy is.
Everyone made fun of them for protecting their users.
This is exactly the opposite of that: Mark is throwing you and your children under the bus again because he’s unoriginal and doesn’t know how to make money any other way than by getting all up in your business, statistically.
I usually defend Siri, because I’m perfectly fine trading a little functionality for security. I prefer it that way.
Same. The fact they're shoving AI into it and expanding it to providers who don't have privacy as a guiding principle is a key reason I'm sitting on a 14 Pro still, and why I'm exploring local alternatives with Home Assistant.
Besides, we just need to set verbal timers and control music. We don't need a full-blown verbal Oracle.
They’re hosting their own Gemini, so they aren’t sacrificing to Google’s standards even if using their technology.
Home Assistant is indeed quite nice and relatively simple to set up with the Docker images provided by the team. Device setup on iOS was a little inconsistent, but has been rock solid for over a year. Check out Homebridge as well. I run both.
I ought to take a break from my Docker Compose work and get back to migrating off Homekit and into Home Assistant. The Home Assistant Yellow has been a real champ thus far, and once it’s set I can then tie the Unfolded Circle 3 into it for better control.
What value do you get out of Home Assistant you don't from HomeBridge? I use HomeBridge for a few devices, my Windmill AC, some Govee lights, and previously my Ikea smart lights (Tradfri, but now Dirigera supports HomeKit).
I'm curious what the threat model is that you're protecting against.
Not everything in life is a threat model, y’know; oftentimes it’s just personal preference.
I prefer to read reference material and do research instead of asking chatbots, for instance, because it helps the material stick better and enables me to make broader connections to disparate pieces of knowledge.
I also prefer technology to be narrow in scope and function, so I can spend more time enjoying life and less time troubleshooting why some needless complexity has failed again. This extends to voice assistants that consistently fumble on accents and grammar when asking for more complex queries, and often want to send data out of my LAN to some random server I have no control over just to process something that could be done on any of the myriad of GPUs and CPUs in my home instead.
Despite the EULA, TOS, and Privacy Polices governing these interactions, I intrinsically don’t trust a relationship that requires revalidation of those policies every time an update is pushed, whose changes fail to be summarized, and which force me into hostile relationships with the vendors. I also generally believe that as live services, there is no sufficient incentive for security or privacy but ample incentive for data mining and prolonged/frequent interactivity. Repeated incidents of supposedly “anonymous” and “private” conversations or data being inappropriately disclosed or compromised do not help lend any sense of security to said services, at least to me. Then you consider the wider economic environment prioritizing immediate gains over sustainable business practices, and my own personal preference for building and nurturing long-term infrastructure to solve my problems on a consistent basis, and it’s less a threat model and more just incompatibility between my personal needs and corporate goals.
By disabling Apple "Intelligence" you bypass the risk of your prompts going to OpenAI.
What is your concern about prompts going to OpenAI? Apple has a contract with OpenAI that explicitly prevents them from logging, storing, training on, or making any use of your prompts other than to satisfy the specific current request. Apple has some good lawyers, and I’m sure the teeth are prominent in that contract.
Not to mention the fact that the default settings are to ask the user before sending anything to ChatGPT, and you can selectively disable just the ChatGPT integration while leaving Apple Intelligence enabled.
No. "You have to unlock your iphone first" is such a hindrance to using Siri for anything more than setting timers and alarms. If you're doing anything that involves gloves or a mask or getting your hands messy, like in a kitchen or something, it is just so frustrating. How about making a toggle so I can choose to be slightly less locked down for Siri, and I take full responsibility if I get hacked because of it.
Settings > Apple Intelligence & Siri > Allow Siri When Locked
Keeping in mind that this gives quite a bit of access to your data, depending on how someone wants to structure their query
Yeah. I've offset that by then long-pressing on any particularly sensitive apps I don't want to risk other people potentially being able to control via Siri and turning on "Require FaceID" just for those apps. Which then blocks Siri from being able to interact with them too.
If only that worked!
Explicitly: I have that turned on and I still need to unlock before using.
Building new features on top of E2EE is genuinely hard, and I've seen many companies struggle to keep innovating while staying strictly E2EE.
Having seen multiple leading messaging/VoIP stacks from inside, the amount of engineering spent to work around various limitations of E2EE in real prod scenarios is insane, and even for simple every-day-use features metrics don't compare to the metrics of the same feature running without E2EE.
Then a more reasonable response is: “we cannot as effectively monetize all of the data in our advertising platform disguised as another tool entirely unless we disable E2EE and we need to be able to allow not only ourselves but others to invade your privacy even more than we already do because it’s technologically difficult to do so when we encrypt your communications.”
it doesn't necessarily have to be tied to monetization & privacy directly.
It may just be that the ROI doesn't make sense: very few users out there truly care about (or even understand) E2EE; for quite a few users it creates inconvenience and support incidents (harder to move from device to device; forget your passphrase and you lose your history; new joiners to a group chat don't see previous history; etc.); it requires significant additional engineering effort just to maintain; and many new features ship much slower because of it.
It doesn't have to be, but that's not really an argument for claiming it isn't. Considering how deeply embedded privacy violation is in Meta's corporate DNA, is there any reason other than hilariously naïve and inexplicably charitable, hypothetical speculation to believe this is not motivated by more privacy violation for profit, just like literally every single thing Meta has done in the entire history of the company? No? Didn't think so.
Were you in the room when Adam was making that call? No? Didn't think so...
Just give people some benefit of the doubt. There are much simpler ways to explain certain things than suspecting some universal evil in every move...
Not if it is one of the companies famous for going the extra mile of being "evil": a corporate shithole stealing every thought you ever have to better sell you stuff you don't need.
From talking to people from Meta, they don’t believe in E2EE because “it’s decrypted on the other end” which they take as “becomes insecure in exactly the way we’ve designed the sausage factory”
They’re a bit of a self fulfilling prophecy for why it is a futile effort for trying to secure information near them.
It's free to not use Instagram
> Siri fell behind due to how good Apple’s privacy is.
That makes zero sense.
The problem with Siri is... Siri. The interface itself.
Zero of my complaints around Siri have to do with it not being able to access my private data.
They're entirely about it not understanding my request in the first place or lacking a basic capability.
It’s not about your data.
It’s about the sum of all expression being able to be reflected back to you in such a way that Dawkins believes he’s met intelligent life.
Facebook and Google just slurped everyone's data into their data centers, while Apple's was encrypted.
I don't buy that. They could have done more with it despite the constraints. There's been a big lack of interest from Apple for a long time. Just like every few years they introduce a completely new Mac Pro with all the fanfare and then completely lose interest and let it wither and die for 5 years.
Siri fell behind because the bean counters at Cupertino didn’t want to spend on it, this is well documented and has nothing to do with privacy.
> Siri fell behind due to how good Apple’s privacy is.
Garbage. That's some good spin, though. Siri is a turd in a punch bowl for many reasons that have nothing to do with privacy.
"Siri, do X thing" "Done"
"Siri, do [extremely similar to X] thing" "I don't know what you mean"
Siri is connected to my Apple HomeKit. "Siri, turn off my Kitchen Lights" "I don't know what lights you mean."
Siri feels like it never evolved past a proof of concept.
> "Siri, do [extremely similar to X] thing" "I don't know what you mean"
It’s funny. Even a team of interns could’ve mapped more synonyms, right?
And Apple Intelligence, when it uses ChatGPT, wouldn’t be quite as horrible if Apple had paid for better tokens instead of quantizing into oblivion… I think.
Two savings for Apple resulting in subpar experiences.
Do you know what Zuckerberg said in an interview? I think it was to Lex Fridman but I could be wrong
"Apple hasn't come up with anything new in 20 years"
Very likely in response to Apple's granularity. Poor Zuck can't steal people's credentials
I’m less impressed with Zuck every time I hear something new about him.
Apple has made incredible progress in the last 20 years, but almost none of that has been a brand new product. It has all been evolving the existing products, building the world’s best supply chain, and winning incredible market share from Windows. To be clear, AirPods are a much bigger market than Nike shoes. Those, plus Apple Watch, iPad and Vision Pro are new in the last 20 years.
In the past 20 years, the Facebook website has evolved, but all of the other major investments by the company have been acquisitions. Instagram, WhatsApp, Oculus. Diem (or whatever that proprietary cryptocurrency was called) and the Metaverse were massive failures. I don’t know what to credit Meta for in the AI era except some of the LLAMA tooling and some open weights LLMs. CZI is doing cool things, but that’s Zuck’s private science company, not part of Meta.
Anything groundbreaking in advertising? Meta ad tools are pretty granular.
(Looking for anything here!)
Profiting off of a genocide is a first in the social media world, pretty groundbreaking. Especially when you find out that workers and management knew about it as it was ongoing but did nothing in response. Truly such innovative people, surely they deserve all the money and none of the responsibility. It is a meritocracy after all.
Not a fan of Zuck (quite the opposite), but he isn't wrong here. Apple indeed didn't come up with anything new. Their PR stunts each year get more and more laughable as time goes by.
Neither did Meta, but that's a different discussion
The iPhone and Apple Watch don't count in your book? They are both younger than 20 years. I think the iPhone changed the mobile market. I don't think it was for the better but it was 'new'. It might also be just where I live but I see more Apple Watches than any other digital watches. Maybe not as 'new' but they certainly changed the watch market as well, for better or worse.
I can't think of a single new thing Facebook did in 20 years that stuck. Metaverse?
Fair enough, for some reason I thought the iPhone was older than 20 years. The iPhone was indeed a great product and it changed the market, no doubt about it.
Apple Watch doesn't really count in my books - it's just a silly toy without much practical use (of course YMMV).
> I can't think of a single new thing Facebook did in 20 years that stuck
Again, I am not a fan of Meta. They didn't invent anything, arguably including FB itself, but again, this is topic for a separate discussion.
Privacy is the reason I’m still on team Apple.
Apple's response to the UK gov asking to see users' iCloud data says enough about where their priorities lie [1]. They do something far worse in China [2].
Don't fool yourself into believing Apple cares about your privacy. They care about money.
[1] https://www.bbc.com/news/articles/cgj54eq4vejo
[2] https://www.reuters.com/article/technology/apple-moves-to-st...
The UK public can still vote for governments that don’t demand backdoors into citizens’ private data. Instead, over the past century they’ve turned their country into an ineffectual nanny state of shrinking global relevance, while a fading aristocratic and old money class desperately cling to influence over a population that no longer cares about the old titles and prestige of having attended some ‘old boys’ boarding school nobody outside of GB has ever heard of.
Your links say that Apple complies with the laws of major countries. Which companies don’t do that?
Signal is one example. Their values are simply not compatible with what the Chinese government wants (local data storage, key access, etc.). Instead of complying and putting their users' privacy at risk, they accepted the ban.
Google, out of all companies, also decided to partially walk away from the Chinese market in 2010 over censorship concerns [1].
Nobody is forcing Apple to do business in China, or the UK. They actively choose to do so, and because of that also put themselves in a position where they have to comply with these laws, presumably because it makes them more money.
[1] https://www.nytimes.com/2010/03/23/technology/23google.html
Signal responds to warrants with all the data they keep.
ProtonMail / ProtonVPN responds to the vast majority of warrants with the data they keep.
Apple iCloud always responded to iCloud warrants with whatever data they had (e.g. if the user didn’t enable encryption). They shouldn’t have removed end-to-end encryption for the UK, but they have thousands of employees in that country and millions of customers.
Sometimes it’s not the company that is the problem, but the country / legislators.
Google also chooses to be a US company even though the US is supporting a genocide and is waging an illegal war against a foreign country (again)
You could argue Signal is the most "moral" here, but even then they don't really allow self-hosted backends and refuse to open-source their setup
Google of all companies didn't. And we all know how much they care about privacy.
They’re very clear about when they use e2e encryption, and at least I know when they don’t. There are multiple reasons I don’t use iCloud.
Would you care to explain why you think privacy with Apple is any better than with the other teams?
Apple feels like the only big tech company that remotely cares about its users. Thank god they make the best computer and OS too.
I’m sure this will not be a popular take on HN however.
Android was originally enticing because of iOS locking everything down and controlling the ecosystem
Android was designed to prevent Windows from dominating mobile:
I literally helped create Android to prevent Microsoft from controlling the phone the way they did the PC - stifling innovation. So it's always funny for me to hear Gates whine about losing mobile to Android.
— Rich Miner (https://x.com/richminer/status/1879004092602982765)
"Best" is subjective. But "caring about their users"? Their response to RtR alone shows they care about their margins more than their users.
Apple care about their users like a farmer cares about a herd of dairy cows.
Whereas Microsoft and Google care about their users like a farmer cares about a herd of pigs.
I can tell you don't actually know what goes on on a dairy farm.
If you're charitable (as you should be), then a reasonable assumption is that they probably know what happens on a dairy farm, and that's actually their point.
Ham and eggs. The chicken is involved. The pig is committed.
> I’m sure this will not be a popular take on HN however.
Precisely because it's your feelings, not objective observation.
No public company would care about its users, even remotely.
Indeed, during Jobs' time it actually worked that way, but that time is long gone.
The OS is not nearly the best, and the same goes for the laptops.
“But I want my freedom, I want to install whatever I want, bad Apple for locking down my devices away from me.”
They stay rather secure because of all these measures. But they’ll get dismantled, too. Because idiots push idiots in power to weaken Apple’s stance. Useful idiots is the right term.
Being in a prison cell is a great way to avoid traffic accidents, I agree.
They were kinda forced to in the name of "think of the children". See the New Mexico case that's been going on at the moment.
That's a generous view. The dystopian fascist view is that he's aligning with the surveillance state's interests, and Instagram is seen as a breeding ground for anti-American activities.
People are bozos for wanting more from a glorified egg timer. I like Siri myself.
Isn't this really about "protecting" minors using Instagram?
If they allow E2E encryption, they can't scan for CSAM or do other monitoring stuff effectively, so they can't provide a "safe" place for minors.
Obviously the right answer is kids shouldn't be exposed to social media at all, but more eyeballs are more important than our kids.
C'mon bud, this is never what it's really about
I'm not sure of the value of end-to-end encryption for proprietary application chats. For emails and SMS messages, your messages are sent between multiple servers on the open internet, which opens you up to spying, but end-to-end encryption on Instagram is only protecting your chats from Meta.
I find the end to end encryption on Facebook to be detrimental to ease of use, because you always have to use a pin code, etc for the web interface.
If you don't trust meta with your chats, you probably shouldn't be using their application to begin with.
I'm not sure I disagree, but I would summarise it slightly differently.
If you don't want Mark Zuckerberg to upload your private messages into his own chat AI, then stop using Instagram immediately.
FB Messenger was nice and simple before they added the clunky e2ee feature, and it's not even secure because it's just 6 digits of entropy.
WhatsApp e2ee is solid. It's painful if you have multiple devices, but it was designed for people to use on just one phone in the first place, not necessarily caring about chat history.
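For scale, the "6 digits of entropy" complaint about the Messenger PIN is easy to quantify with a quick back-of-the-envelope check (a rough sketch, assuming the PIN is purely numeric as the comments describe):

```python
import math

# A 6-digit numeric PIN has only 10**6 possible values.
pin_space = 10 ** 6
entropy_bits = math.log2(pin_space)
print(f"{entropy_bits:.1f} bits")  # ~19.9 bits of entropy
```

Roughly 20 bits is far below the ~128 bits expected of a real encryption key, so any scheme whose security ultimately rests on that PIN (rather than on a rate-limited server-side check) is trivially brute-forceable offline.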
> but end to end encryption on instagram is only protecting your chats from Meta.
No. It protects your chats from Meta and all governments of the countries where Meta operates.
In fact, I expect Instagram to be more reachable globally now, because these relaxed communication standards would be welcomed by oppressive governments: they can now retrieve messages as they please, for whatever purpose they deem fit.
Actually, by doing e2e encryption, Meta can tell the authorities that Meta doesn't see any messages and cannot be blamed for anything. They cannot snoop on users' conversations, and that's generally a good thing.
The authority holds Meta responsible anyway; they don't care about the implementation detail. They want to catch a pedo, and Meta is unable to produce evidence that helps them. Everyone else will yell at Meta for helping pedos.
You can substitute "pedo" with any other heinous crime e.g. terrorism.
And this is how we arrive at the current situation.
> The authority holds Meta responsible anyway
What form of accountability are you suggesting is even being leveraged, here? No law could force Meta to backdoor its encryption, afaik. Public pressure would be unlikely to work.
Is Meta afraid of anything real, or is this just blame shifting via ungrounded speculation?
They can because Meta has chosen to implement e2e encryption. They could have chosen not to. All within their control.
Australia already has a law in place requiring a company to hand over users' conversations. A company cannot use the excuse that it implemented e2e precisely to prevent itself from reading users' messages. Source: https://www.bbc.com/news/world-australia-46463029
The UK has a proposal to ban encryption this year. It is still being discussed.
> Public pressure would be unlikely to work
Public pressure works to a certain degree. Do you think a product manager at Meta would want to be labeled as "protecting pedos"?
> Public pressure works to a certain degree. Do you think a product manager at Meta would want to be labeled as "protecting pedos"?
I think that Meta can afford as much PR as they would need to out-message this sort of BS, again if they were inclined to protect user privacy in the first place. Look at Apple.
The entire point of encryption is that you don't trust the channel you communicate through; that's what it was invented for: communication across adversarial channels. Distrust is the only condition under which you need encryption.
In addition, from a practical POV, it's if anything the reverse. Email encryption is larp security because plain text is the default, it leaks metadata, and its interfaces make it trivial for people to leak entire conversations. If there's one technology where you should just assume your messages are public, it's email, before someone copy-pastes or wrongly forwards your encrypted communication to fifty other people.
Private message encryption makes sense because it's now a default, information exchanged is usually personal, and the problem isn't just Meta but law enforcement extorting your data out of their hands, which encryption in the real world has prevented a few times now already.
It's a governance thing.
The executives don't want anyone else to be able to use the messages in a malicious way, so they decide to cut it off at the source of the messages, i.e. e2e encryption.
This is like corporate emails being deleted after 6 months: when an authority asks for emails from the last year, they can say they don't have them.
Now the authority can ask for the emails not to be deleted at all but then that will be a different battle the authority has to fight.
Corporate emails often don't involve pedos/terrorism, so there's much less push to retain corporate emails forever.
It's too bad we fell so hard for centralization. In an alternate universe, messaging on the Internet could have been:
1. Alice's device has a publicly routable IP address with a domain name like alice.home.her.isp
2. Bob's device has the same qualities, using: bob.mobile.his.isp
Then Alice can just open her chat app up, add bob@bob.mobile.his.isp and off they go. I mean we had UNIX's "talk" for how long but instead of evolving/securing/fixing it, we blew it! And now we have all these companies 1. coming up with their own incompatible protocols and 2. inserting their stupid centralized servers as intermediaries. And now every chat message we send over the Internet has to be received and re-sent through a handful of amoral corporations.
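The alternate-universe model above can be sketched in a few lines: Bob listens on his own address, and Alice's message travels straight to him with no corporate relay in between. This is a minimal localhost demo under stated assumptions — `127.0.0.1` stands in for a real resolvable hostname like `bob.mobile.his.isp`, and it deliberately ignores NAT traversal, offline delivery, and encryption, which are exactly the hard parts the comments below raise.

```python
import socket
import threading

def serve_once(host: str, port: int, inbox: list, ready: threading.Event) -> None:
    """Bob's side: accept one direct connection and store the message."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        ready.set()  # signal that Bob is now reachable
        conn, _ = srv.accept()
        with conn:
            inbox.append(conn.recv(4096).decode("utf-8"))

def send_direct(host: str, port: int, text: str) -> None:
    """Alice's side: deliver straight to the recipient's device, no relay."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.connect((host, port))
        s.sendall(text.encode("utf-8"))

# In the alternate universe "bob.mobile.his.isp" would resolve here;
# localhost stands in for the demo.
inbox: list = []
ready = threading.Event()
bob = threading.Thread(target=serve_once, args=("127.0.0.1", 9901, inbox, ready))
bob.start()
ready.wait()
send_direct("127.0.0.1", 9901, "hi bob")
bob.join()
print(inbox[0])  # -> hi bob
```

The point of the sketch is only the connectivity model: both endpoints are first-class hosts, so no third party ever holds the plaintext in transit.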
Why is this any better? It doesn't solve any of the identity and end-to-end encryption problems centralized messengers do; it just changes the underlying connectivity model, which is the least interesting part of the system.
>In an alternate universe, messaging on the Internet could have been:
I don't think so, and I think the very reason is because the people who opt for these decentralized solutions never really sit down and try and design a product, they just want decentralization for the sake of decentralization. For them decentralization is the product. Your alternate universe evaporates when you ask the question "what happens if Alice's device is offline?".
If you squint, the exact system you are describing is e-mail, and that has become effectively centralized, and it happened long before we had tech mega corps.
The issue with email is that too little is specified, and is instead just left up to providers and clients.
This specification vacuum forces centralization because the only way to build essential usability features is to own both the mail server and the mail client.
If email had evolved to move with the times such that basic QoL features were part of the spec rather than proprietary extensions, then it could have stayed decentralized.
Contrast with what happened on the web. Yes it's imperfect but there is a standard that evolves to move with the times and there are multiple implementations of that standard.
Decentralized cooperation and associated protocols is a lot harder than just inserting messages into a MySQL database and then displaying those messages.
If people spent half the time they do wishing for decentralized messaging working on the actual problems with it, we would have decentralized messaging.
If people spent 5 minutes googling decentralized alternatives to stuff, they would realize they don't need to build anything; just pick something and use it.
Didn't you just describe email?
No, email is federated, what he described is clearly p2p.
Have a look at Keet, it’s a p2p IM app, works on mobile and desktop, behind NAT and all. To be honest it’s not even the only app in this game.
It just seems like nobody cares about these things until the frog is already boiled.
> now every chat message
"It hurts when I do that."
"Don't do that."
Big question is how to put sgt.cia.gov or mayor.fbi.gov in the middle between alice.home.her.isp and bob.mobile.his.isp.
That is why centralised messengers are pushed hard.
And this is not to protect society from harm, as many would assume.
I worked at Instagram during this (not on the E2EE team, but I saw enough of it to see that it was a mess).
I think the reason for dropping it is more a technical and user-experience issue than a 'desire' issue or lack of company will. From my understanding, Zuck wanted this. The implementation was a mess, and folks have different expectations about messages appearing on every platform. Having messages disappear between devices/web, or having to back up encryption keys, etc... it was just a terrible user experience. Even employees disliked this feature.
This was not something actually asked by users, but more of a feature done in order to thwart all the types of legal issues created when folks use the platform.
At some point, I counted, there were 64 'leads' just to make this happen. Each lead had a certain area or surface/views, which means we are talking about hundreds of folks involved to make this happen (across FB and IG).
It was a boondoggle, and it was something that users didn't ask for.
P.S. I know many here at HN really care about this, but the average user was not willing to put up with the degradation of the user experience in order to make this happen. All workarounds require weakening E2E, which made it pointless.
Ultimately, If you want a truly E2E, you will have to use a platform specifically made for it. IG/FB are just not it.
Even Telegram doesn't have it enabled by default, unless you specify it.
I don't know all the details because I'm not a cryptologist, but Wire messenger seemed to have solved this in a way that wasn't annoying. I haven't used it since they pivoted, so can't speak much to its implementation, but I remember it working seamlessly across devices.
This is exactly what I expected, and it's good to hear from someone with experience.
How likely is this about collection of LLM training data?
I don't understand why they would go in this inversely-progressive direction.
Shouldn't we be aiming to increase e2e encryption for the most regularly used communication platforms?
So don’t use Meta products if you care about privacy?
I think people feel this is the beginning of the end.
Meta is part of the reason Signal's E2EE spread and E2EE became ubiquitous in general.
Many governments have also turned against E2EE and I suspect it's gone from a shield where you can say we can't really help you get that data, to a constant pressure.
Yea fine I see that, but their entire business model revolves around exposing people’s otherwise private lives, and they are making a lot of money doing so.
It’s like using a web browser distributed by an ad company whose business model is all about tracking folks.
This is a reductive and frankly insulting response. People want to be able to communicate with other people they will use the tools that allow them to do it. Sometimes the person I want is on Instagram, that’s how I can reach out to them. I believe privacy ought to be a human right. I don’t just limit myself to tools used by a very narrow swath of arrogant nerds. Instead, we fight for privacy.
Instagram should be shut down. Not using encryption for social media and places where users expect any level of privacy is insanity.
I didn’t even know it was available to opt-in to! Probably why adoption wasn’t great.
Perhaps they should phase out the encryption on WhatsApp as well?
Previously:
3 days ago https://news.ycombinator.com/item?id=48024160
mid-March https://news.ycombinator.com/item?id=47363922
While encryption already ruined FB Messenger (no comment on IG encryption or lack thereof, but people have hated Insta since Zuck took over)
While they ALREADY probably only have Messenger for nefarious reasons https://news.ycombinator.com/item?id=4151433
He's a bit of a... something. That might get a 'low effort comment' moniker attached to it. Rhymes with ociopath