If your threat model requires a high level of privacy, there is no case in which you can use any of these tools and providers. Those goals are mutually exclusive.
"These are all new chats."
Bear in mind that it shares memory from previous chats.
There are at least two types: one saves things you explicitly tell it, another queries your recent chats.
There also seems to be a third kind of memory used when it performs searches, which may be related to Atlas. I've tried to clear bad data out of it (it keeps getting my name wrong), but it doesn't show up in the other two.
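To make the distinction concrete, here's a purely speculative sketch of how those two mechanisms might be structured. The names and shape are my guesses, not OpenAI's actual implementation:

```python
# Speculative sketch of the two memory types described above.
# Everything here (names, structure) is guesswork, not OpenAI's code.
from dataclasses import dataclass, field


@dataclass
class AssistantMemory:
    # Type 1: facts you explicitly ask it to save ("remember that ...")
    saved_facts: list[str] = field(default_factory=list)
    # Type 2: recent chat transcripts it can search at answer time
    recent_chats: list[str] = field(default_factory=list)

    def save_fact(self, fact: str) -> None:
        """Explicit, user-visible memory; persists across sessions."""
        self.saved_facts.append(fact)

    def recall(self, query: str) -> list[str]:
        """Implicit memory: retrieve snippets of recent chats matching a query."""
        q = query.lower()
        return [chat for chat in self.recent_chats if q in chat.lower()]
```

The point of the split: clearing Type 1 (the visible "Memories" list) would do nothing to Type 2, which would explain why a "new chat" can still surface details from old ones.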
No, when memory is disabled this shouldn't be the case.
They have de facto immunity and have been openly violating laws for years. Why would you think they wouldn't lie about this?
The safest bet is to assume that OpenAI is lying about everything. I don't know why, but I would guess that (1) they often consider it to be to their advantage and (2) they have no moral compunction against it. It's just the way they are.
This. Don't ever trust anything it says, especially about itself or how it works under the hood.
Interesting examples, thank you for sharing. My interactions with GPT 5.1 have also shown knowledge of past interactions, but I haven't explicitly prohibited it through settings as you have.
I've said it before and I'll say it again: OpenAI is the biggest privacy disaster ever. Sam Altman should be ashamed, because none of it is necessary.
For my work, I spoke with OpenAI representatives directly, and we talked about privacy. They assured me and my colleague that no one could see your chats, something like, "you don't have to worry that your boss can read your chats and think you're dumb."
My colleague (a data scientist) asked, "What if we wanted to study people's prompts to teach them better prompting methods?" And they did a 180 without even being pushed: "Oh yeah, we can get you that. No problem."
Of course, this was for enterprise ChatGPT, but my experience was very much that they're run like a startup: tell the customer whatever you need to in order to make money, since they're burning through so much cash.
IP-based geolocation. Really annoying that we can't disable it.
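For the curious, coarse location from an IP is trivial server-side. A minimal sketch, assuming the MaxMind GeoLite2 City database and the geoip2 Python library (the file path and IP below are placeholders, and this is the general technique, not OpenAI's pipeline):

```python
# Minimal IP geolocation sketch using MaxMind's GeoLite2 database
# (pip install geoip2). Path and IP are placeholders.
import geoip2.database

with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    r = reader.city("203.0.113.42")  # the client IP as seen by the server
    print(r.city.name, r.subdivisions.most_specific.name, r.country.iso_code)
    # e.g. "Portland Oregon US" -- usually city-level accuracy at best
```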
That was my initial assumption about how they're collecting the location data too. However, I find that if I switch to a different browser it rarely mentions my location, so it could be fingerprinting and pulling in info from a previous session on the same device where I mentioned my location.
I think the larger issue here is that when you ask how it knows these things, it seems to have been instructed to lie and say it doesn't know and has just guessed, which seems extremely unlikely. This simply isn't acceptable, in my opinion.
Just ask it not to?