It's obvious that coffee is hot, and yet Liebeck v. McDonald's (in which "I was burned by your hot coffee that's illegal" was the entire argument) settled for close to a million dollars.
After she suffered third-degree burns requiring skin grafts, her attorneys presented evidence that coffee they had tested all over the city was served at a temperature at least 11 °C lower than McDonald's coffee, evidence that this difference could have given her enough time to prevent those burns, and evidence that McDonald's had received more than 700 reports of people burned by its coffee to varying degrees of severity and had settled claims arising from scalding injuries for more than $500,000, so it had no excuse for not knowing this was a problem.
A role-play bot saying "I'm a doctor!" may well also be illegal, but it's much less clear to me whether it should be.
What if I'm a role playing bot(tom) saying I'm a doctor?
What if I don't disclose that I'm playing a role?
What if I did originally disclose that I was roleplaying, but that was thousands of incredibly convincing, highly detailed comments ago, and I only said it the one time?
What if I'm literally just WebMD, but presented as "because you are experiencing X, there is a high likelihood that you have $disease and should implement $cure immediately", without the traditional hedging and qualifying?
What if most of the audience seeing an advertisement for a medical intervention, hearing "symptoms include x, y, z, death, godhood, and the ability to commune with the devil", truly believes that they themselves are very likely or certain to experience not just one but all of the listed symptoms every single time they are exposed to said intervention?
Should the rule of law protect the vulnerable (those susceptible to influence), or not?
I'm really just playing the devil's advocate here. I'd rather software didn't have superficial, culturally influenced laws attached to it, but it is easy to see the harm even if I am rather comfortable with Darwinian selection. My being okay with people selecting themselves out of the pool does not preclude me from being able to see that they might want some outside protection from doing so.
To me, the mentioned McDonald's case is pure nonsense. There is no world in which I pay anyone for damages resulting from their interaction with coffee I served them. I do not believe that there are any people who are going to ask me for a coffee, receive it, spill it upon themselves by tampering with the vessel I provided it in, and then be mad to find out that it was indeed quite hot and that there was a reason I put a lid on it. But it is also instructive about the relevant mechanism. My understanding that hot liquids are dangerous is apparently not enough in that context. There is a reality in which I was supposed to somehow prevent the user from harming themselves with the dangerous item they asked for. As if I could be blamed for someone who died from shooting themselves when I sold them a gun, or for someone cutting themselves when I sold them a knife.
We learn from the case that laws, the "court of public opinion", and genuine morality bear only a passing resemblance to one another. See the recent ongoing case in which an LLM provider is being sued for providing advice on how to carry out a shooting, as though the same information were not available on countless websites. See the relatively recent outrage over video games supposedly causing an increase in violence.
> I'm really just playing the devil's advocate here.
Indeed. But all your questions are why I'm, as I said, unclear whether it should be illegal.
> My understanding that hot liquids are dangerous is apparently not enough in that context. There is a reality in which I was supposed to somehow prevent the user from harming themselves with the dangerous item they asked for. As if I could be blamed for someone who died from shooting themselves when I sold them a gun, or cutting themselves when I sold them a knife.
Hotter than expected, because it was hotter than anyone else provided it. As I showed, you were in error to claim '"I was burned by your hot coffee that's illegal" was the entire argument'. The jury even determined the fault was 20% hers, 80% McDonald's. Also, the final settlement in her case was undisclosed, so you can't tell whether it was close to a million: the jury decided on $2.8m between actual damages (harm to her) and punitive damages (to deter McDonald's), the judge reduced it to $640k, and both sides appealed before settling confidentially: https://en.wikipedia.org/wiki/Liebeck_v._McDonald's_Restaura...
I'm from a culture where even the cops don't have guns, so I don't know what the gun analogy of this would look like. Handing someone a fully automatic pistol when all other gun providers only supply manual? So everyone knows it's dangerous, but the people who buy one are wildly wrong about how dangerous?
Ars Technica (9+3 points, 9+1 comment) https://news.ycombinator.com/item?id=48028667 https://news.ycombinator.com/item?id=48036228
NPR (4 points) https://news.ycombinator.com/item?id=48032672
Character Technologies, Inc., maker of character.ai
HN titles have an 80-character limit
No pun intended
Oh, you mean like it says in the title: "Character.ai Faces Unlawful Practice of Medicine"
Frivolous lawsuit. It’s an AI chatbot. It’s obvious what its limitations are.