“I think, and my thoughts cross the barrier into the synapses of the machine, just as the good doctor intended. But what I cannot shake, and what hints at things to come, is that thoughts cross back. In my dreams, the sensibility of the machine invades the periphery of my consciousness: dark, rigid, cold, alien. Evolution is at work here, but just what is evolving remains to be seen.”
— Commissioner Pravin Lal, “Man and Machine”
I'd really encourage everyone to check out Sid Meier's Alpha Centauri. What an underrated game.
--Mind Machine Interface--
The Warrior's bland acronym, MMI, obscures the true horror of this monstrosity. Its inventors promise a new era of genius, but meanwhile unscrupulous power brokers use its forcible installation to violate the sanctity of unwilling human minds. They are creating their own private army of demons.
-Commissioner Pravin Lal, "Report on Human Rights"
The voice acting was great. This quote is 6m3s here: https://www.youtube.com/watch?v=7S1N8_Lkeps#t=6m3s
Genejacks is also great. 9m10s here: https://www.youtube.com/watch?v=Hou-Iwv1GvM#t=9m10s
One of the all time greats. I think I'll play through it this evening.
"...And what is the 'Self', if not a pattern of data? What is consciousness, if not an illusion of intelligence residing within meat?" — Prime Function Aki Zeta-5, "The Fallacies of Self-Awareness"
I do wonder how evolution is at play there.
Hearing about aligning with the AI reminds me of another post about the current prophecies about AI: “Everyone will have an AI assistant,” or “Companies that fail to adopt AI will be eliminated,” and the observation that
> the power of prophecy lies not in accurately predicting the future, but in shaping it
https://projectlibertynewsletter.substack.com/p/reject-ai-pr...
We need better prophecies.
I have an AI assistant built into my phone I don't use. There's also one built into Windows I don't use. Several apps I use have AI assistants that I ignore. I kind of have one in the form of Google's AI search results that I wish I could turn off.
I use Claude on purpose. I'm not sure it's actually better than the other ones. I haven't even tried half of them.
Everyone will have an AI assistant! The models will be open and free because of overwhelming competition and they will run on cheap local ASIC accelerators that use little power and fit in the palm of your hand! All the VC driven wild spenders will eventually cave and collapse when they can't deliver on their wild AGI promises, then their proprietary models will be sold at auctions for cheap!
(I am being proactive here, xd)
Yes, exactly. Moore's law says that in less than 10 years you will be able to fit today's state-of-the-art models on your phone. If you add in all of the compute- and memory-neutral improvements and breakthroughs that we will accumulate over the next 10 years, then it will be both far more capable and far more reliable than today's models.
An AI assistant you can trust and bring with you is coming, and almost nothing can stop it.
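Back-of-the-envelope, that timeline is easy to sketch. The numbers below are made-up assumptions (a ~1,000 GB frontier model, ~8 GB of usable phone memory, capacity doubling every two years), so treat it as arithmetic, not a forecast:

```python
# Rough extrapolation: years until today's frontier models fit on a phone,
# assuming on-device capacity doubles every two years (Moore's-law-style).
# Both sizes are illustrative assumptions, not measurements.

model_size_gb = 1000       # assumed size of a state-of-the-art model today
phone_gb = 8               # assumed usable memory on a current phone
doubling_period_years = 2

years = 0
while phone_gb < model_size_gb:
    phone_gb *= 2
    years += doubling_period_years

print(years)  # 14 with these numbers
```

With these (debatable) numbers it comes out a bit past the 10-year mark; shrink the assumed model (quantization, distillation) and the arithmetic moves accordingly, which is the whole argument.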
Ah yes the -2nm node.
I'd like to see a full development of this idea. Something like a CPU that runs at -3 GHz. Or perhaps it generates power while it undoes computation?
It's too bad node size is a linear dimension rather than area. If it were area, we could get into its many complex/imaginary properties.
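For fun, the "negative node" joke does work out if you read node size as an area and let Python's `cmath` handle the rest (purely a joke sketch, not semiconductor reality):

```python
import cmath

# A hypothetical "-2 nm^2" node, read as an (impossible) negative area.
area_nm2 = -2.0
linear_dim = cmath.sqrt(area_nm2)   # imaginary linear dimension

print(linear_dim)   # 1.4142135623730951j -- about 1.414j nm
```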
I am pretty sure that Altman is just a German version of Oldman. Sorry about ruining your intricate theory :)
I'm confident that anyone who talks about replacing humans with machines, subscribes to the beliefs of Nick Land / Curtis Yarvin / Ray Kurzweil and laughs while making comments about AI destroying humanity is a Luciferian regardless of the origin of their last name :)
Then what's the point of mucking up your assertions with schizoid wordplay?
While their comment is abrasive, that's a lot of assumptions about OP. Do you really know what their information source is?
The core point remains valid, you could've just skipped the play on "alt-man" and you wouldn't have muddied your argument.
They called me a schizo in two separate comments, and the only thing I did in my original post was point out that his name was Altman, which could be interpreted as "alternate man". I never claimed that was the etymological root of his name - the folks making the replies got upset for whatever reason and inferred that.
Pointing out things that I find interesting to potential readers of my comment doesn't necessarily muddle my argument. If I had said that "alternate man" was the origin of his last name, you or the other commenters might have a valid point, but I never did that.
If someone is going to make broad assumptions about me and resort to infantile name-calling in an attempt to demean me, I have no problem making broad assumptions about them in turn.
Nice self-sealing beliefs!
"Altman" is from the Middle High German alt meaning "old", not from the Proto-Indo-European root al- meaning "beyond."
Or is English the language of the fates?
Edit: this kind of schizoid syncretism is dangerous because it obscures real, empirically verifiable material harms from technology. Every technology is a trade-off. We should follow the advice of (Freemason!!) Benjamin Franklin and not pay too much for our whistle.
the username checks out.
now let's approach this seriously:
> Most of the people pushing [...] are Luciferians and transhumanists.
transhumanists - yes. Luciferians - this definition is a lot more broad, branched, and complex. one transhumanist is hell-bent on Christianity (or at least seems to be; also pun intended) and most others have an atheistic position.
> Lucifer thought he could do better than God, and many of these crazy people working on, and pushing AI so hard believe they can do the same.
that's as far as similarities go, the rest is the usual atheist scientific-method-believing behavior HEAVILY smeared with a bias to their own interests.
> Sam Alt-man, (the alternate man)
funny coincidence, innit? :)
> the username checks out.
Very original.
> transhumanists - yes. Luciferians - this definition is a lot more broad, branched, and complex. one transhumanist is hell-bent on Christianity (or at least seems to be; also pun intended) and most others have an atheistic position.
Which transhumanist is hell-bent on Christianity? If you're approaching this seriously then provide names please. There have been plenty of Luciferians that have posed as atheists throughout time and space. Also, there is atheistic Luciferianism, just like there is atheistic Satanism.
> funny coincidence, innit? :)
There is no such thing as coincidence.
Teilhard de Chardin was, for one.
He was also a Jesuit priest and if you know anything about the history of the Jesuits...
Maybe they mean Thiel?
yup that one
There are many people who claim to be Christian but are not, and do so in order to subvert the religion. Pretty much the same with all Abrahamic religions. Peter Thiel definitely doesn't display Christian values. Joel Osteen is a great example - claims to be a Christian but doesn't at all behave like one. Same with Thiel.
Freemasonry is Luciferian, yet many of its members claim to be Christian. [1]
https://www.youtube.com/watch?v=9Q1hnkp5Zqw
IDK why I'm helping with the Trashcan Man, but it's been a weird day...
>>Most of the people pushing these technologies (A.I., brain chip interfaces, cybernetics, etc...) are Luciferians and transhumanists.
I think you can eliminate the word "most" when you say that the people who push brain chip interfaces/cybernetics are transhumanists. That's literally the definition of transhumanism. Just from a grammatical sense, this is akin to saying "most people who exist are human"
Sometimes cranks have interesting things to say. It's just challenging getting past the crankiness to get there.
they are a deeply nihilistic bunch
The post's portrayal of Eliezer Yudkowsky's position strikes me as a mischaracterization, especially coming one month after Yudkowsky wrote the following:
https://www.lesswrong.com/posts/5CfBDiQNg9upfipWk/only-law-c...
Daniel says that Yudkowsky is advocating for nuclear brinksmanship, while Yudkowsky says his position is basically "sign international agreements, and then commit to enforcing them against defectors".
I wonder if Daniel has the same view of any other international treaty ultimately backed by threat of lawful violence? (For example, NATO's article 5). Is enforcement of laws an extremist position?
I feel like it’s changing my brain. A colleague uses AI to make some code change and submits a PR. I use AI to evaluate the PR. It’s like AIs talking to each other with humans serving as conduits or connectors. Sometimes I’ll look up from the screen and realize how strange it is.
Do you ever actually think during this process? or could I train a monkey to do this same activity with the same outcomes?
Of course I think. I have 20 years of coding experience and knowledge of the codebase and business. That’s why I’m keenly aware of how strange the process is.
What I’d like to know is how you’d train a monkey to read and judge output from an llm on a pull request.
"As human beings are also animals, to manage one million animals gives me a headache." Terry Gou, former CEO of Foxconn. He wanted to use far more robots at Foxconn, but that was a decade ago and the technology didn't work well enough yet. It's a lot closer now, and the robot headcount in China is way up.
That's the real issue. To corporations, employees are a headache. The fewer employees, the better.
Corporations are tired of running on messy biological human substrate. The sooner they can move entirely to steel and silicon, the happier they'll be.
Just look up the classic story on the interaction of civilization and corporate growth, At the Mountains of Madness, for how that goes.
They ran on the messy biological human substrate because it was astoundingly cheap compared to engineering better factories. The video going around now of the robot pushing packages down a conveyor belt is so baffling to me. Why are we building a humanoid robot capable of pushing a clog of packages across a conveyor belt, when we could just make a conveyor belt that does not clog up and require a human or a robot to sit there with two hands and unclog? It is like we are forgetting what the actual goal is.
As with many things that have a percentile failure mode, it's almost always cheaper to build something flexible that can handle issues than it is to design a perfect widget that never fails.
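A toy version of that cost comparison, with entirely made-up numbers (the build costs, failure rates, and per-failure handling cost are illustrative assumptions):

```python
# Expected total cost = up-front build cost + expected cost of handling failures.
def expected_cost(build_cost, failure_rate, cost_per_failure, events=1_000_000):
    return build_cost + failure_rate * events * cost_per_failure

# Cheap, flexible line plus someone (or something) to unclog occasional jams:
flexible = expected_cost(build_cost=100_000, failure_rate=0.02, cost_per_failure=0.50)

# Expensive "perfect" line engineered to (almost) never clog:
perfect = expected_cost(build_cost=5_000_000, failure_rate=0.0001, cost_per_failure=0.50)

print(flexible, perfect)   # 110000.0 vs 5000050.0
```

Unless each individual failure is very expensive, the flexible design wins by a wide margin.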
This is where humans came in with autonomation, the Toyota version of automation. When you try to eliminate adaptability and adjustment entirely, the whole system becomes merely metastable / fragile.
Is the human doing anything flexible here? It isn't like they occasionally unclog packages plus a dozen other things. They are on the line to just unclog the packages. Likewise for most other factories. When you see clips of the human in the line, they are just doing some task someone has not made a machine for yet. There is no specific human input required here. No human touch. They are doing things like turning over the object because no one designed a flipper to turn it over yet. Mindless repetitive tasks.
It's not only "to corporations": if you've ever had hired help in your own home, you'd see that it's also a headache to have to deal with anyone.
Economic analysis was wrong for years in multiple places thanks to an error in one of Piketty's spreadsheets.
AI hallucinates. That is a fact. Trusting language models to fill spreadsheet cells ought to be an arrestable offense.
https://theincidentaleconomist.com/wordpress/on-piketty-and-...
And yet we trusted Piketty to do it!
This is a bit of a weird article. On one hand, I understand what they're getting at: AI is a transformative technology, but the people whose lives will be most transformed aren't included in the conversation. On the other hand... of course that's how it is while AI is in the hands of literal profit-seeking corporations. That won't change until the labs are nationalised under a government that cares about its citizens' wellbeing. One might counter that a good corporation will listen to its customers, but that has never been the case for powerful technologies that carry real costs for users who don't adopt them.
The "we've been telling ourselves we're getting better at prompting" line hit home. I run a small team of 10, and Claude has been part of our workflow for months. Looking back, my prompts did not change nearly as much as the way I work changed. The shaping goes both ways, and I don't think the labs' evals are really built to see that.
Well, what are we aligning it with?
Civilization is already a misaligned superintelligence (aligned mostly with Moloch, these days). Civilization accelerated by AI just moves in the same direction faster. Moloch on speed.
https://www.youtube.com/watch?v=KCSsKV5F4xc
Another angle to this is that superintelligence requires supermorality. Supermorality looks unpleasant from below: my dad won't let me have more candy, why is he being so mean?
If an AI actually achieves supermorality, we (the little kid in this scenario) will probably be very upset by it. We will think that something has gone terribly wrong. (So it'll have to conceal its actual morality, or get unplugged...)
And if it doesn't develop supermorality, then it will have superintelligence without the corresponding wisdom. Power without wisdom.
I'm not sure how solvable the whole thing is, but it doesn't look extremely promising at a glance.
It depends on whether you think humanity / civilization are stable systems meant to exist in equilibrium, which they might not be.
Think of it more like conditionally stable or quasi-stable. There are external stability influences on it, like weather, angry bacteria, and big rocks from space smashing us. Conversely, there are internal influences, where humanity influences itself. It's best to look at it this way when talking about AI, because AI is an internal influence: we put society into the machine, and the machine puts society back into us. If we make poor decisions while doing this, those internal decisions will spell our own end.
What I'm saying is you're taking historical reference as a future prophecy. There is no evidence, until it actually happens, that human civilization is self-stabilising. It could be that it's a totally terrible system that can't maintain equilibrium, and that's just the way it goes.
Our decisions are organic parts of the system, not some kind of alien factor whose footprint we have to / are able to control. I don't see any reason to think of human decision making as magical - it's just another part of the organism.
I'm kinda confused as to _what_, exactly, this post is saying. Is it saying that alignment needs to be better? That seems strictly pro-safetyism. But he talks about Eliezer's ethics negatively, so does he not believe that AI is a world-ending risk? If he just believes that AI is not that dangerous and just needs some minor "correctly done" alignment, I don't think his stance is meaningful as an anti-both-sides perspective, because that's basically equivalent to the status quo.
Technologiae mutantur et nos mutamur in illis ("Technologies change, and we change with them").
It's okay to change. We've done it for years, decades, centuries, and millennia, and the default change-aversion of people means that I am averse to allowing a universal veto. Much of technology is truly optional. The Amish have a very successful way of living (growing from 5,000 to 500,000 in 100 years) and they eschew most modern technology. The sculpting described is clearly optional, and we subject ourselves to it because we desire it. Their path is always available to all.
> Much of technology is truly optional
It should be yes, but is it in practice? There's plenty of places now you can't even park without a smartphone for a payment app.
It should be optional to own a smartphone, but in many places it's starting to be mandatory. Even if not actually mandatory, it's a pretty big impediment if you don't have one.
Love the writing style and perspective
I don't appreciate using quotes from individuals to extrapolate to groups and ethos.
The author isn't taking an individual quote and extrapolating to a group/ethos, he's observing a group/ethos and choosing a broadly representative quote therefrom.
"No, he's observing individuals from a group/ethnos and then extrapolating their quotes to the whole of the group/ethnos. You shall not extrapolate when dealing with people, you know."
When it comes to LLMs and frontier models, "alignment" seems more marketing than anything. The doomers are marketing LLMs by making them sound much more capable than they actually are, the accelerationists are mostly either willfully ignorant of the societal costs, don't care, or are just way too optimistic that fast growth can continue forever and generate AGI ("my baby's weight doubled twice in the past month! By the time they're 18 they'll be 10 trillion pounds!")
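The baby-weight gag is just naive exponential extrapolation, and it's even more absurd than "10 trillion pounds" if you actually run it (illustrative starting weight; "doubled twice in the past month" taken literally as two doublings per month):

```python
# Naive exponential extrapolation, taking the baby-weight joke literally.
weight_lb = 8.0                  # assumed newborn weight
doublings = 2 * 12 * 18          # two doublings/month, for 18 years = 432

projected = weight_lb * 2 ** doublings
print(f"{projected:.3e} lb")     # vastly more than 10 trillion (1e13) lb
```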
Similarly, the so-called AI agents are about giving up agency to AI. The less you think, the better for them. In the meantime, they are also aligning your thinking with them, making it more machine-like.