AI is unlikely to make people like me, or most already established professionals, lazy. But it absolutely could affect younger people who are still learning and building fundamentals.
That raises a pretty serious question about regulating AI usage in education, and it’s surprising how little attention that discussion still gets.
Speaking for myself, I feel a difference if I stop using AI for a week and just rely on regular web searches. And I have a fair amount of professional experience.
Also, speaking for academia, AI is basically all we talk about now when it comes to curriculum and instruction. That's not to say that we only rely on AI, or something like that, but we talk a lot about how to get basically anything done now. It's the biggest learning experience we've ever had as instructors, and I suspect we'll be trying to figure it out for a long time to come.
Boy, everyone is stupid except me.
It doesn't really read that way; it's more that we should be aware of how using LLMs could affect a child in an educational environment.
most already established professionals are lazy
>AI is unlikely to make people like me, or most already established professionals, lazy.
Lol, that's already a lazy take.
“Wow, everywhere I go everyone smells like they stepped in dog poop”
We haven't even discussed as a society what might go wrong with LLMs, and we're already seeing what is going wrong. That's how hard we failed as a society.
What does "discuss as a society" mean? Pass regulations? Religious doctrine? Warfare?
I think that is the role speculative fiction, like sci-fi, takes.
As Asimov and Roddenberry envisioned it, yes. Certainly not as the drivel that carries the sci-fi label today.
Maybe not sneaking huge concessions to AI in omnibus bills would be a start.
Not getting teachers in trouble when they can clearly tell their students are submitting AI essays.
But we're still just letting kids use their phone in class, and our lawmakers are just learning what Facebook is. AI is going to "happen" to us, we are not serious enough to discuss it.
This is true, and we should never get to the point where all we are doing is prompting. Thinking and analyzing are core human functions, and we should not dig ourselves into being dumb and stupid.
How do you prompt without thinking and analyzing?
Do the bare minimum of both to get something out of the AI.
Help me think through and analyze this [ctrl-v]
This is where critical thinking needs to be a skill taught and learned. Too many people take information at face value. An AI that's 80-90% right is a great way to get duped, like the author at the end.
In my country we have a literal course that is called "critical thinking". I think it's one of the reasons I got my degree.
How will companies like GOOG monetize AI from the unwashed masses? Where are the ads?
10 minutes? I find that a ridiculous statement.
For people who have been programming for 20 years, AI is not an issue; on the contrary, it can work for you as you get older. They have trained enough.
For young people, it can destroy them if they stop practicing for hours every day and outsource all their work to the AI, as they will start losing contact with the reality they are controlling.
But that depends on the person. One of the best uses of LLMs is creating trainers for practicing whatever you need. For example, I have created programs for practicing my Japanese and Mandarin pronunciation, for practicing chord and note recognition with MIDI, for playing sheet music, for practicing IPA, my handwriting...
Usually I buy some software that does more or less what I want, but then I realise that I need a specific feature the software does not have. So I create the trainer myself, personalized to my needs.
It would be impossible for me to do that without LLMs. There is not enough time in five lifetimes to do it all by hand.
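To make it concrete: the skeleton of one of these trainers is tiny. Here's a minimal sketch of a note-recognition drill in Python, assuming a MIDI keyboard and the mido library (the ones I actually use are much more elaborate):

```python
# Minimal note-recognition drill. Assumes a MIDI keyboard is connected
# and mido is installed with a backend (e.g. python-rtmidi).
import random
import mido

NAMES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

def note_name(n):
    # MIDI note number to name: 60 -> 'C4'
    return NAMES[n % 12] + str(n // 12 - 1)

with mido.open_input() as port:  # opens the default MIDI input port
    while True:
        target = random.randint(48, 72)  # drill the C3..C5 range
        print("Play:", note_name(target))
        while True:
            msg = port.receive()  # blocks until a MIDI message arrives
            if msg.type == 'note_on' and msg.velocity > 0:
                if msg.note == target:
                    print("Correct!")
                    break
                print("That was", note_name(msg.note), "- try again.")
```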
I think it depends on how you use it. Recently I spent several days using Opus Extended for help on a simple mathematical model. I came up with the modeling ideas and some hypotheses, it did the math and worked through implications. Then I carefully read through everything it wrote, made sure it hadn't misinterpreted what I'd said (which it sometimes did), and checked all the math. Occasionally it used math I didn't understand, so I opened another chat to teach me that math.
In one way it was sycophantic, frequently saying how "sharp" my ideas were, but I just reminded myself to discount that. In other ways it pushed back. If the math didn't work out to the conclusions I expected, it pointed that out. It was like working with a reasonably smart, extremely productive, but less creative coworker.
I don't think it made me lazier, because it was intense and exhausting. But I learned a lot.
Which models were you using? Local inference or cloud?
Local inference with Claude Opus, now that would be something. :)
There are a few people out there with AI psychosis who thought they had developed a new mathematical theorem.
Definitely making us lazier. Not sure on dumber; depends on how you use it. Don't blindly use the output, try to learn from it instead.
I’m so happy that AI is making people lazier because that means I should have that much of an easier time getting ahead
Or when they get sick, or hurt, or...
There’s a difference between getting sick and self-lobotomizing
Mental laziness is effectively dumbness.
I agree. A month back, I was brainstorming ideas with AI a lot, and it took me a while to figure this out: I was essentially outsourcing my thinking. Honestly, I felt extremely dumb after the sessions and kept blaming myself for becoming dumb, but I stopped using AI, went back to my non-AI routine, and ideas started popping back up.
https://archive.ph/IKojW
Does that mean that, by extension, becoming a manager makes you stupid?
For subjects I care about, I will hammer the AI with questions, ask for references, proofs, explanations, code samples, book recommendations until I'm satisfied with the answer and I can explain it myself.
For things I don't care about... meh. I wouldn't have done any research without AI anyway, and I'm probably getting a more nuanced or grounded answer than whatever someone who thinks they know might confidently explain to me.
I'd argue getting your answers from social media is orders of magnitude worse, and the time to teach critical thinking skills was a few decades ago.
Soon companies will test newcomers to see if they can work without AI.
Bro, I started doing my balcony self-watering project thanks to AI. It is helping me to choose parts which I'll later solder and program. But ofc people use AI for different things.
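To give an idea, the control logic I'm aiming for is only a few lines of MicroPython. Rough sketch below; the pin numbers and the moisture threshold are placeholders for my build, not tested values:

```python
# MicroPython sketch of the idea (ESP32). Pin numbers and threshold
# are placeholders; calibrate for your own sensor and soil.
from machine import ADC, Pin
import time

sensor = ADC(Pin(34))          # capacitive soil-moisture probe on GPIO34
sensor.atten(ADC.ATTN_11DB)    # read the full 0-3.3 V range
pump = Pin(25, Pin.OUT)        # MOSFET/relay driving the pump

DRY_THRESHOLD = 2500           # raw ADC value; higher = drier on these probes

while True:
    if sensor.read() > DRY_THRESHOLD:
        pump.on()
        time.sleep(5)          # short burst, then let the water soak in
        pump.off()
    time.sleep(600)            # re-check every 10 minutes
```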
AI appears to cause more brain damage than street drugs. Where is Nancy Reagan when you need her?
It sucks that these days, being human requires effort. You have to force yourself to go out and socialize or do things by hand and willingly reject easier alternatives. Unlike the old days where it felt effortless because there was nothing to reject. You were forced to just get out there and do stuff the hard way because it was the only way, and you didn’t think it was hard, because that’s just the way it was. And by just living life you naturally became smarter and stronger.
Now it’s easy to just kick back and scroll a phone and not think too deep or work too hard on anything. You don’t learn. This is the default state, and it’s not human; it’s just being a living vegetable that other people squeeze for money. And the squeeze gets tighter and tighter with time because you become increasingly worthless, until one day even the juice is no longer worth the squeeze, so you’re left to rot.
This entire study hinges on overplaying the fact that these people became used to AI doing the work for them and didn't feel the need to try. That has nothing to do with actually being dumber.
Did advanced machining operators who used to perform all machine setup and operation in an analog way themselves become lazy and dumb when (computer) numerical control took over? Are modern machinists dumb? Or are they just smart in a different domain now that the positioning is actually taken care of? Does showing that the machinists don't want to, and would choose to, skip physically positioning the machine tools every step of the way mean they are lazy?
It's like asking people to dig a hole and giving some of them a shovel while asking others to do it with their hands. Or asking them to go to a location half a mile away with a bike and then taking it away the next time. They're not going to be real enthused after experiencing mechanical advantage.
Anyone who thinks that using AI for just 10 minutes has a real and lasting effect on intelligence might need to go back and do a little more training of their own. Significantly more than 10 minutes of training, because 10 minutes will do nothing.
Exactly, just like using Google instead of going to the local library, using GPS instead of paper maps, or saving phone numbers as contacts in your phone instead of remembering them.
The more tech we invent and use, the dumber and lazier we get.
This is different, as you are wholesale giving over your thinking to an AI.
No they're not. Using GPS and blindly following what it says is "wholesale giving over your navigation to GPS". Writing things down is "wholesale giving over your memory to paper".
AI chatbots do not have agency, they are not actively trying to take over your thinking. People can prompt them to do their thinking for them, or they can prompt to get examples and help with understanding.
Are you writing a book with GPS? Are you creating a program with GPS? Are you creating music with GPS?
You need to think to do this; you don't need to with AI.
Isn't that a strawman? There's a significant difference between "using AI" and "wholesale giving over your thinking to an AI". You won't find anyone arguing against the second.