The author gives this example of the problem and incorrect way to leverage AI:
"Sarah was relieved. She thought she could focus on high-value synthesis work. She’d take the agent’s output and refine it, add strategic insights, make it client-ready."
Then they propose a long-winded solution which is essentially the exact same thing, but uses the magical term "orchestrate" a few times to make it sound different.
Well, the article was written by AI, so I wouldn't expect it to make valid arguments through a long article like this.
In fairness to the author, I think their point was that you take _several_ agents (not just one) and find a way to have them work like a team of 20 people. In the example, Sarah is trying to do the same job she did before, just marginally better.
Yeah, I guess that's accurate, but they also explained that AI capabilities advance every 6-12 months and that managing a team of agents only buys you a few years. So their proposed solution, and the conclusion that it keeps you safe for years, makes no sense right now. Multi-agent orchestration, with an agent doing the orchestrating, is all the rage nowadays.
They made half the point, in my opinion - that you should be "doing the thing that wasn't possible before" but missed the other half - that maybe the thing you should be doing is owning and creating relationships with customers yourself instead of doing it through a company... Which maybe wasn't possible before but is now.
I agree. But the article then seems to suggest, 'you be the one left standing to orchestrate'. It didn't offer much of a suggestion about the other 20 people that would be gone.
It seemed to come down to the old 'just work better, faster, cheaper', but dialed up to 11 now.
I read it more as "look for the thing that was _never done_ because no one was going to hire 20 people to do it" and all the examples were pointing out how you _should not_ try to "better, faster, cheaper" AI because you will lose quickly on all those dimensions.
I realize the irony, of course, that this article is AI-generated but it provoked something close to an epiphany for me even so.
> add strategic insights
This claim has always been BS in my experience.
God damn it. Can people write interesting articles in a NORMAL writing style nowadays? Why is everyone writing in these stupid short "punchline" sentences?
Seriously. This is trash. It presents no evidence, contains no original ideas, it’s just written—excuse me, generated—to be as provocative as possible.
I think I’ll just start flagging these. They’re just a new kind of spam.
It's exhausting to read.
I can't quite put my finger on it; obviously the "it's not this. It's that." is part of it, but even without the obvious tells that the writing was AI-generated/improved, it's just so tiring to read?
Maybe a linguist can chime in why all these texts are so samey, cloying and annoying to read? Is it (just) the pacing?
It reminds me of the "overenthusiastic youtuber" presentation style, with jump cuts etc., just in written form. From its prevalence I can only assume that some audiences prefer it - I'd be more interested to know why that is.
or the "reels" equivalent of an article
I wonder if part of it is that we're mentally trying to extract the actual meaning and thoughts from it. It's inflated, like trying to read a bad student's essay that's padding for word count. I wish people would just post their prompts directly.
Because it's not people doing the writing.
Because it’s a lead generation machine
That's ChatGPT for you:
> The ... isn’t just ... . It’s ... .
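As a toy illustration of that tell, here is a crude regex heuristic (my own sketch, not a real detector; a stylistic pattern match like this will produce plenty of false positives on human writing):

```python
import re

# Crude heuristic for the "isn't just X. It's Y." construction.
# Matches both straight and curly apostrophes; a stylistic tell
# at best, not a reliable AI detector.
PATTERN = re.compile(
    r"\b(?:isn['’]t|is\s+not)\s+(?:just|only|merely)\b"  # "isn't just ..."
    r"[^.!?]*[.!?]\s+"                                   # first clause ends
    r"It['’]s\b",                                        # "It's ..." punchline
    re.IGNORECASE,
)

def punchline_count(text: str) -> int:
    """Count 'isn't just ... It's ...' constructions in the text."""
    return len(PATTERN.findall(text))

print(punchline_count("The shift isn't just faster output. It's a new job."))  # 1
print(punchline_count("A plain sentence with no punchline."))                  # 0
```

Counting constructions like this per thousand words would be one cheap signal among many; on its own it proves nothing about authorship.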
This is just myth and faith. Even if all AI wrote like that, it doesn't follow that all writing in that style is by AI, hence the belief in the style. Focus less on the aesthetics, more on the message. After all, having this article written in the form of a sonnet was just a prompt away.
These people do not put in any effort and go for defaults. I see this with images too.
Because they aren't writing, it's vibe blogging or whatever
All of a sudden everybody is a writing style critic. The only question that is pertinent is if the message of the post is relevant.
Think about it. You wouldn't give someone crap for writing in broken English because there are many really smart people that are non English speakers. So why are we giving crap for people using AI to write better posts? If the idea is relevant, what's the point in criticizing the style?
A fair question would be "is the idea in the post actually the writer's or was it entirely done by AI"? However how can one actually tell if the idea, not the style is original? You can't. So it's pointless to be angry about style. Focus on the message.
No. I have a lot of respect for people who write in a second language. You can often tell because the content is thoughtful and some of the word choices or grammar is quirky.
This is a bozo who prompted the machine for a viral essay. He did not write anything. He does not know anything.
Do you know this for a fact? How? Honest question.
If I’m wrong I’ll eat my hat.
But I’ve spent enough time with these tools, and coaching people on writing over the years, to recognize the extremely low signal-to-noise ratio and the prompted style instructions. I’m equally confident the gentleman in my spam folder is not a Nigerian prince.
> So why are we giving crap for people using AI to write better posts?
Pretty sure the point is it's making the post worse, not better.
"is the idea in the post actually the writer's or was it entirely done by AI"
Look. If you don't want your readers to worry that your hard-written article is AI slop - just don't run it through the slopifier. Or at the very least, spend 5 minutes tweaking the output.
If you can't be assed to do that, then it's very likely that you don't have valuable insights to share.
> You saw a colleague generate something 80% as good in four minutes using an AI agent. Maybe 90% as good if you’re being honest.
Wish this were realistic - I'd have enjoyed the read more.
It’s possible the author is such a bad writer that this is really 90% as good as they get.
Actually that’s probably the only way anyone would publish this without being embarrassed.
The only way to win is to already have enough money that you don't need a job.
Or get a physical job AI can't do. But all of those are commodities and pay shit wages.
In my situation the next best alternative is to end it all. I wont go back to wage slavery fuck ass companies not respecting me nor my time.
You can make other drastic decisions first if things seem that bad. Upend your life, move to a really cheap part of your country, change to a purely physical labor job or a self run business or something. All of those are less permanent and have a better chance of working out?
All of this involves an amount of effort I'm not willing to go through every few months because corporations keep letting people go left and right. Once you land a job and manage to save up a little, all of it goes down the drain for your next move, and the cycle repeats. All that while tech bros promise an AI utopia.
Find a non-tech job is what I mean. Work for a small-town library or the government or something more stable; get out of AI-impacted fields as much as you can. Maybe risky, but less risky than giving up?
Please consider reaching out to get some help.
AI is a glorified search engine, it can’t tell what solutions an org actually needs or how to plan & execute them safely.
Yes, the problem is that many corporate resources cannot differentiate their roles from that of a glorified search engine. In fact, some experts on the human mind cannot effectively differentiate the human experience from that of a glorified search engine.
That’s just a matter of context management right?
We're in a recession, it's the economy that's shrinking, not my job.
What also seems to be shrinking is the awareness that the presence of LLMs does not obviate the need to understand the world.
Sleepwalking into Idiocracy x Waterworld while dreaming of Star Trek...
It’s a recession being masked by two things: inflation making nominal numbers not go down, and the fact that stock markets have fully decoupled from the actual economy. They are just casinos now.
Work that AI can largely accelerate seems pointless to me in the current situation. It’s fairly clear to me that this will soon lead to an oversupply of workers and a drop in wages. Wages may even fall to a level where pursuing those professions no longer makes sense at all. Unfortunately, it seems like software engineering will be one of those professions.
It seems inevitable that the wages for a software engineer will approach minimum wage. The role would then have transformed into a "requirements specification technician", which is something that a PM might even do directly.
>this will soon lead to an oversupply of workers and a drop in wages
Precisely. So why are our masters still panicking about population decline and hyping the need for immigration?
AI slop, obviously.
The fundamental problem is not unlike what happened in the industrial revolution: we are suddenly much more productive as a society, how do we distribute that productivity?
A sane society would use a tool like the monetary supply to do so: money is a public good (it exists because we say it does) and thus should be managed for the public good. People should be able to work less while having a higher living standard, which is easily achievable given our almost comical productivity.
Because we've privatized money creation in the form of credit monopolies, this obvious mechanism isn't available, so it seems like we will end up with either short term crushing poverty followed by bloody revolution or the techno-feudalist utopia-for-the-few.
> Last week, you spent three hours writing a campaign brief. You saw a colleague generate something 80% as good in four minutes using an AI agent. Maybe 90% as good if you’re being honest.
No it's like 60% as good, but management and other "AI for brains" people can't see it.
If that's the case, then business results should get worse, and management should notice this. If business results don't get worse, then either 1) it's actually more than 60% as good, or 2) it doesn't matter to the business's bottom line that the result is only 60% as good instead of 80% as good, and management made the right decision.
>business results should get worse, and management should notice
This is a common oversimplification that results in an enormous amount of waste and bad products/services. Lots of causes and effects are too disconnected to see, or too hard to measure. In addition to looking at metrics, good business leadership must also act like a human (which is a depressing thing to have to say): use common sense; like good things; dislike bad things.
recent Windows 11 update?
I need to stop using the metric of "if a hacker news post has a lot of comments, then the article is worth a read" and instead read the comments first.
Lately, there have been many controversial articles (with a lot of comments) that were most likely written by AI, and I regret wasting my time on them. Sigh, is there a Hacker News replacement with higher-quality articles that I don't know about? I imagine all platforms are inundated with slop now.
Me too. I ended up reading the article first and then the comments. It's definitely AI slop; can I have my ten minutes back?
This belongs on reddit. Not HN.
Users need to stop shaming Reddit. It doesn't belong on Reddit either.
The key line: "One critical caveat: this won’t work forever in its current form. Eventually, agents will get better at orchestration too. But it buys you three to five years. And in that time, you’ll see the next evolution coming."
The suggestion does sound a bit like 'work faster'.
Don't just work faster, but yes, work faster.
I tried to take the headline seriously, but reading deeper into the piece, it's just a semantic trick: the current job is "shrinking" and you need to find a new, better job to do instead. That's basically what disappearing would imply anyway?
Also very funny to use an AI to write this kind of article. I wonder how they feel about their own job writing blog posts shrinking.
AI-written article, as a heads up.
I feel this comment can apply to 80% of the content that gets posted anymore.
We need a plugin to automatically detect AI posts; I'm basically skipping most links now because so much of it is generated word soup.
My job as a software engineer is not going anywhere. It’s only getting more interesting. Nice vibe blog / fear mongering piece.
The advice is: identify human constraints and remove them with agents.
Yet another simple, stupid idea inflated into a massive article with AI.
AI slop describing how AI slop is affecting your job. Cool.