> The way work gets done has changed, and enterprises are starting to feel it in big ways.
Why do they say all of this fluff when everyone knows it’s not exactly true yet? It just makes me cynical about the rest.
When can we say we have enough AI? Even for enterprise? I would guess that for the majority of power users you could stop now and people would be generally okay with it, maybe pushing further only into medical research or other things that are actually important.
For Sam Altman and microslop, though, it seems to be a numbers game: just get everyone in and own everything. It’s not even about AGI anymore, I feel.
For classic engineering it's been a boon. This is in a pretty similar vein to the gains mathematicians have been making with AI.
These models can pretty reliably bang out what were once long mathematical solves for hypothetical systems in incredibly short periods of time. They also let you do second- and third-order approximations far more easily. What was a first-order approach that would take a day is now a second-order approach taking an hour (a toy sketch follows this comment).
And to top it off, they're also pretty damn competent in at least pointing you in the right direction (if nothing else) for getting information about adjacent areas you need to understand.
I've been doing an electro-optical project recently as an electronics guy, and LLMs have been infinitely useful in helping with the optics portion (on top of the speedup on the electronics math).
It's still "trust, but verify" for sure, but damn, it's powerful.
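To make that second-order claim concrete, here is a toy sympy sketch of the kind of symbolic expansion being described; the system response V(x) is entirely made up for illustration and is not the commenter's actual workflow:

```python
import sympy as sp

# Toy second-order expansion of a hypothetical nonlinear response V(x)
# about x = 0 -- the kind of symbolic grind that used to eat a day by hand.
x, a, b = sp.symbols("x a b")
V = a * sp.sin(x) + b * x * sp.exp(-x)  # made-up system response

second_order = sp.series(V, x, 0, 3).removeO()
print(second_order)  # -> a*x + b*x - b*x**2 (term order may differ)
```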
If the AI is pointing you in a direction, how much creativity is lost when the mathematician no longer does that themselves?
I genuinely feel AI makes the ability to come up with approaches worse in software dev.
I probe Claude on occasion when I don't feel like looking up source documentation. A couple weeks ago, I was interrogating it about a system that happened to use Kafka. Last night, I was asking it to propose an entirely different solution and it kept trying to shoehorn Kafka into it. I asked it why and proposed a simpler alternative. I was absolutely right! Claude was just very eager to demonstrate familiarity with other systems I was working on!
I shudder to think of the things people will wind up shipping blindly accepting AI guidance.
Reminds me of this quote:
To presume to point a man to the right and ultimate goal — to point with a trembling finger in the RIGHT direction is something only a fool would take upon himself.
- Hunter S. Thompson
> when everyone knows it’s not exactly true yet
I think two things:
1. Not everyone knows.
2. As we've seen at a national scale: if you just lie lie lie enough it starts being treated like the truth.
I'm agnostic, but ...
"And for this cause God shall send them strong delusion, that they should believe a lie: That they all might be damned who believed not the truth, but had pleasure in unrighteousness."
For a more modern take, paraphrasing Hannah Arendt:
“The ideal subject of totalitarian rule is not the convinced Nazi or the convinced Communist, but people for whom the distinction between fact and fiction, true and false, no longer exists.”
We live in an age where, for many, media is reality: uncritical, unchecked. Press releases are about creating reality, not reporting it; they are about psychological manipulation, not information.
> As we've seen at a national scale: if you just lie lie lie enough it starts being treated like the truth.
This actually happened in reverse with the spread of social media dynamics to politics and major media. Twitter made Trump president, not the other way around.
* No LLMs were harmed in the making of this comment.
I disagree with your sentiment and genuinely think something big is coming. It doesn't need to be perfect now, but it could be good enough to disrupt the SaaS market.
> say all of this fluff when everyone knows it’s not exactly true yet
How do you know it's not exactly true? I am already seeing enterprise employees rely heavily on LLMs instead of other SaaS vendors.
* Want to draft an email and fix your grammar -> LLMs -> Grammarly is dying
* Want to design something -> Lovable -> No need to wait for a designer or to get Figma access so a designer can design and present; for anything else, use Lovable or alternatives
* Want to code -> obviously LLMs -> I sometimes feel JetBrains is probably at code red at the moment, because I am barely opening it (saying this as a heavy user in the past)
To keep this message shorter, I will share my vision in a reply.
Let's imagine AI is not there yet and won't reach 100% accuracy, but you still need accountability; you can't run everything on autopilot and hope you will make $10B ARR.
How do you overcome this limitation?
By making a human accountable. Imagine you come to work in the morning and your only task is to "Approve / Request improvement / Reject". You just press three buttons all day long (a rough sketch follows the examples):
* Customer is requesting pricing for X. Based on the requirements, I found CustomerA had similar requirements and we offered them $100/piece last month. What should I do? Approve / Reject / "Ask for $110"
* Customer (or their agent) is not happy with your $110 proposal. Using historical data, and based on X, Y, Z, the minimum we can offer is $104 to keep our ARR growing 15% year-over-year. What should I do? Approve / Reject / Your input
....
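A minimal sketch of what that three-button loop might look like; everything here (the Proposal shape, the review flow) is a hypothetical illustration, not any real OpenAI Frontier API:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    summary: str           # the agent's explanation and evidence
    suggested_action: str  # e.g. "Offer $110/piece"

def review(proposal: Proposal) -> str:
    """Human-in-the-loop gate: no agent action executes without sign-off."""
    print(proposal.summary)
    print(f"Suggested: {proposal.suggested_action}")
    choice = input("[a]pprove / [r]eject / [i]mprove? ").strip().lower()
    if choice == "a":
        return "approved"
    if choice == "r":
        return "rejected"
    return "revise: " + input("Your instruction to the agent: ")

# The whole workday: a queue of agent proposals, three buttons each.
inbox = [
    Proposal(
        summary=("Customer requests pricing for X; CustomerA with similar "
                 "requirements was offered $100/piece last month."),
        suggested_action="Offer $110/piece",
    ),
]
for p in inbox:
    print(review(p))
```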
So what, you show up to work, hit your three-button rotation, and one day you just end up in prison because your agent asked you to approve fraud/abuse because the legal ramifications section was outside of its context window? This is asinine.
nope, you won't.
your agentic platform vendor will be held responsible for not showing you the important things
Ok, you’re the platform vendor and just enabled fraud. Now what?
That's exactly how I play RPGs
Is there any reason not to assume that the article was created by an LLM?
It sure reads like it... These days, unfortunately, so many things do; there is a real “impersonality” (if that's the right word) to the whole new communication style.
>a real “impersonality”
I mean, that's been true of a lot of corporate writing for some time.
These two are not unrelated. It's just a step further along the same path.
Without a byline it’s probably the safer assumption.
Well, it is OpenAI - I would be disappointed if it wasn't in some way created by an LLM.
If anything is clear from the latest OpenAI releases, it's that they were probably developed using Claude Code.
> for the majority of power users you could stop now and people would be generally okay with it
Why stop though? Google didn't say "Altavista and Yahoo are good enough for the majority of power users, let's not create something better."
When you have something good at your hand and you see other possibilities, would you say let's stop, this is enough?
To be frank, I don't think your worldview is directionally accurate. OpenAI is certainly trying to sell something, but with every incremental update to these models, more avenues of value generation are unlocked. For sure it's not what all the talking heads in the industry hyped it up to be, but there are a lot of interesting ways to use these tools, and it's not for generating slop.
> Why do they say all of this fluff
They're desperate?
It's a kind of gaslighting, probably first and foremost for themselves before others.
> When can we say we have enough AI?
I’m good for now.
I am already tired of the disaster that is social media. Hilariously, we’ve gotten to the point that multiple countries are banning social media for under 18s.
The costs of AI slop are going to be paid by everyone, social media will ironically become far less useful, and the degree of fraud we will see will be... well, cyber fraud is already terrifying; what's the value of infinity added to infinity?
I would say that tech firms are definitely running around setting society on fire at this point.
God, they built all of this on absurd amounts of piracy, and while I am happy to dance on the grave of the MPAA and RIAA, the farming of content from people who have no desire to be harvested is absurd. I believe Wikipedia has already started seeing a drop in traffic, which will lead to a reduction in donations. Smaller sites are going to have an even worse time.
> At a major semiconductor manufacturer, agents reduced chip optimization work from six weeks to one day.
I call BS right there. If you could actually do that, you'd spin up a "chip optimization" consultancy and pocket the massive efficiency gain, not sell model access at a couple of bucks per million tokens.
There should be a massive “caveats and terms apply” on that quote.
So far the AI productivity gains have been all bark and no bite. I'll believe it when I see faster product development, higher quality, or lower prices (which did happen with other technological breakthroughs, whether the printing press or the loom). If anything, software quality is going down, suggesting we aren't there yet.
I'm willing to bet "chip optimization work" doesn't mean "the work required to optimize a chip" but "some work tasks performed as part of chip optimization". Basically, they sped up some unknown subset of the work from six weeks to one day, which could be big or could be negligible.
They've already changed the wording to [...]. Make of that what you will.
> ChatGPT, we have been optimizing production work for six weeks. <uploads some random documents that the management team has uploaded to SharePoint, most of them generated by LLMs>. Finalize this optimization work.
> <ChatGPT spits out another document and claims that production work is now optimal>
This is likely similar to the tweet from the Google engineer: AI coded their solution in a matter of days… after a year of planning beforehand.
Maybe the agent returned "nope, can't be optimized any more".
It's only time #50952 that scam altman has lied. He's in a race right now with Elon and Trump to see who can lie hardest and most often.
I have a hard time believing that the right move for most organizations that aren't already bought into an OpenAI enterprise plan is to build their entire business around something like this. It ties you to one model provider that has been having issues keeping up with the other big labs, and it provides what superficially look like some extremely useful tools, but with unclear amounts of rigor. I wouldn't want to build my business on this even as an AI-native company just starting right now, unless they figure out how to make it much more legible and transparent to people.
This is a crowded solution space with participation from cloud, SaaS, and data infrastructure vendors. All of these players and their customers have been trying to operationalize LLMs in enterprise workflows for 2+ years. Two big challenges are business ontology and fitting probabilistic tools into processes requiring deterministic outcomes. Overcoming these problems requires significant systems integration and process engineering work. What does OpenAI have that makes them specifically capable of solving these problems, compared to Azure, Databricks, Snowflake, etc., who have all been working on them for quite a while? I don't know if the press release really addresses any of this, which makes it seem more like marketing copy than anything else.
The question of lock-in is also a major one. Why tether your workflow automation platform to your LLM vendor when the LLM may be just one component of the platform, especially when the pace of change in LLMs specifically is so rapid in almost every conceivable way? I think you'd far rather have an LLM-vendor-neutral control plane and disaggregate the lock-in risk somewhat (a rough sketch of that separation follows).
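As an illustration of that vendor-neutral control plane, here is a minimal sketch; the class and method names are invented for the example, and the actual vendor calls are deliberately elided rather than being real SDK signatures:

```python
from typing import Protocol

class ChatModel(Protocol):
    """The only surface the control plane is allowed to depend on."""
    def complete(self, prompt: str) -> str: ...

class OpenAIBackend:
    def complete(self, prompt: str) -> str:
        # A real OpenAI API call would go here; elided to keep the sketch neutral.
        raise NotImplementedError

class SelfHostedBackend:
    def complete(self, prompt: str) -> str:
        # Call your own model endpoint here (HTTP, gRPC, whatever you run).
        raise NotImplementedError

def run_step(step_prompt: str, model: ChatModel) -> str:
    # Workflow logic never names a vendor; swapping providers is a one-line
    # change at the call site, which is the lock-in disaggregation argument.
    return model.complete(step_prompt)
```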
> "75% of enterprise workers say AI helped them do tasks they couldn’t do before."
> "At OpenAI alone, something new ships roughly every three days, and that pace is getting faster."
- We're seeing all these productivity improvements, and it seems as though devs/"workers" are being forced to output so much more. Are they now being paid proportionally for this output? Enterprise workers now have to move at the pace of their agents and manage essentially 3-4 workers at all times (we've seen this in dev work). Where are the salary bumps to reflect this?
- Why do AI companies struggle to make their products visually distinct? OpenAI Frontier looks exactly the same as the OpenAI Codex app, which looks exactly the same as GPT.
- OpenAI going for the agent management market share (Dust, n8n, crewai)
> Why do AI companies struggle to make their products visually distinct? OpenAI Frontier looks exactly the same as the OpenAI Codex app, which looks exactly the same as GPT.
Because that requires human thought, and it might take a couple more weeks to design and develop. "Do something fast" is the mantra, not "do something good".
Workers at tech companies are getting paid for this because they are shareholders.
Increased efficiency benefits capital, not labor; it's always good to remember to look at which side you prefer to be on.
>Where are the salary bumps to reflect this?
Revenue bumps and ROI bumps both gotta come first. IIRC, there's a struggle with the first one.
I imagine the salary bumps occur when the individuals who have developed these productivity-boosting skills apply for jobs at other companies, and either get those jobs or use the offer to negotiate a pay increase with their current employer.
I haven't seen any examples of that.
Over the past few months, mentions of AI in job listings have gone from "Comfortable using AI-assisted programming - Cursor, Windsurf" to "Proficient in agentic development" and even mentions of "Claude Code" in the desired-skills sections. Yet the salary range has remained exactly the same.
Companies are literally expecting junior/mid-level devs to have management skills (for those even hiring juniors). They expect you to come in and perform at the level of a lead architect: not just understand the codebase but the data and the integrations, build pipelines to ingest the entire company's documentation into your agentic platform of choice, then begin delegating to your subordinates (agents). Does this responsibility shift not warrant an immediate compensation shift?
> apply for jobs at other companies
Ahh, but it's not 2022 anymore; even senior devs are struggling to change companies. The only companies that are hiring are knee-deep in AI wrappers and have no possibility of becoming sustainable.
The only group whose salaries have gone up as a result of LLMs is hardcore AI professionals, i.e. AI researchers.
> Where are the salary bumps to reflect this?
"Let me increase salaries for all my employees 2x, because productivity is 4x'ed now" - said no capitalist ever.
Building on OpenAI as a long-term business strategy is dubious. Better to go with an established cloud player for these solutions, imo.
OpenAI might burn through all their money, and end up dropping support for these features and/or being sold off for parts altogether.
I doubt that there is much risk at all. For one, this is probably a minor aspect of the business for companies deploying these AIs. The AI deployments I've seen so far for real work tend to be side offerings where you can tolerate a high false positive / false negative rate, but which also aren't price-sensitive enough that classically building a fully automated pipeline is worth it or possible.
There is a reason Apple chose Gemini for Apple Intelligence, despite Google being in many ways a foe and OpenAI and Anthropic both having way more "Apple flavor" to them.
Exactly. OpenAI is riding a first-mover advantage, but they have no moat. Plus, a company like Google can bundle Gemini with other corporate offerings like Drive, Google Docs, etc.
I just don’t see OpenAI winning this in the long run. And I’m saying that while I am subscribed to ChatGPT lol.
I didn't quite grasp what this is trying to solve, but I hope it's doing this:
In our company we have a long tail of "workflows" or "processes" that really just involve reading a document and filling out a form.
For example, how do I even get access to a new DB? Or a new AWS account?
Can this tool help us create an agent that automates this with reasonable accuracy?
I see OpenAI Frontier as a quick way to automate these long-tail processes.
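For what it's worth, the shape of such a long-tail automation is simple enough to sketch; the field names and the `llm_complete` callable here are hypothetical stand-ins, not anything Frontier is documented to expose:

```python
import json

# Hypothetical long-tail workflow: turn a free-form access request into a
# validated form. FORM_FIELDS and llm_complete are illustrative stand-ins.
FORM_FIELDS = ["requester", "resource", "justification"]

def fill_form(document: str, llm_complete) -> dict:
    """Extract fixed form fields from a document via any str -> str LLM call,
    then validate before filing; anything incomplete is routed to a human."""
    prompt = (
        f"Extract these fields as a JSON object ({', '.join(FORM_FIELDS)}) "
        f"from the following request:\n{document}"
    )
    form = json.loads(llm_complete(prompt))
    missing = [f for f in FORM_FIELDS if not form.get(f)]
    if missing:
        raise ValueError(f"Route to a human reviewer: missing {missing}")
    return form
```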
I think that's sort of right. Said differently, and the way I process these tools: you have mundane tasks that a human does where there are clear guidelines on acceptance, and the underlying tools have clear APIs for automation. You can use natural language to automate such a task without a full-blown engineer in the loop. To engineers outside the business side this sounds silly, but it can save a lot of time for business users.
They are starting to sound less like a frontier AI lab, and more like a consultancy staffed with AI agents.
Looks like 2026 is indeed shaping up to be the year of the agent.
Hilarious! Year of the Agent, hahahaha. Good one!
"Year of X" is so cringe. They said it was all about Agents last year... yawn. Wake me up when they have something to show that makes people go "wow this is amazing" and has real economic consequences.
Placing State Farm's testimonial first really tells you something.
There are many ways to interpret it. What’s your interpretation?
It is also interesting to contrast calling them by name vs. the other example, “a major semiconductor company”, not called by name. Though of course, there are also different reasonable ways to interpret that.
Upside: your employees don’t have to use WorkDay.
Downside: your employees’ agents decide that they should collectively bargain.
Can those agents get my company's legal team to approve the use of AI so I can at least try these modern things that make everyone's life better?
Because for many of us, AI is "not approved until legal says so".
OpenAI's response: Start your own company
Heh, I built something very, very similar about 8-9 months ago; everyone thought I was full of it, though, hehe. https://news.ycombinator.com/item?id=44143928
Why didn't your VC friend drop some seed on you back then if the stealth startup was doing $25MM ARR? They probably could've gotten a better deal with you!
Oh, they had $25MM in funding and $0 ARR - lol. The two reasons I decided to leave it as a weekend project: "Thanks!!! I decided not to build it; that space is already too busy. There is a startup with $25MM in stealth - who else is in stealth? On top of that, this method will get stale very, very quickly; foundation model businesses are just too hard to work around right now, it's a silly way to do business. My magic is I've built a startup from scratch to over 400 people and watched what they do; it won't be long till that isn't worth much." and "I built it on my own over a weekend. I just wanted to confirm it can be done and therefore will exist, that is all. Personally, I decided not to pursue it because I am old and lazy and don't want to compete against a16z- and Sequoia-funded, Adderall-filled teenagers." - 8 months forward, I was right: if I could slap a bunch of Python together and create business-as-a-service that actually functions, in a weekend, as just me, then that's a feature on someone else's product! :)
If your employee does (with intent/malice) something very egregious, you can always fire and sue them for the damage done. Out of curiosity, what will the option be if some AI agent does the same?
Fire the employee that created the agent xD
> This is happening for AI leaders across every industry, and the pressure to catch up is increasing.
> Enterprises are feeling the pressure to figure this out now, because the gap between early leaders and everyone else is growing fast.
> The question now isn’t whether AI will change how work gets done, but how quickly your organization can turn agents into a real advantage.
FOMO at its finest. "Quick, before you're left behind for good this time!"
The idea itself makes sense. It is the kind of AI application I've been pitching to companies, though without going all-in on agents. But I think it would be foolish for any CEO to build this on top of OpenAI instead of a self-hosted model trained for them. You're just externalizing your internal knowledge this way.
“Never send a human to do a machine’s job.” — Agent Smith, The Matrix
The animations look nice, but why does OpenAI want to be the substrate for intelligence? It's at a disadvantage there vs competitors with strong domain experience.
Folks like you have been hyping up LLMs like crazy. It's time for some reflection.
OpenClawd for the business is here. Wow that was fast.
Vibe coded?
Is it their version of virtual AI employees that some startups were previously getting into, plus on-site support by FDEs and such?
Weird that it doesn't support MS Office, unless that would affect the OpenAI <=> MS partnership.
M$ already has a program like this, doesn't it? Microsoft™ 365™ Copilot™ agents™
Weird amounts of overlap between the two.
What's MS Office?
Ah, right, GP meant Microsoft Copilot 365 Enterprise Edition (with Copilot).
Beautiful!
Never heard of it.
> “Partnering with OpenAI helps us give thousands of State Farm agents and employees better tools to serve our customers. By pairing OpenAI’s Frontier platform and deployment expertise with our people, we’re accelerating our AI capabilities and finding new ways to help millions plan ahead, protect what matters most, and recover faster when the unexpected happens.” — Joe Park, Executive Vice President and Chief Digital Information Officer at State Farm
Ok how about you tell us one thing this shit is actually doing instead of vague nonsense.
I'm sorry do you have some problem with protecting things that matter most?
Well, even working as an AI engineer is no longer secure. It may soon be the case that all humans work for bots created by others. Is that the universal salary we are talking about?
As someone who would be in a position to advise enterprises on whether to adopt Frontier, there is simply not enough information for me to follow the "Contact Sales" CTA.
We need technical details, example workflows, case studies, social proof and documentation. Especially when it's so trivial to roll your own agent.
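On the "trivial to roll your own" point, the core of a tool-calling agent really is just a short loop; this sketch assumes an `llm_complete` str-to-str callable and a made-up tool name, nothing vendor-specific:

```python
import json

# Toy tool registry: plain functions the "agent" may call.
TOOLS = {
    "lookup_policy": lambda arg: f"(policy text for {arg!r} would go here)",
}

def run_agent(task: str, llm_complete, max_steps: int = 5) -> str:
    """Bare-bones agent loop: the model proposes a tool call or a final
    answer; we execute the tool and feed the result back into the transcript."""
    transcript = (
        f"Task: {task}\n"
        'Reply with JSON: {"tool": "<name>", "arg": "<text>"} '
        'or {"answer": "<text>"}\n'
    )
    for _ in range(max_steps):
        reply = json.loads(llm_complete(transcript))
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["arg"])
        transcript += f'Tool {reply["tool"]} returned: {result}\n'
    return "Gave up after max_steps."
```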
I'm imagining this is like AI-native Slack, which would be a super useful thing. But I'm with you, who knows? I had a CEO sign up; I'm curious to see one of my companies try it out.
Slopware as a service?
products and features are starting to get spread thin...
The failure risk of OAI is rising.
Great, some more bullshit our founders are going to force onto the company while they never use it, ignore everyone’s feedback that it doesn’t work, and expect everything to be done twice as fast now
Love all these anecdotes and magic stats that they don't have citations for and we're just supposed to believe.
Another day, another blog post about managing agents. It's for pretend companies who think they are doing something worthwhile if they run 4000 agents at once.
Funny how not a single one of the companies they use as examples works as an upsell for me. I'm clearly not the target audience.
The only numbers mentioned are speed of output. Any associated losses due to rework or missed contracts?
More bullshit from OpenAI.
Okay now this is gonna trigger mass layoffs, if it works.
It's not going to "trigger" mass layoffs; it'll be used as a convenient scapegoat for mass layoffs that were always going to happen anyway to make room for more stock buybacks. Business as usual. Same shit, different hat.
Misguided mass layoffs, though, so nothing new.
If only companies like OpenAI could put this much effort into actually curing cancer.
> if only companies like OpenAI could put this much effort into actually curing cancer
https://openai.com/index/color-health/
Let's be honest: Color is not solving cancer when they make money from managing cancer. You just searched "openai cancer" and gave me back the first result.
Stock sell-off again?
TLDR: « Today, we’re introducing Frontier, a new platform that helps enterprises build, deploy, and manage AI agents that can do real work. Frontier gives agents the same skills people need to succeed at work: shared context, onboarding, hands-on learning with feedback, and clear permissions and boundaries. That’s how teams move beyond isolated use cases to AI coworkers that work across the business. »