In my large enterprise world, AI adoption hasn't made it outside the development teams - only developers have access to GitHub Copilot.
Code takes 6-12 months to go from commit to production. Development speed was never the bottleneck; it's all the other processes that take time: infra provisioning, testing, sign-offs, change management, deployment scheduling, etc.
AI makes these post-development bottlenecks worse. Changes are now piling up at the door waiting to get on a release train.
Large enterprises need to learn how to ship software faster if they want to lock in ROI on their token spend. Unshipped code is a liability, not an asset.
The post hits the nail on the head with the messy middle. There is simply no motivation to develop this sort of intelligence loop as a dev who has their own responsibilities, on which their job depends. Management can ask as nicely as they want, but I'm not going to selflessly share my productivity gains with the broader company for free. I might share a tool if it's useful. All the learning of how to wrangle AI or set up agents is better kept to myself if there is no recognition for sharing.
My company set up a "prompt of the week" award and brown-bag sessions to help spread adoption. We also have teams whose job is to develop these workflows. Clearly, those teams set these events up so they can pass the gains off as their own productivity. Without a real (read "monetary") incentive or job security, the risk and cost of spreading the knowledge fall squarely on the developer.
One more point I noticed: since AI adoption is being promoted by companies, collaboration between developers could suffer. Why wait for a more experienced developer to have the time to explain some aspect of the codebase to you (and at the same time confess your ignorance), when AI can do it right away in a competent-sounding way (and most of the time it will probably be right, too)?
I think if these companies had first adopted local models with slower token output, and the learners got to watch the tokens being generated, there'd be a lot more understanding.
> Where is the ROI for the 2 mio € we paid Anthropic last year?
The CEO has a YouTube-style platinum token plaque for their office.