My perspective on the rationale for splitting short/long help is that optimizing for the reader's time is a reasonable thing to do. Often I just need a refresher of what options are available. But sometimes I need a deeper understanding of what each option controls and how. (Yes I understand that this should be in man pages). There needs to be a reasonable way to control the verbosity of the help output from the command line however.
I agree with your point that most flags should generally treat short versions as exact aliases to long flags, but I just think that a convention that treats -h and --help as concise vs long is 100% reasonable. The distinction is often breadth vs depth.
That would be a perfectly reasonable convention, except it's already a convention that they do the same thing.
Having them be different could cause someone to look at -h, and not even know about --help. Or if someone writes a script parsing the output of -h for some reason, someone else might come along and change it to --help expecting it to be the same thing.
100% agree, not sure where this idea came from but I'm not a fan.
You can just make a `--help-all` (or whatever word you want to use), imo the `--help-all` command doesn't need a short equivalent because it's not something you'd frequently use.
I've never heard of this convention. Every getopt-style CLI tool I've used has identical behavior whether an option is specified in its short- or long-form.
Any Rust CLI built with clap, or Go CLI built with cobra, supports short and long help, surfacing them via `-h` and `--help` (I think cobra surfaces the long form in the `help` command rather than `--help`, which is probably a reasonable alternative way to frame this).
Possibly controversial, but I think short commands should be disallowed. This is the stance the AWS CLI takes, and it 1) vastly improves readability, especially for those learning the syntax 2) makes it less easy to shoot yourself in the foot with a typo.
About the cf domains -h vs cf dns -h drift you flagged: in my case, I've watched Claude learn one subcommand's format, then assume the same flags exist on a sibling with a different help shape.
It's not cosmetic. Uniform help is a way to keep agents from hallucinating.
Otherwise you end up with invalid commands, or worse, silent ones that go through without doing anything at all, or do something totally wrong.
yeah - absolutely. I use codex all the time with jj and encourage it to check the help for details about how to run commands as the commands / args / flags have evolved post training-cutoff date.
I found that agents can already figure out how to do everything in Cloudflare as long as they have an API token with the right privileges. They are smart enough to figure out how to use the API. But the friction point is that I need to make the API token by hand and basically add permission for everything they might ever need to do. It would be nice to have a smoother way to handle permissions, so I can let them do what they need frequently without asking for permission, yet still make it easy to approve actions that are only needed occasionally, without having to guess in advance which permissions fall into which category.
API tokens are a complete mess on CF. There are 2 or 3 types of them. Agents constantly confuse which one does what. The documentation doesn't refer to them correctly. I still don't understand the difference myself and can't explain it clearly to an agent. Why do we need to bother at all?
I really dislike wrangler, though I understand the need to make something with mainstream appeal (agent CLI tools + npx execution). It would be really nice if the CLI were a layer on top of a git-ops-enabled declarative layer that was usable directly. (The Terraform provider does not count...)
The restish tool by the author of Huma is functionally correct, but I'm finding the models are not doing a great job at inferring the syntax. Admittedly I am having a hard time following the syntax too.
Fantastic. I recently automated a bunch of operations across ~50 domains with an agent, an API key, and a bunch of HTTP requests. I kept thinking that surely cloudflare had to be working on better ergonomics for this -- glad to see they were :D
> Tell us your hopes and dreams for a Cloudflare-wide CLI
This is only partly about the CLI and mostly about the API itself, but a straightforward and consistent way to manage environments would be nice.
I have a project using CF workers and Astro, with an ugly but working wrangler.toml defining multiple environments. When Cloudflare acquired Astro, I assumed that would be a good thing, but the latest version of the Cloudflare plugin (mandatory if you want to use the latest Astro) seems to manage environments in its own special incompatible way.
Please have an endpoint for up-to-the-minute billing, with hooks for notifications and for setting limits; importantly, have it correlate with (or be) the endpoint that shows usage analytics.
Previous-co could never get Argo billing to match Argo analytics, and with no support from CF over months, we backed away from CF completely for fear that scaling up would present new surprise unknown/untraceable costs.
Previous-previous-co is probably the largest user of web worker
Finally. Jumping between wrangler, the dashboard, and raw API calls has been annoying for a while. I'm keen on the local explorer most, debugging Workers locally has always been clunky. Anyone know how this plays with Terraform-managed infra?
Am I the only engineer that thinks it is a bad idea to shove the entire functionality of a gigantic company into one program? Just me? Yeah, having one single gigantic interface for an entire tech company's technology products couldn't possibly be complicated...
Nobody else here ever spent years begging in pull requests for some basic functionality or bug to be fixed, and it never could be, because someone in the company decided they didn't have the time, or didn't think your feature was needed, or decided it wasn't a bug?
How about, has anyone ever had to pin multiple versions of a tool and run the different versions to get around something broken, get back something obsoleted, or fix a backwards-incompatibility?
> you can install it globally by running npm install -g cf
...I'm gonna vibe-code my own version as independent CLI tools in Go, I hope y'all realize. Besides the security issues, the complexity, and the slowness, I just don't want to be smothered by the weight of a monolith that isn't controlled by a community of users. Please keep a stable, backward-compatible HTTP API so this is less difficult. And if Terraform providers have taught us anything, it's that we need copious technical and service documentation to cover the trillion little edge cases that break things. If you can expose this documentation over your API, that will solve a ton of issues.
I'm happy that there will be more tooling, but the reason for that (and the target audience) should not be ai agents. It should be a good experience for humans!
Tools should be tested and quality assured. Something that was utterly missing for cloudflare's unusable v5 terraform provider.
Quality over quantity with a ux that has humans in mind!
Agreed that the current Terraform provider is shockingly bad. It’s changed my estimation of Cloudflare’s technical competence, drastically. The automated migration from v4 still doesn’t work and it’s been, what, nearly a year since v5 was released? (This is not to mention I’ve never used a Terraform provider that made me run external migration tools in the first place.)
Exactly! Number of turns, average tokens to achieve a task with your CLI, and average number of characters returned per CLI command, alongside other metrics: all important to both users and agents! I am working on capturing this accurately at www.cliwatch.com! Feel free to request an example eval suite for a list of tasks you want to achieve with your CLI.
I was the original author of the cloudflare-go library (which I worked on in my spare time while working at Cloudflare), and I included a `flarectl` command with it, but sadly it didn't get much traction :-(
Build using an ecosystem with a better standard library. Node's is too small, so you have to install lots of deps. I have to encapsulate your tool in Docker to prevent Node supply-chain attacks. If you use an ecosystem with a larger standard library, you will install fewer rando deps. You can do it. You are a very AI-forward organization. Fewer dependencies, write all the code yourself. You have LLMs.
Node, Python, etc. allow arbitrary footgun tech that can lose all your local data. You have to use better tech.
I just wish they'd fix billing notifications. The ux makes it impossible to set it up. Been complaining about that on X, got a couple people saying they would look into it, even one that gave me his email address. Pure silence.
> We write a lot of TypeScript at Cloudflare. It’s the lingua franca of software engineering.
This scares me more than I'm able to admit. TypeScript sucks, and in my opinion it's way worse than the more commonly used lingua franca of computing, which I would attribute to C. At least C can be used to create shared objects, I guess?
I used to dislike JavaScript a lot, after learning it and PHP and then using languages like C#. Then TypeScript came along, making JS much easier to live with, and it has actually become quite nice in some ways.
If you use deno as your default runtime, it's almost Go-like in its simplicity when you don't need much. Simple scripts, piping commands into the REPL, built-in linting, testing, etc. It's not that bad!
Of course you're welcome to your opinion and we'd likely agree about a lot of what's wrong with it, but I guess I feel a bit more optimistic about TS lately. The runtime is improving, they've got great plans for it, it's actually happening, and LLMs aren't bad at using it either. It's a decent default for me.
You look at libraries like Effect, and it's genuinely incredible work, but you can't help feeling like... Man, so many languages partially address these problems with first-class primitives and control flow tooling.
I'm grateful for their work and it's an awesome project, but it's a clear reflection of the deficiencies in the language and runtime.
I think it sucks because it transpiles to JavaScript and is an interpreted language. Users have to resolve the dependencies themselves and have the correct runtime. I definitely prefer my CLI tools be written in a compiled language with a single binary.
I agree, though one cool thing arriving lately (albeit with some major shortcomings) is the ability to compile binaries with deno or bun (and nodejs experimentally, I think).
With Go you can compile binaries with bindings to native libraries baked in, like duckdb or sqlite and so on. With deno or bun, you're out of luck. It's such a drag. Regardless, it's been quite useful at my work to be able to send CLI utilities around and know they'll 'just work'. I maintain a few for scientific data processing and gardening (parsing, analysis, cleaning, etc.), which is why the lack of duckdb bundling is such a thorn. I do wish I could use Go instead and pack everything directly into the binary.
You can already "compile" TS binaries with deno, but the output includes the runtime, so it takes some disk space. These days I think that's less of a concern than before.
Personally I haven't felt like Typescript has bought me enough over JavaScript to use it in contexts that I don't have to. I have to use TypeScript for work, and it's "fine", but I guess I haven't found that it helps all that much.
I'm not sure why; I guess it's because the web itself is already really flexible that I find that the types don't really buy me a lot since I have to encode that dynamism into it.
To be clear, before I get a lecture on type safety and how wonderful you think types are and how they should be in everything: I know. I like types in most languages. I didn't finish but I was doing a PhD in formal methods, and specifically in techniques to apply type safety to temporal logic. I assure you that I have heard all your reasoning for types before.
> Tell us your hopes and dreams for a Cloudflare-wide CLI
It'd be great if the Wrangler CLI could display the required API token permissions upfront during local dev, so you know exactly what to provision before deploying. Even better if there were something like a `cf permissions check` command that tells you what's missing or unneeded perms with an API key.
That would be glorious! If ChatGPT doesn't get the permissions right on the first try I know that I'm going to have to spend the next hours reading the documentation or trying random combinations to get a token that works.
Why is "on the first try" so important? What's wrong with telling it your end goal and letting it figure out the exact right combo while you go off and work on something else?
Because I don’t want the earth to melt. Maybe if we’re all a bit more efficient we can postpone our doom for a bit.
That's also my experience. If it does not work on the first try, the model will likely continue to struggle with the problem, unless I babysit it.
So I prefer to dig through the source of truth, which also helps me build my mental model around the problem.
A doctor command that does this is always nice! Wish more services would have this
I would second this. Would be very helpful. Had a fat finger issue a while ago that this would have saved.
Maybe even better: create that key for you.
I think this boils down to more discoverability of the entire API. While I'm not a fan of GraphQL necessarily, it does provide the tools for very robust LLM usage typically because of the discoverability and HATEOAS aspects they actually follow compared to most "REST" APIs. I would love if LLMs could learn everything they need to about an API just by following links from its root entry point. That drastically cuts down on the "ingested knowledge" and documentation reads (of the wrong version) it needs to perform. Outdated documentation can often be worse than no documentation if the tool has the capability of helping you "discover" its features and capabilities.
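The discoverability idea above can be illustrated with a toy sketch of link-following (HATEOAS-style) traversal: starting from a root document, an agent enumerates the API surface by walking `links` rather than relying on pre-ingested docs. The document shape here is illustrative, not any real API's format.

```typescript
// Toy HATEOAS-style discovery: walk "links" from a root document.
interface ApiDoc {
  links?: Record<string, string>; // rel -> href
}

// Breadth-first traversal of every document reachable from `root`.
function discover(root: string, docs: Record<string, ApiDoc>): string[] {
  const seen = new Set<string>();
  const queue = [root];
  while (queue.length > 0) {
    const href = queue.shift()!;
    if (seen.has(href)) continue;
    seen.add(href);
    for (const next of Object.values(docs[href]?.links ?? {})) {
      queue.push(next);
    }
  }
  return [...seen];
}

// A three-document API: everything is reachable from "/".
const reachable = discover("/", {
  "/": { links: { zones: "/zones" } },
  "/zones": { links: { records: "/zones/dns_records" } },
  "/zones/dns_records": {},
});
```

The point is that the agent needs zero prior knowledge beyond the root URL; the API's current shape is always what the links say it is, so stale documentation never enters the picture.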
Please add resource groups and the ability to enforce permissions per resource group before you do this so that we don’t have agents (or people) blowing up prod from their command line. Thank you.
Currently you can only enforce zone-based permissions (domain based) BUT plenty of resources, such as workers, don’t belong to zones so essentially their code can be replaced or deleted with the lowest level permission. And there’s no way to block it…
Alternatively if you could please allow us to create multiple accounts that share a single super account (for SSO and such), similar to GitHub Enterprise which has Enterprises and Organisations. Then we could have ACME Corp. and ACME Corp (Prod) and segregate the two and resource groups wouldn’t be strictly required.
Is that what Cloudflare Organizations is about? https://blog.cloudflare.com/organizations-beta/
Superaccount + subaccounts would be great, even if that meant domains can’t be shared between them.
Yeah I don’t mind this, we can move our entire prod domains into another account.
The only reason we can’t right now is because SSO can’t exist for multiple accounts at once.
Second this. Workers being non zone-based makes them less useful to me.
Wonderful post and I will be taking inspiration from it. Surprised not to see TypeSpec https://typespec.io/ mentioned, which is a TypeScript-like schema language that I like to describe as "what if OpenAPI was good". I'm guessing they considered it and decided building their own would be both simpler and more flexible. The cost of BYO has come down a lot thanks to agents.
Love TypeSpec, agree it makes writing OpenAPI really easy.
But I’ve moved to using https://aep.dev style APIs as much as possible (sometimes written with TypeSpec), because the consistency allows you to use prebaked aepcli or very easily write your own since everything behaves like know “resources” with a consistent pattern.
Also Terraform works out of the box, with no needing to write a provider.
Which parts of Openapi does it fix?
Looking at the first example, it's far less verbose. Although it seems suspiciously minimal: I can't even tell from a single .tsp route definition what response content type to expect (application/json is most likely the default).
I would guess that is defined by the `@route` decorator.
It’s actually human-readable, it has generics, it supports sum and product types in a much more natural way. There’s a lot more, that’s just off the top of my head.
OpenAPI is an exchange format; it quickly becomes too verbose and repetitive, and is only OK when auto-generated and consumed by tooling.
The trend of CLI-first design because AI agents need it is interesting. We ended up in the same place building developer tools, the CLI and API came first because that's what agents actually consume. The dashboard came after.
The cf permissions check idea from the top comment is great. One thing I've found is that agents are surprisingly good at using CLIs but terrible at diagnosing why a command failed. Clear error messages with the exact fix ("missing scope X, run cf token add --scope X") matter way more for agent usability than the happy path.
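The "exact fix" idea above can be sketched as an error type that pairs every failure with a remediation command an agent can run verbatim. The `cf token add --scope` invocation is quoted from the comment itself; it is illustrative, not a confirmed real flag.

```typescript
// Agent-friendly error: every failure carries the exact remediation.
interface CliError {
  message: string;
  fix?: string; // a command the user (or agent) can run verbatim
}

// Illustrative only: the scope/flag names are not confirmed cf flags.
function missingScopeError(scope: string): CliError {
  return {
    message: `Permission denied: token is missing scope "${scope}"`,
    fix: `cf token add --scope ${scope}`,
  };
}

function renderError(err: CliError): string {
  return err.fix ? `${err.message}\nTry: ${err.fix}` : err.message;
}
```

An agent that sees the `Try:` line can retry without a diagnosis step, which is exactly where they tend to flounder today.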
Wrangler currently has two auth modes: OAuth with full user permissions, and static tokens that are manually created in the dashboard. Neither is designed for the pattern where a human operator and one or more AI agents need different permission boundaries.
I'd like the ability to create scoped, short-lived tokens from the CLI itself. There's an open GitHub issue (13042) for this.
But there needs to be a twist: tokens should be scopable not just by resource type, but by specific resource ID and action.
The consistency enforcement at the schema layer is the part I'd want to steal. Enforcing get vs info, standardizing --json across commands, these sound obvious but they're the things that break agent pipelines silently.
One thing I'm curious about: Cloudflare uses TypeScript for Workers and now this CLI, but Rust for the actual edge runtime. Is there a rough heuristic the team uses internally for when TS wins vs when you reach for something else?
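The get-vs-info and `--json` consistency point above can be sketched as a schema-level lint over command definitions. The command-schema shape here is hypothetical, not Cloudflare's actual (unpublished) format.

```typescript
// Hypothetical command-schema shape for illustration.
interface CommandSchema {
  name: string;    // e.g. "workers get"
  flags: string[]; // e.g. ["--json", "--account-id"]
}

const APPROVED_VERBS = new Set(["list", "get", "create", "update", "delete"]);

// Collect consistency violations across a set of command schemas.
function lintCommands(commands: CommandSchema[]): string[] {
  const problems: string[] = [];
  for (const cmd of commands) {
    const verb = cmd.name.split(" ").pop()!;
    if (!APPROVED_VERBS.has(verb)) {
      problems.push(`${cmd.name}: verb "${verb}" is not in the approved set`);
    }
    if (!cmd.flags.includes("--json")) {
      problems.push(`${cmd.name}: missing the standard --json flag`);
    }
  }
  return problems;
}

// "dns info" trips both rules: non-standard verb, and no --json flag.
const violations = lintCommands([
  { name: "workers get", flags: ["--json"] },
  { name: "dns info", flags: [] },
]);
```

Running a check like this in CI is what turns "sounds obvious" conventions into guarantees agents can rely on.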
> You can try the technical preview today by running npx cf. Or you can install it globally by running npm install -g cf.
A couple of obvious questions - Is it open source (npmjs side doesn't point to repo)? And in general will it be available as a single binary instead of requiring nodejs tooling to install/use? If so, using recently-acquired Bun or another product/approach?
I can't find any repository, either, but the package is listed as MIT-licensed and includes source maps, so I assume it will be published soon.
I suppose you could probably legally justify claude-code-ing the package from the source maps by the license if they don't...
> Tell us your hopes and dreams for a Cloudflare-wide CLI
No long lived tokens, or at least a very straightforward configuration to avoid them.
One option: an easy tool to make narrowly scoped, very short lived tokens, in a file, and maybe even a way to live-update the file (so you can bind mount it).
Another option: a proxy mode that can narrow the scope. So I set it up on a host, then if I want to give a container access to one domain or one bucket or whatever, I ask the host CLI to become a proxy that gives the relevant subset of its permissions to the proxied client, and I connect the container to it.
I like how GitLab does this, with an SSH server that implements only a few commands for creating PATs, so you can authenticate with your SSH keys and create a short-lived PAT in one command.
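The token-file option above can be sketched as a small refresher loop: keep a file populated with a fresh short-lived token so a container can bind-mount the file and always read a valid credential. The `mintToken` step is hypothetical; in practice it would call the provider's token API requesting narrow scopes and a short TTL.

```typescript
import { writeFileSync } from "node:fs";

// Hypothetical minting step -- stands in for a real token-API call.
function mintToken(scopes: string[], ttlSeconds: number): string {
  const expires = Date.now() + ttlSeconds * 1000;
  return `tok_${scopes.join("+")}_${expires}`;
}

// Write a short-lived token to `path` and keep refreshing it before expiry.
function startTokenFile(
  path: string,
  scopes: string[],
  ttlSeconds = 300
): ReturnType<typeof setInterval> {
  const refresh = () => writeFileSync(path, mintToken(scopes, ttlSeconds));
  refresh(); // write immediately, then refresh at 80% of the TTL
  return setInterval(refresh, ttlSeconds * 0.8 * 1000);
}
```

A container bind-mounting that file never holds a long-lived credential, and revocation is as simple as stopping the refresher.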
Ironically, with the advent of AI agents and stuff, we're going back from "checkbox engineering" in GUI webpages to CLI tools. Every time I need to clear cache in cloudflare when I upload a new version of an asset, I have to click through a bunch of things. Would be nice to just message my openclaw agent to do it.
Why bother with OpenClaw? Just call the CLI command directly. That is the point of a CLI.
I think the point is that I don't have to remember the command. I just have to tell my agent in plain-English to do X.
For example, we're dragging our feet on Github Config as Terraform at our org so in the meantime I've been using Claude + the gh cli to deploy changes across repos. I don't need to know / remember the gh cli command to pull or push a ruleset, or script a loop in Bash, I just have to say
> Claude pull the ruleset from <known good repo> and push it to <repo 1>, <repo 2>, <repo 3>
The CLI is also nice because it abstracts away authentication. I have another flow which doesn't have a CLI and Claude is more than happy to interpolate the API key that it read from a config file into the chat history (horrifying).
Exactly. Talking in plain English is a lot less mental overhead than reading the man page and figuring out the right command. Nowadays I just use AI instead of remembering ffmpeg commands too. Your point about authentication is important too.
What the hell am I reading, you guys know you can make little scripts for commands with prefilled options or even take in parameters?
That's what the coding agent does if you ask it to record a skill. Yes I can do that manually. But I'll get it wrong a few times before I get it right and waste half an hour in the documentation, if I'm lucky.
Or I just go, "I just created this cloudflare domain. Deploy the site via gh actions to cloudflare pages whenever I push to main. Here are my credentials; put them in GitHub secrets." Or something similarly high level.
The clever thing here is not doing things manually, but making sure you generate automation and scripts if you're going to do the same things more often, along with skill files detailing how and when to call shit.
Actually if you read the first five sentences of the article, the point of the CLI is to be friendlier for AI agents (in addition to being useful for humans presumably).
> Increasingly, agents are the primary customer of our APIs. Developers bring their coding agents to build and deploy applications, agents, and platforms to Cloudflare, configure their account, and query our APIs for analytics and logs.
> We want to make every Cloudflare product available in all of the ways agents need.
Yeah, but you can waste $10-50 having an LLM do that for you rather than using your engineering experience to do it. Think of the shareholder value!
> So we introduced a new TypeScript schema that can define the full scope of APIs, CLI commands and arguments, and context needed to generate any interface. The schema format is “just” a set of TypeScript types with conventions, linting, and guardrails to ensure consistency.
I'm confused though: why isn't that tool/framework being shown here? What is it and how does it work? Is it similar to the TypeSpec tool someone else posted?
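The post doesn't show the schema, but a hypothetical sketch of what "a set of TypeScript types with conventions" could look like might be something like this (every name here is invented, not Cloudflare's actual format):

```typescript
// Hypothetical shape: each product defines its commands as plain data,
// and generators derive the CLI, API client, and docs from it.
type Arg = {
  name: string;
  type: "string" | "number" | "boolean";
  required?: boolean;
  description: string;
};

type Command = {
  name: string;        // e.g. "dns record create"
  endpoint: string;    // REST endpoint the command wraps
  method: "GET" | "POST" | "PUT" | "DELETE";
  args: Arg[];
  description: string; // used for --help text and agent context
};

const createDnsRecord: Command = {
  name: "dns record create",
  endpoint: "/zones/{zone_id}/dns_records",
  method: "POST",
  args: [
    { name: "zone-id", type: "string", required: true, description: "Target zone" },
    { name: "type", type: "string", required: true, description: "Record type, e.g. A" },
    { name: "name", type: "string", required: true, description: "Record name" },
  ],
  description: "Create a DNS record in a zone",
};

// One generator might emit usage text; another might emit API client code.
const usage =
  `cf ${createDnsRecord.name} ` +
  createDnsRecord.args.map((a) => `--${a.name} <${a.type}>`).join(" ");
console.log(usage);
```

The linting and guardrails they mention would presumably enforce that every command carries a description, consistent arg naming, and so on.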
> Tell us your hopes and dreams for a Cloudflare-wide CLI
Initial impression:
-h and --help should follow the short / long standard of providing less / more info. Currently, both -h and --help show command lists and point at a --help-full flag, and the --help-full output gives what I'd expect from --help. This needs to be much better: help should give enough information that a user / coding agent doesn't have to read websites / docs to understand how a feature works.
Completions are broken by default compared to the actual list of commands - e.g. dns didn't show up in the list.
When I ran cf start -h, it prompted to install completions (odd, because completions were already installed / detected). Either way, -h should never do anything interactive.
Some parts of the cli seem very different to the others (e.g. cf domains -h is very different to cf dns -h). Color / lack of color, options, etc.
Please don't ever make the short version of a cli flag be different than the long version.
The short version is for typing on the fly, and the long version is for scripts, they should have identical output.
The full thorough documentation should be in man, and/or info.
My perspective on the rationale for splitting short/long help is that optimizing for the reader's time is a reasonable thing to do. Often I just need a refresher of what options are available. But sometimes I need a deeper understanding of what each option controls and how. (Yes I understand that this should be in man pages). There needs to be a reasonable way to control the verbosity of the help output from the command line however.
I agree with your point that most flags should generally treat short versions as exact aliases to long flags, but I just think that a convention that treats -h and --help as concise vs long is 100% reasonable. The distinction is often breadth vs depth.
That would be a perfectly reasonable convention, except it's already a convention that they do the same thing.
Having them be different could cause someone to look at -h, and not even know about --help. Or if someone writes a script parsing the output of -h for some reason, someone else might come along and change it to --help expecting it to be the same thing.
100% agree, not sure where this idea came from but I'm not a fan.
You can just make a `--help-all` (or whatever word you want to use), imo the `--help-all` command doesn't need a short equivalent because it's not something you'd frequently use.
I've never heard of this convention. Every getopt-style CLI tool I've used has identical behavior whether an option is specified in its short- or long-form.
Any rust cli built with clap or go cli built with cobra supports short and long help and surface these with `-h` and `--help` (I think cobra surfaces this in the help command rather than in the --help, which is probably a reasonable alternative way to frame this)
Jujutsu does it, and it's quite nice.
Possibly controversial, but I think short commands should be disallowed. This is the stance the AWS CLI takes, and it 1) vastly improves readability, especially for those learning the syntax 2) makes it less easy to shoot yourself in the foot with a typo.
About the cf domains -h vs cf dns -h drift you flagged: in my case, I've watched Claude learn one subcommand's format, then assume the same flags exist on a sibling with a different help shape.
It's not cosmetic. Uniform help is a way to keep agents from hallucinating. Otherwise you end up with invalid commands or, worse, silent ones that go through without doing anything at all, or do something totally wrong.
yeah - absolutely. I use codex all the time with jj and encourage it to check the help for details about how to run commands as the commands / args / flags have evolved post training-cutoff date.
Wrangler is a disaster. Since it's a Node.js environment, why use TOML for configuration? Can't you use TypeScript?
If you like Rust so much, I think you should just completely refactor it.
I found that agents can already figure out how to do everything in Cloudflare as long as they have an API token with the right privileges. They are smart enough to figure out how to use the API. But the friction point is that I need to make the API token by hand and basically add permission for everything they might ever need to do. It would be nice to have a smoother way to handle permissions, so I can let them do everything they need frequently without asking, yet still easily approve actions that are only needed occasionally, without having to guess in advance which permissions fall into which category.
API tokens are a complete mess on CF. There are 2 or 3 types of them. Agents constantly confuse which one does what. The documentation doesn't refer to them consistently. I still don't understand the difference myself and can't explain it clearly to an agent. Why do we need to bother at all?
I really dislike wrangler, though I understand the need to make something with mainstream appeal (agent CLI tools + npx execution). It would be really nice if the CLI were a layer on top of a GitOps-enabled declarative layer that was usable directly. (The Terraform provider does not count...)
I have been experimenting with OpenAPI spec -> CLI too. I have Go code and specs auto-generated with either Huma or Fuego:
https://github.com/danielgtaylor/huma https://github.com/go-fuego/fuego
The restish tool by the author of Huma is functionally correct, but I'm finding the models are not doing a great job at inferring the syntax. Admittedly I am having a hard time following the syntax too.
https://github.com/rest-sh/restish
I need to do proper evals, but it makes me wonder if `curl` or a CLI with more standard args / opts parsing will work better.
Thanks to Cloudflare for sharing their notes. Has anyone else figured this out?
Fantastic. I recently automated a bunch of operations across ~50 domains with an agent, an API key, and a bunch of HTTP requests. I kept thinking that surely cloudflare had to be working on better ergonomics for this -- glad to see they were :D
Tell us your hopes and dreams for a Cloudflare-wide CLI
This is only partly about the CLI and mostly about the API itself, but a straightforward and consistent way to manage environments would be nice.
I have a project using CF workers and Astro, with an ugly but working wrangler.toml defining multiple environments. When Cloudflare acquired Astro, I assumed that would be a good thing, but the latest version of the Cloudflare plugin (mandatory if you want to use the latest Astro) seems to manage environments in its own special incompatible way.
Kind of ironic that AI and agents seem to be leading to more CLI/API stuff, when AI actually allows human-like computer use for the first time.
A very welcome development - much better for machines to use the APIs - but it would always have been welcome, even without AI.
Please have an endpoint for up-to-the-minute billing, with hooks for notifications and for setting limits; importantly, have it correlate with (or be) the endpoint that shows usage analytics.
Previous-co could never get Argo billing to match Argo analytics, and with no support from CF over months, we backed away from CF completely for fear that scaling up would present new surprise unknown/untraceable costs.
Previous-previous-co is probably the largest user of Workers.
Finally. Jumping between wrangler, the dashboard, and raw API calls has been annoying for a while. I'm keen on the local explorer most, debugging Workers locally has always been clunky. Anyone know how this plays with Terraform-managed infra?
I wish there were a CLI preview command for changes made in Cloudflare.
I have a few domains on Cloudflare, and when making changes I wish there were a way to apply the same changes to multiple domains for consistency.
A CLI preview of each UI action would make that possible.
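A sketch of what a "preview, then apply to many domains" flow could look like against the API (the /zones/{zone_id}/dns_records path is the real v4 endpoint shape; the zone ids and the plan-then-apply split are invented for illustration):

```typescript
// Build the same DNS change for several zones without executing anything.
type DnsRecord = { type: string; name: string; content: string };

function planChange(zoneIds: string[], record: DnsRecord) {
  return zoneIds.map((id) => ({
    method: "POST" as const,
    url: `https://api.cloudflare.com/client/v4/zones/${id}/dns_records`,
    body: record,
  }));
}

const plan = planChange(
  ["zone-a", "zone-b"],
  { type: "TXT", name: "@", content: "v=spf1 -all" },
);

// The "preview" is just printing the plan before anything is sent:
for (const req of plan) console.log(`${req.method} ${req.url}`);
```

A CLI could do exactly this with a --dry-run style flag: show the requests, then execute the same plan on confirmation.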
Am I the only engineer that thinks it is a bad idea to shove the entire functionality of a gigantic company into one program? Just me? Yeah, having one single gigantic interface for an entire tech company's technology products couldn't possibly be complicated...
Nobody else here ever spent years begging in pull requests for some basic functionality or bug to be fixed, and it never could be, because someone in the company decided they didn't have the time, or didn't think your feature was needed, or decided it wasn't a bug?
How about, has anyone ever had to pin multiple versions of a tool and run the different versions to get around something broken, get back something obsoleted, or fix a backwards-incompatibility?
> you can install it globally by running npm install -g cf
...I'm gonna vibe-code my own version as independent CLI tools in Go, I hope y'all realize. Besides the security issues, besides the complexity, besides the slowness, I just don't want to be smothered by the weight of a monolith that isn't controlled by a community of users. Please keep a stable, backward-compatible HTTP API so this is less difficult. And if Terraform providers have taught us anything, it's that we need copious technical and service documentation to cover the trillion little edge cases that break things. If you can expose this documentation over your API, that will solve a ton of issues.
I recently let an LLM deploy a service for me on GCP using the gcloud CLI, without going to the GCP dashboard even once.
it was magical
Great work. I like the approach of creating a schema to work with the OpenAPI spec.
I'm happy that there will be more tooling, but the reason for that (and the target audience) should not be ai agents. It should be a good experience for humans!
Tools should be tested and quality assured. Something that was utterly missing for cloudflare's unusable v5 terraform provider. Quality over quantity with a ux that has humans in mind!
Agreed that the current Terraform provider is shockingly bad. It’s changed my estimation of Cloudflare’s technical competence, drastically. The automated migration from v4 still doesn’t work and it’s been, what, nearly a year since v5 was released? (This is not to mention I’ve never used a Terraform provider that made me run external migration tools in the first place.)
Making a good experience for AI agents also makes a good experience for the humans that are tasked with the management of their agents.
Exactly! Number of turns, average tokens to achieve a task with your CLI, and average number of characters returned per CLI command, alongside other metrics: all important to both users and agents! I am working on capturing this accurately at www.cliwatch.com. Feel free to request an example eval suite for a list of tasks you want to achieve with your CLI.
Oh yes to this! I spent yesterday morning working this out when it smacked me in the face
I’ve used a lot of Cloudflare functionality with Claude and their API. Worked very well.
CLIs are the new programming language. I think we should design them like a language
Its so depressing that it took widespread LLM psychosis to finally get company leadership to invest in actual CLI tooling.
No, the customers never mattered, but the mythical "LLM agent" is vitally important to cater to.
i love the idea, but it's sad that we needed AI to give companies an incentive to build better DX.
-h vs --help is one of those small things that gets very loud when it's broken
> Tell us your hopes and dreams for a Cloudflare-wide CLI
Please call it flare.
I was the original author of the cloudflare-go library (which I worked on in my spare time while working at Cloudflare), and I included a `flarectl` command with it, but sadly it didn't get much traction :-(
https://github.com/cloudflare/cloudflare-go/tree/v0/cmd/flar...
> TypeScript is "the lingua franca of software engineering."
Seems odd to me. I guess we all live in our bubbles.
If there is some fancy tool out there, "does it have bindings for language X?" X seems to be much more commonly Python than TypeScript.
Building CLIs doesn't make any sense; in fact, it's the wrong way to go.
I wish we would stop building CLIs and instead use something like this:
https://executor.sh/
https://github.com/RhysSullivan/executor
This looks like a really nice pattern for exposing all allowed capabilities in one place. Are you using it? Looks like it could easily wrap a CLI too…
Build it using an ecosystem with a better standard library. Node has too little built in, so you have to install lots of deps, and I have to encapsulate your tool in Docker to prevent Node supply-chain attacks. With a larger standard library you'd be installing fewer random deps. You can do it; you are a very AI-forward organization. Fewer dependencies, and write all the code yourself. You have LLMs.
Node, Python, etc. allow arbitrary footgun tech to lose all your local data. You have to use better tech.
Been waiting for this feature for a long time.
This, but for Bunny DNS, so I can get closer to 100% European clouds. :)
Complete CLI coverage is so great to see.
I just wish they'd fix billing notifications. The UX makes it impossible to set them up. I've been complaining about that on X, and got a couple of people saying they would look into it, even one who gave me his email address. Pure silence.
Excellente!
> First Principles
am I the only one put off by such language? they talk as if they invented compilers or assembly or Newton's law of gravity.
> Right now, cf provides commands for just a small subset of Cloudflare products.
Why didn't they vibe code support for more? With this on the heels of EmDash, and this being a technical preview, it feels inconsistent.
> We write a lot of TypeScript at Cloudflare. It’s the lingua franca of software engineering.
This scares me more than I'm able to admit. TypeScript sucks, and in my opinion it's way worse than the more commonly used lingua franca of computing, which I would say is C. At least C can be used to create shared objects, I guess?
Why do you think it sucks?
I used to dislike JavaScript a lot after learning it and PHP, then using languages like C#. Then TypeScript came along, making JS much easier to live with; it has actually become quite nice in some ways.
If you use deno as your default runtime, it's almost Go-like in its simplicity when you don't need much. Simple scripts, piping commands into the REPL, built-in linting, testing, etc. It's not that bad!
Of course you're welcome to your opinion and we'd likely agree about a lot of what's wrong with it, but I guess I feel a bit more optimistic about TS lately. The runtime is improving, they've got great plans for it, it's actually happening, and LLMs aren't bad at using it either. It's a decent default for me.
Not OP; I use TS, but only because it's the only option. TS is a build-your-own-typing sandbox, with more than enough rope to hang yourself.
Coming from typing systems that are opinionated, first class citizens of their languages, it doesn’t stand up.
This is one of my dislikes as well.
You look at libraries like Effect, and it's genuinely incredible work, but you can't help feeling like... Man, so many languages partially address these problems with first-class primitives and control flow tooling.
I'm grateful for their work and it's an awesome project, but it's a clear reflection of the deficiencies in the language and runtime.
I think it sucks because it transpiles to JavaScript and is an interpreted language. Users have to resolve the dependencies themselves and have the correct runtime. I definitely prefer my CLI tools be written in a compiled language with a single binary.
I agree, though one cool thing arriving lately (albeit with some major shortcomings) is the ability to compile binaries with deno or bun (and nodejs experimentally, I think).
With Go you can compile binaries that embed native libraries, like duckdb or sqlite and so on. With deno or bun, you're out of luck. It's such a drag. Regardless, it's been quite useful at my work to be able to send CLI utilities around and know they'll 'just work'. I maintain a few for scientific data processing and gardening (parsing, analysis, cleaning, etc.), which is why the lack of duckdb bundling is such a thorn. I do wish I could use Go instead and pack everything directly into the binary.
you can already "compile" TS binaries with deno, but it'll include the runtime and so on, so it'll take some disk space. I think these days that's less of a concern than before, though.
Totally, it's inconsequential for our use cases.
I think the binaries wind up being somewhere around 70mb. That's insane, but these are disposable tools and the cost is negligible in practice.
“Typescript sucks” is not really a great reason.
Personally I haven't felt like Typescript has bought me enough over JavaScript to use it in contexts that I don't have to. I have to use TypeScript for work, and it's "fine", but I guess I haven't found that it helps all that much.
I'm not sure why; I guess it's because the web itself is already really flexible that I find that the types don't really buy me a lot since I have to encode that dynamism into it.
To be clear, before I get a lecture on type safety and how wonderful you think types are and how they should be in everything: I know. I like types in most languages. I didn't finish but I was doing a PhD in formal methods, and specifically in techniques to apply type safety to temporal logic. I assure you that I have heard all your reasoning for types before.
Well, it does suck for a huge list of reasons, but the one specifically disqualifying it as the lingua franca would be that it's controlled by Microsoft.
It's open source. If Microsoft did anything weird it would be immediately forked, a la Terraform, ElasticSearch, etc.
There's so much momentum behind it from the front-end community alone it's not going anywhere.
IMO using Typescript sucks because of the node ecosystem/npm. The language itself is passable.
Well it does suck, and it isn't really great for implementing performant developer tools, such as parsers, formatters and so on.
The performance is so bad that the TypeScript developers are rewriting the compiler itself in Go. [0]
Tells me everything I need to know about how bad typescript is from a performance stand point.
[0] https://devblogs.microsoft.com/typescript/typescript-native-...
That’s the LSP, not the runtime. Bun runs TypeScript very fast. It’s a fantastic language and ecosystem.
I’ve just checked FFI in bun and it’s marked as experimental. There are great libraries in C/C++ world and FFI is kinda table stakes to use them.
Nowhere did I say "runtime".
Even with Bun, it's because of Zig, not TypeScript, and that only proves my point even more.
you're right. we should just not use any interpreted/script languages because they're not as fast as compiled ones.
why does a CLI tool that just wraps APIs need this native performance?
The performance is so bad that the most used software in the world is written using it.