Related. We have several third-party web apps in use. These apps don't expose a public API, but they are all single-page web apps. We wanted to connect Claude Code to them for our limited use case.
We opened Chrome, navigated the entire website, then downloaded the network tab as a HAR file. Then we asked Claude to analyze and document the APIs as an OpenAPI JSON spec. Worked amazingly well.
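For anyone wanting to reproduce this: the HAR file is just JSON, so a few lines of Python can turn it into an endpoint inventory before handing it to Claude. A minimal sketch, assuming the standard HAR 1.2 layout (`summarize_har` is a made-up helper name, not part of any tool mentioned here):

```python
import json
from collections import defaultdict

def summarize_har(path):
    """Group captured requests by (method, URL path) so an LLM --
    or a human -- can document them as an OpenAPI spec."""
    with open(path) as f:
        har = json.load(f)

    endpoints = defaultdict(list)
    for entry in har["log"]["entries"]:
        req = entry["request"]
        # Strip the query string; it becomes parameters in the spec.
        url = req["url"].split("?", 1)[0]
        endpoints[(req["method"], url)].append(entry)
    return endpoints
```

Deduplicating repeated calls this way also shrinks the token count considerably before the HAR goes into context.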
Next step - we wrote a small Python script. On one side, the script implements a stdio MCP server. On the other side, it calls the internal APIs exposed by the 3rd-party app. The only thing missing was the auth headers.
This is the best part. When Claude connects to the MCP server, the server launches a Playwright-controlled browser and opens the target web application. It detects whether the user is logged in, extracts the auth credentials using Playwright, saves them to a local cache file, and closes the browser. From then on it hits the APIs directly - no browser needed.
For about an hour's worth of tokens with Claude, we get an MCP server that works locally with each user's credentials in a fairly reliable manner. We have even been able to get this working in otherwise locked-down corporate environments.
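A rough sketch of the credential-capture half, assuming the Playwright sync API (the URL and cache path are placeholders, and real code would need the login-detection logic described above):

```python
import json
from pathlib import Path

CACHE = Path("auth_cache.json")
APP_URL = "https://app.example.com"  # placeholder target

def capture_auth(timeout_s=120):
    """Open a visible browser, let the user log in, sniff the auth
    header off the first authenticated API call, then cache it."""
    from playwright.sync_api import sync_playwright  # pip install playwright
    captured = {}

    def on_request(request):
        auth = request.headers.get("authorization")
        if auth:
            captured.setdefault("authorization", auth)

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=False)
        page = browser.new_page()
        page.on("request", on_request)
        page.goto(APP_URL)
        for _ in range(timeout_s):  # poll while the user logs in
            if "authorization" in captured:
                break
            page.wait_for_timeout(1000)
        browser.close()
    CACHE.write_text(json.dumps(captured))
    return captured

def auth_headers():
    """Cached headers for direct API calls -- no browser needed."""
    return json.loads(CACHE.read_text()) if CACHE.exists() else {}
```

Once the cache exists, the MCP tool handlers just attach `auth_headers()` to plain HTTP requests.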
Super cool. I think this is where most automation is heading. Would be curious if you could one-shot the auth flow using Kampala and completely ditch the browser. Also, FWIW, you can import HAR files into Kampala, and we have a few nice tools (like A/B testing payloads and replaying requests) that meaningfully reduce integration time.
Smart! That's what I do as well for customers when they ask me to build a vibe coding layer on top of their SaaS platform.
Takes very little time and tokens and I get to plug into their platform in seconds.
Smells like a severe breach of ToS. Virtually every website and app mandates not to reverse engineer and not to tamper with its inner workings (including client-server networking).
Side note: the YC25/YC26 batches have multiple startups that blatantly violate ToS and are sitting on a time bomb, just pending a lawsuit and cease-and-desist letters.
The goal is not to scrape sites en masse, but to allow people to automate existing workflows and actions that they already perform via a browser. I understand the concerns around this being unethical, and it's something I spent a lot of time thinking about when I worked on automations previously. I've written a decent amount about how I don't think sneaker bots or ticket bots are ethical. I don't support mass scraping websites or making the web less accessible for others.
I do have to push back on the ToS comments though. Automation is used daily by nearly all companies. RPA is a billion-dollar industry. Browserbase raised at a $300M valuation. Is using Puppeteer to automate a form submission a violation of ToS? If so, then why is using a screen reader not? Is it the intention? Why is hitting network requests directly different? I personally don't think automation is unethical (as long as it is not affecting server capacity). I don't think the answer to the ethical problems in scraping is to not automate at all. Open to disagreement here though.
> Is using puppeteer to automate a form submission a violation of ToS? If so then why is using a screen reader not?
Without taking a position on the ethics of automation, surely this isn't a serious question? Things that the ToS prohibits you from doing are ToS violations, and other things aren't.
For instance, from AirBnb's terms of service: "Do not use bots, crawlers, scrapers, or other automated means to access or collect data or other content from or otherwise interact with the Airbnb Platform."
There is no similar prohibition against using screen readers.
My broader point is that these ToS clauses are often so broad and vague that they're essentially unenforceable and not meaningful in practice. For example, "Do not use bots" covers a substantial amount of ground, and intention isn't exactly something you can screen for. Is an autofill Chrome extension a bot? If so, what separates that autofill from accessibility extensions? Is someone using Whispr flow to fill forms considered a bot? Airbnb doesn't block Google's crawler. Why? A company can enforce its ToS as it wishes. My general point is that the waters are murky, and that automation is a sliding scale.
> For instance, from AirBnb's terms of service: "Do not use bots, crawlers, scrapers, or other automated means to access or collect data or other content from or otherwise interact with the Airbnb Platform."
> There is no similar prohibition against using screen readers.
A screen reader uses automated means to access or collect data or other content from or otherwise interact with a platform.
Wait till these sites discover web browsers and developer tools.
So if the API is published, there is nothing to reverse engineer.
And if the API is not published, and you MITM it with self-installed CAs, and then use it (commercially?), you are ~100% breaking the ToS.
This is just unethical. Or does YC no longer have regard for such things?
Noticed you have two comments here. I think my response to your other comment best answers this. Definitely open to discussing this more here. Not sure if I agree on the self-compromised CA bit. MITM proxies have been used for 20+ years for debugging. In fact, I use Kampala to debug our personal APIs/web app all of the time.
> this is just un-ethical.
There is nothing unethical about this. You can technically do this with a browser and its dev tools.
You being here is far more unethical than this app.
> anymore
Ehh…
I built the same thing, just for websites. [0] I'm more interested in using Claude recursively to tune itself -- the agent writing the agent -- than in hacking websites. It's a good demonstration: 47 iterations of a recursive Claude agent rewriting itself until it could decompose any transport.
I've tested it against YouTube, Twitch, Ticketmaster, and Yahoo Finance. It will detect any transport: JSON, WebSocket, GraphQL, SSE, Protobuf, UDP, WebRTC, etc. After 3 hours and some coaching it succeeded in reverse engineering ChatGPT + Cloudflare Turnstile, but I haven't merged that in yet.
It works by having Claude intercept all traffic via the Chrome DevTools Protocol (CDP).
[0] https://github.com/adam-s/intercept?tab=readme-ov-file#how-i...
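The CDP side of something like this is mostly event plumbing. As a toy illustration (not the linked project's actual code), here is a dispatcher for the two `Network` domain events that matter when reconstructing an app's API surface; the websocket wiring to Chrome's debug port is omitted:

```python
import json

def handle_cdp_event(msg, transcript):
    """Minimal dispatcher for the two CDP Network events needed to
    rebuild a request/response transcript from intercepted traffic."""
    event = json.loads(msg)
    method = event.get("method")
    if method == "Network.requestWillBeSent":
        req = event["params"]["request"]
        transcript.append(("request", req["method"], req["url"]))
    elif method == "Network.responseReceived":
        resp = event["params"]["response"]
        transcript.append(("response", resp["status"], resp["url"]))
    return transcript
```

The transcript then becomes the raw material the agent iterates on when deciding which transport it is looking at.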
We’ve essentially been using that “recursion” to tune our agent. Having the agent build itself is not something I would have ever thought of though. Curious if you find it genuinely creates specific enough tools for it to be worth the setup time? I have a claude skill that takes in a chat and then offers tools/fixes to system prompt. Have found that + the anthropic harness engineering blogs to be super useful in making the agent actually do the work.
Have a look at https://github.com/adam-s/agent-tuning. Now I'm working on developing the evaluation - the part that quantifies the agent's performance. I'm having a hard time explaining it. You should be able to point Opus 4.7 at the repository and it will know how to set it up in your project.
You are welcome to send me an email at [my_username]@gmail.com if you want to talk about some of these things that I'm working on that are in your space.
`intercept` is just a proof-of-concept at this point; if it adds any value to what you're working on, that would be the best outcome. Overall, people are pounding every website, and your product could save billions in compute - from AI inference to servers grinding away under these bots.
Nice, ty for sharing. I was going to build something like this for a customer.
I think just downloading all network traffic and giving it to Claude Code is the fastest and cheapest approach for 99% of use cases.
It seems like it’s quite HTTP-centric (like most of the web…). I didn’t see anything on the page about this - can it also intercept / “reverse engineer” service calls that go over gRPC or WebSocket? I’m guessing at least a partial “yes” if the gRPC traffic uses grpc-web/Envoy?
Seems like a great product, potentially quite powerful for automated testing of SPAs.
Yep, we handle gRPC and WebSocket. gRPC is a bit sketchy/hard to do because of the way the protocol is designed. FWIW, not many sites implement gRPC (some Google sites and Spotify being the only two I can think of), and if they do they usually have decent APIs. Feel free to try it and lmk if you have any issues!
So how do you parse the gRPC binary? Unless you have the proto definitions, it's a black box and totally unusable, isn't it?
gRPC obscures the keys, not the values. Enums and signed ints are sort of tricky, but the latter is just a mapping problem and the former can be figured out through logical deduction. gRPC isn't designed to obscure request content, but for over-the-wire efficiency.
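For the curious, the protobuf wire format makes this concrete: every field carries its number and wire type in a tag varint, so the structure is fully recoverable without a .proto - only the field names and enum labels are lost. A bare-bones decoder (illustrative, not any product's actual parser):

```python
def read_varint(buf, i):
    """Decode a base-128 varint at buf[i]; return (value, next_index)."""
    shift = value = 0
    while True:
        b = buf[i]; i += 1
        value |= (b & 0x7F) << shift
        if not b & 0x80:
            return value, i
        shift += 7

def decode_fields(buf):
    """Walk a protobuf message without a .proto: yields
    (field_number, wire_type, raw_value) triples."""
    i, fields = 0, []
    while i < len(buf):
        tag, i = read_varint(buf, i)
        field_no, wire_type = tag >> 3, tag & 0x7
        if wire_type == 0:            # varint (int/enum/bool)
            val, i = read_varint(buf, i)
        elif wire_type == 2:          # length-delimited (string/bytes/nested)
            length, i = read_varint(buf, i)
            val = buf[i:i + length]; i += length
        elif wire_type == 5:          # fixed32
            val = buf[i:i + 4]; i += 4
        elif wire_type == 1:          # fixed64
            val = buf[i:i + 8]; i += 8
        else:
            break                     # groups (3/4) are deprecated; stop
        fields.append((field_no, wire_type, val))
    return fields
```

The "mapping problem" for signed ints is zigzag encoding: decode with `(n >> 1) ^ -(n & 1)`.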
Congrats. You may want to consider dropping the "reverse engineer" language though, since almost every application's ToS is clear on that being prohibited. Perhaps just "replay any application" or similar.
Yeah, agreed, this messaging is a bit confusing. Our focus is on helping people build automations, not on any mass-scale scraping.
Automations are also often prohibited by TOS.
Congratulations.
How do you handle SSL pinning? Most of the apps I interact with have some sort of SSL pinning, which is the hard part to circumvent. I tried Kampala but got stuck at the usual place: as soon as I enable it, ChatGPT stops working, most of my iPhone apps stop responding, etc.
I would love to use this tool to build an agent that simply subscribes me to my gym lessons instead of me having to use the horrible app. But even that relatively simple (iOS) app stopped working as soon as I enabled the proxy.
Unfortunately we can’t do much around SSL pinning yet. Not sure how deep you want to go, but there are several Frida scripts that patch common pinning implementations.
I also think mitmproxy (open source) has an option to spin up a virtual Android device that can bypass pinning via AVD. I have not tested how reliable it is though.
FWIW, it could also be a cert trust issue. I would try a quick Safari search to confirm the cert is fully trusted. ChatGPT is pinned, but the gym app makes me think it might be a trust or config issue on your device.
Happy to take a look as well. Email me at alex at zatanna dot ai.
Congratulations on the launch.
Totally unrelated, but I am just curious why you chose the name, as someone who is Ugandan and was born and raised in Kampala (which is the capital city of Uganda, BTW).
Congratulations again.
It was the (generated) name of the Conductor workspace when I started the project. We were going to rename it before launch but the name stuck lol :)
I was caught off guard as well!!
This makes me want to never create a public service again.
Definitely get that. Being hammered by scrapers is a massive PITA (especially with the latest aggressive AI crawlers). We focus primarily on letting people automate their existing workflows. All hosted workflows have rate limits to prevent mass scraping or affecting server workload in any real capacity. In fact, because we don't load JS/HTML and hit endpoints directly, I would guess that we end up consuming fewer server resources.
Oh no I’m not worried about the resources or rate limits.
If I made a mobile app and users simply used your automation service instead of my app, I'd lose the traffic/money/motivation to improve it.
If they ran into issues with your service, it could make my app look bad even though the error isn't in the app.
See Tailwind for a cautionary tale.
The requests still route through your servers/the data still lives with you. Kampala is a powerful tool but I don't see people replacing the actual apps with it. Most of our customers use it for automating repetitive actions in legacy dashboards.
> Because Kampala is a MITM, it is able to leverage existing session tokens/anti-bot cookies and automate things deterministically in seconds
If a web property has implemented anti-bot mechanisms, what ethical reasons do you have for providing evasion as a service?
I wouldn't consider what we do evasion really. We are using real tokens that you have received from your browser as a result of browsing the web. Any good anti-bot will have enforcement for abuses of that token.
Cool! The links on the page don't work, at least not for me, e.g., https://www.zatanna.ai/kampala#how-it-works
Also not clear on the page if it is apps from the local machine or on the network. Maybe some clearer examples and use cases would help?
Oops, now realizing that the pattern where we send you down to the latest-download link at the bottom is definitely confusing. Fixed so that the top button now sends you straight to the download.
Interesting product (Caido co-founder here). It is very hard to nail auth - probably the aspect most overlooked by end users. We are working on something similar for PoC reproduction of vulnerabilities.
Fingerprinting is also a hard thing to match perfectly; I would be curious to know what your strategy is there. My experience has been that unless you bundle multiple TLS libs it is almost impossible to do at 100%, because no single lib covers all the TLS extensions.
We’re currently running a variety of things for TLS/HTTP2. If you download the app you can see the full trace of the connection; we dump the TLS connection byte for byte with the different structured subsections. Between tls.peet.ws and bogdanfinn’s tls-client (which we use parts of, with some modifications), I would say HTTP3/TCP fingerprinting is probably the remaining issue. We currently don’t support HTTP3 connections (they’re niche, and the Apple system proxy doesn’t support them well), and TCP fingerprinting is a bit too low-level to build tooling for in Go currently. Possibly for a later release. Curious if you’ve tried bogdanfinn’s or the other existing tooling?
Zatanna
Kampala (had to double check it wasn’t Harris)
Just mulling these names over, how’d you come up with them?
PS: clear value prop!
Zatanna is a DC comic book character. I’m not sure either of us has even read comics, so not sure where that came from. For Kampala: when I started this I was trying Conductor for the first time, and the generated workspace name was Kampala (the capital of Uganda). We even have a 3rd name - we actually incorporated as NoPoll. That one’s a bit less inspiring though lol.
Gotta ask, did you talk to legal in any way before naming your company after someone's IP
Wireshark + some post processing?
Yep essentially. I would argue that we're probably closer to a MITM proxy like Proxyman than Wireshark. We don't do general packet sniffing (yet), although internally we use our own packet sniffing tools for reverse engineering on-prem installations.
Guess they are automating this with AI, clearly with intent to reproduce websites on their own. Clone-any-app, pretty much.
(Every app that hasn't hidden its networking, that is.)
Great job Alex!
Think this is really interesting, especially for creating datasets. Proxyman was always hard for me to use, so connecting something like it to an MCP is something I have been waiting for.
Quick question: How do you handle session re-auth mid-script?
Congrats on the launch. I need that conference script!
Thanks Ben! For session re-auth, we attempt to agentically find the session refresh/login endpoints and make those part of the flow as an auth provider. This can be a bit sketchy though, and it's the main bottleneck right now. Currently working on some workarounds that let us piggyback on the browser; those should land by next week :)
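In case it helps picture the pattern: once the refresh endpoint has been discovered, the "auth provider" can be as small as a refresh-on-401 wrapper. All names here are invented for illustration; the hard agentic part is finding `refresh_token_fn` in the first place:

```python
class AuthProvider:
    """Wraps an API call with automatic re-auth: on a 401, hit the
    discovered refresh endpoint once and retry the original request."""

    def __init__(self, refresh_token_fn):
        self.refresh = refresh_token_fn   # e.g. POST /session/refresh
        self.token = None

    def call(self, request_fn):
        if self.token is None:
            self.token = self.refresh()   # initial auth
        status, body = request_fn(self.token)
        if status == 401:                 # session expired mid-script
            self.token = self.refresh()
            status, body = request_fn(self.token)
        return status, body
```

Anything beyond a single retry (rotating refresh tokens, full re-login) is where the browser piggybacking comes in.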
Thank you! Looking forward to it.
How does this work? E.g., how is it even possible to deduce the Bitcoin structure from the RPC list?
Sorry, a bit confused by your question here. If you're asking about JSON-RPC, we handle this via parsing. The AI can then deduce the structure most of the time, given enough context.
How is this different/better than Charles Proxy, Proxyman, or similar apps?
I’ve probably spent on the order of months of my life in Proxyman/Charles/Burp/powhttp. All are great, but I’ve never been completely satisfied with the UX/features for building automations. As for differences: we don’t modify TLS/HTTP2 connections, we have a fully featured MCP (each UI action is an API action by definition), and we’ve built more robust automation tooling into the app itself. The goal is an AI-native Burp Suite/powhttp with a Proxyman-like UI.
Guess it's time to move to gRPC and private encryption.