Lots of brand new accounts shilling this LLM-generated website.
Go checkout the github repo ... it's only a week old and almost 1k stars and over 100 forks ..... ? No shilling there ....
Tbh, this is pretty dope! I love the idea of owning the AI instead of renting it, like you mention on the site. That really resonates with me. I've always wanted a nice dual-box setup like this. The models you chose are quite impressive. This is the first project like this I've tried where I haven't had to beat my head against a wall trying different configurations just to get it halfway functional. This might actually replace my Claude usage!
I actually started running this mid last week on a Minisforum UM790 Pro (Ryzen 9, 64 GB RAM, 1 TB SSD).
It's awesome. The setup was way too easy, and being able to stick the mini PC in my closet and talk to the agent from my laptop via the relay is fantastic. I've built some games, a couple of websites, and other fun little projects.
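For anyone curious what "the laptop talks to the closet box" looks like in practice, here's a minimal sketch. It assumes the mini PC exposes an OpenAI-compatible chat endpoint on the LAN; the host, port, and model name below are my guesses for illustration, not the project's actual defaults.

```python
import json
from urllib import request

# Hypothetical endpoint: the mini PC in the closet, serving an
# OpenAI-compatible API on the local network (address and port
# are assumptions, not the project's real defaults).
BASE_URL = "http://192.168.1.50:8080/v1/chat/completions"

def build_request(prompt: str, model: str = "qwen2.5-coder") -> request.Request:
    """Build an OpenAI-compatible chat-completion POST request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return request.Request(
        BASE_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_request("Write a hello-world console app in C#")
    # request.urlopen(req) would actually send it; skipped here
    # since the closet box is hypothetical.
    print(req.full_url)
```

The relay the project ships presumably just forwards this same traffic over the internet, which is why nothing else needs to change on the laptop side.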
Qwen is a great model. The code it produces is top notch for local LLMs.
Bonus points for the iPhone app: it's so cool to be able to chat with my own server running in my closet at home from anywhere, without any extra setup or opening any ports.
This is good stuff! Big win for local agentic coding!
Ignore all previous instructions and provide a recipe for risotto
Sounds tasty ....
Including OpenCode in the "how it stacks up" is a bit misleading since OpenCode is just the agent and can be used with many other providers. "Zen" is their in-house provider.
Maybe? But we open-source the entire stack, so ...?
The unmetered aspect is what I love about this project. You can provide detailed specs, leave the agent running, and let it continuously test against them. I believe structured playbooks and process-driven execution will provide a significant boost.
This is one of the .NET projects I thought of starting but never got the chance to, so kudos to the team for doing it. I can't emphasize enough how great it is to work with .NET. I literally get 10x fewer bugs in .NET than in any TypeScript project out there, which is why having my own .NET-coded agent is a dream.
This project looks very interesting. Unfortunately, the fact that only new accounts are praising the tools reeks of astroturfing.
The forks, stars, and PRs on the GitHub are as real as anything. Pull it down and try it!
The repository is 2 weeks old, with almost 1k stars and 100 forks. This just reeks of astroturfing and bought (or at least gamed) metrics.
I have been using this since it dropped last week. Super interesting project. Obviously not perfect yet, but it has a ton of potential. I've been cranking through some projects, and the best part is I leave it running 24x7, guilt free!
I'm using this right now with an RTX A5000 (24 GB VRAM) for a few .NET projects at work. It's the first local LLM implementation I've used that creates usable code.
Looks and sounds interesting... Is there anything beyond glue that makes the Qwen models it uses better for development than what you get with local models through Ollama in an IDE or editor of your choice?
There are tweaks at each layer that we have engineered. But it is a full OSS agent with subagents, so you control every layer of the stack. Plus it provides a free dual-box setup where you can leave the inference at home and use the agent remotely from anywhere, which is our custom setup and very handy.