Our newest project is taking flight: meet codename goose! 🪶
Today, we launched an open source, on-machine AI agent. It’s modular, works with your preferred LLM, and integrates seamlessly with developer tools and other software via MCP.
Developers, check it out!
https://block.github.io/goose/blog/2025/01/28/introducing-codename-goose/
Comments
On the one hand, this is handy. On the other, it's utterly fucking terrifying. Is there some way to restrict commands besides asking the LLM nicely?
https://block.github.io/goose/docs/guides/using-goosehints
I'd like something more deterministic: an ACL or regex controlling access to files, checked directly by whatever tool actually does the file access, not by the LLM.
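Roughly what I mean, as a minimal sketch. The roots, patterns, and function names here are hypothetical, not goose's actual API; the point is that the check runs inside the tool, where the LLM can't talk its way past it:

```python
import re
from pathlib import Path

# Hypothetical deterministic ACL, enforced by the file tool itself --
# the LLM never sees or bypasses these checks. Paths and patterns are
# illustrative only.
ALLOWED_ROOTS = [Path("/home/user/project").resolve()]
DENY_PATTERNS = [re.compile(r"\.env$"), re.compile(r"/\.ssh/")]

def check_path(raw_path: str) -> Path:
    """Resolve a path and enforce the ACL; raise on any violation."""
    path = Path(raw_path).resolve()  # collapses ../ segments and symlinks
    if not any(path.is_relative_to(root) for root in ALLOWED_ROOTS):
        raise PermissionError(f"{path} is outside the allowed roots")
    if any(pat.search(str(path)) for pat in DENY_PATTERNS):
        raise PermissionError(f"{path} matches a deny pattern")
    return path

def read_file(raw_path: str) -> str:
    """The file tool runs check_path before every read, unconditionally."""
    return check_path(raw_path).read_text()
```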
There's also a shell tool that just straight up passes LLM output to Bash.
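A deterministic layer in front of that would have to vet the command before Bash ever sees it. A rough sketch of what such a gate could look like (an illustrative allowlist, not anything goose ships):

```python
import re
import shlex
import subprocess

# Hypothetical allowlist gate in front of the shell tool. Only commands
# whose first token is allowlisted run at all, and shell metacharacters
# are rejected outright, so Bash never re-parses anything.
ALLOWED_COMMANDS = {"ls", "cat", "git", "rg"}
METACHARS = re.compile(r"[;&|`$()<>]")

def run_guarded(llm_output: str) -> str:
    """Deterministically vet an LLM-proposed command before running it."""
    if METACHARS.search(llm_output):
        raise PermissionError("shell metacharacters are not allowed")
    tokens = shlex.split(llm_output)
    if not tokens:
        raise PermissionError("empty command")
    if tokens[0] not in ALLOWED_COMMANDS:
        raise PermissionError(f"{tokens[0]!r} is not on the allowlist")
    # shell=False (the default): tokens go straight to exec, never through Bash
    result = subprocess.run(tokens, capture_output=True, text=True, check=True)
    return result.stdout
```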