I’ve been building VT Code, a Rust-based terminal coding agent that combines semantic code intelligence (Tree-sitter + ast-grep) with multi-provider LLMs and a defense-in-depth execution model. It runs in your terminal with a streaming TUI, and also integrates with editors via ACP and a VS Code extension.
* Semantic understanding: parses your code with Tree-sitter and does structural queries with ast-grep.
* Multi-LLM with failover: OpenAI, Anthropic, xAI, DeepSeek, Gemini, Z.AI, Moonshot, OpenRouter, MiniMax, and Ollama for local models; swap providers via an env var.
* Security first: tool allowlist + per-arg validation, workspace isolation, optional Anthropic sandbox, HITL approvals, audit trail.
* Editor bridges: Agent Client Protocol (ACP) support (Zed); VS Code extension (also works in Open VSX-compatible editors like Cursor/Windsurf).
* Configurable: vtcode.toml with tool policies, lifecycle hooks, context budgets, and timeouts.
GitHub: https://github.com/vinhnx/vtcode
I'm not building a coding agent; I'm building an English-writing-improvement agent.
I ask the agent to generate things like:
{ original_sentence: xxxx, improved_sentence: yyyy }
and I use the original_sentence to locate the original content in the document, then replace it with the improved_sentence.
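The locate-and-replace step can be sketched like this (Python just for illustration; the function name and error handling are mine):

```python
def apply_edit(document: str, original_sentence: str, improved_sentence: str) -> str:
    """Swap one model-suggested sentence, failing loudly if it can't be located."""
    if original_sentence not in document:
        # The model's quoted sentence doesn't appear verbatim in the document.
        raise ValueError("original_sentence not found in document")
    # Replace only the first occurrence, in case the sentence repeats.
    return document.replace(original_sentence, improved_sentence, 1)

doc = "The cat sat on teh mat. The end."
print(apply_edit(doc, "The cat sat on teh mat.", "The cat sat on the mat."))
```

Failing loudly when the quoted sentence is missing is deliberate: silently skipping the edit hides exactly the mismatch problem described below.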
But the AI sometimes hallucinates the original_sentence: the generated original_sentence doesn't exactly match the actual sentence in the document, so the exact-match replacement fails.
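One fallback I can sketch (an assumption, not something any particular agent is confirmed to do) is fuzzy-matching the model's quoted sentence against the document's actual sentences with Python's stdlib `difflib`, and only applying the edit above a similarity threshold:

```python
import difflib

def find_closest_sentence(document_sentences, claimed_sentence, cutoff=0.8):
    """Return the document sentence most similar to the model's claimed original,
    or None if nothing is similar enough (likely a hallucination)."""
    matches = difflib.get_close_matches(
        claimed_sentence, document_sentences, n=1, cutoff=cutoff
    )
    return matches[0] if matches else None

sentences = ["The quick brown fox jumps.", "It was a dark night."]
# Model slightly misquoted the original ("jumps" -> "jumped"); fuzzy match still finds it.
print(find_closest_sentence(sentences, "The quick brown fox jumped."))
```

The cutoff trades false matches against rejected edits: too low and a hallucinated sentence rewrites the wrong text, too high and minor misquotes are dropped.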
How does a coding agent solve it?