We've been building with AI tools and noticed there wasn't a good way to manage MCP servers across a team or see what's actually flowing to LLM providers. Who's running what? Which tools are approved? What data is going where, and what's being shared on AI websites?
So we built CyberCage (<https://cybercage.io>).
# What it does:
- MCP Management — Automatic or manual discovery of MCP servers, with approval workflows. Manage allowed MCP servers org-wide, down to individual tools (a hypothetical policy sketch follows this list). Secure MCP catalog, integrated with GitHub's MCP Catalog.
- Operations — Manage allowed AI applications org-wide. Full audit logs (Splunk integration available). Notifications via Slack, Teams, Webex, webhooks.
- Network inspection — Inspect traffic to configured AI domains (and other domains, if configured) for PII, private-data exfiltration, and de-anonymization, with masking of sensitive content (a minimal masking sketch also follows this list).
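To make the per-tool granularity concrete, here's a sketch in Python of what an org-wide MCP allowlist might look like. The schema, field names, and `is_tool_allowed` helper are all illustrative assumptions, not CyberCage's actual policy format:

```python
# Hypothetical sketch of an org-wide MCP allowlist with per-tool granularity.
# Field names and structure are illustrative, not CyberCage's actual schema.

ALLOWED_MCP_SERVERS = {
    "github-mcp": {
        "source": "github-mcp-catalog",   # assumed catalog identifier
        "approved_by": "security-team",
        "tools": {"search_repositories", "get_file_contents"},  # only these tools
    },
    "internal-docs": {
        "source": "self-hosted",
        "approved_by": "platform-team",
        "tools": "*",                     # all tools on this server allowed
    },
}

def is_tool_allowed(server: str, tool: str) -> bool:
    """Return True if a given MCP tool call is permitted by the org policy."""
    policy = ALLOWED_MCP_SERVERS.get(server)
    if policy is None:
        return False                      # unknown servers are denied by default
    tools = policy["tools"]
    return tools == "*" or tool in tools

print(is_tool_allowed("github-mcp", "get_file_contents"))  # True
print(is_tool_allowed("github-mcp", "delete_repository"))  # False: not allowlisted
```

Deny-by-default for unknown servers is the design choice that makes discovery plus approval workflows meaningful: new servers show up in discovery but do nothing until approved.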
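And for the content-inspection side, a minimal sketch of the general regex-masking technique. Real PII detection is considerably more involved, and CyberCage's pipeline isn't described here; the patterns below are deliberately simple illustrations:

```python
# Minimal regex-based PII masking sketch. This shows the general technique
# of detect-then-mask before text leaves the network, not CyberCage's pipeline.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD":  re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def mask_pii(text: str) -> str:
    """Replace detected PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Contact jane.doe@example.com, SSN 123-45-6789, about the invoice."
print(mask_pii(prompt))
# Contact [EMAIL], SSN [SSN], about the invoice.
```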
# Works with:
AI IDEs: Claude Code, Cursor, VS Code, Windsurf, Antigravity. Low-code platforms: n8n (native integration).
# In private beta:
On-device network agent for configured AI domains. Content inspection for PII and sensitive data. Packet metadata anomaly analysis.
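As a rough illustration of what packet-metadata anomaly analysis can mean in practice, here is a toy z-score check on request sizes. The agent's actual method isn't described in this post; this only shows the general idea of flagging unusually large transfers toward an AI domain:

```python
# Toy sketch of packet-metadata anomaly analysis: a z-score over request
# sizes. The beta agent's real method is not described in the post above.
from statistics import mean, stdev

def flag_anomalies(sizes: list[int], threshold: float = 3.0) -> list[int]:
    """Flag sizes more than `threshold` standard deviations above the mean,
    a crude proxy for unusual data volumes leaving toward an AI domain."""
    mu, sigma = mean(sizes), stdev(sizes)
    return [s for s in sizes if sigma and (s - mu) / sigma > threshold]

baseline = [1_200, 1_350, 1_180, 1_400, 1_290] * 10  # typical prompt-sized requests
print(flag_anomalies(baseline + [250_000]))  # [250000]: a sudden large upload stands out
```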
# Coming soon:
BYOLLM (bring your own models for inspection). Browser extensions.
See it in action: <https://youtu.be/Zy7XhkQkUlk>
We built this for visibility and control over AI tooling without slowing teams down.
P.S. We're planning to open source CyberSmol v1.0 — a small model fine-tuned for AI threat detection — once it's ready.
Happy to answer questions ♥