Replies: 2 comments
This is a really important topic. The security gap between enterprise and open-source AI agent frameworks is something the community needs to address. For teams running open-source CrewAI in production, there are a few practical things you can layer on:
One open-source project worth checking out is ClawMoat — it's specifically designed as a runtime security layer for AI agents. It can intercept agent actions and validate them against configurable security policies. Think of it as a WAF, but for AI agents.

The OWASP Top 10 for LLM Applications is also a great framework for thinking about which security controls you need: https://owasp.org/www-project-top-10-for-large-language-model-applications/

Would love to see CrewAI adopt a middleware/plugin architecture for security so the community can build these protections as extensions.
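To make the "runtime security layer" idea concrete, here is a minimal sketch of what intercepting agent actions and validating them against configurable policies could look like. This is a hypothetical illustration, not ClawMoat's actual API — the `ActionGuard` class, its method names, and the policy rules are all invented for this example.

```python
# Hypothetical sketch (NOT ClawMoat's real API): a runtime policy layer
# that intercepts agent tool calls and validates them before execution --
# the "WAF for AI agents" idea described above.
import re
from typing import Any, Callable


class PolicyViolation(Exception):
    """Raised when an agent action fails a security policy check."""


class ActionGuard:
    def __init__(self) -> None:
        self._allowed_tools: set[str] = set()
        self._deny_patterns: list[re.Pattern] = []

    def allow_tool(self, name: str) -> None:
        # Allowlist a tool by name; anything not listed is blocked.
        self._allowed_tools.add(name)

    def deny_argument(self, pattern: str) -> None:
        # Block any call whose string arguments match this regex.
        self._deny_patterns.append(re.compile(pattern))

    def guard(self, tool: Callable[..., Any]) -> Callable[..., Any]:
        # Wrap a tool so every invocation is checked at call time.
        def wrapped(*args: Any, **kwargs: Any) -> Any:
            if tool.__name__ not in self._allowed_tools:
                raise PolicyViolation(f"tool not allowlisted: {tool.__name__}")
            for value in list(args) + list(kwargs.values()):
                if isinstance(value, str):
                    for pat in self._deny_patterns:
                        if pat.search(value):
                            raise PolicyViolation(f"blocked argument: {value!r}")
            return tool(*args, **kwargs)
        return wrapped


# Usage: policies are plain configuration, so they can live outside the
# agent code -- the property a middleware/plugin architecture would give.
guard = ActionGuard()
guard.allow_tool("read_file")
guard.deny_argument(r"/etc/")


@guard.guard
def read_file(path: str) -> str:
    return f"contents of {path}"


print(read_file("notes.txt"))          # permitted by policy
try:
    read_file("/etc/passwd")           # denied: matches /etc/ pattern
except PolicyViolation as exc:
    print("blocked:", exc)
```

A real implementation would hook the framework's tool-execution path rather than requiring manual decoration, which is why first-class middleware support in CrewAI would matter.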
Quick update: ClawMoat is now on npm.
Hi
I read on the website that the enterprise edition of CrewAI complies with SOC 2 and HIPAA, and that it is "Built with robust security protocols to keep your data safe."

I was wondering if anyone has information or reading material about that. I'm trying to understand CrewAI's enterprise approach to security and how it is implemented — data security, communications security, etc. It would be great if they have such a thing, but knowing only the open-source CrewAI, I just can't see that we are there yet.