by kaushik92 on 5/29/25, 2:52 PM with 4 comments
Infosec and compliance teams are now responsible for tracking security and compliance risks of a growing number of AI agents across external and internal apps and third-party vendors.
compliant-llm gives you a way to:
- Define and run comprehensive red-teaming tests for AI agents - Maps test outcomes to compliance frameworks like NIST AI RMF - Generate detailed audit logs and documentation - Integrate with Azure, OpenAI, Anthropic, or wherever you host your models - With an open-source, self-hosted solution
Install and launch the red-teaming dashboard locally:
pip install compliant-llm
compliant-llm dashboard
This opens an interactive UI for running AI compliance checks and analyzing results.We’re at v0.1, and would love your feedback. Tell us about the compliance or AI risk issues you’re facing, and we’ll prioritize what matters most.
by aavci on 5/29/25, 6:12 PM
by praveenkumarnew on 5/29/25, 6:15 PM
by nikhil896 on 5/29/25, 10:24 PM
by andrewski77 on 5/29/25, 7:54 PM