FAQs
Frequently asked questions.
The AI Risk platform covers several distinct areas of risk management; for example:
AI model risk management. It detects hallucinations, toxic language, and other potential model failures.
Confidential information. It detects and blocks (or allows, if you choose) confidential information, personally identifiable information (PII), and secret keys (e.g. your ChatGPT API key) from being sent to an external AI model, such as an LLM.
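The screening described above can be sketched with a simple pattern-based check run before a prompt leaves for the external model. This is a minimal illustration, not the platform's actual implementation; the function name and the patterns (a key prefix, a US SSN shape, an email address) are assumptions chosen for the example.

```python
import re

# Illustrative patterns only; a real deployment would use a broader, tuned rule set.
PATTERNS = {
    "openai_api_key": re.compile(r"\bsk-[A-Za-z0-9]{20,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def screen_prompt(prompt: str, block: bool = True) -> tuple[bool, list[str]]:
    """Return (allowed, findings) for a prompt before it reaches the external LLM.

    With block=False the prompt is allowed through but findings are still
    reported, matching the "block or allow" choice described above.
    """
    findings = [name for name, rx in PATTERNS.items() if rx.search(prompt)]
    allowed = not (block and findings)
    return allowed, findings
```

A prompt containing a matching secret is rejected by default, while the same call with `block=False` lets it through but still records what was found.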
Use case. It constrains the user to the use case for the specific AI agent.
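One simple way to picture this containment is an in-scope check that refuses off-topic prompts. The keyword approach, the example billing use case, and the function names below are all illustrative assumptions; a production system would more likely use a trained classifier.

```python
# Hypothetical keyword set defining an example "billing" use case.
USE_CASE_KEYWORDS = {"invoice", "billing", "payment", "refund"}

def in_scope(prompt: str) -> bool:
    """Return True if the prompt appears to stay within the agent's use case."""
    words = {w.strip(".,!?").lower() for w in prompt.split()}
    return bool(words & USE_CASE_KEYWORDS)

def respond(prompt: str) -> str:
    """Refuse off-topic prompts; otherwise hand off to the model (placeholder)."""
    if not in_scope(prompt):
        return "I can only help with billing questions."
    return "FORWARD_TO_LLM"  # placeholder for the real model call
```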
Compliance. It covers a number of areas, including regulatory compliance where applicable. For example, the AI Risk platform records all conversations along with their metadata, such as the user, time, cost, and documents and data accessed. That data can be used by a compliance team for e-discovery, by system administrators to review hacking or data-exfiltration attempts, and by the AI development team to review user feedback and identify strong and weak points of the process.
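The kind of record described above can be sketched as one JSON line per conversation, suitable for e-discovery search or administrator review. The field names and record shape here are assumptions for illustration, not the platform's actual schema.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ConversationRecord:
    # Illustrative fields mirroring the metadata listed above:
    # user, time, cost, and documents/data accessed.
    user: str
    prompt: str
    response: str
    cost_usd: float
    documents_accessed: list = field(default_factory=list)
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def to_audit_log(record: ConversationRecord) -> str:
    """Serialize a conversation record as a single JSON line."""
    return json.dumps(asdict(record))
```

Storing one self-describing JSON line per conversation keeps the log append-only and easy for each audience (compliance, admins, developers) to filter on the fields they care about.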