AI Risk, Inc. January 2025 Newsletter

Top AI News
The Colorado Artificial Intelligence Act, the first comprehensive AI law in the US, goes into effect on February 1, 2026. It focuses on protecting consumers and applies to all companies doing business in the state that employ “high-risk” AI systems (e.g., for banking or healthcare). We discuss the ramifications in detail below. See our video on YouTube discussing how to comply with the law: https://youtu.be/CAUk_xhTSiU
DeepSeek's Disruptive AI Model: Chinese startup DeepSeek has unveiled its latest AI model, claiming performance on par with leading U.S. models but developed at a fraction of the cost. This development has the potential to disrupt the global AI landscape, challenging the dominance of established tech giants. See our video discussing this on YouTube: https://youtu.be/p0NWuMzyces
Reid Hoffman Raises $24.6 Million for AI Cancer-Research Startup: LinkedIn co-founder Reid Hoffman, alongside cancer researcher Siddhartha Mukherjee, has launched Manas AI, an AI-driven drug discovery startup. The company secured $24.6 million in initial funding, with significant investments from General Catalyst, Greylock, and Microsoft. Manas AI aims to leverage artificial intelligence to analyze complex biological data, focusing initially on breast cancer, prostate cancer, and lymphoma. WSJ
Donald Trump Announces Up to $500 Billion for Stargate AI Project: President Donald Trump has unveiled the Stargate initiative, an AI infrastructure project with funding of up to $500 billion. The project is a collaboration with tech giants like OpenAI and Oracle, with an initial $100 billion allocated for building an AI data center in Texas. SoftBank is also contributing, with CEO Masayoshi Son indicating that the investments could create over 100,000 jobs. The Times & The Sunday Times
IPO Watchlist: AI and Machine Learning Startups Poised for Public Offering: Several AI and machine learning startups are preparing for initial public offerings (IPOs), including Anthropic, Cohere, and Mistral AI. These companies have secured significant funding rounds, such as Amazon's $4 billion investment in Anthropic and Cohere's $270 million Series C. Despite challenges, the AI sector is expected to drive a resurgence in IPO activity. PitchBook
AI Leads Venture Capital Investments in 2024: In 2024, AI startups attracted nearly $19 billion in venture capital funding, accounting for 28% of all VC investments. This surge includes significant investments in sub-sectors like generative AI and AI infrastructure, reflecting the technology's expanding applications across various industries. GoingVC | Venture Capital Ecosystem
Feature: AI Regulatory Compliance Is Here and Now!
The Colorado Artificial Intelligence Act (CAIA) applies to companies that develop, deploy, or operate AI systems within Colorado, regardless of where they are headquartered. This includes businesses that offer products or services in Colorado, such as banks, which often use AI for loan decisions, fraud detection, and customer service.

To comply, banks must conduct detailed risk and impact assessments for high-risk AI systems, ensuring these systems do not result in unfair discrimination or harm to customers. Transparency is a critical requirement: banks must disclose when AI is used to make decisions that affect customers (e.g., loan approvals) and give customers the right to opt out of fully automated decision-making.

Banks are also required to implement robust AI governance frameworks that include regular audits of AI models to prevent bias, ensure fairness, and maintain accuracy. Additionally, banks must adhere to Colorado's data privacy laws, ensuring that customer data used by AI systems is secure, minimized, and processed only with explicit consent when sensitive information is involved. Continuous monitoring and reporting are also essential, with regulators empowered to review compliance and impose penalties for violations.
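To make the transparency and opt-out requirements concrete, here is a minimal sketch in Python. Every name in it (Customer, LoanApplication, route_loan_decision) is a hypothetical illustration of ours, not part of the statute or of any particular product:

from dataclasses import dataclass

@dataclass
class Customer:
    customer_id: str
    opted_out_of_automation: bool  # hypothetical opt-out preference on file

@dataclass
class LoanApplication:
    customer: Customer
    amount: float

def notify(customer: Customer, message: str) -> None:
    # Stand-in for a real disclosure channel (letter, email, in-app notice).
    print(f"[notice to {customer.customer_id}] {message}")

def route_loan_decision(app: LoanApplication) -> str:
    # Disclose AI involvement up front, per the transparency requirement.
    notify(app.customer, "An AI system may be used to evaluate your application.")
    if app.customer.opted_out_of_automation:
        # Honor the opt-out: route to a human underwriter instead of the model.
        return "human_review"
    return "automated_scoring"  # AI decision; log it for audit purposes

The key design point is that the opt-out check runs before any model is invoked, so the disclosure, the customer's preference, and the routing decision can all be logged for regulators.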
Using the NIST AI Risk Management Framework as a Safe Harbor
As state and local AI rules proliferate in the US, Colorado (and, we hope, other states to come) offers a safe harbor: a company that follows the national NIST AI RMF is treated as compliant with the (often more detailed) state rules.
Introduction to the NIST AI RMF
The NIST Artificial Intelligence Risk Management Framework (AI RMF) is a voluntary framework designed to help individuals, organizations, and society manage the risks associated with artificial intelligence (AI) systems. It promotes the trustworthy and responsible use of AI, ensuring that AI development aligns with societal values and legal requirements. The framework is structured around four core functions: Govern, Map, Measure, and Manage. Here's an overview of each function and guidance on how to comply with the framework.
Govern
The Govern function involves setting the policies and procedures necessary to manage AI risks effectively, ensuring that AI deployment aligns with organizational values, legal requirements, and societal norms. It starts with a policy covering what you will (and will not) do with AI, as well as which models you will use. Make sure there is a named person responsible for AI at your company.
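As a minimal sketch (in Python, with field names and entries that are purely illustrative, not drawn from the NIST AI RMF itself), such a policy can be captured in machine-readable form so it can be enforced and audited:

# Hypothetical AI usage policy, expressed as plain Python data.
AI_POLICY = {
    "owner": "Chief AI Risk Officer",           # the named person responsible for AI
    "approved_models": ["model-a", "model-b"],  # illustrative allowlist of models
    "prohibited_uses": [
        "fully automated credit denials without human review",
        "processing sensitive data without explicit consent",
    ],
    "review_cycle_days": 90,                    # how often the policy is revisited
}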
Map
The Map function covers who can do what with AI and the potential risks associated with it. At a large company, onboarding an employee should automatically set them up with the appropriate AI access. On the risk side, high-probability/high-impact risks are the most important to address, including cybersecurity specifically for AI and maintaining data privacy and security.
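A sketch of what automatic provisioning might look like, assuming a hypothetical role-to-entitlement map (the roles and permission names below are invented for illustration):

from enum import Enum

class Role(Enum):
    ANALYST = "analyst"
    ENGINEER = "engineer"
    ADMIN = "admin"

# Hypothetical role-to-permission map; real entitlements would live in the
# identity provider, not in application code.
AI_ENTITLEMENTS = {
    Role.ANALYST: {"chat", "document_summarization"},
    Role.ENGINEER: {"chat", "code_assistant", "model_tuning"},
    Role.ADMIN: {"chat", "code_assistant", "model_tuning", "audit_log_access"},
}

def provision_ai_access(employee_id: str, role: Role) -> set[str]:
    # Invoked by the onboarding workflow so AI access is granted automatically.
    permissions = AI_ENTITLEMENTS[role]
    print(f"{employee_id} granted: {sorted(permissions)}")
    return permissions

provision_ai_access("jdoe", Role.ANALYST)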
Measure
The Measure function assesses the performance and effectiveness of AI systems in mitigating identified risks. Part of this is a full audit tool that tracks what each user, agent, and model does, including model updates. That data is available for compliance and regulatory purposes, but it can also be used by the technology team. Users should also be able to report problems with the AI, creating a key “feedback loop” for issues that might otherwise go unnoticed.
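One simple way to implement such an audit trail is an append-only log with one timestamped record per action. The sketch below is hypothetical (the field names and the ai_audit.jsonl file are our inventions), but it illustrates a single record format covering users, agents, models, and user-reported problems:

import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    actor: str   # the user or agent that acted
    model: str   # the model (and version) involved
    action: str  # e.g. "prompt", "model_update", "user_feedback"
    detail: str  # free-text description or a user-reported problem

def log_event(event: AuditEvent, path: str = "ai_audit.jsonl") -> None:
    # Append-only JSON Lines log: one timestamped record per action,
    # queryable later by compliance staff or the technology team.
    record = {"ts": datetime.now(timezone.utc).isoformat(), **asdict(event)}
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")

# A user reporting a problem lands in the same log, closing the feedback loop:
log_event(AuditEvent("jdoe", "model-a", "user_feedback", "answer cited a retracted source"))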
Manage
The Manage function involves both reactive measures, such as responding to a cyber incident involving AI, and proactive measures, such as re-authorizing your AI admins on a regular basis. Develop and implement strategies to reduce identified risks to acceptable levels, and prepare for potential incidents by building response plans and conducting regular drills.
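The proactive side can be as simple as a scheduled job that flags admins whose approval has lapsed. A minimal sketch, assuming a hypothetical admin registry and a 180-day re-authorization window:

from datetime import date, timedelta

# Hypothetical registry: admin name -> date of last re-authorization.
AI_ADMINS = {
    "alice": date(2024, 3, 1),
    "bob": date(2025, 1, 10),
}

def admins_needing_reauthorization(max_age_days: int = 180) -> list[str]:
    # Flag anyone whose approval is older than the allowed window.
    cutoff = date.today() - timedelta(days=max_age_days)
    return [name for name, approved in AI_ADMINS.items() if approved < cutoff]

print(admins_needing_reauthorization())  # run on a schedule; alerts go to the AI owner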
Conclusion
AI regulation is already here. Artificial Intelligence Risk, Inc. has an award-winning enterprise AI platform customized for your industry, including a built-in compliance system for regulated industries (e.g., banking and healthcare) that facilitates full compliance with the NIST AI RMF. Please contact us for a free demo or trial.
AIR-GPT was used in the production and editing of this article.
Copyright © 2025 by Artificial Intelligence Risk, Inc. This article may not be redistributed in whole or part without express written permission from the author.