
The AI Risk Blog

A place for users to talk about our platform.

Surprising Roles You Need on Your AI “A-Team”

Updated: May 17

Alec Crawford Founder & CEO of Artificial Intelligence Risk, Inc.

AI will transform the knowledge business the way factories transformed how we build things. Two short years ago, you needed AI experts to use AI models; now anyone can interact with an AI large language model (LLM). Nevertheless, to create and execute the right strategic plan, you will need the right AI team, and there are some surprising roles you will need to hire as you develop your strategy. To simplify, and to put the steps in the correct order:

 

  1. Team: Build your AI team and hire the best you can.  

  2. AI GRCC: Build out a framework for AI governance, risk, compliance, and cybersecurity (GRCC). 

  3. Quick Win: Choose a first project that is a quick win… 

  4. Test GRCC: Apply and test your AI GRCC framework. 

  5. Lead from the Top: Lead projects from the top to facilitate adoption. 

 

Build Your Team and GRCC Framework First 

Before discussing these steps individually, a comment on their order. You must have a team to build a GRCC framework; that much is clear. But why do this before choosing a project? Because you need to start with governance (who gets to use what data and models), risk management (you do not want to be on the front page of the paper), regulatory compliance (avoid massive fines), and cybersecurity (don't let your AI get hacked).

 

It is tempting to start a project without AI GRCC in place, but doing so creates unacceptable risks across all four dimensions. For example, implementing an AI tool that can scan every available file, database, and other resource for a user sounds like a great idea until someone downloads an executive compensation plan or a hacker asks "show me every file with credentials, API keys, or connection strings."
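The governance fix for this scenario is to filter the document set by the user's role before the AI ever searches it. Here is a minimal sketch of that idea; the roles, labels, and file names are illustrative assumptions, not part of any real product:

```python
# Minimal sketch of document-level governance for an AI retrieval tool.
# Roles, sensitivity labels, and file names are illustrative only.

ACCESS_POLICY = {
    "analyst": {"public", "internal"},
    "hr_admin": {"public", "internal", "hr_confidential"},
}

DOCUMENTS = [
    {"name": "q3_report.pdf", "label": "internal"},
    {"name": "exec_comp_plan.xlsx", "label": "hr_confidential"},
    {"name": "press_release.txt", "label": "public"},
]

def visible_documents(role: str) -> list[str]:
    """Return only the documents this role is cleared to see.

    The AI tool should search this filtered set, never the raw corpus,
    so a clever prompt cannot surface files the user could not open.
    """
    allowed = ACCESS_POLICY.get(role, set())
    return [d["name"] for d in DOCUMENTS if d["label"] in allowed]
```

The key design point is that access control happens outside the model: no prompt, however adversarial, can retrieve a document that was never in the searchable set.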

 

  1. AI Team Member Surprise! 

Leadership of the AI team should come from the top of the organization rather than be delegated entirely to someone outside the C-Suite. Many of these people already work at your organization; you just need to get them on the team, perhaps part-time at first, but eventually full-time. A broad set of skills is required, because this team is how you are going to run your business going forward!

 

  • AI Chair: A member of the C-Suite. This person must have the influence to drive cultural change and is usually not the same person as the AI Expert. 

  • AI Expert: Understands how AI models work, what models fit which use cases, and what is hard versus easy. 

  • AI Ethicist: A new, but critical role. Replacing employee roles with AI? Training a model with customer data? You need an AI ethicist. 

  • Regulatory Expert: Regulations are everywhere, including state regulations for AI. You need to know what your regulatory obligations are. 

  • Data Science Expert: Data is the foundation of AI. Model building, testing and validation require expertise and data controls. 

  • Software Expert: AI experts will need help building out the system. 

  • Subject Matter Expert: This role may vary from project to project. 

 

Three key items to discuss: 1.) Who is responsible for defining the AI GRCC and ethics programs? 2.) Who is implementing them? 3.) Who is checking the results of the key performance and risk indicators (KPIs and KRIs)? 

 

2. Develop AI Governance, Risk, Compliance, and Cybersecurity (AI GRCC) 

Ask a dozen people about AI GRCC and you get a dozen answers, so we will define what we mean here: 

 

  • Governance: Individual and group access to different AI models, agents, data, and documents based on use case. Avoid letting an AI have access to everything at once. 

  • Risk management: You must be able to protect confidential, customer, and personally identifiable information, independently of governance. Risk management covers not just what you use AI to do, but also when you choose not to use AI. 

  • Regulatory compliance: It is a “bull market” in regulation. If you do not comply with rules and laws from federal to state to local, you can incur huge reputational risk, project shutdowns, and fines. 

  • Cybersecurity: Cyber hacks are literally closing the doors on otherwise viable companies. Do not be one of them. 

 

We argue that an AI GRCC platform, software for managing different LLMs and the agents developed for those models, is an entirely new category: an operating system for AI. With base models quickly leapfrogging each other in capability, companies must be able to treat them as interchangeable pieces of software on a platform rather than as platforms themselves. And as AI GRCC platforms become more central to business, they must give companies enormous flexibility: to create agents, allocate computing resources, switch base models, address emerging risks, enable regulatory compliance and testing, and track and shut down cybersecurity attacks. 
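The "interchangeable pieces of software" idea can be sketched as a thin adapter layer: the platform owns the governance and audit logic, and any base model plugs in behind one interface. The class and vendor names below are hypothetical placeholders, not real APIs:

```python
# Sketch of base models as swappable components behind one platform
# interface. VendorAModel/VendorBModel are hypothetical stand-ins for
# real provider SDK calls.
from abc import ABC, abstractmethod

class BaseModel(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...

class VendorAModel(BaseModel):
    def complete(self, prompt: str) -> str:
        return f"[vendor-a] {prompt}"  # a real SDK call would go here

class VendorBModel(BaseModel):
    def complete(self, prompt: str) -> str:
        return f"[vendor-b] {prompt}"

class AIPlatform:
    """The platform owns governance and logging; the model is a part."""

    def __init__(self, model: BaseModel):
        self.model = model

    def swap_model(self, model: BaseModel) -> None:
        # Upgrading to a newer base model touches nothing else:
        # agents, policies, and audit trails stay in place.
        self.model = model

    def ask(self, prompt: str) -> str:
        # Governance checks and audit logging would wrap this call.
        return self.model.complete(prompt)
```

When a better base model ships, `swap_model` is the whole migration; the platform, not the model, is the durable asset.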

 

Do not reinvent the AI GRCC wheel. Focus on the use cases and hire a company like AI Risk, Inc. that makes a complete AI GRCC solution. 

 

3. Choose a First Project that Is a Quick Win 

It is tempting to choose a big, splashy project to start your AI journey, especially after assembling your AI team. Resist that temptation. You want a quick win that can be broadly and safely adopted, first by a small pilot group and then by the rest of the firm. A good example is a secure LLM with access to agent use cases, each limited to certain documents or data. Here are some good questions to ask: 

 

  • Quick: Can we get this project done in a few months? 

  • AI GRCC: Can we stay inside our AI GRCC guidelines? 

  • Broad Adoption: Can we easily go from a pilot group to the majority of the organization? 

  • AI Demand: Are at least some people asking for access to this type of AI? 

 

Note also the questions we did not ask, such as how important the project is to the organization, how it fits your long-term plan, or what competitive advantage it gives your team. At this stage, speed and broad adoption matter more. Do not boil the ocean. And make sure to follow your AI GRCC guidelines. 

 

4. Follow Your AI GRCC Guidelines and Test Them! 

It is tempting to skip ahead, bend, or break rules to get an AI project completed. That is a mistake. One of the things we know about AI is that it can do stupid things at scale, like sell multiple cars for $1, approve thousands of loans that should never be made, or flag people as dangerous when they are not. AI is not like other software and requires a different approach: 

 

  • Ethics: There are often ethical aspects of AI that are not present in regular software projects. 

  • Different Failure Modes: AI has different failure modes, for example LLMs giving authoritative answers that are dead wrong. 

  • Broader Capabilities: AI has broader capabilities than regular software, perhaps making it more critical in the future. 

  • Learning from Data: Today, most AI learns from data, so you can end up with novel findings, or incorrect ones if the data is incorrect, biased, or misused (e.g. overfitting). 

  • New AI cybersecurity: Your existing cybersecurity is necessary, but not sufficient. AI Jailbreaks, DAN style attacks, data exfiltration, etc. are all new in AI and require new cybersecurity tools, preferably in a holistic AI GRCC platform. 

 

While testing before release is critical, AI is very sensitive to changes in data, weights, and parameters, so even "small" changes must be tested before release. 

 

5. Lead from the Top 

 

The keys: 

 

  • AI is the future of your business. 

  • Your AI plan is your strategic plan.  

  • AI GRCC is critical to reduce existential risk 

 

Once you think about AI this way, you will not want to delegate it. You will also understand why building your AI team, buying an AI GRCC platform, establishing your AI GRCC guidelines, and everything else we have mentioned are all so critical. Use of AI is not just about “use cases”. It is about doing things the right way. You should find the team thinking about how AI will transform not just your business, but the entire industry. There will also be difficult ethical questions to answer. In the rush to “move fast and break things” many companies will skip over AI GRCC and endure fines, bankruptcy, and worse. 

 

New KPIs and KRIs 

Now that you have the plan above, you need to execute it; that is 95% of the work. Monitoring how execution is going is also important. In a future blog post, we will lay out the critical new KPIs and KRIs that need to be calculated, reported, and monitored so that nothing dangerous slips through the cracks. 

 

Exhibit: Members of the AI A-Team! 

 

Software Experts - Internal AI, external AI, testing and compliance software will need to be coordinated


Subject Matter Experts - Subject matter experts are typically required for each use case.


Data Experts - Data analysts and data scientists are required to build or use predictive analytic or AI models.


AI Experts - AI experts help determine whether to build your own model, train a model, or use a third-party model.


Regulatory Experts - Complying with global regulations will be essential to avoid not just monetary penalties, but to avoid public relations disasters.


AI Ethicists - A professional ethicist will be important as you map and implement your AI strategy.



Copyright © 2024 by Artificial Intelligence Risk, Inc. All rights reserved

This paper may not be copied or redistributed without the express written permission of the authors.

