This article first appeared on radicalcompliance.com February 20th, 2023.

Today we are going to keep looking at artificial intelligence, and how corporations can get ahead of its risks. Our previous post on AI was primarily a list of potential risks that could run rings around your company if you’re not careful; so what steps can the board and senior executives take to prevent all that?

Well, first things first. AI is a new technology. So the first question governance and risk assurance teams should ask themselves is simple: how did we manage the adoption of new technologies in the past?

Plenty of people will answer, “Poorly,” or “I don’t know,” or something similar. Those answers actually demonstrate an important point. Lots of previous technologies were adopted haphazardly by employees first; and only then did senior management wake up to the need for committees and workstreams and SWOT analyses and all that fun stuff.

The goal today is to avoid a repeat of that dynamic — and the person who can help the board and senior management avoid it will be a valuable person indeed. So where can risk, audit, and compliance professionals turn for advice, and how can you put that advice to work in your own company?

Enter the risk management frameworks.

The most notable AI risk management framework right now comes from NIST, which released Version 1.0 of its voluntary AI Risk Management Framework in January. In fact, however, a few other AI frameworks are already out there:

  • NIST’s AI Risk Management Framework, Version 1.0
  • COSO’s guidance on applying its framework to artificial intelligence, published with Deloitte
  • The White House Blueprint for an AI Bill of Rights
  • The European Commission’s Ethics Guidelines for Trustworthy AI

From the above list, the NIST and COSO frameworks are the most useful for compliance and audit executives because they are true risk management tools that help you understand how to implement AI at an actual corporation. The others are worth reading, but they’re more a collection of good ideas for how AI should work, or pitfalls of bad AI that you want to avoid. That’s nice, but someone still needs to put structure and discipline around all that awareness; COSO and NIST help you do that.

Mapping Out AI Risks

I don’t know about you, but one thing that intimidates me about AI is the sheer number of issues it poses for corporations. This isn’t like switching from Oracle to SAP to run your business systems, or moving from an in-house email system to one managed by Gmail. Those business processes are already mature and well understood; you’re simply swapping out the technology that humans use to run them.

Artificial intelligence will let corporations design entirely new business processes. It’s more akin to the adoption of cloud computing or the arrival of mobile devices. It will allow you to set new strategic goals, change your financial targets, and redefine your human capital needs. At the same time, AI will also change how your company interacts with customers, employees, and third parties, which in turn will create new operational and compliance risks.

Simply put, you’ll need to think about how you’ll use AI, and how others will use it. You’ll need to consider how others’ use of AI affects you, and how your use of AI affects them.

To that end, I cooked up this risk-reward matrix:

| Risks | Benefits |
| --- | --- |
| Risks we pose to ourselves by using AI | Benefits we can bring to ourselves by using AI |
| Risks we pose to others by the use of AI | Benefits we can bring to others by our using AI |
| Risks others pose to us by their use of AI | Benefits we can gain by others’ use of AI |

The above matrix is one example of how an in-house risk committee could start to game out the implications of AI. Bring together the people within your enterprise who’d have good insight into each of those squares, such as:

  • IT
  • Sales & marketing
  • HR
  • Finance
  • Compliance & privacy
  • Legal

Then start brainstorming. Or assign people to the squares most relevant to them, to work up a list of potential risks and benefits. For example, compliance teams would presumably have lots to say about the risks the company poses to itself and to others; sales would have better insight into the benefits of your own use of AI and the risks of others using it.

Then the committee can reconvene to compare notes. See where risks and benefits overlap, or which risks and benefits are the largest, and therefore should get the most attention. Start to develop a process to manage AI’s arrival in your enterprise and your broader world.
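If it helps to make the exercise concrete, here’s a minimal sketch in Python of how a committee might track those squares. It’s purely illustrative; the square names, fields, and 1-to-5 sizing scale are my own assumptions, not anything prescribed by NIST or COSO:

```python
from dataclasses import dataclass, field

# The six squares of the risk-reward matrix described above.
SQUARES = [
    "risks_to_ourselves", "risks_to_others", "risks_from_others",
    "benefits_to_ourselves", "benefits_to_others", "benefits_from_others",
]

@dataclass
class Entry:
    description: str   # the risk or benefit, in plain language
    raised_by: str     # which function raised it (IT, HR, Legal, and so on)
    size: int          # rough 1-5 sizing, refined when the committee reconvenes

@dataclass
class RiskRewardMatrix:
    cells: dict = field(default_factory=lambda: {s: [] for s in SQUARES})

    def add(self, square: str, entry: Entry) -> None:
        self.cells[square].append(entry)

    def empty_squares(self) -> list:
        # Squares nobody has spoken for yet; each one needs an owner.
        return [s for s, entries in self.cells.items() if not entries]

    def biggest_items(self, n: int = 3) -> list:
        # The largest risks and benefits across all squares, which get attention first.
        everything = [e for entries in self.cells.values() for e in entries]
        return sorted(everything, key=lambda e: e.size, reverse=True)[:n]

# Example: compliance flags a risk the company poses to others.
matrix = RiskRewardMatrix()
matrix.add("risks_to_others",
           Entry("AI-driven pricing could treat protected classes unfairly",
                 raised_by="Compliance", size=4))
print(matrix.empty_squares())  # five squares still need entries
```

A shared spreadsheet does the same job, of course; the point is simply that every square gets an owner and nothing stays blank by accident.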

A Word on Governance

Risk management frameworks always start with governance, and for good reason: governance creates the systems that steer your employees toward a few basic goals, even when the day-to-day steps of that journey feel a bit rocky and improvisational.

So when we talk about a risk-reward matrix and in-house risk committees, we’re really talking about establishing a governance process to manage your company’s embrace of artificial intelligence. A few points then come to mind.

First, you should establish some sort of governance process because that’s something the board will want to see. Technically, the board doesn’t establish that governance process itself; it exists to assure that you, the management team, have established a sensible one. If you haven’t, and your company slowly finds itself outflanked by competitors embracing AI smartly, it’s not the board’s job to step in and develop that AI governance process. It’s the board’s job to replace the management team with new managers who can.

Second, establish a governance process because without one, employees in your enterprise will start implementing AI on their own. That creates the risk senior managers hate most of all: being surprised by something they didn’t know their company was doing.

I can recall one of the largest fast food businesses in the world (I won’t name them here, but you’ve eaten there) grappling with social media in the early 2010s. The company settled on a policy that when local units wanted to try something new on social media, they first had to review that project with a team at corporate HQ made up of legal and IT executives. Once corporate approved the local team’s idea, that idea became an “approved use” that any other local team could adopt freely. That sort of approach would fit well with AI, too.
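The same pattern maps neatly onto AI. As a rough sketch, again in Python and again purely hypothetical (the statuses, field names, and registry design are my assumptions, not that company’s actual system), the corporate review queue and the resulting catalog of approved uses might look like this:

```python
from dataclasses import dataclass
from enum import Enum

class Status(Enum):
    SUBMITTED = "submitted"
    APPROVED = "approved"    # becomes an approved use any local unit may adopt
    REJECTED = "rejected"

@dataclass
class AIUseProposal:
    title: str         # e.g., an AI chatbot for customer service
    local_unit: str    # the business unit proposing it
    status: Status = Status.SUBMITTED

class AIUseRegistry:
    """Review queue and catalog kept by the legal/IT team at corporate HQ."""

    def __init__(self) -> None:
        self.proposals: list = []

    def submit(self, proposal: AIUseProposal) -> None:
        self.proposals.append(proposal)

    def approve(self, title: str) -> None:
        # Corporate signs off; the idea becomes reusable company-wide.
        for p in self.proposals:
            if p.title == title:
                p.status = Status.APPROVED

    def approved_uses(self) -> list:
        # Any other local unit can adopt these without a fresh review.
        return [p for p in self.proposals if p.status is Status.APPROVED]

# Example: one region proposes an AI use; once approved, every region can reuse it.
registry = AIUseRegistry()
registry.submit(AIUseProposal("AI chatbot for customer service", local_unit="Midwest region"))
registry.approve("AI chatbot for customer service")
print([p.title for p in registry.approved_uses()])
```

The tooling matters far less than the rule it encodes: nothing goes live until corporate has seen it, and nothing already approved has to be re-litigated.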

Anyway, that’s enough for today. We’ve barely begun with artificial intelligence and there will be plenty more to say about it in the future.