The potential of AI technology is that it’s likely going to touch all aspects of our lives in ways we can’t even imagine at the moment. But whenever a technology has that kind of impact, it comes with an equal level of risk.
Earlier this year, Australian government workers fed grant applications into generative AI tools, including ChatGPT, to generate assessments of each one, which critics say could infringe on applicants’ confidentiality and security. And in the U.K., a journalist was able to bypass Lloyds Bank’s voice security features using AI to access his own account. This is just a small sample of this year’s submissions to the AI Incident Database, which tracks examples of AI systems causing safety issues, discrimination or other real-world problems.
The fact is, this technology won’t have the transformational impact we want it to if companies don’t manage these risks. That’s why it’s so important for businesses that see opportunities to use AI to also develop AI policies, along with the processes and systems to ensure company-wide compliance. Here’s how.
Identify your risk
Risk can arise at any point in the value chain: the way data is collected to train AI can pose a risk, but so can the way users interact with it, where it’s hosted and what environments the model is running in. Keeping an AI model in production too long can also expose it to data drift, so decisions around when to take a model offline can create risk, too. That’s why companies need to understand the potential problems that can materialize at every stage of an AI model’s life cycle, so they can develop rules for each one.
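To make the data-drift point concrete, here’s a minimal sketch, assuming a team wants to compare the distribution of a live input feature against the data the model was trained on; the feature, threshold and synthetic numbers are illustrative assumptions, not a prescribed method.

```python
# Illustrative data-drift check: compare a live feature's distribution
# against the training distribution with a two-sample KS test.
# The feature, threshold and synthetic data are hypothetical.
import numpy as np
from scipy.stats import ks_2samp

def drift_detected(train_values, live_values, p_threshold=0.01):
    """Flag drift when the two samples are unlikely to share a distribution."""
    result = ks_2samp(train_values, live_values)
    return result.pvalue < p_threshold

rng = np.random.default_rng(seed=7)
training_income = rng.normal(loc=60_000, scale=15_000, size=5_000)
live_income = rng.normal(loc=72_000, scale=18_000, size=1_000)  # shifted upward

if drift_detected(training_income, live_income):
    print("Data drift detected: consider retraining or retiring the model.")
```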
It’s also important to understand the legal regulations that apply to AI, as well as industry best practices. When it comes to laws, Europe is leading the charge at the moment with the EU AI Act, which is setting standards for AI regulation much like the General Data Protection Regulation (GDPR) did for information privacy; it’s likely to pass next year. In Canada, the equivalent is the Artificial Intelligence and Data Act (AIDA), which is expected to come into effect in 2025. But there are also professional standards that industry bodies develop for themselves. It makes a ton of sense to hold your organization to those high standards, even if you aren’t legally required to do so, because it will help you build better, more reliable, more trustworthy systems, which ultimately become your competitive advantage.
Speak to stakeholders
An effective AI policy requires input from many different sources. Someone needs to identify what the business needs are in the first place. A lawyer will need to be involved in how the data is being collected and prepared. Your head of machine learning or head of AI should review the technical decisions that have been made in terms of processing the data and the model the company has chosen. Product designers should be involved in decisions around how the user interacts with the model. You might also need input from HR, sales and marketing. By involving these departments, you’re making sure there are no significant gaps in your company’s AI policy, because all of these different perspectives are taken into account.
Develop your policies
This isn’t an easy task; there’s actually quite a lot of noise in this space, so it can be really hard for companies to know which standards they should follow and which ones don’t make sense for their business. But, by involving stakeholders, you can help filter out that noise and develop a policy that works for your company.
Some other challenges may arise, like how to measure individual systems and how to manage all of your models. One good place to start is the Canadian government’s voluntary Code of Conduct for Generative AI, which spells out six principles: accountability; safety; fairness and equity; transparency; human oversight and monitoring; and validity and robustness. There’s also software out there that can help guide you through this process.
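One lightweight way to put those principles to work is to turn them into a deployment checklist that every model has to clear. The sketch below is a hypothetical illustration, not drawn from the Code of Conduct itself; the questions are assumptions for the sake of example.

```python
# Hypothetical checklist keyed to the six principles in Canada's voluntary
# code of conduct; the questions are illustrative examples only.
POLICY_CHECKLIST = {
    "accountability": "Is there a named owner for this model?",
    "safety": "Has the model been tested for harmful or unsafe outputs?",
    "fairness and equity": "Have outcomes been compared across user groups?",
    "transparency": "Are users told when they are interacting with AI?",
    "human oversight and monitoring": "Can a person review or override decisions?",
    "validity and robustness": "Does the model meet its accuracy targets on fresh data?",
}

def unresolved_items(answers):
    """Return the principles whose checklist questions were not answered 'yes'."""
    return [principle for principle in POLICY_CHECKLIST if answers.get(principle) != "yes"]

print(unresolved_items({"accountability": "yes", "safety": "no"}))
```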
Put your processes and systems into practice—and measure for success
Once you’re actually implementing your policy, it’s important to make sure it’s working. Companies generally already have quantitative metrics for success; they might say a model has to meet a certain score on a given metric for them to keep using it in production. But there are also qualitative factors to consider, including employee and customer satisfaction or business impact. It’s important to set regular review cycles to make sure your model is working as intended and is delivering the benefits you expect. If it’s a system that could do a lot of damage, such as one used to make lending decisions at a bank, which could potentially be discriminatory, your company might want to review it every six months or every year. If it’s lower-risk, say, an AI algorithm in a mobile chess game, you might review it every two or four years.
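As a rough sketch of how the quantitative floor and the risk-based review cycle might fit together, the example below flags a model for attention when its monitored score drops below an agreed threshold or its scheduled review is overdue; the metric, floor and intervals are assumptions for illustration.

```python
# Illustrative gate on a production model: a quantitative floor plus a
# risk-based review cycle. Metric, threshold and intervals are hypothetical.
from datetime import date, timedelta

REVIEW_INTERVALS = {
    "high": timedelta(days=182),  # e.g. a lending model: roughly every six months
    "low": timedelta(days=730),   # e.g. a chess-app model: roughly every two years
}

def review_flags(current_score, score_floor, last_review, risk_level):
    """Return the reasons, if any, that a model should be pulled up for review."""
    flags = []
    if current_score < score_floor:
        flags.append(f"score {current_score:.3f} is below the agreed floor of {score_floor:.3f}")
    if date.today() - last_review > REVIEW_INTERVALS[risk_level]:
        flags.append("scheduled review is overdue")
    return flags

# Example: a high-risk model last reviewed well over six months ago.
print(review_flags(0.71, 0.75, date(2023, 1, 15), "high") or "No action needed")
```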
Pay attention to the market and update as needed
This is a fast-moving space where things are changing all the time, so perhaps the most important part of implementing an AI policy is keeping up-to-date on regulatory changes and industry best practices, and modifying your policy as needed.
Ryan Donnelly spent six years as a lawyer at leading international law firms specializing in data protection regulations before co-founding Enzai, a software platform that helps businesses ensure they have proper AI governance policies and practices in place.