AI Regulation 101

Understanding the EU’s Artificial Intelligence Act can help turn good defense (compliance) into offense: new forms of revenue with AI, AI-for-good, and differentiation.

Artificial intelligence has power and promise, but it also comes with significant risk and responsibility for companies that employ it. Unfortunately, much of the conventional wisdom about regulation is that it's a purely defensive obligation, a tax on business excellence.

But, as we know from sports, great defense can lead to great offense. It's no different for AI. In 2021, the European Commission issued a proposal for AI regulation called the Artificial Intelligence Act, or AIA. If you dig into it, there's a lot of good stuff to learn. But, most importantly, its principles can yield both compliance and innovation.

The act defines four types of AI use based on their potential to harm:

  1. The vast majority of applications pose minimal risk and will be unregulated in the EU, like AI in video games or email spam filters.

  2. Limited-risk AI, like chatbots or AI-driven video recommendations. This type of AI will require disclosure and transparency, and users must be asked to opt in, just as we opt in to marketing and email communications.

  3. High-risk AI systems are where AI risks, and obligations, get serious. Using AI in medical devices, financial transactions like loan approvals, transportation systems, law enforcement, or what I call Justice-Tech, including using AI for political purposes, all carries important obligations and controls. Demonstrating compliance at this level earns the EU's conformity mark.

  4. Finally, there's unacceptable risk. These uses of AI are explicitly banned. They include so-called "China-style" social scoring of citizens and restriction of freedoms, sending subliminal messages to unwitting app users, using AI in toys, like a talking doll, to manipulate children, and big-brother-style uses of biometrics in law enforcement. These types of AI have been mischaracterized in the media, but the EU sent a clear message by explicitly banning their use.

The AI Act provides a blueprint for compliance. But it's much more. These principles are also the key to generating new revenue streams, increasing safety, engaging customers, automating business processes, and creating incredible value for society.

The AIA blueprint includes seven principles:

  1. Human agency and oversight of AI

  2. Technical robustness and safety

  3. Privacy and governance of not only algorithms but also the data they use

  4. Transparency about why, where, how, and by whom AI and its data are used

  5. Diversity, fairness, and bias mitigation

  6. Societal and environmental well-being across healthcare, farming, education, infrastructure management, energy, transportation and logistics, public services, security, justice, energy efficiency, and climate change mitigation and adaptation.

  7. And, of course, accountability.

In part two of this series, I'll examine these principles, case studies, and best practices to help you do AI right and turn great defense into offense.

Discuss this post on LinkedIn.



