EU AI Regulation Needs Balance to Protect Startups, Promote Innovation

Aug 5, 2021

by Porter Enstrom, Policy Fellow, Engine Foundation

European policymakers concerned about the risks posed by human biases entering Artificial Intelligence (AI) systems introduced a draft proposal earlier this year that would govern AI within the European Union, including AI offered by American startups operating there. Eliminating bias in AI is necessary to protect human rights, and balanced regulation can provide the legal certainty required to foster socially beneficial innovation. In its current form, however, the proposed AI law misses that mark: it discourages the uptake of AI in the EU and threatens the vitality of AI startups on both sides of the Atlantic.

The proliferation of bias in AI threatens both those affected by the biases and our confidence in AI technologies. Certain guardrails should be in place to ensure that human biases aren’t reflected in AI products, but these guardrails should be light-touch to foster innovation and greater adoption of AI technologies that benefit consumers, government, and enterprise alike.

The Commission’s proposal struggles to balance these goals. It attempts to sort AI into three categories: prohibited, high risk, and low risk, with different requirements for each tier. For example, AI used for education, employment, healthcare, and other functions is deemed high-risk and carries significant obligations, which can head off socially important innovation or sweep in seemingly innocuous uses of AI.

These obligations fall on both “providers” (third parties, including U.S. startups, that develop AI for customers) and “users” (companies and other organizations that deploy AI systems). A company can be both a user and a provider when it develops AI that it also uses in the EU. Providers of high-risk AI are subject to the more stringent obligations: they would be required to maintain a risk management system, test to identify risks, automatically log events, and establish data governance controls. Implementing a new risk management system could cost between $229,000 and $391,000 for the first year, while conformity assessment cost projections range from $118 to $1,186,000, due in part to ambiguity in the law. These costs will divert limited startup resources, increase the burden of bringing a product to market, and could discourage investment in new applications of AI.

“Users” of “high-risk” AI must maintain constant human oversight and monitoring, estimated to cost around $6,000 to $10,000 per year, adding to the expense of implementing an AI solution for their enterprise. Commercial users may also face higher prices for AI products as a result of the costs imposed upon providers. Higher implementation costs will lower the uptake of AI applications throughout the EU, shrinking the potential market for U.S. AI startups. Together with the burdens imposed on providers, these costs for companies and governments buying AI will reduce competitiveness and market access for U.S. startups looking to operate in the EU.

We’ve seen similar regulatory frameworks, like GDPR, stifle innovation by drastically increasing compliance costs, especially for startups. As a benchmark, a 2018 study estimated that companies spent an average of $1.3 million coming into compliance with GDPR, a burden that initially led many startups and companies to fail or exit the EU market. Under the proposed AI law, an SME with a single high-risk AI system may face compliance costs of up to $474,000. By erecting barriers that keep innovation from reaching the market, the proposed law risks similar consequences.

In an attempt to mitigate the burdens placed upon startups, the proposal includes a framework for establishing regulatory sandboxes with priority access for startups and small-scale companies. However, sandbox participants would remain subject to all of the proposal’s requirements and would gain only the supervision of EU authorities. Without room for experimentation or meaningfully lower regulatory burdens, the advantage of participating is unclear. If startups are to engage with the process, the sandboxes will need features that genuinely lower burdens and promote innovation.

Startups should have the opportunity to innovate in the field of AI. Whether by automating healthcare data collection or simplifying the recruitment of diverse talent, startups are paving the way for new and valuable applications of AI. Unfortunately, the EU’s proposal could keep the very startups working on these problems from entering the EU market. To ensure that startups creating innovative AI-driven solutions to society’s most pressing problems are able to thrive and compete in the EU, the Commission should ensure balance in its proposed law.

The European Commission is soliciting feedback on the proposed AI law until August 6, 2021.

Engine is a non-profit technology policy, research, and advocacy organization that bridges the gap between policymakers and startups. Engine works with government and a community of thousands of high-technology, growth-oriented startups across the nation to support the development of technology entrepreneurship through economic research, policy analysis, and advocacy on local and national issues.
