State Policy Update: What have states been up to regarding AI and how will it impact startups?

Engine
Aug 8, 2024


By Min Jun Jung and the Engine Policy Team

In a series of State Policy Updates, we are exploring how state initiatives affect innovation in startup ecosystems across the country. While much of Engine’s work centers on policy advocacy and education at the federal level, state policymakers also shape startup formation and growth through their own work. The rules state legislatures create can vary slightly or significantly from one another, or conflict outright. And state legislatures move faster than Congress, creating patchworks of rules that can touch every area of running a startup, from privacy to payroll, adding layers of complexity, duplicating costs, erecting barriers to market, and steering where startups can scale. This blog post addresses AI, while others in the series tackle data privacy and developments affecting young users.

This week, state lawmakers from across the country gathered in Kentucky for an annual confab to discuss policy issues and share lessons with their colleagues. Much of the conversation focused on artificial intelligence, since “AI” has been the buzzword for state legislatures this year, with more than two hundred AI-related bills introduced in January alone and over four hundred active by February. More than forty states have stepped into this legislative arena, proposing measures to govern the burgeoning field of AI.

In March, Utah enacted the Artificial Intelligence Policy Act, one of the first AI-specific consumer protection laws. The law imposes disclosure requirements for the use of generative AI and holds companies accountable for AI-generated content, restricting their ability to deny responsibility for AI-produced statements that violate existing consumer protections. The Act also establishes an Office of AI Policy, which oversees the creation of a regulatory sandbox for AI development in Utah. The sandbox offers participating startups “regulatory mitigation” for up to two years, including reduced fines and cure periods, providing an environment to test AI technologies and bring products to market more quickly.

In May, Colorado followed with SB 205, the Colorado AI Act, the country’s first comprehensive AI legislation. The law regulates systems used in sectors the legislation deems high risk, such as education, employment, and healthcare. Many startups are developing AI solutions in those areas, and the regulation will create compliance costs and limit their competitiveness. Certain small businesses and startups are exempt from some of the law’s requirements, but most AI startups will likely need to grapple with many of the same requirements as well-funded industry leaders and incumbents.

Moreover, as Colorado becomes the first in what is likely to be a series of states creating their own AI rulebooks, startups will probably need to produce different but substantively similar documentation for each state, generating marginal but meaningful costs that further weigh on their competitiveness. Some state policymakers have recognized these negative impacts: similar legislation in Connecticut failed to move forward after the Governor there said he would veto it over its likely harm to innovation.

Meanwhile, California, whose legislature is in session through the end of the month, has several AI-related legislative efforts that stand to impact startups. One bill that has caught the attention of academics, civil society groups, industry, and startups alike is SB 1047, which passed the Senate in May and seems on track to pass the Assembly. The bill is aimed at preventing existential risks and focuses on preemptive measures: it would require developers to assess risks before training an AI system, mandate a shutdown capability (a “kill switch”), and impose regular compliance reporting. This bill would disincentivize model development and innovation-enhancing open-source AI, imposing significant regulatory hurdles both on startups that develop models and on startups that harness existing open-source AI resources.

California has other concerning AI bills that may be sent to the governor’s desk later this year. AB 2930 would regulate automated decision-making tools in order to prevent discrimination, and it resembles a controversial draft rulemaking from the California Privacy Protection Agency. The discrimination the bill aims to address is already unlawful, whether carried out by human or AI means. Another bill, AB 2013, would require developers to publish documentation about the types of data used to train their AI systems. While transparency is beneficial, disclosure requirements that are too granular could stifle innovation by exposing a startup’s ‘secret sauce,’ inviting litigation over the content of training data, and imposing significant compliance burdens on startups involved in model development.

On the other hand, California’s SB 893, which passed the Senate, would establish a California Artificial Intelligence Research Hub to facilitate collaboration between government, academia, and the private sector in AI research and development. The bill would promote increased access to data, expand public computing infrastructure, and, in turn, help grow the AI talent pool. By addressing resource constraints that weigh on startups, the legislation stands to benefit the startup ecosystem and support responsible AI innovation.

Even with comprehensive legislative approaches from some states, the overall legislative landscape remains fragmented and predominantly sector-specific. In the realm of employment, for instance, Illinois’ AI Video Interview Act regulates AI in hiring practices, while New York City’s Local Law 144 mandates bias audits for automated decision-making tools. Regarding chatbots, California and New Jersey have enacted laws that require disclosure when AI systems interact with humans, similar to Utah’s requirements. Florida has also passed legislation to specifically regulate AI in political advertising.

The upshot:

The AI legislative landscape is not only highly fragmented but also rapidly evolving, creating significant challenges for startups in the space. Compliance is complicated by regulations that vary not just from state to state but also from sector to sector. This complexity is compounded by fragmented data privacy laws, making it exceedingly difficult for startups to develop or deploy AI systems effectively. These challenges underscore the pressing need for balanced, clear, and nationally consistent rules that ensure the regulatory environment does not become a bottleneck for innovation.

Disclaimer: This post provides general information related to the law. It does not, and is not intended to, provide legal advice and does not create an attorney-client relationship. If you need legal advice, please contact an attorney directly.

Engine is a non-profit technology policy, research, and advocacy organization that bridges the gap between policymakers and startups. Engine works with government and a community of thousands of high-technology, growth-oriented startups across the nation to support the development of technology entrepreneurship through economic research, policy analysis, and advocacy on local and national issues.
