Partisan ‘big tech’ talking points could catch startups in the crosshairs

Engine
6 min read · Sep 16, 2022

By The Engine Advocacy & Foundation Team

Updated 9/28/2022: The blog post below has been updated to reflect the current status of the Journalism Competition and Preservation Act.

Lawmakers do not have a clear vision for how the Internet ecosystem should work. If anything, they have competing and mutually exclusive visions. But that doesn’t stop them from threatening policy changes that would alter the way tech companies, including startups, have to operate.

When given the opportunity — whether at hearings, in considering legislation, during press conferences, or in talking to constituents on the very Internet platforms they decry — members of Congress will complain about tech companies and push legislative proposals that would change the legal landscape for the entire tech ecosystem, including startup founders. This is hardly a new phenomenon, but in the last week we’ve seen several tech policy conversations where contradictory, party-line talking points were on full display. If taken seriously as policy ideas on everything from privacy, to national security, to content moderation, and more, those talking points would make life much harder for thousands of startups across the country.

At multiple hearings this week — one held by the Senate Judiciary Committee and another held by the Senate Committee on Homeland Security and Governmental Affairs — many lawmakers used the few minutes they get to ask questions to air unrelated grievances and advance partisan points that either talked past or directly contradicted points being made by other lawmakers. Depending on who was talking, Internet companies were faulted for doing either too little or too much — on the same issue. During the Senate Judiciary hearing, Sen. Lindsey Graham (R-S.C.) previewed an upcoming piece of legislation with Sen. Elizabeth Warren (D-Mass.) that would create a separate agency to regulate Internet companies. While the text of the proposal isn’t public, Graham expressed interest in an agency that could limit how companies moderate and remove user-generated content, mandate an appeals process for users who are unhappy when their content is removed, and create a licensing regime for companies that want to host user-generated content. Depending on who is in charge of that agency, the outcome would be wildly different; what one side of the aisle sees as “misinformation” and “incitement of violence” the other side sees as “protected speech” and “political organizing,” and vice versa. (And while the conversations are focused on “big tech,” we’ve long explained how the pressure around content moderation can impact startups.)

But we’ve been here before, and when the rubber meets the road, those disagreements become even more apparent. For example, last week, a Senate panel was considering bipartisan competition legislation purportedly aimed at big tech and the challenges facing local media, but the conversation unraveled when diverging views on content moderation came up. The bill, the Journalism Competition and Preservation Act, would create a specific system to force certain larger tech companies to link to and pay for certain news content. It’s always been controversial, with public advocates, journalists, law professors, tech groups, and libraries voicing opposition. But consideration of the bill stalled in committee when Republicans added an amendment aimed at addressing alleged “censorship.” Even though the bill, led by a Democratic senator, would necessarily impact content on large platforms (because it would prevent them from removing or demoting certain links), bipartisan negotiations fell apart over the amendment. (At a September markup, the Senate Judiciary Committee voted to advance an amended version of the bill.)

Through all of this, what has emerged is a policy agenda — or, more accurately, several policy agendas that occasionally (appear to) intersect — driven by concerns about a handful of decisions made by a few companies. But there is no consensus on what those concerns are, whether those concerns are based in facts, what potential solutions can be advanced through policymaking, and what tradeoffs those solutions might present. In one conversation, a lawmaker will criticize a company for working with law enforcement, and another will criticize the company for not working closely enough with law enforcement. Within minutes, a company will be accused of colluding with other companies and also accused of failing to work with others in the industry to respond to a crisis. In the same hearing, a company can be criticized for “censorship” over removing a piece of content and criticized for failing to take down “misinformation” when it did not remove enough similar content.

These mutually exclusive visions form the foundation of confusing and contradictory policy solutions that create uncertainty for startups and pose headwinds as they try to grow. What’s worse for startups is that, for all the airtime given to technology policy debates, little time is spent grappling with the actual complexities of these issues. For instance, the cost and efficacy of the technological and human resources it takes to do content moderation at scale vary widely based on company size and the type of content a company hosts. While large tech companies can build AI to attempt to detect something like violence in video in real time and hire tens of thousands of employees around the world working on content moderation, startups that host user-generated content already spend a large share of their very limited budgets on content moderation.

All policy solutions present tradeoffs, and that’s especially true in the ever-changing world of technology and if policymakers take into account (which they should) the needs of startups. Requiring a platform to be open to all makes it harder for a platform to remove bad actors. Decisions to promote information from trusted sources and remove contradicting information lead to accusations of collusion and censorship. Overzealous enforcement of policies around scanning for illegal content, even in situations that seem like they should be black-and-white, like child sexual abuse material, can lead to rule-abiding users having their accounts taken away. Requiring that users provide identifying information to keep certain products and services away from children means there are fewer opportunities to be anonymous online.

These tradeoffs require nuanced, in-depth conversations rooted in reality that allow for multiple perspectives to be considered and balanced carefully. But most public congressional conversations fail to grapple with those tensions, and instead feature policymakers rattling through antagonistic talking points and terse yes-or-no questions that can’t be fully answered without context. Public hearings are often used to score political points, but many lawmakers and their staff spend a lot of time in thoughtful, private conversations as they attempt to craft legislation. These private conversations are an important part of legislating that the public and key stakeholders like startups and technical experts often don’t see, leaving them understandably disenchanted about the policymaking process.

And too often, the high-tension, high-profile public conversations are used as a basis for writing new rules, which create a wide range of unintended consequences, including a political and policy landscape that’s expensive and time-consuming, at best — and impossible, at worst — for startups to navigate. Startups need clarity and consistency as they launch and plot a plan for growth on bootstrap budgets. They need to know what the rules of the road are and that those rules won’t change for them based on how a large company executive gets publicly grilled about whatever is driving the headlines that day. And when the public grilling is what gets the most attention, it’s understandable that startup founders and other experts lose faith that the policymaking process can recognize and look out for their concerns.

There is no doubt that policymakers have a role to play in making sure the technology industry and the Internet ecosystem are functioning, including for startups. Every year, Engine publishes a Startup Policy Agenda outlining steps policymakers should take to advance technology policy and support the startup ecosystem. Those policy recommendations are based on our regular conversations with founders, investors, and other startup ecosystem members across the country. They talk about desired education and immigration policy changes to fill the STEM talent pipeline, the need for a federal privacy framework that protects consumers and creates clear rules of the road around data collection and use, a wide range of actions the government could take to make the startup ecosystem more equitable and accessible to underrepresented founders, and more. These are rarely the kind of headline-grabbing topics that lawmakers will surface during their five minutes with a tech executive, but they’re the policy changes the startup ecosystem needs. They may not rile up voters or go viral the way partisan talking points do, but they’re a great place to start a real policy conversation.

Engine is a non-profit technology policy, research, and advocacy organization that bridges the gap between policymakers and startups. Engine works with government and a community of thousands of high-technology, growth-oriented startups across the nation to support the development of technology entrepreneurship through economic research, policy analysis, and advocacy on local and national issues.
