Seeking Consensus from Competing Perspectives
AI is advancing rapidly, bringing tremendous promise along with widely recognized societal risks. History is riddled with cautionary tales of the pursuit of profit compromising safety. In the case of AI, public wellbeing intersects with commercial priorities, and the winners and losers will depend on how the regulatory story unfolds. How AI should be regulated has become a contested issue, with differing perspectives emerging from technology incumbents, startups, and investors.
Incumbents Prefer Limited or Self-Regulation
Industry leaders wish to remain captains of the AI ship they built.
Mark Zuckerberg, whose opposition to Elon Musk’s doomsday AI warnings went viral, argues against overly restrictive regulation: “If you’re arguing against AI then you’re arguing against safer cars that aren’t going to have accidents, and you’re arguing against being able to better diagnose people when they’re sick.”1 In the past, he has maintained that existing controls across safety, privacy, and data security make additional regulation unnecessary. He believes companies like Facebook can self-police AI risks through internal practices, best understood by the engineers building the technology. The irony is that Meta disbanded its responsible innovation team during September 2022 layoffs, cutting the people who evaluated civil rights and ethics in AI development on its platform.
Sundar Pichai struck a slightly different tone, noting that “every product of every company will be impacted by the quick development of AI” and warning that society needs to prepare for technologies like the ones Google has already launched.2 Google has published a document outlining “recommendations for regulating AI,” and Pichai said society must quickly adapt with regulation, laws to punish abuse, and treaties among nations to make AI safe for the world, as well as rules that “align with human values, including morality.” The document urges regulators to “consider trade-offs between different policy objectives, including efficiency and productivity enhancement, transparency, fairness, privacy, security, and resilience.” There will always be a tug of war between corporate entities resisting oversight and government regulators seeking to protect the public. Google has taken a pro-regulatory approach, but a recommendation document by its very nature steers regulation in a direction of Google’s choosing, one favorable to its business.
To their credit, tech firms have developed remarkable innovations. And who knows the technology better than its creators? Still, public faith in self-governance waned after episodes like the Facebook-Cambridge Analytica scandal. Some oversight appears prudent.
Brad Smith of Microsoft noted in 2021 that the tech sector needs to “learn from issues like privacy,” where a wait-and-see approach resulted in an overdue intervention via GDPR. Yet Smith opposes the open letter, supported by Elon Musk, Steve Wozniak, and Yuval Noah Harari among others, calling for a pause on AI experimentation to allow regulation and safety to catch up. In his view, premature regulation risks constraining further beneficial breakthroughs. “Rather than slow down the pace of technology, which I think is extraordinarily difficult, I don’t think China’s going to jump on that bandwagon,” Smith said. “Let’s use six months to go faster.”
Large tech firms that dominate today's landscape have generally favored a light-touch regulatory approach to AI. Critics regard that stance as reckless, but incumbents are able to sway regulatory frameworks with recommendations that favor their businesses while rushing ahead in the race to dominate the space. They benefit from the rapid pace of AI development: with billions spent annually on AI research, they see government intervention as an existential threat to their dominance and so favor industry self-regulation instead.
Startups Seek a Level Playing Field
AI startups trying to establish footholds despite the incumbents' dominance tend to favor more aggressive regulation, though not without nuance. Guardrails that prevent an unchecked capability race keep incumbents from crowding them out. Even the best-funded startups are financially unequipped to compete in an “unbound race” toward ever-increasing power and capability, and defined constraints ensure everyone competes on a level playing field.
OpenAI’s Sam Altman (though increasingly resembling an incumbent) has been the most vocal about the fine balance between laxness and over-regulation. After a world tour meeting with government officials to try to influence regulatory frameworks, he expressed skepticism centered on the designation of “high-risk” systems, a category OpenAI’s models may fall into, as currently drafted in E.U. law.
Anthropic CEO Dario Amodei concurs that oversight is important for steering powerful AI responsibly. In testimony before the Senate in July, he warned that AI is much closer than anticipated to overtaking human intelligence and even to helping produce weapons of mass destruction. Amodei, whose AI company is structured as a “public benefit corporation,” recommended US policies to secure AI supply chains, from the semiconductors that provide computing power to the resulting models.
Scale AI CEO Alexandr Wang echoes the sentiment, asking specifically for “Algorithmic transparency standards [that] help build public trust in AI systems without revealing sensitive IP” in his recent presentation to the House Armed Services subcommittee. Wang is urging Congress to pass a major national security bill with dozens of provisions to further adopt AI. The bill narrowly passed the House and is awaiting a Senate vote.
Anthropic policy lead Jack Clark posted a 6,000+ word tweet thread on AI policy calling out large tech companies for “doing follow the birdie with governments - getting them to look in one direction, and away from another area of tech progress”. He suggests that even large companies with sizable AI policy teams are effectively using them as “brand defense after the PR teams”.
With clear regulatory guideposts established, startups can focus innovation on beneficial applications instead of an unchecked capability race. However, the calls for nuance reflect startups' interest in oversight that sustains responsible innovation without imposing excessive burdens on fledgling firms. Striking that productive balance remains an evolving challenge. While regulatory frameworks like Europe's AI Act point in the right direction on risk-based oversight, the complexity of compliance may still favor incumbents with the firepower and headcount to absorb it.
Investors Weigh Risks Against Returns
Investment firms help shape which AI ventures thrive through capital allocation. But financial incentives don't always align with social impacts.
Marc Andreessen recognizes AI requires long-term thinking but warns against government regulation that will stifle new entrants to the market. As an investor tasked with finding the next “Trillion Dollar Idea” in the space, as he describes it, he tweets: “Big AI companies should be allowed to build AI as fast and aggressively as they can—but not allowed to achieve regulatory capture, [and] not allowed to establish a government-protected cartel that is insulated from market competition due to incorrect claims of AI risk”.
Sonya Huang at Sequoia Capital suggested that startups themselves can create technology to mitigate AI's risks. She recognized that questions around ethics and regulation are “very real [and] thorny,” but used the example of hallucinations, the tendency of these models to make things up on the fly, to suggest: “That’s getting solved by these foundational model companies. I wouldn’t be surprised if in the next six to 12 months we have models that are actually capable of truthfulness”. More than half of Sequoia's roughly 20 new investments this year have focused on AI, up from around a third last year, numbers that have not been previously reported; favoring regulatory frameworks that support startup innovation aligns with the firm's financial incentives.
VCs continue to pour billions into AI annually ($52B in 2022), yet they have not substantially favored or funded ventures prioritizing explainability, safety, or social responsibility. Vinod Khosla, who is invested in numerous companies in the space, suggested that efforts to moderate the rate of progress, such as the proposed research pause, are misguided or even self-motivated. He warned entrepreneurs that halting AI advancement and over-focusing on ethics and responsibility can undermine competitiveness. Khosla, Andreessen, and some peers often cite China's unfettered AI surge as justification to move as fast as possible, risks be damned. (Teaser!... more on varying government approaches to AI regulation in the next part of this series!)
Here lies the hardest tradeoff: investing responsibly often conflicts with maximizing near-term returns. Whether investors actively advocate for equitable regulation may depend on just how quickly the returns on those investments materialize.
Charting a Responsible Path Forward: Seeking Shared Objectives
Crafting a balanced regulatory approach requires engagement from all stakeholders. Heavy restrictions can easily stifle beneficial innovation, consolidating incumbent dominance. But unfettered development brings risks including job losses, inequality, and unpredictable accidents.
Allowing tech giants to self-police has repeatedly failed, harming consumers and society. But prescriptive top-down regulation struggles to keep pace with technology's rapid evolution. Startups need latitude to build AI responsibly. But principles-first development alone cannot compete with the allure of power, profits and rapid capabilities growth. Lastly, investors must balance risks with their structural incentive to prioritize financial reward.
While perspectives differ, common ground certainly exists. All benefit from AI that uplifts society rather than wreaking havoc. But unbridled races risk harmful accidents, and excessive regulation stifles progress.