Google Pushes for Looser AI Regulations, Calls for Copyright and Policy Reforms

In a bold move to influence AI regulations, Google has proposed easing copyright restrictions on AI training while advocating for a more unified approach to AI policy in the U.S. The tech giant argues that current laws on intellectual property, AI hardware exports, and liability are holding back innovation. With lawmakers introducing hundreds of AI-related bills, Google warns that fragmented AI regulations could slow progress and hinder the U.S.’s ability to compete globally.

Google Urges Reform in AI Training Copyright Laws

One of the central points in Google’s policy proposal is a call for looser copyright laws governing AI training data. The company insists that AI models should be allowed to train on publicly available copyrighted content, arguing that fair use and text-and-data mining exceptions are essential for AI development.

While Google claims that using copyrighted content for AI training does not significantly harm rights holders, critics—including content creators and publishers—disagree. Several lawsuits have already been filed against Google, alleging that it used copyrighted material without proper authorization. Until U.S. courts clarify the legal standing of AI training under fair use, the debate over AI regulations and copyright law is set to continue.

AI Hardware Export Restrictions Could Hurt U.S. Competitiveness, Google Warns

Beyond copyright concerns, Google is also challenging AI hardware export regulations imposed under the Biden administration. These restrictions, aimed at limiting the sale of advanced AI chips to certain foreign countries, could negatively impact cloud computing providers and slow AI innovation, according to Google.

While Microsoft has signaled confidence in complying with the rules, Google argues that the current framework places unfair burdens on AI businesses. The company is urging policymakers to find a balance between national security and fostering AI growth, ensuring that restrictive AI regulations do not weaken U.S. leadership in the field.

Sustained Government Investment in AI Research is Critical

Another major concern raised in Google’s proposal is the potential reduction in AI research funding. The company warns that cutting federal grants for AI research could stall progress and put the U.S. at a disadvantage against global competitors.

With recent budget constraints, many AI research institutions fear that reduced funding will limit access to cutting-edge AI technologies. Google’s position aligns with industry experts who argue that government-backed AI research is crucial for maintaining technological leadership. Without sustained investment, AI advancements could slow down significantly, impacting industries reliant on AI-driven automation and decision-making.

Google Pushes for a Unified Federal AI Regulatory Framework

The AI regulatory landscape in the U.S. is becoming increasingly complex. In the first two months of 2025 alone, lawmakers introduced 781 AI-related bills, creating a fragmented legal environment that could make compliance difficult for AI companies.

Google is urging Congress to establish a single, unified set of AI regulations at the federal level to prevent confusion and ensure consistency across industries. Without cohesive AI regulations, businesses may face conflicting requirements from different states, which could hinder AI innovation and adoption.

Google Rejects Strict AI Liability Rules

Liability is another major point of contention in the ongoing AI regulations debate. Some policymakers argue that AI developers should be held legally responsible for the way their models are used. However, Google strongly opposes this, emphasizing that misuse of AI tools often happens outside a developer’s control.

The company was a vocal opponent of California's SB 1047, which sought to introduce strict liability measures for AI developers. Google argues that end users, not AI creators, should bear responsibility for AI-related risks. This position reflects a broader industry push to shift liability away from AI developers and onto those deploying the technology.

Google Pushes Back Against AI Transparency Mandates

Transparency is a growing concern in AI regulations, with governments worldwide considering new laws requiring AI developers to disclose details about their models. The European Union’s AI Act and California’s AB 2013 propose strict transparency requirements, which Google warns could expose trade secrets and increase cybersecurity risks.

While the company acknowledges the need for some level of transparency, it argues that overly strict disclosure rules could make it easier for bad actors to manipulate AI systems. Google is advocating for a balanced approach that satisfies regulators' oversight goals without compromising AI security or trade secrets.

The Future of AI Regulations: What Comes Next?

As the debate over AI regulations intensifies, Google’s policy proposal underscores the key areas of contention in AI governance, including copyright laws, AI hardware exports, liability, and transparency.

With multiple lawsuits, political debates, and policy shifts on the horizon, the coming months will be crucial in determining how AI regulations evolve in the U.S. Google, OpenAI, Microsoft, and other tech leaders will continue to play a significant role in shaping these discussions as the battle over AI policy and regulation unfolds.
