As artificial intelligence (AI) becomes increasingly integrated into daily life, the need for clear legal frameworks to assign responsibility grows urgent. Who is liable when an autonomous system causes harm: the algorithm’s developer, the hardware manufacturer, the operator, or the end user? This question is at the forefront of recent legislative efforts in Russia, the European Union, the United States, and China, each tackling the challenge of regulating AI in distinct ways.
Russia: Delivery Robots Take to the Streets
In Russia, Yandex, a leading tech company, has proposed an Experimental Legal Regime (ELR) to the Ministry of Economic Development, allowing autonomous delivery robots to operate on sidewalks and bike paths. The proposal includes:
- A speed limit of 25 km/h (15.5 mph).
- Mandatory identification via license plates and QR codes.
- Compulsory liability insurance of 500,000 RUB (approximately $5,000 USD).
- A protocol for handling accidents involving robots.
This initiative underscores a broader challenge: ensuring safety and legal compliance as autonomous systems navigate public spaces.
European Union: The World’s First Comprehensive AI Law
On July 12, 2024, the European Union published the Artificial Intelligence Act (AI Act) in its Official Journal, with the law taking effect on August 1, 2024. As the world’s first comprehensive AI regulation, the AI Act adopts a risk-based approach:
- It bans “unacceptable risk” AI systems, such as social scoring and real-time remote biometric identification in publicly accessible spaces (with narrow exceptions, for example for law enforcement).
- “High-risk” AI systems—used in healthcare, transportation, or hiring—must meet stringent requirements for transparency, data governance, and human oversight.
- Bans on unacceptable-risk AI take effect February 2, 2025.
- Other provisions roll out gradually: most obligations apply from August 2, 2026, and the rules for high-risk systems embedded in regulated products apply from August 2, 2027.
The AI Act directly addresses risks such as discrimination, algorithmic bias, human rights violations, and insufficient human oversight. Rather than singling out decisions that cannot be appealed, it emphasizes ensuring that humans can intervene in critical AI-driven outcomes.
United States: A Push to Pause State-Level AI Rules
In May 2025, House Republicans, led by Rep. Brett Guthrie, Chair of the House Energy and Commerce Committee, added a provision to the budget reconciliation bill proposing a 10-year moratorium on state-level AI regulations. The goal is to prevent a patchwork of state laws and consolidate authority at the federal level. However:
- Critics argue the bill may violate the 10th Amendment, which reserves powers to states.
- Its passage in the Senate is uncertain due to the Byrd Rule, which limits non-budgetary provisions in reconciliation bills.
- With no comprehensive federal AI law, states have stepped in—California alone has enacted over 20 AI-related laws.
This leaves U.S. AI regulation fragmented, creating compliance challenges for businesses operating across state lines.
China: State Control and Algorithmic Ethics
China pursues a centralized, state-driven approach to AI regulation. In September 2024, the National Information Security Standardization Technical Committee (NISSTC) released draft technical standards for labeling AI-generated content (e.g., watermarks, metadata), building on an existing framework:
- 2021: Ethical principles for AI development.
- 2022: Rules on “deep synthesis” technologies, governing deepfakes and generative algorithms.
- 2023: Interim measures for generative AI, mandating algorithm registration, content censorship, and adherence to socialist values.
- 2024: A global AI governance strategy, positioning China as a counterpoint to Western models.
Unlike the EU’s human rights focus, China prioritizes content control, public morality, and national security, ensuring transparency in content origins and ideological alignment.
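To make the labeling requirement concrete, the sketch below shows one way a provider might attach an explicit “AI-generated” marker to an image’s metadata using Python and the Pillow library. The field names (“AIGC”, “Provider”) are illustrative placeholders, not the identifiers defined in the Chinese draft standard, and real compliance would also involve visible watermarks and provenance records.

```python
# Minimal sketch: embedding an explicit "AI-generated" label in image metadata.
# The metadata keys ("AIGC", "Provider") are hypothetical placeholders, not the
# field names defined in the Chinese draft standard.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_ai_generated(src_path: str, dst_path: str, provider: str) -> None:
    """Copy a PNG while attaching metadata marking it as AI-generated."""
    img = Image.open(src_path)
    meta = PngInfo()
    meta.add_text("AIGC", "true")        # hypothetical flag: content is AI-generated
    meta.add_text("Provider", provider)  # hypothetical field: generating service
    img.save(dst_path, pnginfo=meta)

if __name__ == "__main__":
    label_ai_generated("generated.png", "generated_labeled.png", "example-service")
```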
Who’s Responsible for AI Actions?
AI systems blur traditional lines between developers, manufacturers, operators, users, and insurers. Below is a breakdown of potential liabilities:
| Stakeholder | Potential Liability |
| --- | --- |
| Software Developer | Algorithm errors, inadequate testing, biased data |
| Hardware Manufacturer | Hardware failures, faulty AI integration |
| Operator/Owner | Improper operation, lack of oversight procedures |
| User | Misuse of AI, violation of usage guidelines |
| Insurance Company | Compensation when liability is unclear or shared |
Liability frameworks vary by jurisdiction: the EU mandates robust human oversight, China enforces centralized algorithm registration, and the U.S. lacks a unified model, relying on state-level rules and existing laws.
Takeaways: How Businesses Can Prepare for AI Regulation
- Risk Assessment: Evaluate whether your AI systems could cause harm, discrimination, or legal violations.
- Contract Structuring: Include clauses in contracts to allocate liability, limit damages, and secure insurance.
- AI System Audits: Implement mechanisms for transparency, traceability, and human intervention (see the logging sketch after this list).
- Regulatory Monitoring: Stay updated on evolving laws in key markets (EU, U.S., China, and Russia) and align internal policies accordingly.
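As a rough illustration of the audit point above, the Python sketch below logs every automated decision as a structured record and flags low-confidence outputs for human review. The names (AuditedDecision, record_decision, the 0.8 threshold) are assumptions chosen for the example, not requirements drawn from any of the laws discussed.

```python
# Minimal sketch of an audit trail with a human-in-the-loop escalation rule.
# All names (AuditedDecision, review_threshold, etc.) are illustrative; adapt
# them to whatever model and record-keeping system you actually use.
import json
import logging
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai_audit")

@dataclass
class AuditedDecision:
    timestamp: str
    model_version: str
    input_summary: str
    output: str
    confidence: float
    needs_human_review: bool

def record_decision(model_version: str, input_summary: str, output: str,
                    confidence: float, review_threshold: float = 0.8) -> AuditedDecision:
    """Log every automated decision and flag low-confidence ones for human review."""
    decision = AuditedDecision(
        timestamp=datetime.now(timezone.utc).isoformat(),
        model_version=model_version,
        input_summary=input_summary,
        output=output,
        confidence=confidence,
        needs_human_review=confidence < review_threshold,
    )
    log.info(json.dumps(asdict(decision)))  # traceability: one structured record per decision
    return decision

if __name__ == "__main__":
    d = record_decision("credit-model-1.3", "applicant #1042", "decline", 0.62)
    if d.needs_human_review:
        print("Escalated to a human reviewer before the decision takes effect.")
```

Keeping one structured record per decision gives auditors traceability, while the review flag gives operators a concrete hook for the human intervention that regulators increasingly expect.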
AI is becoming a central player in civil and commercial interactions. As regulations tighten, proactive compliance today will minimize legal and reputational risks tomorrow.
Need Legal Support?
If you’re looking to assess AI-related legal risks, revise contracts with vendors, or develop compliance policies, reach out at info@danilovpartners.com — our law firm can help. We offer expert guidance tailored to the applicable regulatory landscapes.