Why ancient legal principles apply to AI as well: From Hammurabi to DeepSeek and ChatGPT
![](https://i0.wp.com/strategianews.net/wp-content/uploads/2025/02/aff844a0-1d87-413d-88e0-b6dd1c21d4b8.jpg?resize=780%2C470&ssl=1)
By Jovan Kurbalija: Executive Director, DiploFoundation & Head, Geneva Internet Platform
Department of Research, Strategic Studies and International Relations
12-02-2025
![](https://i0.wp.com/strategianews.net/wp-content/uploads/2025/02/ad3a1f9b-b75f-4202-968a-3208457addb8.jpg?resize=368%2C368&ssl=1)
In 1754 BCE, King Hammurabi of Babylon etched a radical idea into stone: accountability. His code declared that if a builder’s negligence caused a house to collapse, killing its owner, the builder would face consequences. This week (10-11 February 2025), as policymakers gather in Paris to discuss AI regulation, they should be reminded of this 4,000-year-old principle: legal responsibility rests with those who develop, deploy, and benefit from AI.
Back to legal basics
Hammurabi’s code wasn’t about houses; it was about human responsibility. Roman law, the Napoleonic Code, and modern legislation echo this truth: the law regulates relations among people, not tools themselves. When horses revolutionised 19th-century life, we didn’t invent “horse law.” Instead, courts applied property, liability, and contract rules to disputes among people using horses. The same logic applied to the internet. As legal scholar Frank Easterbrook quipped in 1996, there is no “law of the horse,” and thus there is no need for “internet law.”
This logic extends to AI. If an algorithm defames someone, existing libel laws apply. If a self-driving car malfunctions, product liability rules hold manufacturers accountable. The issue isn’t a lack of laws—it’s a lack of will to apply them to AI.
The key problem: Section 230’s immunity shield
The key problem in AI regulation stems from a legal anomaly: Section 230 of the 1996 US Communications Decency Act. This law, designed to protect fledgling internet platforms, grants them near-total immunity for user-generated content. Imagine if Hammurabi’s builder could dodge blame by claiming, “The house built itself.” That’s precisely what Section 230 allows: platforms evade responsibility for social media content and, nowadays, for AI-generated deepfakes, harassment, or fraud hosted on their systems.
Restoring accountability—making companies liable for deploying harmful AI—would resolve most issues without new regulations.
Long-term risks: Vigilance, not hysteria
Proponents of AI regulation often cite apocalyptic scenarios: rogue algorithms posing an existential threat to humanity. But fear is a poor policymaker. The precautionary principle (acting only when risks are very likely to occur) should guide us. Recent attempts to address perceived long-term risks of AI by controlling the “processing power” of AI systems have already failed. DeepSeek, a compact AI model rivalling giants like ChatGPT, proves that legal limits on AI hardware and algorithms are futile. Instead, focus on concrete harms: use existing laws to penalise AI-enabled discrimination, fraud, or copyright theft.
Discuss ethics but focus on law
The AI ethics industry has exploded, with over 1,000 ethical codes, declarations, and guidelines adopted by businesses, governments, and international organisations. While ethical discussions aren’t inherently harmful, they risk becoming a distraction. Law is the enforceable minimum of ethics—and no amount of philosophical debate can replace its teeth.
Ethics frameworks for AI are like safety seminars for arsonists: well-meaning but futile without consequences. Focus on the law first. When AI harm occurs, ask not, “Was this algorithm ethical?” but “Who broke the law?”
Enforce existing rules; invent new AI rules only exceptionally
Jovan Kurbalija, AI governance pyramid
Think of AI regulation as a pyramid: at its base lie hardware and algorithms, too far removed from AI’s impact on society and too difficult to regulate. Leave them be. The middle tier? Data. Here, we already possess robust tools: enforce existing privacy laws like the GDPR. Better yet, crack down on platforms that scrape copyrighted books, art, or music to train AI systems without permission; intellectual property law already forbids this.
The apex is where real urgency lies: AI’s public impact. When algorithms discriminate in hiring, distort markets, or spread defamation, existing frameworks—consumer rights, anti-bias statutes, tort law—are more than sufficient. Need to adapt? Let courts apply existing rules to AI as they have been doing for internet cases over the last three decades.
Humans rule, machines follow
AI is a tool, like a hammer or a horse. Hammurabi didn’t regulate hammers; he held builders accountable. We need no “AI law” because the law already binds the humans behind the machines. If we discard the logic of Section 230’s misguided immunity and recommit to timeless principles—liability, transparency, and justice—we’ll govern AI just fine. After all, 4,000 years of legal wisdom can’t be wrong.