The EU’s AI Act could have a chilling effect on open source efforts, experts warn

The nonpartisan think tank Brookings this week published a piece decrying the bloc’s regulation of open source AI, arguing that it would create legal liability for the developers behind open source general-purpose AI systems while undermining their development. Under the E.U.’s draft AI Act, open source developers would have to adhere to guidelines for risk management, data governance, technical documentation and transparency, as well as standards of accuracy and cybersecurity.

If a company were to deploy an open source AI system that led to some disastrous outcome, the author asserts, it’s not inconceivable that the company could attempt to deflect responsibility by suing the open source developers on whose work it built its product.

“This could further concentrate power over the future of AI in large technology companies and prevent research that is critical to the public’s understanding of AI,” Alex Engler, the analyst at Brookings who published the piece, wrote. “In the end, the [E.U.’s] attempt to regulate open-source could create a convoluted set of requirements that endangers open-source AI contributors, likely without improving use of general-purpose AI.”

In 2021, the European Commission — the E.U.’s politically independent executive arm — released the text of the AI Act, which aims to promote “trustworthy AI” deployment in the bloc. As they solicit input from industry ahead of a vote this fall, E.U. institutions are seeking amendments that balance innovation with accountability. But according to some experts, the AI Act as written would impose onerous requirements on open efforts to develop AI systems.

The legislation contains carve-outs for some categories of open source AI, like those exclusively used for research and with controls to prevent misuse. But as Engler notes, it’d be difficult — if not impossible — to prevent these projects from making their way into commercial systems, where they could be abused by malicious actors.

In a recent example, Stable Diffusion, an open source AI system that generates images from text prompts, was released with a license prohibiting certain types of content. But it quickly found an audience within communities that use such AI tools to create pornographic deepfakes of celebrities.

Oren Etzioni, the founding CEO of the Allen Institute for AI, agrees that the current draft of the AI Act is problematic. In an email interview with TechCrunch, Etzioni said that the burdens introduced by the rules could have a chilling effect on areas like the development of open text-generating systems, which he believes are enabling developers to “catch up” to big tech companies like Google and Meta.

“The road to regulation hell is paved with the E.U.’s good intentions,” Etzioni said. “Open source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided ‘as is’ — consider the case of a single student developing an AI capability; they cannot afford to comply with E.U. regulations and may be forced not to distribute their software, thereby having a chilling effect on academic progress and on reproducibility of scientific results.”

Instead of seeking to regulate AI technologies broadly, E.U. regulators should focus on specific applications of AI, Etzioni argues. “There is too much uncertainty and rapid change in AI for the slow-moving regulatory process to be effective,” he said. “Instead, AI applications such as autonomous vehicles, bots, or toys should be the subject of regulation.”

Not every practitioner believes the AI Act needs further amending. Mike Cook, an AI researcher who’s part of the Knives and Paintbrushes collective, thinks it’s “perfectly fine” to regulate open source AI “a little more heavily” than strictly necessary. Setting any sort of standard can be a way to show leadership globally, he posits — hopefully encouraging others to follow suit.

“The fearmongering about ‘stifling innovation’ comes mostly from people who want to do away with all regulation and have free rein, and that’s generally not a view I put much stock into,” Cook said. “I think it’s okay to legislate in the name of a better world, rather than worrying about whether your neighbour is going to regulate less than you and somehow profit from it.”

To wit, as my colleague Natasha Lomas has previously noted, the E.U.’s risk-based approach lists several prohibited uses of AI (e.g., China-style state social credit scoring) while imposing restrictions on AI systems considered to be “high-risk” — like those having to do with law enforcement. If the regulations were to target product types as opposed to product categories (as Etzioni argues they should), it might require thousands of regulations — one for each product type — leading to conflict and even greater regulatory uncertainty.

An analysis written by Lilian Edwards, a law professor at Newcastle University and a part-time legal advisor at the Ada Lovelace Institute, questions whether the providers of general-purpose systems like large language models (e.g., GPT-3) might be liable after all under the AI Act. Language in the legislation puts the onus on downstream deployers to manage an AI system’s uses and impacts, she says — not necessarily on the initial developer.

“[T]he way downstream deployers use [AI] and adapt it may be as significant as how it is originally built,” she writes. “The AI Act takes some notice of this but not nearly enough, and therefore fails to appropriately regulate the many actors who get involved in various ways ‘downstream’ in the AI supply chain.”

At AI startup Hugging Face, CEO Clément Delangue, counsel Carlos Muñoz Ferrandis and policy expert Irene Solaiman say that they welcome regulations that protect consumers, but that the AI Act as proposed is too vague. For instance, they say, it’s unclear whether the legislation would apply to the “pre-trained” machine learning models at the heart of AI-powered software or only to the software itself.
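To make that ambiguity concrete, here is a minimal sketch of how a pre-trained model and the software wrapped around it are distinct artifacts. It uses the real Hugging Face transformers pipeline API, but the model choice, the customer_reply wrapper and its keyword filter are illustrative assumptions, not anything the statement or the Act specifies:

```python
from transformers import pipeline

# The "pre-trained machine learning model": weights trained upstream and
# loaded as-is by whoever builds on them.
generator = pipeline("text-generation", model="gpt2")

def customer_reply(prompt: str) -> str:
    """The 'software itself': application logic a deployer wraps around
    the model -- prompt handling, output filtering, business rules."""
    text = generator(prompt, max_new_tokens=40)[0]["generated_text"]
    # A downstream safeguard added by the deployer, not the model's authors.
    if any(term in text.lower() for term in ("password", "credit card")):
        return "[reply withheld by application policy]"
    return text

if __name__ == "__main__":
    print(customer_reply("Thanks for contacting support about your order."))
```

The question the Hugging Face team raises is, in effect, whether the Act’s obligations attach to the generator object, to the application around it, or to both.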

“This lack of clarity, coupled with the non-observance of ongoing community governance initiatives such as open and responsible AI licenses, might hinder upstream innovation at the very top of the AI value chain, which is a big focus for us at Hugging Face,” Delangue, Ferrandis and Solaiman said in a joint statement. “From a competition and innovation perspective, if you already place overly heavy burdens on openly released features at the top of the AI innovation stream you risk hindering incremental innovation, product differentiation and dynamic competition, this latter being core in emergent technology markets such as AI-related ones … The regulation should take into account the innovation dynamics of AI markets and thus clearly identify and protect core sources of innovation in these markets.”

As for Hugging Face, the company advocates for improved AI governance tools regardless of the AI Act’s final language, like “responsible” AI licenses and model cards that record information such as an AI system’s intended use and how it works. Delangue, Ferrandis and Solaiman point out that responsible licensing is starting to become a common practice for major AI releases, such as Meta’s OPT-175B language model.
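For illustration only, here is a sketch of what such a model card might look like in practice. Hugging Face model cards are README.md files with YAML metadata up top; the model name, license tag and the wording of the sections below are hypothetical placeholders, not an actual Hugging Face or Meta card:

```python
from pathlib import Path

# YAML front matter carries machine-readable metadata (e.g., the license
# tag); the markdown body documents intended use and limits for humans.
# All names and wording here are hypothetical examples.
MODEL_CARD = """\
---
license: openrail
language: en
tags:
  - text-generation
---

# demo-model (hypothetical)

## Intended use
Research and experimentation with text generation.

## Out-of-scope use
Generating deceptive content or non-consensual depictions of real people.

## Limitations and biases
Trained on web text; outputs may reflect biases present in that data.
"""

Path("README.md").write_text(MODEL_CARD, encoding="utf-8")
print("wrote model card to README.md")
```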

“Open innovation and responsible innovation in the AI realm are not mutually exclusive ends, but rather complementary ones,” Delangue, Ferrandis and Solaiman said. “The intersection between both should be a core target for ongoing regulatory efforts, as it is being right now for the AI community.”

That may well be achievable. Given the many moving parts involved in E.U. rulemaking (not to mention the stakeholders affected by it), it’ll likely be years before AI regulation in the bloc starts to take shape.
