All developers of general-purpose AI systems – powerful models that have a wide range of possible uses – must meet basic transparency requirements, unless they’re provided free and open-source, according to an EU document seen by Bloomberg.
These include:
- Having an acceptable-use policy
- Keeping up-to-date information on how they trained their models
- Reporting a detailed summary of the data used to train their models
- Having a policy to respect copyright law
Models deemed to pose a “systemic risk” would be subject to additional rules, according to the document. The EU would determine that risk based on the amount of computing power used to train the model. The threshold is set at models trained using more than 10 trillion trillion (or 10 septillion) operations.
Currently, the only model that would automatically meet this threshold is OpenAI’s GPT-4, according to experts. The EU’s executive arm can designate others depending on the size of the data set, whether they have at least 10,000 registered business users in the EU, or the number of registered end-users, among other possible metrics.
Developers of these highly capable models are expected to sign on to a code of conduct while the European Commission works out more harmonized, longer-term controls. Those that don’t sign will have to prove to the commission that they’re complying with the AI Act. The exemption for open-source models doesn’t apply to those deemed to pose a systemic risk.
These models would also have to:
- Report their energy consumption
- Perform red-teaming, or adversarial tests, either internally or externally
- Assess and mitigate possible systemic risks, and report any incidents
- Ensure they’re using adequate cybersecurity controls
- Report the information used to fine-tune the model, and their system architecture
- Conform to more energy efficient standards if they’re developed
The tentative deal still needs to be approved by the European Parliament and the EU’s 27 member states. France and Germany have previously voiced concerns that applying too much regulation to general-purpose AI models risks killing off European competitors like France’s Mistral AI or Germany’s Aleph Alpha.
For now, Mistral will likely not need to meet the general-purpose AI controls because the company is still in the research and development phase, Spain’s secretary of state Carme Artigas said early Saturday.