Impact of ETPC Recommendations on the EU’s General-Purpose AI Code of Practice
At the invitation of the European Commission, the ACM Europe Technology Policy Committee (ETPC) submitted a set of recommendations to inform the development of the EU's General-Purpose AI (GPAI) Code of Practice. The Code is intended to help industry comply with the AI Act's legal obligations on the safety, transparency, and copyright of general-purpose AI models. We are proud to announce that several of those recommendations have now been incorporated into the Code.
Rec. | ACM Recommendation Summary | Correspondence in EU Code of Practice | Reflected?
---|---|---|---
(1) | Scope clarity: Code should focus on model providers, not deployers | Limited: The Code does clarify scope (Recitals, objectives) but deployer references remain in several places | 🔸 Partially incorporated |
(2) | Risk tier guidance: Specify MUST/SHOULD/COULD processes by systemic/high/minimal risk and clarify proportionality | Safety & Security Chapter, Recital (c); Measure 4.1 defines risk tiers and proportionality by severity/probability | ✅ Incorporated |
(3) | Clarify applicability across modalities beyond text/image | Transparency Chapter, Measure 1.1, explicitly includes modalities (text, image, audio, video); also reflected in Safety & Security Chapter | ✅ Incorporated |
(4) | Clarify applicability across ML types beyond text/image models | Same as Rec 3 above — explicitly addressed | ✅ Incorporated |
(5) | Register models efficiently and reduce overhead for metadata registration | Transparency Chapter, Measure 1.1: the Model Documentation Form provides a structure for repeatable registration | ✅ Incorporated
(6) | Reconsider treating “Automated AI R&D” as systemic risk and qualify “long-horizon planning” | Code retains “automated R&D” and “long-horizon planning” as systemic risk sources (Appendix 1.3) without narrowing the definitions | ❌ Not incorporated
(7) | Robust evaluation should explicitly involve both technical and non-technical experts | Safety & Security Chapter, Measure 3.2 references evaluations conducted with appropriate expertise but does not explicitly distinguish non-technical experts | 🔸 Partially incorporated
(8) | Evaluation should include parameters/metrics for effectiveness | Reflected in Safety & Security Chapter, Measures 3.1–3.4, emphasis on rigorous model evaluation, metrics, and scientific standards | 🔸 Partially incorporated |
(9) | Alignment with UK/US AI Safety Institutes, etc. | Measure 2.1 and Recital (e) cite international alignment, including with AI Safety Institutes | ✅ Incorporated
(10) | Collaboration with standardization bodies (CEN/CENELEC/ETSI); reference to MLSecOps/OWASP/MITRE | Some alignment: the Safety & Security Chapter acknowledges international standards, but there is no explicit reference to EU standardization bodies | 🔸 Partially incorporated
(11) | Clarify “deployment” vs “release” terminology | Code uses “placing on the market” but still mixes terms in some parts (Measure 1.2); not fully resolved | ❌ Not incorporated |
(12) | Ensure corrective actions proportional to incident consequences | Reflected in Safety & Security Chapter, Commitments 4 and 9, which apply proportionality principle | ✅ Incorporated |
(13) | Incident register for tracking violations (e.g., copyright, security) | No explicit reference to a centralized “incident register” concept, though serious-incident reporting is robust (Commitment 9) | 🔸 Partially incorporated
(14) | Explicit incident response plan for security breaches | Reflected in Security Mitigations section (Commitment 6, Measures 6.1–6.2) | ✅ Incorporated