Accuracy principle: AI Act, GDPR and MDR interplay

Summary: 1. Background – 2. Accuracy principle in the AI Act – 3. Accuracy principle in the GDPR – 4. Accuracy principle in the MDR – 5. Conclusions

 

1. Background

On 13 March 2024, the European Parliament approved the new rules on Artificial Intelligence (hereafter the “AI Act”[1]). The legislative proposal dates back to April 2021 and, during the drafting process, the risk-based approach[2] emerged, aiming to ensure the widespread application of specific obligations tailored to each sector and case. AI systems[3] are carefully examined and ranked according to the level of risk they may pose to users. The purpose of the AI Act is “to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, placing on the market, putting into service and the use of artificial intelligence systems in the Union in conformity with Union values, to promote the uptake of human centric and trustworthy artificial intelligence while ensuring a high level of protection of health, safety, fundamental rights enshrined in the Charter […]”[4].

In order to guarantee the highest level of protection, together with the risk-based approach, “accuracy” is definitely one of the main requirements of the legislation[5]. In particular, “High-risk AI systems[6] should perform consistently throughout their lifecycle and meet an appropriate level of accuracy, robustness and cybersecurity, in the light of their intended purpose and in accordance with the generally acknowledged state of the art […]”[7].

The accuracy principle is already well known in the European Union, since the EU GDPR (Reg. EU 2016/679) and the EU MDR (Reg. EU 2017/745) are also “impacted” by this relevant requirement and, despite the different meanings of the principle, there is an inevitable interaction between the pieces of legislation. As evidence of this, the AI Act provides two relevant Annexes: Annex II, entitled “List of Union harmonisation legislation based on the New Legislative Framework”, clearly referring to the EU MDR[8], and Annex V, entitled “EU Declaration of conformity”, asking for a declaration of compliance of AI systems with the EU GDPR[9] (where personal data are in scope).

In this way, some considerations are noteworthy in order to understand the ambitious purpose of the AI Act, also in terms of consistency with existing EU legislation.

2. Accuracy principle in the AI Act

Under the AI Act, accuracy is a regulatory requirement for high-risk AI systems. In particular, by virtue of Article 15(1), “High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and perform consistently in those respects throughout their lifecycle” and, again, “the levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions of use”[10]. This meaning of accuracy goes far beyond the concept of data preciseness: the notion of accuracy in AI systems concerns not only the data entered, but also the logic and operation of the AI software itself, as well as the final output of that processing[11]. On this assumption, it seems correct to distinguish between “data accuracy”, the principle governing data protection law, and “statistical accuracy”, referring to the accuracy of AI systems[12].

In the AI scenario, accuracy provides a quantitative estimate of the expected quality of forecasts. In particular, forecast accuracy is the degree of closeness of the quantity statement to the actual (true) value of that quantity[13]. Naturally, the overall outcome itself depends on the accuracy of the data provided, so that, if the data used are not accurate, any resulting decision or profile will be flawed[14]. Inaccuracy inevitably leads to incorrect or misleading results, with significant effects on the final purpose of the automation process. Indeed, over the past years there have been many discussions about bias[15] and automated AI processes that have amplified already existing risks of discrimination[16]. This is the case of pulse oximeters, with respect to which recent studies have shown that, in people with darker skin, the results overestimate oxygen saturation as much as 12% of the time[17]. In this regard, the UK Health Secretary Sajid Javid clearly stated: “If we only train our AI using mostly data from white patients it cannot help our population as a whole. We need to make sure the data we collect is representative of our nation”[18].
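To make the notion of statistical accuracy concrete, the following is a minimal, purely illustrative sketch (hypothetical numbers, not drawn from the studies cited above): accuracy as the share of correct predictions, and how an acceptable aggregate figure can mask much lower accuracy for a subgroup, which is precisely the pulse-oximeter concern discussed above.

```python
# Hedged, illustrative sketch: "statistical accuracy" as the share of
# correct predictions, and how an aggregate figure can hide subgroup bias.
def accuracy(preds, truth):
    """Fraction of predictions that match the true values."""
    assert len(preds) == len(truth)
    return sum(p == t for p, t in zip(preds, truth)) / len(truth)

# Two hypothetical subgroups of equal size (invented data).
group_a_preds, group_a_truth = [1, 1, 0, 1, 0], [1, 1, 0, 1, 0]  # 5/5 correct
group_b_preds, group_b_truth = [1, 0, 0, 1, 1], [1, 1, 0, 0, 1]  # 3/5 correct

overall = accuracy(group_a_preds + group_b_preds, group_a_truth + group_b_truth)
print(overall)                                 # 0.8 — looks acceptable overall
print(accuracy(group_b_preds, group_b_truth))  # 0.6 — for the subgroup
```

This is why reporting a single headline metric, without the per-group breakdown, can satisfy the letter of an accuracy declaration while missing exactly the discrimination risk the text describes.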

This is where AI accuracy comes in, also considering that AI systems do not analyze data mechanically but rather learn from data[19] to provide an “intelligent” response, as a sort of “human being”[20]. Specifically, machine learning is an artificial intelligence technique used to design algorithms intended to learn and act on received data. Software developers can use machine learning to create algorithms that are “locked”, whose function does not change over time, or “adaptive”, whose behavior changes based on new data[21]. Based on this rationale, Article 15(1) asks for “an appropriate level of accuracy […]” as a minimum but essential requirement, while paragraph 2 provides that the accuracy levels and relevant accuracy metrics of high-risk AI systems be stated in the instructions of use accompanying the system throughout its whole life cycle. In a nutshell: developers of high-risk systems must report the level of accuracy of the AI, including “metrics” set by the Commission, in terms of robustness and cybersecurity. All, again, subject to human oversight and verification.
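The “locked” versus “adaptive” distinction can be sketched in a few lines (a purely illustrative toy with hypothetical class names; real medical AI software is far more complex): a locked model’s parameters are frozen at release, while an adaptive model keeps changing with incoming data, which is why the AI Act requires accuracy to hold throughout the lifecycle.

```python
# Illustrative toy (hypothetical classes): a "locked" algorithm's behavior
# is frozen at release; an "adaptive" one changes as new data arrives.
class LockedModel:
    def __init__(self, threshold):
        self.threshold = threshold  # fixed once the system is released

    def predict(self, x):
        return x >= self.threshold

class AdaptiveModel(LockedModel):
    def update(self, observations):
        # Behavior drifts with incoming data: the decision threshold is
        # re-derived, so declared accuracy must be re-verified over time.
        self.threshold = sum(observations) / len(observations)

locked = LockedModel(threshold=0.5)
adaptive = AdaptiveModel(threshold=0.5)
adaptive.update([0.8, 0.9, 1.0])                   # threshold moves to ~0.9
print(locked.predict(0.7), adaptive.predict(0.7))  # True False
```

The same input now yields different outputs from the two models: the adaptive system’s declared accuracy at release says little about its behavior later in the life cycle, which is the regulatory concern behind Article 15.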

In this regard, the Irish Council for Civil Liberties (ICCL) document[22] submitted to the European Parliament during the drafting of the AI Act is worthy of mention, reporting the possible technical errors of AI, also in terms of accuracy. The suggestion was to ascribe accuracy to the broader category of safety and performance in order to expand the list of possible technical requirements, such as reliability and false positive rate. Specifically, it suggested: “Hence, the regulation should require providers to measure and report the performance of AI systems, not only accuracy, including the appropriateness of the performance metrics for specific AI systems”[23]. The European Commission embraced many of the ICCL’s suggestions but, in the final draft, decided on a very cybersecurity-oriented formulation, leaving little room for elements such as performance.

With these key concepts defined, and following the final approval of the AI Act by the European Parliament, it now seems essential to analyze the impact of, and the interaction between, the long-awaited Act and the existing Regulations, in particular considering that the accuracy principle is one of the pillars of the European legal system, albeit with different scopes of application.

3. Accuracy principle in the GDPR

The GDPR contains rules on the protection of individuals when their personal data are processed, as well as rules on the free movement of personal data. In particular, it provides that the Regulation “protects fundamental rights and freedoms of natural persons and in particular their right to the protection of personal data”[24]. To guarantee this highest level of protection, Article 5 states the essential principles that must be applied to every personal data processing activity and, for the purposes of the topic at hand, Article 5(1)(d) establishes that personal data shall be “accurate and, where necessary, kept up to date; every reasonable step must be taken to ensure that personal data that are inaccurate, having regard to the purposes for which they are processed, are erased or rectified without delay (‘accuracy’)”. This principle is an important hermeneutical tool in the resolution of concrete cases, since accuracy, in the GDPR, is strictly related to the correctness principle. Nevertheless, there is no definition of “accurate” data and, since the meaning of the principle seemed quite “simple”, it was not deemed necessary to explore the aspect further[25]. However, it is not always so easy to determine when the principle is applicable and, in many cases, it cannot be considered a “self-explanatory” concept[26]. For instance, if opinions about data subjects or inferences (e.g. profiling) are considered personal data[27], then the accuracy principle is applicable[28]. In general, to address the matter, we can draw on the UK Information Commissioner’s Office (ICO) analysis. Indeed, the ICO, through an a contrario argument, recalls the UK Data Protection Act 2018[29] and its definition of “inaccurate”. In particular, as it currently stands, Section 205(1) provides that “inaccurate, in relation to personal data, means incorrect or misleading as to any matter of fact”[30].
Data can be inaccurate for different reasons (e.g., data entry errors, flaws in the data collection process) but, whatever the motive, the European legislator provided a strong instrument in favor of data subjects: the right to rectification. By virtue of Article 16, “The data subject shall have the right to obtain from the controller without undue delay the rectification of inaccurate personal data concerning him or her. Taking into account the purposes of the processing, the data subject shall have the right to have incomplete personal data completed, including by means of providing a supplementary statement”. The purpose is quite clear: removing any impact that the use of inaccurate data (i.e., the lack of accuracy) may have, with reference to the discrimination that may result. As evidence, inaccuracy and discrimination were included by the Article 29 Working Party (WP29) among the major risks to the right to privacy[31].

Worthy of mention is Case C-460/20 | Google[32], which began when two managers asked Google to deindex the results of a search made on the basis of their names. Indeed, the search returned links to articles that assessed the group’s investments very negatively, through wrong and/or inaccurate statements. Google refused to comply, since it did not know whether the information in the articles was true, or at least accurate. The Court of Justice then clarified that, by virtue of Article 17 of Reg. EU 2016/679 (GDPR) (i.e., the right to be forgotten), “the operator of a search engine must dereference information found in the referenced content where the person requesting dereferencing proves that such information is manifestly inaccurate”[33]. At the core of this judgment is the awareness that data inaccuracy may result in adverse effects and, because of this risk, it is even more important to balance the complex relationship between the freedom of information and the right to personal data protection online[34].

In conclusion, when we discuss accuracy from a GDPR perspective, we refer to the completeness and consistency of the data as an essential principle to guarantee the right to data privacy, and to the reliability of the information as a right in itself.

4. Accuracy principle in the MDR 

Before discussing the principle within Regulation 2017/745 on medical devices (hereinafter the MDR)[35], it is important to underline that in June 2022 the European Parliament released a relevant Study[36] on the benefits and implications of the use of AI in healthcare. Indeed, it is quite clear that machine learning systems are increasingly being used to improve medical diagnosis and clinical practice, as well as to optimize resources, so that the time is ripe to conduct an analysis of the impact of this phenomenon.

The above-mentioned Study highlights the ethical, clinical, and social risks[37], proposing some mitigation measures in order to guarantee compliance with the existing regulations and, as a result, respect for individuals’ rights[38]. The Study clearly supported the European working group in the assessment of the new AI Act, in particular with reference to the risk-assessment methodology proposed to minimize the risks of AI healthcare applications. With particular reference to the definition of risk-based AI, in 2021 the European Commission shared a proposal for an AI regulation[39] classifying AI tools into three main levels of risk: i) unacceptable risk, ii) high risk and iii) low or minimal risk. Unacceptable-risk systems are prohibited, as contrary to EU values (e.g., real-time biometric identification in public areas); high-risk systems are conditionally permitted (e.g., subject to ensuring robustness, accuracy and cybersecurity); low- or minimal-risk systems are permitted (i.e., not subject to obligations, although some codes of conduct are recommendable). There is also a particular area of risk: the transparency risk, permitted under certain conditions of transparency (e.g., in biometric recognition, individuals must be notified that they are interacting with AI tools).
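The tiered taxonomy described above can be summarized as a simple lookup table (an illustrative sketch of the classification logic only, paraphrasing the text; not a legal decision tool, and the tier names are this sketch’s own labels):

```python
# Illustrative mapping of the AI Act's risk tiers to their regulatory
# treatment, paraphrasing the classification summarized in the text.
RISK_TIERS = {
    "unacceptable": "prohibited (contrary to EU values, e.g. real-time biometric ID in public areas)",
    "high": "conditionally permitted (robustness, accuracy, cybersecurity requirements)",
    "low_or_minimal": "permitted (no specific obligations; codes of conduct recommendable)",
    "transparency": "permitted with disclosure (users must know they are interacting with AI)",
}

def treatment(tier: str) -> str:
    """Return the regulatory treatment for a given risk tier."""
    return RISK_TIERS.get(tier, "unknown tier")

print(treatment("high"))
```

The table makes the risk-based structure explicit: the legal consequences attach to the tier, so classifying a system correctly is the decisive step.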

The self-risk assessment allows the prevention of adverse consequences, guaranteeing human safety in the application of AI tools in the healthcare field. In particular, for any process, all possible risk-minimization measures must be put in place, which would include some, but not all, of these criteria: appointment of a DPO (Data Protection Officer) to be involved at a very early stage; constant pursuit of privacy-by-default and privacy-by-design principles at all levels; validation methods to evaluate the reliability of the AI system; consistency between the AI tools used and the final intended purpose; and transparency to users regarding the AI tools[40].

Taking the above aspects into account, as well as the sensitivity of the application of AI systems in healthcare, it is clear that the accuracy principle has an even more significant impact. As evidence, the MDR itself provides that “Not later than one year after submission of the information in accordance with paragraph 1, and every second year thereafter, the economic operator shall confirm the accuracy of the data. In the event of a failure to do so within six months of those deadlines, any Member State may take appropriate corrective measures within its territory until that economic operator complies with that obligation”[41].

Primarily, it is essential to highlight that Article 2(1) of the MDR includes software[42] in the medical device definition, and MDCG 2019-11 clarifies that “medical device software is software that is intended to be used, alone or in combination, for a purpose as specified in the definition of a medical device[43] in the MDR or IVDR, regardless of whether the software is independent or driving or influencing the use of a device”[44].

In light of the above, it goes without saying that, whenever software can be defined as a medical device, AI software must comply with the MDR obligations and requirements set out in Annex I and, in particular, the “GENERAL SAFETY AND PERFORMANCE REQUIREMENTS”[45]. Manufacturers must adopt a risk management system[46] as well as risk control measures[47] to reduce, eliminate or control risk. Devices must meet the performance level intended by the manufacturer and must be consistent with the intended purpose. They must also be safe and effective, “and not compromise the clinical condition or the safety of patients, or the safety and health of users or, where applicable, other persons”[48]. With particular reference to the meaning of “performance”, IEC 60601-1[49] provides a clear definition of the mandatory essential performance: “Performance of a clinical function, other than that related to basic safety, where loss or degradation beyond the limits specified by the manufacturer results in an unacceptable risk”. In a nutshell: clinical performance is the ability of a device to achieve its intended purpose, leading to a clinical benefit for patients[50]. Furthermore, the manufacturer must be able to provide evidence of the required clinical performance, as per Article 61 of the MDR and Annex XIV, through a specific clinical evaluation “based on clinical data providing sufficient clinical evidence”[51].

With these key concepts defined, AI software falling within the medical device definition must comply with the MDR obligations as well. Thus, if the accuracy principle is a mandatory requirement for any medical device[52], then AI software must guarantee the highest level of accuracy in order to meet the specific safety and performance levels required by the MDR. For instance, medical devices with measuring functions must comply with the intended purpose fixed by the manufacturer, which implies “accuracy, claimed explicitly or implicitly, where a non-compliance with the implied accuracy could result in a significant adverse effect on the patient’s health and safety”[53]. The result of (in)accuracy in AI software can be a delay in medical intervention, which is potentially dangerous to the patient’s health. It is also essential to highlight that, even though technology accuracy is theoretically different from data accuracy, there is an unquestionable interplay. For example, providing the wrong age or the wrong racial origin to a health mobile app (considered a medical device if it meets certain requirements) may lead to false results, so the technology is affected by data (in)accuracy as well. Another example relates to AI software integrated in CT technology to stitch together several images into a more coherent scan, in order to enhance the accuracy of diagnostic processes through radiological imaging[54]. It is common ground that, in this case too, the accuracy of the data (i.e., the images) provided to the “machine” is strictly connected to the exactness of the diagnostic results, so that, even if the distinction does exist, we can assume that data accuracy and technology accuracy in healthcare are strictly intertwined. That being the case, the accuracy principle should be evaluated not only as a legal requirement but also as an ethical need, to guarantee the highest level of precision-medicine performance as well as the patient’s right to safety[55].

5. Conclusions

The above scenario, without being exhaustive, demonstrates the interplay between the existing European legislation and the AI Act, whose goal is to harmonize rules in light of the European AI ecosystem. In particular, the accuracy principle ensures the highest level of “by design” protection from (in)accurate data, so that it should be implemented in the input data, the output data[56], and even the intermediate data of the whole processing activity[57]. Indeed, even the most precise algorithm is going to fail if even one element of the processing activity is inaccurate. Furthermore, precisely because this process is iterative and continuously subject to change, the EU GDPR provides that it “shall be reviewed and updated where necessary”[58] and that it should guarantee “the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision”[59]. Since the AI Act aims to enhance existing rights and obligations by imposing specific requirements and an ethical perspective in line with Union values, the human-centric approach is the driver. In this way, the accuracy requirement is more than a theoretical discussion: as a matter of fact, it is a pillar of the AI Act and of the harmonization legislation recalled by the Act itself. In particular, high-risk systems, including some medical devices, must be designed and developed to achieve, in light of their intended purpose, an appropriate level of accuracy, robustness, and cybersecurity, and to remain compliant throughout their entire life cycle[60]. Regardless of the multiple meanings we might further assign to the principle, one aspect is definitely unquestionable: because of the aforementioned interplay between the Regulations, a systematic interpretation of the rules is essential, specifically with reference to high-risk AI systems.
Indeed, the combined application of these regulations will result not only in data accuracy but also (and/or because of it) in clinical benefits through the output of AI software[61].

It is worth mentioning that, in the aftermath of the AI Act’s approval on 13 March 2024, some concerns were raised by MedTech Europe with reference to the “need of alignment of high-risk AI systems requirements and standards from the MDR/IVDR and the AI Act” and to some other areas of the medical device and IVDR regulations[62]. However, in the writer’s opinion, although more coordination is probably needed to avoid inconsistency, the most important goal seems to have been achieved: in light of market and technological developments, the European legislation provides a strong legal framework, ambitious in ensuring “human centric and trustworthy artificial intelligence”[63] while protecting, at the same time, safety and the fundamental rights enshrined in the Charter. Meanwhile, aware of the need to keep up with technological development and innovation, the AI Act provides for a review and evaluation of the Act itself three years after the date of entry into application, and every four years thereafter, in order to report to the European Parliament and the Council any necessary alignment and/or revision[64].

In conclusion, the AI scenario is now finally regulated and, even admitting that there is always room for improvement, the human-centric approach remains the pillar capable of leading the legislation towards a more ethical and trustworthy system without preventing development and innovation themselves.

[1] See https://data.consilium.europa.eu/doc/document/ST-5662-2024-INIT/en/pdf
[2] See https://futurium.ec.europa.eu/en/european-ai-alliance/document/eu-ai-acts-risk-based-approach-high-risk-systems-and-what-they-mean-users
[3] Also known as Machine Learning systems, coined by the scientist Arthur Lee Samuel in 1959
[4] AI ACT Whereas (1)
[5] AI ACT Article 15
[6] See https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence#:~:text=High%20risk,cars%2C%20medical%20devices%20and%20lifts.
[7] AI ACT Whereas (49)
[8] AI ACT ANNEX II Part 1 Section A Article 11
[9] AI ACT ANNEX V Article 4a
[10] AI ACT Article 15 paragraph (2)
[11] See https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/what-do-we-need-to-know-about-accuracy-and-statistical-accuracy/#accuracy
[12] Ibidem
[13] Joannes Vermorel, La migliore metrica di errore di previsione (“The best forecast error metric”), November 2012
[14] Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (wp251rev.01)
[15] See https://arxiv.org/abs/2008.07341
[16] Right to access information as collective-based approach to the GDPR’s right to explanation in European law. Erasmus Law Review, Mazur, J. (2018).
[17] Performance Evaluation of Pulse Oximeters Taking into Consideration Skin Pigmentation, Race and Ethnicity, FDA Executive Summary, February 2, 2024
[18] See https://www.theguardian.com/society/2021/nov/21/from-oximeters-to-ai-where-bias-in-medical-devices-may-lurk
[19] Turing A., Computing Machinery and Intelligence, American Association for Artificial Intelligence, 1995
[20] Big data, artificial intelligence, machine learning and data protection, 20170904, V2.2 https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-protection.pdf
[21] See: https://www.aboutpharma.com/digital-health/intelligenza-artificiale-e-machine-learning-quando-il-software-e-un-dispositivo-medico/
[22] See https://www.iccl.ie/wp-content/uploads/2022/03/20220308_ICCL_AIActTechnicalErrors_3Column.pdf
[23] Ibidem
[24] Article 1(2) Reg. EU 2016/679 (GDPR)
[25] Opinions can be incorrect (in our opinion)! On data protection law’s accuracy principle, Published by Oxford University Press, International Data Privacy Law, 2020, Vol. 10, No. 1,Dara Hallinan and Frederik Zuiderveen Borgesius
[26] Why accuracy needs further exploration in data protection Elisabetta Biasin, KU Leuven Centre for IT & IP Law
[27] Dara Hallinan, Frederik Zuiderveen Borgesius, Opinions can be incorrect! In our opinion. On data protection law’s accuracy principle, International Data Privacy Law, ipz025, DOI:10.1093/idpl/ipz025
[28] Example: if a political party combines information regarding characteristics of people living in a particular polling district with names and addresses of individuals, there is a categorization of individuals themselves, and since these information refer to identifiable person, they can be considered as personal information. Source : https://ico.org.uk/for-organisations/direct-marketing-and-privacy-and-electronic-communications/guidance-for-the-use-of-personal-data-in-political-campaigning-1/personal-data/#opinions
[29] UK’s implementation of the General Data Protection Regulation (EU GDPR)
[30] See https://www.legislation.gov.uk/ukpga/2018/12/data.pdf
[31] Why accuracy needs further exploration in data protection Elisabetta Biasin, KU Leuven Centre for IT & IP Law
[32] See: https://curia.europa.eu/jcms/upload/docs/application/pdf/2022-12/cp220197en.pdf
[33] Ibidem
[34] See: https://www.medialaws.eu/rivista/diritto-alla-deindicizzazione-e-notizie-false-la-corte-di-giustizia-precisa-i-confini-tra-oblio-e-liberta-di-espressione/
[35] See: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:32017R0745
[36] Artificial intelligence in healthcare Applications, risks, and ethical and societal impacts. See: https://www.europarl.europa.eu/RegData/etudes/STUD/2022/729512/EPRS_STU(2022)729512_EN.pdf
[37] ibidem
[38] Such as: transparency, accuracy, integrity, privacy, security and equity in medical treatments.
[39] See https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206
[40] For the complete self-risk assessment check list see  https://www.europarl.europa.eu/RegData/etudes/STUD/2022/729512/EPRS_STU(2022)729512_EN.pdf pp.33-34
[41] Article 31 (5) Regulation 2017/745
[42] MDCG 2019-11 Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 – MDR and Regulation (EU) 2017/746 – IVDR “software is defined as a set of instructions that processes input data and creates output data”
[43] For further details, see Article 2 paragraph 1 Regulation (EU) 2017/746
[44] MDCG 2019-11 Guidance on Qualification and Classification of Software in Regulation (EU) 2017/745 – MDR and Regulation (EU) 2017/746 – IVDR
[45] Regulation (EU) 2017/745, Annex I Chapter I
[46] Regulation (EU) 2017/745, Annex I, Section 3
[47] Regulation (EU) 2017/745, Annex I, Section 4
[48] Regulation (EU) 2017/745, Annex I, Section 1
[49] IEC 60601-1:2005/AMD2:2020 Amendment 2 – Medical electrical equipment – Part 1: General requirements for basic safety and essential performance
[50] Regulation (EU) 2017/745, Article 2(52)
[51] Regulation (EU) 2017/745, Article 61(1)
[52] Many provisions of the Regulation recall the need of data accuracy to guarantee the safety and effective use of the devices. See Article 32 (5), Annex I Chapter II par. 23.4 (h), Annex II par. 6.2 (f),
[53] MEDDEV 2.1/5, June 1998
[54] For further details see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10177423/
[55] See https://www.kolabtree.com/blog/it/medical-device-data-benefits-and-security-challenges/
[56] Inputs are the signals or data received by the system and outputs are the signals or data sent from it. See https://en.wikipedia.org/wiki/Input/output
[57] Artificial Intelligence: accuracy principle in the processing activity AEPD May 2023
[58] Article 24 paragraph 1 Reg. EU 2016/679 (GDPR)
[59] Article 22 paragraph 3 Reg. EU 2016/679 (GDPR)
[60] Article 15 AI Act
[61] See https://www.kolabtree.com/blog/it/medical-device-data-benefits-and-security-challenges/
[62] See https://www.medtecheurope.org/wp-content/uploads/2024/03/medical-technology-industry-perspective-on-the-final-ai-act-1.pdf
[63] Article 1 AI Act
[64] Article (85a) AI Act.

Salvis Juribus – Rivista di informazione giuridica
Direttore responsabile Avv. Giacomo Romano
Listed in ROAD, with UNESCO patronage
Copyrights © 2015 - ISSN 2464-9775

Roberta Sole

Roberta Sole, with a Bachelor's Degree in Law achieved with honors and a Master's Degree in Law, is a Legal Specialist specifically engaged in the medical device industry and clinical field. She focuses her activities on clinical legal and compliance, international data privacy regulations, regulatory affairs and corporate liability. In addition to an extensive legal education, she has improved her expertise by obtaining professional certifications and recognitions. She also achieved a Bachelor's Degree in the common law system from the University of New South Wales, Sydney. Languages: Italian, English and Spanish.
