Read part one and part two of this series on medical records here.
It follows from the criterion ('or as much longer as reasonably follows from the care of a good care provider') that it is the practitioner who, at the end of a file's retention period, checks whether the file in question should be kept longer for medical reasons. It is conceivable that this activity, if files under a healthcare institution's records management are indeed tested against that criterion, is time-consuming. 'If', because there are healthcare institutions where the 'so much longer' criterion is not applied in the context of records management and which keep medical files for an indefinite period after the expiry of the statutory retention period.ii Among other reasons, understaffing and waiting lists prevent these institutions from having their practitioners perform this test.iii In addition, practitioners prefer to spend their scarce time on the medical treatment of patients rather than on this more administrative task.
How helpful would it be if the deployment of AI made it possible to carry out a patient-specific assessment of each medical record, resulting in a judgment as to whether or not the record should be kept longer than the standard 20-year retention period on medical grounds? And if, based on this assessment, the file were then automatically destroyed or kept longer? Ideally, the AI system would virtually eliminate the involvement of the practitioner in this context, calling on them only in doubtful cases. In the following, I will refer to such an AI system briefly as 'the envisioned AI system'.
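To make this division of labour concrete, a minimal sketch in Python follows; all names (Outcome, Assessment, route_record) and the confidence threshold are hypothetical illustrations, not a description of any existing system.

```python
from dataclasses import dataclass
from enum import Enum

class Outcome(Enum):
    DESTROY = "destroy"                # no medical reason for longer retention
    RETAIN_LONGER = "retain"           # 'or as much longer' criterion met
    REFER_TO_PRACTITIONER = "refer"    # doubtful case: human assessment required

@dataclass
class Assessment:
    outcome: Outcome
    confidence: float  # the system's certainty in its own judgment, 0..1

CONFIDENCE_THRESHOLD = 0.95  # hypothetical cut-off; would need clinical validation

def route_record(assessment: Assessment) -> Outcome:
    """Route a record whose 20-year retention period has expired.

    Only doubtful cases reach the practitioner; clear-cut cases are
    handled automatically, which is exactly what brings such a system
    within the wording of Article 22(1) AVG (discussed below).
    """
    if assessment.confidence < CONFIDENCE_THRESHOLD:
        return Outcome.REFER_TO_PRACTITIONER
    return assessment.outcome
```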
The following examines the legal (im)possibilities of and preconditions for the deployment of such an AI system. Whether AI is actually capable of this is, of course, beyond the scope of this article. The Wgbo,iv viz. Article 7:454 BW, Article 22 AVG and the AI Regulation will be discussed in turn.
Article 7:454 (3) BW: a test by the doctor personally required?
Article 7:454 (1) of the Civil Code lays down the care provider's obligation to keep a file:
The care provider establishes a file relating to the patient's treatment. (...)
The retention period is governed by Article 7:454 (3) of the Civil Code:
Without prejudice to the provisions of Article 455, the care provider shall keep the file for twenty years from the time of the last change in the file, or as much longer as reasonably follows from the care of a good care provider.
The aforementioned paragraphs literally require actions by the care provider personally, and the Explanatory Memorandum from 1989-1990 also takes the person of the care provider himself as its starting point.v In practice, however, the setting up of the file is partly carried out by the ICT department of a healthcare institution, and the retention of medical records lies for the most part with the ICT department.
Therefore, from the point of view of (personal) care, there do not seem to be any objections to having an AI system, rather than the care provider personally, review records against the 'or so much longer' criterion. Of course, the correctness of the AI's processing will have to be at least at the level of that of a good care provider, that is, at the level of a reasonably competent and reasonably acting professional.vi That means that a certain margin of error that would also occur in a review by care providers themselves would be permissible. But errors specifically associated with or inherent in the chosen AI technique that a care provider could not make, such as hallucination, would not be allowed. Such an AI technique would thus be unsuitable for healthcare.
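As a rough illustration of such an acceptance standard, one could imagine a gate of the following kind; the function and its parameters are hypothetical, and measuring a count of hallucination-type errors would in practice itself require validated testing.

```python
def acceptable(ai_error_rate: float,
               clinician_error_rate: float,
               hallucination_count: int) -> bool:
    """Sketch of an acceptance gate for the 'good care provider' standard.

    An ordinary margin of error is tolerated up to the benchmark of a
    reasonably competent, reasonably acting professional; errors a care
    provider could not make (hallucinated facts) are not tolerated at all.
    """
    if hallucination_count > 0:   # error class inherent to the AI technique
        return False
    return ai_error_rate <= clinician_error_rate
```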
With regard to other parts of the treatment agreement, however, the conclusion will be that the interpersonal contact between patient and care provider is indispensable for proper treatment and that AI cannot therefore replace the care provider there. In the first place, personal contact between care provider and patient is important for the mutual trust that must exist, especially in more invasive forms of care where the patient must rely on the care provider. But it also applies to the more substantive medical aspects of care. For example, although AI is capable of recognizing emotions, the question is whether this can ever replace the intuitive sense of some care providers that 'something is not right'.
At the same time, it follows from the wording of the aforementioned paragraphs that the care provider is liable under disciplinary and possibly civil lawvii for the malfunctioning of the AI system. This will not be considered further here.
The most important documents concerning the professional ethics of doctors, i.e. the Doctors' Oath and the Rules of Conduct for Doctors - documents with which caregivers must comply in the context of treatment under the Wgboviii - do not currently contain any provisions concerning the use of AI in the context of medical treatment.
The conclusion, based on the foregoing, is that the Wgbo, in particular Article 7:454(3) of the Civil Code, does not contain any impediments to the use of AI as a substitute for the care provider in records management based on the 'or so much longer' criterion. A condition is that the correctness of the AI's processing in such a test must be at least at the level of a reasonably competent and reasonably acting professional colleague.
Obstacles and preconditions from article 22 AVG
An AI system that, without intervention by a care provider (as the default), evaluates files against the 'or so much longer' criterion, selects them for destruction or for a given period of longer retention, and possibly performs the destruction or longer retention itself, falls squarely within the description that Article 22(1) AVG gives of automated decision-making:
The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or significantly affects him or her in any other way.
The prohibition of Article 22(1) AVG
In the Guidelines on automated decision-making and profiling (hereinafter: the Guidelines), the WP29, the 'predecessor' of the EDPB, clarified that, despite the use of the word 'right', this provision constitutes a prohibition.ix Therefore, if the prohibition applies to an AI system, the party wishing to use it must comply with it without further ado, without the data subject having to invoke his or her right. The Dutch courts have since adopted the interpretations given in these Guidelines, implicitly or explicitly, on a number of occasions, as a result of which they have become part of the Dutch legal order.x
The reason for a prohibition is that automated decision-making and profiling can involve great risks for data subjects. Profiling, for example when someone is wrongly placed in a certain profile, can lead to incorrect conclusions or predictions with regard to that person. Something that should ideally not occur with the envisioned AI system, given the potential impact on healthcare delivery and patient health.
In the Schufa judgment, the European Court of Justice considered that the three elements 'decision', 'exclusively automated processing' and 'legal effects' from Article 22(1) AVG are cumulative requirements for this prohibition to apply.xi
The following considers whether the envisaged AI system meets these three requirements:
'Decision':
According to the Court, this concept has a broad scope.xii A determination by the intended AI system that a file should be destroyed or retained, which is subsequently acted upon, undoubtedly qualifies as a decision.
'Exclusively automated processing':
It can be predicted in advance that a care provider will sometimes - or, depending on the precision of the envisioned AI system, somewhat more often - have to check an outcome for a specific file, but the very design of such a system is that the care provider would no longer be involved in records management based on the 'or so much longer' criterion, at least no longer at the level of each individual file. In my opinion, an AI system with such a set-up also falls within the scope of Article 22(1) AVG. Otherwise, all patients whose records the AI system would fully automatically 'destroy' or 'keep longer' would not be protected by the prohibition of Article 22(1) AVG - and these patients would most likely constitute the majority of all patients reviewed by such a system. Such an approach, based on the material consequences of the deployment of AI, is in line with the broad interpretation that the ECJ gave to the concept of 'automated decision-making' in the aforementioned Schufa judgment.xiii
'Automated processing, including profiling':
That the envisaged AI system would use profiling in its assessment is also clear when the definition in Article 4(4) AVG is considered:
Any form of automated processing of personal data in which, on the basis of personal data, certain personal aspects of a natural person are evaluated, in particular with the aim of analysing or predicting his professional performance, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements. (emphasis mine: IS)
'Legal effect' or 'that otherwise significantly affects him':
It is worth noting in this connection that the Wgbo grants the patient a number of inalienablexiv rights, patient rights,xv in addition to 'ordinary' contractual rights. In order to strengthen the patient's (evidentiary) position, both types of rights have largely been formulated as obligations of the care provider.xvi The way in which the care provider, in performing the medical treatment agreement, fulfils an obligation resting on him under the Wgbo therefore quickly touches the legal position of a patient or his patient rights; in other words, it quickly has legal consequences. That the destruction or longer retention of a medical record is at least significant for the patient's legal position seems subject to little doubt. This also follows from the Guidelines' description of 'legal consequences': 'A legal consequence may include something that affects a person's legal status or his rights under a contract.'xvii
That the fulfilment of obligations under the Wgbo quickly entails legal consequences also means that such fulfilment quickly falls under the wording of the prohibition of Article 22(1) AVG. Destroying a file after it has been determined that, from the point of view of good care, there is no reason to keep it longer - or, conversely, keeping it longer from the point of view of good care - will not affect the patient in question to a significant degree, but does fall under Article 22(1) AVG because there are legal consequences.
Thus, the three cumulative conditions for applying the prohibition of Article 22(1) AVG to the envisaged AI system are met. Does this mean that such a system may not be used for medical records management?
The exception to the prohibition of Article 22(2)(a) AVG
The previous question will be answered at the end of this article, but on the basis of Article 22(2)(a) AVG the answer for now is 'no'. Paragraph 2(a) excludes from the prohibition the use of AI systems that fall within the wording of paragraph 1, if the decisions taken by the system are necessary for the conclusion or performance of a contract. Thus, in the context of curative care, where care is provided on the basis of the medical treatment agreement concluded between patient and physician, the prohibition of Article 22(1) AVG does not apply. However, pursuant to paragraph 3, appropriate measures must be taken to protect the rights and freedoms and legitimate interests of the data subject, including at least the right to human intervention, the right to make one's point of view known and the right to challenge the decision.
In healthcare practice, however, giving effective substance to such safeguards of patient rights is often problematic.
Illustrative of this is the way in which the KNMG approaches the exercise of another patient right, i.e. the patient's right to object to the use of his medical data for quality purposes.xviii The KNMG guideline 'Dealing with medical data' states as conditions that patients must have been informed in advance and must not have objected. It states that patients may be informed by means of a privacy statement on the website.xix Anyone consulting the website of any hospital, however, knows how deeply such a privacy statement is usually hidden away. In addition, there are no publicly known data on what percentage of patients consult the privacy statement on a healthcare institution's website, whether of their own accord or prompted by the provider. Without such data, and given this mode of informing, I do not believe it can be said that patients have actually been informed (in advance).
Especially when it comes to an irrevocable act such as the destruction of a medical record, truly effective measures will have to be taken to ensure that a patient can actually bring about a review by the care provider himself or challenge an impending destruction of his record. In this regard, the recommendations in Appendix 1 of the Guidelines speak of 'bringing it explicitly to the attention of the data subject' and, when a website is used, of a 'clearly visible notice'.xx The intended AI system could and should be helpful here too, for example by informing a patient electronically and in a timely manner of a planned destruction or longer retention. Because this is not an activity of the care provider in current practice, and will not be in the envisioned AI system either, this functionality should be a mandatory part of it. In response to such notifications, care providers will undoubtedly be called upon more frequently in connection with the destruction of records, but this remains a gain compared with having each record reviewed, strictly by the book, by the relevant practitioner personally at the end of the standard retention period.
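By way of illustration, a minimal sketch of such a notification-and-objection flow follows; the names (plan_destruction, notify, has_objected) and the six-week window are assumptions, not existing systems or legal requirements.

```python
from datetime import date, timedelta

OBJECTION_WINDOW = timedelta(weeks=6)  # hypothetical period; a policy choice

def plan_destruction(record_id: str, today: date, notify) -> date:
    """Step 1: inform the patient electronically of the planned
    destruction, clearly visible rather than buried in a privacy
    statement. `notify` stands in for the institution's messaging
    system (an assumption, not a real API)."""
    due = today + OBJECTION_WINDOW
    notify(record_id, planned_date=due)
    return due

def execute_destruction(record_id: str, has_objected) -> str:
    """Step 2, at the due date: destroy only if no objection was
    lodged; otherwise route the record to the practitioner for review."""
    return "refer_to_practitioner" if has_objected(record_id) else "destroy"
```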
However, the prohibition remains in effect for all forms of medical care that are not provided on the basis of a contract, such as compulsory care under the Care and Compulsion Act and the Compulsory Mental Healthcare Act.
It is not the place here to go into the desirability of this difference between the respective patient groups, but it must be noted that the difference cannot in any case be explained as a matter of principle. Both the Wgbo and the Care and Compulsion Act and the Compulsory Mental Healthcare Act exist (partly) to concretize and realize the human and fundamental right to health care. In addition, the prohibition would also largelyxxi have applied if the Dutch legislator had chosen to regulate curative care not under private law but under administrative law. Whether the European legislator, when formulating Article 22 AVG, was mindful of a possible role of the contract in the fulfilment of human rights such as the right to health care, or in the context of a public duty, is doubtful. Recital 71 to the AVG leaves it at the mere observation that in the case of a contract there is an exception to the prohibition of paragraph 1. In my opinion, this does not alter the fact that in any form of care it would be desirable for the envisaged AI system to be usable for determining the retention period on the basis of the 'or so much longer' criterion, provided it performs at least at the level of a good care provider.
However, even if the deployment of the intended AI system escapes the prohibition of the first paragraph through the operation of Article 22(2)(a) AVG and appropriate measures have been taken under paragraph 3, it cannot be put into use without further ado.
The condition of Article 22(4) AVG
Indeed, it follows from Article 22(4) AVG that if health data are processed by an AI system permitted under paragraph 2(a), the explicit consent of the data subjects is required. It follows from the nature of the matter that this consent must be obtained from the patient concerned before the intended AI system processes his medical record. In my view, it is not necessary that explicit consent be obtained from all patients whose records would be reviewed in this manner before the envisaged AI system is put into operation.
The requirement of explicit consent will result in three groups of patients: patients who have given explicit consent, patients who have indicated that they do not wish to give it, and patients whose position is not yet clear. The records of the last two groups will have to be excluded from the test against the 'or so much longer' criterion by the records management system or the intended AI system. Because this automated processing does not entail any legal consequences for, or otherwise significantly affect, the patients in these groups - after all, their records will either be assessed by the care providers themselves or kept longer en massexxii - this processing does not fall under the prohibition of Article 22(1) AVG.
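A minimal sketch of this selection step, assuming a per-record consent field maintained by the records management system (a hypothetical data model):

```python
from enum import Enum

class Consent(Enum):
    GIVEN = "given"
    REFUSED = "refused"
    UNKNOWN = "unknown"

def eligible_for_ai_test(records: list[dict]) -> list[dict]:
    """Select only the records of patients who gave explicit consent
    (Article 22(4) AVG); the other two groups are excluded and left to
    assessment by the care providers themselves or kept longer en masse.
    """
    return [r for r in records if r["consent"] is Consent.GIVEN]
```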
Paragraph 4 also provides that appropriate measures must be taken to protect the legitimate interests of the data subject. Recital 71 of the AVG specifically mentions the danger of discrimination with regard to special categories of personal data. It is quite conceivable that this could occur with profiling based on health data; for example, when the algorithm is trained exclusively on data concerning heart attacks in men, while a heart attack in women usually presents with different symptoms. In connection with whether or not longer retention is required, the intended AI system could reach wrong conclusions for women on that basis. One of the recommendations of the Guidelines therefore concerns evaluating algorithms to prove that they actually function as intended and do not generate discriminatory, incorrect or unjustified results.xxiii
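Such an evaluation could, for instance, take the form of a simple comparison of error rates across patient groups on a clinician-reviewed test set; the sketch below is illustrative and assumes hypothetical field names.

```python
from collections import defaultdict

def error_rates_by_group(cases: list[dict]) -> dict:
    """Compare the rate of wrong retention conclusions per patient group.

    `cases` is a list of dicts with keys 'group' (e.g. sex), 'predicted'
    and 'correct' retention outcomes - a hypothetical evaluation set
    reviewed by care providers. A marked difference between groups (for
    instance men vs. women in the heart-attack example) would point to
    the kind of discriminatory output the Guidelines warn against.
    """
    totals, errors = defaultdict(int), defaultdict(int)
    for c in cases:
        totals[c["group"]] += 1
        if c["predicted"] != c["correct"]:
            errors[c["group"]] += 1
    return {g: errors[g] / totals[g] for g in totals}
```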
Thus, whether or not the intended AI system can be deployed with respect to an individual record ultimately depends, under the AVG, on whether or not the patient gives explicit consent. In my opinion, a desirable outcome in this case: at stake are the file obligation, and thus also the patient's 'file right', on which the envisaged AI system would decide automatically with the application of profiling - with, as indicated above, the attendant risks to proper care and to the patient involved.
How important human intervention is, also from a legal point of view, is shown by the fact that the practice of screening images for tumors using self-learning AI does not fall under the prohibition of Article 22(1) AVG. Such deployment, at the heart of care processes, has potentially far more far-reaching consequences for the patient than records management, but because it is always the radiologist who makes the final diagnosis, there is no unacceptably risky and thus prohibited deployment of AI. That the patient has no say in this deployment of AI is also justifiable: here AI is deployed to support the care provider, and the choice to do so falls under the medical-professional autonomy of the care provider.
The mandatory DPIA under Article 35 (1) and (3) (b) AVG
Finally, although the final say on the deployment of the envisaged AI system lies with the patient, whether it may actually be deployed will depend on the outcome of the risk assessment and mitigation in the context of the DPIA required under Article 35(1) and (3)(b) AVG. In that context, one of the essential characteristics of the intended AI system, i.e. as little human intervention as possible, will have to be closely scrutinized. If too many unavoidable risks remain in this respect, the rationale behind Article 22(1) AVG will be confirmed, but it will mean the (provisional) premature end of the envisaged AI system.
The AI Regulation
The AI Regulation (hereinafter: AIV) provides rules for AI based on risk categories and takes effect in phases. On February 2, 2025, the rules relating to prohibited AI practices came into force, and by August 2027 the AI Regulation will be fully effective. The following examines which risk category the proposed AI system would fall into and what rules would need to be observed as a result. This discussion assumes that the AI Regulation is already fully in force.
In the case of the envisioned AI system, the algorithm has to draw a conclusion for each file - based on factors such as patient characteristics, disease progression, and general knowledge about recurrence and heredity - as to whether or not it should be kept longer. Given this multitude of factors, which must also be considered in their interrelationships, it is unlikely that the conclusion can be drawn on the basis of simple 'if-then' rules. The envisioned AI system, also given the desire to keep the care provider as much as possible 'out of the loop', will need to work autonomously and be able to infer how to arrive at a conclusion regarding retention from the input from the file in question and other comparable files. Given all these requirements, it is an AI system that falls under the AIV, cf. the definition of 'AI system' in Article 3(1) AIV. What follows is an examination of which AIV rules would apply to the proposed system.
High-risk AI? The MDR
Perusal of the restrictively worded Article 5 AIV on prohibited AI practices reveals that the envisaged AI system does not fall under that prohibition. The question to be answered next is whether the envisaged AI system falls under the category of high-risk AI of Article 6 AIV. Given the systematics of this article, the following questions are sequentially relevant (a schematic sketch follows the list):
1. Is the proposed AI system, as a product or as a safety component of a product, covered by the European safety legislation listed in Annex I and, if so, must it undergo a conformity assessment under that legislation? (Article 6(1)(a) and (b) AIV)
2. If the previous question must be answered in the negative, is there an AI system as referred to in Annex III?
3. If yes, is there an exception under Article 6(3) AIV?
4. If so, does the exception (to the exception) formulated in the last sentence of paragraph 3 apply?
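Purely as a schematic aid, the sequential test can be rendered as follows; the boolean inputs compress legal judgments that in reality require interpretation, so this merely shows the order of the test.

```python
def high_risk_under_art6(covered_by_annex_i: bool,
                         needs_conformity_assessment: bool,
                         listed_in_annex_iii: bool,
                         art6_3_exception: bool,
                         profiling_of_natural_persons: bool) -> bool:
    """Sequential test of Article 6 AIV, following the four questions above."""
    # Q1: product or safety component under Annex I legislation that must
    # undergo a conformity assessment (Article 6(1)(a) and (b) AIV)
    if covered_by_annex_i and needs_conformity_assessment:
        return True
    # Q2: use case listed in Annex III?
    if not listed_in_annex_iii:
        return False
    # Q3: exception of Article 6(3) AIV, unless
    # Q4: the exception to the exception (profiling of natural persons) applies
    if art6_3_exception and not profiling_of_natural_persons:
        return False
    return True
```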
Re 1.
The European regulation potentially applicable here is the Medical Devices Regulationxxiv (hereinafter: MDR), mentioned in point 11 of Annex I to the AIV, which as a European regulation is directly applicable in the Netherlands. Article 2(1), first subparagraph, MDR lists a number of itemsxxv that could qualify as 'medical devices', including software. However, not all software used in medical care qualifies as a 'medical device'; the other elements of the definition in Article 2(1) MDR relevant to software, such as 'intended by the manufacturer', must also be met. With some elements, however, the question arises how narrowly or broadly they should be interpreted. Does 'used on humans' imply that the device must be applied directly to the patient, or may it also make only an indirect contribution to care, for example in the context of records management? And does 'treatment or alleviation of disease' imply an act in the field of medicine, as referred to in Article 7:446(1) of the Civil Code, or can it also include the broader performance of the medical treatment agreement, which therefore includes records management? The words 'or alleviation' indicate the former. However, the phrase following the enumeration in this paragraph, and certainly the English version thereof, in my opinion provides the necessary clarity:
'where the main intended action in or on the human body is not achieved by pharmacological or immunological means or by metabolism, but can be supported by those means.' xxvi
Stated differently, with the deployment of the medical device, a certain action in or on the human body is intended. Pharmacological or immunological agents or metabolism may be supportive in this intended action of the medical device.
It is clear that the deployment of the intended AI system does not aim at any particular action in the patient's body. Suppose the intended AI system erroneously destroyed a record and, as a result, the physician is unable to make the correct diagnosis, causing physical harm to the patient. Although this harm is then - the doctor could also have chosen to supplement the missing information - the indirect result of the wrongful destruction, it is not an intended effect of the AI system on the patient's body.
This conclusion can also be drawn from the Medical Device Coordination Group's (hereinafter: MDCG) Guidance on Qualification and Classification of Software in the MDR and IVDR (hereinafter: Guidance). Regarding information systems such as the EHR, it considers:
Information Systems that are intended only to transfer, store, convert, format, archive data are not qualified as medical devices in themselves. However, they may be used with additional modules which may be qualified in their own right as medical devices (MDSW).xxviii
Modules that are intended to provide additional information that contributes to diagnosis, therapy and follow-up (e.g. generate alarms) are qualified as medical devices.xxix
Archiving usually includes appraisal and selection. The question is whether the MDCG assumed this and whether selection based on the 'so much longer' criterion would be included. More guidance can be found in the second sentence. It is reasonable to assume that 'therapy' and 'follow-up' should be understood, in the narrow sense, as a medical act or the support thereof. The envisaged AI system would not be used for that purpose, nor to contribute to diagnosis. With the slight caveat that, when writing the Guidance, the MDCG will not have had in mind an AI system that selects on substantive medical criteria in the context of archiving, it can therefore be said that also on the basis of the Guidance the envisaged AI system cannot be regarded as a medical device. A conformity assessment under Article 52 MDR is therefore not at issue, so Article 6(1)(b) AIV is not met.
Although not a medical device, the intended AI system could qualify as a safety component given the definition of Article 3(14) AIV: 'an AI system (...) the failure or defective functioning of which endangers the health and safety of persons (...)'. However, this does not make Article 6(1)(a) applicable, because the intended AI system is not intended to be used as a safety component of a product covered by the MDR. After all, it will be deployed with an EHR, and that, as described above, is in itself not a medical device.
In addition, the Guidance notes the following about software failure:
It must be highlighted that the risk of harm to patients, users of the software, or any other person, related to the use of the software within healthcare, including a possible malfunction is not a criterion on whether the software qualifies as a medical device.
Were this different, every EHR would soon have to be classified as a medical device, for example because many EHRs are sometimes unexpectedly 'out of order'.
Thus, with a slight caveat, it can be concluded that the proposed AI system would not qualify as a high-risk AI system under Article 6(1) AIV.
Re 2.
Annex III to the AIV lists a number of so-called use cases: AI applications in a number of societal application areas for a number of specifically defined purposes. Both the application areas and the purposes are listed exhaustively. The intended AI system does not fall under any of the described use cases. The use case that comes closest to the deployment of the intended AI system is point 5(d): systems 'for triage of patients in need of urgent medical care'. Clearly, however, this only concerns distinguishing between patients based on their medical condition in emergency situations. Questions 3 and 4 concern only high-risk AI covered by Annex III and are therefore not further relevant here.
Therefore, the envisaged AI system does not qualify as a high-risk AI system under Article 6(2) AIV either. Since an AI system qualifies as high-risk only when Article 6(1) or 6(2) AIV applies, the conclusion is that the envisaged AI system does not qualify as a high-risk AI system.
This means that Article 10 AIV on data and data governance - more specifically on assessing a dataset for possible bias that could lead, for example, to discrimination - does not apply. The same goes for Article 14 AIV on human oversight, which should aim to prevent or mitigate risks to health, safety or fundamental rights, and for the fundamental rights impact assessment of an AI system under Article 27 AIV.
Transparency obligations
Do the transparency obligations of Chapter IV AIV apply, then? As the title of the chapter indicates, these are obligations for certain AI systems, listed in Article 50(1) to (4). The question may be raised whether the proposed AI system falls under paragraph 3, which formulates an information obligation for the deployer of a biometric categorization system. Such a system is intended to classify natural persons into specific categories based on their biometric data. From the definition of 'biometric data' it appears that in some cases these data may also qualify as health data, although often, unlike health data, they will not provide information about the health status of the person in question. While the envisioned AI system also classifies natural persons into specific categories, it does so primarily on the basis of health data and at most to a limited extent on the basis of biometric data. The envisaged system therefore does not fall under the operation and obligations of Article 50(3) AIV, so the persons 'exposed' to the envisaged AI system need not be informed about its operation.
There is, of course, a transparency obligation under the AVG. Articles 13 and 14 require that the data subject be informed of the existence of automated decision-making and, at least in the case of profiling, that useful information about the underlying logic, the significance and the expected consequences of that processing for the data subject be provided.
Conclusion and some recommendations
Conclusion
Since the AI system is not subject to any transparency obligations under the AIV either, it falls, given its low risk, into the 'category' not regulated by the AIV. A perhaps astonishing conclusion, which in any case stands in sharp contrast to the fact that the envisaged AI system as described above is in principle covered by the prohibition of Article 22(1) AVG - the reason the prohibition nevertheless does not apply lying in the 'accidental' fact that a contract is being performed. Provided that the explicit consent of the patient has been obtained for its deployment and adequate protective measures have been taken, the intended AI system may therefore be deployed, on the basis of the applicable regulations, for the automated enforcement of the retention period of medical records using the 'or so much longer' criterion.
Recommendations
The protective measures required by Article 22 AVG for the case of derogation from the prohibition of paragraph 1 will therefore have to be such, when the envisaged AI system is developed and put into operation, that the inherently high risk of automated decision-making without human intervention can be mitigated.
This should not be limited to the rights that Article 22(3) AVG explicitly, and by way of example, gives the data subject, such as the right to object. As described above, given the complexity of the task, the system will most likely involve autonomous, self-learning AI. As a result of incorrect self-learning - for example due to an unrepresentative dataset, or due to an error in the algorithm itself - the system may reach incorrect substantive medical conclusions as well as discriminate against certain patients or patient groups. Before the envisaged system is put into use, due consideration will therefore also have to be given to its robustness and potentially discriminatory outcomes as part of the mandatory DPIA (see above). And because of the human rights aspects, it is highly recommended to supplement the DPIA with an IAMA or a voluntary FRIA.
Given the risks mentioned, providers and deployers should consider voluntarily meeting the other requirements of the AIV for high-risk AI in addition to the FRIA. Perhaps practice will show that this cannot be avoided anyway if one wants to deploy AI that meets at least the requirement of good care.
In addition to promoting the introduction of AI, one of the aims of the AIV is to protect health and fundamental Charter rights. It is therefore conceivable that, for cases such as this one, where the use of the AI system may jeopardize the health of those affected and their fundamental right to health care, the European Commission will in time supplement Annex III to the AIV with a use case covering this situation. There would then be high-risk AI under the AIV.
A DPIA and FRIA will also address the necessity of the proposed AI system. As described above, in the context of applying the 'or so much longer' criterion, the need for an automated technical solution is well-founded, given the current state of our healthcare system. The question will rather be whether it is always necessary to deploy an autonomously operating and self-learning AI system, with the aforementioned risks. The Tuberculosis Data Archiving Directive, for example, states that, in deviation from the standard 20-year period, the medical record should be kept for life, as should a lung X-ray showing abnormalities, explicitly invoking the 'so much longer' criterion.xl Such simple determinations can readily be converted into 'if-then' rules for an algorithm (see the sketch below), and RPA (robotic process automation) can be used to run these rules against the EHR; the dangers of an autonomous, self-learning AI system then do not arise. This will not be possible for every medical specialty, but where it is, it offers the relevant profession the opportunity to enforce the retention period using the 'or so much longer' criterion in a simple, low-risk way. Article 22 AVG will still apply, as this involves automated decision-making with legal consequences, but the protective technical measures to be taken will be of a simple nature.
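A minimal sketch of such a rule, assuming hypothetical record fields; the point is precisely that no self-learning component is involved.

```python
def tb_retention_rule(record: dict) -> str:
    """'If-then' translation of the kind of determination the
    tuberculosis guideline makes: keep the medical record, and a lung
    X-ray showing abnormalities, for life. Field names are illustrative;
    an RPA bot could run such rules against the EHR."""
    if record["specialty"] == "tuberculosis":
        return "retain_for_life"
    if record.get("lung_xray_abnormal"):
        return "retain_for_life"
    return "apply_standard_20_year_period"
```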
Finally, a recommendation for the event that - perhaps (partly) on the basis of this article - a healthcare institution concludes that the deployment of AI in the context of maintaining the retention period of medical records is not (yet) opportune and that the records must therefore be kept indefinitely. In that case, at least the records whose 20-year retention period has expired should no longer be accessible to users of the EHR, save via a 'break the glass' emergency procedure. RPA can be helpful here too, with very little risk. And finally: the old paper records, still present in large numbers, can be digitized.
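A sketch of such an access rule, with hypothetical names and a simplified 20-year calculation:

```python
from datetime import date, timedelta

RETENTION = timedelta(days=20 * 365)  # statutory 20-year period, approximated

def may_access(last_change: date, today: date, break_glass: bool) -> bool:
    """Block ordinary EHR access to records whose 20-year retention
    period has expired; only a logged 'break the glass' emergency
    procedure re-opens them. A simple rule an RPA tool could enforce."""
    expired = today - last_change > RETENTION
    return (not expired) or break_glass
```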