PONT Data&Privacy

AI in healthcare (part 2): privacy

Suppose your hospital is approached by a vendor specializing in image analysis software tools. The supplier would like to gain access to scans of patients diagnosed with skin cancer in order to develop a software tool that can identify skin cancer at an early stage. You would then receive a substantial discount on its purchase cost. Recognizable? Then make sure you read through the tips below before negotiating with the vendor.

October 24, 2021

Background articles

This blog focuses specifically on privacy. Part 1 of this blog discussed the crucial preliminary questions, the legal framework and some contractual tips.

Privacy Law

When applying AI in healthcare, patient privacy must be considered. An AI tool needs data to function properly, and all information about patients is classified as "personal data". The AVG (the Dutch term for the GDPR) and the UAVG (the Dutch GDPR Implementation Act) apply to the processing of personal data. In addition, care providers are bound by medical confidentiality. This presents its share of challenges. In any case, pay attention to the following points.

Health data

Data that reveals something about a person's health qualifies as "special personal data". Scans of patients diagnosed with skin cancer are an example. Due to the sensitive nature of special personal data, processing such data is in principle prohibited. Processing is only allowed when it can be based on one of the specific exceptions contained in the AVG or the UAVG. Express patient consent is one possibility, but there are alternatives: the UAVG includes exceptions for scientific research, the proper treatment of patients and safeguarding the quality of the care provided.

Medical confidentiality

In addition to the rules on health data in the AVG and UAVG, the WGBO (the Dutch Medical Treatment Contracts Act) provides that a healthcare provider may not disclose information about the patient to anyone other than the patient without consent. Medical confidentiality can therefore stand in the way of applying AI in healthcare. However, those who are "directly involved" in the performance of the treatment agreement do not count as "others than the patient"; the KNMG (the Royal Dutch Medical Association) mentions nurses, physician assistants and dieticians as examples. The Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) further takes the position that storing patient data in the cloud does not require consent, because the cloud provider can (also) be considered "directly involved" in the performance of the treatment agreement. Given the AP's example, it is not inconceivable that an AI supplier could be considered "directly involved", but that is not yet a generally accepted position.

Processor Agreement

The AI supplier may propose entering into a processor agreement with the hospital, designating the AI supplier as processor. In that case, ask carefully what exactly the AI supplier will do with the personal data and for what purposes. After all, an AI supplier that processes personal data for its own purposes cannot be considered a processor, which means a processor agreement should not be concluded. Since data in AI is usually used not only to make intelligent predictions but also (precisely) to continuously improve the AI tool, processing by the AI supplier for its own purposes will readily be involved. Within the meaning of the AVG, the AI supplier is then not a processor but a controller. Even then it is wise to conclude a contract with clear privacy arrangements, just not a processor agreement.

Automated decision-making

The AVG prohibits decisions that significantly affect the patient and are based solely on automated processing of personal data. When a treatment choice is made on the basis of AI, this prohibition must be taken into account: a medical treatment choice can eminently be a decision that significantly affects the patient. Merely involving automatically generated recommendations in a treatment choice does not violate this prohibition, because the decision is then not based "solely" on automated processing. But if a care provider always follows the automatically generated recommendations, that argument no longer holds. Incidentally, there are some exceptions to this prohibition, including explicit patient consent.

'AI & algorithms' focus area of the AP

The AP oversees the use of personal data. AI & algorithms is one of the AP's three focus areas through 2023. In the healthcare sector, the AP has already taken enforcement action with some regularity. In 2019, the AP imposed a fine of EUR 460,000 on the HagaZiekenhuis and in 2021 a fine of EUR 440,000 on the OLVG hospital, for insufficient security of patient records.

New draft AI Regulation

On April 21, 2021, the European Commission (EC) published a draft AI Regulation. This draft regulation provides rules on the development, placing on the market and use of AI-driven products, services and systems within the EU.

The regulation takes a risk-based approach, distinguishing between AI applications with different risk levels (unacceptable, high, limited or minimal). If a healthcare AI system qualifies as a medical device under the Medical Device Regulation (2017/745/EU), applicable since May 26, 2021, it will likely fall into the "high risk" category. An AI system in the "high risk" category must meet a long list of requirements, including risk management, data governance, technical documentation, registration requirements, transparency and information to users, human oversight, accuracy, robustness and cybersecurity.

If your organization is contributing to the development of an AI system or to its maintenance, it is wise to take into account the requirements of the regulation now and set up your organization accordingly in advance.


KNOWLEDGE PARTNER

Martin Hemmer