It is impossible to imagine our daily lives without AI, and AI is increasingly being used in healthcare as well. In this fact sheet, we outline, from a health law perspective, what healthcare providers should pay attention to when purchasing and using AI. We also provide tips and recommendations on proper use.
Artificial Intelligence (AI) is not a well-defined legal concept. AI refers to systems, usually software, that can perform tasks and exhibit "intelligent" behavior. For example, consider a tool that predicts with a high degree of accuracy which patients will show up for a consultation, or one that supports doctors in analyzing MRI scans.
Of interest to all users of AI is the AI Act, which takes effect incrementally over 2024-2026. The AVG (the GDPR) is also relevant. Specifically for AI used in healthcare, it is important to determine whether the application is a medical device. AI is a medical device if it has a medical purpose. Medical devices must meet the (quality) requirements of the Medical Device Regulation (MDR). The MDR distinguishes four risk classes; the higher the risk class, the more stringent the requirements the device must meet. All medical devices must bear a CE mark, which shows that the device meets the minimum requirements of the MDR.
Good care
For all AI deployed in the care process, the care provider must, under the Wkkgz, ensure that the AI is safe and of good quality: the deployment of AI must result in good care. Once the AI Act takes effect, the AI must also meet its requirements. It is the healthcare provider's responsibility to check this before the AI is deployed.
If the AI application has a medical purpose and is therefore a medical device, it must also meet the minimum quality requirements of the MDR. The healthcare provider must check upon purchase that the AI application has a CE mark and is registered in Eudamed.
Employees working with AI applications must be given proper instructions. The healthcare provider should also install the updates offered by the supplier and thus properly "maintain" the application.
Inform patient
Often AI is used in the context of executing a treatment agreement, but not always. Under the Wgbo and the Wkkgz, patients must be properly informed about their treatment.
If the AI is a medical device, the patient must be properly informed: what is the AI application suitable for, and what are the risks of incorrect use? If the AI application does not come from the healthcare provider itself, patients should be informed of that as well.
When using AI, unexpected events can occur that result in harm to a patient. The bar for liability is high. Whether the healthcare provider is liable if a patient is harmed will always depend on the circumstances of that case. By obtaining insurance, healthcare providers can mitigate liability risks, but never completely eliminate them.
The provider is responsible for ensuring that the AI meets the quality requirements. If the AI does not comply, the provider will usually be liable. The patient will often hold the healthcare provider or caregiver liable, not the supplier. If the healthcare provider is sued over a defective AI application, it may be able to recover the damages from the supplier, because the supplier bears product liability.
If the AI application is a medical device, the supplier must ensure that the product is CE marked. CE marking is nothing more than a declaration that the minimum legal requirements are met; it does not necessarily make the product suitable for the healthcare provider's intended use. It may also turn out later that the product has defects.
Individual caregivers who use AI in the provision of care may in principle assume that a safe and high-quality device is being used. If they follow the care provider's policy and there are no signs that this policy is inadequate, they cannot be held individually accountable for it. When in doubt about the appropriateness of the AI application or the policy, the caregiver should discuss this with the healthcare provider. It is also important that the caregiver has a good understanding of how the AI application works and can inform patients about it. If a doctor acts contrary to what may be expected of a reasonably competent professional, he or she can face disciplinary action; that threshold, too, is high.
The above is not legal advice, but offers an outline of the main aspects. Each application and situation is different.