symbolic image of news about AI/ML-based regulated software

News


Yet another AI Ethics Document, this time by the UN System

26.09.2022

Image of the UN logo, representative for the United Nations System.

The United Nations System (the UN and its affiliated funds, programmes, and specialized agencies) has recently released its "Principles for the Ethical Use of Artificial Intelligence in the United Nations System". These principles will be mandatory for the development and use of any AI system within the UN system.

The speed at which various organizations publish almost identical sets of ethical or trustworthy AI principles and definitions is breathtaking. It would have been more helpful if the UN system had (1) simply reused an existing definition, and (2) stated how it intends to validate compliance with its principles.

FDA Policy towards AI/ML Changed Dramatically

19.09.2022

image of the FDA (Food and Drug Administration) building

The FDA's policy towards AI/ML-based products has changed drastically. In Oct 2021, the agency published 10 high-level guiding principles for the development of good machine learning practice (GMLP). As of today, the FDA has not officially released the GMLP themselves. It has, however, stated its new expectations very consistently in the various (pre-)submission responses we have seen within the last few months. These expectations largely concern the quality of the training and, in particular, the validation data, and the avoidance of unwanted biases.

While the consistency of the FDA feedback is good, it would be really helpful if the FDA made their expectations official to avoid failed submissions, and thus to save time and money for manufacturers.
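
The FDA's written feedback is not public, but the gist of the data-quality expectation can be illustrated with a simple representativeness check: compare the subgroup shares in the validation data against the intended patient population. Below is a minimal sketch of such a check; all names and attributes are illustrative assumptions, not FDA-prescribed.

```python
from collections import Counter

def subgroup_shares(records, attribute):
    """Fraction of records per subgroup, e.g. per sex, age band, or scanner vendor."""
    counts = Counter(record[attribute] for record in records)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

def representativeness_gaps(validation_records, population_shares, attribute):
    """Absolute gap between validation-set shares and the shares expected in the
    intended patient population; large gaps hint at unwanted bias in the data."""
    shares = subgroup_shares(validation_records, attribute)
    return {
        group: abs(shares.get(group, 0.0) - expected)
        for group, expected in population_shares.items()
    }

# Usage (hypothetical):
# gaps = representativeness_gaps(val_records, {"male": 0.49, "female": 0.51}, "sex")
```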

Model Calibration Useful for Explainable AI (XAI)

16.09.2022

figure showing the ML process: data is fed into the model, badly calibrated class probabilities are post processed into well calibrated class predictions

For manufacturers of AI/ML-based medical devices, one of the main obstacles is the transparency and explainability of the system, XAI for short. Especially for deep neural networks, XAI is hard to achieve and remains a topic of ongoing research.

One technique to improve transparency and explainability is calibration. In a well-calibrated model, the scores of the output neurons can be interpreted as confidence values for the model's predictions. Evidently, knowing that the model made its prediction with a confidence of, say, 89% greatly contributes to the interpretability of the system.

We have compiled a short article that describes the idea and the effect of post-calibration. Post-calibration is particularly interesting because it adds an extra output layer to an existing model, yielding much better calibration than the original model. If you're interested, check out our article.
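
As a rough illustration of the idea, one widely used post-calibration technique is temperature scaling: a single scalar is fitted on held-out validation data and rescales the logits before the softmax. The following is a minimal sketch assuming NumPy/SciPy and validation logits as arrays; it illustrates the general technique, not necessarily the exact method from our article.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z -= z.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum(axis=1, keepdims=True)

def negative_log_likelihood(temperature, logits, labels):
    probs = softmax(logits, temperature)
    return -np.mean(np.log(probs[np.arange(len(labels)), labels] + 1e-12))

def fit_temperature(val_logits, val_labels):
    """Fit the scalar T on held-out data by minimizing the NLL."""
    result = minimize_scalar(
        negative_log_likelihood,
        bounds=(0.05, 10.0),
        args=(val_logits, val_labels),
        method="bounded",
    )
    return result.x

# Usage: T = fit_temperature(val_logits, val_labels)
#        calibrated_probs = softmax(test_logits, T)
```

Because dividing the logits by a constant does not change their ranking, the predicted classes stay exactly the same; only the confidence values become more faithful.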

MDCG Position Paper "Transition to the MDR and IVDR": No AI Guidance

14.09.2022

picture of the European Commission with flag, representative of the MDCG

The MDCG has issued a position paper on the "Transition to the MDR and IVDR". The paper recognizes that the limited capacities of the notified bodies "may lead to disruption of supply of devices needed for health systems and patients ...".

In the paper, the MDCG announces additional guidance on clinical evaluation, which is highly appreciated. The paper does not, however, mention the much-anticipated guidance for AI/ML-based devices. I would be positively surprised if it were to see the light of day before 2024.

Dire Scientific Evidence for CE-marked AI software products

21.07.2022

picture from the paper "Artificial intelligence in radiology: 100 commercially available products and their scientific evidence"

The paper "Artificial intelligence in radiology: 100 commercially available products and their scientific evidence" concludes: "For 64/100 products, peer-reviewed evidence on its efficacy is lacking. Only 18/100 AI products have demonstrated (potential) clinical impact."

No wonder the FDA has raised its standards for clearance.

Czech Presidency moving EU AI Act forward

11.06.2022

image representing the czech republic

The EU AI Act is moving forward: the upcoming Czech presidency plans to present a revised compromise text by Jul 20 and to collect comments on it by Sep 2. This ambitious timeline hints at adoption by the Parliament within this year or early next year. We will stay tuned and keep you posted on the changes in the planned compromise text.

FDA tackling COVID-related EUA requests

05.06.2022

symbol picture of a coronavirus particle

The FDA has provided an update on the impact that the massive number of COVID-19-related requests for Emergency Use Authorization (EUA) has had on premarket review times.

Since Jan 2020, the FDA has received an enormous 8,000 COVID-related EUA requests and pre-submissions, and has granted some 2,300 authorizations.

Yesterday's update announces a step back toward normal with the reopening of acceptance of pre-submissions for non-COVID IVDs.
This is good news for medical device manufacturers seeking FDA clearance. The transparency with which the FDA communicates its review times is something manufacturers can only dream of in the EU MDR and IVDR realm.

IMDRF definition of "Bias" unhelpful

23.05.2022

IMDRF logo

The IMDRF has recently published the document "Machine Learning-enabled Medical Devices: Key Terms and Definitions". The intention of the document is to clarify common machine learning terms as a basis for further standardization work. Some of these definitions, however, have shortcomings with unintended negative impact. One of them is the definition of bias, adopted from ISO/IEC TR 24027:2021. It is a rather broad definition, stating that a bias is a "systematic difference in treatment of certain objects, people, or groups in comparison to others."

As the IMDRF document points out, this definition does not necessarily imply unfairness. It also covers treatment that has been deliberately optimized for specific subgroups. According to this definition, personalized treatment is highly biased and, at the same time, highly desirable.

This is not an academic discussion: current good machine learning practices all require biases to be avoided, and upcoming standards and regulations will do the same. For notified bodies, bias avoidance is a high-priority topic when auditing ML-based medical devices. The IMDRF (and therefore ISO/IEC TR 24027:2021) definition of bias is not the right basis for this.
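
To make the practical consequence concrete: under such a broad definition, any systematic performance difference between subgroups counts as a bias, including intentional ones. A minimal sketch of the kind of subgroup-gap check an auditor might run (illustrative; not taken from the IMDRF document):

```python
import numpy as np

def per_group_accuracy(y_true, y_pred, groups):
    """Model accuracy per subgroup (e.g. per sex or age band)."""
    return {
        g: float((y_true[groups == g] == y_pred[groups == g]).mean())
        for g in np.unique(groups)
    }

def max_accuracy_gap(y_true, y_pred, groups):
    """Largest accuracy difference between any two subgroups; under the
    ISO/IEC TR 24027:2021 definition, any nonzero gap is a 'bias'."""
    accuracies = per_group_accuracy(y_true, y_pred, groups).values()
    return max(accuracies) - min(accuracies)
```

Note that a model deliberately optimized for a particular subgroup is flagged by exactly the same check as a model that inadvertently underperforms on it, which is precisely the shortcoming of the definition.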

FDA warns Health Care Providers of AI-based Brain Imaging Software

28.04.2022

symbol image showing AI-based brain imaging software

The FDA has issued a warning with regard to the intended use of certain AI-based brain imaging software. The obvious problem is that the affected software devices are intended for prioritization and triage, but doctors use them for diagnostic assistance.

The less obvious problem is, in my humble opinion, that categorizing imaging software by intended purpose alone merely helps to lower the quality requirements, at the risk of foreseeable misuse. It is quite natural for a doctor to use a piece of software for diagnostic support if it seems to work well. All the more so as examining brain images for vessel occlusion does not strike me as a task that calls for prioritization and triage in the first place, but for thorough diagnosis.

Fantastic Review of EU AI Act by Lovelace Institute

26.03.2022

Ada Lovelace Institute Logo

Lilian Edwards from the renowned Ada Lovelace Institute has written an excellent review of the proposed EU AI Act. While she generally appreciates the proposal as an "excellent starting point for a holistic approach to AI regulation", she also criticizes the following four shortcomings:

  • (1) Treating AI systems like traditional products cannot capture the shared responsibilities in a complex web of development, adaptation, re-fitting, procurement, outsourcing, re-use of data from a variety of sources, etc.
  • (2) The people who are impacted by AI systems and whose fundamental rights the AI Act is supposed to protect have no rights under the AI Act, e.g. to file complaints about a system.
  • (3) The risk classification in the AI Act seems arbitrary and lacks clear and transparent criteria.
  • (4) All AI systems in the AI Act's scope should be subject to an assessment of the risk to fundamental rights, not only those deemed high-risk by the European Commission.

The paper is one of the best reads on the topic we've come across, and definitely time well spent for anyone interested in AI regulation.

Biometric Verification vs. Identification well defined by Biometrics Institute

11.03.2022

logo of the Biometrics Institute

The Biometrics Institute has published a set of definitions and illustrations with the aim of establishing a common understanding of some basic and highly relevant concepts. In particular, we appreciate the clear distinction between verification and identification, because the latter will be considered high-risk or even prohibited under the upcoming AI Act.
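
The distinction is easy to state in code: verification is a 1:1 comparison against the template of the claimed identity, whereas identification is a 1:N search across a whole gallery. A minimal sketch, assuming biometric templates are embedding vectors (all names and thresholds are illustrative):

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, claimed_template, threshold=0.8):
    """Verification (1:1): is this person who they claim to be?"""
    return cosine_similarity(probe, claimed_template) >= threshold

def identify(probe, gallery, threshold=0.8):
    """Identification (1:N): who in the gallery, if anyone, is this person?"""
    scores = {pid: cosine_similarity(probe, tmpl) for pid, tmpl in gallery.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

The 1:N search is what makes identification the regulatory concern: it can be run against people who never claimed an identity at all.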