In the second part of this blog series, we are not talking about dollar bills or the courtroom scene from the film ‘Miracle on 34th Street’; rather, we are looking at the level of autonomy entrusted to artificial intelligence (AI) in the medical device sector.
Trusting the tools
In the first part of this blog series (Artificial Intelligence: Are machines improving our lives?) we looked at function analysis as a means of determining which activities are better suited to human beings and which to technology. We are still very much in an early phase of AI in the medical device sector; however, machine learning has been around for decades, e.g. ECG analysis systems trained on ideal normal QRS complexes.
Figure 1: QRS Complex
One area we have been active in as a consultancy over the last year is software for retinal diagnostics. There are a number of devices now on the market utilising fundus cameras that automatically analyse fundus images from each eye to detect conditions such as diabetic retinopathy. Looking at the De Novo submission for one such device, the FDA level of concern for the software is understandably major, and hence the concern that the software may go ‘off-piste’ is correspondingly high.
Devices have been introduced in numerous markets, such as the US, EU and Australia, where lower classifications have been approved on the basis that a clinician confirms the results.
In medical devices generally, the advantage in comparison to other safety-related industries is that a clinically trained person can check the device's homework and overrule it, at the expense of increased time and cost. In the third part of this series we will look at AI in the automotive industry, where reaction time is a significantly more important factor.
Utilising the power of AI
One area that has been highlighted as a major success for AI is the detection of melanoma. As someone who had a large chunk of his left shoulder cut out two years ago, I cannot over-emphasise the importance of detecting melanoma early. Detected in a timely fashion, melanoma is nearly 100% curable. Like many people, we listen to the advice about moles or patches on the skin changing shape, but how often do we actually check, particularly on our shoulder or back? Again, this comes back to the topic of function analysis: the jobs humans and medical devices are each better suited to.
Apps running on mobile phones can be used to regularly check suspect areas of the skin, comparing and analysing changes to give earlier and more reliable detection. There are already many such apps on the market utilising AI.
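The core idea of comparing successive images of the same skin region can be illustrated with a deliberately simplified sketch. This is not how any particular app works (real products use trained models, image registration and the clinical ABCDE criteria); it is just a toy change score between two aligned grayscale images, with all names and thresholds being our own assumptions.

```python
import numpy as np

def lesion_change_score(before: np.ndarray, after: np.ndarray) -> float:
    """Crude change score between two aligned grayscale images of the
    same skin region, scaled to [0, 1]. Hypothetical helper, not a
    clinical algorithm."""
    if before.shape != after.shape:
        raise ValueError("images must be registered to the same shape")
    # Normalise 8-bit pixel values to [0, 1] so the score is
    # independent of bit depth.
    b = before.astype(float) / 255.0
    a = after.astype(float) / 255.0
    # Mean absolute pixel difference: 0 = identical images.
    return float(np.mean(np.abs(a - b)))

# Toy example: a "mole" that darkens and grows between two check-ups.
before = np.full((64, 64), 200, dtype=np.uint8)   # light skin background
before[24:40, 24:40] = 120                        # original lesion
after = before.copy()
after[20:44, 20:44] = 60                          # larger, darker lesion

print(lesion_change_score(before, before))        # 0.0 — no change
print(lesion_change_score(before, after))         # small positive score
```

A real app would flag the lesion for clinical review once such a score (or a model-based equivalent) exceeded a validated threshold, which is exactly where the verification and validation questions discussed below come in.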
Figure 2: Melanoma Aftermath
There is a significant amount of work in progress at the moment to define standards for artificial intelligence. Much of this work is still a fair way from release as an international standard, let alone as a harmonised standard. The existing medical device standards, such as IEC 62304 (software life cycle) and ISO 14971 (risk management), do not yet address specific topics such as AI. The Association for the Advancement of Medical Instrumentation (AAMI), along with BSI, has produced a good guidance document proposing how AI could be addressed in the medical device sector, and the FDA has published recent guidance on the subject in its AI/machine learning action plan.
There is, though, a real need to bring standards to the market that guide not only on requirements but also on implementation of, for example, neural networks and algorithms. How to verify and validate AI is also a key area, particularly for adaptive algorithms.
There is no doubt that AI can improve the efficiency of processes in the medical device sector, but the applications where it is utilised have to be suited to AI. Where there are concerns and human intervention is regularly required, it won't bring a huge advantage; in areas such as melanoma detection, however, it can certainly bring major benefits.
In the final part of this blog series, we shall look at AI being utilised in the automotive sector, and the challenges associated with reaction times and the lack of human intervention if things go wrong.
By Alastair Walker, Consultant