Re-evaluating Artificial Intelligence (AI) in 2020

Introduction

The medical device industry is constantly changing, and it is impossible to keep up with every incremental change. Every once in a while, it is good to sit back and re-evaluate what used to be common knowledge. The term Artificial Intelligence (AI) is often used to describe the problems that current computer systems cannot yet solve. However, as technology continues to advance, yesterday’s impractical AI becomes today’s real solution. Thanks to recent advances, image analysis and natural language processing (NLP) are two areas moving out of AI research and into real solutions.

Significant challenges, resolved

Developing a new medical device is challenging. There must be a reasonable balance between risk and reward, or cost and market value. If a product cannot show clinical efficacy, its market value could be zero. Even if there is some market value, a product that costs more to develop than could ever be recouped probably won’t be developed to maturity. Until very recently, the technologies used to solve image and NLP problems often fell outside of this reasonable balance. For example, in 2016, Google’s AlphaGo system beat the world’s top Go player, but even just the “training time for AlphaGo cost $35 million.” In 2011, IBM’s Watson required a supercomputer the size of a room to beat previous Jeopardy champions. Even with that massive effort and computing power, Watson could not process images or voice; it depended on “text messaging in order to receive the clue.” Costs like these would clearly have prevented the development of new applications. For image and natural language problems, however, the landscape has changed: low-cost, high-performance solutions may now be available.

Image and NLP problems solved by AI

In the last 10 years, there has been a dramatic increase in the ability of computers to interpret and make decisions based on images and natural language. AI researchers often compare the performance of their technologies using benchmark dataset competitions. The best-known competition for images was ImageNet. When the ImageNet benchmark was released in 2010, state-of-the-art performance was about 75% accuracy. By 2017, the state of the art had improved to around 95% accuracy, essentially as good as humans. Computers had “solved” the ImageNet benchmark, and the competition stopped running because the problem was no longer difficult enough. Something similar happened with a popular benchmark for text: SQuAD 1.1 was released in 2016 with scores far below human levels. By late 2018, computers were beating human-level performance on SQuAD 1.1, and researchers created the more difficult SQuAD 2.0 to keep humans on top. Computers once again beat humans, and SQuAD 2.0 was “solved” in mid-2019.

The research behind the technologies used to solve ImageNet and SQuAD is all in the open, and the cost to experiment with these technologies has been greatly reduced. The technologies are now working their way through academic research and finding a wide range of AI applications, including “identifying brain tumors on magnetic resonance images” and “interpreting retinal imaging.” What new applications are possible now that computers are better and faster than human experts?

What happened? Deep Learning

The dramatic improvement in AI for vision and natural language processing applications was made possible by deep learning, one of the biggest buzzwords of the AI community in the 2010s. Deep learning refers to the use of deep neural networks to solve problems, and its potential was known before 1990. Several key advancements led to the explosion in deep learning’s popularity.

One key was graphics cards, also known as GPUs (graphics processing units). Gamers have long used GPUs to make their computer games look great. Around 2010, AI researchers realized that GPUs could deliver massive speedups in the training of neural networks. Research on deep learning could suddenly run up to 100x faster, at a lower cost than ever before.
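As a quick illustration of how routine GPU acceleration has become, a modern framework such as TensorFlow (used here only as an example) will report any available GPU and use it automatically for training; this is a minimal sketch, not part of the original article:

```python
# Minimal sketch: check whether a deep learning framework can see a GPU.
# TensorFlow is assumed here for illustration; PyTorch offers
# torch.cuda.is_available() for the same purpose.
import tensorflow as tf

gpus = tf.config.list_physical_devices("GPU")
if gpus:
    print(f"Training will be accelerated on {len(gpus)} GPU(s).")
else:
    print("No GPU found; training will fall back to the much slower CPU.")
```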

The next factor was automatic differentiation tools. Without going into too much technical detail: before automatic differentiation, developing a novel deep learning system required a graduate-level understanding of some complex mathematics and advanced knowledge of deep neural networks, and the researcher had to hand-code that knowledge into a representation the computer could work with. In 2015, Keras and TensorFlow were released, dramatically reducing the time needed to experiment with new deep learning systems. With these tools, even an undergraduate with a passing interest in deep learning could experiment with novel solutions, which helped accelerate the advance of state-of-the-art solutions to AI problems. This is how ImageNet and SQuAD were “solved” and how a computer could beat a human at the game of Go.
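To give a feel for how far these tools lowered the barrier, the sketch below defines and trains a small network in Keras. All of the gradients needed for training are computed by automatic differentiation inside model.fit(); nothing is hand-coded. The layer sizes and synthetic data are arbitrary placeholders, not anything from the article:

```python
# A minimal Keras sketch: the framework's automatic differentiation
# computes every gradient during model.fit(); no calculus is hand-coded.
import numpy as np
from tensorflow import keras

# Arbitrary synthetic data standing in for a real dataset.
x_train = np.random.rand(1000, 20).astype("float32")
y_train = (x_train.sum(axis=1) > 10).astype("float32")

model = keras.Sequential([
    keras.Input(shape=(20,)),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(64, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),
])

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=5, batch_size=32, verbose=0)
```

A network like this takes a handful of lines to express, which is the point: the experimentation cost is in the idea and the data, not the mathematics.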

The last technology piece is transfer learning. The idea of transfer learning has been around for a while, but it is increasingly available in software tools. Thanks to the relatively open AI research community, state-of-the-art models for handling images and text are shared with the community. While training a state-of-the-art model from scratch requires extremely large datasets and expensive amounts of compute time, a trained model can be adapted to another problem cheaply and with little data. Instead of millions of example images, a few hundred images are sometimes enough to validate that good-enough accuracy can be obtained. With transfer learning, the cost of a highly accurate deep learning model for a specific problem is drastically reduced.
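As a concrete sketch of the idea (assuming Keras and a hypothetical two-class image dataset), a network pretrained on ImageNet can be reused as a frozen feature extractor, with only a small new classification head trained on the few hundred labeled images a feasibility study might have:

```python
# Transfer learning sketch. Keras is assumed; the two-class task and the
# commented-out dataset directory are hypothetical placeholders.
from tensorflow import keras

# Load a model pretrained on ImageNet, without its original classifier head.
base = keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the expensively pretrained features

# Add a small new head for the target problem (e.g., two classes).
inputs = keras.Input(shape=(224, 224, 3))
x = keras.applications.mobilenet_v2.preprocess_input(inputs)
x = base(x, training=False)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(1, activation="sigmoid")(x)
model = keras.Model(inputs, outputs)

model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# With a modest labeled dataset, only the new head needs to be trained:
# train_ds = keras.utils.image_dataset_from_directory(
#     "data/train", image_size=(224, 224), batch_size=32)
# model.fit(train_ds, epochs=5)
```

Only the final Dense layer is trained here, which is why a few hundred examples can be enough to gauge whether acceptable accuracy is reachable before committing to a larger data collection effort.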

Why now?

The cost of developing an accurate deep learning model keeps dropping, which makes now a great time to re-evaluate which problems can be solved. Online courses that train engineers in deep learning are spreading the knowledge around, and the cost of an experiment to validate the feasibility of a solution has fallen. The initial investment needed to validate that a solution will be viable in the market is no longer millions of dollars of computer equipment, a dedicated team of AI researchers, and subject matter experts labeling tens of thousands of examples. Deep learning is making it into the market: Apple has released smartwatch detection of heartbeat irregularities, and skin cancer can be classified by a deep neural network with the same accuracy as a dermatologist. Will your product be next?

If you want to make sure your medical devices are cyber-secure, check out CypherMed Cloud. CypherMed Cloud works hand-in-hand with analysis software and cloud-based AI/machine learning algorithms. With security at its core, it provides cryptographically strong authentication of users, enforces the unique privilege levels and controls needed between users, and prevents sensitive information from being read by unauthorized parties, whether in storage or in transmission.

Need help on this topic?
Contact Us
Colin Blower

Colin is a Principal Software Engineer for Promenade Software. He is passionate about building software tools to solve problems. With his experience ranging from hardware controllers to Cloud AI, he is ready to build effective tools for a wide range of projects.

About Promenade Software

Promenade Software, Inc. specializes in software development for medical devices and other safety-critical applications.
Promenade is ISO 13485 certified, and CypherMed Cloud is SOC 2 Type II certified.