Google’s AI model can help improve neural networks in medical research


NEW DELHI: A team of researchers at Google has developed a new Artificial Intelligence (AI) model that, they claim, can have a big impact on medical research and clinical applications. The team, led by Shekoofeh Azizi, an AI resident at Google Research, has built a self-supervised deep neural network approach that can improve the accuracy and efficiency of clinical diagnosis algorithms.


The key problem this research tried to solve was making deep neural networks more robust and efficient in crucial medical applications. In many areas of medical research, such as cancer, practitioners do not always have ample data sets that are clearly labelled in terms of what they contain. This has typically made it difficult for medical AI researchers to create efficient training regimes for deep neural networks to identify medical data with high accuracy.

Azizi and her team have created a 'self-supervised learning' model called Multi-Instance Contrastive Learning (MICLe). The key postulate of self-supervised machine learning models is that they are trained on unlabelled data, thereby enabling the application of AI in niche areas where the collection of clearly defined data sets may be difficult, such as cancer research itself.

In her paper, Azizi says, “We conducted experiments on two distinct tasks: dermatology skin condition classification from digital camera images, and multi-label chest X-ray classification, to demonstrate that self-supervised learning on ImageNet, followed by additional self-supervised learning on unlabelled domain-specific medical images, significantly improved the accuracy of medical image classifiers. We introduce the novel MICLe method that uses multiple images of the underlying pathology per patient case, when available, to construct more informative positive pairs for self-supervised learning.”

MICLe builds on Google's existing research into self-supervised learning for convolutional neural networks. At the 2020 International Conference on Machine Learning (ICML), Google researchers presented the Simple Framework for Contrastive Learning, or SimCLR, on which MICLe is based. Simply put, SimCLR trains on multiple augmented variations of the same image and learns to map them to similar representations, which helps make the resulting algorithm more robust and accurate at identifying what it sees.
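To illustrate the contrastive idea behind SimCLR, here is a minimal sketch in PyTorch. This is not Google's implementation; it only shows the general recipe of embedding two augmented views of each image and training the network so that views of the same image end up close together while views of other images are pushed apart. The encoder, image sizes and batch are placeholders.

# Minimal sketch of SimCLR-style contrastive learning (illustrative, not Google's code).
# Two augmented views of the same image form a positive pair; the contrastive loss
# pulls their embeddings together and pushes apart embeddings of other images.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.1):
    # z1, z2: [batch, dim] embeddings of two augmented views of the same images.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # [2B, dim]
    sim = z @ z.t() / temperature             # pairwise cosine similarities
    n = z.size(0)
    sim.fill_diagonal_(-1e9)                  # ignore self-similarity
    targets = (torch.arange(n) + n // 2) % n  # the positive is the other view of the same image
    return F.cross_entropy(sim, targets)

# Toy usage with a placeholder encoder and random tensors standing in for
# two augmented versions of the same batch of images.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
view1 = torch.rand(8, 3, 32, 32)
view2 = torch.rand(8, 3, 32, 32)
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()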

With MICLe, the researchers used multiple images of the same patient that did not have clearly labelled data points. The first stage of training used an available repository of images, ImageNet in this case, to give the network an initial round of training. Azizi said her team then applied a second stage of training on unlabelled medical images, this time pairing different images of the same patient. This enabled the neural network to learn representations that hold across multiple images of the same underlying condition, something that is critical in medical research.
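The pairing step described above can be sketched as follows. The dataset layout and helper names here are hypothetical, intended only to show how two different images of the same patient can stand in for the two augmented views used in the previous example.

# Hypothetical sketch of MICLe-style positive pairs: when a patient has several
# unlabelled images, two different images of the same patient form the pair.
import random
import torch

def micle_batch(patient_to_images, batch_size=8):
    # patient_to_images: dict mapping a patient id to a list of image tensors.
    view1, view2 = [], []
    for pid in random.sample(list(patient_to_images), batch_size):
        images = patient_to_images[pid]
        if len(images) >= 2:
            a, b = random.sample(images, 2)  # two distinct images of the same case
        else:
            a = b = images[0]                # fall back to a single available image
        view1.append(a)
        view2.append(b)
    return torch.stack(view1), torch.stack(view2)

# The two views are then passed through the encoder and the same contrastive
# loss as in the SimCLR sketch, e.g.:
#   v1, v2 = micle_batch(patient_to_images)
#   loss = nt_xent_loss(encoder(v1), encoder(v2))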

In clinical settings, images regularly come from differing viewpoints and conditions, as medical imagery cannot be orchestrated or choreographed. After the two stages of training described above, the researchers applied a very limited data set of labelled images to fine-tune the algorithm for its target diagnostic tasks. The researchers said that alongside improving accuracy, such algorithms can also significantly reduce the cost and time spent in developing AI models for medical research.
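That final fine-tuning step can be pictured with a short sketch like the one below, assuming an encoder that has already been pre-trained with the self-supervised stages described above; the class count and data here are placeholders for the small labelled medical data set.

# Sketch of supervised fine-tuning on a small labelled set (illustrative only).
import torch

# Placeholder encoder; in practice this would be the network pre-trained with
# the self-supervised stages described above.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 128))
num_classes = 5  # e.g. a handful of skin conditions (placeholder value)
classifier = torch.nn.Sequential(encoder, torch.nn.Linear(128, num_classes))
optimizer = torch.optim.Adam(classifier.parameters(), lr=1e-4)
loss_fn = torch.nn.CrossEntropyLoss()

# A tiny random batch stands in for the limited annotated medical images.
images = torch.rand(4, 3, 32, 32)
labels = torch.randint(0, num_classes, (4,))

optimizer.zero_grad()
loss = loss_fn(classifier(images), labels)
loss.backward()
optimizer.step()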

“We achieved an improvement of 6.7% in top-1 accuracy and an improvement of 1.1% in mean area under the curve (AUC) on dermatology and chest X-ray classification respectively, outperforming strong supervised baselines pre-trained on ImageNet. In addition, we show that big self-supervised models are robust to distribution shifts, and can learn efficiently with a small number of labelled medical images,” Azizi summed up in her research.


