Extract relevant predictive features from WSIs | Tribun Health

Written by Tribun Health




[NOVEMBER 30TH, 2022]

Authors: Mélanie Lubrano, Tristan Lazard, Guillaume Balezo, Yaëlle Bellahsen-Harrar,
Cécile Badoual, Sylvain Berlemont, and Thomas Walter - Tribun Health & Mines Paris PSL

In computational pathology, predictive models from Whole Slide Images (WSIs) often rely on Multiple Instance Learning (MIL), where each WSI is represented as a bag of tiles encoded by a Convolutional Neural Network (CNN). Slide-level predictions are then achieved by building models on the aggregation of these tile encodings. The tile encoding strategy thus plays a key role in such models. In the latest approaches, the encodings are obtained with unrelated data sources, full supervision, or self-supervision.
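To make the bag-of-tiles idea concrete, here is a minimal NumPy sketch of attention-based MIL pooling: each tile embedding receives a score, the scores are softmaxed over the bag, and the weighted sum yields a single slide-level representation. The bag size (50 tiles), embedding dimension (128), and the random attention weights are purely illustrative stand-ins for a trained encoder and attention module.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical bag: 50 tiles from one WSI, each encoded as a 128-d vector
# (in practice these come from the CNN tile encoder).
tiles = rng.standard_normal((50, 128))

# Attention-based pooling: score each tile, softmax over the bag,
# then take the weighted sum to get one slide-level embedding.
w = rng.standard_normal(128)        # illustrative attention weights
scores = tiles @ w                  # one scalar score per tile
attn = np.exp(scores - scores.max())
attn /= attn.sum()                  # softmax: weights sum to 1
slide_embedding = attn @ tiles      # (128,) slide-level representation

print(slide_embedding.shape)        # (128,)
```

A slide-level classifier is then trained on `slide_embedding`, so only the slide label is needed; the attention weights additionally indicate which tiles drove the prediction.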

Additionally, annotated histopathological datasets are difficult to obtain: they require experts' time and knowledge, and the heterogeneity of the data sometimes makes accurate local annotations difficult to produce. Even when a small proportion of labelled annotations exists, it is not enough to support fully supervised techniques. Yet, even in small quantities, expert annotations carry meaningful information that can be used to enforce biological context in deep learning models and to ensure that networks learn appropriate patterns.

In this work, we propose a framework to reconcile self-supervision and full supervision, showing that a combination of both provides efficient encodings, both in terms of performance and in terms of biological interpretability.
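One simple way to combine the two supervision signals is a weighted sum of a self-supervised objective and a supervised one. The NumPy sketch below pairs a cosine-similarity term on two augmented views of a tile (in the spirit of SimSiam/BYOL-style objectives) with a cross-entropy term on the few annotated tiles; the weight `lam`, the embedding size, and the three-grade label space are hypothetical choices, not the paper's exact formulation.

```python
import numpy as np

def cross_entropy(logits, label):
    """Supervised term on the few annotated tiles."""
    z = logits - logits.max()                  # numerically stable log-softmax
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def ssl_loss(emb_a, emb_b):
    """Self-supervised term: pull the embeddings of two augmented
    views of the same tile together (simplified cosine objective)."""
    a = emb_a / np.linalg.norm(emb_a)
    b = emb_b / np.linalg.norm(emb_b)
    return 1.0 - a @ b

rng = np.random.default_rng(0)
view_a, view_b = rng.standard_normal(64), rng.standard_normal(64)
logits, label = rng.standard_normal(3), 1      # e.g. 3 severity grades

lam = 0.5  # hypothetical weight balancing the two objectives
total = ssl_loss(view_a, view_b) + lam * cross_entropy(logits, label)
print(round(float(total), 3))
```

In a real training loop both terms are computed on batches and backpropagated jointly, so the encoder benefits from abundant unlabelled tiles while the scarce annotations steer it toward biologically meaningful features.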

We applied our method to the grading of cervical biopsies and shed light upon the internal representations that trigger classification results, providing a method to reveal relevant phenotypic patterns for the classification.

Qualitative analysis of the feature visualizations demonstrated their relevance to the histopathological task and showed that class-discriminative patterns were indeed identified by the model.
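A common way to inspect such features is to retrieve, for a given feature dimension, the tile whose encoding activates it most strongly. The sketch below illustrates this with hypothetical encodings (200 tiles, 128 features); the feature index is arbitrary, and in practice the returned index is used to look up and display the corresponding tile image.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical encodings: 200 tiles x 128 features for one slide.
encodings = rng.standard_normal((200, 128))

feature_idx = 42  # an illustrative feature found predictive for a class

# The tile with the highest activation on that feature serves as a
# visual proxy for the phenotypic pattern the feature captures.
best_tile = int(np.argmax(encodings[:, feature_idx]))
print(best_tile)
```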

Mixed Supervision, in addition to extracting more relevant features from images, paves the way towards more interpretable deep learning models and, hopefully, wider acceptance by specialists in a medical context.

Figure: Most predictive features for each class. The top row displays the model's most predictive feature for each class; the bottom row displays the tile expressing that feature most strongly. Extracted tiles correlate with class-specific biomarkers.


 

 Download the digital scientific poster by filling out the form below:

 

For more information, contact us at:
mlubrano@tribun.health
+33 6 50 21 68 81