The first aim is to extend the modelling of spatial locations to incorporate additional clinical and demographic information (e.g. disability/neuropsych scores, gender, etc.). Such a model can then be used in the discrimination of different disease subtypes (e.g., vascular dementia vs normal ageing vs Multiple Sclerosis) which would be very valuable from a clinical perspective. In addition, the model can also be used for the creation of biomarkers to measure disease progression, which has applications in clinical trials or evaluating individual patient prognoses.
The second aim is to pursue more fundamental methodological work in the segmentation of lesions, WMH, stroke infarcts and more diverse pathologies in the brain using a combination of supervised and unsupervised machine-learning methods.
During the initial stages of the project, we modelled the distribution of white matter lesions within a population. Our probabilistic model takes as input binary lesion segmentation maps from various age groups. We used a spline approximation to enforce spatial coherence between neighbouring voxels and adopted a Bayesian framework to handle the uncertainty associated with the input binary maps obtained from lesion segmentation. We compared the algorithm's output with simulated images to evaluate the performance of our method.
On the simulated data, our algorithm achieved a root mean square error of 1%. We also analysed the effect of parameters such as the knot spacing of the splines on the resulting probability map. Finally, when applying the method to a clinical dataset, we observed that the overall probability of lesions is higher in the older age groups, which is in line with the current literature.
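The estimation and evaluation steps above can be sketched in simplified form. Here a uniform smoothing kernel stands in for the spline approximation (a hypothetical simplification; the actual model uses splines within a Bayesian framework), and RMSE is computed against a known simulated probability map:

```python
import numpy as np

def lesion_probability_map(binary_maps, kernel_size=3):
    """Estimate a smoothed lesion probability map from binary segmentations.

    binary_maps: array of shape (n_subjects, H, W) with 0/1 entries.
    A uniform smoothing kernel is used here as an illustrative stand-in
    for the spline approximation that enforces spatial coherence.
    """
    freq = binary_maps.mean(axis=0)  # voxelwise lesion frequency
    pad = kernel_size // 2
    padded = np.pad(freq, pad, mode="edge")
    smooth = np.zeros_like(freq)
    H, W = freq.shape
    for i in range(H):
        for j in range(W):
            # average over the local neighbourhood of each voxel
            smooth[i, j] = padded[i:i + kernel_size, j:j + kernel_size].mean()
    return smooth

def rmse(estimate, truth):
    """Root mean square error between the estimate and simulated truth."""
    return float(np.sqrt(np.mean((estimate - truth) ** 2)))
```

With enough subjects drawn from a known probability map, the smoothed frequency map converges to that map and the RMSE shrinks accordingly.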
Antenatal ultrasound screening is essential for monitoring fetal cardiac health. Automatic analysis of the fetal heart in screening images could aid in the identification and treatment of Congenital Heart Diseases (CHD). Determining the state of the fetal heart, including its view and orientation, is the first step towards automatic fetal cardiac analysis. This is highly challenging since the fetal heart is small with relatively indistinct anatomical structures, which is further compounded by the presence of artefacts in ultrasound images.
In this work, we identified the presence of the heart and determined its view and orientation from individual frames of clinical ultrasound videos using convolutional neural networks (CNNs). The CNN model achieved a maximum accuracy of approximately 90% in detecting one of the views, with a mean orientation error of 54.9 degrees with respect to manual annotation.
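Given per-frame predicted and annotated orientations, the reported mean orientation error corresponds to the mean shortest angular distance between the two. A minimal sketch (the function name and the 360-degree wrapping convention are our assumptions, not details from the work itself):

```python
import numpy as np

def mean_orientation_error(pred_deg, true_deg):
    """Mean absolute angular error in degrees between predicted and
    annotated orientations, taking the shorter arc around the circle."""
    diff = np.abs(np.asarray(pred_deg, dtype=float)
                  - np.asarray(true_deg, dtype=float)) % 360.0
    # an error of e.g. 340 degrees is really a 20-degree error
    return float(np.mean(np.minimum(diff, 360.0 - diff)))
```

Wrapping matters near the 0/360 boundary: a prediction of 350 degrees against an annotation of 10 degrees is a 20-degree error, not 340.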
Since AMD occurs in the macular region, which is responsible for central vision, the disease leads to loss of central vision. If AMD is identified and treated in its early stages, vision loss can be prevented and central vision reliably restored. Grading of AMD is therefore essential for determining the severity stage and for timely treatment. Given a colour fundus image, clinicians observe the size, location and density of drusen to grade early stages, and pathologies such as geographic atrophy (GA) and choroidal neovascularization (CNV) to grade late stages.
In this work we proposed a novel super-candidate based approach, combined with robust preprocessing and adaptive thresholding, for the detection of drusen and GA. The approach yields accurate segmentation with a mean lesion-level overlap of 0.75, even in cases with non-uniform illumination, poor contrast and confounding anatomical structures. Our method achieved a sensitivity of 80% at a specificity above 90%, and a sensitivity of 95% at a specificity of 70%, on representative pathological databases (STARE and ARIA) for both detection and discrimination of AMD pathologies.
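The role of adaptive thresholding under non-uniform illumination can be illustrated with a simple mean-based local threshold: each pixel is compared against its local neighbourhood mean rather than a single global cutoff, so bright lesions are detected on both the dark and bright sides of an illumination gradient. This is an illustrative sketch, not the tuned method; the window size and offset below are assumptions:

```python
import numpy as np

def adaptive_threshold(image, window=9, offset=0.1):
    """Flag pixels brighter than their local neighbourhood mean by `offset`.

    image: 2D float array (e.g. a fundus image channel scaled to [0, 1]).
    A pixel is flagged only if it stands out locally, which makes the
    detection robust to slowly varying background illumination.
    """
    pad = window // 2
    padded = np.pad(image, pad, mode="reflect")
    H, W = image.shape
    mask = np.zeros((H, W), dtype=bool)
    for i in range(H):
        for j in range(W):
            local_mean = padded[i:i + window, j:j + window].mean()
            mask[i, j] = image[i, j] > local_mean + offset
    return mask
```

A global threshold on the same image would either miss lesions on the dark side of the gradient or flood the bright side with false positives.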
The optic disc (OD) and macula are vital anatomical features of the retina, and their localization helps in the identification and screening of vision-impairing diseases such as Diabetic Retinopathy (DR), Glaucoma and Age-Related Macular Degeneration (AMD), even at an early stage. Apart from geometric modelling, structural and anatomical information, such as the absence of vessels in the macula and the distance between the OD and macula being roughly 2.5 times the OD diameter from the OD centre, was used to build an integrated approach that is more dependable across datasets. Bringing together all of this information and refining it with mathematical optimization techniques made the method reliable even in the presence of obscuring pathologies. Instead of treating the anatomical structures as independent objects in the image, the structural interrelationship between them was exploited, ensuring accurate detection.
The results were evaluated using a rigorous evaluation metric on various publicly available datasets and on datasets obtained from local hospitals, and the performance was on par with or better than the state of the art. The algorithm was also fast, robust to variations in acquisition settings, fundus cameras, field of view, magnification, illumination and ethnicity, and was shown to work even on pathological images.
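The anatomical prior that the macula lies roughly 2.5 OD diameters from the OD centre can be used to rank candidate macula locations. A minimal sketch of that single cue (function name and scoring rule are illustrative assumptions; the full method integrates several structural cues and optimization):

```python
import numpy as np

def rank_macula_candidates(od_centre, od_diameter, candidates):
    """Rank candidate macula locations by how closely their distance
    from the OD centre matches the ~2.5 OD-diameter anatomical prior.

    Returns the candidates sorted best-first (smallest deviation from
    the expected distance).
    """
    expected = 2.5 * od_diameter
    od = np.asarray(od_centre, dtype=float)

    def deviation(c):
        # absolute mismatch between observed and expected OD-macula distance
        return abs(np.linalg.norm(np.asarray(c, dtype=float) - od) - expected)

    return sorted(candidates, key=deviation)
```

Treating the OD and macula jointly in this way lets a confidently located OD constrain the macula search, which is what makes the joint approach robust when one structure is obscured by pathology.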
Diabetic macular edema (DME) is one of the vision-impairing manifestations of Non-Proliferative Diabetic Retinopathy (NPDR). Early detection and treatment of DME can prevent permanent vision loss in people suffering from DR. However, clinical detection through biomicroscopy is time-consuming. In this paper, a computerized grading method is proposed to determine DME severity based on the spatial distribution of exudative lesions around the macula. The method uses a multi-scale, histogram-based thresholding technique for exudate detection, which detects hard exudates (HE) of various sizes and intensities. The region around the macula is divided into zones, and DME severity is graded based on the presence of exudative lesions in each zone.
The proposed method was evaluated on public datasets and on a heterogeneous dataset collected from local hospitals, representing diversity in pathology and imaging conditions. The method achieved a sensitivity of 89.54% at 9.1 false positives per image (FPPI) for exudate detection, and an accuracy of 85.44% for DME grading.
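The zonal grading idea can be sketched as follows: severity depends on how close the detected exudates come to the macula centre. The zone radii (1 and 2 OD diameters) and the three-level grade used here are an illustrative simplification, not the exact scheme of the method:

```python
import math

def dme_grade(exudate_points, macula_centre, od_diameter):
    """Grade DME severity from exudate locations relative to the macula.

    exudate_points: iterable of (x, y) centroids of detected exudates.
    Concentric zones around the macula centre determine the grade;
    the radii and grade levels below are illustrative assumptions.
    """
    cx, cy = macula_centre
    nearest = min(
        (math.hypot(x - cx, y - cy) for x, y in exudate_points),
        default=math.inf,  # no exudates detected at all
    )
    if nearest <= 1.0 * od_diameter:
        return 2  # exudates in the innermost zone: severe
    if nearest <= 2.0 * od_diameter:
        return 1  # exudates in the outer zone: moderate
    return 0      # no exudates near the macula: no/mild DME
```

The same exudate detections thus map to different grades depending purely on their spatial relationship to the macula, which is the core of the zonal scheme.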