
Polyp Detection, Localization and Segmentation in Colonoscopy Images

During a colonoscopy exploration, clinicians inspect the intestinal wall in order to detect polyps. Unfortunately, despite clinicians' skills, some polyps are still missed, especially small ones and those hidden behind folds. A missed polyp may allow the lesion to progress and, given the intervals between explorations, by the time it is found it may be too late for the patient. There is thus room for computational support systems that aid clinicians in this task. Polyp detection is the task of generating an annotation (labelling) of an input colonoscopy image that indicates roughly where a polyp is located, if any. A more advanced form of this is polyp segmentation, where the labelling closely follows the border of the polyp. This research is done in collaboration between ZiuZ Visual Intelligence, the University of Groningen, and the Universitat Autònoma de Barcelona.
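To make the two label types concrete, the toy sketch below contrasts a detection-style bounding box with a segmentation-style pixel mask for a single frame. The frame size, box coordinates, and the elliptical "polyp" are invented for illustration only and are not part of the project.

import numpy as np

# Toy illustration (not the project's actual pipeline): the two label types
# a model would produce for a hypothetical 256x256 colonoscopy frame.
h, w = 256, 256

# Detection: a coarse localization, e.g. an axis-aligned bounding box
# (x_min, y_min, x_max, y_max) around a hypothetical polyp.
detection_box = (90, 110, 150, 170)

# Segmentation: a binary mask whose foreground closely follows the polyp border.
# Here we fake it with a filled ellipse inscribed in the same box.
yy, xx = np.mgrid[0:h, 0:w]
cx, cy = (detection_box[0] + detection_box[2]) / 2, (detection_box[1] + detection_box[3]) / 2
rx, ry = (detection_box[2] - detection_box[0]) / 2, (detection_box[3] - detection_box[1]) / 2
segmentation_mask = ((xx - cx) / rx) ** 2 + ((yy - cy) / ry) ** 2 <= 1.0

print("box area:", (detection_box[2] - detection_box[0]) * (detection_box[3] - detection_box[1]))
print("mask area:", int(segmentation_mask.sum()))  # smaller: it hugs the polyp border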

Machine Vision

This project is being developed by the member companies of the Innovation Cluster Drachten (ICD) and aims to take the step towards graphical data acquisition and processing. Details of this project are confidential.

Smart Machines

This project is being developed by the member companies of the Innovation Cluster Drachten (ICD) and aims to take the step towards predictive maintenance via remote sensing and big data. Details of this project are confidential.

Endoscopic Images

Pill endoscopy cameras generate hours-long videos that need to be manually inspected by medical specialists. Technical limitations of pill cameras often create large and uninformative color variations between neighboring frames, which make exploration more difficult. To increase the exploration efficiency, we propose an automatic method for joint intensity and hue (tone) stabilization that reduces such artifacts. Our method works in real time, has no free parameters, and is simple to implement. We thoroughly tested our method on several real-world videos and quantitatively and qualitatively assessed its results and optimal parameter values by both image quality metrics and user studies. Both types of comparisons strongly support the effectiveness, ease-of-use, and added value claims for our new method.
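As a rough illustration of what frame-to-frame tone stabilization involves, the sketch below shifts each frame's mean value (intensity) and mean hue toward an exponential moving average of the preceding frames. The moving average and its smoothing factor alpha are assumptions of this sketch; the published method itself is parameter-free and differs in detail.

import numpy as np
import cv2  # OpenCV, used here only for color-space conversion

def stabilize(frames, alpha=0.9):
    """Crude per-frame tone stabilization sketch: `frames` is a list of uint8 BGR
    images (e.g. read with cv2.VideoCapture). Each frame's mean hue and mean value
    are shifted toward an exponential moving average of the preceding frames.
    `alpha` is an assumed smoothing factor, not a parameter of the published method."""
    ref_h, ref_v = None, None
    out = []
    for bgr in frames:
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
        mean_h, mean_v = hsv[..., 0].mean(), hsv[..., 2].mean()
        if ref_h is None:
            ref_h, ref_v = mean_h, mean_v          # first frame defines the reference tone
        else:
            ref_h = alpha * ref_h + (1 - alpha) * mean_h
            ref_v = alpha * ref_v + (1 - alpha) * mean_v
        hsv[..., 0] = (hsv[..., 0] + (ref_h - mean_h)) % 180   # OpenCV hue range is [0, 180)
        hsv[..., 2] = np.clip(hsv[..., 2] + (ref_v - mean_v), 0, 255)
        out.append(cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR))
    return out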

3D Gap Filling

Volumetric shapes can be affected by multiple types of defects, including cracks and holes. Removing such defects is delicate, as it can also affect details of the shape, which should be preserved. We present a method for the robust detection and removal of such defects based on the shape’s surface and curve skeletons. For this, we first classify gaps, or indentations, present in the input shape by their position with respect to the shape’s curve skeleton, into details (which should be preserved) and defects (which should be removed). Next, we remove defects, and preserve details, by using a local reconstruction process that uses the reconstruction power of the shape’s surface skeleton. We demonstrate our method by comparing it against classical morphological solutions on a wide collection of real-world shapes.
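As a loose 2D analogue of the classification step, the sketch below treats the pixels added by a morphological closing as candidate gaps and classifies each connected gap component by how close it reaches to the shape's skeleton. Both the 2D setting and the depth-based classification rule are assumptions made for illustration; the actual method operates on 3D surface and curve skeletons and uses a different criterion and reconstruction step.

import numpy as np
from scipy import ndimage as ndi
from skimage.morphology import skeletonize, binary_closing, disk

def fill_defect_gaps(shape, depth_threshold=5):
    """`shape` is a 2D boolean array. Candidate gaps are the pixels added by a
    morphological closing; a gap component that cuts deep toward the skeleton is
    treated as a defect and filled, while shallow indentations are kept as details.
    The closing radius and `depth_threshold` are assumed values for this sketch."""
    gaps = binary_closing(shape, disk(10)) & ~shape            # candidate indentations/cracks
    skeleton = skeletonize(shape)
    dist_to_skel = ndi.distance_transform_edt(~skeleton)       # distance of every pixel to the skeleton
    labels, n = ndi.label(gaps)
    defects = np.zeros_like(shape, dtype=bool)
    for i in range(1, n + 1):
        component = labels == i
        if dist_to_skel[component].min() < depth_threshold:    # reaches deep into the shape
            defects |= component
    return shape | defects                                     # fill only the defect gaps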

Dermoscopy Images

We propose a method for digital hair removal from dermoscopic images, based on a threshold-set model. For every threshold, we adapt a recent gap-detection algorithm to find hairs, and merge results in a single mask image. We find hairs in this mask by combining morphological filters and medial descriptors. We derive robust parameter values for our method from over 300 skin images. We detail a GPU implementation of our method and show how it compares favorably with five existing hair removal methods, in terms of removing both long and stubble hair of various colors, contrasts, and curvature. We also discuss qualitative and quantitative validations of the produced hair-free images, and show how our method effectively addresses the task of automatic skin-tumor segmentation for hair-occluded images.
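The sketch below illustrates the threshold-set idea in a much simplified form: detect thin, elongated structures at several gray levels and merge the results into one hair mask, which can then be handed to an off-the-shelf inpainting routine. The per-level detector (a plain morphological closing difference) and the number of levels are stand-ins for the gap-detection algorithm and the parameter values used in the actual method.

import numpy as np
import cv2

def hair_mask(gray, num_levels=16):
    """Rough threshold-set sketch: `gray` is a uint8 grayscale dermoscopic image.
    Thin dark hairs appear as narrow gaps in each threshold set; a closing followed
    by a difference highlights them, and the per-level results are merged."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    merged = np.zeros_like(gray, dtype=bool)
    for t in np.linspace(gray.min(), gray.max(), num_levels):
        level = (gray >= t).astype(np.uint8)                   # one threshold set
        closed = cv2.morphologyEx(level, cv2.MORPH_CLOSE, kernel)
        merged |= (closed - level).astype(bool)                # narrow gaps closed at this level
    return merged.astype(np.uint8) * 255

# The merged mask can then be removed from the color image by inpainting, e.g.:
# clean = cv2.inpaint(image, hair_mask(gray), 5, cv2.INPAINT_TELEA)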

Image and Volume Skeletonization

Computing skeletons of 2D shapes, and medial surface and curve skeletons of 3D shapes, is a challenging task. In particular, there is no unified framework that detects all types of skeletons using a single model and also produces a multiscale representation that allows all skeleton types to be progressively simplified, or regularized. In this paper, we present such a framework. We model skeleton detection and regularization by a conservative mass transport process from a shape's boundary to its surface skeleton, next to its curve skeleton, and finally to the shape center. The resulting density field can be thresholded to obtain a multiscale representation of progressively simplified surface, or curve, skeletons. We detail a numerical implementation of our framework which is demonstrably stable and has high computational efficiency. We demonstrate our framework on several complex 2D and 3D shapes.
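The thresholding step can be illustrated with a small 2D sketch: compute a skeleton together with a per-point importance value and keep only the points whose importance exceeds a threshold tau, so that larger tau yields simpler skeletons. Using the medial-axis distance value as the importance is a crude stand-in for the mass-transport density described above, adopted here only to show the workflow.

import numpy as np
from skimage.morphology import medial_axis

def multiscale_skeleton(shape, tau):
    """`shape` is a 2D boolean array. Returns a simplified skeleton obtained by
    thresholding a per-point importance field at `tau`. The medial-axis distance
    used as importance here is only a proxy for the mass-transport density."""
    skeleton, distance = medial_axis(shape, return_distance=True)
    importance = distance * skeleton        # importance is defined on skeleton points only
    return importance > tau                 # larger tau -> progressively simpler skeleton

# Example: a progressively simplified family of skeletons of the same shape
# for tau in (0, 2, 5, 10):
#     simplified = multiscale_skeleton(binary_image, tau)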

Inpainting and Face Images

Facial images are often used in applications that need to recognize or identify persons. Many existing facial recognition tools have limitations with respect to facial image quality attributes such as resolution, face position, and artifacts present in the image. In this paper we describe a new low-cost framework for preprocessing low-quality facial images in order to render them suitable for automatic recognition. For this, we first detect artifacts based on the statistical difference between the target image and a set of pre-processed images in the database. Next, we eliminate artifacts by an inpainting method which combines information from the target image and similar images in our database. Our method has low computational cost and is simple to implement, which makes it attractive for usage in low-budget environments. We illustrate our method on several images taken from public surveillance databases, and compare our results with existing inpainting techniques.
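A simplified sketch of the two stages might look as follows: flag pixels that deviate statistically from a database of aligned, preprocessed face images, then fill them in. The z-score test, its threshold value, and the use of plain OpenCV inpainting on the target image alone (without borrowing information from similar database images) are assumptions of this sketch rather than the paper's exact procedure.

import numpy as np
import cv2

def detect_and_inpaint(target, db_images, z_thresh=3.0):
    """`target` is a uint8 grayscale face image and `db_images` a list of aligned
    uint8 grayscale faces of the same size. Pixels whose deviation from the database
    mean exceeds `z_thresh` standard deviations are flagged as artifacts and filled
    by inpainting. Threshold and inpainting choice are assumptions of this sketch."""
    stack = np.stack([img.astype(np.float32) for img in db_images])   # (N, H, W)
    mean, std = stack.mean(axis=0), stack.std(axis=0) + 1e-6
    zscore = np.abs(target.astype(np.float32) - mean) / std
    artifact_mask = (zscore > z_thresh).astype(np.uint8) * 255        # 8-bit single-channel mask
    return cv2.inpaint(target, artifact_mask, 3, cv2.INPAINT_TELEA)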