Murat Kunt - Selected Publications


1. “Second-generation image-coding techniques,” by M. Kunt, A. Ikonomopoulos, M. Kocher, Proceedings of the IEEE, 1985 (cited 1133 times, Google Scholar).

The digital representation of an image requires a very large number of bits. The goal of image coding is to reduce this number, as much as possible, and reconstruct a faithful duplicate of the original picture. Early efforts in image coding, solely guided by information theory, led to a plethora of methods. The compression ratio reached a saturation level around 10:1. Later progress in the study of the brain mechanism of vision has opened new vistas in picture coding. Directional sensitivity of the neurons in the visual pathway combined with the separate processing of contours and textures has led to a new class of coding methods capable of achieving compression ratios as high as 70:1.

2. “Recent results in high-compression image coding” (Invited paper) by M. Kunt, M. Benard, R. Leonardi, IEEE Transactions on Circuits and Systems, 1987 (cited 327 times, Google Scholar).

This invited paper reviews recent results on object-based coding methods that improve upon the second-generation methods described in the previous paper.

3. “Spatio-temporal segmentation based on region mapping” by F. Moscheni, S. Bhattacharjee, M. Kunt, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1998 (cited 306 times, Google Scholar).

This paper proposes a technique for spatio-temporal segmentation to identify the objects present in the scene represented in a video sequence. The technique processes two consecutive frames at a time. A region-merging approach based on a weighted, directed graph is first used to identify the objects in the scene. Two complementary graph-based clustering rules are proposed, namely, the strong rule and the weak rule. These rules take advantage of the natural structures present in the graph. Experimental results on different types of scenes demonstrate the ability of the proposed technique to automatically partition the scene into its constituent objects.
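The flavor of graph-based region merging can be conveyed with a toy sketch: regions are graph nodes, edges carry a dissimilarity weight, and similar neighbors are merged greedily. This is only an illustration under assumed names and a simple threshold rule, not the paper's strong/weak clustering rules.

```python
# Toy region merging over a weighted region-adjacency graph (illustrative
# only; the paper's strong and weak rules are more elaborate).

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x
    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def merge_regions(means, edges, threshold):
    """means: mean intensity per region; edges: (i, j) adjacency pairs."""
    uf = UnionFind(len(means))
    # Visit edges in order of increasing dissimilarity (greedy merging).
    for i, j in sorted(edges, key=lambda e: abs(means[e[0]] - means[e[1]])):
        if abs(means[i] - means[j]) < threshold:
            uf.union(i, j)
    return [uf.find(i) for i in range(len(means))]

labels = merge_regions([10.0, 12.0, 80.0, 82.0], [(0, 1), (1, 2), (2, 3)], 5.0)
# Regions 0-1 merge and 2-3 merge; the 1-2 edge (|12 - 80| = 68) stays split.
```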

4. “Video segmentation based on multiple features for interactive multimedia applications” by R. Castagno, T. Ebrahimi, M. Kunt, IEEE Transactions on Circuits and Systems for Video Technology, 1998 (cited 247 times, Google Scholar).

A key feature of the proposed system is the distinction between two levels of segmentation, namely, regions and object segmentation. Homogeneous regions of the images are extracted automatically by the computer. Semantically meaningful objects are obtained through user interaction by grouping of regions according to the specific application. The extraction of regions is based on the multidimensional analysis of several image features by a spatially constrained algorithm. The local level of reliability of the different features has been taken into account in order to adaptively weight the contribution of each feature to the segmentation process. Results on the extraction of regions as well as on the tracking of spatiotemporal objects are included in the paper.
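The idea of adaptively weighting each feature's contribution by its local reliability can be sketched as follows. The function names and the linear weighting rule are assumptions for illustration, not the paper's actual formulation.

```python
# Illustrative sketch: combine several feature distances into one
# dissimilarity, weighting each feature by a local reliability score
# in [0, 1] (names and weighting rule are assumed, not the paper's).

def combined_distance(pixel_a, pixel_b, reliabilities):
    """pixel_*: dict feature -> value; reliabilities: dict feature -> [0, 1]."""
    total_weight = sum(reliabilities.values())
    return sum(
        reliabilities[f] / total_weight * abs(pixel_a[f] - pixel_b[f])
        for f in pixel_a
    )

a = {"color": 0.2, "texture": 0.9, "motion": 0.1}
b = {"color": 0.3, "texture": 0.1, "motion": 0.1}
# Texture is unreliable here, so its large gap contributes little.
d = combined_distance(a, b, {"color": 0.9, "texture": 0.1, "motion": 0.9})
```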

5. “Traitement numérique des signaux” by M. Kunt (Book), 1996, PPUR Presses polytechniques, Lausanne (cited 219 times, Google Scholar).

This book is an introduction to the field of digital signal processing for a second-year undergraduate course in electrical engineering, aimed at students with basic notions of the Fourier transform, linear systems, complex numbers and matrix calculus. It presents the basic methods of the discipline, such as the discrete Fourier transform, the z-transform and digital filtering, including methods for filter design, spectral analysis and the fundamentals of homomorphic processing, along with applications. The book also deals with multidimensional digital signals, with applications to image and video processing.
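As a flavor of the book's core material, the discrete Fourier transform can be computed directly from its definition. This is a naive O(N²) sketch for illustration, not code from the book:

```python
import cmath
import math

def dft(x):
    """Naive O(N^2) DFT: X[k] = sum_n x[n] * exp(-2j*pi*k*n/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# A pure cosine at one cycle per window concentrates its energy
# in bins 1 and N-1, each with magnitude N/2.
N = 8
x = [math.cos(2 * math.pi * n / N) for n in range(N)]
X = dft(x)
```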

6. “Integer wavelet transform for embedded lossy to lossless image compression” by J. Reichel, G. Menegaz, M.J. Nadenau, M. Kunt, IEEE Transactions on Image Processing, 2001 (cited 217 times, Google Scholar).

The use of the discrete wavelet transform (DWT) for embedded lossy image compression is well established. One of the possible implementations of the DWT is the lifting scheme (LS). Because perfect reconstruction is granted by the structure of the LS, nonlinear transforms can be used, allowing efficient lossless compression as well. The integer wavelet transform (IWT) is an interesting alternative to the DWT because its rate-distortion performance is similar and the differences can be predicted. This topic is investigated in a theoretical framework. A model of the degradations caused by the use of the IWT instead of the DWT for lossy compression is presented. The rounding operations are modeled as additive noise and then propagated through the LS structure to measure their impact on the reconstructed pixels. This methodology is verified using simulations with random noise as input, and it accurately predicts the results obtained using images compressed by the well-known EZW algorithm. Experiments are also performed to measure the difference in terms of bit rate and visual quality, allowing a better understanding of the impact of the IWT when applied to lossy image compression.
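The lossless property granted by the lifting structure can be illustrated with the simplest integer wavelet, the Haar (S) transform. This is a minimal sketch of the principle, not the filters studied in the paper: rounding (here an integer shift) happens inside the lifting steps, yet the inverse undoes it exactly.

```python
# Integer Haar (S) transform via lifting. The rounding inside the steps
# keeps everything integer, and the lifting structure still guarantees
# perfect reconstruction.

def haar_forward(x):
    """Split x (even length) into detail d and approximation s coefficients."""
    d = [x[2 * i + 1] - x[2 * i] for i in range(len(x) // 2)]   # predict step
    s = [x[2 * i] + (d[i] >> 1) for i in range(len(x) // 2)]    # update step
    return s, d

def haar_inverse(s, d):
    x = []
    for si, di in zip(s, d):
        even = si - (di >> 1)   # undo the update, then the predict
        x += [even, even + di]
    return x

x = [12, 14, 200, 202, 7, 3]
s, d = haar_forward(x)
assert haar_inverse(s, d) == x   # lossless despite the integer rounding
```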

7. “Video coding: The second generation approach” by L. Torres, M. Kunt, Springer Science & Business Media, 2012 (cited 208 times, Google Scholar).

This book reviews second-generation image and video coding techniques, including new concepts from image analysis that greatly improve the performance of coding schemes at very high compression. This interest has been further emphasized by the future MPEG-4 standard. Second-generation image and video coding techniques are the ensemble of approaches proposing new and more efficient image representations than the conventional canonical form. As a consequence, the human visual system becomes a fundamental part of the encoding/decoding chain. More insight to distinguish between the first and second generations can be gained by noticing that image and video coding is basically carried out in two steps. First, image data are converted into a sequence of messages and, second, code words are assigned to the messages. Methods of the first generation put the emphasis on the second step, whereas methods of the second generation put it on the first step and use available results for the second step. As a result of including the human visual system, the second generation can also be seen as an approach that views the image as composed of different entities called objects. This implies that the image or sequence of images first has to be analyzed and/or segmented in order to find these entities.

8. “Wavelet-based color image compression: Exploiting the contrast sensitivity function” by M.J. Nadenau, J. Reichel, M. Kunt, IEEE Transactions on Image Processing, 2003 (cited 205 times, Google Scholar).

To design efficient image compression techniques, it is necessary to take into account properties of the human visual system, so that visually significant information is precisely represented and insignificant components are discarded. The overall sensitivity depends on aspects such as contrast, color and spatial frequency. The so-called contrast sensitivity function helps to regulate quantization parameters. The paper shows how the contrast sensitivity function can be implemented in a precise and locally adaptive way.
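The general idea of CSF-driven quantization can be sketched as follows, using the classic Mannos-Sakrison CSF model as a stand-in; the paper's own weighting and its parameters are not reproduced here.

```python
import math

# Illustrative sketch: scale the quantization step of each wavelet subband
# inversely with a contrast sensitivity function, so bands where the eye is
# less sensitive are quantized more coarsely. The CSF below is the classic
# Mannos-Sakrison model, used here only as a stand-in.

def csf(f_cpd):
    """Toy luminance CSF; sensitivity peaks at a few cycles per degree."""
    return 2.6 * (0.0192 + 0.114 * f_cpd) * math.exp(-((0.114 * f_cpd) ** 1.1))

def subband_step(base_step, f_cpd):
    """Quantizer step for a subband centered at spatial frequency f_cpd."""
    return base_step / max(csf(f_cpd), 1e-3)

# Finest step near the sensitivity peak (~8 cpd), coarser at the extremes.
steps = {f: subband_step(1.0, f) for f in (1, 8, 32)}
```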

9. “High compression image coding using an adaptive morphological subband decomposition” by O. Egger, W. Li, M. Kunt, Proceedings of the IEEE, 1998 (cited 149 times, Google Scholar).

This paper presents a new method for high compression of digital images. Image data is analyzed in terms of texture and non-texture areas. Linear filters are used to decompose texture data into subbands and morphological filters are used for other areas. Without any loss of information, the decomposition leads to a perfect reconstruction. Comparison with other well-known techniques such as linear subband coding or JPEG shows that the proposed scheme performs significantly better.
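The class of nonlinear filters used for the non-texture areas can be illustrated with 1-D grayscale morphology. A minimal sketch, not the paper's adaptive decomposition: opening (erosion then dilation) removes narrow bright spikes while preserving flat structures, which is what makes morphological filters attractive for edge-dominated, non-texture regions.

```python
# 1-D grayscale morphological opening with a flat structuring element
# of width k (illustrative sketch only).

def erode(x, k):
    r = k // 2
    return [min(x[max(0, i - r):i + r + 1]) for i in range(len(x))]

def dilate(x, k):
    r = k // 2
    return [max(x[max(0, i - r):i + r + 1]) for i in range(len(x))]

def opening(x, k=3):
    return dilate(erode(x, k), k)

signal = [0, 0, 9, 0, 0, 5, 5, 5, 5, 0]
opened = opening(signal)
# The narrow spike at index 2 is removed; the wide plateau survives intact.
```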

10. “A robust-digital QRS-detection algorithm for arrhythmia monitoring” by A. Ligtenberg, M. Kunt, Computers and Biomedical Research, 1983 (cited 149 times, Google Scholar).

Failures in heart function may be dramatic. It is therefore important to monitor the electrocardiogram (ECG) signal to detect abnormal behaviour. The ECG is a roughly periodic signal paced by the heart beats and composed of five waves called P, Q, R, S and T. The QRS complex is the most energetic part, characterized by its steepness, duration, regularity in shape, etc. The proposed detector can be separated into five components: a noise filter, a differentiator, an energy collector, a minimal-distance classifier, and a minimax trimmer.
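The first stages of such a pipeline can be sketched on a synthetic signal. This is only an illustration of the filter-differentiate-collect-threshold idea; the function names, window sizes and the simple peak-picking rule are assumptions, not the paper's classifier or minimax trimmer.

```python
# Illustrative QRS-style detection pipeline: noise filter, differentiator,
# energy collector, then a simple threshold/peak rule standing in for the
# classifier stage (parameters are assumed, not the paper's).

def moving_average(x, k):                       # noise filter
    return [sum(x[max(0, i - k + 1):i + 1]) / min(k, i + 1)
            for i in range(len(x))]

def derivative(x):                              # differentiator
    return [0.0] + [x[i] - x[i - 1] for i in range(1, len(x))]

def energy(x, k):                               # energy collector
    return moving_average([v * v for v in x], k)

def detect_qrs(signal, k=3, threshold=0.5):
    e = energy(derivative(moving_average(signal, k)), k)
    peak = max(e)
    # Keep local maxima whose energy exceeds a fraction of the global peak.
    return [i for i in range(1, len(e) - 1)
            if e[i] > threshold * peak and e[i] >= e[i - 1] and e[i] >= e[i + 1]]

# A lone steep spike (a crude "R wave") produces detections only around it.
sig = [0.0] * 20 + [1.0, 5.0, 1.0] + [0.0] * 20
beats = detect_qrs(sig)
```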

This page (revision-4) was last changed on Wednesday, 2 August 2023, 21:46 by System.