Alan Bovik - Publications

According to Google Scholar, Prof. Al Bovik has been cited more than 145,000 times and has an h-index of 129. He has published more than 1,000 technical papers and books, many of which have been cited thousands of times. These include:

1. “Image quality assessment: From error visibility to structural similarity,” IEEE Transactions on Image Processing, vol. 13, no. 4, pp. 600-612, April 2004 (with Z. Wang, H.R. Sheikh, and E.P. Simoncelli). Winner of the IEEE Signal Processing Society Best Paper Award for 2009 and of the IEEE Signal Processing Society Sustained Impact Paper Award for 2017. The most-cited paper ever published in any IEEE Signal Processing Society publication (cited > 38,000 times). Because of its widespread adoption in the global television industry, it was accorded a Primetime Emmy Award by the Academy of Television Arts and Sciences in 2015. Now used heavily by the deep learning community as a perceptual loss function owing to its analytic properties (quasi-convexity, continuous differentiability).
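The SSIM index introduced in this paper compares local luminance, contrast, and structure between a reference and a distorted image. A minimal single-window sketch (the published method applies this within a sliding Gaussian window; the stabilizing constants below follow the paper's conventions for 8-bit images):

```python
import numpy as np

def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    """Single-window SSIM between two grayscale images.

    A simplified global-statistics sketch of the structural similarity
    index, not the full sliding-window implementation from the paper.
    """
    x = np.asarray(x, dtype=np.float64)
    y = np.asarray(y, dtype=np.float64)
    mu_x, mu_y = x.mean(), y.mean()                    # luminance terms
    var_x, var_y = x.var(), y.var()                    # contrast terms
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()          # structure term
    return ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
```

Identical images score 1.0; any luminance, contrast, or structural distortion lowers the score.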

2. “Image information and visual quality,” IEEE Transactions on Image Processing, vol. 15, no. 2, pp. 430-444, February 2006 (with H.R. Sheikh) (cited > 3,800 times). Introduced the Visual Information Fidelity (VIF) concept to the field of picture and video quality prediction, including the breakthrough use of distorted natural video statistics models (modifications of Natural Scene Statistics, or NSS, models) for video quality prediction. Named a 2017 Google Scholar Classic Paper (highly-cited papers that have stood the test of time, being among the ten most-cited articles in their area of research published ten years earlier). Supplies four of the six features used in the Netflix VMAF quality prediction engine that controls all VMAF encodes streamed globally. Nominated by Netflix for a Technology and Engineering Emmy Award, which Bovik and his students received in 2020.

3. “No-reference image quality assessment in the spatial domain,” IEEE Transactions on Image Processing, vol. 21, no. 12, pp. 4695-4708, December 2012 (with A. Mittal and A.K. Moorthy) (cited > 3,300 times). Introduced BRISQUE, the first successful general-purpose blind picture quality prediction model, based on breakthrough spatial natural image distortion statistics features feeding a support vector regressor (SVR). The concept of using an SVR to aggregate perceptual features for quality prediction was adopted by Netflix in creating VMAF. Despite its simplicity, it delivered much higher quality-prediction performance than any prior blind quality model.
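BRISQUE's spatial NSS features are built from mean-subtracted contrast-normalized (MSCN) coefficients, whose empirical distributions change predictably under distortion. A simplified sketch, assuming a square box window in place of the paper's Gaussian weighting (the window size and stabilizer `eps` are illustrative choices, not the paper's):

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def mscn(img, k=7, eps=1.0):
    """Mean-subtracted contrast-normalized (MSCN) coefficients.

    A simplified sketch of BRISQUE's divisive normalization step using a
    k x k box window rather than the paper's Gaussian window (assumption).
    """
    img = np.asarray(img, dtype=np.float64)
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    windows = sliding_window_view(padded, (k, k))   # shape (H, W, k, k)
    mu = windows.mean(axis=(2, 3))                  # local mean field
    sigma = windows.std(axis=(2, 3))                # local contrast field
    return (img - mu) / (sigma + eps)               # divisive normalization
```

For natural, undistorted images the MSCN coefficients are close to Gaussian-distributed; BRISQUE fits parametric models to these coefficients (and products of neighbors) and feeds the fitted parameters to the regressor.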

4. “Making a ‘completely blind’ image quality analyzer,” IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209-212, March 2013 (with A. Mittal and R. Soundararajan). Winner of the IEEE Signal Processing Letters Best Paper Award for 2017 (cited > 3,000 times). First-of-a-kind, high-performance, completely blind quality model that is globally marketed by several companies (e.g., MathWorks, Video Clarity) and used by the global television industry for video ingestion decisions and in environments where reference videos are not available, filling important use cases not addressable by SSIM, VIF, or VMAF.

5. “Video quality assessment by reduced reference spatio-temporal entropic differencing,” IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 4, pp. 684-694, April 2013 (with R. Soundararajan). Winner of the IEEE Transactions on Circuits and Systems for Video Technology Best Paper Award for 2016 (cited > 300 times). Created first-of-their-kind space-time and temporal video statistics models (and distorted video statistics models) to achieve state-of-the-art video quality prediction performance. Extends the VIF concept in multiple key ways: into the temporal domain and into the subsampled (reduced reference) domain. More accurate than the earlier VIF or later VMAF models, without any need for training or support vector regression.

6. “Massive online crowdsourced study of subjective and objective picture quality,” IEEE Transactions on Image Processing, vol. 25, no. 1, pp. 372-387, January 2016 (with D. Ghadiyaram) (cited > 300 times). Innovated the first large-scale crowdsourced study of picture quality, and the first large-scale study of user-generated content (UGC) pictures instead of synthetic distortions introduced by picture quality scientists. It remains the most widely used UGC picture quality database.

7. “Deep convolutional neural models for picture quality prediction,” IEEE Signal Processing Magazine, Special Issue on Deep Learning for Visual Understanding, vol. 34, no. 6, pp. 130-141, November 2017 (with J. Kim et al.) (cited > 200 times). This survey and research paper introduced deep learning models to the picture quality field and was the broadest and most comprehensive study available at the time.

8. “Large scale study of perceptual video quality,” IEEE Transactions on Image Processing, vol. 28, no. 2, pp. 612-627, February 2019 (with Z. Sinno) (cited > 100 times). Innovated the first large-scale crowdsourced study of video quality, and the first large-scale study of user-generated video content instead of synthetic distortions introduced by video quality scientists. It remains the most widely used UGC video quality database.

9. “UGC-VQA: Benchmarking blind video quality assessment for user generated content,” IEEE Transactions on Image Processing, vol. 30, 2021 (with Z. Tu, Y. Wang, N. Birkbeck, and B. Adsumilli) (cited > 80 times). Large-scale study and comparison of all high-performance UGC video quality models in the increasingly important space of video uploads by casual users, e.g., to YouTube, TikTok, and Facebook.

10. “Patch VQ: ‘Patching up’ the video quality problem,” IEEE Computer Society Conference on Computer Vision and Pattern Recognition, June 19-25, 2021 (with Z. Ying, M. Mandal, D. Mahajan, and D. Ghadiyaram) (cited > 40 times). Describes the largest (by an order of magnitude) video quality study to date; the first to include space, time, and space-time patches; the first study of local versus global video quality perception; and introduces the SOTA video quality model PatchVQ (PVQ), which predicts local, global, and space-time maps of video quality.

This page (revision-4) was last changed on Sunday, 4 June 2023, 18:31 by System