Meet the Authors of Influential Papers – 2015-08-26

[VALSE Webinar, Session 15-27]

Speaker 1: Lin Zhang (Tongji University)
Host: Weisheng Dong (Xidian University)
Title: No-Reference Image Quality Assessment Based on Natural Scene Statistics Models [Slides]
Time: August 26, 2015, 20:00 (Beijing Time)
Related papers:
[1] Lin Zhang, Lei Zhang, and Alan C. Bovik, “A feature-enriched completely blind image quality evaluator”, IEEE Trans. Image Processing, vol. 24, no. 8, pp. 2579-2591, 2015.
[2] Lin Zhang, Ying Shen, and Hongyu Li, “VSI: A visual saliency induced index for perceptual image quality assessment”, IEEE Trans. Image Processing, vol. 23, no. 10, pp. 4270-4281, 2014.
[3] Lin Zhang, Lei Zhang, Xuanqin Mou, and David Zhang, “FSIM: A feature similarity index for image quality assessment”, IEEE Trans. Image Processing, vol. 20, no. 8, pp. 2378-2386, 2011.
Abstract: Existing blind image quality assessment (BIQA) methods are mostly opinion-aware. Such opinion-aware methods, however, require a large number of training samples covering a variety of distortion types, each with an associated human subjective score. The BIQA models learned by opinion-aware methods often have weak generalization capability, thereby limiting their usability in practice. By comparison, opinion-unaware methods do not need human subjective scores for training, and thus have greater potential for good generalization capability. Unfortunately, thus far no opinion-unaware BIQA method has shown consistently better quality-prediction accuracy than opinion-aware methods. Here we aim to develop an opinion-unaware BIQA method that can compete with, and perhaps outperform, existing opinion-aware methods. By integrating natural image statistics features derived from multiple cues, we learn a multivariate Gaussian model of image patches from a collection of pristine natural images. Using the learned multivariate Gaussian model, a Bhattacharyya-like distance is used to measure the quality of each image patch, and an overall quality score is then obtained by average pooling. The proposed BIQA method does not need any distorted sample images or subjective quality scores for training, yet extensive experiments demonstrate its superior quality-prediction performance over state-of-the-art opinion-aware BIQA methods.
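The opinion-unaware pipeline described in the abstract (fit a multivariate Gaussian to features from pristine patches, score test patches by a Bhattacharyya-like distance, then average-pool) can be sketched as follows. This is a minimal illustration, not the paper's method: the actual feature set (multi-cue natural scene statistics) and the exact distance formulation are simplified here to a plain Mahalanobis-style distance against the pristine model.

```python
import numpy as np

def fit_pristine_model(feature_matrix):
    """Fit a multivariate Gaussian (mean vector, covariance matrix) to
    patch features extracted from pristine natural images.
    feature_matrix: (n_patches, n_features) array."""
    mu = feature_matrix.mean(axis=0)
    cov = np.cov(feature_matrix, rowvar=False)
    return mu, cov

def patch_quality(feat, mu_p, cov_p):
    """Distance between one test patch's feature vector and the pristine
    model. The paper uses a Bhattacharyya-like distance; this sketch uses
    the pristine covariance alone, giving a Mahalanobis-style distance."""
    diff = feat - mu_p
    return float(np.sqrt(diff @ np.linalg.pinv(cov_p) @ diff))

def image_quality(patch_feats, mu_p, cov_p):
    """Average-pool per-patch distances into one score (higher = worse)."""
    return float(np.mean([patch_quality(f, mu_p, cov_p)
                          for f in patch_feats]))
```

Note that no distorted images or subjective scores enter the training step: the only learned quantities are the mean and covariance of pristine-patch features, which is what makes the method opinion-unaware.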
Bio: Dr. Lin Zhang is currently an associate professor in the School of Software Engineering, Tongji University. He received his B.Sc. and M.Sc. degrees from the Department of Computer Science and Engineering, Shanghai Jiao Tong University, in 2003 and 2006, respectively, and then worked at Microsoft and Autodesk. In March 2008 he began his Ph.D. studies at The Hong Kong Polytechnic University under the supervision of Prof. Lei Zhang, and he joined Tongji University after receiving his Ph.D. in August 2011. His research interests include biometrics and multimedia quality assessment. He has published 12 first-author papers in journals such as IEEE TPAMI, IEEE TIP, PR, and IVC. According to Google Scholar, his papers have been cited 1,558 times to date, and two of them are ESI Highly Cited Papers. His paper “FSIM: A feature similarity index for image quality assessment” (IEEE Trans. Image Processing, 20(8): 2378-2386, 2011) is the most-cited paper among all papers published in IEEE TIP since 2011. His paper “Online finger-knuckle-print verification for personal authentication” (Pattern Recognition, 43(7): 2560-2571, 2010) was nominated for the Pattern Recognition best paper award, and his paper “3D ear identification using LC-KSVD and local histograms of surface types” received a best paper nomination at ICME 2015. In 2013 he was selected for the Shanghai Pujiang Talent Program.

Speaker 2: Jinjian Wu (Xidian University)
Host: Ruiping Wang (Institute of Computing Technology, Chinese Academy of Sciences)
Title: Just-Noticeable Difference Threshold Estimation for Images Based on Structural Uncertainty [Slides]
Time: August 26, 2015, 21:00 (Beijing Time)
Related papers:
[1] Jinjian Wu, Weisi Lin, Guangming Shi, Xiaotian Wang, and Fu Li, “Pattern masking estimation in image with structural uncertainty”, IEEE Trans. Image Processing, vol. 22, no. 12, pp. 4892-4904, 2013.
[2] Jinjian Wu, Guangming Shi, Weisi Lin, Anmin Liu, and Fei Qi, “Just noticeable difference estimation for images with free-energy principle”, IEEE Trans. Multimedia, vol. 15, no. 7, pp. 1705-1710, 2013.
[3] Jinjian Wu, Weisi Lin, and Guangming Shi, “Visual masking estimation based on structural uncertainty”, IEEE ISCAS 2013, Beijing, China. (Best Student Paper Award)
Abstract: A model of visual masking, which reveals the visibility of stimuli in the human visual system (HVS), is useful in perception-based image/video processing. Existing visual masking functions mainly take luminance contrast into account, which tends to overestimate the visibility threshold of edge regions and underestimate that of texture regions. Recent research on visual perception indicates that the HVS is sensitive to orderly regions, which possess regular structures, and insensitive to disorderly regions, which possess uncertain structures. Therefore, structural uncertainty is another determining factor in visual masking. In this work, we introduce a novel pattern masking function based on both luminance contrast and structural uncertainty.
Bio: Dr. Jinjian Wu is currently an associate professor at Xidian University. He received his B.Sc. and Ph.D. degrees from Xidian University in June 2008 and June 2013, respectively. From September 2011 to August 2014 he worked at Nanyang Technological University, Singapore, as a research assistant and then a postdoctoral research fellow. His work covers the theory and applications of image processing, with research interests including visual perception modeling, brain-computer interfaces, and digital image processing. He has led one National Natural Science Foundation of China project and one university graduate innovation project. He has published more than 20 first-author papers in major journals and conferences such as IEEE TIP, IEEE TMM, Elsevier JVCI, IEEE ISCAS, and IEEE DSP, received the Best Student Paper Award at ISCAS 2013, and has filed two Chinese invention patents. He serves as a reviewer for more than ten major international journals, including IEEE TIP, IEEE TMI, IEEE TCSVT, and IEEE TMM, and as a session chair, program committee member, or reviewer for more than ten international conferences, including IEEE ICIP, IEEE ISCAS, IEEE VCIP, and IEEE ChinaSIP.
