Speaker 1: Ji Liu (University of Rochester)
Title: Feature Selection via Sparse Learning [Slides]
 Ji Liu, Ryohei Fujimaki, and Jieping Ye, “Forward-Backward Greedy Algorithms for General Convex Smooth Functions over A Cardinality Constraint”, ICML, 2014.
 Ji Liu, Peter Wonka, and Jieping Ye, “A Multi-stage Framework for Dantzig Selector and Lasso”, Journal of Machine Learning Research, 2012.
 Ji Liu, Peter Wonka, and Jieping Ye, “Multi-stage Dantzig Selector”, NIPS, 2010.
Abstract: Feature selection plays an important role in various classification and regression problems. Sparse learning (compressed sensing) has recently become a popular methodology in machine learning. This talk connects the feature selection task with recent progress in sparse learning. Several sparse learning approaches, including several developed by the speaker, will be introduced. In particular, their theoretical error bounds will be compared to highlight the differences among these approaches and to provide an intuitive understanding of them.
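As a minimal illustration of the basic idea behind the talk (not the speaker's own algorithms, which include the multi-stage Dantzig selector and forward-backward greedy methods), the sketch below selects features with the Lasso, minimizing (1/2n)||y − Xw||² + λ||w||₁ via proximal gradient descent (ISTA) on synthetic data where only the first k features carry signal:

```python
import numpy as np

# Synthetic regression problem: only the first k of d features matter.
rng = np.random.RandomState(0)
n, d, k = 200, 50, 5
X = rng.randn(n, d)
w_true = np.zeros(d)
w_true[:k] = 3.0
y = X @ w_true + 0.1 * rng.randn(n)

# ISTA: gradient step on the smooth loss, then soft-thresholding (the
# proximal operator of the L1 penalty), which drives most weights to zero.
lam = 0.1
step = n / np.linalg.norm(X, 2) ** 2   # 1/L for the smooth part
w = np.zeros(d)
for _ in range(500):
    grad = X.T @ (X @ w - y) / n
    z = w - step * grad
    w = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)

selected = np.flatnonzero(np.abs(w) > 1e-3)
print(selected)  # indices of the features the Lasso keeps
```

The nonzero pattern of `w` is the selected feature set; the error-bound comparisons in the talk quantify how reliably such procedures recover the true support.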
Speaker Bio: Ji Liu is currently an assistant professor in Computer Science and the Goergen Institute for Data Science at the University of Rochester (UR). He received his Ph.D., M.S., and B.S. degrees from the University of Wisconsin-Madison, Arizona State University, and the University of Science and Technology of China, respectively. His research interests span machine learning, optimization, and their applications in areas such as computer vision, data mining, and data analysis. His recent research focuses on asynchronous parallel optimization, sparse learning (compressed sensing) theory and algorithms, online learning, abnormal event detection, and feature/pattern extraction in bio-image analysis. He founded the machine learning and optimization group at UR. He received a Best Paper Honorable Mention at SIGKDD 2010 and has published more than 20 papers over the past five years in top journals and conferences, including JMLR, SIOPT, TPAMI, NIPS, ICML, UAI, SIGKDD, ICCV, CVPR, and ECCV.
Speaker 2: Kaixiang Mo (Hong Kong University of Science and Technology)
Title: Image Feature Learning for Cold Start Problem in Display Advertising [Slides]
 Kaixiang Mo, Bo Liu, Lei Xiao, Yong Li, and Jie Jiang, “Image Feature Learning for Cold Start Problem in Display Advertising”, IJCAI, 2015, Buenos Aires, Argentina.
Abstract: In online display advertising, state-of-the-art Click-Through Rate (CTR) prediction algorithms rely heavily on historical information, and they perform poorly on the growing number of new ads that have no history; this is known as the cold start problem. For image ads, current state-of-the-art systems use handcrafted image features, such as multimedia features and SIFT features, to capture the attractiveness of ads. However, these handcrafted features are task-dependent, inflexible, and heuristic. To tackle the cold start problem in image display ads, we propose a new feature learning architecture that learns the most discriminative image features directly from raw pixels and user feedback in the target task. The proposed method is flexible and does not depend on human heuristics. Extensive experiments on a real-world dataset with 47 billion records show that our feature learning method significantly outperforms existing handcrafted features and extracts discriminative and meaningful features.
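To make the "raw pixels plus user feedback" idea concrete, here is a deliberately simplified sketch (not the paper's deep architecture): a logistic CTR predictor trained by gradient descent directly on flattened pixel values, with synthetic clicks standing in for user feedback. Everything here (data, dimensions, learning rate) is an assumption for illustration:

```python
import numpy as np

# Synthetic "image ads": n flattened 8x8 pixel arrays, with simulated
# click feedback generated from a hidden pixel-level preference vector.
rng = np.random.RandomState(1)
n, h, w = 500, 8, 8
X = rng.randn(n, h * w)                  # raw pixels, flattened
w_hidden = rng.randn(h * w)
clicks = (X @ w_hidden + 0.1 * rng.randn(n) > 0).astype(float)

# Logistic regression on raw pixels, trained by full-batch gradient
# descent on the click labels (a stand-in for the paper's CNN).
theta = np.zeros(h * w)
lr = 0.5
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ theta)))   # predicted click probability
    theta -= lr * X.T @ (p - clicks) / n     # gradient of log loss

p_final = 1.0 / (1.0 + np.exp(-(X @ theta)))
acc = float(np.mean((p_final > 0.5) == clicks.astype(bool)))
print(round(acc, 2))
```

Because the model consumes pixels rather than handcrafted descriptors such as SIFT, it needs no ad-specific history at prediction time, which is exactly what makes feature learning attractive for cold-start ads.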
Speaker Bio: Kaixiang Mo is currently a Ph.D. student in the Department of Computer Science and Engineering at the Hong Kong University of Science and Technology, working on data mining and machine learning with Prof. Qiang Yang. He received his B.Eng. in Computer Science and Engineering from Sun Yat-sen University. His research interests include transfer learning, crowdsourcing, and deep learning.