- Xiaogang Wang's talk abstract: In this talk, I will report our NIPS 2014 work on deep learning for face recognition, i.e. DeepID2 and the Multi-View Perceptron (MVP). With a novel deep model and a moderate training set of 200,000 face images, 99.15% accuracy has been achieved on LFW, the most challenging and extensively studied face recognition dataset. Deep learning provides a powerful tool to separate intra-personal and inter-personal variations, whose distributions are complex and highly nonlinear, through hierarchical feature transforms. It is essential to learn effective face representations by using two supervisory signals simultaneously, i.e. the face identification and verification signals. Some people understand the success of deep learning as fitting a complex model with many parameters to a dataset. To clarify this misunderstanding, we investigate the face recognition process in deep nets: what information is encoded in neurons, and how robust they are to data corruption. We discovered several interesting properties of deep nets. In the Multi-View Perceptron, a hybrid deep model is proposed to simultaneously accomplish the tasks of face recognition, pose estimation, and face reconstruction. It employs deterministic and random neurons to encode identity and pose information respectively. Given a face image taken from an arbitrary view, it can untangle the identity and view features, and at the same time the full spectrum of multi-view images of the same identity can be reconstructed. It is also capable of interpolating and predicting images under viewpoints that are unobserved in the training data.
- Shiguang Shan's talk abstract: Deep learning models, especially CNNs, have been applied to face recognition with dramatic success, especially under the evaluation protocol of Labeled Faces in the Wild (LFW), when big face data is available. In this talk, besides showing some recent results on CNN features for video-based face processing, I will also show that alternative deep models such as the Auto-Encoder can facilitate face recognition impressively, especially for face alignment (our ECCV14 paper) and pose normalization (our CVPR14 paper) purposes. Both works suggest that, even with "small" data, elaborately designed deep models can work well.
- Ming Yang's talk abstract: This talk covers our recent work on DeepFace for unconstrained face recognition. In the face recognition pipeline, we revisit both the alignment step and the representation step: we employ explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters, using several locally connected layers without weight sharing rather than the standard convolutional layers. The learned representations, coupling the accurate model-based alignment with the large facial database, generalize remarkably well to faces in unconstrained environments. We greatly improve face recognition accuracy on the widely used LFW benchmark, in both the verification (1:1) and identification (1:N) protocols. For instance, with a single CNN model, our method reaches an accuracy of 98% on LFW. Moreover, we directly compare, for the first time, with a state-of-the-art Commercial Off-The-Shelf (COTS) system and show a sizable leap in performance.
- Face representations learned by deep models. [Slides]
- Deep auto-encoder for face alignment and pose normalization. [Slides]
- DeepFace for unconstrained face recognition. [Slides]
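The joint identification-plus-verification supervision described in Xiaogang Wang's abstract can be sketched as follows. This is a minimal, hypothetical illustration of DeepID2-style signals, not the talk's actual implementation; the function names, the margin value, and the weighting parameter `lam` are assumptions.

```python
import math

# Illustrative DeepID2-style supervisory signals for a face pair:
# an identification loss (softmax cross-entropy on identity) combined
# with a verification loss (pull same-identity features together,
# push different identities at least a margin apart).

def softmax_cross_entropy(logits, label):
    """Identification signal: -log softmax probability of the true identity."""
    mx = max(logits)
    exps = [math.exp(z - mx) for z in logits]
    return -math.log(exps[label] / sum(exps))

def verification_loss(f1, f2, same_identity, margin=1.0):
    """Verification signal: contrastive loss on the feature distance."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(f1, f2)))
    if same_identity:
        return 0.5 * dist ** 2                     # shrink intra-personal variation
    return 0.5 * max(0.0, margin - dist) ** 2      # enlarge inter-personal gaps

def pair_loss(logits1, id1, logits2, id2, f1, f2, lam=0.05):
    """Total supervision for a face pair; lam balances the two signals."""
    ident = softmax_cross_entropy(logits1, id1) + softmax_cross_entropy(logits2, id2)
    return ident + lam * verification_loss(f1, f2, id1 == id2)
```

Using both signals is what separates intra-personal from inter-personal variation: identification alone discriminates identities, while the verification term directly shapes the feature distances.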
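The auto-encoder-based face alignment mentioned in Shiguang Shan's abstract works coarse-to-fine: each stage regresses an update to the current landmark estimate. A toy sketch of that cascade structure (forward pass only, with illustrative layer shapes and toy weights; all names are assumptions, not the paper's code):

```python
import math

# Hypothetical cascade of auto-encoder regressors for face alignment:
# each stage encodes image features and regresses an increment to the
# current landmark shape, refining a coarse initial estimate.

def tanh_layer(x, W, b):
    """One fully connected layer with tanh activation."""
    return [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

def stage(features, W_enc, b_enc, W_out, b_out):
    """One stage: encode the features, then regress a shape update."""
    h = tanh_layer(features, W_enc, b_enc)
    return [sum(wi * hi for wi, hi in zip(row, h)) + bi
            for row, bi in zip(W_out, b_out)]

def align(features, shape, stages):
    """Coarse-to-fine refinement: shape += stage output, per stage."""
    for W_enc, b_enc, W_out, b_out in stages:
        delta = stage(features, W_enc, b_enc, W_out, b_out)
        shape = [s + d for s, d in zip(shape, delta)]
    return shape
```

In practice each stage would be a trained stacked auto-encoder and the features would be re-extracted around the updated landmarks, but the additive refinement loop above is the core idea.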
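One way to see why the locally connected layers in Ming Yang's DeepFace abstract push the parameter count past 120 million: unlike a convolution, a locally connected layer learns a separate filter set at every output position. A small illustrative calculation (the layer sizes below are assumptions, not DeepFace's exact configuration):

```python
# Parameter counts (weights only, biases omitted) for a k x k filter
# bank with in_ch input and out_ch output channels.

def conv_params(in_ch, out_ch, k):
    """Standard convolution: one filter bank shared across all positions."""
    return out_ch * in_ch * k * k

def local_params(in_ch, out_ch, k, out_h, out_w):
    """Locally connected layer: a distinct filter bank per output position."""
    return out_h * out_w * conv_params(in_ch, out_ch, k)

# Example with assumed sizes: a 9x9, 16->16 channel layer over a
# 25x25 output grid costs 625x more weights without sharing.
print(conv_params(16, 16, 9))            # 20,736 shared weights
print(local_params(16, 16, 9, 25, 25))   # 12,960,000 unshared weights
```

Forgoing weight sharing is affordable here because aligned faces put the same facial parts at the same positions, so each region can learn its own specialized filters.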