Dr. Ming Yang visited on October 9th

Dr. Ming Yang from Facebook AI Research visited our lab on October 9th. At 13:30 he gave a talk, "DeepFace: Closing the Gap to Human-Level Performance in Face Verification," in Room 10-206 of the Rohm Building, and spoke with Professor Gu and students afterward.

Fig. 1. The talk was about face verification

Fig. 2. Dr. Ming Yang giving the talk

Dr. Ming Yang has been a research scientist at Facebook AI Research since 2013. He received the B.E. and M.E. degrees in electronic engineering from Tsinghua University, Beijing, China, in 2001 and 2004, respectively, and the Ph.D. degree in electrical and computer engineering from Northwestern University, Evanston, Illinois, in June 2008. From 2004 to 2008 he was a research assistant in the computer vision group at Northwestern University. After graduation, he joined NEC Laboratories America in Cupertino, California, as a research staff member. His research interests include computer vision and machine learning, in particular face recognition, large-scale image retrieval, and intelligent multimedia content analysis. He has co-authored 24 papers in ICCV/CVPR/PAMI, with over 1,450 citations and an h-index of 23 according to Google Scholar.



DeepFace: Closing the Gap to Human-Level Performance in Face Verification


This talk is an extended version of our CVPR 2014 presentation, with additional coverage of the overview of face recognition and the state of the art in constrained face recognition in FRVT 2013. In modern face recognition, the conventional pipeline consists of four stages: detect => align => represent => classify. We revisit both the alignment step and the representation step by employing explicit 3D face modeling in order to apply a piecewise affine transformation, and derive a face representation from a nine-layer deep neural network. This deep network involves more than 120 million parameters using several locally connected layers without weight sharing, rather than the standard convolutional layers. Thus we trained it on the largest facial dataset to date, an identity-labeled dataset of four million facial images belonging to more than 4,000 identities. The learned representations, which couple accurate model-based alignment with the large facial database, generalize remarkably well to faces in unconstrained environments, even with a simple classifier. Our method reaches an accuracy of 97.35% on the Labeled Faces in the Wild (LFW) dataset, reducing the error of the current state of the art by more than 27%, closely approaching human-level performance.
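The four-stage pipeline mentioned in the abstract can be sketched in code. The following is a minimal toy illustration, not the DeepFace implementation: every function body is a placeholder (a pass-through detector, a single global affine warp standing in for the 3D-model-driven piecewise alignment, and one linear-plus-ReLU map standing in for the nine-layer network), and all names and parameters are our own assumptions.

```python
import numpy as np

# Toy sketch of the conventional pipeline: detect => align => represent => classify.
# All stages are placeholders chosen for illustration, not the actual DeepFace code.

def detect(image):
    # Placeholder detector: treat the whole image as the face crop.
    return image

def align(face, affine=np.eye(2), offset=np.zeros(2)):
    # DeepFace uses a piecewise affine warp driven by an explicit 3D face model;
    # here we apply one global affine map to pixel coordinates instead.
    h, w = face.shape
    out = np.zeros_like(face)
    for y in range(h):
        for x in range(w):
            sy, sx = (affine @ np.array([y, x]) + offset).round().astype(int)
            if 0 <= sy < h and 0 <= sx < w:
                out[y, x] = face[sy, sx]
    return out

def represent(face, weights):
    # Stand-in for the deep network: one linear map plus ReLU,
    # followed by L2 normalization of the descriptor.
    v = np.maximum(weights @ face.ravel(), 0.0)
    return v / (np.linalg.norm(v) + 1e-8)

def verify(img_a, img_b, weights, threshold=0.5):
    # Classify: same-identity decision from the cosine similarity
    # of the two normalized descriptors.
    fa = represent(align(detect(img_a)), weights)
    fb = represent(align(detect(img_b)), weights)
    return float(fa @ fb) >= threshold

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16 * 16))  # toy "network" weights
a = rng.random((16, 16))
print(verify(a, a, W))  # identical images -> True
```

The point of the sketch is the structure: each stage consumes the previous stage's output, so improving alignment or representation (as the talk describes) changes only one stage while the surrounding pipeline stays fixed.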