Seminar on "DMTK: Making Very Large-Scale Machine Learning Possible" & "Deep Residual Learning in Image Classification and Transfer Learning"

  • Posted on: 6 June 2016
  • By: hadmin

Title: DMTK: Making Very Large-Scale Machine Learning Possible

Speaker: Taifeng WANG
                  Lead Researcher, Microsoft Research Asia
Date:     26 April 2016 (Tuesday)
Time:     10:30 a.m. – 12:00 noon
Venue:   Room 513, William M. W. Mong Engineering Building

Distributed machine learning has become more important than ever in this big data era. In recent years, practice has shown that larger models tend to achieve better accuracy across a wide range of applications. However, training big models remains a challenge for most machine learning researchers and practitioners, because the task usually requires substantial computational resources. To enable efficient training of big models on just a modest cluster, we released the Microsoft Distributed Machine Learning Toolkit (DMTK), which contains both algorithmic and system innovations. These innovations make machine learning tasks on big data highly scalable, efficient and flexible.
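The general pattern behind such toolkits is data-parallel training: each worker computes gradients on its own shard of the data and applies updates to a shared parameter store. The sketch below illustrates this pattern on a toy linear-regression problem; it is purely illustrative (sequential, single-process) and does not use the DMTK API.

```python
import numpy as np

def grad_linear(w, X, y):
    """Gradient of mean squared error for a linear model y ~ X @ w."""
    return 2.0 * X.T @ (X @ w - y) / len(y)

def train_data_parallel(shards, dim, lr=0.1, steps=200):
    """Data-parallel SGD sketch: each 'worker' owns one data shard and
    pushes gradient updates to the shared parameter vector w."""
    w = np.zeros(dim)                       # shared parameters
    for _ in range(steps):
        for X, y in shards:                 # one pass per simulated worker
            w -= lr * grad_linear(w, X, y)  # push update to shared state
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
X = rng.normal(size=(100, 2))
y = X @ true_w
# Split the data into four shards, one per simulated worker.
shards = [(X[i::4], y[i::4]) for i in range(4)]
w = train_data_parallel(shards, dim=2)
```

In a real distributed setting the workers run concurrently on different machines and the shared parameters live on a parameter server, which is where the algorithmic and system innovations mentioned in the abstract come in.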


About the speaker:
Taifeng Wang is a lead researcher in the Artificial Intelligence group at Microsoft Research Asia. He joined MSRA in July 2006 after graduating from the University of Science and Technology of China. His research interests include large-scale machine learning, computational advertising and distributed systems. He is currently leading a project at MSRA focused on building a parallel machine learning platform. Prior to that, he worked on sponsored ads and search engine techniques for several years, primarily on ads click prediction, ads keyword selection, ads optimization and static ranking algorithms for search engines. He has published papers and served on the program committees of premier conferences such as WWW, KDD, AAAI, WSDM, and SIGIR. In addition, he has shipped several techniques to Windows Azure Machine Learning and Bing Ads based on his research work. He also has over 10 related US patents filed or pending.

Title: Deep Residual Learning in Image Classification and Transfer Learning

Speaker: Shaoqing REN
                  University of Science and Technology of China

Date:     26 April 2016 (Tuesday)
Time:     10:30 a.m. – 12:00 noon
Venue:   Room 513, William M. W. Mong Engineering Building

Deeper neural networks are difficult to train. We will present a residual learning framework that eases the training of networks substantially deeper than those used previously. In this framework, we explicitly reformulate the layers as learning residual functions with reference to the layer inputs, instead of learning unreferenced functions. We provide comprehensive empirical evidence that these residual networks are easier to optimize and can gain accuracy from considerably increased depth. With this deep network, our team won 1st place in the ILSVRC 2015 (ImageNet Competition) classification and detection tasks, as well as the COCO 2015 detection and segmentation tasks. In this talk, I will introduce our method and findings on deep residual learning in image classification and transfer learning.
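The reformulation in the abstract can be sketched in a few lines: instead of learning a mapping H(x) directly, the stacked layers learn the residual F(x) = H(x) - x, and a skip connection adds the input back so the block outputs F(x) + x. The toy example below uses plain fully-connected layers; the actual networks from the talk use convolutional layers and batch normalization, so this is only a minimal sketch of the idea.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W1, W2):
    """Two-layer residual block: output = relu(F(x) + x),
    where F(x) = relu(x @ W1) @ W2 is the learned residual function."""
    f = relu(x @ W1) @ W2   # the residual function F(x)
    return relu(f + x)      # skip connection adds the input back

# With zero weights, F(x) = 0 and the block reduces to the identity on
# non-negative activations -- the easy-to-optimize "default" that helps
# very deep stacks train:
rng = np.random.default_rng(0)
x = relu(rng.normal(size=(4, 8)))   # non-negative activations
W1 = np.zeros((8, 8))
W2 = np.zeros((8, 8))
out = residual_block(x, W1, W2)
```

The point of the skip connection is that a deep stack of such blocks can always fall back to the identity mapping, so adding depth should not make optimization harder than the shallower network it extends.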


About the speaker:
Shaoqing Ren is currently a final-year Ph.D. student in the joint Ph.D. program between the University of Science and Technology of China and Microsoft Research Asia, supervised by Dr. Jian Sun. His research interests include computer vision and machine learning, especially the detection and localization of faces and general objects.

