Speaker


Representation Learning on Big and Small Data

Talk Abstract

Approaches to feature extraction can be divided into two categories: model-centric and data-driven. The model-centric approach relies on human heuristics to develop a computer model that extracts features from data. Such models are engineered by scientists and then validated via empirical studies. A major shortcoming of the model-centric approach is that unusual circumstances not considered during a model's design can render its engineered features less effective. In contrast to the model-centric approach, which dictates representations independently of data, the data-driven approach learns representations from data. Example data-driven algorithms are the multilayer perceptron (MLP) and the convolutional neural network (CNN), which belong to the general category of neural networks and deep learning.

In this talk I will first explain why my team at Google embarked on the data-driven approach in 2006. In 2008, we funded the ImageNet project at Stanford, and subsequently in 2011 filed two data-driven patents, one on data-driven feature extraction and the other on data-driven object recognition. We parallelized five widely used machine learning algorithms, including SVMs, PFP, LDA, Spectral Clustering, and CNN, and open-sourced all of them. Recently at HTC, we announced the DeepQ Open AI platform, which features these scalable algorithms with enhancements. In particular, I will explain how we have made configuring a distributed system to run CNNs both simple and cost effective.

In 2012, the world was convinced of the effectiveness of the data-driven approach after AlexNet achieved breakthrough accuracy in the ImageNet Large-Scale Visual Recognition Challenge (ILSVRC). This talk will walk through subsequent enhancements to AlexNet from three perspectives: representation ability, optimization ability, and generalization ability. Unfortunately, most applications face the problem of small data. I will share our experiences, both positive and negative, with transfer learning and GANs. The talk concludes with a list of open research issues.
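To make the small-data point concrete, below is a minimal transfer-learning sketch, assuming PyTorch and torchvision with an ImageNet-pretrained ResNet-18 (the talk does not prescribe a particular framework or model). The idea illustrated is the one the abstract names: representations learned from big data are reused as a fixed feature extractor, and only a small classification head is trained on the small target dataset.

```python
# Hedged sketch of transfer learning for small data (assumed stack: PyTorch + torchvision).
import torch
import torch.nn as nn
from torchvision import models


def build_transfer_model(num_classes: int) -> nn.Module:
    # Load a ResNet-18 with weights learned on ImageNet (the "big data" stage).
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

    # Freeze the pretrained backbone so its learned representations are kept.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final layer with a new head sized for the small-data task;
    # only this layer's parameters will be updated during training.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model


if __name__ == "__main__":
    model = build_transfer_model(num_classes=5)
    trainable = [p for p in model.parameters() if p.requires_grad]
    optimizer = torch.optim.SGD(trainable, lr=0.01, momentum=0.9)
    criterion = nn.CrossEntropyLoss()

    # Dummy batch standing in for a small labeled dataset.
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 5, (8,))

    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    print(f"one training step done, loss = {loss.item():.4f}")
```

Freezing the backbone and training only the head is the most conservative variant; in practice one may also unfreeze the top few layers when the small dataset is large enough to support it.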

Speaker Bio

張智威 (Edward Y. Chang)
  • 張智威 (Edward Y. Chang) personal website
  • HTC Research & Healthcare / President
  • Edward Chang currently serves as the President of Research and Healthcare (DeepQ) at HTC. Ed's most notable work is co-leading the DeepQ project (with Prof. CK Peng at Harvard), working with a team of physicians, scientists, and engineers to design and develop mobile wireless diagnostic instruments that can help consumers make their own reliable health diagnoses anywhere, at any time. The project entered the Tricorder XPRIZE competition in 2013 alongside 310 other entrants and was awarded second place, with a 1M USD prize, in April 2017. The deep architecture that powers DeepQ also powers Vivepaper, an AR product Ed's team launched in 2016 to support immersive augmented reality experiences.

    Prior to his HTC post, Ed was a director of Google Research for 6.5 years, leading research and development in several areas including scalable machine learning, indoor localization, social networking and search integration, and Web search (spam fighting). His contributions in parallel machine learning algorithms and data-driven deep learning (US patents 8798375 and 9547914) have been recognized through several keynote invitations, and the open-source code his team developed has been collectively downloaded over 30,000 times. His work on IMU calibration/fusion (US patents 8362949, 9135802, 9295027, 9383202, and 9625290) with Project X was first deployed via Google Indoor Maps (see the XINX paper and ASIST/ACM SIGIR/ICADL keynotes) and is now widely used on mobile phones and VR/AR devices. Ed's team also developed the Google Q&A system (codename Confucius), which was launched in over 60 countries.

    Prior to Google, Ed was a full professor of Electrical Engineering at the University of California, Santa Barbara (UCSB), which he joined in 1999 after receiving his PhD from Stanford University. He is a recipient of the NSF CAREER Award and the Google Innovation Award, and an IEEE Fellow for his contributions to scalable machine learning.
