
Machine Learning Seminars

Organizers: Justin Romberg and Le Song

Subscribe to the mailing list: mldm-seminar@lists.gatech.edu


Speakers

Hulda Haraldsdottir, Klaus 1116W, Wednesday, March 16, 2016, 2 pm - 3 pm

Title: Randomised sampling and early diagnosis of Parkinson's disease

Abstract: Computational modelling is increasingly used to tackle problems in biology. Methods for modelling biological systems are built on algorithms developed in Mathematics and Computer Science. In turn, biological applications challenge existing algorithms and drive the development of new ones. Here, we present challenges that arise in constraint-based modelling of human metabolism for application to Parkinson's disease research. We propose to address these challenges with tractable model formulations and scalable algorithms. Randomised sampling of constraint-based models enables unbiased characterisation of the metabolic capabilities of cells and organisms. However, uniform sampling of large-scale models has proven algorithmically challenging. We report recent advances towards efficient sampling of genome-scale human models and outline plans to apply our results to identify novel biomarkers for early diagnosis of Parkinson's disease.

Bio: Hulda Haraldsdottir is a research associate at the University of Luxembourg.

Adam Kalai, Klaus 1116W, Wednesday, Feb 3, 2016, 2 pm - 3 pm

Title: Crowdsourcing and Machine Learning

Abstract: People understand many domains more deeply than machine learning systems. Beyond simply labeling data, how can humans help such systems learn? In particular, we describe active learning algorithms that query people to help uncover the latent representation for a data set. We also discuss how humans can help generate data, choose the questions to ask in the first place, and assist in more complex Artificial Intelligence tasks.

This talk covers collaborations with Serge Belongie, Kamalika Chaudhuri, Ce Liu, Sivan Sabato, Ohad Shamir, Omer Tamuz, Santosh Vempala, you, and James Zou.

Bio: Adam Kalai is a founding member of Microsoft Research New England in Cambridge, MA. Adam Tauman Kalai received his BA (1996) from Harvard, and MA (1998) and PhD (2001) under the supervision of Avrim Blum from CMU. After an NSF postdoctoral fellowship at M.I.T. with Santosh Vempala, he served as an assistant professor at the Toyota Technological Institute at Chicago and then at Georgia Tech. He is now a Principal Researcher at Microsoft Research New England. His honors include an NSF CAREER award and an Alfred P. Sloan fellowship. His research focuses on human computation, machine learning, and algorithms.

Dhruv Batra, Klaus 1116W, Wednesday, Dec 2, 2015, 2 pm - 3 pm

Title: Making Diverse Predictions and Visual Question Answering

Abstract: In the first half, I will describe a line of work in my lab where we have been developing machine perception algorithms that output not just a single best solution, but rather a diverse set of plausible guesses. I will discuss inference in graphical models, connections to submodular maximization over a "doubly-exponential" space, and show results on problems such as semantic segmentation, pose estimation, and prepositional phrase attachment resolution in image captions.

In the second half, I will describe the task of free-form and open-ended Visual Question Answering (VQA). Given an image and a natural language question about the image (e.g., “What kind of store is this?”, “How many people are waiting in the queue?”, “Is it safe to cross the street?”), the machine’s task is to automatically produce an accurate natural language answer (“bakery”, “5”, “Yes”). Answering any possible question about an image is one of the ‘holy grails’ of AI, requiring the integration of vision, language, and reasoning. We have collected and recently released a dataset containing more than 250,000 images (from MS COCO and the Abstract Scenes Dataset), more than 750,000 questions, and roughly 10 million answers. Preliminary versions of this VQA dataset have begun enabling the next generation of AI systems based on deep learning techniques for understanding images (CNNs) and language (RNNs, LSTMs).

Bio: Dhruv Batra is an Assistant Professor at the Bradley Department of Electrical and Computer Engineering at Virginia Tech, where he leads the VT Machine Learning & Perception group. He is a member of the Virginia Center for Autonomous Systems (VaCAS) and the VT Discovery Analytic Center (DAC). His research interests lie at the intersection of machine learning, computer vision, and AI, with a focus on developing scalable algorithms for learning and inference in probabilistic models for holistic scene understanding. He is a recipient of the Carnegie Mellon Dean's Fellowship (2007), two Google Faculty Research Awards (2013, 2015), Virginia Tech Teacher of the Week (2013), the Army Research Office (ARO) Young Investigator Program (YIP) award (2014), the National Science Foundation (NSF) CAREER award (2014), and the Virginia Tech CoE Outstanding New Assistant Professor award (2015). His research is supported by NSF, ARO, ARL, ONR, DARPA, Amazon, Google, Microsoft, and NVIDIA. Research from his lab has been featured in Bloomberg Business, The Boston Globe, and a number of popular press magazines and newspapers. Prior to joining VT, he was a Research Assistant Professor at Toyota Technological Institute at Chicago (TTIC), a philanthropically endowed academic computer science institute located on the campus of the University of Chicago. He received his M.S. and Ph.D. degrees from Carnegie Mellon University in 2007 and 2010, respectively, advised by Tsuhan Chen. In the past, he has held visiting positions at the Machine Learning Department at CMU, CSAIL at MIT, and Microsoft Research.

Stefano Ermon, Klaus 1116E&W, Wednesday, Nov 18, 2015, 2 pm - 3 pm

Title: Decision Making and Inference under Limited Information and High Dimensionality

Abstract: Statistical inference in high-dimensional probabilistic models (i.e., with many variables) is one of the central problems of statistical machine learning and stochastic decision making. To date, only a handful of distinct methods have been developed, most notably (MCMC) sampling, decomposition, and variational methods. In this talk, I will introduce a fundamentally new approach based on random projections and combinatorial optimization. Our approach provides provable guarantees on accuracy, and outperforms traditional methods in a range of domains, in particular those involving combinations of probabilistic and causal dependencies (such as those coming from physical laws) among the variables. This allows for a tighter integration between inductive and deductive reasoning, and offers a range of new modeling opportunities. As an example, I will discuss applications in the emerging field of Computational Sustainability.
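
For readers who want to fix ideas, the sketch below is a toy, brute-force illustration of the random-projection idea in the spirit of the speaker's WISH line of work: the partition function of a discrete model is estimated from optimization queries subject to random parity (XOR) constraints. The function names, the tiny brute-force oracle, and the parameter choices are ours; the method in the talk relies on scalable combinatorial solvers rather than enumeration.

    import itertools
    import numpy as np

    def constrained_max(log_w, n, A, b):
        # Toy "MAP oracle": best log-weight over x in {0,1}^n with A x = b (mod 2).
        best = -np.inf
        for bits in itertools.product([0, 1], repeat=n):
            x = np.array(bits)
            if A.shape[0] == 0 or np.all(A.dot(x) % 2 == b):
                best = max(best, log_w(x))
        return best

    def wish_estimate(log_w, n, T=7, seed=0):
        rng = np.random.default_rng(seed)
        M = []  # M[i]: median optimum under i random parity constraints
        for i in range(n + 1):
            vals = [constrained_max(log_w, n,
                                    rng.integers(0, 2, size=(i, n)),
                                    rng.integers(0, 2, size=i)) for _ in range(T)]
            M.append(np.median(vals))
        w = np.exp(M)
        # Estimate: Z ~ M_0 + sum_i M_{i+1} * 2^i (constant-factor guarantee in the paper).
        return np.log(w[0] + sum(w[i + 1] * 2 ** i for i in range(n)))

    # Sanity check on an independent model, where log Z is known exactly.
    theta = np.array([0.5, -0.2, 0.8, 0.1])
    print(wish_estimate(lambda x: float(theta.dot(x)), 4),
          np.log1p(np.exp(theta)).sum())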

Bio: Stefano Ermon is currently an Assistant Professor in the Department of Computer Science at Stanford University, where he is affiliated with the Artificial Intelligence Laboratory. He completed his PhD in computer science at Cornell in 2015. His research interests include techniques for scalable and accurate inference in graphical models, statistical modeling of data, large-scale combinatorial optimization, and robust decision making under uncertainty; his work is motivated by a range of applications, in particular ones in the emerging field of computational sustainability. Stefano has (co-)authored over 20 publications, and has won several awards, including two Best Student Paper Awards, one Runner-Up Prize, and a McMullen Fellowship.

Anup Rao, Klaus 1116E&W, Wednesday, Nov 14, 2015, 2:00 pm - 3:00 pm

Title: Fast, Provable Algorithms for Isotonic Regression in all Lp-norms

Abstract: Given a directed acyclic graph G, and a set of values y on the vertices, the Isotonic Regression of y is a vector x that respects the partial order described by G, and minimizes ||x−y||, for a specified norm. This talk gives improved algorithms for computing the Isotonic Regression for all weighted Lp-norms with rigorous performance guarantees. Our algorithms are quite practical, and their variants can be implemented to run fast in practice.
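
To fix ideas, here is the classical pool-adjacent-violators algorithm for the simplest special case, the L2 norm on a chain (total) order. The talk's contribution is algorithms for general DAGs and all weighted Lp norms, which this sketch does not attempt; the function name and test values are ours.

    def isotonic_l2_chain(y, w=None):
        # Pool Adjacent Violators: L2 isotonic regression on a chain order.
        w = [1.0] * len(y) if w is None else list(w)
        blocks = []  # each block: [pooled value, total weight, count]
        for yi, wi in zip(y, w):
            blocks.append([yi, wi, 1])
            # Merge adjacent blocks while the monotonicity constraint is violated.
            while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
                v1, w1, c1 = blocks.pop()
                v0, w0, c0 = blocks.pop()
                blocks.append([(w0 * v0 + w1 * v1) / (w0 + w1), w0 + w1, c0 + c1])
        x = []
        for v, _, c in blocks:
            x.extend([v] * c)
        return x

    print(isotonic_l2_chain([3.0, 1.0, 2.0, 5.0, 4.0]))  # [2.0, 2.0, 2.0, 4.5, 4.5]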

Bio: Anup Rao has been a Postdoctoral Fellow in the School of Computer Science at Georgia Tech since August 2015, working with Santosh Vempala. He completed his PhD in mathematics at Yale University and the Yale Institute for Network Science, advised by Daniel Spielman. Prior to that, he received his bachelor's degree in Engineering Physics from IIT Bombay.

Satinder Singh, Klaus 1116E&W, Wednesday, Nov 4, 2015, 2 pm - 3 pm

Title: Reinforcement Learning: From Vision to Action and Back

Abstract: Stemming in part from the great successes of other areas of Machine Learning, there is renewed hope and interest in Reinforcement Learning (RL) from the wider applications communities. Indeed, there has been a recent burst of new and exciting progress in both the theory and practice of RL. I will describe some results from my own group on estimating models from data, on a simple new connection between planning horizon and overfitting in RL, as well as some results on combining RL with Deep Learning in Atari games. I will conclude with a look ahead at what we can do, both as theoreticians and as those who collect data, to accelerate the impact of RL.

Bio: Satinder Singh is a Professor of Computer Science and Engineering as well as the Director of the Artificial Intelligence Laboratory at the University of Michigan, Ann Arbor. He has been the Chief Scientist at Syntek Capital, a venture capital company, a Principal Research Scientist at AT&T Labs, an Assistant Professor of Computer Science at the University of Colorado, Boulder, and a Postdoctoral Fellow at MIT’s Brain and Cognitive Science department. His research focus is on developing the theory, algorithms and practice of building artificial agents that can learn from interaction in complex, dynamic, and uncertain environments, including environments with other agents in them. His main contributions have been to the areas of reinforcement learning, multi-agent learning, and more recently to applications in cognitive science and healthcare. He is a Fellow of the AAAI (Association for the Advancement of Artificial Intelligence), has coauthored more than 150 refereed papers in journals and conferences, and has served on many program committees. He helped cofound RLDM (Reinforcement Learning and Decision Making), a new multidisciplinary meeting that brings together computer scientists, psychologists, neuroscientists, roboticists, control theorists, and others interested in animal and artificial decision making.

Tony Jebara, Klaus 1116E&W, Wednesday, Oct 14, 2015, 1:30 pm - 2:30 pm

Title: Graphical Modeling and Bethe Approximation

Abstract: Graphical models are a marriage of probability theory with graph theory and are useful in many application areas. Unfortunately, inference and learning (two canonical graphical modeling problems) are NP-hard for graphical models with cycles. How can we efficiently tackle these problems in practice? We discuss the Bethe free energy approximation to the intractable partition function. Heuristics like loopy belief propagation (LBP) are often used to optimize the Bethe free energy. Unfortunately, LBP may not converge at all, and if it does, it may not be to a global optimum. To do marginal inference, we instead explore a more principled treatment of the Bethe free energy using discrete optimization. In attractive models we can find the global optimum in polynomial time even though the resulting landscape is non-convex. In general mixed models, we use double-cover methods to bound the global optimum in polynomial time. To do learning, we combine Bethe approximation with a Frank-Wolfe algorithm to circumvent the intractable partition function. This yields a single-loop learning algorithm which is more efficient than previous approaches that interleave iterative inference with iterative parameter updates. We apply these methods to social networks, image data, power networks, brain networks, and financial networks.
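
As background for the talk's starting point, here is a minimal synchronous loopy belief propagation loop for a binary pairwise MRF. This is our own illustrative code, with no damping and no convergence guarantee; the talk's point is precisely that such heuristics can fail where direct optimization of the Bethe free energy does not.

    import numpy as np

    def loopy_bp(unary, pair, edges, iters=100, tol=1e-8):
        # unary[i]: length-2 node potential; pair[(i, j)]: 2x2 edge potential.
        directed = edges + [(j, i) for (i, j) in edges]
        msgs = {e: np.ones(2) / 2 for e in directed}
        nbrs = {}
        for i, j in edges:
            nbrs.setdefault(i, []).append(j)
            nbrs.setdefault(j, []).append(i)
        psi = lambda i, j: pair[(i, j)] if (i, j) in pair else pair[(j, i)].T
        for _ in range(iters):
            new = {}
            for i, j in directed:
                prod = unary[i].copy()
                for k in nbrs[i]:
                    if k != j:
                        prod *= msgs[(k, i)]
                m = psi(i, j).T.dot(prod)  # sum out x_i
                new[(i, j)] = m / m.sum()
            delta = max(abs(new[e] - msgs[e]).max() for e in directed)
            msgs = new
            if delta < tol:
                break
        beliefs = {}
        for i in nbrs:
            b = unary[i].copy()
            for k in nbrs[i]:
                b *= msgs[(k, i)]
            beliefs[i] = b / b.sum()
        return beliefs

    # Example: an attractive 3-cycle with one biased node.
    unary = {0: np.array([2.0, 1.0]), 1: np.array([1.0, 1.0]), 2: np.array([1.0, 1.0])}
    edges = [(0, 1), (1, 2), (0, 2)]
    pair = {e: np.array([[2.0, 1.0], [1.0, 2.0]]) for e in edges}
    print(loopy_bp(unary, pair, edges))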

Bio: Tony Jebara is Associate Professor of Computer Science at Columbia University. He chairs the Center on Foundations of Data Science as well as directs the Columbia Machine Learning Laboratory. His research intersects computer science and statistics to develop new frameworks for learning from data with applications in social networks, spatio-temporal data, vision and text. Jebara has founded and advised several startups including Sense Networks (acquired by yp.com), Evidation Health, Agolo, Ufora, and Bookt (acquired by RealPage NASDAQ:RP). He has published over 100 peer-reviewed papers in conferences, workshops and journals including NIPS, ICML, UAI, COLT, JMLR, CVPR, ICCV, and AISTATS. He is the author of the book Machine Learning: Discriminative and Generative and co-inventor on multiple patents in vision, learning and spatio-temporal modeling. In 2004, Jebara received the CAREER award from the National Science Foundation. His work was recognized with a best paper award at the 26th International Conference on Machine Learning, a best student paper award at the 20th International Conference on Machine Learning, as well as an outstanding contribution award from the Pattern Recognition Society in 2001. Jebara's research has been featured on television (ABC, BBC, New York One, TechTV, etc.) as well as in the popular press (The New York Times, Slashdot, Wired, Businessweek, IEEE Spectrum, etc.). He obtained his PhD in 2002 from MIT. Esquire magazine named him one of their Best and Brightest of 2008. Jebara has taught machine learning to a total of about 2000 students (through real physical classes). Jebara was a Program Chair for the 31st International Conference on Machine Learning (ICML) in 2014. He was an Action Editor for the Journal of Machine Learning Research from 2009 to 2013, Associate Editor of Machine Learning from 2007 to 2011, and Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence from 2010 to 2012. In 2006, he co-founded the NYAS Machine Learning Symposium and has served on its steering committee since then.

Nikolay Laptev, Klaus 1116E&W, Wednesday, Oct 7, 2015, 2:00 pm - 3:00 pm

Title: Generic and Scalable Framework for Automated Time-series Anomaly Detection

Abstract: This talk introduces a generic and scalable framework for automated anomaly detection on large-scale time-series data. Early detection of anomalies plays a key role in maintaining the consistency of a person's data and protects corporations against malicious attackers. Current state-of-the-art anomaly detection approaches suffer from scalability issues, use-case restrictions, difficulty of use, and a large number of false positives. Our system at Yahoo, EGADS, uses a collection of anomaly detection and forecasting models with an anomaly filtering layer for accurate and scalable anomaly detection on time-series data. We compare our approach against other anomaly detection systems on real and synthetic data with varying time-series characteristics. We found that our framework allows for a 50-60% improvement in precision and recall for a variety of use-cases. Both the data and the framework are being open-sourced. The open-sourcing of the data, in particular, represents the first effort of its kind to establish a standard benchmark for anomaly detection.
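
The core pattern here, model the expected value of the series and then filter on the residuals, can be sketched in a few lines. Below is our own minimal stand-in, using a trailing moving average as the forecaster and a robust z-score as the filter; EGADS itself plugs in a whole collection of forecasting models and anomaly filters, and the names and thresholds below are illustrative.

    import numpy as np

    def flag_anomalies(series, window=24, k=3.0):
        x = np.asarray(series, dtype=float)
        flagged = []
        for t in range(window, len(x)):
            hist = x[t - window:t]
            resid = x[t] - hist.mean()          # forecast error (moving-average model)
            mad = np.median(np.abs(hist - np.median(hist)))
            scale = 1.4826 * mad + 1e-9         # robust estimate of residual spread
            if abs(resid) > k * scale:          # anomaly filter: robust z-score test
                flagged.append(t)
        return flagged

    rng = np.random.default_rng(0)
    y = rng.normal(0.0, 0.1, size=500)
    y[300] += 2.0                               # inject a spike
    print(flag_anomalies(y))                    # should include t=300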

Bio: Dr. Laptev completed his PhD in Computer Science at UCLA. The focus of his research is on Big Data machine learning and system design. Besides Big Data machine learning, during his PhD years Nikolay also conducted research on approximation approaches with an error guarantee for general machine learning algorithms. At Yahoo! Labs he conducts research on system design and machine learning over massive data-streams and stored data.

Guy Lebanon, Klaus 1116E&W, Friday, September 25, 2015, 2:00 pm - 3:00 pm

Title: Personalizing the LinkedIn Feed

Abstract: LinkedIn dynamically delivers update activities from a user’s network to more than 300 million members in a personalized feed that ranks activities according to their relevance to the user. This presentation describes the feed ranking system and the challenges that we face.

Bio: Guy Lebanon currently leads the feed infrastructure and relevance team at LinkedIn. Before coming to LinkedIn he was an advisor to an SVP and a senior manager at Amazon, where he led the machine learning science team at Amazon's main campus in Seattle, WA. Prior to that, Guy was a tenured professor at the Georgia Institute of Technology and a scientist at Google and Yahoo. His main research areas are machine learning and data science. Guy received his PhD from Carnegie Mellon University and BA and MS degrees from the Technion - Israel Institute of Technology. Dr. Lebanon has authored over 60 refereed publications. He is an action editor of the Journal of Machine Learning Research, was the program chair of the 2012 ACM CIKM Conference, and will be the conference co-chair of AI & Statistics (AISTATS 2015). He received the NSF CAREER Award, the WWW best student paper award, the ICML best paper runner-up award, the Yahoo Faculty Research and Engagement Award, and is a Siebel Scholar.

Richard Peng, Klaus 1116E&W, Monday, August 31, 2015, 1:00 pm - 2:00 pm

Title: Algorithm Frameworks Based on Structure Preserving Sampling

Abstract: Sampling is a widely used algorithmic tool: running routines on a small representative subset of the data often leads to speedups while preserving accuracy. Recent works on algorithmic frameworks that relied on sampling graphs and matrices highlighted several connections between graph theory, statistics, optimization, and functional analysis. This talk will describe some key ideas that emerged from these connections:

* Sampling as a generalized divide-and-conquer paradigm.
* Implicit sampling without constructing the larger data set, and its algorithmic applications.
* What does sampling need to preserve? What can sampling preserve?

These ideas have applications in solvers for structured linear systems, network flow algorithms, input-sparsity time numerical routines, coresets, and dictionary learning.

Bio: Richard Peng is an assistant professor in SCS at the Georgia Institute of Technology. His research interests are in the design, analysis, and implementation of efficient algorithms. Prior to coming to Georgia Tech, he received his PhD in Computer Science at CMU and was an Instructor in Applied Mathematics at MIT for two years. His thesis, Algorithm Design Using Spectral Graph Theory, won the 2012/2013 CMU SCS Dissertation Award.

Brian Ziebart, Klaus 1116W, Friday, March 27, 2015, 2:00 pm - 3:00 pm

Title: Demystifying Active Learning with Adversarial Estimation

Abstract: Active learning promises to significantly reduce the labeling burden of supervised machine learning methods, but often doesn't deliver in practice. In fact, standard active learning techniques frequently provide worse performance than passive learning (i.e., data points labeled at random). This talk will illuminate the fundamental issue hindering active learning methods, present a new approach using adversarial estimation for addressing it, and demonstrate the benefits of the approach on classification tasks. This is joint work with Anqi Liu and Lev Reyzin from two recent papers: "Robust Classification Under Sample Selection Bias" (NIPS 2014) and "Shift-Pessimistic Active Learning using Robust Bias-Aware Prediction" (AAAI 2015).

Bio: Brian Ziebart is an Assistant Professor in the Department of Computer Science at the University of Illinois at Chicago. He received his PhD from Carnegie Mellon University where he was also a postdoctoral fellow. His research interests include machine learning, decision theory, game theory, robotics, and assistive technologies.

Elad Hazan, MiRC 102 A & B, Wednesday, March 25, 2015, 2:00 pm - 3:00 pm

Title: Projection-free Optimization and Online Learning

Abstract: Modern large data sets prohibit any super-linear time operations. This motivates the study of iterative optimization algorithms with low complexity per iteration. The computational bottleneck in applying state-of-the-art iterative methods is often the so-called "projection step". We consider projection-free optimization/learning that replaces projections with more efficient linear optimization steps. We describe the first linearly converging algorithm of this type for polyhedral sets and how it gives rise to optimal-rate stochastic optimization and online learning algorithms.
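
The projection-free template is the conditional gradient (Frank-Wolfe) method. Below is a minimal sketch of the vanilla method with the standard 2/(t+2) step size; the talk concerns linearly converging variants for polyhedral sets, which this sketch does not implement, and the example problem is our own.

    import numpy as np

    def frank_wolfe(grad, lmo, x0, iters=200):
        # Conditional gradient: each step calls a linear optimization oracle
        # (LMO) over the feasible set instead of computing a projection.
        x = x0
        for t in range(iters):
            s = lmo(grad(x))            # argmin over the set of a linear function
            gamma = 2.0 / (t + 2.0)     # standard step size
            x = (1 - gamma) * x + gamma * s
        return x

    # Example: least squares over the probability simplex, whose LMO is just
    # "pick the best vertex", i.e. a coordinate vector -- no projection needed.
    rng = np.random.default_rng(0)
    A = rng.normal(size=(30, 10))
    b = A.dot(np.full(10, 0.1))
    grad = lambda x: 2 * A.T.dot(A.dot(x) - b)
    lmo = lambda g: np.eye(10)[np.argmin(g)]
    x = frank_wolfe(grad, lmo, np.eye(10)[0])
    print(np.linalg.norm(A.dot(x) - b))  # small residual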

Bio: Elad Hazan is a professor at Princeton University. He is interested in designing efficient algorithms for fundamental problems in machine learning and optimization. Most of his research can be classified under "machine learning", though he is also interested in convex optimization, game theory and computational complexity.

Joseph Gonzalez, MiRC 102 A&B, Friday, March 13, 2015, 2pm--3pm

Title: Learning Systems: Systems and Abstractions for Large-Scale Machine Learning

Abstract: The challenges of advanced analytics and big data cannot be addressed by developing new machine learning algorithms or new large-scale computing systems in isolation. Some of the recent advances in machine learning have come from new systems that can apply complex models to big data problems. Likewise, some of the recent advances in systems have exploited fundamental properties in machine learning to reach new points in the system design space. By considering the design of scalable learning systems from both perspectives, we can address bigger problems, expose new opportunities in algorithm and system design, and define the new fundamental abstractions that will accelerate research in these complementary fields. In this talk, I will present my research in learning systems spanning the design of efficient inference algorithms, the development of graph processing systems, and the unification of graphs and unstructured data. I will describe how the study of graphical model inference and power-law graph structure shaped the common abstractions in contemporary graph processing systems, and how new insights in system design enabled order-of-magnitude performance gains over general purpose data-processing systems. I will then discuss how lessons learned in the context of specialized graph-processing systems can be lifted to more general data-processing systems, enabling users to view data as graphs and tables interchangeably while preserving the performance gains of specialized systems. Finally, I will present a new direction for the design of learning systems that looks beyond traditional analytics and model fitting to the entire machine learning life cycle spanning model training, serving, and management.

Bio: Joseph Gonzalez is a postdoc in the UC Berkeley AMPLab and a co-founder of GraphLab. Joseph received his Ph.D. from the Machine Learning Department at Carnegie Mellon University where he worked with Carlos Guestrin on parallel algorithms and abstractions for scalable probabilistic machine learning. Joseph is a recipient of the AT&T Labs Graduate Fellowship and the NSF Graduate Research Fellowship.

Yan Liu, KACB 1116W, Friday, March 6, 2015, 2pm--3pm

Title: Learning and Mining in Large-scale Time Series Data

Abstract: Many emerging applications of machine learning involve time series and spatio- temporal data. In this talk, I will discuss a collection of machine learning approaches to effectively analyzing and modeling large-scale time series and spatio-temporal data, including temporal causal models, sparse extreme-value models, and fast tensor-based forecasting models. Experiment results will be shown to demonstrate the effectiveness of our models in climate science and healthcare applications.

Bio: Yan Liu has been an assistant professor in the Computer Science Department at the University of Southern California since 2010. Before that, she was a Research Staff Member at IBM Research. She received her M.Sc. and Ph.D. degrees from Carnegie Mellon University in 2004 and 2007. Her research interests include developing scalable machine learning and data mining algorithms with applications to social media analysis, computational biology, climate modeling and healthcare analytics. She has received several awards, including the NSF CAREER Award, the Okawa Foundation Research Award, an ACM Dissertation Award Honorable Mention, the Best Paper Award at the SIAM Data Mining Conference, and a Yahoo! Faculty Award, and has won several data mining competitions, such as the KDD Cup and the INFORMS data mining competition.

Robert Nowak, 2015

Title: TBD

Abstract: TBD

Bio: Robert Nowak is the McFarland-Bascom Professor in Engineering at the University of Wisconsin-Madison, where his research focuses on signal processing, machine learning, optimization, and statistics. He is a professor in Electrical and Computer Engineering and is also affiliated with the departments of Computer Sciences and Biomedical Engineering. He is also a Fellow of the Wisconsin Institute for Discovery and co-organizer of the SILO seminar series.

Michael Kearns, Scheller College of Business, Room 100, January 29, 2015, 11am--12pm

Title: Games, Networks, and People

Abstract: Beginning with the introduction of graphical games and related models, there is now a rich body of algorithmic connections between probabilistic inference, game theory and microeconomics. Strategic analogues of belief propagation and other inference techniques have been developed for the computation of Nash, correlated and market equilibria, and have played a significant role in the evolution of algorithmic game theory over the past decade. There are also important points of departure between probabilistic and strategic graphical models, perhaps most notably that in the latter, vertices are not random variables but self-interested humans or organizations. It is thus natural to wonder how social network structures might influence equilibrium outcomes such as social welfare or the relative wealth and power of individuals. Such questions lead naturally to human-subject experiments on strategic interaction in social networks.

Bio: Michael Kearns is a Professor and National Center Chair in the Department of Computer and Information Science of the University of Pennsylvania. He is also the Founding Director of the Warren Center for Network and Data Sciences. His research interests include topics in machine learning, algorithmic game theory, social networks, computational finance, and artificial intelligence. He often examines problems in these areas using methods and models from theoretical computer science and related disciplines. While the majority of his work is mathematical in nature, he has also participated in a variety of empirical and experimental projects, including applications of machine learning to finance, spoken dialogue systems, and other areas. Most recently, he has been conducting human-subject experiments on strategic and economic interaction in social networks.

Vitaly Feldman, Klaus 1116W, January 26, 2015, 1pm--2pm

Title: Preserving Statistical Validity in Adaptive Data Analysis

Abstract: A great deal of effort has been devoted to reducing the risk of spurious scientific discoveries resulting from misapplication of statistical data analysis. Existing approaches to ensuring validity of inferences drawn from data assume a fixed collection of hypotheses to be tested, or analysis to be applied, selected non-adaptively before the data are examined. In contrast, the practice of data analysis in scientific research is by its nature an adaptive process, in which new hypotheses are generated and new analyses are performed on the basis of data exploration and observed outcomes on the same data. We demonstrate a new approach for addressing the challenges of adaptivity based on insights from private data analysis. As an application we show how to safely reuse a holdout set a great many times without undermining its validation power, even when hypotheses, models, and algorithms are chosen adaptively.
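
One concrete mechanism from the underlying paper (with Dwork, Hardt, Pitassi, Reingold, and Roth) is Thresholdout, which answers adaptively chosen queries about the holdout set through a noisy filter. A minimal sketch, with illustrative parameter values of our own choosing:

    import numpy as np

    def thresholdout(phi_train, phi_holdout, threshold=0.04, sigma=0.01, rng=None):
        # phi_train / phi_holdout: the query's values on each data point.
        # Answer from the training set unless it disagrees with the holdout;
        # only then consume the holdout, and answer with added noise.
        rng = rng if rng is not None else np.random.default_rng()
        t, h = np.mean(phi_train), np.mean(phi_holdout)
        if abs(t - h) < threshold + rng.normal(0, sigma):
            return t                     # holdout untouched: reusable later
        return h + rng.normal(0, sigma)  # noisy holdout answer

Because most queries are answered from the training set alone, and holdout answers are noised, the same holdout can validate a great many adaptively chosen queries before overfitting to it becomes a concern.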

Bio: Vitaly Feldman is a research scientist in the CS Theory Group at the IBM Almaden Research Center. Before joining IBM in August 2007, he spent 5 very enjoyable years at Harvard University as a PhD student advised by Prof. Leslie Valiant and as a postdoc. Previously he studied at the Technion, from which he received BA and MSc degrees in CS (advised by Nader Bshouty), and worked at IBM Research in Haifa.

Arthur Gretton, Dec 5, Friday, 2014 at 2-3pm, Klaus 1116E & W

Title: Kernel nonparametric tests of homogeneity, independence, and multi-variable interaction

Abstract: We consider three nonparametric hypothesis testing problems: (1) Given samples from distributions p and q, a homogeneity test determines whether to accept or reject p=q; (2) Given a joint distribution p_xy over random variables x and y, an independence test investigates whether p_xy = p_x p_y; (3) Given a joint distribution over several variables, we may test whether there exists a factorization (e.g., P_xyz = P_xy P_z, or for the case of total independence, P_xyz = P_x P_y P_z). The final test (3) is of particular interest in fitting directed graphical models, as it may be used to detect cases where two independent causes individually have weak influence on a third dependent variable, but their combined effect has a strong influence, even when these variables have high dimension. We present nonparametric tests for the three cases described, based on distances between embeddings of probability measures in reproducing kernel Hilbert spaces (RKHS), which constitute the test statistics (e.g., for independence, the distance is between the embedding of the joint and that of the product of the marginals). The tests benefit from decades of machine learning research on kernels for various domains, and thus apply to distributions on high-dimensional vectors, images, strings, graphs, groups, and semigroups, among others. The energy distance and distance covariance statistics are particular instances of these RKHS statistics. Finally, the tests can be applied to time series data, using a wild bootstrap procedure to approximate the null hypothesis.
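
For the homogeneity test (case 1), the statistic is the RKHS distance between mean embeddings, the maximum mean discrepancy (MMD). Here is a short sketch of the standard unbiased estimator with a Gaussian kernel; bandwidth selection and the permutation-based calibration of the null distribution are elided, and the sample sizes below are illustrative.

    import numpy as np

    def mmd2_unbiased(X, Y, bandwidth=1.0):
        # Unbiased estimate of MMD^2 between samples X and Y (rows = points).
        k = lambda A, B: np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
                                / (2 * bandwidth ** 2))
        m, n = len(X), len(Y)
        Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
        return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
                + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
                - 2 * Kxy.mean())

    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1, size=(200, 2))
    Y = rng.normal(0.5, 1, size=(200, 2))
    print(mmd2_unbiased(X[:100], X[100:]))  # near 0: same distribution
    print(mmd2_unbiased(X[:100], Y[:100]))  # clearly positive: p != q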

Bio: Arthur Gretton is a Reader (Associate Professor) with the Gatsby Computational Neuroscience Unit, CSML, UCL, which he joined in 2010. He received degrees in physics and systems engineering from the Australian National University, and a PhD with Microsoft Research and the Signal Processing and Communications Laboratory at the University of Cambridge. He worked from 2002-2012 at the MPI for Biological Cybernetics, and from 2009-2010 at the Machine Learning Department, Carnegie Mellon University. Arthur's research interests include machine learning, kernel methods, statistical learning theory, nonparametric hypothesis testing, blind source separation, Gaussian processes, and non-parametric techniques for neural data analysis. He has been an associate editor at IEEE Transactions on Pattern Analysis and Machine Intelligence from 2009 to 2013, an Action Editor for JMLR since April 2013, a member of the NIPS Program Committee in 2008 and 2009, an Area Chair for ICML in 2011 and 2012, and a member of the COLT Program Committee in 2013.

Geoff Hinton, Nov 19, Wednesday, 2014 at 1:30-2:30pm, Klaus 1116

Title: Deep Learning

Abstract: I will give a brief history of deep learning, explaining what it is, what kinds of tasks it should be good for, and why it was largely abandoned in the 1990s. I will then describe how ideas from statistical physics were used to make deep learning work much better on small datasets. Finally I will describe how deep learning is now used by Google for speech recognition and object recognition and how it may soon be used for machine translation.

Bio: Geoffrey Hinton received his BA in experimental psychology from Cambridge in 1970 and his PhD in Artificial Intelligence from Edinburgh in 1978. He did postdoctoral work at Sussex University and the University of California San Diego and spent five years as a faculty member in the Computer Science department at Carnegie-Mellon University. He then became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto. He spent three years from 1998 until 2001 setting up the Gatsby Computational Neuroscience Unit at University College London and then returned to the University of Toronto where he is a University Professor. He is the director of the program on "Neural Computation and Adaptive Perception" which is funded by the Canadian Institute for Advanced Research. Geoffrey Hinton is a fellow of the Royal Society, the Royal Society of Canada, and the Association for the Advancement of Artificial Intelligence. He is an honorary foreign member of the American Academy of Arts and Sciences, and a former president of the Cognitive Science Society. He has received honorary doctorates from the University of Edinburgh and the University of Sussex. He was awarded the first David E. Rumelhart prize (2001), the IJCAI award for research excellence (2005), the IEEE Neural Network Pioneer award (1998), the ITAC/NSERC award for contributions to information technology (1992), the Killam prize for Engineering (2012) and the NSERC Herzberg Gold Medal (2010) which is Canada's top award in Science and Engineering. Geoffrey Hinton designs machine learning algorithms. His aim is to discover a learning procedure that is efficient at finding complex structure in large, high-dimensional datasets and to show that this is how the brain learns to see. He was one of the researchers who introduced the back-propagation algorithm that has been widely used for practical applications. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning, products of experts and deep belief nets. His current main interest is in unsupervised learning procedures for multi-layer neural networks with rich sensory input.

Ling Liu, Wednesday, Nov 12, 2014 at 2-3pm, Klaus 1116W

Title: Data Analytics as a Service: Dream or Reality

Abstract: Advances in cloud computing and big data technologies and the explosion of digital content continue to fuel big data research and development in computer science as well as other science and engineering disciplines. The volume, velocity and variety of big data have raised interesting technical challenges in both the data mining/machine learning fields and the databases/systems fields, demanding innovations in data analytics models and algorithms as well as innovations in building data analysis tools and systems with auto-scaling and auto-tuning capabilities. In this talk, I will give an overview of the big data research activities conducted in DiSL and discuss research challenges for enabling data analytics as a service.

Bio: Ling Liu is a Professor in the School of Computer Science at Georgia Institute of Technology. She directs the research programs in the Distributed Data Intensive Systems Lab (DiSL), examining various aspects of large-scale data-intensive systems, including performance, availability, security and privacy. Prof. Liu is an internationally recognized expert in the areas of database systems, distributed computing, Internet data management, and service-oriented computing. She has published over 300 international journal and conference articles and is a recipient of best paper awards from a number of top venues, including ICDCS 2003, WWW 2004, the 2005 Pat Goldberg Memorial Best Paper Award, IEEE Cloud 2012, and IEEE ICWS 2013. Prof. Liu is also a recipient of the IEEE Computer Society Technical Achievement Award in 2012 and an Outstanding Doctoral Thesis Advisor award from Georgia Institute of Technology. She is a frequent keynote speaker at conferences and workshops on topics related to big data technologies. In addition to service as general chair and PC chair of numerous IEEE and ACM conferences in the data engineering, very large databases and distributed computing fields, Prof. Liu has served on the editorial boards of over a dozen international journals. Currently she is the editor-in-chief of IEEE Transactions on Services Computing, and serves on the editorial boards of a half dozen international journals, including ACM Transactions on the Web (TWEB), ACM Transactions on Internet Technology (TOIT), the Journal of Parallel and Distributed Computing (JPDC), and Distributed and Parallel Databases (Springer).

Inderjit Dhillon, Friday, Nov 7, 2014 at 2-3pm, Klaus 1116E

Title: Divide and Conquer Methods for Large-Scale Data Analysis

Abstract: Data is being generated at a tremendous rate in modern applications as diverse as internet applications, genomics, health care, energy management and social network analysis. There is a great need for developing scalable methods for analyzing these data sets. In this talk, I will present some new Divide-and-Conquer algorithms for various challenging problems in large-scale data analysis. Divide-and-conquer is a common paradigm that has been widely used in computer science and scientific computing, for example, in sorting, scalable computation of n-body interactions via the fast multipole method, and eigenvalue computations of symmetric matrices. However, this paradigm has not been widely employed in problems that arise in machine learning. I will introduce some recent divide-and-conquer methods that we have developed for three representative problems: (i) classification using kernel support vector machines, (ii) dimensionality reduction for large-scale social network analysis, and (iii) structure learning of graphical models. For each of these problems, we develop specialized algorithms, in particular, tailored ways of "dividing" the problem into subproblems, solving the subproblems, and finally "conquering" them. It should be noted that the subproblem solutions yield localized models for analyzing the data; an intriguing question is whether the hierarchy of localized models can be combined to yield models that are not only easier to compute, but are also statistically more robust. This is joint work with Cho-Jui Hsieh, Donghyuk Shin and Si Si.
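
As a rough illustration of the "divide" step for problem (i), kernel SVMs: cluster the data, train an independent local SVM per cluster, and route each test point to its cluster's model. This sketch (our own, using scikit-learn) stops at the local models; the method in the talk also "conquers" by using the local solutions to build a refined global solution.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.datasets import make_classification
    from sklearn.svm import SVC

    class DividedKernelSVM:
        def __init__(self, n_parts=4):
            self.km = KMeans(n_clusters=n_parts, n_init=10, random_state=0)
            self.models = {}

        def fit(self, X, y):
            labels = self.km.fit_predict(X)          # divide
            for c in np.unique(labels):
                Xc, yc = X[labels == c], y[labels == c]
                # Degenerate one-class clusters just remember their label.
                self.models[c] = (yc[0] if len(np.unique(yc)) == 1
                                  else SVC(kernel="rbf").fit(Xc, yc))
            return self

        def predict(self, X):
            labels = self.km.predict(X)              # route to local model
            return np.array([self.models[c].predict(x[None, :])[0]
                             if hasattr(self.models[c], "predict") else self.models[c]
                             for c, x in zip(labels, X)])

    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
    print((DividedKernelSVM(4).fit(X, y).predict(X) == y).mean())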

Bio: Inderjit Dhillon is the Gottesman Family Centennial Professor of Computer Science and Mathematics at UT Austin, where he is also the Director of the ICES Center for Big Data Analytics. His main research interests are in big data, machine learning, network analysis, linear algebra and optimization. He received his B.Tech. degree from IIT Bombay, and Ph.D. from UC Berkeley. Inderjit is an IEEE Fellow as well as a SIAM Fellow. Additionally, he has received several prestigious awards, including the ICES Distinguished Research Award in 2013, the SIAM Outstanding Paper Prize in 2011, the Moncrief Grand Challenge Award in 2010, the SIAM Linear Algebra Prize in 2006, the University Research Excellence Award in 2005, and the NSF Career Award in 2001. He has published over 100 journal and conference papers, and has served on the Editorial Board of the Journal of Machine Learning Research, the IEEE Transactions on Pattern Analysis and Machine Intelligence, Foundations and Trends in Machine Learning and the SIAM Journal for Matrix Analysis and Applications.

Joydeep Ghosh, Friday, Oct 24, 2014 at 2-3pm, Klaus 1116E

Title: Predictive Healthcare Analytics under Privacy Constraints

Abstract: The move to electronic health records is producing a wealth of information, which has the potential to provide unprecedented insights into the cause, prevention, treatment and management of illnesses. Analyses of such data also promise numerous opportunities for much more effective and efficient delivery of healthcare. However, (valid) privacy concerns and restrictions prevent unfettered access to such data. In this talk I will first provide a perspective on the privacy vs. utility trade-off in the context of healthcare analytics. I will then outline two approaches that we have recently and successfully taken to provide privacy-aware predictive modeling with little degradation in model quality despite restrictions on what can be shared or analyzed. The first approach focuses on extracting predictive value from data that has been aggregated at various levels due to privacy concerns, while the second introduces a novel, non-parametric sampler that can generate "realistic but not real" data given a dataset that cannot be shared as is.
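
The flavor of the second approach can be conveyed with a smoothed bootstrap: resample real records and perturb them with kernel-scaled noise, so the released rows are plausible but never actual records. This toy sketch is ours (numeric features only, illustrative bandwidth) and is far simpler than the sampler described in the talk.

    import numpy as np

    def realistic_but_not_real(X, n, bandwidth=0.1, seed=0):
        # Smoothed bootstrap: resample rows, then add kernel-scaled noise so
        # that no generated row coincides with a real record.
        rng = np.random.default_rng(seed)
        rows = X[rng.integers(0, len(X), size=n)]
        return rows + rng.normal(0.0, bandwidth * X.std(axis=0), size=rows.shape)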

Bio: Joydeep Ghosh is currently the Schlumberger Centennial Chair Professor of Electrical and Computer Engineering at the University of Texas, Austin. He joined the UT-Austin faculty in 1988 after being educated at IIT Kanpur (B.Tech. '83) and the University of Southern California (Ph.D. '88). He is the founder-director of IDEAL (Intelligent Data Exploration and Analysis Lab) and a Fellow of the IEEE. Dr. Ghosh has taught graduate courses on data mining and web analytics every year to both UT students and to industry, for over a decade. He was voted "Best Professor" in the Software Engineering Executive Education Program at UT. Dr. Ghosh's research interests lie primarily in data mining and web mining, predictive modeling / predictive analytics, machine learning approaches such as adaptive multi-learner systems, and their applications to a wide variety of complex real-world problems. He has published more than 300 refereed papers and 50 book chapters, and co-edited over 20 books. His research has been supported by the NSF, Yahoo!, Google, ONR, ARO, AFOSR, Intel, IBM, and several others. He has received 14 Best Paper Awards over the years, including the 2005 Best Research Paper Award across UT and the 1992 Darlington Award given by the IEEE Circuits and Systems Society for the overall Best Paper in the areas of CAS/CAD. Dr. Ghosh has been a plenary/keynote speaker on several occasions such as MICAI'12, KDIR'10, ISIT'08, ANNIE'06 and MCS 2002, and has widely lectured on intelligent analysis of large-scale data. He served as the Conference Co-Chair or Program Co-Chair for several top data mining oriented conferences, including SDM'13, SDM'12, KDD 2011, CIDM'07, ICPR'08 (Pattern Recognition Track) and SDM'06. He was the Conference Co-Chair for Artificial Neural Networks in Engineering (ANNIE) '93 to '96 and '99 to '03 and the founding chair of the Data Mining Technical Committee of the IEEE Computational Intelligence Society. He has also co-organized workshops on high dimensional clustering, Web Analytics, Web Mining and Parallel/Distributed Knowledge Discovery.

Hadi Esmaeilzadeh, Wednesday, Sept 17, 2014 at 2-3pm, Klaus 1116W

Title: Trainable Approximate Accelerators: Learning to Compute Fast

Abstract: As our Dark Silicon study shows, the benefits from continuous transistor scaling are diminishing due to energy and power constraints. Further, our results show that the current paradigm of general-purpose processors, multicore processors, will fall significantly short of historical trends of performance improvement in the next decade. These shortcomings may drastically curtail the computing industry's ability to continuously deliver new capabilities, the backbone of its economic ecosystem. To this end, radical departures from conventional approaches are necessary to provide continued performance and efficiency gains in general-purpose computing. In this talk, I will present our work on using machine learning algorithms to accelerate conventional code. The core idea is to use machine learning algorithms to learn how a region of code behaves. After learning, we replace the original region of code with dedicated hardware that accelerates the machine learning algorithm. This approach enabled us to design the first-ever analog-digital general-purpose processor, which delivers a 24x improvement in energy-delay product for a diverse set of applications. While this approach accelerates compute-intensive applications, it does not mitigate the long latency of memory accesses. I will talk about a new technique that predicts the values of load instructions and effectively tackles the memory bottleneck. Our work shows significant gains in performance and efficiency when machine learning algorithms are used for code or memory approximation.
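
A software-only sketch of the core idea: observe a hot region of code as a black box, fit a small neural network to its input-output behavior, and let the program call the (in the talk, hardware-accelerated) network instead of the original code. The target function, network size, and training setup below are all illustrative assumptions of ours.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def hot_region(a, b):
        # Stand-in for an expensive, approximable region of code.
        return np.sin(a) * np.log1p(b * b)

    rng = np.random.default_rng(0)
    inputs = rng.uniform(-3, 3, size=(5000, 2))
    targets = hot_region(inputs[:, 0], inputs[:, 1])

    # "Learn how the region behaves" from observed input-output pairs.
    surrogate = MLPRegressor(hidden_layer_sizes=(8, 8), max_iter=3000,
                             random_state=0).fit(inputs, targets)

    # The accelerated program would invoke the network in place of the code.
    x = np.array([[1.0, 2.0]])
    print(surrogate.predict(x)[0], hot_region(1.0, 2.0))  # approximate vs exact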

Bio: Hadi is the Catherine M. and James E. Allchin Early Career Professor of Computer Science at Georgia Institute of Technology. His dissertation received the 2013 William Chan Memorial Dissertation Award. He founded and directs the Alternative Computing Technologies (ACT) Lab, where he and his students are working on developing new technologies and cross-stack solutions to improve the performance and energy efficiency of computer systems for emerging applications. Hadi received his Ph.D. from the Department of Computer Science and Engineering at the University of Washington in 2013. He also has a Master's degree in Computer Science from The University of Texas at Austin (2010), and a Master's degree in Electrical and Computer Engineering from the University of Tehran (2005). His research has been recognized by three Communications of the ACM Research Highlights and three IEEE Micro Top Picks. His work on dark silicon has also been profiled in The New York Times.

Ping Ma, Thursday, Sept 11, 2014 at 11am, Advisory Board Room, 402 Groseclose Bldg

Title: Leveraging in Big Data Regression

Abstract: Advances in science and technology in the past few decades have led to big data challenges across a variety of fields. Extraction of useful information and knowledge from big data has become a daunting challenge for the science community and society at large. Tackling this challenge requires major breakthroughs in efficient computational and statistical approaches to big data analysis. In this talk, I will present some leveraging algorithms, which make a key contribution to resolving this grand challenge. In these algorithms, by sampling a very small representative sub-dataset using smart algorithms, one can effectively extract relevant information about vast data sets from the small sub-dataset. Such algorithms are scalable to big data. These efforts allow pervasive access to big data analytics, especially for those who cannot directly use supercomputers. More importantly, these algorithms enable a great many ordinary users to analyze big data using tablet computers.
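
A minimal sketch of one such leveraging algorithm for least squares, in the spirit of the speaker's work on algorithmic leveraging (variable names and the reweighting choice are ours): sample rows with probability proportional to their statistical leverage scores, then solve a reweighted regression on the small subsample.

    import numpy as np

    def leveraging_ols(X, y, r, seed=0):
        rng = np.random.default_rng(seed)
        Q, _ = np.linalg.qr(X)              # thin QR: row norms of Q give leverages
        lev = (Q ** 2).sum(axis=1)
        p = lev / lev.sum()
        idx = rng.choice(len(X), size=r, replace=True, p=p)
        w = 1.0 / np.sqrt(r * p[idx])       # importance weights for the subsample
        beta, *_ = np.linalg.lstsq(w[:, None] * X[idx], w * y[idx], rcond=None)
        return beta

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100000, 5))
    beta_true = np.arange(1.0, 6.0)
    y = X.dot(beta_true) + rng.normal(size=100000)
    print(leveraging_ols(X, y, r=500))      # close to [1, 2, 3, 4, 5]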

Bio: Ping Ma is a Professor in the Department of Statistics, University of Georgia. He is interested in nonparametric methods and inverse problems, and their applications to big data and computational biology problems.