Plenary Speakers

Stephen P. Boyd - Stanford University

Convex Optimization with Abstract Linear Operators

Tuesday 22 March 2016, 10.20 - 11.10 (Grand Ballroom II & III)

Abstract:

Domain-specific languages (DSLs) for convex optimization, such as CVX and YALMIP and the more recent CVXPY and Convex.jl, are very widely used to rapidly develop, prototype, and solve convex optimization problems of modest size, say, tens of thousands of variables, with linear operators described as sparse matrices. These systems allow a user to specify a convex optimization problem in a very succinct and natural way, and then solve the problem with great reliability, with no algorithm parameter tuning, and with only a modest performance loss compared to a custom solver hand-designed and tuned for the problem. In this talk we describe recent progress toward the goal of extending these DSLs to handle large-scale problems that involve linear operators given as abstract operators with fast transforms, such as those arising in image processing and vision, medical imaging, and other application areas. This involves rethinking the entire stack, from the high-level DSL design down to the low-level solvers.
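
For readers less familiar with these modeling languages, the short Python sketch below illustrates the kind of succinct problem specification the abstract refers to, using the CVXPY package mentioned above. The problem data, sizes, and constraints are invented purely for illustration; they are not taken from the talk.

```python
# Minimal CVXPY sketch: a small nonnegative least-squares problem stated
# declaratively, with the solver and its parameters chosen automatically.
import numpy as np
import cvxpy as cp

np.random.seed(0)
m, n = 30, 10                      # modest, illustrative problem size
A = np.random.randn(m, n)          # dense matrix here; the talk concerns replacing
b = np.random.randn(m)             # such matrices with abstract fast linear operators

x = cp.Variable(n)
objective = cp.Minimize(cp.sum_squares(A @ x - b))
constraints = [x >= 0, cp.sum(x) <= 1]
prob = cp.Problem(objective, constraints)
prob.solve()                       # no algorithm parameter tuning required

print("status:", prob.status)
print("optimal value:", prob.value)
```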

Biography:

Stephen P. Boyd is the Samsung Professor of Engineering in the Information Systems Laboratory, Department of Electrical Engineering, Stanford University. He holds courtesy appointments in the departments of Computer Science and Management Science and Engineering, and is a member of the Institute for Computational and Mathematical Engineering. His current interests include convex programming applications in control, machine learning, signal processing, finance, and circuit design.

He received an AB degree in Mathematics, summa cum laude, from Harvard University in 1980, and a PhD in EECS from UC Berkeley in 1985. He holds an honorary PhD from the Royal Institute of Technology, Stockholm. He is the author of many papers and several books, including Convex Optimization (with Lieven Vandenberghe, 2004) and Linear Matrix Inequalities in System and Control Theory (with El Ghaoui, Feron, and Balakrishnan). His group has created many open-source software packages, including the widely used convex optimization packages CVX, CVXPY, and Convex.jl, all available at his website, which is visited more than 1.6 million times per year.

He is a fellow of IEEE and SIAM, and a member of the National Academy of Engineering.

Wen Tong - Huawei

5G Wireless Enabling Technologies

Tuesday 22 March 2016, 11.10 - 12.00 (Grand Ballroom II & III)

Abstract:

In this talk, we discuss global progress with respect to 5G requirements and standardization, which form the foundation for 5G services and applications. Enhanced mobile broadband, massive connectivity, and critical communications represent the new challenges for wireless research. We then elaborate on the ten enabling technologies for 5G wireless from the signal processing perspective.

Biography:

Dr. Wen Tong is a Huawei Fellow and an IEEE Fellow. He is the Head of Wireless Research and the Head of the Communications Technologies Laboratories, Huawei 2012 Lab.

Prior to joining Huawei in March 2009, Dr. Wen Tong was a Nortel Fellow and the global Head of the Network Technology Labs at Nortel. He received the M.Sc. and Ph.D. degrees in Electrical Engineering in 1986 and 1993, respectively, and joined the Wireless Technology Labs at Bell Northern Research in Canada in 1995. He has pioneered fundamental wireless technologies, with 280 granted US patents, and was Nortel's most prolific inventor.

At Nortel, Dr. Tong conducted advanced research spanning 1G to 4G wireless. He was the director of the Wireless Technology Labs from 2005 to 2007, and from 2007 to 2009 he was the head of the Network Technology Labs, responsible for Nortel's global strategic technology research and development. In 2007, he was inducted into the first batch of Nortel Fellows.

Since 2010, Dr. Tong has been Vice President and CTO of Huawei Wireless and the head of Huawei Wireless Research, leading one of the largest wireless research organizations in the industry, with more than 700 research experts. In 2011, he was elected to the first batch of Huawei Fellows and appointed Head of the Communications Technologies Labs of Huawei 2012 Lab, a corporate centralized next-generation research initiative, where he spearheads Huawei's 5G wireless research and development.

In 2014, he received the IEEE Communications Society Industry Innovation Award for "the leadership and contributions in development of 3G and 4G wireless systems". Dr. Tong serves on the Board of Directors of the Wi-Fi Alliance and is a Fellow of the Canadian Academy of Engineering.

Michael Unser - EPFL

Sparsity and Inverse Problems: Think Analog, and Act Digital

Wednesday 23 March 2016, 11.00 - 12.00 (Auditorium)

Abstract:

Sparsity and compressed sensing are very popular topics in signal processing. More and more researchers are relying on l1-type minimization schemes for solving a variety of ill-posed problems in imaging. The paradigm is well established with a solid mathematical foundation, although the arguments that have been put forth in the past are, for the most part, deterministic and finite-dimensional.

In this presentation, we shall promote a continuous-domain formulation of the problem (“think analog”) that is more closely tied to the physics of imaging and that also lends itself better to mathematical analysis. For instance, we shall demonstrate that splines (which are inherently sparse) are global optimizers of linear inverse problems with total-variation (TV) regularization constraints.

Alternatively, one can adopt an infinite-dimensional statistical point of view by modeling signals as sparse stochastic processes. The guiding principle is then to discretize the inverse problem by projecting both the statistical and physical measurement models onto a linear reconstruction space. This leads to the specification of a general class of maximum a posteriori (MAP) signal estimators, complemented with a practical iterative reconstruction scheme (“act digital”). While the framework is compatible with the traditional Tikhonov and TV methods, it opens the door to a much broader class of potential functions that are inherently sparse, while also suggesting alternative Bayesian recovery procedures. We shall illustrate the approach with the reconstruction of images in a variety of modalities including MRI, phase-contrast tomography, cryo-electron tomography, and deconvolution microscopy.
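
As a generic illustration of the l1-type minimization schemes mentioned at the start of the abstract (a standard finite-dimensional sketch, not the continuous-domain or MAP formulation of the talk), the following Python snippet recovers a sparse vector from a few noisy linear measurements using iterative soft-thresholding (ISTA). All data, sizes, and parameters are synthetic and chosen only for illustration.

```python
# Sketch: l1-regularized least squares solved with ISTA (proximal gradient).
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, y, lam, n_iter=500):
    """Minimize 0.5 * ||A x - y||_2^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Toy problem: a sparse signal observed through a random measurement matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100))
x_true = np.zeros(100)
x_true[[5, 30, 72]] = [1.0, -2.0, 1.5]     # sparse ground truth
y = A @ x_true + 0.01 * rng.standard_normal(40)

x_hat = ista(A, y, lam=0.1)
print("indices of largest recovered coefficients:", np.argsort(-np.abs(x_hat))[:3])
```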

Biography:

Michael Unser is professor and director of EPFL's Biomedical Imaging Group, Lausanne, Switzerland. His primary area of investigation is biomedical image processing. He is internationally recognized for his research contributions to sampling theory, wavelets, the use of splines for image processing, stochastic processes, and computational bioimaging. He has published over 250 journal papers on those topics. He is the author, with P. Tafti, of the book “An Introduction to Sparse Stochastic Processes” (Cambridge University Press, 2014).

From 1985 to 1997, he was with the Biomedical Engineering and Instrumentation Program, National Institutes of Health, Bethesda, USA, conducting research on bioimaging.

Dr. Unser has held the position of Associate Editor-in-Chief (2003-2005) of the IEEE Transactions on Medical Imaging. He is currently a member of the editorial boards of SIAM J. Imaging Sciences, IEEE J. Selected Topics in Signal Processing, and Foundations and Trends in Signal Processing. He co-organized the first IEEE International Symposium on Biomedical Imaging (ISBI’2002) and was the founding chair of the technical committee of the IEEE-SP Society on Bio Imaging and Signal Processing (BISP).

Prof. Unser is a fellow of the IEEE (1999), a EURASIP fellow (2009), and a member of the Swiss Academy of Engineering Sciences. He is the recipient of several international prizes, including three IEEE-SPS Best Paper Awards and two IEEE Technical Achievement Awards (SPS 2008 and EMBS 2010).

Li Deng - Microsoft Research

Deep Learning for AI: From Machine Perception to Machine Cognition

Thursday 24 March 2016, 11.00 - 12.00 (Auditorium)

Abstract:

Deep learning has profoundly reshaped the landscape of speech recognition (since 2010) and image understanding (since 2012), two major fields of artificial intelligence (AI) pertaining to machine perception. Over roughly the past two years, this rapid progress in machine perception enabled by deep learning has gradually advanced toward a number of more challenging and vital areas of AI. These areas are central to the cognitive functions in human intelligence, encompassing natural language, reasoning, attention, memory, knowledge, action, and decision making, many of which involve the analysis of sequential signals and of other forms of structured information expressed as symbolic entities and their relations.

This plenary presentation will provide an overview of the recent history and current status of deep learning research, as well as its industrial deployment, in selected AI areas of machine perception and cognition, including state-of-the-art performance with the associated deep learning methods. The wide-ranging applications of deep learning by major tech companies (limited to those disclosed to the public) will be summarized and analyzed. The presentation will end with a discussion of the major challenges that must be met for deep learning to reach brain-like AI competence. The challenges to be addressed, some from my possibly biased perspective, comprise: handling uncertainty and application-domain constraints with integrated neural network and Bayesian learning, modeling memory and reasoning via unified symbolic and neural computation, and unsupervised learning by building ultra-strong application-specific priors free of label-paired training data.

Biography:

Li Deng received a Ph.D. from the University of Wisconsin-Madison. He was an assistant and then tenured full professor at the University of Waterloo, Ontario, Canada during 1989-1999. Immediately afterward he joined Microsoft Research, Redmond, USA, as a Principal Researcher, where he currently directs the R&D of the Deep Learning Technology Center, which he founded in early 2014. Dr. Deng’s current activities are centered on business-critical applications involving big data analytics, natural language text, semantic modeling, speech, image, and multimodal signals. Outside his main responsibilities, Dr. Deng’s research interests lie in solving fundamental problems of machine learning, artificial and human intelligence, cognitive and neural computation with their biological connections, and multimodal signal/information processing. In addition to over 70 granted patents and over 300 scientific publications in leading journals and conferences, Dr. Deng has authored or co-authored five books, the two most recent being Deep Learning: Methods and Applications (NOW Publishers, 2014) and Automatic Speech Recognition: A Deep-Learning Approach (Springer, 2015), both with English and Chinese editions. Dr. Deng is a Fellow of the IEEE, the Acoustical Society of America, and the ISCA. He served on the Board of Governors of the IEEE Signal Processing Society. More recently, he was the Editor-in-Chief of the IEEE Signal Processing Magazine and of the IEEE/ACM Transactions on Audio, Speech, and Language Processing; he also served as a general chair of ICASSP and an area chair of NIPS. Dr. Deng’s technical work in industry-scale deep learning and AI has impacted various areas of information processing, especially Microsoft’s speech products and text- and big-data-related products and services. His work helped initiate the resurgence of (deep) neural networks in the modern big-data, big-compute era, and has been recognized by several awards, including the 2013 IEEE SPS Best Paper Award and the 2015 IEEE SPS Technical Achievement Award “for outstanding contributions to deep learning and to automatic speech recognition.”

Johan Suykens - KU Leuven

Learning with Primal and Dual Model Representations: A Unifying Picture

Friday 25 March 2016, 11.00 - 12.00 (Auditorium)

Abstract:

Many existing methods make use of regularization, sparsity, or kernel-based approaches. While in parametric models sparsity is achieved through regularization, in kernel-based models it is obtained by the choice of an appropriate loss function. Which new synergies or common frameworks could one develop along these different avenues?

In this talk we explain how learning with primal and dual model representations offers a unifying framework. Many core problems in supervised and unsupervised learning, and beyond, can be characterized in this way. The relevance of this setting is shown for sparse modelling, robustness, networks, and big data. Another illustration is the matrix singular value decomposition, for which a new variational principle and non-linear extensions have recently been obtained. Finally, a new theory for deep learning with kernel machines, in which duality plays an important role, will also be proposed.
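
To make the primal/dual distinction concrete, here is a minimal Python sketch under simplifying assumptions: it uses plain kernel ridge regression rather than the full least-squares support-vector-machine formulation associated with the speaker, and all data and hyperparameters are invented. The primal model would be linear in an implicit feature map, whereas the dual model below is expressed entirely through kernel evaluations and a vector of dual variables.

```python
# Dual (kernel) representation for regularized regression: fit by solving a
# linear system in dual variables alpha, predict via kernel evaluations.
import numpy as np

def rbf_kernel(X1, X2, gamma=1.0):
    """Gaussian (RBF) kernel matrix between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(50, 1))                 # training inputs
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)  # noisy targets

lam = 1e-2
K = rbf_kernel(X, X)
# Dual model f(x) = sum_i alpha_i k(x_i, x): solve (K + lam I) alpha = y.
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

X_test = np.linspace(-3, 3, 5).reshape(-1, 1)
y_pred = rbf_kernel(X_test, X) @ alpha
print("predictions:", y_pred)
```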

Biography:

Johan A.K. Suykens was born in Willebroek, Belgium, on May 18, 1966. He received the master's degree in Electro-Mechanical Engineering and the PhD degree in Applied Sciences from the Katholieke Universiteit Leuven in 1989 and 1995, respectively. In 1996 he was a Visiting Postdoctoral Researcher at the University of California, Berkeley. He has been a Postdoctoral Researcher with the Fund for Scientific Research FWO Flanders and is currently a full Professor with KU Leuven.

He is author of the books "Artificial Neural Networks for Modelling and Control of Non-linear Systems" (Kluwer Academic Publishers) and "Least Squares Support Vector Machines" (World Scientific), co-author of the book "Cellular Neural Networks, Multi-Scroll Chaos and Synchronization" (World Scientific), and editor of the books "Nonlinear Modeling: Advanced Black-Box Techniques" (Kluwer Academic Publishers), "Advances in Learning Theory: Methods, Models and Applications" (IOS Press), and "Regularization, Optimization, Kernels, and Support Vector Machines" (Chapman & Hall/CRC). In 1998 he organized an International Workshop on Nonlinear Modelling with Time-series Prediction Competition.

He has served as associate editor for the IEEE Transactions on Circuits and Systems (1997-1999 and 2004-2007) and for the IEEE Transactions on Neural Networks (1998-2009). He received an IEEE Signal Processing Society 1999 Best Paper Award and several Best Paper Awards at international conferences. He is a recipient of the International Neural Networks Society INNS 2000 Young Investigator Award for significant contributions in the field of neural networks. He has served as a Director and Organizer of the NATO Advanced Study Institute on Learning Theory and Practice (Leuven 2002), as a program co-chair for the International Joint Conference on Neural Networks 2004 and the International Symposium on Nonlinear Theory and its Applications 2005, as an organizer of the International Symposium on Synchronization in Complex Networks 2007, as a co-organizer of the NIPS 2010 workshop on Tensors, Kernels and Machine Learning, and as chair of ROKS 2013. He was awarded an ERC Advanced Grant in 2011 and the VUB leerstoel 2012-2013, and was elevated to IEEE Fellow in 2015 for developing least squares support vector machines.

