Tutorials

Advance registration for the tutorials is now CLOSED. However, ICASSP 2016 welcomes all interested tutorial attendees to register onsite.

Our registration desk will be open before the tutorials begin on both March 20 and March 21. Onsite registration fees should be paid by credit card to simplify the process; a receipt will be provided.

Onsite registration fees are as follows:

Full-Day Tutorials (each): Regular US$320 / Student US$210
Half-Day Tutorials (each): Regular US$190 / Student US$125

For further information, please contact Registration Chair, Prof. Jian Li.

ICASSP 2016 will continue the tradition of previous ICASSPs and will offer a wide selection of high-quality tutorials on hot topics for the signal processing community.

This year we offer fifteen tutorials covering the diverse interests of the attendees. Fourteen tutorials will be three hours long and one will be six hours long. Each tutorial, given by renowned presenters, will provide an overview of the state of the art in its topic. Supporting material will be distributed.

The tutorials are scheduled before the start of regular conference sessions in four parallel tracks on Sunday and Monday.

We hope the attendees will enjoy the diverse choice of tutorials.

Jian Li, Jose Principe
ICASSP 2016 Tutorial Chairs

List of Tutorials

If a tutorial does not receive sufficient registrations, it will be cancelled; registrants may switch to a different tutorial or receive a full refund of the tutorial fee.

Sunday, 20th March, 2016, Morning

9:00am-12:30pm (30-minute break for morning coffee/tea at 10:30am)

TUT-I- Learning Nonlinear Dynamical Models Using Particle Filters (1/2)
Presenter: Thomas Schön (Uppsala University)

TUT-II- 3D Room Reconstruction from Sound
Presenter: Alessio Del Bue and Marco Crocco (Istituto Italiano di Tecnologia)

TUT-III- Convex Optimization Techniques for Super-Resolution Parameter Estimation
Presenter: Yuejie Chi (Ohio State University) and Gongguo Tang (Colorado School of Mines)

TUT-IV- Massive MIMO: Fundamentals
Presenter: Thomas L. Marzetta (Alcatel-Lucent) and Erik G. Larsson (Linköping University)

Sunday, 20th March, 2016, Afternoon

1:30pm-5:00pm (30-minute break for afternoon coffee/tea at 3:00pm)

TUT-I- Learning Nonlinear Dynamical Models Using Particle Filters (2/2)
Presenter: Thomas Schön (Uppsala University)

TUT-VI- A Signal Processing Perspective of Financial Engineering
Presenter: Daniel P. Palomar (Hong Kong University of Science and Technology) and Yiyong Feng (Credit Suisse (Hong Kong))

TUT-VII- The Performance of Non-Smooth Convex Relaxation Methods for Structured Signal Recovery
Presenter: Babak Hassibi (California Institute of Technology)

TUT-VIII- Millimeter Wave Wireless Communications
Presenter: Robert W. Heath Jr. (The University of Texas at Austin)

Monday, 21st March, 2016, Morning

9:00am-12:30pm (30-minute break for morning coffee/tea at 10:30am)

TUT-V- Network Statistical Inference in Complex Engineered Networks
Presenter: Chee Wei Tan (City University of Hong Kong) and Wenyi Zhang (University of Science and Technology of China)

TUT-IX- Discontinuities-Preserving Image and Motion Coherence: Computational Models and Applications
Presenter: Jiangbo Lu (Advanced Digital Sciences Center), Dongbo Min (Chungnam National University) and Minh N. Do (University of Illinois at Urbana-Champaign)

TUT-X- Bayesian-Inspired Non-Convex Methods for Sparse Signal Recovery
Presenter: Bhaskar D. Rao (University of California at San Diego) and Chandra R. Murthy (Indian Institute of Science)

TUT-XII- Supervised Speech Separation
Presenter: DeLiang Wang (Ohio State University)

Monday, 21st March, 2016, Afternoon

1:30pm-5:00pm (30-minute break for afternoon coffee/tea at 3:00pm)

TUT-XIII- Computational Visual Attention: Approaches and Applications
Presenter: Weisi Lin (Nanyang Technological University) and Zhenzhong Chen (Wuhan University)

TUT-XIV- Energy-Efficient Resource Allocation for 5G Wireless Networks via Fractional Programming Theory
Presenter: Alessio Zappone (Technische Universitaet Dresden) and Eduard A. Jorswieck (Dresden University of Technology)

TUT-XV- Multiscale Signal Processing for Wearable Health: Sleep, Stress, and Fatigue Applications
Presenter: Danilo Mandic and Valentin Goverdovsky (Imperial College London)

TUT-XVI- Phase Retrieval: Theory, Algorithms, and Applications
Presenter: Yonina Eldar (Technion) and Mahdi Soltanolkotabi (University of Southern California)



TUT-I- Learning Nonlinear Dynamical Models Using Particle Filters

Abstract:
The aim of this tutorial is to show how to perform learning and inference in probabilistic models of nonlinear/non-Gaussian dynamical systems. We do not aim to cover all available methods; instead, we aim to clearly describe where and how the need for sequential Monte Carlo (SMC) arises and to focus on the underlying key principles. The state space model (SSM) offers a general tool for modeling and analyzing dynamical phenomena, and in this tutorial we will show how to learn nonlinear, possibly non-Gaussian, SSMs. One of the key challenges in learning these SSMs is the intractability of estimating the system state. SMC methods, such as the particle filter (introduced more than two decades ago), provide numerical solutions to the nonlinear state estimation problems arising in SSMs. When combined with additional identification techniques, these algorithms provide solid solutions to the nonlinear model learning problem. We describe two general strategies for creating such combinations and explain why and how SMC is a natural tool for implementing them. SMC itself will be thoroughly introduced, as will, importantly, its recent combination with Markov chain Monte Carlo (MCMC) methods. We consider both the Bayesian and the maximum likelihood formulations, without ranking one above the other.
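
As a flavor of the methods covered, here is a minimal bootstrap particle filter in Python/NumPy on a standard nonlinear benchmark model; the model, noise variances, and particle count are illustrative choices, not taken from the tutorial material. Because the quadratic measurement makes the sign of the state unobservable, the sketch estimates the posterior mean of |x|.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x, t):
    # Classic nonlinear benchmark dynamics
    return 0.5 * x + 25 * x / (1 + x**2) + 8 * np.cos(1.2 * t)

def h(x):
    # Quadratic measurement: the sign of the state is unobservable
    return x**2 / 20.0

# --- simulate ground truth and measurements ---
T, q, r = 100, 10.0, 1.0        # horizon, process and measurement noise variances
x = np.zeros(T)
for t in range(1, T):
    x[t] = f(x[t-1], t-1) + np.sqrt(q) * rng.standard_normal()
y = h(x) + np.sqrt(r) * rng.standard_normal(T)

# --- bootstrap particle filter ---
N = 1000
particles = np.sqrt(q) * rng.standard_normal(N)
est = np.zeros(T)                                # posterior mean of |x_t|
for t in range(T):
    if t > 0:
        particles = f(particles, t-1) + np.sqrt(q) * rng.standard_normal(N)
    logw = -0.5 * (y[t] - h(particles))**2 / r   # Gaussian log-likelihood
    w = np.exp(logw - logw.max())
    w /= w.sum()
    est[t] = np.sum(w * np.abs(particles))
    particles = particles[rng.choice(N, size=N, p=w)]  # multinomial resampling

print("RMSE on |x|:", np.sqrt(np.mean((est - np.abs(x))**2)))
```

The "propagate, weight, resample" loop above is exactly the numerical machinery that, combined with identification techniques (or with MCMC), enables the model learning discussed in the tutorial.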

Biography:
Thomas B. Schön is Professor of the Chair of Automatic Control in the Department of Information Technology at Uppsala University. He received the PhD degree in Automatic Control in Feb. 2006, the MSc degree in Applied Physics and Electrical Engineering in Sep. 2001, the BSc degree in Business Administration and Economics in Jan. 2001, all from Linköping University. He has held visiting positions with the University of Cambridge (UK), the University of Newcastle (Australia) and Universidad Técnica Federico Santa María (Valparaíso, Chile). He is a Senior member of the IEEE. He was awarded the Automatica Best Paper Prize in 2014, and in 2013 he received the best PhD thesis award by The European Association for Signal Processing (EURASIP). He received the best teacher award at the Institute of Technology, Linköping University in 2009. Schön's main research interest is nonlinear inference problems, especially within the context of dynamical systems, solved using probabilistic methods, more specifically sequential Monte Carlo (SMC), particle MCMC and graphical models. He has worked on SMC and the topic of this tutorial for 14 years.


TUT-II- 3D Room Reconstruction from Sound

Abstract:
The tutorial will review state-of-the-art techniques for 3D room reconstruction from sound, i.e., inferring the 3D positions of the planar boundaries of an enclosure from a set of audio signals acquired by one or more microphones, given one or more acoustic sources emitting a signal.

3D room reconstruction from sound is closely related to a set of problems currently under active investigation, such as source localization and tracking, microphone self-calibration, and signal dereverberation. The tutorial will explain in detail how techniques previously devised for these problems can be exploited and adapted to the room reconstruction problem and, in turn, how knowledge of the 3D geometry may improve their solution. Depending on the amount of prior knowledge about the microphone and source positions, the emitted signal, and the geometry of the enclosure (number of walls, convexity, etc.), a number of sub-problems can be defined, ranging from the most controlled scenario to the completely unconstrained problem “in the wild”; the tutorial will organize these sub-problems into a comprehensive taxonomy.

The tutorial will then define a pipeline for 3D room reconstruction, describing each stage with a problem-solving attitude. It starts with the acoustic modeling of an enclosure, with particular emphasis on the Image Source Model, currently at the core of most methods. All the key steps will then be addressed, including room impulse response estimation, estimation of the times of arrival of the echoes, echo sorting, and estimation of the planar surfaces. For each step, the pros and cons of the most promising techniques will be discussed, including sparsity-based approaches, beamforming, maximum likelihood, and low-rank constraints. The interesting connection between signal processing and geometry implied by the problem will be described, distinguishing approaches that decouple the two aspects from approaches that treat them as a whole.
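
To make the Image Source Model step concrete, the following short Python/NumPy sketch computes the six first-order image sources of a shoebox room and the corresponding echo times of arrival at a microphone; the room dimensions and the source and microphone positions are made-up values for illustration.

```python
import numpy as np

c = 343.0  # speed of sound (m/s)

# Made-up shoebox room (walls at 0 and L along each axis), source, microphone
L = np.array([5.0, 4.0, 3.0])
s = np.array([1.0, 1.5, 1.2])
m = np.array([3.5, 2.0, 1.4])

def first_order_image_sources(src, dims):
    # Reflect the source across each of the six walls of the shoebox
    images = []
    for axis in range(3):
        lo = src.copy(); lo[axis] = -src[axis]                  # wall at coordinate 0
        hi = src.copy(); hi[axis] = 2 * dims[axis] - src[axis]  # wall at coordinate L
        images += [lo, hi]
    return np.array(images)

imgs = first_order_image_sources(s, L)
toa_direct = np.linalg.norm(m - s) / c
toa_echoes = np.linalg.norm(imgs - m, axis=1) / c
print("direct TOA (ms):", 1e3 * toa_direct)
print("first-order echo TOAs (ms):", np.sort(1e3 * toa_echoes))
```

In the reconstruction problem these TOAs are the observables extracted from the room impulse response, and the goal is to invert this forward mapping to recover the wall positions.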

Particular attention will be dedicated to recent uncalibrated methods that try to solve jointly the microphone and source localization and the room 3D estimation, possibly starting from unknown natural audio signals. Finally, in order to encourage comparative evaluations of current and future methods, a new dataset based on audio measurements taken in real rooms will be provided and an evaluation protocol, relying on ground truth geometric data will be described. A discussion on open problems and future research directions will conclude the tutorial.

Biographies:
Alessio Del Bue is a Tenure-Track Researcher leading the Visual Geometry and Modelling (VGM) Lab of the PAVIS Department at the Istituto Italiano di Tecnologia (IIT). Previously (2006-2009), he was a senior researcher at the Institute for Systems and Robotics (ISR) at the Instituto Superior Técnico (IST) in Lisbon, Portugal. Before that, he obtained his Ph.D. in 2006 under the supervision of Dr. Lourdes Agapito in the Department of Computer Science at Queen Mary University of London. His main area of research is computer vision and signal processing, with a particular focus on dynamic 3D scene understanding using multi-modal data (mainly video and audio). He is also active in several interdisciplinary projects at IIT, scientifically supporting the life science and robotics departments. He is the author or co-author of more than 80 peer-reviewed publications. He was a co-chair of the 5th International Symposium on 3D Data Processing, Visualization and Transmission (3DPVT 2010) and was the tutorial chair of the 18th International Conference on Image Analysis and Processing (ICIAP 2015). He has organized tutorial courses and workshops at major computer vision conferences; in particular, he organized a series of tutorials on non-rigid 3D reconstruction, “Computer Vision in a Non-Rigid World”, at different venues (2009, 2010, 2011 and 2013: www.isr.ist.utl.pt/~adb/tutorial/). He is also a reviewer for several computer vision, signal and image processing conferences and journals.

Marco Crocco received the Laurea degree in electronic engineering in 2005 and the Ph.D. degree in electronic engineering, computer science and telecommunications in 2009, both from the University of Genova, under the supervision of Prof. Andrea Trucco. From 2005 to 2010, he was with the Department of Biophysical and Electronic Engineering (DIBE), University of Genova, in the Acoustic, Antennas Arrays, and Underwater Signals (A3US) Laboratory of the Signal Processing and Telecommunications Group. In 2010 he joined the Pattern Analysis and Computer Vision (PAVIS) department at the Istituto Italiano di Tecnologia (IIT), Genova, as a postdoctoral researcher, and in 2014 he joined the Visual Geometry and Modelling Lab within the same department. His main research interests include array signal processing, 3D geometry with audio, and pattern recognition applied to multisensory data. He is a named inventor on two international patents. He is an associate editor of Pattern Recognition Letters (Elsevier) and has been a reviewer for several journals and conferences, including the IEEE Transactions on Signal Processing, the IEEE Transactions on Audio, Speech, and Language Processing, CVPR, ICCV, ACM Multimedia, ICASSP, and EUSIPCO. He is co-author of about 45 publications, including international journals, proceedings of international conferences, and book chapters.


TUT-III- Convex Optimization Techniques for Super-Resolution Parameter Estimation

Abstract:
Parameter estimation, i.e., estimating the set of parameters describing a signal from its noisy samples, is of great interest in many sensing and imaging applications. Conventional spectrum estimation methods were developed mostly under the assumption that the signal is uniformly sampled and contaminated by Gaussian noise. Novel applications in the era of big data present a set of unique challenges. Examples include estimation of the spectrum of ultra-wideband signals with a limited sampling rate for cognitive radios, handling missing data and outliers due to sensor failures or attacks, parameter estimation in the presence of interfering modes or users, and calibration of large sensor arrays. Popularized by compressed sensing (CS), convex optimization techniques have been recognized as an emerging tool able to address some or all of the above challenges. In most applications the underlying parameters lie in a continuous space, and CS relies on discretization of the continuous parameter space into a fine grid. This, however, raises both theoretical and algorithmic issues. More recently, convex optimization based on the atomic norm has been introduced as a new method to estimate the continuous-valued parameters directly, without discretization, achieving super-resolution and super-precision.

The focus of this tutorial is convex optimization approaches for super-resolution parameter estimation. The tutorial will examine the history of parameter estimation algorithms and motivate the need for new classes of algorithms driven by new applications in seismic imaging, fluorescence imaging, neuroscience, and machine learning. It will discuss super-resolution parameter estimation algorithms based on L1 minimization, which requires discretization, and on atomic norm minimization, which does not. Performance analysis and numerical comparisons will be provided for problems of interest, highlighting both the benefits and the drawbacks of convex optimization approaches for parameter estimation.
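
As a toy example of the grid-based L1 approach (and the discretization it entails), the following Python/NumPy sketch recovers two sinusoid frequencies with ISTA on a fine frequency grid; the grid size, regularization weight, and iteration count are arbitrary illustrative choices, and the frequencies are deliberately placed exactly on the grid.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two sinusoids observed in noise; frequencies placed exactly on the grid
n, G = 64, 512
freqs = np.array([64, 176]) / G            # 0.125 and 0.34375 cycles/sample
amps = np.array([1.0, 0.8])
t = np.arange(n)
y = (amps[:, None] * np.exp(2j * np.pi * np.outer(freqs, t))).sum(axis=0)
y += 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))

# Dictionary of unit-norm candidate sinusoids on a fine frequency grid
grid = np.arange(G) / G
A = np.exp(2j * np.pi * np.outer(t, grid)) / np.sqrt(n)

# ISTA iterations for min_x 0.5*||A x - y||^2 + lam*||x||_1
lam = 0.5
step = 1.0 / np.linalg.norm(A, 2) ** 2     # 1 / Lipschitz constant of the gradient
x = np.zeros(G, dtype=complex)
for _ in range(500):
    g = x - step * (A.conj().T @ (A @ x - y))
    mag = np.abs(g)
    x = g * np.maximum(1.0 - step * lam / np.maximum(mag, 1e-12), 0.0)

peaks = grid[np.abs(x) > 0.5 * np.abs(x).max()]
print("recovered frequencies:", peaks)
```

Off-grid frequencies would suffer from basis mismatch here; atomic norm minimization, in contrast, dispenses with the grid altogether.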

Biographies:
Yuejie Chi has been an assistant professor in the Electrical and Computer Engineering Department at the Ohio State University, with a joint appointment in the Biomedical Informatics Department at the Wexner Medical School, since September 2012. She received an M.A. and a Ph.D. in Electrical Engineering from Princeton University in 2009 and 2012, respectively, and a B.Eng. in Electrical Engineering from Tsinghua University, China, in 2007. She received Young Investigator Program Awards from AFOSR and ONR in 2015, and the Ralph E. Powe Junior Faculty Enhancement Award from ORAU in 2014. She is the recipient of the IEEE Signal Processing Society Young Author Best Paper Award in 2013 and a Best Paper Award at the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP) in 2012. Her research interests include high-dimensional data analysis, statistical signal processing, machine learning, and their applications in network inference, spectrum sensing and estimation, image analysis, and bioinformatics.

Gongguo Tang has been an Assistant Professor in the Electrical Engineering and Computer Science Department at the Colorado School of Mines since 2014. He received his Ph.D. degree in Electrical Engineering from Washington University in St. Louis in 2011. He was a postdoctoral research associate in the Department of Electrical and Computer Engineering, University of Wisconsin-Madison, in 2011–2013, and a visiting scholar at the University of California, Berkeley in 2013. His research interests are in signal processing, convex optimization, machine learning, and their applications in data analysis, optics, imaging, and networks.


TUT-IV- Massive MIMO: Fundamentals

Abstract:
The proposed tutorial will give the audience a thorough grounding in the fundamentals of Massive MIMO: what distinguishes it from earlier multiple antenna wireless technologies, why it is considered a breakthrough technology, how it actually works, and what constitutes its ultimate limitations. Building on only elementary communication theory and statistical signal processing, the tutorial will show how to obtain substantially closed-form performance expressions for complicated multi-cell Massive MIMO deployments. In turn these performance analyses lead to a thorough intuitive understanding of the interplay of system parameters, in addition to being an indispensable tool for first order Massive MIMO system design.
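
One of the fundamentals behind the near-closed-form performance expressions mentioned above is channel hardening: with i.i.d. Rayleigh fading, the per-antenna-normalized channel gain concentrates around its mean as the array grows. A quick Python/NumPy check (antenna counts and sample sizes are arbitrary illustrative values):

```python
import numpy as np

rng = np.random.default_rng(2)

# As M grows, ||h||^2 / M concentrates around its mean (channel hardening)
for M in [10, 100, 1000]:
    # 5000 i.i.d. Rayleigh channel draws with unit average per-antenna gain
    h = (rng.standard_normal((5000, M)) + 1j * rng.standard_normal((5000, M))) / np.sqrt(2)
    g = np.sum(np.abs(h) ** 2, axis=1) / M
    print(f"M={M:5d}  mean={g.mean():.3f}  std={g.std():.3f}")
```

The standard deviation shrinks roughly as 1/sqrt(M), which is what turns random fading channels into nearly deterministic ones and makes first-order Massive MIMO system design tractable.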

Biographies:
Thomas L. Marzetta is the originator of Massive MIMO. He is Group Leader of Large Scale Antenna Systems at Bell Labs, Alcatel-Lucent within the Network Energy Program, and Co-Head of their FutureX Massive MIMO project. Dr. Marzetta received the PhD and SB in Electrical Engineering from Massachusetts Institute of Technology in 1978 and 1972, and the MS in Systems Engineering from University of Pennsylvania in 1973. He worked for Schlumberger-Doll Research in petroleum exploration and for Nichols Research Corporation in defense research before joining Bell Labs in 1995 where he served as the Director of the Communications and Statistical Sciences Department within the former Math Center. Dr. Marzetta was Coordinator of the GreenTouch Consortium’s Large Scale Antenna Systems Project, and he is a Member of the Advisory Board of MAMMOET (Massive MIMO for Efficient Transmission), an EU-sponsored FP7 project. For his achievements in Massive MIMO he has received the 2015 IEEE W. R. G. Baker Award, the 2015 IEEE Stephen O. Rice Prize, and the 2014 Thomas Alva Edison Patent Award, among others. He was elected a Fellow of the IEEE in 2003, and he became a Bell Labs Fellow in 2014. In May 2015 he received an Honorary Doctorate from Linköping University.

Erik G. Larsson is Professor of Communication Systems at Linköping University (LiU) in Linköping, Sweden. He previously held positions at the Royal Institute of Technology (KTH) in Stockholm, Sweden, the University of Florida, and the George Washington University, USA. He has published some 100 journal papers on communications and signal processing, and he is co-author of the textbook Space-Time Block Coding for Wireless Communications (Cambridge Univ. Press, 2003). He has served as Associate Editor for several major journals, including the IEEE Transactions on Communications and the IEEE Transactions on Signal Processing. He serves as chair of the IEEE Signal Processing Society SPCOM technical committee in 2015–2016, and served as chair of the steering committee for the IEEE Wireless Communications Letters in 2014–2015. He is the General Chair of the Asilomar Conference on Signals, Systems and Computers in 2015 (he was Technical Chair in 2012). He received the IEEE Signal Processing Magazine Best Column Award twice, in 2012 and 2014, and he received the 2015 IEEE ComSoc Stephen O. Rice Prize in Communications Theory.


TUT-V- Network Statistical Inference in Complex Engineered Networks

Abstract:
Networks represent a fundamental medium for the spreading and diffusion of various types of behavior and information. Spreading processes are those where the actions, infections, or failures of certain nodes increase the susceptibility of other nodes to the same; this results in the successive spread of infections/failures/other phenomena from a small set of initial nodes to a much larger set. Examples include (a) online social networks: cascading processes provide natural models for understanding both the consumption of online media (e.g., viral videos, news) and the spread of opinions and rumors on online social networks such as Twitter; (b) online viral marketing: predicting uptake on social buying sites like Groupon; (c) security and reliability: epidemic-like spreading of computer viruses and malware; and (d) social computing in big data: spotting an Internet hoax, verifying a Tweet, and applying timely quarantine to enhance network resilience and limit the damage caused. This tutorial will focus on theories and algorithms for information processing and network inference in these complex engineered networks. Algorithm design based on statistical inference, maximum likelihood estimation/detection, and their advances will be explained with step-by-step instructions for reliable network inference. Finally, new cyber security/forensics protocols and their software implementation in practical online social networks will be introduced.
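
As a flavor of this kind of network inference, here is a small Python sketch of one simple source estimator from the literature, the Jordan center (the infected node minimizing its maximum distance to the other infected nodes), run on a synthetic random tree with a deterministic spreading rule; the graph model and spreading process are illustrative simplifications, not the tutorial's own algorithms.

```python
from collections import deque
import numpy as np

rng = np.random.default_rng(3)

# Random tree on n nodes, stored as an adjacency list
n = 60
parent = [None] + [int(rng.integers(0, i)) for i in range(1, n)]
adj = {i: [] for i in range(n)}
for i in range(1, n):
    adj[i].append(parent[i])
    adj[parent[i]].append(i)

def bfs_dist(src):
    # Hop distances from src to every node via breadth-first search
    d = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in d:
                d[v] = d[u] + 1
                q.append(v)
    return d

# Deterministic SI-style spread from a hidden source for a few rounds
source = 0
infected = {source}
for _ in range(4):
    infected |= {v for u in infected for v in adj[u]}

def eccentricity(u):
    # Maximum distance from u to any infected node
    d = bfs_dist(u)
    return max(d[v] for v in infected)

est = min(infected, key=eccentricity)
print("true source:", source, " Jordan-center estimate:", est)
```

Maximum-likelihood source detectors refine this geometric intuition by weighting infection orderings rather than just distances.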

Biographies:
Chee Wei Tan is an Assistant Professor at City University of Hong Kong. Previously, he was a Postdoctoral Scholar in the Rigorous Systems Research Group at the Dept. of Computing and Mathematical Sciences at Caltech with Prof. Steven H. Low. He received his PhD in Electrical Engineering from Princeton University in 2008, and was with the Edge Lab at Princeton. His PhD advisor was Prof. Mung Chiang. He has worked at Fraser Research Lab and Qualcomm R&D (QRC). He also did his doctoral work as a Visiting Scholar at the Coordinated Science Lab of UIUC. He received his B.S. in Electrical Engineering from the National University of Singapore. He is currently serving as the Chair of the IEEE Information Theory Society Hong Kong Chapter and as an Editor for the IEEE Transactions on Communications. Dr. Tan's research interests include networks, statistical inference in online data analytics, cyber security, mobile and cloud computing, optimization theory and its applications. Dr. Tan received the 2008 Wu Prize for Excellence from Princeton University and a 2011 IEEE Communications Society Asia-Pacific Outstanding Young Researcher Award. He received a 2013 NSF/TCPP Curriculum Initiative Early Adopter Award for parallel computing in network science. He was twice selected to participate at the U.S. National Academy of Engineering China-America Frontiers of Engineering Symposium, in 2013 and 2015. He was also the General Chair of the 2015 IEEE Hong Kong-Taiwan Joint Workshop on Information Theory and Communications sponsored by the Croucher Foundation.

Wenyi Zhang is a Professor at the University of Science and Technology of China. He attended Tsinghua University and obtained his Bachelor's degree in Automation in 2001. He studied in the University of Notre Dame, Indiana, USA, and obtained his Master's and Ph.D. degrees, both in Electrical Engineering, in 2003 and 2006, respectively. His PhD advisor was Prof. J. Nicholas Laneman. Prior to joining the faculty of the University of Science and Technology of China, he was affiliated with the Communication Science Institute, University of Southern California, as a postdoctoral research associate with Prof. Urbashi Mitra, and with Qualcomm Incorporated, Corporate Research and Development. Dr. Zhang has served on the editorial board of the IEEE Communications Letters. His research interests include information theory and its applications in wireless communications, and statistical signal processing with an emphasis on detection theory. Dr. Zhang is a 2011 IEEE Communications Society Asia-Pacific Outstanding Young Researcher Awardee, and was selected to participate at the U.S. National Academy of Engineering China-America Frontiers of Engineering Symposium in 2011.


TUT-VI- A Signal Processing Perspective of Financial Engineering

Abstract:
Financial engineering and electrical engineering are seemingly different areas that share strong underlying connections. Both rely on statistical analysis and modeling of systems, whether financial markets or wireless communication channels. Having a model of reality allows us to make predictions and to optimize strategies. It is as important to optimize our investment strategies in a financial market as it is to optimize the signal transmitted by an antenna in a wireless link.

This tutorial explores the multiple connections between quantitative investment in financial engineering and areas in signal processing and communications. We will show how to capitalize on existing mathematical tools and methodologies, developed and widely applied in the context of signal processing applications, to solve problems in portfolio optimization and investment management in quantitative finance. In particular, we will explore financial engineering in several respects:

i) we will provide the fundamentals of market data modeling and asset return predictability, as well as outline state-of-the-art methodologies for the estimation and forecasting of portfolio design parameters in realistic, non-frictionless financial markets;

ii) we will present the problem of optimal portfolio construction, elaborate on advanced optimization issues, and make the connections between portfolio optimization and filter/beamforming design in signal processing;

iii) we will reveal the theoretical mechanisms underlying the design and evaluation of statistical arbitrage trading strategies from a signal processing perspective based on multivariate data analysis and time series modeling; and

iv) we will discuss the optimal order execution and compare it with network scheduling in sensor networks and power allocation in communication systems.
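
To make the portfolio/beamforming connection in (ii) concrete, the following Python/NumPy sketch computes the classical global minimum-variance portfolio in closed form on simulated returns; the covariance matrix and sample size are made up for illustration. Note the formal resemblance of the weight formula to MVDR beamformer weights in array processing.

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated daily returns for 4 assets from a made-up covariance matrix
true_cov = 1e-4 * np.array([[1.0, 0.3, 0.1, 0.0],
                            [0.3, 1.0, 0.2, 0.0],
                            [0.1, 0.2, 1.0, 0.4],
                            [0.0, 0.0, 0.4, 1.0]])
R = rng.standard_normal((500, 4)) @ np.linalg.cholesky(true_cov).T

# Global minimum-variance portfolio: w = Sigma^{-1} 1 / (1^T Sigma^{-1} 1),
# formally analogous to MVDR beamformer weights w = R^{-1} a / (a^H R^{-1} a)
Sigma = np.cov(R, rowvar=False)
ones = np.ones(4)
w = np.linalg.solve(Sigma, ones)
w /= ones @ w
print("GMV weights:", np.round(w, 3), " portfolio variance:", w @ Sigma @ w)
```

Both formulas minimize a quadratic form subject to a linear constraint, which is exactly the kind of structural parallel the tutorial builds on.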

Biographies:
Daniel P. Palomar (S’99–M’03–SM’08–F’12) received the Electrical Engineering and Ph.D. degrees from the Technical University of Catalonia (UPC), Barcelona, Spain, in 1998 and 2003, respectively. He is a Professor in the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology (HKUST), Hong Kong, which he joined in 2006. Since 2013 he has been a Fellow of the Institute for Advanced Study (IAS) at HKUST. Dr. Palomar has previously held several research appointments, namely at King’s College London (KCL), London, UK; Technical University of Catalonia (UPC), Barcelona; Stanford University, Stanford, CA; Telecommunications Technological Center of Catalonia (CTTC), Barcelona; Royal Institute of Technology (KTH), Stockholm, Sweden; University of Rome “La Sapienza”, Rome, Italy; and Princeton University, Princeton, NJ. His current research interests include applications of convex optimization theory, game theory, and variational inequality theory to financial systems and communication systems. Dr. Palomar is an IEEE Fellow, a recipient of a 2004/06 Fulbright Research Fellowship, the 2004 Young Author Best Paper Award by the IEEE Signal Processing Society, the 2002/03 best Ph.D. prize in information technologies and communications by the Technical University of Catalonia (UPC), the 2002/03 Rosina Ribalta first prize for the Best Doctoral Thesis in information technologies and communications by the Epson Foundation, and the 2004 prize for the best Doctoral Thesis in Advanced Mobile Communications by the Vodafone Foundation and COIT.

Yiyong Feng received a B.E. degree in Electronic and Information Engineering from the Huazhong University of Science and Technology (HUST), Wuhan, China, in 2010. Since then he has been pursuing a Ph.D. degree in the Department of Electronic and Computer Engineering at the Hong Kong University of Science and Technology (HKUST) and he is graduating in August 2015. From March 2013 to August 2013, Mr. Feng was with the Systematic Market-Making Group at Credit Suisse (Hong Kong) which he will join in 2015 after his Ph.D. graduation. His research interests are in convex optimization, nonlinear programming, and robust optimization, with applications in signal processing, financial engineering, and machine learning.


TUT-VII- The Performance of Non-Smooth Convex Relaxation Methods for Structured Signal Recovery

Abstract:
In the past couple of decades, non-smooth convex optimization has emerged as a powerful tool for the recovery of structured signals (sparse, low rank, etc.) from (possibly) noisy measurements in a variety of applications in statistics, signal processing, machine learning, and beyond. In particular, the advent of compressed sensing has led to a flowering of ideas and methods in this area. While the algorithms (basis pursuit, LASSO, etc.) are fairly well established, rigorous frameworks for the exact analysis of the performance of such methods are only just emerging.

The goal of this tutorial is to develop and describe a fairly general theory for how to determine the performance (minimum number of measurements, mean-square-error, etc.) of such methods for certain measurement ensembles (Gaussian, Haar, etc.). This will allow researchers and practitioners to assess the performance of these methods before actual implementation and will allow them to optimally choose parameters such as the regularizer coefficients, the number of measurements, etc. The theory includes all earlier results as special cases. It builds on an inconspicuous 1962 lemma of Slepian (on comparing Gaussian processes), as well as on a 1988 non-trivial generalization due to Gordon, and introduces concepts from convex geometry (such as Gaussian widths) in a very natural way. The tutorial will explain all this, and its various implications, in some detail.
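
As a taste of what such exact analyses deliver, the following pure-Python sketch numerically evaluates the statistical dimension of the descent cone of the L1 norm at an s-sparse point, via the standard one-dimensional minimization over a Gaussian tail moment; this number predicts, up to small deviations, the minimum number of Gaussian measurements for basis pursuit to succeed. The formula is the well-known expression from the convex-geometry literature; the problem sizes are arbitrary.

```python
import math

def tail_moment(tau):
    # E[(g - tau)_+^2] for g ~ N(0, 1): equals (1 + tau^2) Q(tau) - tau phi(tau)
    Q = 0.5 * math.erfc(tau / math.sqrt(2.0))
    phi = math.exp(-tau * tau / 2.0) / math.sqrt(2.0 * math.pi)
    return (1.0 + tau * tau) * Q - tau * phi

def statistical_dimension(s, n):
    # Statistical dimension of the descent cone of ||.||_1 at an s-sparse point:
    # delta = min_{tau >= 0} [ s (1 + tau^2) + (n - s) E[(g - tau)_+^2] ]
    return min(s * (1.0 + tau * tau) + (n - s) * tail_moment(tau)
               for tau in (0.001 * k for k in range(20001)))

n, s = 1000, 50
m_needed = statistical_dimension(s, n)
print(f"~{m_needed:.0f} generic Gaussian measurements for s={s}, n={n}")
```

This is precisely the kind of sharp, parameter-explicit prediction (rooted in Gaussian widths and the Slepian/Gordon comparison inequalities) that the tutorial develops.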

Biography:
Babak Hassibi is the Gordon M. Binder/Amgen Professor of Electrical Engineering at the California Institute of Technology, where he has been since 2001 and where he was Executive Officer of Electrical Engineering from 2008 to 2015. From 1998 to 2001, he was a Member of the Technical Staff at the Mathematical Sciences Research Center at Bell Laboratories, Murray Hill, NJ, and prior to that he obtained his PhD in electrical engineering from Stanford University. His research interests span different aspects of communications, signal processing and control. Among other awards, he is a recipient of the David and Lucille Packard Foundation Fellowship, and the Presidential Early Career Award for Scientists and Engineers (PECASE).


TUT-VIII- Millimeter Wave Wireless Communications

Abstract:
Communication at millimeter wave (mmWave) frequencies is defining a new era of wireless communication. The mmWave band offers much higher bandwidth communication channels than presently used in commercial wireless systems. Wireless local area networks are already exploiting the 60 GHz mmWave band, while 5G cellular systems are likely to operate at other mmWave frequencies. Because of the large antenna arrays, different channel models, and new hardware constraints, signal processing is different in mmWave communication systems. This tutorial will provide an overview of mmWave wireless communication from a signal processing perspective. Topics covered include propagation models and the presence of sparsity in the channel, power consumption and resulting hardware constraints, MIMO techniques in mmWave including beam training, hybrid beamforming, MIMO with low-resolution analog-to-digital converters, and channel estimation. This tutorial opens the door to future applications of mmWave to transportation, cellular, massive MIMO, and wearables.
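
A minimal Python/NumPy sketch of the beam-training idea mentioned above, assuming a half-wavelength uniform linear array, a single-path channel, and an exhaustive sweep over a uniform angle codebook; all parameters are illustrative choices.

```python
import numpy as np

M = 64  # antennas in a half-wavelength uniform linear array

def steering(theta):
    # Unit-norm ULA steering vector at angle theta (radians)
    return np.exp(1j * np.pi * np.arange(M) * np.sin(theta)) / np.sqrt(M)

# Sparse mmWave channel: one dominant path at an unknown angle
true_angle = np.deg2rad(17.0)
h = np.sqrt(M) * steering(true_angle)

# Exhaustive beam training: sweep a uniform codebook, keep the strongest beam
codebook = np.deg2rad(np.linspace(-60.0, 60.0, 121))
gains = [np.abs(steering(a).conj() @ h) ** 2 for a in codebook]
best = codebook[int(np.argmax(gains))]
print("selected beam:", np.rad2deg(best), "deg; gain:", max(gains))
```

Practical systems replace the exhaustive sweep with hierarchical codebooks or compressive channel estimation that exploit the sparsity of the mmWave channel, which is one of the themes of this tutorial.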

Biography:
Robert W. Heath Jr. received the Ph.D. in EE from Stanford University. He is a Cullen Trust for Higher Education Endowed Professor in the Department of Electrical and Computer Engineering at The University of Texas at Austin and a Member of the Wireless Networking and Communications Group. He is the President and CEO of MIMO Wireless Inc and Chief Innovation Officer at Kuma Signals LLC. Prof. Heath is a recipient of the 2012 Signal Processing Magazine best paper award, a 2011 and 2013 EURASIP Journal on Wireless Communications and Networking best paper award, a 2013 Signal Processing Society best paper award, the 2014 EURASIP Journal on Advances in Signal Processing best paper award, and the 2014 Journal of Communications and Networks best paper award. He is a co-author of the book “Millimeter Wave Wireless Communications” published by Prentice Hall in 2014. He is a licensed Amateur Radio Operator, a registered Professional Engineer in Texas, and is a Fellow of the IEEE.


TUT-IX- Discontinuities-Preserving Image and Motion Coherence: Computational Models and Applications

Abstract:
Resulting from light measurements of a real scene, a natural image is not a collection of random numbers simply filling up a 2D matrix. Instead, there is a rather rich amount of redundancy, self-similarity, or coherence that exists locally and globally. In the same vein, visual correspondence fields or feature matches, which associate pixels (or feature points) in one image with their corresponding pixels (or feature points) in another image, possess a similar natural coherence property. However, this is just one side of the coin; on the other side, there always exist edges, boundaries, or discontinuities due to, e.g., the colorful yet non-flat world, independent motions of objects in the scene, and parallax induced by camera movements. As such, discontinuities in different visual “signals” are clearly roadblocks that algorithms have to deal with effectively when exploiting the coherence or smoothness property. Motivated by this, the talk centers on “coherence” and “discontinuities” for images and motions. We will introduce recent work along this line, ranging from modeling and efficient solutions to a wide range of applications.

We will start with a gentle introduction to various state-of-the-art nonlinear edge-aware image smoothing filters (both locally modeled and globally optimized versions). Thanks to their ability to adapt to a wide variety of visual signals, as well as their significant computational and implementation advantages, edge-aware image smoothing techniques have found a great variety of applications in image/video processing, computer vision, and computer graphics. In these applications, the smoothing techniques have been employed to allow for data adaptivity (or supports) in either local or global forms. We explain their theoretical connections, new insights, and generalizations. In particular, we focus on fast smoothing approaches, e.g., those using the bilateral grid, a color line model, multipoint aggregation, domain transform, fast global image smoothing, and so on. Then, their wide-ranging and concrete applications in image processing, computer vision, and computer graphics will be discussed.
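As a minimal illustration of the edge-aware principle described above (not part of the tutorial materials), the following Python/NumPy sketch implements a 1D bilateral filter on a noisy step signal; the function name and all parameter values are illustrative assumptions:

```python
import numpy as np

def bilateral_filter_1d(signal, spatial_sigma=2.0, range_sigma=0.1, radius=5):
    """Edge-aware smoothing: average neighbors weighted by both spatial
    proximity and intensity similarity, so sharp steps (edges) are
    preserved while flat regions are denoised."""
    out = np.empty_like(signal)
    offsets = np.arange(-radius, radius + 1)
    spatial_w = np.exp(-offsets**2 / (2 * spatial_sigma**2))
    padded = np.pad(signal, radius, mode="edge")
    for i in range(len(signal)):
        window = padded[i:i + 2 * radius + 1]
        # Range weights collapse for neighbors with very different intensity.
        range_w = np.exp(-(window - signal[i])**2 / (2 * range_sigma**2))
        w = spatial_w * range_w
        out[i] = np.dot(w, window) / w.sum()
    return out

# A noisy step edge: smoothing should reduce noise but keep the step.
rng = np.random.default_rng(0)
x = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
y = bilateral_filter_1d(x)
```

Because the range weights vanish across large intensity differences, the step edge survives while the flat regions are denoised; the 2D filters and fast approximations named above build on the same principle.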

Next, we will move on to the second part of our tutorial -- estimating “motion” fields between two (or more) images, also known as “visual correspondence”, which is a fundamental problem in numerous computer vision applications. In particular, we will cover various labeling optimization techniques, including local, semi-local, and global labeling techniques, which have been developed to design visual correspondence algorithms. Although these approaches have been developed from different perspectives, they share the central goal of efficiently computing a large number of accurate matches between a given pair of images under various challenging conditions. These difficult conditions include, for instance, matching image pairs in the presence of significant geometric and photometric transformations (e.g. scale, rotation, wide baseline, large and non-rigid motions, illumination changes, image quality), across different scene contents, or containing a significant number of outliers. We also introduce labeling techniques that effectively deal with a huge discrete label space and/or a high-order Markov Random Field (MRF) model by making use of efficient filtering algorithms and a smart randomized search idea. Finally, we will introduce exciting applications relying on coherent “motion” fields, including scene understanding, robot navigation, computational photography, and 3-D scene reconstruction. We will present the key ideas of these latest applications, while highlighting the essential roles that the “motion” fields play.

Biographies:
Jiangbo Lu received the B.S. (with honors) and M.S. degrees in electrical engineering from Zhejiang University, China, in 2000 and 2003, respectively, and the Ph.D. degree in electrical engineering from Katholieke Universiteit Leuven, Belgium, in 2009. He is a Senior Research Scientist with the Advanced Digital Sciences Center (ADSC), a Singapore-based research center of the University of Illinois at Urbana-Champaign. He also holds a joint appointment with the Coordinated Science Laboratory (CSL) of the University of Illinois. Some of his research work, jointly with his colleagues and project students, has led to several Best Paper Awards (or nominations) as well as ICT awards, such as the AIT Best Paper Award at the IEEE ICCV 2009 Workshop on Embedded Computer Vision together with K. Zhang. He is an Associate Editor for the IEEE Transactions on Circuits and Systems for Video Technology (T-CSVT) and received the 2012 TCSVT Best Associate Editor Award. His research interests include computer vision, visual computing, image processing, video communication, interactive multimedia applications and systems, and efficient algorithms for various architectures.

Dongbo Min received the B.S., M.S., and Ph.D. degrees from the School of Electrical and Electronic Engineering at Yonsei University, in 2003, 2005, and 2009, respectively. From 2009 to 2010, he worked at Mitsubishi Electric Research Laboratories (MERL) as a post-doctoral researcher, where he developed a prototype 3D video (3DTV) system. From 2010 to 2015, he worked at the Advanced Digital Sciences Center (ADSC) in Singapore, which was jointly founded by the University of Illinois at Urbana-Champaign (UIUC) and the Agency for Science, Technology and Research (A*STAR), a Singapore government agency. Since 2015, he has been with Chungnam National University (CNU), Daejeon, Korea, where he is currently an assistant professor in the Department of Computer Science and Engineering. His research interests include computer vision, 2D/3D video processing, computational photography, augmented reality, and continuous/discrete optimization.

Minh N. Do was born in Vietnam in 1974. He received the B.Eng. degree in computer engineering from the University of Canberra, Australia, in 1997, and the Dr.Sci. degree in communication systems from the Swiss Federal Institute of Technology Lausanne (EPFL), Switzerland, in 2001. Since 2002, he has been on the faculty at the University of Illinois at Urbana-Champaign (UIUC), where he is currently a Professor in the Department of Electrical and Computer Engineering and holds joint appointments with the Coordinated Science Laboratory, the Beckman Institute for Advanced Science and Technology, and the Department of Bioengineering. He received a Silver Medal from the 32nd International Mathematical Olympiad in 1991, a University Medal from the University of Canberra in 1997, a Doctorate Award from the EPFL in 2001, a CAREER Award from the National Science Foundation in 2003, and a Young Author Best Paper Award from the IEEE in 2008. He was named a Beckman Fellow at the Center for Advanced Study, UIUC, in 2006, and received a Xerox Award for Faculty Research from the College of Engineering, UIUC, in 2007. He was a member of the IEEE Signal Processing Theory and Methods Technical Committee and the Image, Video, and Multidimensional Signal Processing Technical Committee, and an Associate Editor of the IEEE Transactions on Image Processing. He is a Fellow of the IEEE.


TUT-X- Bayesian-Inspired Non-Convex Methods for Sparse Signal Recovery

Abstract:
This is a three hour (half day) tutorial that examines a Bayesian framework to address algorithmic issues that arise in sparse signal recovery problems. There are numerous signal processing and communications applications where this problem naturally arises. Parsimonious signal representation using overcomplete dictionaries for compression, estimation of sparse communication channels with large delay spread as in underwater acoustics, low-dimensional representation of MIMO channels, and brain imaging techniques such as MEG and EEG are a few examples.

The emergence of compressive sensing and the associated l1 recovery algorithms and theory has generated considerable excitement and interest in their application. This tutorial will examine more recent developments and a complementary set of tools based on a Bayesian framework to address the general problem of sparse signal recovery and the challenges associated with it. The Bayesian methods show considerable promise and have the flexibility necessary to deal with more general scenarios than hitherto possible. This generality and flexibility greatly facilitates their deployment in practice, even though they generally lead to non-convex optimization problems. The theory behind when and why these non-convex methods work is only now being developed. Signal processing and communications engineers are well versed in statistical methods and so have the background necessary to benefit from this exposure. This tutorial will provide a gentle, yet in-depth overview of this fascinating and nascent area within sparse signal recovery.

Biographies:
Bhaskar D. Rao received the B.Tech. degree in electronics and electrical communication engineering from the Indian Institute of Technology, Kharagpur, India, in 1979 and the M.S. and Ph.D. degrees from the University of Southern California, Los Angeles, in 1981 and 1983, respectively. Since 1983, he has been with the University of California at San Diego, La Jolla, where he is currently a Professor in the Electrical and Computer Engineering department. He is the holder of the Ericsson endowed chair in Wireless Access Networks and was the Director of the Center for Wireless Communications (2008-2011). Prof. Rao’s interests are in the areas of digital signal processing, estimation theory, and optimization theory, with applications to digital communications, speech signal processing, and biomedical signal processing. Prof. Rao was elected Fellow of the IEEE in 2000 for his contributions to the statistical analysis of subspace algorithms for harmonic retrieval. His work has received several paper awards.

Chandra R. Murthy received the B.Tech. degree in Electrical Engineering from the Indian Institute of Technology, Madras in 1998, and the M.S. and Ph.D. degrees in Electrical and Computer Engineering from Purdue University and the University of California, San Diego, in 2000 and 2006, respectively. From 2000 to 2002, he was at Qualcomm Inc., where he worked on WCDMA baseband transceiver design and 802.11b baseband receivers. From Aug. 2006 to Aug. 2007, he worked as a staff engineer at Beceem Communications Inc. on advanced receiver architectures for the 802.16e Mobile WiMAX standard. In Sept. 2007, he joined the faculty of the Department of Electrical Communication Engineering at the Indian Institute of Science, where he currently works. His research interests are in the areas of Cognitive Radio, Energy Harvesting Wireless Sensors, MIMO systems with channel-state feedback, and sparse signal recovery. He is currently serving as an associate editor for the IEEE Signal Processing Letters and as an elected member of the IEEE SPCOM Technical Committee for the years 2014-16.


TUT-XII- Supervised Speech Separation

Abstract:
The acoustic environment typically contains multiple simultaneous sound sources, and the target speech usually occurs with other interfering sounds. This creates the problem of speech separation, popularly known as the cocktail party problem. Speech separation has a wide range of important applications, including robust automatic speech and speaker recognition, hearing prostheses, and audio information retrieval (or audio data mining). As a result, a large number of studies in speech and audio processing have been devoted to speech separation, which has become even more important in recent years with the widespread adoption of mobile communication devices such as smartphones.

Traditional approaches to speech separation include speech enhancement based on analyzing signal statistics, beamforming or spatial filtering, and computational auditory scene analysis. An emerging trend in speech separation is the introduction of supervised learning. So-called supervised speech separation is a data-driven approach that trains a learning machine to perform speech separation. In particular, deep neural networks (DNNs) have been increasingly used for supervised speech separation in recent years. Among the major successes of supervised speech separation is the demonstration of substantial speech intelligibility improvements for hearing-impaired listeners in some noisy environments, an accomplishment that had eluded the signal processing field for decades.

This tutorial is designed to introduce the latest developments in supervised speech segregation, with emphasis on DNN-based separation methods. The tutorial will systematically introduce the fundamentals of supervised speech separation, including learning machines, features, and training targets. The tutorial will cover the separation of speech both from nonspeech noises and from competing talkers. We will also treat and compare masking-based and mapping-based techniques for supervised speech separation.

The tutorial intends to provide participants with a solid understanding of supervised speech separation, with the following foci. First, we explain how to formulate the speech separation problem in the supervised learning framework. Second, we describe the foundations behind representative algorithms, in conjunction with real-world applications. Third, we discuss both monaural (one-microphone) and binaural (two-microphone) speech separation and how to combine them.
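One widely used training target in masking-based separation is the ideal ratio mask (IRM). The sketch below computes the oracle mask on synthetic time-frequency power "spectrograms" (all array shapes and the scenario are illustrative assumptions, not the tutorial's data):

```python
import numpy as np

def ideal_ratio_mask(speech_power, noise_power):
    """Ideal ratio mask (IRM): per time-frequency unit, the fraction of
    total energy belonging to speech (values in [0, 1])."""
    return speech_power / (speech_power + noise_power + 1e-12)

# Synthetic T-F power "spectrograms": speech dominates low bands, noise high bands.
rng = np.random.default_rng(0)
T, F = 100, 64
speech = rng.exponential(1.0, (T, F)) * np.linspace(2.0, 0.1, F)
noise = rng.exponential(1.0, (T, F)) * np.linspace(0.1, 2.0, F)
mixture = speech + noise

irm = ideal_ratio_mask(speech, noise)       # oracle training target
estimate = irm * mixture                     # masking-based separation
```

A DNN-based system would be trained to predict `irm` from features of `mixture` alone; at test time the predicted mask replaces the oracle one, while mapping-based methods instead regress the clean spectrogram directly.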

Biography:
DeLiang Wang received the B.S. degree in 1983 and the M.S. degree in 1986 from Peking (Beijing) University, Beijing, China, and the Ph.D. degree in 1991 from the University of Southern California, Los Angeles, CA, all in computer science. From July 1986 to December 1987 he was with the Institute of Computing Technology, Academia Sinica, Beijing. Since 1991, he has been with the Department of Computer Science & Engineering and the Center for Cognitive and Brain Sciences at Ohio State University, Columbus, OH, where he is currently a Professor. He also holds a visiting appointment at the Center of Intelligent Acoustics and Immersive Communications, Northwestern Polytechnical University, Xi’an, China. He has been a visiting scholar to Harvard University, Oticon A/S (Denmark), and Starkey Hearing Technologies. Wang's research interests include machine perception and neurodynamics. Among his recognitions are the Office of Naval Research Young Investigator Award in 1996, the 2005 Outstanding Paper Award from IEEE Transactions on Neural Networks, and the 2008 Helmholtz Award from the International Neural Network Society. In 2014, he was named a University Distinguished Scholar by Ohio State University. He serves as Co-Editor-in-Chief of Neural Networks, and on the editorial boards of several journals including IEEE/ACM Transactions on Audio, Speech, and Language Processing. He is an IEEE Fellow.


TUT-XIII- Computational Visual Attention: Approaches and Applications

Abstract:
Visual attention is the cognitive process by which humans selectively concentrate on certain aspects of visual signals, selecting the information that is most interesting and relevant. In this tutorial, we will introduce the principles of visual attention as well as experiments on visual attention, where related studies such as single-cell, fMRI, and psychophysical experiments are discussed. In addition, we will highlight eye tracking systems and their protocols. Afterward, we will introduce recent advances in computational visual attention methods, including bottom-up and top-down approaches. The applications of computational visual attention, such as quality assessment, perceptual video coding and communications, are then presented. Finally, we will provide a summary and discuss future possibilities.

Biographies:
Weisi Lin received his Ph.D. from King’s College, London University, U.K. He served as the Lab Head of Visual Processing, Institute for Infocomm Research, Singapore. Currently, he is an Associate Professor in the School of Computer Engineering. His areas of expertise include image processing, perceptual signal modeling, video compression, and multimedia communication, in which he has published 120+ journal papers and 200+ conference papers, filed 7 patents, and authored 2 books. He is an AE for IEEE Trans. on Image Processing, IEEE Signal Processing Letters and Journal of Visual Communication and Image Representation, and a past AE for IEEE Trans. on Multimedia. He has also served as a Guest Editor for 7 special issues in international journals. He has been a Technical Program Chair for IEEE ICME 2013, PCM 2012, and QoMEX 2014. He chaired the IEEE MMTC Special Interest Group on QoE (2012-2014). He has been an invited/panelist/keynote/tutorial speaker in 10+ international conferences, as well as a Distinguished Lecturer of Asia-Pacific Signal and Information Processing Association (APSIPA), 2012-2013.

Zhenzhong Chen received the B.Eng. degree from Huazhong University of Science and Technology, Wuhan, China, and the Ph.D. degree from the Chinese University of Hong Kong, Shatin, China, both in electrical engineering. He is currently a Professor at Wuhan University (WHU). Before joining WHU, he worked at MediaTek USA Inc., San Jose, CA, USA. His current research interests include visual perception, image processing, multimedia communications, and intelligent systems. He is a Selection Committee Member of the ITU Young Innovators Challenges, a member of the IEEE Multimedia Systems and Applications Technical Committee, and Co-Chair of the IEEE Multimedia Communication TC Networking Technologies for Multimedia Communication Interest Group. He is an editor of the Journal of Visual Communication and Image Representation and an editor of the IEEE IoT Newsletter. He has been the Special Session Chair of the IEEE World Forum on Internet of Things 2014 and Publication Chair of the IEEE Conference on Multimedia and Expo 2014, and has served as a technical program committee member for IEEE ICC, GLOBECOM, CCNC, ICME, etc. He has published more than 100 international journal and conference papers. He was a recipient of the CUHK Young Scholar Dissertation Award, the CUHK Faculty of Engineering Outstanding Ph.D. Thesis Award, a Microsoft Fellowship, an ERCIM Alain Bensoussan Fellowship, and the First Class Prize of the 2015 IEEE BigMM Challenge.


TUT-XIV- Energy-Efficient Resource Allocation for 5G Wireless Networks via Fractional Programming Theory

Abstract:
The tutorial will provide the background and the tools to model, analyze, and solve energy-efficient problems in future wireless networks, as well as cover recent advances in this field. The tutorial will be organized into three main parts.

1) Introduction. The tutorial will start by introducing the problem of energy efficiency, motivating its importance in present and, above all, future wireless networks. Afterwards, the different metrics that have appeared in the literature to quantify the energy efficiency of a communication system will be introduced and discussed. We will start from the simpler case of a single communication link, gradually moving to the more general scenario of an interference network with multiple communication links, where multiple antennas, multiple carriers, and multiple hops are present. In all cases, we will show that energy efficiency is naturally defined by fractional functions which measure the benefit-cost ratio of the data transmissions, in terms of data rate and reliability on the one hand and consumed energy on the other. As a result, maximizing the energy efficiency of a wireless network naturally leads to a fractional program. Fractional programs are in general non-convex, and therefore conventional convex optimization tools do not apply. Instead, the theories of generalized concavity and fractional programming are the best-suited tools for tackling fractional optimization problems.

2) Fractional programming theory. Motivated by the introductory part, the second part of the tutorial aims at providing the audience with a solid background in fractional programming theory, explaining the concepts and key tools to understand, formulate, and solve practical energy-efficient problems. By means of simple examples we will show how different energy-efficient problems from real-world systems fit into the fractional programming framework. The essential notion of generalized concavity is formally introduced, and the main tools for handling fractional optimization problems are described. All relevant cases that are often encountered in practice are covered. We start from the simpler case of single-ratio problems, gradually moving towards the more advanced scenario of multi-ratio problems, which often arise in heterogeneous networks, where a sum or product of ratios is to be maximized, as well as in worst-case designs, where the goal is the maximization of the minimum of a family of ratios. For each type of problem, the most widely used solution algorithms are explained and compared.

3) Applications. The third part of the tutorial focuses on applications of the developed framework. Several examples will be provided to show how fractional programming proves to be an extremely valuable tool for solving practical energy-efficient resource management problems, which will become more and more relevant with the advent of 5G wireless networks. We will first introduce a general signal model for interference networks, showing how it can be easily specialized to most of the leading technologies proposed for 5G networks: heterogeneous networks, multi-hop networks, small-cell networks, LTE/A multi-cell and CoMP systems, device-to-device communications, massive MIMO systems, and full-duplex transmission. Afterwards, selected state-of-the-art applications will be covered in detail, including examples of energy-efficient MIMO precoding, multi-hop design, and multi-carrier scheduling. The resulting programming problems turn out to be challenging non-convex optimization problems in which vectors or matrices need to be optimized. The tutorial will show how the framework developed in the first part can be used to systematically tackle such problems, describing the latest advances in this field. In addition, the tutorial will explain how to integrate fractional programming with other optimization tools, such as sequential optimization, in order to further extend the range of possible applications.
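The single-ratio case at the heart of this framework is classically handled with Dinkelbach's algorithm, which reduces a fractional program to a sequence of concave subproblems. The sketch below maximizes the energy efficiency rate/power of a single link over its transmit power; the channel gain, noise level, amplifier inefficiency, and circuit power are illustrative assumptions, not values from the tutorial:

```python
import numpy as np

def rate(p, g=10.0, N0=1.0, B=1.0):
    """Achievable rate (bits/s) at transmit power p."""
    return B * np.log2(1.0 + g * p / N0)

def power(p, mu=2.0, Pc=1.0):
    """Consumed power: amplifier inefficiency mu times p, plus circuit power Pc."""
    return mu * p + Pc

def dinkelbach(p_max=10.0, tol=1e-9, g=10.0, N0=1.0, B=1.0, mu=2.0, Pc=1.0):
    """Dinkelbach's algorithm: reduce max f(p)/h(p) to a sequence of concave
    subproblems max f(p) - lam*h(p), updating lam = f(p*)/h(p*)."""
    lam = 0.0
    for _ in range(100):
        if lam > 0:
            # Closed-form stationary point of the concave subproblem, clipped to [0, p_max].
            p = B / (np.log(2.0) * lam * mu) - N0 / g
            p = min(max(p, 0.0), p_max)
        else:
            p = p_max
        F = rate(p, g, N0, B) - lam * power(p, mu, Pc)
        lam = rate(p, g, N0, B) / power(p, mu, Pc)
        if abs(F) < tol:       # F -> 0 exactly at the optimal ratio
            break
    return p, lam              # optimal power and energy efficiency (bits/Joule)

p_opt, ee_opt = dinkelbach()
```

Here each subproblem even has a closed-form solution; in the multi-ratio and matrix-valued problems covered by the tutorial, the inner maximization is itself solved with convex optimization tools.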

Both centralized and distributed resource allocation approaches will be introduced and compared in terms of energy-efficient performance, computational complexity, and overhead. In addition, quantitative analysis regarding the practical implementation of state-of-the-art resource allocation algorithms for future wireless networks will be provided, based on the experience gained from our daily work in the 5G Lab Germany (previously known as Dresden 5G Lab).

At the end of the tutorial, we will discuss the latest research directions and open issues that in our opinion represent the most important challenges that still lie ahead for the successful implementation of energy-efficient 5G networks.

The target audience includes both academic researchers interested in studying the theoretical foundations of energy-efficient wireless communications and industry practitioners wishing to learn the latest results and findings about energy efficiency in wireless networks. Being focused on providing the fundamentals of energy efficiency analysis and optimization, the tutorial is quite self-contained, and only very little prior knowledge is expected from the attendees. All concepts will be rigorously introduced, and the explanation will be corroborated by many examples and figures. At the same time, detailed references will be provided for those interested in deepening the theoretical details.

Biographies:
Alessio Zappone obtained his Master's degree in telecommunication engineering and his Ph.D. degree in electrical engineering in 2007 and 2011, respectively, from the Università degli Studi di Cassino e del Lazio Meridionale, Cassino, Italy. Since October 2012, Alessio has been with the Technische Universitaet Dresden, managing the project CEMRIN on energy-efficient resource allocation in wireless networks, funded by the German Research Foundation (DFG). Since 2015, Alessio has served as an Associate Editor of the IEEE Signal Processing Letters, and he is a Guest Editor of the IEEE JSAC Special Issue on “Energy-Efficient Techniques for 5G Wireless Communication Systems”.

Eduard A. Jorswieck was born in 1975 in Berlin, Germany. He received his Diplom-Ingenieur (M.S.) degree and Doktor-Ingenieur (Ph.D.) degree, both in electrical engineering and computer science, from the Technische Universitaet Berlin, Germany, in 2000 and 2004, respectively. He was with the Fraunhofer Institute for Telecommunications, Heinrich-Hertz-Institut (HHI) Berlin, in the Broadband Mobile Communication Networks Department from December 2000 to February 2008. Since February 2008, he has been the head of the Chair of Communications Theory and a Full Professor at Dresden University of Technology (TUD), Germany. Dr. Jorswieck is a Senior Member of the IEEE and was a member of the IEEE SPCOM Technical Committee (2008-2013). He served as an Associate Editor for the IEEE Signal Processing Letters from 2008 to 2011 and has been a Senior Associate Editor for the same journal since 2012. Since 2011, he has been an Associate Editor for the IEEE Transactions on Signal Processing, and since 2013 for the IEEE Transactions on Wireless Communications. In 2006, he received the IEEE Signal Processing Society Best Paper Award.


TUT-XV- Multiscale Signal Processing for Wearable Health: Sleep, Stress, and Fatigue Applications

Abstract:
This tutorial brings together three main aspects of future wearable health technology: (i) adequate signal processing algorithms, (ii) miniaturised hardware for 24/7 continuous monitoring of the mind and body, and (iii) development of applications for use in natural environments. Based upon our 10 years of experience in human-computer interfaces, we will bring together the latest advances in multiscale signal processing, complexity science, and their application in real-world scenarios for next-generation personalised healthcare, such as sleep, fatigue, and stress monitoring. Our particular emphasis will be on solutions to the challenges posed by imperfect but ultra-wearable, unobtrusive, and discreet sensors. To this end, insights into the biophysics of the generation and acquisition of human physiological responses will be used as a foundation, and indeed the motivation, for the multiscale signal processing algorithms covered. We will also discuss opportunities in multi-person behavioural science, enabled by our own wearable sensing platforms, such as vital sign monitoring from inside the ear canal (ECG, EEG, respiration, etc.) and our miniaturised biosignal acquisition unit.

Biographies:
Danilo P. Mandic is a Professor in signal processing with Imperial College London, UK, and has been working in the areas of adaptive signal processing and bioengineering. He is a Fellow of the IEEE, a member of the Board of Governors of the International Neural Networks Society (INNS), a member of the Big Data Chapter within INNS, and a member of the IEEE SPS Technical Committee on Signal Processing Theory and Methods. He has received five best paper awards in Brain Computer Interface, runs the Smart Environments Lab at Imperial, and has more than 300 publications in journals and conferences. Prof. Mandic has received the President's Award for Excellence in Postgraduate Supervision at Imperial.

Valentin Goverdovsky received the M.Eng. degree in electronic engineering and the Ph.D. in communications from Imperial College London, UK. He is currently a Rosetrees Fellow at the Department of Electrical and Electronic Engineering of Imperial College London. His research focuses on biomedical instrumentation, analog integrated circuits, and radio-frequency communications. Dr Goverdovsky won the Eric Laithwaite Award at Imperial College for the best research in 2014. His recent work has been on the development of wearable biosensing platforms for 24/7 monitoring of brain and body functions in the context of traumatic brain injury.


TUT-XVI- Phase Retrieval: Theory, Algorithms, and Applications

Abstract:
In many areas of science and engineering, one has access to magnitude-only measurements, as detectors can often only record the modulus of the scattered radiation from an object and not its phase. Phase retrieval is the problem of recovering a signal from such measurements. Due to its practical significance in imaging sciences ranging from X-ray crystallography to astronomy and optics, numerous heuristics have been developed over the past century to solve such problems.

Novel theoretical developments as well as exciting new applications in the area of Coherent Diffraction Imaging (CDI) aimed at inferring 3D structure of molecules have led to a renewed interest in the phase retrieval problem. This tutorial gives an overview of phase retrieval, the physics behind it as well as old and new applications and algorithms. We will also discuss very recent theory showing the success of these algorithms as well as recent research efforts in imaging applications. Furthermore, we discuss how recent results in this area apply more broadly to problems involving quadratic measurements such as blind deconvolution. We hope that this tutorial will serve as an introduction to the field in order to inspire more researchers to join this exciting area.
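A classical heuristic of the kind mentioned above is the Gerchberg-Saxton / error-reduction iteration, sketched below on a synthetic 1D signal with a known support (the setup, dimensions, and constraint set are illustrative assumptions, not the tutorial's examples):

```python
import numpy as np

def error_reduction(mag, support, iters=200, seed=0):
    """Gerchberg-Saxton / error-reduction phase retrieval: alternate between
    enforcing the measured Fourier magnitudes and the object-domain
    constraints (known support, real-valued, nonnegative)."""
    rng = np.random.default_rng(seed)
    x = rng.random(len(mag)) * support       # random feasible starting guess
    errors = []
    for _ in range(iters):
        X = np.fft.fft(x)
        errors.append(float(np.linalg.norm(np.abs(X) - mag)))
        X = mag * np.exp(1j * np.angle(X))   # keep phase, impose magnitudes
        x = np.real(np.fft.ifft(X))
        x = np.clip(x, 0.0, None) * support  # impose support and nonnegativity
    return x, errors

# Ground truth: nonnegative signal on a known support, zero-padded (oversampled).
rng = np.random.default_rng(1)
n, s = 128, 16
support = np.zeros(n)
support[:s] = 1.0
x_true = rng.random(n) * support
mag = np.abs(np.fft.fft(x_true))             # magnitude-only measurements

x_hat, errors = error_reduction(mag, support)
```

The Fourier-magnitude error of this iteration is provably non-increasing, but convergence to the true signal is not guaranteed (the iteration can stagnate, and shifts and flips are inherent ambiguities), which is precisely the gap the recent theory discussed in the tutorial addresses.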

Biographies:
Yonina C. Eldar received the B.Sc. degree in Physics in 1995 and the B.Sc. degree in Electrical Engineering in 1996, both from Tel-Aviv University (TAU), Tel-Aviv, Israel, and the Ph.D. degree in Electrical Engineering and Computer Science in 2002 from the Massachusetts Institute of Technology (MIT), Cambridge. Dr. Eldar has received numerous awards for excellence in research and teaching, including the IEEE Signal Processing Society Technical Achievement Award (2013), the IEEE/AESS Fred Nathanson Memorial Radar Award (2014), and the IEEE Kiyo Tomiyasu Award (2016). She is a member of the Young Israel Academy of Science and Humanities and the Israel Committee for Higher Education, and an IEEE Fellow. She is the author of the book “Sampling Theory: Beyond Bandlimited Systems” and co-author of the books “Compressed Sensing” and “Convex Optimization Methods in Signal Processing and Communications”, all published by Cambridge University Press.

Mahdi Soltanolkotabi completed his Ph.D. in electrical engineering at Stanford University in 2014. He was a postdoctoral researcher in the EECS and Statistics departments at UC Berkeley during the 2014-2015 academic year. His research is on mathematical data analysis focusing on design and understanding of computationally efficient algorithms for convex and non-convex optimization, high dimensional statistics, machine learning, signal processing and computational imaging.


ICASSP 2016 Patrons