Monday, January 22nd, 2018

9:30 am – 10:30 am

Astrophysics is transforming from a data-starved to a data-swamped discipline, fundamentally changing the nature of scientific inquiry and discovery. New technologies are enabling the detection, transmission, and storage of data of hitherto unimaginable quantity and quality across the electromagnetic, gravitational, and particle spectra. The observational data obtained in the next decade alone will supersede everything accumulated over the preceding four thousand years of astronomy. Within the next year there will be no fewer than four large-scale photometric and spectroscopic surveys underway, each generating and/or utilizing tens of terabytes of data per year. Some will focus on the static universe while others will greatly expand our knowledge of transient phenomena. Maximizing the science from these programs requires integrating the processing pipeline with high-performance computing resources coupled to large astrophysics databases with near real-time turnaround. Here, I will present an overview of the history of transient studies in astronomy and the first of these programs, which fundamentally changed the way we study these phenomena: the Palomar Transient Factory (PTF). In particular, I will highlight how PTF has enabled a much more robust nearby supernova program, allowing us to carry out next-generation cosmology programs with both Type Ia and II-P supernovae, while at the same time discovering events that previously exploded only in the minds of theorists. I will also discuss the synergy between these programs and future spectroscopic surveys like the Dark Energy Spectroscopic Instrument.

11:00 am – 12:00 pm

To identify scientifically valuable objects like supernovae or asteroids on the sky, astronomical imaging surveys have historically adopted a manual approach, employing humans to visually inspect data for signatures of the events. But recent advances in the capabilities of telescopes, detectors, and supercomputers have fueled a dramatic rise in the data production rates of such surveys, straining the ability of their teams to quickly and comprehensively inspect images to perform discovery. In this talk I describe how machine learning has provided a transformative solution to this astronomical problem. As an example, I describe a Random Forest approach that our group developed to automate the discovery of supernovae and other interesting objects on images, and I present results from its use in the Dark Energy Survey Supernova program (DES-SN), where it was trained using a sample of 898,963 signal and background events generated by the transient detection pipeline. After reprocessing the data collected during the first DES-SN observing season (2013 September through 2014 February) using the algorithm, the number of transient candidates eligible for human scanning decreased by a factor of 13.4, while only 1.0% of the artificial Type Ia supernovae (SNe) injected into search images to monitor survey efficiency were lost, most of which were very faint events. This advance dramatically improved the efficiency of the survey. An implementation of the algorithm and the training data are available at http://portal.nersc.gov/project/dessn/autoscan.
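
As a rough illustration of the approach (not the actual autoscan implementation; the features below are random stand-ins for the difference-image features the pipeline extracts), here is a minimal scikit-learn sketch of a Random Forest that scores candidates so that only high-scoring ones reach human scanners:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n = 10_000
    X = rng.normal(size=(n, 5))        # stand-in feature vectors per detection
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)

    # Score candidates; only those above the threshold go to human scanners.
    scores = clf.predict_proba(X_test)[:, 1]
    print(f"fraction forwarded to scanners: {(scores > 0.5).mean():.3f}")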

1:30 pm – 2:30 pm

Earthquakes pose a critical threat to people and infrastructure in cities around the world. By combining advances in earthquake science with new communication and sensing capabilities, it is now possible to provide warning of coming earthquake shaking. Both traditional seismic sensor networks and the sensors in personal smartphones make it possible to rapidly detect earthquakes and warn of coming shaking, providing time for individuals to move to a safe space, to slow and stop trains, and to isolate hazardous machinery and chemicals at work, thereby reducing damage and injuries.
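
To make the warning window concrete, here is a back-of-the-envelope sketch; the wave speed and alert latency are illustrative assumptions, not figures from the talk:

    # Damaging shaking arrives roughly with the S waves (~3.5 km/s).
    # latency_s is an assumed delay between the origin time and the alert.
    def warning_time(distance_km, s_speed_km_s=3.5, latency_s=5.0):
        """Approximate seconds of warning at a site distance_km from the source."""
        return distance_km / s_speed_km_s - latency_s

    for d in (20, 50, 100):
        print(f"{d} km away: ~{warning_time(d):.0f} s of warning")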

The first session of this mini course will take place on Monday, January 22 from 1:30 to 2:30 p.m.; the second session of this mini course will take place on Monday, January 22 from 3:00 p.m. to 4:30 p.m.

3:00 pm – 4:00 pm

This session continues the discussion of earthquake early warning with a non-traditional approach: MyShake, a global smartphone seismic network that harnesses the power of crowdsourcing to detect earthquakes. In this talk, I will take you on a journey through the technical details of how we built the smartphone seismic network. The application running on the phone detects earthquake-like motion locally, and the server confirms an earthquake by aggregating data from multiple phones. The application uses an Artificial Neural Network (ANN) algorithm running on the phone to distinguish earthquake motion from human activities recorded by the onboard accelerometer. Once the ANN detects earthquake-like motion, it sends a message with a timestamp and location in near real-time to the server, which confirms the presence of an earthquake based on a cluster of triggers from phones close in both time and space. At the same time, a 5-minute time series is recorded and uploaded to the server. Based on this time series data, we also built a convolutional neural network that runs on the server to further classify the nature of the waveforms (earthquake or not). You can find more information at http://myshake.berkeley.edu/.
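
As a rough sketch of the server-side confirmation step (the thresholds below are illustrative placeholders, not MyShake's actual parameters), an event is declared only when enough phones trigger close together in time and space:

    from dataclasses import dataclass
    from math import radians, sin, cos, asin, sqrt

    @dataclass
    class Trigger:
        t: float      # trigger time, seconds
        lat: float    # phone latitude, degrees
        lon: float    # phone longitude, degrees

    def haversine_km(a, b):
        """Great-circle distance between two triggers, in kilometers."""
        dlat, dlon = radians(b.lat - a.lat), radians(b.lon - a.lon)
        h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
        return 2 * 6371 * asin(sqrt(h))

    def confirmed(triggers, window_s=30.0, radius_km=50.0, min_phones=4):
        """Declare an earthquake if min_phones triggers cluster within
        window_s seconds and radius_km of some anchor trigger."""
        for anchor in triggers:
            close = [g for g in triggers
                     if abs(g.t - anchor.t) <= window_s
                     and haversine_km(anchor, g) <= radius_km]
            if len(close) >= min_phones:
                return True
        return False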

The first session of this mini course will take place on Monday, January 22 from 1:30 to 2:30 p.m.; the second session of this mini course will take place on Monday, January 22 from 3:00 p.m. to 4:30 p.m.

Tuesday, January 23rd, 2018

9:30 am – 10:30 am

The world consists of many things that move: people go to work, home, school, and shopping and entertainment centers every day, using public transit systems, cars, and taxis. Goods move on roads, over water, or by air; and food travels a long distance to meet its consumer. Thus, massive movement processes are underway in the world every day, and it is critical to ensure their safe, timely, and efficient operation. Towards this end, low-cost sensing and acquisition of movement data are being achieved: from GPS devices, RFID and barcode scanners, to smart commuter cards and smartphones, snapshots of the movement process are becoming available.

In this two-part presentation I will describe two tools for understanding and shaping urban mobility: (i) a big data system for stitching together movement snapshots and reconstructing urban mobility at a very fine-grained level, and (ii) “nudge engines”. The system provides an interactive dashboard and a querying engine for answering questions such as: What is the crowding at a train station? Where are packages held up and how can their delivery be sped up? Do New York Knicks fans tip taxi rides more when their team wins than when it loses? I will then describe how movement processes can be shaped using “nudge engines”, which incentivize commuters to shift their travel patterns using monetary incentives and personalized offers. Nudge engines have been deployed worldwide: to incentivize commuters at Infosys (Bangalore), Stanford University (in a DoT-funded project), Singapore Metro, and BART (Bay Area Rapid Transit); they have also been used to incentivize employees of Accenture USA to undertake more physical activity. I will describe the components of a nudge engine (notably lottery-like payoffs and personalized offers) as well as a mathematical model for analyzing their performance.
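
As a toy illustration of the lottery-like payoff idea (every number below is invented): entering each shifted trip into a 1-in-200 draw for a $100 prize costs the platform the same in expectation as a fixed 50-cent reward, and the premise behind such designs is that small chances of large prizes are more motivating than the equivalent fixed payment:

    import random

    def run_day(commuters, shift_prob, prize=100.0, odds=1 / 200):
        """Simulate one day: each commuter shifts off-peak with probability
        shift_prob; each shifted trip wins `prize` with probability `odds`."""
        shifted = sum(random.random() < shift_prob for _ in range(commuters))
        payout = sum(prize for _ in range(shifted) if random.random() < odds)
        return shifted, payout

    random.seed(0)
    shifted, payout = run_day(commuters=10_000, shift_prob=0.15)
    print(f"shifted trips: {shifted}, realized cost per shifted trip: "
          f"${payout / max(shifted, 1):.2f}")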

The first session of this mini course will take place on Tuesday, January 23 from 9:30 to 10:30 a.m.; the second session of this mini course will take place on Tuesday, January 23 from 11:00 a.m. to 12:00 p.m.

11:00 am – 12:00 pm


The first session of this mini course will take place on Tuesday, January 23 from 9:30 to 10:30 a.m.; the second session of this mini course will take place on Tuesday, January 23 from 11:00 a.m. to 12:00 p.m.

1:30 pm – 3:00 pm

Bike-sharing systems are changing the urban transportation landscape; for example, New York launched the largest bike-sharing system in North America in May 2013, and riders took more than 15 million trips in 2017. We have worked with Citibike, using analytics and optimization to change how they manage the system. Huge rush-hour usage imbalances the system; we address two questions: where should bikes be at the start of a day, and how can we mitigate the imbalances that develop?

We will survey the analytics we have employed for the former question, where we developed an approach based on continuous-time Markov chains combined with integer programming models to compute daily stocking levels for the bikes, as well as methods employed for optimizing the capacity of the stations. For the question of mitigating the imbalances that result, we will describe both heuristic methods and approximation algorithms that guide mid-rush-hour and overnight rebalancing, as well as the positioning of corrals, which have been one of the most effective means of creating adaptive capacity in the system. More recently, we have guided the development of Bike Angels, a program that incentivizes users to make “rebalancing rides”, and we will describe its underlying analytics. We will also discuss a number of significant challenges that remain as these systems evolve.
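
As a single-station toy version of the stocking-level question (the approach described above couples such Markov models with integer programming across the whole system; the rates and capacity here are invented), one can simulate rentals and returns as Poisson streams and pick the morning bike count that minimizes customers turned away:

    import numpy as np

    def dissatisfaction(start_bikes, capacity=30, rent_rate=8.0,
                        return_rate=6.0, hours=12, trials=500, seed=0):
        """Expected number of failed rentals (station empty) plus failed
        returns (station full) over a day, estimated by simulation."""
        rng = np.random.default_rng(seed)
        total_rate = rent_rate + return_rate
        failures = 0
        for _ in range(trials):
            bikes, t = start_bikes, 0.0
            while True:
                t += rng.exponential(1.0 / total_rate)
                if t > hours:
                    break
                if rng.random() < rent_rate / total_rate:   # rental attempt
                    if bikes == 0:
                        failures += 1        # would-be renter turned away
                    else:
                        bikes -= 1
                else:                                       # return attempt
                    if bikes == capacity:
                        failures += 1        # no open dock for the return
                    else:
                        bikes += 1
        return failures / trials

    best = min(range(31), key=dissatisfaction)
    print("best starting level:", best)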

This is joint work with Daniel Freund, Shane Henderson, & Eoin O’Mahony, as well as Hangil Chung, Aaron Ferber, Nanjing Jian, Ashkan Norouzi-Fard, and Alice Paul.

3:30 pm – 5:00 pm

Ridesharing systems are increasingly important components of the transportation infrastructure, and understanding how to design them efficiently, and how they affect society at large, is one of the pressing problems of the day. Moreover, from an academic perspective, ridesharing platforms are amazing laboratories for real-time algorithms, presenting challenges in large-scale learning, real-time stochastic control and market design. In this talk, we will go over the basic structure of a ridesharing system, summarize the state-of-the-art in theoretical models and algorithms, and discuss the main open questions posed by these platforms.
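
As a minimal sketch of the real-time matching loop at the heart of such a platform (real systems batch requests over short windows, forecast demand, and set prices; this greedy nearest-driver rule is only the simplest baseline):

    import math

    # Idle drivers and their locations (illustrative coordinates).
    drivers = {"d1": (0.0, 0.0), "d2": (2.0, 1.0), "d3": (5.0, 5.0)}

    def dispatch(request_xy):
        """Match a request to the nearest idle driver, remove that driver
        from the idle pool, and return (driver_id, pickup_distance)."""
        best = min(drivers, key=lambda d: math.dist(drivers[d], request_xy))
        return best, math.dist(drivers.pop(best), request_xy)

    print(dispatch((1.0, 1.0)))   # nearest driver d2 is assigned
    print(dispatch((1.0, 1.0)))   # d2 is now busy, so d1 is assigned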

Wednesday, January 24th, 2018

9:30 am – 10:30 am

I will give a very brief introduction to algorithms as a way of conveying how Computer Science theorists think about the world. I will also give a brief introduction to algorithmic game theory and risk-averse decision making, mentioning applications to network routing and transportation.
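
One standard example used in introductions to algorithmic game theory is Pigou's selfish-routing network: one unit of traffic chooses between a road with constant latency 1 and a road whose latency equals its own load. A few lines suffice to exhibit the gap between selfish equilibrium and social optimum (the price of anarchy, here 4/3):

    def avg_cost(x):
        """Average latency when a fraction x of traffic takes the
        variable-latency road (latency x) and 1-x takes the constant road."""
        return x * x + (1 - x) * 1.0

    # At equilibrium everyone takes the variable road (its latency never
    # exceeds 1), so the average cost is 1. The optimum splits the traffic.
    equilibrium = avg_cost(1.0)
    optimum = min(avg_cost(i / 1000) for i in range(1001))   # minimized at x = 1/2
    print(equilibrium, optimum, equilibrium / optimum)       # 1.0, 0.75, ~4/3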

11:00 am – 12:30 pm

Linear-time algorithms have long been considered the gold standard of computational efficiency. Indeed, it is hard to imagine doing better than that, since for a nontrivial problem, any algorithm must consider all of the input in order to make a decision. However, as extremely large data sets are pervasive, it is natural to wonder what information can be computed in sub-linear time. Over the past decades, several surprising advances have been made on designing such algorithms. We will give a non-exhaustive survey of this emerging area, highlighting recent progress and directions for further research. Special attention will be given to (1) sub-linear time algorithms for estimating parameters of graphs, (2) local algorithms for optimization problems, and (3) algorithms for estimating parameters of discrete distributions over large domains, with a number of samples that is sub-linear in the domain size.
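
The flavor of topic (3) can be seen in the simplest possible setting: estimating a global statistic of a huge input from a number of samples that does not depend on the input size at all (a minimal sketch, with an invented property oracle):

    import random

    random.seed(0)
    N = 10**8                       # conceptually huge input, accessed pointwise

    def has_property(i):
        """Oracle access to item i (invented rule: every third item qualifies)."""
        return i % 3 == 0

    samples = 10_000                # depends on desired accuracy, not on N
    est = sum(has_property(random.randrange(N)) for _ in range(samples)) / samples
    print(f"estimated fraction: {est:.3f} (true value ~1/3)")
    # A Chernoff bound gives additive error eps with high probability from
    # O(1/eps^2) samples, independent of N.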

2:00 pm – 3:00 pm

Control theory provides a set of mathematical representations and analytical tools for understanding and designing complex, interconnected, decision-making systems. Key principles include the role of feedback as a mechanism for providing robust performance in the presence of uncertainty and the role of feedback as a means of designing the dynamics of an interconnected system. Feedback can be used to create modularity and shape well-defined relations between inputs and outputs in a structured hierarchical manner, enabling the creation of robust, large-scale, complex systems. In this set of talks, I will introduce the ideas from control theory in a manner that is intended to be accessible to scientists and engineers from a diverse set of backgrounds, with an emphasis on the architectures and tools that might be useful in the context of real-time decision-making systems.
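
A minimal numerical sketch of the first principle, feedback providing robustness in the presence of uncertainty (the scalar plant and gain below are invented for illustration): the same proportional controller regulates the system whether the uncertain plant is stable or unstable:

    def simulate(a_true, k=5.0, setpoint=1.0, dt=0.01, T=5.0):
        """Euler-integrate the closed loop x' = a*x + u with proportional
        feedback u = k*(setpoint - x); return the final state."""
        x = 0.0
        for _ in range(int(T / dt)):
            u = k * (setpoint - x)      # feedback reacts only to measured error
            x += dt * (a_true * x + u)
        return x

    # Open loop, a > 0 is unstable; with feedback every case settles near the
    # setpoint (steady state k/(k - a); integral action would remove the offset).
    for a in (-1.0, 0.0, 1.0):
        print(f"a = {a:+.0f}: x(T) = {simulate(a):.3f}")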

The first session of this mini course will take place on Wednesday, January 24 from 2:00 to 3:00 p.m.; the second session of this mini course will take place on Wednesday, January 24 from 3:30 to 4:30 p.m.

3:30 pm – 4:30 pm


The first session of this mini course will take place on Wednesday, January 24 from 2:00 to 3:00 p.m.; the second session of this mini course will take place on Wednesday, January 24 from 3:30 to 4:30 p.m.

Thursday, January 25th, 2018

9:30 am – 10:30 am

In part I of this lecture, we motivate the historic transformation of our power systems in the coming decades, with its opportunities and challenges.  We introduce basic concepts in alternating current (ac) power systems, including phasor representation, balanced three-phase systems, per-phase analysis, and complex power.  We present simple models of transmission lines, transformers, and generators.
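
For instance, complex power follows directly from the phasor representation: with rms phasors V and I, S = V·conj(I) = P + jQ. A minimal sketch with illustrative values:

    import cmath

    V = cmath.rect(120.0, 0.0)                        # 120 V rms at 0 degrees
    I = cmath.rect(10.0, -30.0 * cmath.pi / 180.0)    # 10 A rms, lagging by 30 degrees

    S = V * I.conjugate()     # complex power in volt-amperes
    P, Q = S.real, S.imag     # real power (W) and reactive power (var)
    print(f"P = {P:.1f} W, Q = {Q:.1f} var, |S| = {abs(S):.1f} VA, "
          f"power factor = {P / abs(S):.3f}")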


The first session of this mini course will take place on Thursday, January 25 from 9:30 to 10:30 a.m.; the second session of this mini course will take place on Thursday, January 25 from 11:00 a.m. to 12:00 p.m.

11:00 am – 12:00 pm

In part II of this lecture, we use the concepts and models in part I to derive power flow equations that model the steady-state behavior of ac transmission and distribution grids.  We describe algorithms commonly used for solving power flow equations.  We then formulate the optimal power flow (OPF) problem: a nonconvex quadratically constrained quadratic program that is generally NP-hard.  It is fundamental, as numerous power system applications can be formulated as OPF.  We describe ways to deal with nonconvexity, distributed solutions, and real-time solutions.
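
As a minimal illustration of the power flow equations and the Newton-Raphson iteration commonly used to solve them, consider a two-bus network: a slack bus holding 1.0 per-unit voltage at angle zero feeds a load bus over a lossless line (all numbers below are illustrative):

    import numpy as np

    B = 10.0                      # line susceptance magnitude, 1/x for x = 0.1 pu
    P_load, Q_load = 0.5, 0.2     # real and reactive power drawn at bus 2 (pu)

    def mismatch(z):
        """Power flow mismatch at bus 2 given z = (angle, voltage magnitude)."""
        theta, v = z
        p_inj = B * v * np.sin(theta)              # real power injected at bus 2
        q_inj = -B * v * np.cos(theta) + B * v**2  # reactive power injected
        return np.array([p_inj + P_load, q_inj + Q_load])

    z = np.array([0.0, 1.0])                       # "flat start" initial guess
    for _ in range(10):                            # Newton-Raphson iterations
        J = np.empty((2, 2))                       # finite-difference Jacobian
        for j in range(2):
            dz = np.zeros(2); dz[j] = 1e-6
            J[:, j] = (mismatch(z + dz) - mismatch(z)) / 1e-6
        z -= np.linalg.solve(J, mismatch(z))

    print(f"theta2 = {np.degrees(z[0]):.2f} deg, V2 = {z[1]:.4f} pu")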


The first session of this mini course will take place on Thursday, January 25 from 9:30 to 10:30 a.m.; the second session of this mini course will take place on Thursday, January 25 from 11:00 a.m. to 12:00 p.m.

1:30 pm – 3:00 pm

The morning lectures by Prof. Low cover power grid operations today, with discussion devoted to several “information signals,” measured in volts, vars, hertz, megawatts, and megawatt-hours.  Regarding the user of electricity (such as a typical member of the audience), what units are most important in terms of quality of life?  Once we grasp the answer, we can consider how to make a more resilient grid, even with the introduction of significant volatile energy from renewable sources such as solar and wind generation.

The talk will proceed with a short survey of the remarkably robust distributed control architecture that is commonplace today, and new distributed control architectures of the future.  A few words will be devoted to the “fallacy of price signals,” leading to the final lecture by Prof. Poolla.

3:30 pm – 5:00 pm

This is the fourth and final part of the boot camp on the Smart Grid. We will begin by describing the history of deregulation of electricity markets. We then describe the commonly used two-settlement market structure: day-ahead (bulk) markets and real-time markets for balancing. We discuss network effects and locational prices for electricity. We then summarize markets for transmission rights and storage.
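
A two-node toy example shows how congestion produces different locational prices (all numbers invented): cheap generation at node A serves the load at node B until the connecting line saturates, after which the marginal megawatt must come from the expensive local unit:

    def dispatch_cost(load_b, line_cap=80.0, cheap=20.0, dear=50.0):
        """Least-cost dispatch in $/h: import cheap power from node A up to
        the line limit, then run the expensive generator at node B."""
        imported = min(load_b, line_cap)
        return imported * cheap + (load_b - imported) * dear

    for load in (60.0, 100.0):    # uncongested vs. congested
        lmp_b = dispatch_cost(load + 1.0) - dispatch_cost(load)
        print(f"load {load:.0f} MW: locational price at B = ${lmp_b:.0f}/MWh "
              f"(node A stays at $20/MWh)")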

After this summary, we critique the existing market structures and discuss new research problems that arise in the context of the smart grid. These include the pricing of demand response, real-time tariffs, and differentiated services. We close by describing a vision for Grid2050, in which electricity delivery may evolve into interconnected micro-grids.

Friday, January 26th, 2018

9:30 am – 10:30 am

We have embarked on a journey at the frontier of high energies with the Large Hadron Collider (LHC) at CERN in Geneva, Switzerland. As the intensity and energy of the LHC have progressed, Caltech physicists and students, together with colleagues around the world, continue to break new ground in our understanding of the forces of nature, dark matter and the early universe. Using CMS and the LHC collider, two of the most complex instruments ever devised, and new methods developed at Caltech and at peer institutions, we are:

• Homing in on the properties of the Higgs boson, the particle thought to be responsible for mass in the universe,
• Searching for supersymmetry, which brings together particle physics and spacetime, and
• Searching for evidence of extra dimensions of space, and other exotic new particles.

This lecture will present the latest results from the high energy frontier of particle physics and give a perspective on the road ahead, towards the next round of discoveries.

If time permits I will also cover some of the growing computing, data and network challenges on the horizon, and how we are working to address these challenges: with a new generation of intelligently operated global scale distributed systems and a candidate new architecture of the Internet. Within the past couple of years the use of machine learning methods has brought renewed promise, both to extend the reach of particle physics data analysis, and to help optimize workflow in the new class of global software-defined distributed systems.

The first session of this mini course will take place on Friday, January 26 from 9:30 to 10:30 a.m.; the second session of this mini course will take place on Friday, January 26 from 11:00 a.m. to 12:00 p.m.

11:00 am – 12:00 pm


The first session of this mini course will take place on Friday, January 26 from 9:30 to 10:30 a.m.; the second session of this mini course will take place on Friday, January 26 from 11:00 a.m. to 12:00 p.m.