Monday, August 31st, 2015

9:30 am – 10:30 am

The second session of this talk will take place on Monday, August 31 from 11:00 am – 12:00 pm; the third session of this talk will take place on Monday, August 31 from 1:30 pm – 2:30 pm.

I will survey the satisfiability algorithms for k-CNF and extensions and explore the connections to lower bounds for depth-3 circuits. I will also provide motivation for ETH and SETH and discuss the Sparsification Lemma.
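The flavor of these satisfiability algorithms can be conveyed by a small branching procedure in the spirit of Monien and Speckenmeyer (an illustrative Python sketch, not taken from the talk; the clause representation is ours):

```python
# Clauses are lists of nonzero ints: literal v means variable v is true,
# -v means variable v is false. A formula is a list of clauses.

def simplify(clauses, lit):
    """Set literal `lit` to true: drop satisfied clauses, shrink the rest.
    Returns None if an empty clause (a contradiction) is produced."""
    out = []
    for c in clauses:
        if lit in c:
            continue                      # clause satisfied, drop it
        reduced = [l for l in c if l != -lit]
        if not reduced:
            return None                   # empty clause: dead branch
        out.append(reduced)
    return out

def sat(clauses):
    """Branch on the literals of the first clause. Trying the i-th literal
    only after the first i-1 were set false gives fewer than 2^n branches
    for k-CNF, which is the starting point for the improved bounds."""
    if clauses is None:
        return False
    if not clauses:
        return True                       # no clauses left: satisfied
    current = clauses
    for lit in clauses[0]:
        if sat(simplify(current, lit)):
            return True
        current = simplify(current, -lit)  # lit must be false from here on
        if current is None:
            return False
    return False
```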

11:00 am – 12:00 pm

The first session of this talk will take place on Monday, August 31 from 9:30 am – 10:30 am; the third session of this talk will take place on Monday, August 31 from 1:30 pm – 2:30 pm.

I will survey the satisfiability algorithms for k-CNF and extensions and explore the connections to lower bounds for depth-3 circuits. I will also provide motivation for ETH and SETH and discuss the Sparsification Lemma.

1:30 pm – 2:30 pm

The first session of this talk will take place on Monday, August 31 from 9:30 am – 10:30 am; the second session of this talk will take place on Monday, August 31 from 11:00 am – 12:00 pm.

I will survey the satisfiability algorithms for k-CNF and extensions and explore the connections to lower bounds for depth-3 circuits. I will also provide motivation for ETH and SETH and discuss the Sparsification Lemma.

3:00 pm – 4:00 pm

I will survey the main algorithm design techniques for exponential-time algorithms for NP-hard problems.
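One such technique, dynamic programming over subsets, is illustrated by the classical Held–Karp algorithm for the traveling salesman problem, which beats the n! brute force (an illustrative Python sketch, not part of the talk materials):

```python
from itertools import combinations

def held_karp(dist):
    """Cheapest Hamiltonian cycle, in O(2^n * n^2) time and O(2^n * n) space.

    dist[i][j] is the cost of travelling from city i to city j.
    dp[(S, j)] = cheapest path that starts at city 0, visits exactly
    the cities in frozenset S (all distinct from 0), and ends at j.
    """
    n = len(dist)
    dp = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in subset:
                dp[(S, j)] = min(dp[(S - {j}, k)] + dist[k][j]
                                 for k in subset if k != j)
    full = frozenset(range(1, n))
    # Close the cycle by returning to city 0.
    return min(dp[(full, j)] + dist[j][0] for j in range(1, n))
```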

Tuesday, September 1st, 2015

9:30 am – 10:30 am

The second session of this talk will take place on Wednesday, September 2 from 9:30 am – 10:30 am; the third session of this talk will take place on Thursday, September 3 from 1:30 pm – 2:30 pm; the fourth session of this talk will take place on Friday, September 4 from 11:00 am – 12:00 pm.

A central goal of theoretical computer science is the classification of computational problems according to their complexity. The theory of NP-hardness has been successful in identifying problems that are unlikely to be solvable in time that is polynomial in the size of the input. Conversely, for many problems of practical interest, the field has developed intriguingly fast polynomial-time solutions. However, improving the running times of many of these algorithms has been a longstanding open problem, with little to no progress. It is thus completely plausible that these algorithms are optimal. Showing that this is indeed the case would be a great accomplishment of the theory of algorithms.

Unfortunately, unconditional lower bounds seem hard to obtain. Instead, an alternative approach for proving hardness of problems in P is to identify plausible complexity-theoretic conjectures and show that the currently best algorithms for the problems of interest are optimal if one assumes these conjectures. Such popular conjectures include the (Strong) Exponential Time Hypothesis (that concerns the complexity of CNF-SAT), the conjecture that all-pairs shortest paths (APSP) requires essentially cubic time, or that 3SUM requires essentially quadratic time.
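For concreteness, the algorithm whose near-optimality the 3SUM conjecture asserts is the classical sort-and-sweep procedure (an illustrative Python sketch):

```python
def three_sum_exists(nums):
    """Decide whether some three entries (at distinct indices) sum to zero.

    Sort once, then for each i sweep two pointers inward over the suffix:
    O(n^2) total. The 3SUM conjecture posits that no algorithm is
    substantially faster, and many quadratic-time conditional lower
    bounds are proved by reduction from this problem.
    """
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1    # sum too small: move the low pointer up
            else:
                hi -= 1    # sum too large: move the high pointer down
    return False
```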

The goal of the minicourse is to expose the audience to the recent growing body of conditional lower bounds for a variety of problems in diverse areas, such as graph optimization, string matching, geometry, and dynamic data structures, and to the attempts at unifying and/or refuting the popular conjectures.

11:00 am – 12:00 pm

The second session of this talk will take place on Wednesday, September 2 from 11:00 am – 12:00 pm; the third session of this talk will take place on Thursday, September 3 from 9:30 am – 10:30 am; the fourth session of this talk will take place on Friday, September 4 from 1:30 pm – 2:30 pm.

Randomness has become an essential tool in algorithm design. However, for many randomized algorithms, later research has shown how to modify the randomized algorithm to construct a deterministic algorithm solving the same problem. This raises the question: can every randomized algorithm be derandomized, or are randomized algorithms a fundamentally different model of computation from deterministic algorithms? Starting with Blum, Micali, and Yao in the early 1980s, the algorithmic question of producing general deterministic algorithms has been linked to the central complexity question of proving lower bounds, especially for Boolean circuits. Later research extended and deepened this connection. In this series of talks, we will give precise statements of such connections and at least an intuitive idea of many of the proofs.
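A classical instance of derandomization, sketched here for illustration only, is the method of conditional expectations applied to MAX-SAT: a uniformly random assignment satisfies each clause with probability at least 1/2, and fixing variables one by one so that the conditional expectation never drops yields a deterministic algorithm with the same guarantee (the function names and clause encoding below are ours):

```python
from fractions import Fraction

def expected_sat(clauses, assignment):
    """Expected number of satisfied clauses when the variables in
    `assignment` (a dict var -> bool) are fixed and all remaining
    variables are independent uniform random bits. Literal v means
    variable v is true; -v means it is false."""
    total = Fraction(0)
    for clause in clauses:
        unfixed = 0
        satisfied = False
        for lit in clause:
            v = abs(lit)
            if v in assignment:
                if assignment[v] == (lit > 0):
                    satisfied = True
            else:
                unfixed += 1
        if satisfied:
            total += 1
        elif unfixed:
            total += 1 - Fraction(1, 2 ** unfixed)
    return total

def derandomized_maxsat(clauses, nvars):
    """Fix each variable to the value that does not decrease the
    conditional expectation; the final assignment satisfies at least
    as many clauses as the initial expectation."""
    assignment = {}
    for v in range(1, nvars + 1):
        assignment[v] = max(
            (True, False),
            key=lambda b: expected_sat(clauses, {**assignment, v: b}))
    return assignment
```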

1:30 pm – 2:30 pm

The second session of this talk will take place on Wednesday, September 2 from 1:30 pm – 2:30 pm; the third session of this talk will take place on Thursday, September 3 from 11:00 am – 12:00 pm; the fourth session of this talk will take place on Friday, September 4 from 9:30 am – 10:30 am.

This boot camp tutorial will promote a general philosophy shared by the speaker and others: that studying algorithm design and complexity theory "as a single unit" can lead to significant insights that would not be found by studying them in isolation. In this tutorial, the focus will be on Boolean circuits, surveying many known algorithmic results in circuit analysis and lower bound results in circuit complexity, along with intriguing logical connections that have been developed between the two areas. As an example of the methodology in action, we will cover a proof of the result that non-deterministic exponential time does not have ACC0 circuits of polynomial size, via an algorithm for solving the satisfiability problem on ACC0 circuits.
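As a point of reference for what circuit analysis means here: Circuit-SAT can always be solved by trying all 2^n inputs, and the cited result works by beating this brute force slightly for ACC0 circuits. A minimal sketch of the brute force, with a home-grown circuit representation of our own choosing:

```python
from itertools import product

def eval_circuit(gates, bits):
    """Evaluate a circuit given as gates in topological order.
    Gate forms: ('in', j) reads input bit j; ('not', a), ('and', a, b),
    ('or', a, b) operate on earlier gate indices. Last gate is the output."""
    vals = []
    for g in gates:
        if g[0] == 'in':
            vals.append(bits[g[1]])
        elif g[0] == 'not':
            vals.append(1 - vals[g[1]])
        elif g[0] == 'and':
            vals.append(vals[g[1]] & vals[g[2]])
        elif g[0] == 'or':
            vals.append(vals[g[1]] | vals[g[2]])
    return vals[-1]

def circuit_sat(gates, n):
    """Brute-force Circuit-SAT over n input bits in 2^n evaluations.
    A satisfiability algorithm even modestly faster than this for a
    circuit class yields lower bounds against that class; this is the
    route by which NEXP not in ACC0 was proved."""
    return any(eval_circuit(gates, bits)
               for bits in product([0, 1], repeat=n))
```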

3:00 pm – 4:00 pm

The second session of this talk will take place on Wednesday, September 2 from 3:00 pm – 4:00 pm; the third session of this talk will take place on Thursday, September 3 from 3:00 pm – 4:00 pm; the fourth session of this talk will take place on Friday, September 4 from 3:00 pm – 4:00 pm.

We expect that NP-hard problems can be solved only by algorithms that need exponential time in the worst case. However, there is still a lot that can be said about the complexity of such problems. The study of exponential-time algorithms aims to find the best possible form of exponential running time that can be achieved. Parameterized complexity analyzes the running time of an algorithm as a function of the input size n and some parameter k of the input. The goal is to determine the optimal dependence on the parameter. Of particular interest is the case when only a multiplicative factor of the running time depends on the parameter k, that is, the problem is fixed-parameter tractable (FPT).
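The textbook example of an FPT algorithm is the bounded search tree for Vertex Cover: branching on an uncovered edge gives running time O(2^k · m), with the parameter appearing only as a multiplicative factor (an illustrative Python sketch):

```python
def vertex_cover_at_most_k(edges, k):
    """Decide whether the graph has a vertex cover of size at most k.

    Some endpoint of any edge (u, v) must be in the cover, so branch
    on the two choices. The recursion depth is at most k, giving a
    search tree with at most 2^k leaves and O(2^k * m) total time:
    fixed-parameter tractable in the cover size k.
    """
    if not edges:
        return True        # every edge is covered
    if k == 0:
        return False       # edges remain but no budget left
    u, v = edges[0]
    return (vertex_cover_at_most_k([e for e in edges if u not in e], k - 1)
            or vertex_cover_at_most_k([e for e in edges if v not in e], k - 1))
```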

The mini-course will survey a selection of algorithmic results, highlighting some nontrivial algorithmic techniques for NP-hard problems. Then we show how the (Strong) Exponential-Time Hypothesis and other conjectures can be used to give lower bounds that, in many cases, tightly match the algorithmic results.

Wednesday, September 2nd, 2015

9:30 am – 10:30 am

The first session of this talk will take place on Tuesday, September 1 from 9:30 am – 10:30 am; the third session of this talk will take place on Thursday, September 3 from 1:30 pm – 2:30 pm; the fourth session of this talk will take place on Friday, September 4 from 11:00 am – 12:00 pm.

A central goal of theoretical computer science is the classification of computational problems according to their complexity. The theory of NP-hardness has been successful in identifying problems that are unlikely to be solvable in time that is polynomial in the size of the input. Conversely, for many problems of practical interest, the field has developed intriguingly fast polynomial-time solutions. However, improving the running times of many of these algorithms has been a longstanding open problem, with little to no progress. It is thus completely plausible that these algorithms are optimal. Showing that this is indeed the case would be a great accomplishment of the theory of algorithms.

Unfortunately, unconditional lower bounds seem hard to obtain. Instead, an alternative approach for proving hardness of problems in P is to identify plausible complexity-theoretic conjectures and show that the currently best algorithms for the problems of interest are optimal if one assumes these conjectures. Such popular conjectures include the (Strong) Exponential Time Hypothesis (that concerns the complexity of CNF-SAT), the conjecture that all-pairs shortest paths (APSP) requires essentially cubic time, or that 3SUM requires essentially quadratic time. 

The goal of the minicourse is to expose the audience to the recent growing body of conditional lower bounds for a variety of problems in diverse areas, such as graph optimization, string matching, geometry, and dynamic data structures, and to the attempts at unifying and/or refuting the popular conjectures. 

11:00 am – 12:00 pm

The first session of this talk will take place on Tuesday, September 1 from 11:00 am – 12:00 pm; the third session of this talk will take place on Thursday, September 3 from 1:30 pm – 2:30 pm; the fourth session of this talk will take place on Friday, September 4 from 1:30 pm – 2:30 pm.

Randomness has become an essential tool in algorithm design. However, for many randomized algorithms, later research has shown how to modify the randomized algorithm to construct a deterministic algorithm solving the same problem. This raises the question: can every randomized algorithm be derandomized, or are randomized algorithms a fundamentally different model of computation from deterministic algorithms? Starting with Blum, Micali, and Yao in the early 1980s, the algorithmic question of producing general deterministic algorithms has been linked to the central complexity question of proving lower bounds, especially for Boolean circuits. Later research extended and deepened this connection. In this series of talks, we will give precise statements of such connections and at least an intuitive idea of many of the proofs.

1:30 pm – 2:30 pm

The first session of this talk will take place on Tuesday, September 1 from 1:30 pm – 2:30 pm; the third session of this talk will take place on Thursday, September 3 from 11:00 am – 12:00 pm; the fourth session of this talk will take place on Friday, September 4 from 9:30 am – 10:30 am.

This boot camp tutorial will promote a general philosophy shared by the speaker and others: that studying algorithm design and complexity theory "as a single unit" can lead to significant insights that would not be found by studying them in isolation. In this tutorial, the focus will be on Boolean circuits, surveying many known algorithmic results in circuit analysis and lower bound results in circuit complexity, along with intriguing logical connections that have been developed between the two areas. As an example of the methodology in action, we will cover a proof of the result that non-deterministic exponential time does not have ACC0 circuits of polynomial size, via an algorithm for solving the satisfiability problem on ACC0 circuits.

3:00 pm – 4:00 pm

The first session of this talk will take place on Tuesday, September 1 from 3:00 pm – 4:00 pm; the third session of this talk will take place on Thursday, September 3 from 3:00 pm – 4:00 pm; the fourth session of this talk will take place on Friday, September 4 from 3:00 pm – 4:00 pm.

We expect that NP-hard problems can be solved only by algorithms that need exponential time in the worst case. However, there is still a lot that can be said about the complexity of such problems. The study of exponential-time algorithms aims to find the best possible form of exponential running time that can be achieved. Parameterized complexity analyzes the running time of an algorithm as a function of the input size n and some parameter k of the input. The goal is to determine the optimal dependence on the parameter. Of particular interest is the case when only a multiplicative factor of the running time depends on the parameter k, that is, the problem is fixed-parameter tractable (FPT).

The mini-course will survey a selection of algorithmic results, highlighting some nontrivial algorithmic techniques for NP-hard problems. Then we show how the (Strong) Exponential-Time Hypothesis and other conjectures can be used to give lower bounds that, in many cases, tightly match the algorithmic results.

Thursday, September 3rd, 2015

9:30 am – 10:30 am

The first session of this talk will take place on Tuesday, September 1 from 11:00 am – 12:00 pm; the second session of this talk will take place on Wednesday, September 2 from 11:00 am – 12:00 pm; the fourth session of this talk will take place on Friday, September 4 from 1:30 pm – 2:30 pm.

Randomness has become an essential tool in algorithm design. However, for many randomized algorithms, later research has shown how to modify the randomized algorithm to construct a deterministic algorithm solving the same problem. This raises the question: can every randomized algorithm be derandomized, or are randomized algorithms a fundamentally different model of computation from deterministic algorithms? Starting with Blum, Micali, and Yao in the early 1980s, the algorithmic question of producing general deterministic algorithms has been linked to the central complexity question of proving lower bounds, especially for Boolean circuits. Later research extended and deepened this connection. In this series of talks, we will give precise statements of such connections and at least an intuitive idea of many of the proofs.

11:00 am – 12:00 pm

The first session of this talk will take place on Tuesday, September 1 from 1:30 pm – 2:30 pm; the second session of this talk will take place on Wednesday, September 2 from 1:30 pm – 2:30 pm; the fourth session of this talk will take place on Friday, September 4 from 9:30 am – 10:30 am.

This boot camp tutorial will promote a general philosophy shared by the speaker and others: that studying algorithm design and complexity theory "as a single unit" can lead to significant insights that would not be found by studying them in isolation. In this tutorial, the focus will be on Boolean circuits, surveying many known algorithmic results in circuit analysis and lower bound results in circuit complexity, along with intriguing logical connections that have been developed between the two areas. As an example of the methodology in action, we will cover a proof of the result that non-deterministic exponential time does not have ACC0 circuits of polynomial size, via an algorithm for solving the satisfiability problem on ACC0 circuits.

1:30 pm – 2:30 pm

The first session of this talk will take place on Tuesday, September 1 from 9:30 am – 10:30 am; the second session of this talk will take place on Wednesday, September 2 from 9:30 am – 10:30 am; the fourth session of this talk will take place on Friday, September 4 from 11:00 am – 12:00 pm.

A central goal of theoretical computer science is the classification of computational problems according to their complexity. The theory of NP-hardness has been successful in identifying problems that are unlikely to be solvable in time that is polynomial in the size of the input. Conversely, for many problems of practical interest, the field has developed intriguingly fast polynomial-time solutions. However, improving the running times of many of these algorithms has been a longstanding open problem, with little to no progress. It is thus completely plausible that these algorithms are optimal. Showing that this is indeed the case would be a great accomplishment of the theory of algorithms.

Unfortunately, unconditional lower bounds seem hard to obtain. Instead, an alternative approach for proving hardness of problems in P is to identify plausible complexity-theoretic conjectures and show that the currently best algorithms for the problems of interest are optimal if one assumes these conjectures. Such popular conjectures include the (Strong) Exponential Time Hypothesis (that concerns the complexity of CNF-SAT), the conjecture that all-pairs shortest paths (APSP) requires essentially cubic time, or that 3SUM requires essentially quadratic time. 

The goal of the minicourse is to expose the audience to the recent growing body of conditional lower bounds for a variety of problems in diverse areas, such as graph optimization, string matching, geometry, and dynamic data structures, and to the attempts at unifying and/or refuting the popular conjectures. 

3:00 pm – 4:00 pm

The first session of this talk will take place on Tuesday, September 1 from 3:00 pm – 4:00 pm; the second session of this talk will take place on Wednesday, September 2 from 3:00 pm – 4:00 pm; the fourth session of this talk will take place on Friday, September 4 from 3:00 pm – 4:00 pm.

We expect that NP-hard problems can be solved only by algorithms that need exponential time in the worst case. However, there is still a lot that can be said about the complexity of such problems. The study of exponential-time algorithms aims to find the best possible form of exponential running time that can be achieved. Parameterized complexity analyzes the running time of an algorithm as a function of the input size n and some parameter k of the input. The goal is to determine the optimal dependence on the parameter. Of particular interest is the case when only a multiplicative factor of the running time depends on the parameter k, that is, the problem is fixed-parameter tractable (FPT).

The mini-course will survey a selection of algorithmic results, highlighting some nontrivial algorithmic techniques for NP-hard problems. Then we show how the (Strong) Exponential-Time Hypothesis and other conjectures can be used to give lower bounds that, in many cases, tightly match the algorithmic results.

Friday, September 4th, 2015

9:30 am – 10:30 am

The first session of this talk will take place on Tuesday, September 1 from 1:30 pm – 2:30 pm; the second session of this talk will take place on Wednesday, September 2 from 1:30 pm – 2:30 pm; the third session of this talk will take place on Thursday, September 3 from 11:00 am – 12:00 pm.

This boot camp tutorial will promote a general philosophy shared by the speaker and others: that studying algorithm design and complexity theory "as a single unit" can lead to significant insights that would not be found by studying them in isolation. In this tutorial, the focus will be on Boolean circuits, surveying many known algorithmic results in circuit analysis and lower bound results in circuit complexity, along with intriguing logical connections that have been developed between the two areas. As an example of the methodology in action, we will cover a proof of the result that non-deterministic exponential time does not have ACC0 circuits of polynomial size, via an algorithm for solving the satisfiability problem on ACC0 circuits.

11:00 am – 12:00 pm

The first session of this talk will take place on Tuesday, September 1 from 9:30 am – 10:30 am; the second session of this talk will take place on Wednesday, September 2 from 9:30 am – 10:30 am; the third session of this talk will take place on Thursday, September 3 from 1:30 pm – 2:30 pm.

A central goal of theoretical computer science is the classification of computational problems according to their complexity. The theory of NP-hardness has been successful in identifying problems that are unlikely to be solvable in time that is polynomial in the size of the input. Conversely, for many problems of practical interest, the field has developed intriguingly fast polynomial-time solutions. However, improving the running times of many of these algorithms has been a longstanding open problem, with little to no progress. It is thus completely plausible that these algorithms are optimal. Showing that this is indeed the case would be a great accomplishment of the theory of algorithms.

Unfortunately, unconditional lower bounds seem hard to obtain. Instead, an alternative approach for proving hardness of problems in P is to identify plausible complexity-theoretic conjectures and show that the currently best algorithms for the problems of interest are optimal if one assumes these conjectures. Such popular conjectures include the (Strong) Exponential Time Hypothesis (that concerns the complexity of CNF-SAT), the conjecture that all-pairs shortest paths (APSP) requires essentially cubic time, or that 3SUM requires essentially quadratic time. 

The goal of the minicourse is to expose the audience to the recent growing body of conditional lower bounds for a variety of problems in diverse areas, such as graph optimization, string matching, geometry, and dynamic data structures, and to the attempts at unifying and/or refuting the popular conjectures. 

1:30 pm – 2:30 pm

The first session of this talk will take place on Tuesday, September 1 from 11:00 am – 12:00 pm; the second session of this talk will take place on Wednesday, September 2 from 11:00 am – 12:00 pm; the third session of this talk will take place on Thursday, September 3 from 9:30 am – 10:30 am.

Randomness has become an essential tool in algorithm design. However, for many randomized algorithms, later research has shown how to modify the randomized algorithm to construct a deterministic algorithm solving the same problem. This raises the question: can every randomized algorithm be derandomized, or are randomized algorithms a fundamentally different model of computation from deterministic algorithms? Starting with Blum, Micali, and Yao in the early 1980s, the algorithmic question of producing general deterministic algorithms has been linked to the central complexity question of proving lower bounds, especially for Boolean circuits. Later research extended and deepened this connection. In this series of talks, we will give precise statements of such connections and at least an intuitive idea of many of the proofs.

3:00 pm – 4:00 pm

The first session of this talk will take place on Tuesday, September 1 from 3:00 pm – 4:00 pm; the second session of this talk will take place on Wednesday, September 2 from 3:00 pm – 4:00 pm; the third session of this talk will take place on Thursday, September 3 from 3:00 pm – 4:00 pm.

We expect that NP-hard problems can be solved only by algorithms that need exponential time in the worst case. However, there is still a lot that can be said about the complexity of such problems. The study of exponential-time algorithms aims to find the best possible form of exponential running time that can be achieved. Parameterized complexity analyzes the running time of an algorithm as a function of the input size n and some parameter k of the input. The goal is to determine the optimal dependence on the parameter. Of particular interest is the case when only a multiplicative factor of the running time depends on the parameter k, that is, the problem is fixed-parameter tractable (FPT).

The mini-course will survey a selection of algorithmic results, highlighting some nontrivial algorithmic techniques for NP-hard problems. Then we show how the (Strong) Exponential-Time Hypothesis and other conjectures can be used to give lower bounds that, in many cases, tightly match the algorithmic results.