We are given a system of polynomial equations (say m equations, each of degree d, in n variables) over a (large) finite field F_p, and we want to determine whether the system has a solution in that finite field (NOT in the algebraic closure). In other words, we want to determine whether a polynomial system over a given finite field has a rational point. We will see a deterministic algorithm for this problem whose running time is poly(m * d^n * log p). In particular, for any fixed value of n, the algorithm runs in time polynomial in d, m and log p.
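For contrast with the poly(m * d^n * log p) bound, here is a minimal brute-force sketch (not the talk's algorithm) that decides the existence of a rational point by exhaustive search over F_p^n, which takes time roughly p^n; the example system is made up.

```python
# A minimal sketch (not the talk's algorithm): for small p and n, the existence of a
# solution over F_p can be decided by exhaustive search, at a cost of roughly p^n
# evaluations -- exponentially worse than the poly(m * d^n * log p) bound above.
from itertools import product

def has_rational_point(polys, n, p):
    """polys: callables taking an n-tuple of ints and returning an int;
    returns True if some point of F_p^n is a common zero mod p."""
    for point in product(range(p), repeat=n):
        if all(f(point) % p == 0 for f in polys):
            return True
    return False

# Hypothetical example: x^2 + y^2 + 1 = 0 and x*y - 2 = 0 over F_7.
polys = [lambda v: v[0]**2 + v[1]**2 + 1, lambda v: v[0]*v[1] - 2]
print(has_rational_point(polys, n=2, p=7))
```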
Monday, October 13th, 2014
In this paper we introduce constructible analogs of the discrete complexity classes $\mathbf{VP}$ and $\mathbf{VNP}$ of sequences of functions. The functions in the new definitions are constructible functions on $\mathbb{R}^n$ or $\mathbb{C}^n$. We define a class of sequences of constructible functions that play a role analogous to that of $\mathbf{VP}$ in the more classical theory. The class analogous to $\mathbf{VNP}$ is defined using Euler integration. We discuss several examples, develop a theory of completeness, and pose a conjecture analogous to the $\mathbf{VP}$ vs $\mathbf{VNP}$ conjecture in the classical case. In the second part of the paper we extend the notions of complexity classes to sequences of constructible sheaves over $\mathbb{R}^n$ (or its one-point compactification). We introduce a class of sequences of simple constructible sheaves that could be seen as the sheaf-theoretic analog of the Blum-Shub-Smale class $\mathbf{P}_{\mathbb{R}}$. We also define a hierarchy of complexity classes of sheaves mirroring the polynomial hierarchy $\mathbf{PH}_{\mathbb{R}}$ in the B-S-S theory. We prove a singly exponential upper bound on the topological complexity of the sheaves in this hierarchy, mirroring a similar result in the B-S-S setting. As a result we obtain an algorithm with singly exponential complexity for a sheaf-theoretic variant of the real quantifier elimination problem. Finally, we pose the natural sheaf-theoretic analogs of the classical $\mathbf{P}$ vs $\mathbf{NP}$ question, and also discuss a connection with Toda's theorem from discrete complexity theory in the context of constructible sheaves.
This talk presents necessary and sufficient conditions, in terms of sign vectors, for the injectivity of families of polynomial maps with arbitrary real exponents defined on the positive orthant. Our work relates and extends existing injectivity conditions expressed in terms of Jacobian matrices and determinants. In the context of chemical reaction networks with power-law kinetics, our results can be used to preclude as well as to guarantee multiple positive steady states. In the context of real algebraic geometry, our results reveal a partial multivariate generalization of the classical Descartes' rule of signs for precluding multiple positive solutions.
This is joint work with Stefan Müller, Elisenda Feliu, Georg Regensburger, Carsten Conradi, and Alicia Dickenstein.
Cardiovascular modeling of the blood pressure regulation system requires a mathematical description of the viscoelastic mechanical properties of the arterial wall. These models combine springs (elastic elements) and dashpots (viscous elements) in various series/parallel configurations, and can be described by linear ODEs in the strain and stress of the system. An essential step in analyzing such a model is to determine whether the model is structurally identifiable, which means that the unknown parameters of the model can be quantified given perfect (noise-free) data. In such a model, the total stress and strain are known, but the individual spring and dashpot constants represent unknown parameters. We will find simple algebraic conditions to determine whether or not a given model is identifiable, and also give a constructive result on how to obtain an identifiable model when forming a spring-dashpot network. In particular, we will show that determining whether or not a model is identifiable amounts to solving a certain polynomial factorization problem.
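As a toy illustration of structural identifiability (a minimal sketch with a specific made-up model, not the general criterion of the talk), consider a single Maxwell element: one spring with modulus E in series with one dashpot of viscosity eta. Its stress-strain ODE yields a transfer function whose coefficients determine both parameters uniquely.

```python
# A minimal sketch, not the speakers' general criterion: structural identifiability of a
# single Maxwell element (spring E in series with dashpot eta), whose stress-strain ODE is
#     d(eps)/dt = (1/E) d(sigma)/dt + sigma/eta.
# Taking Laplace transforms gives the transfer function sigma/eps = s / (s/E + 1/eta);
# the model is identifiable iff (E, eta) can be recovered from the observable coefficients.
import sympy as sp

s, E, eta, a, b = sp.symbols('s E eta a b', positive=True)

H = sp.cancel(s / (s/E + 1/eta))      # transfer function = E*eta*s / (eta*s + E)

# Normalizing the denominator to be monic gives H = (E*s) / (s + E/eta), so the
# observable coefficients are a = E and b = E/eta; solve for the parameters:
sol = sp.solve([sp.Eq(a, E), sp.Eq(b, E/eta)], [E, eta], dict=True)
print(sol)   # a unique solution E = a, eta = a/b, so both parameters are identifiable
```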
This is joint work with Adam Mahdi and Seth Sullivant.
We study the question of when a partial matrix can be completed to a rank 1 probability matrix, i.e., a matrix with nonnegative entries which sum to 1. The motivation for studying the question comes from statistics: a lack of a desired completion can provide a falsification test for partial observations to come from the independence model.
We first answer the question for diagonal partial matrices, then generalize it to block diagonal partial matrices and finally to arbitrary partial matrices. We show how to construct polynomial equations and inequalities that define the semialgebraic set of partial matrices which can be completed to rank 1 probability matrices. In case a partial matrix has a desired completion, we give an algorithm for constructing one, and if it has more than one such completion, we show how to construct a completion that minimizes the distance from a fixed probability distribution. We also address the same questions for diagonal partial tensors.
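As a purely numerical illustration (not the semialgebraic characterization or the algorithm from the talk), completability can be tested heuristically by recalling that a rank 1 probability matrix factors as P = r c^T with probability vectors r and c, and minimizing the squared error on the specified entries; the partial matrix below is a made-up example.

```python
# A numerical sketch, not the exact semialgebraic description from the talk: parametrize
# P = outer(r, c) with r, c on probability simplices and minimize the squared error on the
# specified entries; a residual near zero indicates that a rank-1 completion exists.
import numpy as np
from scipy.optimize import minimize

def rank1_completion_residual(specified, shape, seed=0):
    """specified: dict {(i, j): value}; returns the smallest residual found and P = r c^T."""
    m, n = shape
    rng = np.random.default_rng(seed)

    def unpack(z):
        r = np.exp(z[:m]); r /= r.sum()      # softmax keeps r on the simplex
        c = np.exp(z[m:]); c /= c.sum()      # softmax keeps c on the simplex
        return r, c

    def objective(z):
        r, c = unpack(z)
        return sum((r[i] * c[j] - v) ** 2 for (i, j), v in specified.items())

    best = min((minimize(objective, rng.normal(size=m + n)) for _ in range(20)),
               key=lambda res: res.fun)
    r, c = unpack(best.x)
    return best.fun, np.outer(r, c)

# Diagonal partial 2x2 matrix with both diagonal entries 0.25 (a made-up example); here
# r = c = (1/2, 1/2) gives a valid completion, so the residual should be near zero.
residual, P = rank1_completion_residual({(0, 0): 0.25, (1, 1): 0.25}, shape=(2, 2))
print(residual)
print(P)
```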
This talk is based on joint work with Zvi Rosen.
Biochemical reaction systems are usually modeled with mass-action kinetics. I will present some questions on polynomial systems associated to these dynamical systems.
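As a small example of the kind of polynomial system that arises, consider the hypothetical reversible reaction A + B <-> C with mass-action rate constants k1, k2: its steady states within a stoichiometric compatibility class are the solutions of a polynomial system in the concentrations.

```python
# A small illustration of how mass-action kinetics produces polynomial systems (the
# network and the numbers are made up, chosen only to keep the system zero-dimensional).
import sympy as sp

a, b, c = sp.symbols('a b c')
k1, k2, T1, T2 = sp.Rational(1), sp.Rational(2), sp.Rational(3), sp.Rational(2)

steady_state  = k1*a*b - k2*c    # d[C]/dt = 0 (and d[A]/dt = d[B]/dt = -d[C]/dt)
conservation1 = a + c - T1       # total amount of A, free or bound in C
conservation2 = b + c - T2       # total amount of B

solutions = sp.solve([steady_state, conservation1, conservation2], [a, b, c], dict=True)
print(solutions)   # the positive solution is the biologically relevant steady state
```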
Tuesday, October 14th, 2014
Numerical algebraic geometry is a collection of methods, based primarily on numerical homotopy continuation, for solving systems of polynomial equations. These methods can handle systems with positive-dimensional and zero-dimensional solution sets, including singular sets. Early work in the area concentrated on solving over the complex number field, where algebraic closedness simplifies matters, but recent work has extended the methods to the real number field. In that realm, most of the effort goes into solving for the boundaries of solution sets as well as their singularities. We will discuss the present status of the area along with some applications, such as mechanism design and the least-squares approximation of measured impedance data by rational polynomials, as is useful in linear control systems.
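The core idea, numerical homotopy continuation, can be sketched in a few lines (a toy univariate example with made-up start and target systems, not any particular package): deform a start system g with known roots into the target f along H(x, t) = gamma(1 - t)g(x) + t f(x) and track each root numerically from t = 0 to t = 1.

```python
# A toy sketch of homotopy continuation, not any particular software package.
import numpy as np

f  = lambda x: x**2 - 3*x + 2          # target system: roots 1 and 2
df = lambda x: 2*x - 3
g  = lambda x: x**2 - 1                # start system with known roots +1 and -1
dg = lambda x: 2*x

gamma = np.exp(2j * np.pi * 0.123)     # random complex constant ("gamma trick") that keeps
                                       # the paths away from singularities for real t
H  = lambda x, t: gamma * (1 - t) * g(x) + t * f(x)
Hx = lambda x, t: gamma * (1 - t) * dg(x) + t * df(x)   # dH/dx
Ht = lambda x, t: -gamma * g(x) + f(x)                  # dH/dt

def track(x, steps=500, newton_iters=3):
    dt = 1.0 / steps
    for k in range(steps):
        t = k * dt
        x = x - dt * Ht(x, t) / Hx(x, t)          # Euler predictor along the path
        for _ in range(newton_iters):             # Newton corrector at t + dt
            x = x - H(x, t + dt) / Hx(x, t + dt)
    return x

print([track(x0 + 0j) for x0 in (1.0, -1.0)])      # approximations of the roots 1 and 2
```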
Numerical algebraic geometry is a growing area of applied algebra that focuses on describing solution sets of polynomial systems of equations. This area has already had an impact in kinematics, statistics, PDEs, and pure math.
In this talk, we focus on numerical algebraic geometry for maximum likelihood estimation in algebraic statistics. We motivate the problem with a specific example, present computational results consisting of ML degrees of determinantal varieties, and give a theoretical result on maximum likelihood duality.
Exploiting a connection between amoebas and tropical curves we devise a method to compute tropical curves using numerical algebraic geometry, implement a heuristic algorithm, and apply this to several examples at the boundary of the capabilities of symbolic methods. (Joint work with Anders Jensen and Josephine Yu.)
Polynomial systems naturally arise in many different contexts within mathematics, the sciences, and engineering. A central problem is to devise algorithms that find all the isolated zeros of a system. Systems that come from applications often have additional structure that causes a drop in the root count compared to the BKK bound. This talk will discuss a new algorithm that exploits the structure of a class of polynomial systems that have multiple common subexpressions appearing in the equations. This is joint work with B. Davis and J. Hauenstein.
Hilbert's Nullstellensatz is a cornerstone of algebraic geometry. It describes, via a Bézout identity, when polynomial equations do not share any root in an algebraically closed field. Here I will present sharp results on the heights and degrees of the polynomials appearing in the Bézout identity, when the field over which the polynomials are defined admits a notion of height. And hopefully I will be able to comment on some applications of these results to complexity problems over finite fields.
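As a toy example of the object being bounded (the talk's contribution is the sharp degree and height bounds, not this verification): the polynomials f1 = xy - 1 and f2 = x have no common zero over an algebraically closed field, and a Bézout identity 1 = g1 f1 + g2 f2 certifies this.

```python
# A toy Nullstellensatz certificate: 1 = (-1)*(x*y - 1) + y*x, checked symbolically.
import sympy as sp

x, y = sp.symbols('x y')
f1, f2 = x*y - 1, x
g1, g2 = sp.Integer(-1), y           # cofactors in the Bezout identity

assert sp.expand(g1*f1 + g2*f2) == 1

# The inconsistency can also be detected by a Groebner basis computation: the basis is {1}.
print(sp.groebner([f1, f2], x, y))
```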
The effective resolution of systems of polynomial equations is a mathematical problem of fundamental importance, both from a practical and a theoretical standpoint. Given a set of polynomial equations in n variables, the aim, in view of the applications, is to describe algorithmically the set of its solutions in C^n, taking into account the support sets of the polynomials involved (that is, the sets of monomials that appear with non-zero coefficients).
Unlike the case of dense polynomials, the affine varieties defined by sparse systems (systems with pre-fixed support sets) of n polynomials, even in the generic case, can have components of positive dimension contained in coordinate linear spaces. In this talk I will present conditions on the set of supports of (not necessarily square) systems that allow us to determine the existence of these components in the generic case, and a symbolic algorithm for their computation. If there is some time left, I will overview the non-generic case for systems of n polynomials, presenting an upper bound for the degree of the variety the system defines and an algorithm to find witness points in every irreducible component, with a complexity which is polynomial in our bound for the degree.
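A minimal instance of the phenomenon (with made-up coefficients): a square sparse system whose supports force a positive-dimensional component inside a coordinate line, no matter how generic the coefficients are.

```python
# Two polynomials in two variables, both with support {xy, x}: every monomial is divisible
# by x, so the coordinate line {x = 0} lies in the variety for any choice of coefficients.
import sympy as sp

x, y = sp.symbols('x y')
a1, a2, b1, b2 = 3, 5, 2, 7          # "generic" integer coefficients (made up)
f1 = a1*x*y + a2*x
f2 = b1*x*y + b2*x

# A lex Groebner basis confirms that, for these coefficients, the ideal is just (x),
# i.e. the whole variety is the positive-dimensional component {x = 0}.
print(sp.groebner([f1, f2], x, y, order='lex'))
```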
Joint work with Gabriela Jeronimo and Juan Sabia.
Bertini_real is software for decomposing real algebraic sets in any number of variables. Using Bertini's homotopy continuation tracker as its main engine, Bertini_real uses random real linear projections to implicitly parametrize the targeted algebraic set. By finding the critical sets — the places where the set is singular with respect to the projections — and then connecting the computed points on them, we produce a decomposition in terms of cells, where the interior is a singularity-free region, and the boundaries are critical, singular, or an artificially imposed boundary capturing infinite behavior. Bertini_real currently decomposes sets of dimension 1 or 2, i.e., curves and surfaces. Bertini_real is a numerical method for performing decompositions, and offers improvements over existing algorithms in terms of the number of variables defining the variety, as well as allowing tracking on and decomposition of singular components. The cellular decomposition is readily refined, visualized, and 3D printed.
This talk is concerned with certifying that a given point is near an exact root of an overdetermined or singular polynomial system with rational coefficients. The difficulty lies in the fact that consistency of overdetermined systems is not a continuous property. Our certification is based on hybrid symbolic-numeric methods. This is joint work with Tulay Akoglu, Jonathan Hauenstein and Victor Pan.
Wednesday, October 15th, 2014
Critical point methods are at the core of the interplay between polynomial optimization and polynomial system solving over the reals. These methods are used in algorithms for solving various problems such as deciding the existence of real solutions of polynomial systems, performing one-block real quantifier elimination, answering connectivity queries in semi-algebraic sets, etc.
The input consists of $s$ polynomials in $n$ variables of degree at most $D$. Usually, the complexity of the algorithms is $(s\, D)^{O(n^\alpha)}$ where $\alpha$ is a constant. In the past decade, tremendous efforts have been deployed to improve the exponents in the complexity bounds. This led to efficient implementations and new geometric procedures for solving polynomial systems over the reals that exploit properties of critical points and the so-called polar varieties.
In the first part of this talk, we present an overview of these techniques and their impact on practical algorithms. In the second part, we show how we can exploit algebraic and geometric properties of polar varieties to improve roadmap algorithms for deciding connectivity queries. In particular, we present an algorithm, obtained with E. Schost, that computes roadmaps in smooth bounded real algebraic sets in time which is essentially $O( (s\, D)^{6(n\log(n))})$. This complexity is subquadratic in the output size and a first implementation shows that it can already solve problems that are out of reach of previous approaches.
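As a toy illustration of the critical point idea itself (not the roadmap algorithm of the talk), one can obtain at least one real point on every compact connected component of a smooth plane curve f = 0 by computing the critical points of the projection (x, y) -> x, i.e. by solving f = 0, df/dy = 0; the curve below is a made-up example with two ovals.

```python
# A minimal critical-point computation: the solutions of {f = 0, df/dy = 0} contain the
# x-extremal points of each bounded connected component of the smooth curve f = 0.
import sympy as sp

x, y = sp.symbols('x y')
# a made-up smooth bounded curve consisting of two nested ovals
f = (x**2 + y**2 - 1) * (x**2 + y**2 - 4) + sp.Rational(1, 10)

critical = sp.solve([f, sp.diff(f, y)], [x, y], dict=True)
real_points = [s for s in critical if all(v.is_real for v in s.values())]
print(real_points)   # four points: the leftmost and rightmost point of each oval
```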
We discuss applications of theorems of real algebraic geometry (in particular, theorems concerning bounds on the algebraic degrees of solutions to systems of polynomial equations and inequalities) to computational game theory.
A non-zero polynomial of degree d has at most d complex roots. Moreover, Descartes proved that the number of real roots can also be bounded by a function of the number of monomials: a polynomial with t monomials has at most 2t-1 real roots.
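A quick sanity check of this statement (the example polynomials are made up): x^3 - x has t = 2 monomials and 3 = 2t - 1 real roots, while a sparse polynomial of much higher degree still obeys the same bound.

```python
# Checking the sparse bound 2t - 1 on a couple of made-up examples.
import sympy as sp

x = sp.symbols('x')
examples = [x**3 - x,               # t = 2 monomials, 3 = 2t - 1 real roots (-1, 0, 1)
            x**17 - x**9 + 3*x**2]  # t = 3 monomials, degree 17, at most 2t - 1 = 5 real roots

for p in examples:
    t = len(p.as_ordered_terms())
    n_real = len(sp.real_roots(p))  # real roots, counted with multiplicity
    print(f"{p}: t = {t} monomials, {n_real} real roots (bound 2t - 1 = {2*t - 1})")
```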
What happens if we consider the solutions of a system of polynomials? In the complex case, Bézout's Theorem bounds the number of solutions by the product of the degrees. Is there, in the real case, a similar bound which only depends on the number of monomials? Khovanskii proved that such a bound exists, but the bound he obtained is exponential. Is it the optimal bound? This is an open question.
A particular case, known as Sevostyanov's problem, is that of a system of one polynomial of degree d and one t-sparse polynomial. In this talk we will present a bound for this problem which is polynomial in t and d. This is joint work with Pascal Koiran and Natacha Portier.
Descartes' rule of signs bounds the number of positive roots of a univariate polynomial by the number of sign changes between consecutive coefficients. In particular, this produces a sharp bound depending on the number of monomials. Generalizing Descartes' rule of signs or the corresponding sharp bound to the multivariate case is a challenging problem. In this talk, I will present a generalization of Descartes' rule of signs for the number of positive solutions of any system of n real polynomial equations in n variables with at most n+2 monomials. This is joint work with Alicia Dickenstein.
Let $G$ be a bounded open subset of Euclidean space with real algebraic boundary. In the first part of the talk we consider the case where $G=\{x : g(x) \le 1\}$ for some quasi-homogeneous polynomial $g$, and derive several properties of $G$ as well as of the non-Gaussian integral $\int \exp(-g)\,dx$. In particular we show that the volume of $G$ is a convex function of the coefficients of $g$ and solve a generalized version of the Löwner-John problem.
Next, we consider a more general case and, under the assumptions that the degree $d$ of the algebraic boundary is known and the power moments of the Lebesgue measure on $G$ are known up to order $3d$, we describe an algorithmic procedure for obtaining a polynomial vanishing on the boundary. The particular case of semi-algebraic sets defined by a single polynomial inequality raises an intriguing question related to the finite determinateness of the full moment sequence. Our approach relies on Stokes' theorem and simple Hankel-type matrix identities.
Techniques based on chordal structure and bounded treewidth have been extensively studied in linear algebra, graphical models, constraint satisfaction, database theory, and many other areas. It is natural then to analyze to what extent chordality might also help to solve systems of polynomial equations. To this end, we propose a new technique, which we refer to as chordal elimination, that relies on elimination theory and Gröbner bases. By maintaining the graphical structure of the input polynomial system in all computations, chordal elimination can outperform standard Gröbner basis algorithms in many cases. Besides the theoretical developments, in this talk we will illustrate the suitability of our methods in examples arising from graph colorings, cryptography, sensor localization and differential equations. Based on joint work with Diego Cifuentes (MIT).
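A hedged sketch of the underlying idea (not the authors' full algorithm): when the variable-interaction graph is chordal, one can eliminate the top variable using only the equations that contain it and pass the eliminants down the chain; in general this only approximates the true elimination ideals, and the work gives conditions for exactness. The chain-structured system below is a made-up example.

```python
# A sketch of local elimination on a chain-structured (hence chordal) system; each step
# only involves two variables, in contrast with a global Groebner basis computation.
import sympy as sp

x0, x1, x2 = sp.symbols('x0 x1 x2')
F = [x0**2 - x1, x0*x1 - 1, x1**2 - x2]   # interaction graph: the chordal chain x0 - x1 - x2

def eliminate(polys, var, remaining):
    """Eliminate `var` from the polynomials that involve it, via a lex Groebner basis."""
    local = [p for p in polys if var in p.free_symbols]
    G = sp.groebner(local, var, *remaining, order='lex')
    return [g for g in G.exprs if var not in g.free_symbols]

step1 = eliminate(F, x0, [x1, x2])                       # only uses x0**2 - x1 and x0*x1 - 1
step2 = eliminate(step1 + [p for p in F if x0 not in p.free_symbols
                           and x1 in p.free_symbols], x1, [x2])
print(step1)    # [x1**3 - 1]
print(step2)    # [x2**3 - 1]
```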
In the sum of squares problem, we want to express the polynomial (x_1^2+…+x_n^2)(y_1^2+…+y_n^2) as f_1^2+…+f_k^2, with f_i bilinear in x and y. Here k lies somewhere between n and n^2, and it is an open problem to prove a superlinear lower bound on k. A lower bound of n^{1+\eps} with \eps>0 would imply an exponential lower bound on the arithmetic circuit complexity of the permanent, in the non-commutative setting. I will describe an approach to the sum of squares problem, based on families of anticommuting matrices. Essentially, one needs to show that it is impossible to have many pairwise anticommuting matrices with high rank. I will prove some partial results in this direction.
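The case n = k = 2 is the classical two-square identity, with f1, f2 bilinear in x and y; the open problem concerns how fast k must grow with n. A one-line symbolic check:

```python
# The two-square (Brahmagupta-Fibonacci) identity: the case n = k = 2 of the problem above.
import sympy as sp

x1, x2, y1, y2 = sp.symbols('x1 x2 y1 y2')
f1 = x1*y1 - x2*y2
f2 = x1*y2 + x2*y1
assert sp.expand((x1**2 + x2**2) * (y1**2 + y2**2) - (f1**2 + f2**2)) == 0
```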
Interior point methods track the central path in various convex optimization problems, including quadratic programs with linear constraints. The Zariski closure of the central path, called the central curve, is an algebraic curve that has been studied by De Loera, Sturmfels and Vinzant for generic linear programs. We follow in their footsteps and compute the degree of the central curve for generic quadratic programs. This gives a measure of algebraic complexity for the central curve and can be used to bound the curvature of the central path. We also construct a Gröbner basis for the central curve when the program is defined by a generic diagonal matrix.
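As a small sanity-check computation in the linear programming setting of De Loera, Sturmfels and Vinzant (the quadratic case of the talk is analogous but not shown here; the data A, b, c below are made up): on the central path of min c.x subject to Ax = b, x >= 0, the point x satisfies Ax = b together with x_i (c - A^T y)_i = mu for all i, and eliminating y and mu yields defining equations of the central curve.

```python
# Equations of the central curve of a tiny made-up linear program, by elimination.
import sympy as sp

x1, x2, x3, y, mu = sp.symbols('x1 x2 x3 y mu')
c = [0, 1, 2]                               # hypothetical objective; A = [1 1 1], b = 1
eqs = [x1 + x2 + x3 - 1,
       x1*(c[0] - y) - mu,
       x2*(c[1] - y) - mu,
       x3*(c[2] - y) - mu]

G = sp.groebner(eqs, mu, y, x1, x2, x3, order='lex')
central_curve = [g for g in G.exprs if not {mu, y} & g.free_symbols]
print(central_curve)
```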
Thursday, October 16th, 2014
A variety of fundamental computational problems in many fields (in game theory, economics, stochastic processes, verification, and other areas), which on the surface may look quite different, can all be boiled down to computing or approximating a fixed point for an algebraically-defined function which maps a subset of R^n to itself, and which is specified by an algebraic circuit (or formula) using standard gates from among { + , *, - , / , max , min } and using rational coefficients and constants.
The problems in question share the property that the *existence* of a fixed point is not in doubt: it is guaranteed by some classic fixed point theorem (e.g., Brouwer's, Banach's, Tarski's, etc.). But the proofs of existence do not in general yield an efficient way to find a fixed point. For many of these problems, we neither have efficient (P-time) algorithms, nor do we know them to be NP-hard. Indeed, total search complexity classes such as Papadimitriou's PPAD (= piecewise-linear-FIXP) and FIXP have been developed to characterize their complexity.
For some problems in FIXP, by now we know that any non-trivial approximation of an actual fixed point, even in NP, would resolve long standing open problems in arithmetic-vs.-Turing complexity, in particular the PosSLP problem and the square-root-sum problem. For other such problems, by now we have P-time approximation algorithms, using a variety of methods (including variants of Newton's method combined with linear programming).
What makes such a fixed point problem "hard" or "easy"?
In this talk, I will survey a number of such problems, and I will delineate a taxonomy of the (relative) complexity of such problems, based on both their combinatorial and their numerical "difficulty", and based on the nature of their defining algebraic circuits.
This talk is based on joint works in a series of papers with Mihalis Yannakakis and with Alistair Stewart.
We present an algorithm for solving bivariate polynomial systems with coefficients in $\mathbb{Q}$ with essentially optimal bit complexity. The core of the algorithm is a classical Newton iteration procedure. New ingredients are needed, though, such as Kedlaya-Umans' modular composition algorithm and deflation techniques due to Lecerf.
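To fix ideas, here is a generic double-precision Newton step for a bivariate system (a minimal numerical sketch with a made-up system; the talk's algorithm runs Newton iteration symbolically/modularly, together with the ingredients above, to reach the optimal bit complexity).

```python
# A plain Newton iteration for a bivariate system, refining an approximate root.
import numpy as np

def newton(F, J, z, iters=10):
    """F: R^2 -> R^2, J: its Jacobian; refine the approximate root z."""
    z = np.asarray(z, dtype=float)
    for _ in range(iters):
        z = z - np.linalg.solve(J(z), F(z))
    return z

F = lambda z: np.array([z[0]**2 + z[1]**2 - 5, z[0]*z[1] - 2])
J = lambda z: np.array([[2*z[0], 2*z[1]], [z[1], z[0]]])
print(newton(F, J, [2.2, 0.9]))     # converges to the nearby root (2, 1)
```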
Joint work with Esmaeil Mehrabi.
Polynomial systems are ubiquitous in mathematics, the sciences, and engineering, and Groebner basis theory is one of the most powerful tools for solving polynomial systems arising in practice. Buchberger (1965) gave the first algorithm for computing Groebner bases and introduced some simple criteria for detecting useless S-pairs. Faugere (2002) introduced signatures and rewritten rules to detect useless S-pairs, and his F5 algorithm is significantly faster than Buchberger's algorithm. In the last ten years or so, there has been extensive effort to modify F5 in order to obtain simpler algorithms that work for arbitrary sequences of polynomials (not necessarily homogeneous) and have rigorous proofs of correctness and finite termination.
In this talk, we present a new framework that provides an algorithmic foundation for computing Groebner bases for both ideals and the related syzygy modules (the latter are useful for computing free resolutions). We give a simple criterion for strong Groebner bases that contain Groebner bases for both ideals and syzygy modules. This criterion can detect all useless J-pairs (without performing any reduction) for any sequence of polynomials, thus yielding an efficient algorithm for computing Groebner bases. All the rewritten rules in the literature can now be easily explained by this criterion. This is joint work with Frank Volny IV (National Security Agency) and Mingsheng Wang (Chinese Academy of Sciences).
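For orientation, the classical background (not the new criterion of the talk) in a few lines: Buchberger's algorithm forms S-polynomials of pairs and reduces them modulo the current basis, and the pairs whose S-polynomial reduces to zero are wasted work, which signature/J-pair criteria aim to discard without performing any reduction. The system below is a made-up example in the lex order with x > y.

```python
# One classical Buchberger step: form an S-polynomial and reduce it modulo the basis.
import sympy as sp
from sympy import LM, LT

x, y = sp.symbols('x y')
f, g = x**2 + y**2, x*y - 1            # made-up input, lex order with x > y

m = sp.lcm(LM(f, x, y), LM(g, x, y))   # lcm of the leading monomials
s_poly = sp.expand(m / LT(f, x, y) * f - m / LT(g, x, y) * g)
_, remainder = sp.reduced(s_poly, [f, g], x, y)

print(s_poly)      # the S-polynomial y**3 + x
print(remainder)   # nonzero, so this S-pair contributes a new basis element
print(sp.groebner([f, g], x, y))
```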
Minkowski's mixed volume is a generalization of the ordinary n-volume to n-tuples of convex bodies. According to Bernstein's Theorem, the number of isolated complex solutions with non-zero coordinates of a system of n polynomials in n variables is at most n! times the mixed volume of the convex hulls of the supports of the polynomials.
This invariant therefore plays an important role in polynomial system solving, and a related object (a mixed subdivision) can be used to construct a start system for homotopy algorithms.
I will present a new algorithm for mixed volume and mixed subdivision computation. The basic idea behind this algorithm is to explore a finite number of toric varieties. Its complexity can be bounded in terms of certain mixed volumes. When all of the polytopes are n-dimensional, this algorithm is output-sensitive.
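A hedged illustration of the quantity being computed (not the algorithm of the talk): in two variables the BKK/Bernstein bound equals area(P1 + P2) - area(P1) - area(P2), where P1, P2 are the Newton polygons and + is the Minkowski sum; the supports below are a made-up example.

```python
# Computing the 2D BKK bound from Newton polygons via Minkowski sums and convex hulls.
import numpy as np
from scipy.spatial import ConvexHull

def area(points):
    """Area of the convex hull of a 2D point set (ConvexHull.volume is the area in 2D)."""
    pts = np.asarray(points, dtype=float)
    return 0.0 if len(np.unique(pts, axis=0)) < 3 else ConvexHull(pts).volume

def bkk_bound(support1, support2):
    minkowski = [(a[0] + b[0], a[1] + b[1]) for a in support1 for b in support2]
    return area(minkowski) - area(support1) - area(support2)

# Generic bilinear system: both supports are the unit square {1, x, y, xy}.
square = [(0, 0), (1, 0), (0, 1), (1, 1)]
print(bkk_bound(square, square))   # 4 - 1 - 1 = 2 isolated solutions in the torus, generically
```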
The 17th of the problems proposed by Steve Smale for the 21st century is concerned with the complexity of computing an approximate solution of a given system of $n$ complex polynomials in $n$ unknowns. More specifically, the 17th problem asks for a deterministic algorithm for this task that runs in time polynomial, on the average, in the size $N$ of the input system.
In joint work with Felipe Cucker, we managed to perform a smoothed analysis (in the sense of Spielman and Teng) of a linear homotopy algorithm for this problem that uses a random starting system. The smoothed complexity is polynomial in the input size and the variance of the random perturbation of the input systems. The techniques developed for this allow us to exhibit and analyze a deterministic algorithm for computing an approximate solution that has an average complexity of $N^{O(\log\log N)}$, which is nearly a solution to Smale's 17th problem.
The methodology appears to be quite general: for instance it leads to new algorithms for computing eigenvalues and eigenvectors of matrices that can be shown to run in smoothed polynomial time.
We develop a theory of complexity for numerical computations that takes into account the condition of the input data and allows for roundoff in the computations. We follow the lines of the theory developed by Blum, Shub, and Smale for computations over R (which in turn followed those of the classical, discrete, complexity theory as laid down by Cook, Karp, and Levin among others). In particular, we focus on complexity classes of decision problems and paramount among them, on appropriate versions of the classes P, NP and EXP of polynomial, nondeterministic polynomial, and exponential time, respectively. We prove some basic relationships between these complexity classes and exhibit natural NP-complete problems.
I will report on progress on the complexity of the (eigenvalue,eigenvector) problem for a general n by n complex matrix that Diego Armentano, Carlos Beltran and I are in the process of completing.