Oxford-Warwick Complexity Meetings

Meetings take place online and consist of informal talks dedicated to topics of interest in computational complexity theory and related areas. Our goal is for these meetings to serve as a forum for discussion and quick dissemination of results. While presentations should be reasonably self-contained, our target is a more specialised audience.

The duration of each talk is determined by the speaker. Meetings will not be recorded, and anyone interested in complexity theory is welcome to join.
You can join the mailing list for talk announcements and keep track of them using this calendar.  Please contact Igor Oliveira (Warwick) or Rahul Santhanam (Oxford) if you have any questions.

Seminars are typically held on Thursdays at 5pm London time (UTC +1) via Zoom. A link will be posted here shortly before each meeting.

Upcoming Talks:

29/October at 5pm London Time (UTC +1)
Ryan Williams (MIT) - Almost-everywhere circuit lower bounds from non-trivial derandomization


12/November at 5pm London Time (UTC +1)
Roei Tell (MIT) - Simple and fast derandomization from very hard functions: Eliminating randomness at almost no cost




Past Talks:


15/October at 5pm London Time (UTC +1)
Christian Ikenmeyer (Liverpool) - Recent progress on GCT  [ Slides ]


Abstract:  In 2016 the papers [Ikenmeyer-Panova FOCS 2016] and [Bürgisser-Ikenmeyer-Panova FOCS 2016] proved that the information about occurrences of irreducible representations in coordinate rings of certain orbit closures is too coarse to separate the algebraic complexity classes VBP and VNP. This disproved a key conjecture in geometric complexity theory. This talk gives an overview of 4 recent papers that were published after this insight.

In general, we know that geometric complexity lower bounds can always be proved by so-called highest weight polynomials. In the paper "On the complexity of evaluating highest weight vectors" with Bläser and Dörfler we prove that deciding nonzeroness of an evaluation of a highest weight polynomial is NP-hard, even on points of Waring rank 3.

This motivates the study of representation theoretic multiplicities as a way to circumvent working with highest weight polynomials directly. In the paper "The Computational Complexity of Plethysm Coefficients" with Fischer we prove that even deciding the positivity of plethysm coefficients is NP-hard, which makes several representation theoretic approaches more difficult to implement.

Despite their NP-hardness, the representation theoretic multiplicities could in principle be a viable path towards separating complexity classes. In the paper "On Geometric Complexity Theory: Multiplicity Obstructions Are Stronger Than Occurrence Obstructions" with Dörfler and Panova we give a first setting where a toy lower bound can be proved with multiplicities, but where the same bound is provably impossible to show using only occurrences.

We build on this result in the paper "Implementing geometric complexity theory: On the separation of orbit closures via symmetries" with Kandasamy. This is a full implementation of the GCT approach in a toy model, using the symmetry groups of two polynomials as a method to obtain multiplicity obstructions.

1/October at 5pm London Time (UTC +1)
Rafael Pass (Cornell) - On one-way functions and Kolmogorov complexity  [ Slides ]


Abstract:  We prove the equivalence of two fundamental problems in the theory of computing. For every polynomial t(n) > 2n, the following are equivalent: (i) cryptographic one-way functions exist; (ii) the t-time bounded Kolmogorov complexity problem is mildly hard-on-average. In doing so, we present the first natural, and well-studied, computational problem characterizing the feasibility of the central private-key primitives and protocols in Cryptography. Joint work with Yanyi Liu.  [ ECCC ]
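An informal restatement in symbols (the precise definitions and quantifiers are in the paper): writing K^t(x) for the length of the shortest program that prints x within t(|x|) steps,

    for every polynomial t(n) > 2n:   one-way functions exist  <=>  K^t is mildly hard-on-average,

where "mildly hard-on-average" means, roughly, that there is a polynomial p such that every probabilistic polynomial-time algorithm fails to compute K^t(x) on at least a 1/p(n) fraction of the n-bit strings x.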

17/September at 1pm London Time (UTC +1)
Shuichi Hirahara (National Institute of Informatics, Tokyo) - Meta-complexity theoretic approach to complexity theory
 [ Slides ]

Abstract:  In this talk, I will present a broad overview of “meta-complexity theory”, an emerging field of complexity theory. Meta-complexity refers to the computational complexity of problems that ask to compute some complexity measure. Examples of meta-complexity theoretic problems include the Minimum Circuit Size Problem (MCSP), which asks for the circuit complexity of a given function, and the time-bounded Kolmogorov complexity problem (MINKT), which asks for the time-bounded Kolmogorov complexity of a given string. Focusing on the relationship among meta-complexity, average-case complexity, cryptography, and Impagliazzo’s five worlds, I will survey recent results and explain new “meta-complexity theoretic” approaches towards resolving old and challenging open questions of complexity theory.
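For orientation, the two problems mentioned above are usually formalized as decision problems along the following lines (standard formulations; the talk may use slightly different conventions):

    MCSP:   given the truth table of a Boolean function f : {0,1}^n -> {0,1} (a string of length 2^n)
            and a size parameter s, decide whether some circuit of size at most s computes f.
    MINKT:  given a string x and parameters t and s (in unary), decide whether K^t(x) <= s, i.e.,
            whether some program of length at most s prints x within t steps.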

20/August at 5pm London Time (UTC +1)
Srikanth Srinivasan (IIT Bombay) - A robust version of Hegedűs's lemma
 [ Slides ]

Abstract:  Hegedűs's lemma is the following combinatorial statement regarding polynomials over finite fields. Over a field F of characteristic p > 0 and for q a power of p, the lemma says that any multilinear polynomial P in F[x_1, ..., x_n] of degree less than q that vanishes at all points in {0,1}^n of Hamming weight k in [q, n-q] must also vanish at all points in {0,1}^n of weight k + q. This lemma was used by Hegedűs (2009) to give a solution to Galvin's problem, an extremal problem about set systems; by Alon, Kumar and Volk (2018) to improve the best-known multilinear circuit lower bounds; and by Hrubeš, Ramamoorthy, Rao and Yehudayoff (2019) to prove optimal lower bounds against depth-2 threshold circuits for computing some symmetric functions.
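As a toy sanity check of the statement (not from the abstract): take F = F_2 and q = 2, so that P = a_0 + a_1 x_1 + ... + a_n x_n is affine. If P vanishes at all points of weight k with 2 <= k <= n - 2, then comparing two weight-k points that differ only by swapping a 1 and a 0 in coordinates i and j gives a_i = a_j for all i, j. Hence

    P(x) = a_0 + a * (x_1 + ... + x_n)   over F_2,

which depends only on the parity of the Hamming weight; since k and k + 2 have the same parity, vanishing at weight k forces vanishing at weight k + 2.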

In this work, we formulate a robust version of Hegedűs's lemma. Informally, this version says that if a polynomial of degree o(q) vanishes at most points of weight k, then it vanishes at many points of weight k + q. We prove this lemma and give some applications to polynomial approximations of Boolean functions.


6/August at 5pm London Time (UTC +1)
Mrinal Kumar (IIT Bombay) - On the existence of algebraically natural proofs
 [ Slides ]

Abstract:
 
A recent line of research in algebraic complexity aims to understand whether there is a Natural Proofs-like barrier in algebraic complexity. A key question here is to understand whether the complexity class VP has constructible equations, i.e., whether there is a non-zero polynomial of low degree and low circuit complexity which vanishes on the coefficient vectors of all efficiently computable low-degree polynomials.

In this talk, we will discuss a recent result where we make some progress on a natural special case of this question, and show the existence of constructible equations for polynomials in VP which have bounded integer coefficients. More formally, we show that for every constant c > 0, there is a family {P_{N, c}} of polynomials with degree and algebraic circuit complexity polynomially bounded in the number of variables N that satisfies the following properties:
 
1. For every family {f_n} of polynomials in VP, where f_n is an n-variate polynomial of degree at most n^c with bounded integer coefficients, and for N = \binom{n^c + n}{n}, P_{N,c} vanishes on the coefficient vector of f_n.
 
2. There exists a family {h_n} of polynomials, where h_n is an n-variate polynomial of degree at most n^c with bounded integer coefficients, such that for N = \binom{n^c + n}{n}, P_{N,c} does not vanish on the coefficient vector of h_n. (See the note below on the choice of N.)
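The value of N above is simply the length of the coefficient vector: the number of monomials in n variables of total degree at most n^c is

    N = \binom{n^c + n}{n},

so the coefficient vector of an n-variate polynomial of degree at most n^c is a point in N-dimensional space.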

Our proofs are elementary, rely on the existence of (non-explicit) hitting sets for VP, and also extend to the seemingly larger class VNP.

The talk is based on joint work with Prerona Chatterjee, C. Ramya, Ramprasad Saptharishi and Anamay Tengse.

23/July at 5pm London Time (UTC +1)
Benjamin Rossman (Duke University) - Size-depth tradeoff for multiplying k permutations
 [ Slides ] 

Abstract:  We consider the problem of computing the product of k n-by-n permutation matrices. This problem is equivalent to Parity_k when n = 2, is complete for NC1 when n = 5, and is complete for Logspace when k = n. Understanding the formula complexity of this important problem is a concrete approach to separating NC1 from Logspace.
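To unpack the n = 2 case mentioned above: a 2-by-2 permutation matrix is either the identity I or the swap S, and S^2 = I, so writing M_i = S^{b_i} with b_i in {0,1},

    M_1 M_2 ... M_k = S^{b_1 + b_2 + ... + b_k (mod 2)},

and reading off a single entry of the product computes exactly Parity_k(b_1, ..., b_k).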

In the bounded-depth setting, the problem is solvable by depth-(d+1) AC0 formulas of size n^O(min(dk^(1/d), log k)) for all k, n, d. In the regime k <= log log n, a tight n^Omega(log k) lower bound was shown in [R. ’14] up to depth (log n)/(poly k). In this work, we establish a tradeoff of n^Omega(dk^(1/2d)) for small depths d <= log k. The exponent 1/2d in this expression improves on the 1/exp(d) exponent in the lower bound of [Beame-Impagliazzo-Pitassi ’98]. We further improve this exponent to 1/d for *monotone* AC0 formulas computing the product of k *sub-permutation* matrices, and to 2/d when AND gates have fan-in polylog n; both bounds are optimal. In this talk, I will describe the interesting combinatorial framework behind these tradeoffs.


16/July at 5pm London Time (UTC +1)
Lijie Chen (MIT) - Sharp threshold results for computational complexity
 [ Slides ]

Abstract:
 
Please see https://eccc.weizmann.ac.il/report/2020/065/

2/July at 5pm London Time (UTC +1)
Or Meir (University of Haifa) - The KRW Conjecture: Past, present, and future
 [ Slides ]

Abstract:  Proving super-logarithmic lower bounds on the depth of circuits is one of the main frontiers of circuit complexity. In 1991, Karchmer, Raz and Wigderson observed that we could resolve this question by proving the following conjecture: given two Boolean functions f and g, the depth complexity of their composition is about the sum of their individual depth complexities. While we cannot prove the conjecture yet, there has been some exciting progress toward such a proof, some of it in the last few years.
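In symbols, as the conjecture is usually stated (notation introduced here, not taken from the abstract): for Boolean functions f : {0,1}^n -> {0,1} and g : {0,1}^m -> {0,1}, define their composition g∘f : ({0,1}^n)^m -> {0,1} by

    (g∘f)(x_1, ..., x_m) = g(f(x_1), ..., f(x_m)).

The KRW conjecture asserts that D(g∘f) is roughly D(g) + D(f), where D(h) denotes the minimum depth of a fan-in-2 Boolean formula computing h.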

In this talk, I will survey the state of the art in this line of research. In particular, I will describe a recent result that proves the conjecture for a new family of functions based on lifting theorems. I will also propose some future directions, including specific open problems that might be tractable. Based on joint work with Susanna F. de Rezende, Jakob Nordstrom, Toniann Pitassi, and Robert Robere.


18/June at 5pm London Time (UTC +1)
Rahul Ilango (MIT) - A lifting-esque theorem for constant depth formulas with consequences for MCSP and lower bounds  [ Slides ]


Abstract:  Connections between the Minimum Circuit Size Problem (MCSP) and a growing list of subfields -- including cryptography, learning theory, and average-case complexity -- strongly motivate understanding the computational complexity of MCSP. While attempts to prove the intractability of MCSP date as far back as the late 1950s, resolving this question remains a major open problem.

This talk will discuss progress in understanding the Minimum Circuit Size Problem for restricted classes of circuits. While Masek proved in the late 1970s that minimizing DNF formulas is NP-hard, extending this to depth-3 formulas with AND and OR gates was open. We show that minimizing depth-d formulas given a function's truth table is NP-hard for all d >= 2, under quasipolynomial-time randomized reductions. Our approach is based on a method to "lift" depth-d formula lower bounds to depth-(d+1). Combined with existing depth-hierarchy theorems, this method also implies the existence of a function with a 2^{c_d(n^\epsilon)} additive gap between its depth-d and depth-(d+1) formula complexity, where c_d > 0 is some constant that depends on d and \epsilon > 0 is some constant *independent* of d.
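In symbols (notation introduced here for readability, not used in the abstract): writing L_d(f) for the minimum size of a depth-d formula with AND and OR gates computing f, the gap statement says that for every d >= 2 there is a function f on n bits with

    L_d(f) - L_{d+1}(f) >= 2^{c_d(n^\epsilon)},

where c_d > 0 depends on d and \epsilon > 0 is independent of d.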