Oxford-Warwick Complexity Meetings

Meetings take place online and consist of informal talks on topics of interest in computational complexity theory and related areas. Our goal is for these meetings to serve as a forum for discussion and for the quick dissemination of results. While presentations should be reasonably self-contained, our target is a more specialised audience.

The duration of each talk is determined by the speaker. Meetings will not be recorded, and anyone interested in complexity theory is welcome to join.
You can join the mailing list for talk announcements and keep track of them using this calendar.  Please contact Igor Oliveira (Warwick) or Rahul Santhanam (Oxford) if you have any questions.

A Zoom link will be posted here shortly before each meeting.

Upcoming Talks:


23/September/2021 (Thursday) at 5pm London Time
Oliver Korten (Columbia) - The hardest explicit construction
https://arxiv.org/abs/2106.00875  

7/October/2021 (Thursday) at 5pm London Time
Rahul Ilango (MIT) - The Minimum Formula Size Problem is (ETH) Hard


Past Talks:

15/July/2021 (Thursday) at 5pm London Time  [ Slides ]
Mika Göös (EPFL) - Unambiguous DNFs and Alon-Saks-Seymour
https://eccc.weizmann.ac.il/report/2021/016/  

9/July/2021 (Friday) at 4pm London Time  [ Slides ]
Nutan Limaye (IIT Bombay) - Superpolynomial lower bounds against low-depth algebraic circuits
https://eccc.weizmann.ac.il/report/2021/081/

1/July/2021 (Thursday) at 5pm London Time  [ Slides ]
William Hoza (UT Austin) - Better pseudodistributions and derandomization for space-bounded computation
https://eccc.weizmann.ac.il/report/2021/048/

17/June/2021 (Thursday) at 5pm London Time  [ Slides ]
Lijie Chen (MIT) - Hardness vs Randomness, Revised: Uniform, Non-Black-Box, and Instance-Wise
Textbook hardness-to-randomness converts circuit lower bounds into PRGs. But is this black-box approach really necessary for derandomization? In this new work we revamp the classical hardness-to-randomness framework, showing how to convert new types of *uniform lower bounds* into *non-black-box derandomization*, deducing conclusions such as promiseBPP = promiseP without PRGs. Moreover, we show that the same types of lower bounds are in fact *necessary* for any type of derandomization! This reveals a tight connection between any derandomization of promiseBPP (i.e., not necessarily a black-box one) and the foregoing new types of uniform lower bounds.

Our framework also allows a flexible trade-off between hardness and randomness. In an extreme setting, we show that plausible uniform lower bounds imply that "randomness is indistinguishable from useless". That is, every randomized algorithm can be derandomized with an arbitrarily small polynomial overhead, such that no polynomial-time algorithm can find a mistake with non-negligible probability.

Joint work with Roei Tell.

10/June/2021 (Thursday) at 1pm London Time  [ Slides ]
Shuichi Hirahara (National Institute of Informatics, Tokyo) - Average-case hardness of NP from exponential worst-case hardness assumptions
Basing the average-case hardness of NP on the worst-case hardness of NP is a central open question in complexity theory, known as the question of excluding Heuristica from Impagliazzo’s five possible worlds.  Even the weaker goal of basing the average-case hardness of PH on the exponential worst-case hardness of UP has long remained open, and a large body of research has been devoted to explaining why standard proof techniques fail to resolve it.

In this work, we develop new proof techniques and resolve the open question.  We prove that if UP is not in DTIME(2^{O(n/log n)}), then NP is hard on average.  Our proofs are based on the meta-complexity of time-bounded Kolmogorov complexity: We analyze average-case complexity through the lens of worst-case meta-complexity by using several new notions such as universal heuristic scheme and P-computable average-case polynomial-time.


20/May/2021 (Thursday) at 5pm London Time  [ Slides ]
Amir Yehudayoff (Technion) - Slicing the hypercube is not easy
https://arxiv.org/abs/2102.05536

6/May/2021 (Thursday) at 3pm London Time  [ Slides ]
Bruno Loff (Porto) - Hardness of constant-round communication complexity
https://eccc.weizmann.ac.il/report/2021/030/

22/April/2021 (Thursday) at 1pm London Time  [ Slides ]
Hanlin Ren (Tsinghua) - Hardness of KT characterizes parallel cryptography
A recent breakthrough of Liu and Pass (FOCS'20) shows that one-way functions exist if and only if the (polynomial-)time-bounded Kolmogorov complexity, K^t, is bounded-error hard on average to compute. In this work, we extend their results by showing, perhaps surprisingly, that the KT complexity is bounded-error average-case hard if and only if there exist one-way functions in constant parallel time (i.e., uniform NC^0). This result crucially relies on the idea of randomized encodings. Previously, a seminal work of Applebaum, Ishai, and Kushilevitz (FOCS'04; SICOMP'06) used the same idea to show that NC^0-computable one-way functions exist if and only if logspace-computable one-way functions exist.

Inspired by the above result, we present randomized average-case reductions among the NC^1-versions and logspace-versions of K^t complexity, and the KT complexity. Our reductions preserve both bounded-error average-case hardness and zero-error average-case hardness. To the best of our knowledge, this is the first reduction between the KT complexity and any variant of K^t complexity.
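For reference, the KT measure discussed above is (roughly) defined as follows; this is the standard formulation due to Allender et al., stated here for convenience rather than taken from the talk itself:

```latex
\mathrm{KT}(x) \;=\; \min\bigl\{\, |d| + t \;:\; \text{for every } i \le |x|+1,\ U^{d}(i) \text{ outputs } x_i \text{ within } t \text{ steps} \,\bigr\},
```

where U is a fixed universal machine given random access to the description d, and position |x|+1 returns an end-of-string symbol. Note that the time bound t is charged additively, which ties KT closely to circuit size.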


8/April/2021 (Thursday) at 5pm London Time
Josh Alman (Harvard) - Kronecker products, low-depth circuits, and matrix rigidity
 [ Slides ]
https://arxiv.org/abs/2102.11992

26/March/2021 (Friday) at 5pm London Time
Avishay Tal (Berkeley) - Pseudorandom generators for read-once monotone branching programs
 [ Slides ]
Motivated by the derandomization of space-bounded computation, there has been a long line of work on constructing pseudorandom generators (PRGs) against various forms of read-once branching programs (ROBPs), with a goal of improving the O(log^2(n)) seed length of Nisan’s classic construction [Nis92] to the optimal O(log n).

In this work, we construct an explicit PRG with seed length O~(log n) for constant-width ROBPs that are monotone, meaning that the states at each time step can be ordered so that edges with the same labels never cross each other. Equivalently, for each fixed input, the transition functions are a monotone function of the state.

Our PRG also works for monotone ROBPs that can read the input bits in any order, which are strictly more powerful than read-once AC0. Our PRG achieves better parameters (in terms of the dependence on the depth of the circuit) than the best previous pseudorandom generator for read-once AC0, due to Doron, Hatami, and Hoza [DHH19].

Joint work with Dean Doron, Raghu Meka, Omer Reingold, and Salil Vadhan.


11/March/2021 (Thursday) at 3pm London Time
Yuval Filmus (Technion) - Bounded indistinguishability for simple sources
 [ Slides ]
Braverman's theorem states that polylog(n)-wise independence fools AC^0. In contrast, since the approximate degree of OR is sqrt{n}, there are two sqrt{n}-wise indistinguishable distributions which do not even fool OR!

Motivated by cryptographic applications, and continuing earlier work of Bogdanov, Ishai, Viola and Williamson, we ask whether Braverman-style results can be recovered for simple distributions, such as ones samplable locally or using low degree polynomials.

We relate these questions to the notorious problem of computing inner product using AC^0 circuits with parity gates at the bottom. This suggests that proving Braverman-style results for AC^0 is hard even for linear sources, and so we turn to consider the OR function.

We prove a Braverman-style theorem for OR which holds for constant degree sources and for local sources; in contrast to Braverman's theorem, we only require O(1)-wise indistinguishability. On the other hand, we show that logarithmic degree suffices to sample a pair of sqrt{n}-wise indistinguishable sources which can be distinguished by OR. Our construction also results in a new visual secret sharing scheme.

Joint work with Andrej Bogdanov (CUHK), Krishnamoorthy Dinesh (CUHK), Yuval Ishai (Technion), Avi Kaplan (Technion), Akshayaram Srinivasan (TIFR).

25/February/2021 (Thursday) at 5pm London Time  [ Slides ]
Robert Robere (McGill) - Amortized circuit complexity, formal complexity measures, and catalytic algorithms
Some of the central questions in complexity theory address the amortized complexity of computation (also sometimes known as direct sum problems). While these questions appear in many contexts, they are all variants of the following:

Is the best way of computing T many times in parallel simply to compute T independently each time, or can we achieve an economy of scale and compute all copies of T more efficiently on average?

In this talk, we discuss some recent results studying the amortized circuit complexity of computing boolean functions in various circuit models. The amortized circuit complexity of a boolean function f is defined to be the limit, as m tends to infinity, of the circuit complexity of computing f on the same input m times, divided by m. We prove a new duality theorem for amortized circuit complexity in any circuit model composed of gates over a finite gate set, showing that the amortized circuit complexity of computing f is equal to the best lower bound achieved by any "formal complexity measure" applied to f.

This new duality theorem is inspired by, and closely related to, Strassen's duality theorem for semirings, which has been fruitfully used to characterize the matrix multiplication exponent, the Shannon capacity of graphs, and other important parameters in combinatorics and complexity. We discuss how our new duality theorem can be used to give alternative proofs of upper bounds on amortized circuit complexity, and also the close relationship between amortized circuit complexity and catalytic algorithms, in which an algorithm is provided with an extra input of advice bits that it is free to use, as long as it outputs a new copy of the extra advice on termination.
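In symbols, the definition just given reads as follows (the notation f^{(m)}, for the task of outputting m copies of f(x) on input x, is ours, introduced only for this summary):

```latex
\widehat{C}(f) \;=\; \lim_{m \to \infty} \frac{C\bigl(f^{(m)}\bigr)}{m}.
```

The limit exists by subadditivity of circuit complexity under this kind of composition (Fekete's lemma).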

This is joint work with Jeroen Zuiddam.


12/February/2021 (Friday) at 4pm London Time  [ Slides ]
Susanna F. de Rezende (Czech Academy of Sciences) - Automating tree-like resolution in time n^o(log n) is ETH-hard
We will start this talk by presenting a simplified proof of the breakthrough result of Atserias and Müller (FOCS'19) that automating resolution is NP-hard. This is based on a joint work with Mika Göös, Jakob Nordström, Toni Pitassi, Robert Robere and Dmitry Sokolov. Building on this, we will show that tree-like resolution is not automatable in time n^o(log n) unless ETH is false. This implies that, under ETH, the simple algorithm given by Beame and Pitassi (FOCS'96) that automates tree-like resolution in time n^O(log n) is optimal.

28/January/2021 at 5pm London Time  [ Slides ]
Alexandros Hollender (Oxford) - The complexity of gradient descent: CLS = PPAD ∩ PLS

https://arxiv.org/abs/2011.01929

14/January/2021 at 5pm London Time  [ Slides ]
Eric Allender (Rutgers) - Cryptographic Hardness under Projections for Time-Bounded Kolmogorov Complexity

A version of time-bounded Kolmogorov complexity, denoted KT, has received attention in the past several years, due to its close connection to circuit complexity and to the Minimum Circuit Size Problem MCSP. Essentially all results about the complexity of MCSP hold also for MKTP (the problem of computing the KT complexity of a string). Both MKTP and MCSP are hard for SZK (Statistical Zero Knowledge) under BPP-Turing reductions; neither is known to be NP-complete.

Recently, some hardness results for MKTP were proved that are not (yet) known to hold for MCSP.  In particular, MKTP is hard for DET (a subclass of P) under nonuniform NC^0 m-reductions.

In new work, we improve this, showing that MKTP is hard for the (apparently larger) class NISZKL not only under NC^0 reductions but even under projections. MKTP is also hard for NISZK under P/poly m-reductions. Here, NISZK is the class of problems with non-interactive statistical zero-knowledge proofs, and NISZKL is the non-interactive version of the class SZKL that was studied by Dvir et al.

In this talk, we'll explain why we should care about completeness under projections, and what these new results tell us about where MCSP and MKTP fit in the complexity landscape.
This is joint work with John Gouwar, Shuichi Hirahara, and Caleb Robelle.

3/December/2020 at 5pm London Time
Toniann Pitassi (Toronto/IAS) - Lifting with Sunflowers
 [ Slides ]
In this talk I will first motivate lifting theorems where lower bounds on communication complexity for composed functions are obtained by a general simulation theorem, essentially showing that no protocol can do better than the obvious "query" algorithm. I'll give a self-contained simplified proof of lifting that uses the sunflower lemma, and then give some applications and open problems. This is joint work with Shachar Lovett, Raghu Meka, Ian Mertz and Jiapeng Zhang.

12/November/2020 at 5pm London Time
Roei Tell (MIT) - Simple and fast derandomization from very hard functions - Eliminating randomness at almost no cost
 [ Slides ]
https://eccc.weizmann.ac.il/report/2020/148/

29/October/2020 at 5pm London Time
Ryan Williams (MIT) - Almost-everywhere circuit lower bounds from circuit-analysis algorithms
 [ Slides ]
https://eccc.weizmann.ac.il/report/2020/150/

15/October/2020 at 5pm London Time
Christian Ikenmeyer (Liverpool) - Recent progress on GCT  [ Slides ]

In 2016 the papers [Ikenmeyer-Panova FOCS 2016] and [Bürgisser-Ikenmeyer-Panova FOCS 2016] proved that the information about occurrences of irreducible representations in coordinate rings of certain orbit closures is too coarse to separate the algebraic complexity classes VBP and VNP. This disproved a key conjecture in geometric complexity theory. This talk gives an overview of 4 recent papers that were published after this insight.

In general, we know that geometric complexity lower bounds can always be proved by so-called highest weight polynomials. In the paper "On the complexity of evaluating highest weight vectors" with Bläser and Dörfler we prove that deciding nonzeroness of an evaluation of a highest weight polynomial is NP-hard, even on points of Waring rank 3.

This motivates the study of representation theoretic multiplicities as a way to circumvent working with highest weight polynomials directly. In the paper "The Computational Complexity of Plethysm Coefficients" with Fischer we prove that even deciding the positivity of plethysm coefficients is NP-hard, which makes several representation theoretic approaches more difficult to implement.

Despite their NP-hardness, the representation theoretic multiplicities could in principle be a viable path towards separating complexity classes. In the paper "On Geometric Complexity Theory: Multiplicity Obstructions Are Stronger Than Occurrence Obstructions" with Dörfler and Panova we give a first setting where a toy lower bound can be proved with multiplicities, but where the same bound is provably impossible to show using only occurrences.

We build on this result in the paper "Implementing geometric complexity theory: On the separation of orbit closures via symmetries" with Kandasamy. This is a full implementation of the GCT approach in a toy model, using the symmetry groups of two polynomials as a method to obtain multiplicity obstructions.

1/October/2020 at 5pm London Time
Rafael Pass (Cornell) - On one-way functions and Kolmogorov complexity  [ Slides ]

We prove the equivalence of two fundamental problems in the theory of computing. For every polynomial t(n) > 2n, the following are equivalent: (i) cryptographic one-way functions exist; (ii) the t-time-bounded Kolmogorov complexity problem is mildly hard-on-average. In doing so, we present the first natural, and well-studied, computational problem characterizing the feasibility of the central private-key primitives and protocols in cryptography. Joint work with Yanyi Liu.  [ ECCC ]

17/September/2020 at 1pm London Time
Shuichi Hirahara (National Institute of Informatics, Tokyo) - Meta-complexity theoretic approach to complexity theory
 [ Slides ]
In this talk, I will present a broad overview of “meta-complexity theory” --- an emerging field of complexity theory. Meta-complexity refers to the computational complexity of problems asking to compute some complexity. Examples of meta-complexity theoretic problems include the Minimum Circuit Size Problem (MCSP), which asks for the circuit complexity of a given function, and the time-bounded Kolmogorov complexity problem (MINKT), which asks for the time complexity of a given string. Focusing on the relationship among meta-complexity, average-case complexity, cryptography, and Impagliazzo’s five worlds, I will survey recent results and explain new “meta-complexity theoretic” approaches towards resolving old and challenging open questions of complexity theory.

20/August/2020 at 5pm London Time
Srikanth Srinivasan (IIT Bombay) - A robust version of Hegedűs's lemma
 [ Slides ]
Hegedűs's lemma is the following combinatorial statement about polynomials over finite fields. Over a field F of characteristic p > 0, and for q a power of p, the lemma says that any multilinear polynomial P in F[x_1, ..., x_n] of degree less than q that vanishes at all points in {0,1}^n of Hamming weight k in [q, n-q] must also vanish at all points in {0,1}^n of weight k + q. This lemma was used by Hegedűs (2009) to give a solution to Galvin's problem, an extremal problem about set systems; by Alon, Kumar and Volk (2018) to improve the best-known multilinear circuit lower bounds; and by Hrubeš, Ramamoorthy, Rao and Yehudayoff (2019) to prove optimal lower bounds against depth-2 threshold circuits computing some symmetric functions.

In this work, we formulate a robust version of Hegedűs's lemma. Informally, this version says that if a polynomial of degree o(q) vanishes at most points of weight k, then it vanishes at many points of weight k + q. We prove this lemma and give some applications to polynomial approximations of Boolean functions.
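As a quick sanity check of the (non-robust) lemma, here is a brute-force verification of its smallest interesting case; the parameters and code are illustrative choices of ours, not part of the talk:

```python
from itertools import product

# Hegedus's lemma over F = GF(2), so p = q = 2, with n = 4 variables.
# "Degree less than q" means affine: P = c0 + c1*x1 + ... + c4*x4.
# The lemma predicts: if P vanishes at every point of Hamming weight
# k = 2 (here [q, n-q] = [2, 2]), then P also vanishes at the unique
# point of weight k + q = 4, namely (1, 1, 1, 1).

n = 4
points = list(product([0, 1], repeat=n))
weight2 = [x for x in points if sum(x) == 2]
weight4 = [x for x in points if sum(x) == 4]

def evaluate(coeffs, x):
    """Evaluate the affine polynomial c0 + sum_i c_i x_i over GF(2)."""
    c0, rest = coeffs[0], coeffs[1:]
    return (c0 + sum(c * xi for c, xi in zip(rest, x))) % 2

violations = 0   # polynomials that satisfy the hypothesis but not the conclusion
nontrivial = 0   # nonzero polynomials that satisfy the hypothesis
for coeffs in product([0, 1], repeat=n + 1):  # all 32 affine polynomials
    if all(evaluate(coeffs, x) == 0 for x in weight2):
        if any(coeffs):
            nontrivial += 1
        if any(evaluate(coeffs, x) != 0 for x in weight4):
            violations += 1

print(nontrivial, violations)  # prints "1 0": one nonzero witness, no violations
```

The single nonzero witness is x1 + x2 + x3 + x4, which vanishes at every even-weight point, consistent with the lemma.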


6/August/2020 at 5pm London Time
Mrinal Kumar (IIT Bombay) - On the existence of algebraically natural proofs
 [ Slides ]
A recent line of research in algebraic complexity aims to understand whether there is a Natural Proofs like barrier in algebraic complexity. A key question here is to understand whether the complexity class VP has constructible equations, i.e., whether there is a non-zero polynomial of low degree and circuit complexity which vanishes on the coefficient vectors of all efficiently computable low degree polynomials.

In this talk, we will discuss a recent result where we make some progress on a natural special case of this question, and show the existence of constructible equations for polynomials in VP which have bounded integer coefficients. More formally, we show that for every constant c > 0, there is a family {P_{N, c}} of polynomials with degree and algebraic circuit complexity polynomially bounded in the number of variables N that satisfies the following properties:
 
1. For every family {f_n} of polynomials in VP, where f_n is an n-variate polynomial of degree at most n^c with bounded integer coefficients, and for N = \binom{n^c + n}{n}, the polynomial P_{N,c} vanishes on the coefficient vector of f_n.
 
2. There exists a family {h_n} of polynomials, where h_n is an n-variate polynomial of degree at most n^c with bounded integer coefficients, such that for N = \binom{n^c + n}{n}, P_{N,c} does not vanish on the coefficient vector of h_n.

Our proofs are elementary, relying on the existence of (non-explicit) hitting sets for VP, and they extend to the seemingly larger class VNP as well.
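As an aside on the parameter N = \binom{n^c + n}{n} in the statement above: this is exactly the number of coefficients of an n-variate polynomial of degree at most n^c, since monomials of total degree at most d in n variables are counted by C(n + d, n). A brute-force check of that count for small parameters (an illustrative aside of ours, not part of the talk):

```python
from itertools import combinations_with_replacement
from math import comb

def count_monomials(n, d):
    """Count monomials of total degree <= d in n variables by brute force."""
    # Monomials of degree exactly k correspond to multisets of k variables.
    return sum(len(list(combinations_with_replacement(range(n), k)))
               for k in range(d + 1))

# Verify the closed form C(n + d, n) on a small grid.
for n in range(1, 5):
    for d in range(5):
        assert count_monomials(n, d) == comb(n + d, n)
print("ok")
```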

The talk is based on joint work with Prerona Chatterjee, C. Ramya, Ramprasad Saptharishi and Anamay Tengse.

23/July/2020 at 5pm London Time
Benjamin Rossman (Duke University) - Size-depth tradeoff for multiplying k permutations
 [ Slides ] 
We consider the problem of computing the product of k n-by-n permutation matrices. This problem is equivalent to Parity_k when n = 2; it is complete for NC1 when n = 5, and complete for Logspace when k = n. Understanding the formula complexity of this important problem is a concrete approach to separating NC1 from Logspace.

In the bounded-depth setting, the problem is solvable by depth d+1 AC0 formulas of size n^O(min(dk^(1/d), log k)) for all k,n,d. In the regime k <= log log n, a tight n^Omega(log k) lower bound was shown in [R. ’14] up to depth (log n)/(poly k). In this work, we establish a tradeoff n^Omega(dk^(1/2d)) for small depths d <= log k. The exponent 1/2d in this expression improves a 1/exp(d) lower bound of [Beame-Impagliazzo-Pitassi ’98]. We further improve this exponent to 1/d for *monotone* AC0 formulas computing the product of k *sub-permutation* matrices, and to 2/d when AND gates have fan-in polylog n; both bounds are optimal. In this talk, I will describe the interesting combinatorial framework behind these tradeoffs.
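The n = 2 base case mentioned above can be made concrete in a few lines (an illustrative aside of ours, not from the talk): a 2x2 permutation matrix is either the identity or the swap, and the product of k such matrices is the swap exactly when an odd number of them are swaps, i.e. the product computes Parity_k:

```python
from functools import reduce
import random

I = ((1, 0), (0, 1))  # identity matrix <-> input bit 0
S = ((0, 1), (1, 0))  # swap matrix     <-> input bit 1

def matmul(a, b):
    """Multiply two 2x2 matrices represented as nested tuples."""
    return tuple(tuple(sum(a[i][t] * b[t][j] for t in range(2))
                       for j in range(2))
                 for i in range(2))

def product_is_swap(bits):
    """Encode bits as permutation matrices and test if the product swaps."""
    return reduce(matmul, (S if b else I for b in bits)) == S

# The product is the swap iff the parity of the bits is odd.
random.seed(0)
for _ in range(200):
    bits = [random.randint(0, 1) for _ in range(7)]
    assert product_is_swap(bits) == (sum(bits) % 2 == 1)
print("ok")
```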


16/July/2020 at 5pm London Time
Lijie Chen (MIT) - Sharp threshold results for computational complexity
 [ Slides ]
https://eccc.weizmann.ac.il/report/2020/065/

2/July/2020 at 5pm London Time
Or Meir (University of Haifa) - The KRW Conjecture: Past, present, and future
 [ Slides ]
Proving super-logarithmic lower bounds on the depth of circuits is one of the main frontiers of circuit complexity. In 1991, Karchmer, Raz and Wigderson observed that we could resolve this question by proving the following conjecture: Given two boolean functions f,g, the depth complexity of their composition is about the sum of their individual depth complexities.  While we cannot prove the conjecture yet, there has been some exciting progress toward such a proof, some of it in the last few years.

In this talk, I will survey the state-of-the-art in this line of research. In particular, I will describe a recent result that proves the conjecture for a new family of functions based on lifting theorems. I will also propose some future directions, including specific open problems that might be tractable. Based on a joint work with Susanna F. de Rezende, Jakob Nordstrom, Toniann Pitassi, and Robert Robere.


18/June/2020 at 5pm London Time
Rahul Ilango (MIT) - A lifting-esque theorem for constant depth formulas with consequences for MCSP and lower bounds  [ Slides ]

Connections between the Minimum Circuit Size Problem (MCSP) and a growing list of subfields -- including cryptography, learning theory, and average-case complexity -- strongly motivate understanding the computational complexity of MCSP. While attempts to prove the intractability of MCSP date as far back as the late 1950s, resolving this question remains a major open problem.

This talk will discuss progress in understanding the Minimum Circuit Size Problem for restricted classes of circuits. While Masek proved in the late 1970s that minimizing DNF formulas is NP-hard, extending this to depth-3 formulas with AND and OR gates was open. We show that minimizing depth-d formulas given a function's truth table is NP-hard for all d >= 2, under quasipolynomial-time randomized reductions. Our approach is based on a method to "lift" depth-d formula lower bounds to depth-(d+1). Combined with existing depth-hierarchy theorems, this method also implies the existence of a function with a 2^{c_d(n^\epsilon)} additive gap between its depth-d and depth-(d+1) formula complexity, where c_d > 0 is some constant that depends on d and \epsilon > 0 is some constant *independent* of d.