Logic seminar
However, the limits constructed in the theory of flag algebras are of an algebraic/syntactic nature, due to the minimalist nature of the theory (as opposed to the more semantic/geometric nature of the previous ad hoc limits in the literature). In this talk, I will present the semantic/geometric counterpart to flag algebras: the theory of theons. I will also give a few examples, both of proofs within continuous combinatorics and of applications that use its tools.
This talk is based on joint work with Alexander A. Razborov.
I will show how to prove the exponential lower bound directly for intuitionistic implicational logic without any translations, using a simple argument based on an efficient version of Kleene's slash. Apart from Frege, it applies directly to sequent calculus and (dag-like) natural deduction, also obviating the need to translate these proof systems to Frege.
I will also mention a tangentially related problem about classical Frege systems.
One motivation for this work comes from persistent claims by Gordeev and Haeusler, who purport to show that all intuitionistic implicational tautologies have polynomial-size dag-like natural deduction proofs, implying NP = PSPACE. Their claims are false as they contradict the above-mentioned exponential lower bounds (and, in fact, also exponential lower bounds on constant-depth Frege), but for a non-specialist, understanding this requires tracking down multiple papers and some reading between the lines. Our argument consolidates all proof-theoretic components of the lower bound into one simple proof, depending only on the Alon-Boppana circuit lower bound.
Weighted Polynomial Calculus is a natural generalization of MaxSAT-Resolution and weighted Resolution that manipulates polynomials with coefficients in a finite field and weights in either N or Z. We show the soundness and completeness of these systems via an algorithmic procedure.
Weighted Polynomial Calculus, with weights in N and coefficients in F_2, is able to prove efficiently that Tseitin formulas on a connected graph are minimally unsatisfiable. Using weights in Z, it also proves efficiently that the Pigeonhole Principle is minimally unsatisfiable.
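For concreteness (this sketch is mine, not part of the talk), here is the pigeonhole principle as a CNF, together with a brute-force check that the small instance PHP^3_2 is minimally unsatisfiable; the encoding conventions (edge variables as pairs, literals as variable/polarity pairs) are illustrative choices.

```python
from itertools import combinations, product

def php_clauses(n):
    """CNF of PHP^{n+1}_n: n+1 pigeons, n holes.
    Variable (i, j) means 'pigeon i sits in hole j'; a literal is a pair
    (variable, polarity), true when the variable takes that polarity."""
    clauses = [[((i, j), 1) for j in range(n)] for i in range(n + 1)]  # each pigeon placed
    for j in range(n):                            # each hole holds at most one pigeon
        for i1, i2 in combinations(range(n + 1), 2):
            clauses.append([((i1, j), 0), ((i2, j), 0)])
    return clauses

def satisfiable(variables, clauses):
    """Brute-force SAT check over all assignments (small instances only)."""
    for bits in product([0, 1], repeat=len(variables)):
        val = dict(zip(variables, bits))
        if all(any(val[v] == p for v, p in cl) for cl in clauses):
            return True
    return False

n = 2
variables = [(i, j) for i in range(n + 1) for j in range(n)]
clauses = php_clauses(n)
assert not satisfiable(variables, clauses)       # PHP^3_2 is unsatisfiable ...
for k in range(len(clauses)):                    # ... and minimally so:
    assert satisfiable(variables, clauses[:k] + clauses[k + 1:])
```

Minimal unsatisfiability means that dropping any single clause yields a satisfiable formula, which is exactly what the final loop verifies for n = 2.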
I. $S^1_2$ proves that there is a function $g\in E$ hard to approximate by subexponential-size circuits.
II. (Learning from the non-existence of OWFs.) $S^1_2$ proves that a p-time reduction transforms circuits breaking one-way functions to p-size circuits learning p-size circuits over the uniform distribution, with membership queries.
Here, $S^1_2$ is Buss's theory of bounded arithmetic formalizing p-time reasoning.
Further, we show that any of the following assumptions implies that $P \ne NP$ if EF is not p-bounded:
1. (Feasible anticheckers.) $S^1_2$ proves that a p-time function generates anticheckers for SAT.
2. (Witnessing $NP\not\subseteq P/poly$.) $S^1_2$ proves that a p-time function witnesses an error of each p-size circuit which fails to solve SAT.
3. (OWFs from $NP\not\subseteq P/poly$ & hardness of E.) Condition I holds and $S^1_2$ proves that a p-time reduction transforms circuits breaking one-way functions to p-size circuits computing SAT.
The results generalize to stronger theories and proof systems.
This is joint work with Rahul Santhanam.
Based on joint work with Jan Pich.
In recent decades, a particularly interesting case study has been provided by RT^2_2: Ramsey's theorem for pairs and two colours. RT^2_2 and its variants are anomalous in the sense of not being equivalent to any of the typical set existence axioms considered in reverse mathematics, or to each other. Characterizing the first-order part of RT^2_2 is a long-standing and highly interesting open problem. Methods used to make progress on this problem have typically been based on some combination of recursion theory and nonstandard models of arithmetic, with the latter playing an increasingly important role.
I will briefly discuss some major results on the first-order part of RT^2_2 obtained over the years and then talk about some more recent work related to the problem that I have been involved in (jointly with Marta Fiori Carones, Kasia Kowalik, Keita Yokoyama and Tin Lok Wong).
On the surface, QBF proof systems seem harder to compare to one another, since they vary considerably in how they deal with quantification. In particular, there is a vast difference between the theory systems that generalise propositional logic, the proof systems that capture the process of CDCL solving, and the proof systems that capture expansion- and CEGAR-based solvers. Many results do formally show the incomparability between different systems that we suspect. However, there are a few non-obvious cases where a simulation is possible. In this talk we will examine the landscape of simulations in QBF, looking at the results of the last decade as well as our own recent work on simulations based on strategy extraction and propositionalisation. We will interpret the theoretical and practical importance of these results.
We show that Zhuk's algorithm for algebras with linear congruence can be formalized in the theory V^1. Thus, using known methods of proof complexity, Zhuk's algorithm for negative instances of the problem can be augmented by extra information: it not only rejects X that cannot be homomorphically mapped into A, but also produces a certificate - a short extended Frege (EF) propositional proof - that this rejection is correct.
A "Goodstein walk" is any sequence obtained by following the standard Goodstein process but arbitrarily choosing how each element of the sequence is represented. As it turns out, any Goodstein walk is finite, and indeed the longest possible Goodstein walk is given by the standard normal forms. In this talk we sketch a proof of this fact and discuss how a similar idea can be applied to other variants of the Goodstein process, including those based on the Ackermann function.
Joint work with A. Weiermann
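Since the standard process is easy to state algorithmically, here is a minimal sketch (function names and conventions are mine, not from the talk): `bump` rewrites n in hereditary base b, replaces every occurrence of b by b+1, and evaluates the result; the standard Goodstein sequence then alternately bumps the base and subtracts 1. Termination is guaranteed, but lengths grow enormously (already the sequence starting at 4 is astronomically long, so only tiny starting values are feasible to run).

```python
def bump(n, b):
    """Write n in hereditary base b, replace every b by b+1, return the value."""
    if n == 0:
        return 0
    result, power = 0, 0
    while n > 0:                 # n = sum of digit * b**power; recurse on exponents
        n, digit = divmod(n, b)
        if digit:
            result += digit * (b + 1) ** bump(power, b)
        power += 1
    return result

def goodstein(m):
    """Standard Goodstein sequence starting at m in base 2 (canonical forms)."""
    seq, b = [m], 2
    while seq[-1] > 0:
        seq.append(bump(seq[-1], b) - 1)
        b += 1
    return seq

print(goodstein(3))  # prints [3, 3, 3, 2, 1, 0]
```

A Goodstein walk, as in the talk, would replace the canonical hereditary representation computed inside `bump` by an arbitrary choice of representation at each step.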
This talk is based on joint work with Noah Fleming, Mika Goos, and Stefan Grosser.
In ABRW04 the authors suggested the "functional encoding" of the considered statement for the Nisan--Wigderson generator, which allows the introduction of "local" extension variables, and gave a lower bound on the length of Resolution proofs when the number of extension variables is bounded by n^2 (where n is the number of inputs of the PRG).
In this talk, we discuss a "heavy width" measure for Resolution that allows us to show a lower bound on the length of Resolution proofs of the considered statement for the Nisan--Wigderson generator with a superpolynomial number of local extension variables. This solves one of the open problems from ABRW04.
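For orientation, the Nisan--Wigderson construction itself is easy to state: each output bit applies a fixed base function to the seed restricted to one set of a combinatorial design (equal-size sets with small pairwise intersections). A toy sketch, with parity standing in for the hard base function and a hand-picked design (both choices are mine and purely illustrative):

```python
def nw_generator(seed, design, f):
    """Output bit i is f applied to the seed bits indexed by the i-th design set."""
    return [f([seed[j] for j in S]) for S in design]

parity = lambda bits: sum(bits) % 2

# A tiny design: 3-element subsets of {0,...,6}, pairwise intersections of size 1.
design = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5)]
seed = [1, 0, 1, 1, 0, 0, 1]     # 7 seed bits stretched to 4 output bits
print(nw_generator(seed, design, parity))  # prints [0, 0, 0, 1]
```

The statements studied in the talk assert, roughly, that a given string lies outside the range of such a generator; the extension variables of the functional encoding name the values computed at the sets S.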
1. Provable learning: P proves efficiently that p-size circuits are learnable by subexponential-size circuits over the uniform distribution with membership queries.
2. Provable automatability: P proves efficiently that P is automatable by non-uniform circuits on propositional formulas expressing p-size circuit lower bounds.
Here, P is sufficiently strong and well-behaved if I.-III. hold: I. P p-simulates Jeřábek's system WF (which strengthens the Extended Frege system EF by a surjective weak pigeonhole principle); II. P satisfies some basic properties of standard proof systems which p-simulate WF; III. P proves efficiently for some Boolean function h that h is hard on average for circuits of subexponential size. For example, if III. holds for P=WF, then Items 1 and 2 are equivalent for P=WF.
If there is a function h in NE\cap coNE which is hard on average for circuits of size 2^{n/4}, for all sufficiently large n, then there is an explicit propositional proof system P satisfying properties I.-III., i.e. the equivalence of Items 1 and 2 holds for P.
This is joint work with Rahul Santhanam.
This talk is based on a joint work with Maria Luisa Bonet.
For each k >= 1, there is a language L in NP such that circuits for L of size O(n^k) cannot be learned in deterministic polynomial time with access to n^o(1) EQs.
We employ such results to investigate the (un)provability of non-uniform circuit upper bounds (e.g., Is NP contained in SIZE[n^3]?) in theories of bounded arithmetic. Some questions of this form have been addressed in recent papers of Krajicek-Oliveira (2017), Muller-Bydzovsky (2020), and Bydzovsky-Krajicek-Oliveira (2020) via a mixture of techniques from proof theory, complexity theory, and model theory. In contrast, by extracting computational information from proofs via a direct translation to LEARN-uniformity, we establish robust unprovability theorems that unify, simplify, and extend nearly all previous results. In addition, our lower bounds against randomized LEARN-uniformity yield unprovability results for theories augmented with the dual weak pigeonhole principle, such as APC^1 (Jerabek, 2007), which is known to formalize a large fragment of modern complexity theory.
Finally, we make precise potential limitations of theories of bounded arithmetic such as PV (Cook, 1975) and Jerabek's theory APC^1, by showing unconditionally that these theories cannot prove statements like "NP is not contained in BPP, and NP is contained in ioP/poly", i.e., that NP is uniformly "hard" but non-uniformly "easy" on infinitely many input lengths. In other words, if we live in such a complexity world, then this cannot be established feasibly.
We use the approach to give an explicit sequence of CNF formulas phi_n such that VNP \neq VP iff there are no polynomial-size IPS proofs for the formulas phi_n. This provides the first natural equivalence between proof complexity lower bounds and standard algebraic complexity lower bounds. Our proof of this fact uses the implication from IPS lower bounds to algebraic complexity lower bounds due to Grochow and Pitassi together with a diagonalization argument: the formulas phi_n themselves assert the non-existence of short IPS proofs for formulas encoding VNP \neq VP at a different input length. Our result also has meta-mathematical implications: it gives evidence for the difficulty of proving strong lower bounds for IPS within IPS.
For any strong enough propositional proof system R, we define the *iterated R-lower bound formulas*, which inductively assert the non-existence of short R proofs for formulas encoding the same statement at a different input length, and propose them as explicit hard candidates for the proof system R. We observe that this hypothesis holds for Resolution following recent results of Atserias and Muller and of Garlik, and give evidence in favour of it for other proof systems.
Joint work with Rahul Santhanam
It is a classical result that CT, the theory CT^- with the full induction scheme for formulae containing the truth predicate, is not conservative over PA. In fact, it proves the so-called Global Reflection Principle, which says that every arithmetical sentence provable in PA is true (in the sense of the truth predicate). On the other hand, a theorem of Kotlarski, Krajewski, and Lachlan shows that CT^- is conservative.
In the presentation, we will discuss the Tarski Boundary programme, which tries to delineate the conservative and nonconservative extensions of the compositional truth theory CT^-. It turns out that many natural nonconservative truth axioms are exactly equivalent to Global Reflection, which is arguably the minimal "natural" truth theory that is not conservative over PA (though we can now systematically produce theories of truth whose strength lies below Global Reflection).
On the other hand, Enayat and Visser invented a remarkably effective method, relying on a neat model-theoretic argument, for demonstrating that certain extensions of CT^- are in fact conservative. It turns out that the uniformity of this method in fact allows one to obtain a much more fine-grained understanding of the proof-theoretic properties of CT^-. In particular, one can show that proofs of arithmetical statements from the compositional truth axioms can be reduced to proofs in Peano Arithmetic by a PTIME algorithm.
This is the first half of a two-part talk. It will be devoted mostly to the conservativity of truth theories, while the reducibility issues will be discussed mostly in the second part.
More specifically, suppose kappa is an uncountable regular cardinal (typically kappa can be taken to be the size of the reals). We will review a variety of compactness principles, such as the tree property, stationary reflection, Rado's conjecture, etc., which claim that if all parts of size < kappa of a given structure of size kappa have some property, so does the whole structure.
We will discuss basic models in which such principles hold, consistency strength of these principles, implications between the principles and other hypotheses (such as CH), and some consequences.
The technique known so far to prove depth lower bounds for Stabbing Planes is a generalization of the one used for the Cutting Planes proof system. In this work we introduce two new approaches to proving length/depth lower bounds in Stabbing Planes: one relying on Sperner's Theorem, which works for the Pigeonhole principle and Tseitin contradictions over the complete graph; and a second, proving the lower bound for Tseitin contradictions over a grid graph, which uses a result on essential coverings of the Boolean cube by linear polynomials, in turn relying on Alon's Combinatorial Nullstellensatz.
(Joint work with Stefan Dantchev, Nicola Galesi and Abdul Ghani.)
First, we demonstrate our approach for two-party communication and prove an exponential lower bound on the size of tree-like Res(+) refutations of the Perfect matching principle. Then we apply our approach to k-party communication complexity in the NOF model and obtain a lower bound on the randomized k-party communication complexity of Search(BPHP) w.r.t. some natural partition of the variables, where BPHP is the bit pigeonhole principle. In particular, this lower bound implies that the bit pigeonhole principle requires exponential tree-like Th(k) proofs, where Th(k) is the semantic proof system operating with polynomial inequalities of degree at most k.
The talk is based on joint work with Artur Riazanov, available as ECCC report https://eccc.weizmann.ac.il/report/2020/184/.
This talk is based on two recent papers, joint with Joshua Blinkhorn and Meena Mahajan (LICS'20) and with Benjamin Boehm (ITCS'21).
In this talk, I will survey the technology, with an emphasis on formal mathematics. I will then discuss aspects of interactive theorem proving that may be of interest to the working logician, and places where a better theoretical understanding can lead to progress. Specifically, I'll discuss the need for practical foundations, search procedures, decision procedures, and proof systems.
Joint work with Yaroslav Alekseev, Dima Grigoriev and Edward A. Hirsch. See full version here: https://eccc.weizmann.ac.il/report/2019/142/download
On the other hand, for typical logics of bounded width, the EF and SF systems are p-equivalent, and they are in a certain sense p-equivalent to the classical EF system, for which we do not know any lower bounds, and even candidate hard tautologies are scarce.
The main ingredients in all the lower bounds above (as well as similar bounds for superintuitionistic logics) were variants of the feasible disjunction property (DP), playing a role similar to feasible interpolation in classical proof systems.
This raises the question what happens for logics with bounded branching but unbounded width, such as K4BB_k, S4BB_k, and GLBB_k. In this talk, I will show that while these logics apparently do not have feasible DP as such, they have DP (and more general extension rules) decidable in ZPP^{NP}, and for k=2, even in promise-S_2^P. Moreover, one can also adapt Hrubes's form of "monotone interpolation" to this setting.
As a consequence, we prove superpolynomial speedup of SF over EF assuming PSPACE \ne NP (possibly exponential under a stronger hypothesis) for all modal logics included in GLBB_2 or S4GrzBB_2, and more generally, for all logics L that, in a certain sense, admit all finite binary trees (with reflexive or irreflexive points as convenient) to appear inside L-frames.
We prove that Tseitin formulas Ts(G), for an undirected graph G, require proofs of size 2^{tw(G)^{\Omega(1/d)}} in depth-d Frege systems for d < K log n / log log n, where tw(G) is the treewidth of G and K is a constant. If time allows, we will also prove tightness of the lower bound: for large enough d, there are depth-d Frege proofs of Ts(G) of size 2^{tw(G)^{O(1/d)}} poly(|Ts(G)|). This is joint work with Dmitry Itsykson, Artur Riazanov and Anastasia Sofronova.
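For readers unfamiliar with the formulas: Ts(G) assigns a variable to every edge and a 0/1 charge to every vertex, and asserts that around each vertex the incident edge variables sum to the charge mod 2; the formula is unsatisfiable exactly when the total charge is odd. A minimal CNF sketch (encoding conventions are mine, purely illustrative):

```python
from itertools import product

def tseitin_clauses(edges, charge):
    """CNF of Ts(G): for each vertex v, the XOR of its incident edge
    variables must equal charge[v].  A literal is a pair (edge, polarity)."""
    clauses = []
    for v, c in charge.items():
        incident = [e for e in edges if v in e]
        for bits in product([0, 1], repeat=len(incident)):
            if sum(bits) % 2 != c:  # rule out each wrong-parity local assignment
                clauses.append([(e, 1 - b) for e, b in zip(incident, bits)])
    return clauses

def satisfiable(variables, clauses):
    """Brute-force SAT check over all assignments (small instances only)."""
    for bits in product([0, 1], repeat=len(variables)):
        val = dict(zip(variables, bits))
        if all(any(val[v] == p for v, p in cl) for cl in clauses):
            return True
    return False

triangle = [(0, 1), (1, 2), (0, 2)]
assert not satisfiable(triangle, tseitin_clauses(triangle, {0: 1, 1: 0, 2: 0}))  # odd total charge
assert satisfiable(triangle, tseitin_clauses(triangle, {0: 0, 1: 0, 2: 0}))      # even total charge
```

The unsatisfiable case follows from double counting: summing the vertex constraints counts every edge twice, so the charges must sum to 0 mod 2 for any satisfying assignment to exist.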
Pavel Pudlak, Neil Thapen, Jan Krajíček
organizers