\section{An arithmetic for the polynomial hierarchy}\label{sect:arithmetic}
%Our base language is $\{ 0, \succ{} , + , \times, \smsh , |\cdot| , \leq \}$. 
Our base language consists of constant and function symbols $\{ 0, \succ{} , + , \times, \smsh , |\cdot|, \hlf{\cdot} \}$ and predicate symbols 
 $\{\leq, \safe, \normal \}$. The function symbols are interpreted in the intuitive way, with $|x|$ denoting the length of $x$ seen as a binary string, and $x \smsh y$ denoting $2^{|x||y|}$, i.e.\ a string of length $|x||y|+1$.
 We may write $\succ{0}(x)$ for $2\cdot x$, $\succ{1}(x)$ for $\succ{}(2\cdot x)$, and $\pred (x)$ for $\hlf{x}$, to better relate to the $\bc$ setting.
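
For a concrete illustration of these interpretations: taking $x = 5$ and $y = 3$, written $101$ and $11$ in binary, we have $|x| = 3$ and $|y| = 2$, so that
\[
x \smsh y \;=\; 2^{|x||y|} \;=\; 2^{6} \;=\; 64,
\]
which is $1000000$ in binary, a string of length $|x||y|+1 = 7$.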
 
We consider formulas of classical first-order logic, over $\neg$, $\cand$, $\cor$, $\forall$, $\exists$, along with usual shorthands and abbreviations. 
%The formula $A \cimp B$ will be a notation for $\neg A \cor B$.
%We will also use as shorthand notations:
%$$ (s=t) = (s\leq t) \cand (t\leq s), \quad (s\neq t) = \neg(s=t).$$ 
\textit{Atomic formulas} are of the form $(s\leq t)$, for terms $s,t$.
 We will assume, without loss of generality, that formulas are in \textit{De Morgan normal form}, that is to say that negation occurs only on atomic formulas and that no subformula of the form $\neg \neg A$ occurs.

 For $N \in \{ \safe , \normal \}$ we write $\exists x^{N} . A$ and $\forall x^{N} . A$ for $\exists x . (N (x) \cand A)$ and $\forall x . (N (x) \cimp A)$ respectively; when $N$ is $\safe$ we refer to these as \emph{safe} quantifiers.
 We also write $\exists x^\normal \leq |t| . A$ for $\exists x^\normal . ( x \leq |t| \cand A )$ and $\forall x^\normal \leq |t|. A $ for $\forall x^\normal. (x \leq |t| \cimp A )$. We refer to these as \emph{sharply bounded} quantifiers, as in bounded arithmetic.
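
Unfolding these abbreviations, a sharply bounded universal quantifier, for instance, stands for
\[
\forall x^\normal \leq |t| . A \quad = \quad \forall x . \big( \normal(x) \cimp ( x \leq |t| \cimp A ) \big) ,
\]
i.e.\ a quantifier relativised to $\normal$ whose range is moreover bounded by a term of the form $|t|$.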

    
The theories we introduce are directly inspired by bounded arithmetic, namely the theories $S^i_2$.
We include a similar set of axioms for direct comparison, although in our setting these are not minimal.
Indeed, $\smsh$ can be defined using induction in our setting.

The $\basic$ axioms of bounded arithmetic give the inductive definitions and interrelationships of the various function symbols.
They also state the fundamental algebraic properties, i.e.\ that $(0,\succ{ } )$ is a free algebra, and, in our setting, they also provide certain `typing' information for our function symbols based on their $\bc$ specification, with safe inputs ranging over $\safe$ and normal ones over $\normal$.
For instance, we include the following axioms:\footnote{Later some of these will be redundant, for instance $\safe (u \times x) $ and $\safe (u \smsh v)$ are consequences of $\Sigma^\safe_i$-$\pind$, but $\safe (x + y)$ is not since both inputs are safe and so we cannot induct.}
\[
\begin{array}{l}
\forall u^\normal. \safe(u) \\
\safe (0) \\
\forall x^\safe . \safe (\succ{} x) \\
\forall x^\safe . 0 \neq \succ{} (x) \\
\forall x^\safe , y^\safe . (\succ{} x = \succ{} y \cimp x = y) \\
\forall x^\safe . (x = 0 \cor \exists y^\safe.\  x = \succ{} y   )  
\end{array}
\qquad
\begin{array}{l}
\forall x^\safe, y^\safe . \safe(x+y)\\
\forall u^\normal, x^\safe . \safe(u\times x) \\
\forall u^\normal , v^\normal . \safe (u \smsh v)\\
\forall x^\safe .\safe(\hlf{x})\\
\forall u^\normal .\safe(|u|)
\end{array}
\]
Notice that we have $\normal \subseteq \safe$.
A full list of our $\basic$ axioms can be found in Appendix \ref{appendix:arithmetic}.

    
%\begin{definition}
%	[Basic theory]
%	The theory $\basic$ consists of the axioms from Appendix \ref{appendix:arithmetic}.
%	\end{definition}

%Notation: if $\vec t=t_0,\dots, t_k$, we will denote as $\safe(\vec t)$ the sequence of formulas $\safe(t_0),\dots, \safe(t_k)$. Similarly for $\normal(\vec t)$.

%\begin{definition}
%[Derived functions and notations]
%We write $1,2,3,\dots$ for the terms $\succ{} 0, \succ{} \succ{} 0, \succ{} \succ{} \succ{} 0 \dots$, and frequently omit the $\times$ symbol.
%We define the functions $\succ 0 x , \succ 1 x$ as $2 x$ and $2x +1$ respectively.
%
%Need $bit$, $\beta$ , $\pair{}{}{}$.
%\end{definition}
%
%(Here use a variation of S12 with sharply bounded quantifiers and safe quantifiers)
%
%Use base theory + sharply bounded quantifiers.

\begin{definition}
[Quantifier hierarchy]
$\Sigma^\safe_0 = \Pi^\safe_0 $ is the set of formulae whose only quantifiers are sharply bounded and where $\safe , \normal$ do not occur free.
We define $\Sigma^\safe_{i+1}$ as the closure of $\Pi^\safe_i $ under $\cor, \cand $, safe existentials and sharply bounded quantifiers.
We define $\Pi^\safe_{i+1}$ as the closure of $\Sigma^\safe_i $ under $\cor, \cand $, safe universals and sharply bounded quantifiers.
\end{definition}

    
Notice that the criterion that $\safe$ does not occur free is not a real restriction, since we can write $\safe (x)$ as $\exists y^\safe . y=x$.
The criterion is purely to give an appropriate definition of the hierarchy above.
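
To illustrate the hierarchy: $y=x$ is quantifier-free and so lies in $\Sigma^\safe_0 = \Pi^\safe_0$; the formula $\exists y^\safe . y=x$ used just above is then in $\Sigma^\safe_1$, being obtained by a safe existential; and prefixing a safe universal, as in $\forall x^\safe . \exists y^\safe . y=x$, yields a $\Pi^\safe_2$ formula.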

    
%\anupam{Collection principles for prenexing? Otherwise need to add closure under sharply bounded quantifiers.}

\begin{definition}
[Polynomial induction]
\label{def:polynomialinduction}
The \emph{polynomial induction} axiom schema, $\pind$, consists of the following axioms, for each formula $A(x)$:
\[
\left(
A(0) 
\cand (\forall x^{\normal} . ( A(x) \cimp A(\succ{0} x) ) )
\cand  (\forall x^{\normal} . ( A(x) \cimp A(\succ{1} x) ) ) 
\right)
\cimp  \forall x^{\normal} . A(x)
\]
For a class $\Xi$ of formulae, $\cax{\Xi}{\pind}$ denotes the set of $\pind$ axioms where $A(x) \in \Xi$. 

%We write $I\Xi$ to denote the theory consisting of $\basic$ and $\cax{\Xi}{\ind}$.
\end{definition}
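
To see the scheme at work, consider a concrete value such as $x = 1101$ in binary: reading its bits from the most significant one, the two inductive premisses take us from $A(0)$ to $A(1)$, $A(11)$, $A(110)$ and finally $A(1101)$, so that (assuming the intermediate values are normal) $A(x)$ is reached in $|x|$ steps, one per bit of $x$, rather than in $x$ steps as with the usual induction scheme.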

    
\begin{definition}\label{def:ariththeory}
Define the theory $\arith^i$ to consist of the $\basic$ axioms, $\cpind{\Sigma^\safe_i } $,
%\begin{itemize}
%	\item $\basic$;
%	\item $\cpind{\Sigma^\safe_i } $:
%\end{itemize}
and a particular inference rule, called $\rais$, for closed formulas of the form $\forall \vec x^\normal . \exists y^\safe . A$:
\[
 \dfrac{\proves \forall \vec x^\normal . \exists  y^\safe .  A }{ \proves \forall \vec x^\normal .\exists y^\normal . A}
\]
We will write $\arith^i \proves A$ if $A$ is a logical consequence of the axioms of $\arith^i$, in the usual way.
\end{definition}
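
As a simple example of the use of $\rais$: $\basic$ proves $\forall u^\normal . \safe(|u|)$, and hence (using reflexivity of $\leq$) $\forall u^\normal . \exists y^\safe . y = |u|$, so a single application of $\rais$ yields $\forall u^\normal . \exists y^\normal . y = |u|$.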
%\patrick{I think in the definition of  $\arith^i$ we should impose that the formulas considered are \textit{integer positive}, that is to say that the only negative occurrences of atoms $\safe(t)$, $\normal(t)$ are those occurring in $\forall^{\safe}$ and $\forall^{\normal}$.  Indeed I don't think this can be just proved to be a consequence of a kind of 'normal form' of proofs, as we had discussed (see sect 4.4)}
%
%\anupam{In induction,for inductive cases, need $u\neq 0$ for $\succ 0$ case.}

\begin{remark}
Notice that $\rais$ looks similar to the $K$ rule from the calculus for the modal logic $S4$, and indeed we believe there is a way to present these results in such a framework.
However, the proof theory of first-order modal logics, in particular free-cut elimination results used for witnessing, is not sufficiently developed to carry out such an exposition.
\end{remark}

\subsection{Graphs of some basic functions}\label{sect:graphsbasicfunctions}
%Todo: $+1$,  

We say that a function $f$ is \emph{represented} by a formula $A_f$ if we can prove a formula of the form $\forall ^{\normal} \vec u, \forall ^{\safe} \vec x, \exists^{\safe}! y. A_f$. The variables $\vec u$ and $\vec x$ can respectively be thought of as the normal and safe arguments of $f$, and $y$ as the output of $f(\vec u; \vec x)$.

Let us give a few examples of basic functions representable in $\arith^1$:
\begin{itemize}
\item Projection $\pi_k^{m,n}$: $\forall^{\normal} u_1, \dots, u_m,  \forall^{\safe} x_{m+1}, \dots, x_{m+n}, \exists^{\safe} y. y=x_k$.
\item Successor $\succ{}$: $\forall^{\safe} x, \exists^{\safe} y. y=x+1$. The formulas for the binary successors $\succ{0}$, $\succ{1}$ and the constant functions $\epsilon^k$ are defined in a similar way.
\item Predecessor $p$: $\forall^{\safe} x, \exists^{\safe} y. (x=\succ{0} y \cor x=\succ{1} y \cor (x=\epsilon \cand y= \epsilon)) .$
\item Conditional $C$: 
$$\begin{array}{ll}
\forall^{\safe} x, y_{\epsilon}, y_0, y_1, \exists^{\safe} y. & ((x=\epsilon)\cand (y=y_{\epsilon})\\
 & \cor( \exists^{\safe}z.(x=\succ{0}z) \cand (y=y_0))\\
 & \cor( \exists^{\safe}z.(x=\succ{1}z) \cand (y=y_1)))
\end{array}
$$
\item Addition:
$\forall^{\safe} x, y,  \exists^{\safe} z. z=x+y$. 
\item Prefix:

  This is a predicate that we will need for technical reasons, in the completeness proof. The predicate $\pref(k,x,y)$ holds if the prefix of string $x$
  of length $k$ is $y$ (see the worked example after this list). It is defined as:
  $$\begin{array}{ll}
\exists^{\safe} z, \exists^{\normal} l\leq |x|. & (k+l= |x|\\
 & \cand \; |z|=l\\
 & \cand \; x=y\smsh(2,l)+z)
\end{array}
$$
\item The following predicates will also be needed in proofs:

$\zerobit(x,k)$ (resp.\ $\onebit(x,k)$) holds iff the $k$th bit of $x$ is 0 (resp.\ 1). The predicate $\zerobit(x,k)$ can be defined by:
$$ \exists^{\safe} y.(\pref(k,x,y) \cand C(y,0,1,0)).$$
\end{itemize}
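
To illustrate the last two definitions informally: take $x = 1011$ in binary, so $|x| = 4$, and $k = 2$. The prefix of $x$ of length $2$ is $y = 10$: choosing $l = 2$ and $z = 11$ we have $k + l = |x|$ and $|z| = l$, with $x$ being $y$ shifted left by $l$ bits followed by $z$, so $\pref(2,x,y)$ holds. Since this prefix ends in the bit $0$, the conditional in the definition of $\zerobit$ is satisfied and $\zerobit(x,2)$ holds, i.e.\ the second bit of $x$ (read from the most significant end) is indeed $0$.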

    
\subsection{A sequent calculus presentation}
\begin{figure}
	\[
	\small
	\begin{array}{l}
	\begin{array}{cccc}
	%\vlinf{\lefrul{\bot}}{}{p, \lnot{p} \seqar }{}
	%& \vlinf{\id}{}{p \seqar p}{}
	%& \vlinf{\rigrul{\bot}}{}{\seqar p, \lnot{p}}{}
	%& \vliinf{\cut}{}{\Gamma, \Sigma \seqar \Delta , \Pi}{ \Gamma \seqar \Delta, A }{\Sigma, A \seqar \Pi}
	\vlinf{id}{}{\Gamma, p \seqar p, \Delta }{}
	& \vliinf{cut}{}{\Gamma \seqar \Delta }{ \Gamma \seqar \Delta, A }{\Gamma, A \seqar \Delta}
	&&
	\\
	\noalign{\bigskip}
	%\noalign{\bigskip}
	\vliinf{\lefrul{\cor}}{}{\Gamma, A \cor B \seqar \Delta}{\Gamma , A \seqar \Delta}{\Gamma, B \seqar \Delta}
	&
	\vlinf{\lefrul{\cand}}{}{\Gamma, A\cand B \seqar \Delta}{\Gamma, A , B \seqar \Delta}
	&
	%\vlinf{\lefrul{\laand}}{}{\Gamma, A\laand B \seqar \Delta}{\Gamma, B \seqar \Delta}
	%\quad
	\vlinf{\rigrul{\cor}}{}{\Gamma \seqar \Delta, A \cor B}{\Gamma \seqar \Delta, A, B}
	&
	%\vlinf{\rigrul{\laor}}{}{\Gamma \seqar \Delta, A\laor B}{\Gamma \seqar \Delta, B}
	%\quad
	\vliinf{\rigrul{\cand}}{}{\Gamma \seqar \Delta, A \cand B }{\Gamma \seqar \Delta, A}{\Gamma \seqar \Delta, B}
	\\
	\noalign{\bigskip}
	\vlinf{\lefrul{\neg}}{}{\Gamma, \neg A \seqar \Delta}{\Gamma \seqar A, \Delta}
	&
	\vlinf{\rigrul{\neg}}{}{\Gamma \seqar \neg A, \Delta}{\Gamma, A \seqar  \Delta}
	&
	&
	%\vliinf{\lefrul{\cimp}}{}{\Gamma, A \cimp B \seqar \Delta}{\Gamma \seqar A, \Delta}{\Gamma, B \seqar \Delta}
	%&
	%
	%\vlinf{\rigrul{\cimp}}{}{\Gamma \seqar \Delta, A \cimp B}{\Gamma, A \seqar \Delta,  B}
	\\
	\noalign{\bigskip}
	%\text{Structural:} & & & \\
	%\noalign{\bigskip}
	%\vlinf{\lefrul{\wk}}{}{\Gamma, A \seqar \Delta}{\Gamma \seqar \Delta}
	%&
	\vlinf{\lefrul{\cntr}}{}{\Gamma, A \seqar \Delta}{\Gamma, A, A \seqar \Delta}
	%&
	%\vlinf{\rigrul{\wk}}{}{\Gamma \seqar \Delta, A }{\Gamma \seqar \Delta}
	&
	\vlinf{\rigrul{\cntr}}{}{\Gamma \seqar \Delta, A}{\Gamma \seqar \Delta, A, A}
	&
	&
	\\
	\noalign{\bigskip}
	\vlinf{\lefrul{\exists}}{}{\Gamma, \exists x . A(x) \seqar \Delta}{\Gamma, A(a) \seqar \Delta}
	&
	\vlinf{\lefrul{\forall}}{}{\Gamma, \forall x. A(x) \seqar \Delta}{\Gamma, A(t) \seqar \Delta}
	&
	\vlinf{\rigrul{\exists}}{}{\Gamma \seqar \Delta, \exists x . A(x)}{ \Gamma \seqar \Delta, A(t)}
	&
	\vlinf{\rigrul{\forall}}{}{\Gamma \seqar \Delta, \forall x . A(x)}{ \Gamma \seqar \Delta, A(a) } \\
	%\noalign{\bigskip}
	% \vliinf{mix}{}{\Gamma, \Sigma \seqar \Delta , \Pi}{ \Gamma \seqar \Delta}{\Sigma \seqar \Pi} &&&
	\end{array}
	\end{array}
	\]
	\caption{Sequent calculus rules, where $p$ is atomic, $t$ is a term and the eigenvariable $a$ does not occur free in $\Gamma$ or $\Delta$.}\label{fig:sequentcalculus}
\end{figure}

    
In order to carry out witness extraction for proofs of $\arith^i$, it will be useful to work with a \emph{sequent calculus} representation of proofs.
Such systems exhibit strong normal forms, notably `free-cut free' proofs, and so are widely used for the `witness function method' for extracting programs from proofs.
We introduce the required technical material here only briefly, due to space constraints.

A \emph{sequent} is an expression $\Gamma \seqar \Delta$ where $\Gamma$ and $\Delta$ are multisets of formulas. 
The inference rules of the sequent calculus $\LK$ are displayed in Fig.~\ref{fig:sequentcalculus}.
Of course, we have the following:
\begin{proposition}
	$A$ is a first-order theorem if and only if there is an $\LK$ proof of $\seqar A$.
\end{proposition}

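
For instance, already in $\LK$ the sequent $\seqar p \cor \neg p$ has a three-step proof: an identity step gives $p \seqar p$, a right negation step gives $\seqar \neg p , p$, and a right disjunction step concludes $\seqar p \cor \neg p$.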
%We consider \emph{systems} of `nonlogical' rules extending this sequent calculus, which we write as follows,
% \[
% \begin{array}{cc}
%    \vlinf{(R)}{}{ \Gamma , \Sigma' \seqar \Delta' , \Pi  }{ \{\Gamma , \Sigma_i \seqar \Delta_i , \Pi \}_{i \in I} }
%\end{array}
%\]
% where, in each rule $(R)$, $I$ is a finite possibly empty set (indicating the number of premises) and we assume the following conditions and terminology:
% \begin{enumerate}
% \item In $(R)$ the formulas of $\Sigma', \Delta'$  are called \textit{principal}, those of $\Sigma_i, \Delta_i$ are called \textit{active}, and those of   
%$ \Gamma,  \Pi$ are called \textit{context formulas}. 
%\item Each rule $(R)$ comes with a list $a_1$, \dots, $a_k$ of eigenvariables such that each $a_j$ appears in exactly one $\Sigma_i, \Delta_i$ (so in some active formulas of exactly one premise)  and does not appear in  $\Sigma', \Delta'$ or $ \Gamma,  \Pi$.
%    \item A system $\mathcal{S}$ of rules must be closed under substitutions of free variables by terms (where these substitutions do not contain the eigenvariables $a_j$ in their domain or codomain).  
%   \end{enumerate}
% 
%%The distinction between modal and nonmodal formulae in $(R)$ induces condition 1
% Conditions 2 and 3 are standard requirements for nonlogical rules, independently of the logical setting, cf.\ \cite{Beckmann11}. Condition 2 reflects the intuitive idea that, in our nonlogical rules, we often need a notion of \textit{bound} variables in the active formulas (typically for induction rules), for which we rely on eigenvariables. Condition 3 is needed for our proof system to admit elimination of cuts on quantified formulas.
%
%%\begin{definition}
%%[Polynomial induction]
%%The \emph{polynomial induction} axiom schema, $\pind$, consists of the following axioms,
%%\[
%%A(0) 
%%\cimp (\forall x^{\normal} . ( A(x) \cimp A(\succ{0} x) ) )
%%\cimp  (\forall x^{\normal} . ( A(x) \cimp A(\succ{1} x) ) ) 
%%\cimp  \forall x^{\normal} . A(x)
%%\]
%%for each formula $A(x)$.
%%
%%For a class $\Xi$ of formulae, $\cax{\Xi}{\pind}$ denotes the set of induction axioms when $A(x) \in \Xi$. 
%%
%%We write $I\Xi$ to denote the theory consisting of $\basic$ and $\cax{\Xi}{\ind}$.
%%\end{definition}
%
%As an example any axiom can be represented by such a nonlogical rule $(R)$, with no premise ($I=\emptyset$), $\Delta'$ equal to the axiom and $\Gamma=\Sigma'=\Pi$.

We extend this purely logical calculus with certain non-logical rules and initial sequents corresponding to our theories $\arith^i$.
285
 For instance the axiom $\pind$ of Def. \ref{def:polynomialinduction} is represented by the following rule:
286
\begin{equation}
287
\label{eqn:ind-rule}
288
\small
289
\vliinf{\pind}{}{ \normal(t) , \Gamma , A(0) \seqar A(t), \Delta }{ \normal(a) , \Gamma, A(a) \seqar A(\succ{0} a) , \Delta }{ \normal(a) , \Gamma, A(a) \seqar A(\succ{1} a) , \Delta  }
290
\end{equation}
291
where $t$ varies over arbitrary terms and the eigenvariable $a$ does not occur in the lower sequent.
292
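
Read with $\Gamma$ and $\Delta$ empty, the two premisses of this rule correspond to the inductive hypotheses of the axiom formulation in Def.~\ref{def:polynomialinduction}, with the eigenvariable $a$ playing the role of the universally quantified $x$, and the conclusion corresponds to its consequent, so that instances of the axiom and applications of the rule are interderivable in the usual way.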
%
Similarly the $\rais$ inference rule of Def.~\ref{def:ariththeory} is represented by the nonlogical rule,
 \[
 \begin{array}{cc}
    \vlinf{\rais}{}{  \normal(t_1), \dots, \normal(t_k) \seqar  \exists  y^\normal .  A }{  \normal(t_1), \dots, \normal(t_k) \seqar \exists  y^\safe .  A}
\end{array}
\]

    
%\patrick{In fact, I think we rather need the following nonlogical rule, which implies the previous one but is I guess more general:
%\[
% \begin{array}{cc}
%    \vlinf{\rais}{}{  \normal(t_1), \dots, \normal(t_k) \seqar  \normal(t) }{  \normal(t_1), \dots, \normal(t_k) \seqar \safe(t)}
%\end{array}
%\]
%}
%
Finally, the $\basic$ axioms are represented by designated initial sequents.
For instance here are some initial sequents corresponding to some of the $\basic$ axioms:
\[
\small
\begin{array}{l}
\begin{array}{cccc}
\vlinf{}{}{\seqar \safe (0)}{}&
\vlinf{}{}{\safe(t) \seqar \safe(\succ{} t)}{}&
\vlinf{}{}{ \safe (t)   \seqar 0 \neq \succ{} t}{} &
\vlinf{}{}{\safe (s) , \safe (t)  , \succ{} s = \succ{} t\seqar s = t }{}\\
\end{array}
\\
\vlinf{}{}{\safe (t) \seqar t = 0 \cor \exists y^\safe . t = \succ{} y  }{}
\qquad
\vlinf{}{}{\safe(s), \safe(t) \seqar \safe(s+t) }{}\\
\vlinf{}{}{\normal (s), \safe(t) \seqar \safe(s \times t)  }{}
\qquad
\vlinf{}{}{\normal (s), \normal(t) \seqar \safe(s \smsh t)  }{}\\
\vlinf{}{}{\normal(t) \seqar \safe(t)  }{}
\end{array}
\]

The sequent system for $\arith^i$ extends $\LK$ by the $\basic$,  $\cpind{\Sigma^\safe_i } $ and $\rais$ nonlogical rules.
 Naturally, by completeness, we have that $\arith^i \proves A$ if and only if there is a sequent proof of $\seqar A$.
 In fact, by \emph{free-cut elimination} results \cite{Takeuti87,Cook:2010:LFP:1734064} we may actually say something much stronger.

 Let us say that a sorting $(\vec u ; \vec x)$ of the variables $\vec u , \vec x$ is \emph{compatible} with a formula $A$ if each variable of $\vec x$ occurs hereditarily safe with respect to the $\bc$-typing of terms, i.e.\ never under $\smsh$ or $|\cdot|$, nor to the right of $\times$.

\begin{theorem}
	[Typed variable normal form]
	\label{thm:normal-form}
	If $\arith^i\proves  A$ then there is a $\arith^i$ sequent proof $\pi$ of $A$ such that each line has the form:
	\[
	\normal(\vec u), \safe (\vec x), \Gamma \seqar \Delta
	\]
	where $\Gamma \seqar \Delta$ contains only $\Sigma^\safe_i$ formulae for which the sorting $(\vec u ;\vec x)$ is compatible.
\end{theorem}

Strictly speaking, we must alter some of the sequent rules a little to arrive at this normal form. For instance, the $\pind$ rule would have $\normal(\vec u)$ in its lower sequent rather than $\normal (t(\vec u))$; the latter is a consequence of the former already in $\basic$.
The proof of this result also relies on heavy use of the structural rules, contraction and weakening, to ensure that we always have a complete and compatible typing on the LHS of a sequent. This is similar to what is done in \cite{OstrinWainer05}, where a $G3$-style calculus is used to manage such structural manipulations.