# Introduction to Metamath

The official Metamath specification has 211 pages. This is a condensed version, which omits the introduction to logic, the history of logic, and the definitions and statements that mathematicians and computer scientists are already familiar with. Explore it in MM Tool.

Having a formal (precise) system for mathematical theories and proofs is useful in many ways. A formal proof can be mechanically verified, which eliminates the risk of errors. And if we want a computer to work with our theories and proofs, having an exact, precise system is the only way to do it.

## Core of Metamath

Metamath is a language for defining a **formal grammar**. We define terminal and nonterminal symbols, initial words and production rules. It is also possible to include, inside a Metamath file, a word together with a sequence of rules that generates it. For such a word, we can mechanically verify that it is really generated by our grammar (by processing the sequence of production rules, substituting symbols and comparing the final word with the claimed word). We do this using a `$p` statement (described below). Since Metamath is a language for defining grammars, there is no math included: Metamath does not know what "+" or "=" or a bracket is.

## Metamath for Math

Mathematicians use the Metamath language to represent mathematics formally. We can define axioms and inference rules as production rules of a formal grammar. The language (the set of words) generated by this grammar corresponds to the set of theorems provable inside our theory (valid facts). And thanks to `$p` statements, we can include theorems and formal proofs of these theorems. The grammar approach to proving may seem cumbersome, as proofs are usually quite long, but it elegantly deals with infinitely many axioms (e.g. induction in PA), many possible inference rules (Modus Ponens, resolution) etc.

Since Metamath was made to be used for mathematics, the content of Metamath files is usually described in terms of predicate logic rather than in terms of grammar theory.

## Metamath syntax

Let's describe the parsing algorithm. Only printable ASCII characters can be used in a Metamath file. Let's look at a Metamath file as a sequence of tokens. A token consists of non-white-space characters; tokens are separated by one or more white-space characters (space, newline, tab). The following file contains 12 tokens:

```
$c hello hi bar 0 \phi -> $. $( ( __ $)
```

Another example: `1 + 1` contains 3 tokens, while `1+1` contains just one token.

There are two types of tokens:

- tokens that start with `$`: Metamath tokens
- tokens that don't start with `$`: user-defined tokens

The previous example contains 4 Metamath tokens; the rest are user-defined tokens. Let's describe the meaning of Metamath tokens. Just as we "grouped" characters into tokens, we can group tokens into statements. A statement usually starts and ends with a Metamath token.
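Grouping characters into tokens and classifying them can be sketched in a few lines of Python (a hypothetical helper for illustration, not part of any official Metamath tool):

```python
def tokenize(text):
    """Split a Metamath source into tokens.

    A token is a maximal run of non-white-space characters;
    any run of spaces, tabs or newlines separates tokens.
    """
    return text.split()


def is_metamath_token(token):
    """Metamath tokens start with '$'; everything else is user-defined."""
    return token.startswith("$")


source = "$c hello hi bar 0 \\phi -> $. $( ( __ $)"
tokens = tokenize(source)
print(len(tokens))                                     # 12 tokens
print(sum(1 for t in tokens if is_metamath_token(t)))  # 4 Metamath tokens
```

For example, `tokenize("1 + 1")` yields three tokens, while `tokenize("1+1")` yields one.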

### $( statement

```
$( tok tok tok $)
```

These statements are used to insert **comments**. They may occur anywhere in a Metamath file. Now suppose that we have removed all comments from a Metamath file. Each remaining statement is one of the statements below.

Some statements are identified by a **label token**. Label tokens may contain only letters, digits, periods, hyphens and underscores. Labeled statements must have unique labels.

### $c statement

```
$c -> + - ( ) $.
```

These statements are used to define **terminal symbols** of a grammar. Terminal symbols are called **constants** in the Metamath community (not to be confused with constants from logic). Constants don't have to be defined all in one statement; you can define them anywhere in a Metamath file.

### $v statement

```
$v A B C a b c phi \psi $.
```

These statements are used to define **nonterminal symbols** of a grammar. Nonterminal symbols are called **variables** in the Metamath community (not to be confused with variables from logic). Variables don't have to be defined all in one statement; you can define them anywhere in a Metamath file.

### $d statement

```
$d vToken vToken ... vToken $.
```

**Disjoint variable restrictions**. vToken must be a previously defined variable. For any two variables A, B from a $d statement: when A and B occur together in some word, the sub-words derived from A and from B must never have any variables in common.

These restrictions provide a convenient way to express variable substitutability. E.g. when we have a word A < B and we don't want both A and B to be replaced with C (which would result in C < C), we use `$d A B $.`. A grammar with $d restrictions can be converted into an equivalent grammar with no restrictions, but it becomes more complex.
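As a sketch of how a verifier might check these restrictions (the data representation below is an assumption for illustration, not Metamath syntax): given a substitution, for every `$d` pair the words substituted for the two variables must share no variables.

```python
def check_disjoint(d_pairs, subst, variables):
    """Check disjoint variable restrictions under a substitution.

    d_pairs:   set of frozensets {A, B} collected from $d statements
    subst:     dict mapping a variable to the word (list of tokens)
               that replaces it; unsubstituted variables map to themselves
    variables: set of all tokens declared as variables
    """
    for a, b in (tuple(pair) for pair in d_pairs):
        vars_a = {t for t in subst.get(a, [a]) if t in variables}
        vars_b = {t for t in subst.get(b, [b]) if t in variables}
        if vars_a & vars_b:          # shared variable: restriction violated
            return False
    return True


pairs = {frozenset({"A", "B"})}      # from "$d A B $."
vs = {"A", "B", "C", "D"}
print(check_disjoint(pairs, {"A": ["C"], "B": ["C"]}, vs))  # False: C < C
print(check_disjoint(pairs, {"A": ["C"], "B": ["D"]}, vs))  # True
```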

## Types and Hypotheses

In Metamath, each word has a so-called **type**. When a variable is substituted by a word, both must have the same type. Types allow us to have multiple variables with the same name (but different types). A grammar with types can be converted into an equivalent grammar without types (and without type restrictions), but it would become much larger.

The following 2 kinds of statements are called **hypotheses**.

### $f statement

```
labelToken $f cToken vToken $.
```

**Floating hypothesis**. cToken must be a previously defined constant; vToken must be a previously defined variable. It is used simply to define the type of a variable.

### $e statement

```
labelToken $e cToken token token ... token $.
```

**Essential hypothesis**. cToken must be a previously defined constant.

## Assertions

### $a statement

```
labelToken $a cToken token token ... token $.
```

**Axiomatic assertion**. cToken must be a previously defined constant. It defines a word that belongs to the language "for no reason" (like an axiom).

### $p statement

```
labelToken $p cToken token token ... token $= labelToken labelToken ... labelToken $.
```

**Provable assertion**. cToken must be a previously defined constant. It defines a word that belongs to the language "with a reason". The reason (the "proof") is a sequence of rewriting rules (between `$=` and `$.`) that generates the word (between `$p` and `$=`).

Each assertion contains a word (which may or may not have to be "proved"). For each assertion, there is a sequence of **mandatory hypotheses** related to it: all `$e` statements that occur before the assertion, plus all `$f` statements that occur before the assertion whose variable occurs in the assertion word (or in any of the mandatory `$e` hypotheses). Mandatory hypotheses must be "fulfilled" before the assertion word can be fulfilled (this will be clarified in the description of the proof syntax).
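The rule above can be sketched in Python (a hypothetical data model chosen for illustration; real verifiers use richer structures):

```python
def mandatory_hypotheses(assertion_word, active_hyps, variables):
    """Return the mandatory hypotheses of an assertion, in order of appearance.

    assertion_word: list of tokens of the assertion's word
    active_hyps:    active hypotheses in order, each (label, kind, word)
                    where kind is "$e" or "$f"; for "$f" the word is
                    [type_constant, variable]
    variables:      set of all tokens declared as variables
    """
    # Variables occurring in the assertion word or in any active $e hypothesis.
    used = {t for t in assertion_word if t in variables}
    for _, kind, word in active_hyps:
        if kind == "$e":
            used.update(t for t in word if t in variables)
    # Every $e hypothesis is mandatory; a $f hypothesis is mandatory
    # iff its variable is actually used.
    return [(label, kind, word) for label, kind, word in active_hyps
            if kind == "$e" or word[1] in used]
```

With the Modus Ponens example from the section below, the mandatory hypotheses of `mp $a |- B $.` come out as (wa, wb, mp1, mp2); wc is skipped because C occurs nowhere.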

### ${ statement

```
${ statement statement ... statement $}
```

**Scoping statement**. These statements have a classic "bracket syntax": they can create a tree structure on other Metamath statements, but are usually used only to depth 1. Statements `$c, $v, $d, $f, $e` defined inside a scope become inactive after the end of the scope. Only assertions (`$a, $p`) remain active. We use scopes when we want to quickly prove some statement (define new variables, disjoint restrictions, `$e` hypotheses) but don't want to "pollute" the environment (we don't need those disjoint restrictions or `$e` hypotheses in the future). Non-assertion statements become forgotten, while assertions remain true even after the scope.

## Example

Let's write a grammar for propositional logic:

```
$c wff $. $( we use this constant as a type of formula (well-formed formula) $)
$c ( ) ! -> $. $( brackets, negation, implication $)
$v A B C $. $( variables to be used in formulas $)
wa $f wff A $. $( hypothesis "wa" which says that A is a well-formed formula $)
wb $f wff B $.
wc $f wff C $.
$( The following assertions define the rules to create formulas $)
wng $a wff ! A $. $( mandatory hypotheses are (wa) $)
wim $a wff ( A -> B ) $. $( mandatory hypotheses are (wa, wb) $)
$c |- $. $( this constant will be used as a type of provable formula $)
$( Schemes of axioms of propositional logic $)
a1 $a |- ( A -> ( B -> A ) ) $.
a2 $a |- ( ( A -> ( B -> C ) ) -> ( ( A -> B ) -> ( A -> C ) ) ) $.
a3 $a |- ( ( ! A -> ! B ) -> ( B -> A ) ) $.
$( new scope; otherwise mp1, mp2 will become mandatory for all upcoming assertions $)
${
  $( Modus Ponens rule of inference $)
  mp1 $e |- A $.
  mp2 $e |- ( A -> B ) $.
  mp $a |- B $. $( mandatory hypotheses of "mp" are (wa, wb, mp1, mp2) $)
$}
```

## Proof syntax

A proof is defined in a $p statement as a sequence of labels. Labels may refer to previous labeled statements (`$f, $e, $a, $p`). The sequence of labels corresponds to the generation of the proved word in **Reverse Polish Notation**. The main idea is that there is a stack: operands are put on top of the stack; operators take several operands from the stack and put a new operand (the result) on top.

Proof verification consists of going through the sequence of labels and editing the stack accordingly.

When a hypothesis (`$f`, `$e`) occurs, we put its content as a new word on top of the stack.

```
wa wa mp1 ---> [ wff A ; wff A ; |- A ]
```

When an assertion (`$a`, `$p`) occurs, it works as an operator: it takes several items from the stack, substitutes them into the assertion word, and puts a new word on top of the stack. Let's look at the details.

The assertion that we ran into has K mandatory hypotheses. We remove the K top-most items from the stack. First, we check variable types: the first constant of each word taken from the stack must be equal to the first constant of the corresponding mandatory hypothesis. `$f` mandatory hypotheses define the variables to be substituted; the corresponding words from the stack are the values that will replace these variables. Next, we check the disjoint variable restrictions. Then the substitution is performed on the `$e` hypotheses (if there are any) and the results are compared with the stack values. If the values are equal, the substitution is performed on the assertion word and the result is put on top of the stack. The proof is valid if, at the end, a single word remains on the stack and it is equal to the proof word.
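This stack step can be sketched in Python (a minimal sketch under an assumed token-list representation; disjoint variable checking is omitted for brevity):

```python
def apply_assertion(stack, mand_hyps, assertion_word):
    """Apply an assertion as an RPN operator on the proof stack.

    stack:          list of words, each a list of tokens (type constant first)
    mand_hyps:      the assertion's K mandatory hypotheses, in order,
                    each (kind, word); for "$f" the word is [type, variable]
    assertion_word: the assertion's word as a list of tokens
    """
    k = len(mand_hyps)
    operands = stack[len(stack) - k:]   # K top-most items
    del stack[len(stack) - k:]

    # Derive the substitution from $f hypotheses, checking types (first token).
    subst = {}
    for (kind, word), operand in zip(mand_hyps, operands):
        if word[0] != operand[0]:
            raise ValueError("type mismatch")
        if kind == "$f":
            subst[word[1]] = operand[1:]  # value that replaces the variable

    def substitute(word):
        # Keep the leading type constant; replace variables in the rest.
        out = [word[0]]
        for tok in word[1:]:
            out.extend(subst[tok] if tok in subst else [tok])
        return out

    # Substituted $e hypotheses must match the popped stack values exactly.
    for (kind, word), operand in zip(mand_hyps, operands):
        if kind == "$e" and substitute(word) != operand:
            raise ValueError("essential hypothesis mismatch")

    stack.append(substitute(assertion_word))
    return stack


# Applying wim (hypotheses wa, wb) to a stack holding wff B and wff C:
stack = [["wff", "B"], ["wff", "C"]]
apply_assertion(stack,
                [("$f", ["wff", "A"]), ("$f", ["wff", "B"])],
                ["wff", "(", "A", "->", "B", ")"])
print(stack)  # [['wff', '(', 'B', '->', 'C', ')']]
```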

Let's prove a syntactic fact: that ( A -> ( B -> C ) ) is a formula. We take the proof labels one by one and edit the stack accordingly.

```
formula1 $p wff ( A -> ( B -> C ) ) $= wa wb wc wim wim $.

1) [ wff A ]
2) [ wff A ; wff B ]
3) [ wff A ; wff B ; wff C ]
4) [ wff A ; wff ( B -> C ) ]
   // wim has mandatory hypotheses (wa, wb).
   // Their types (wff, wff) correspond with the types of the last two items on the stack.
   // The substitution is B for A, C for B. We substitute into wim = wff ( A -> B ),
   // and put the result wff ( B -> C ) on the stack. Note that the type (first token)
   // is omitted during substitution; there is no wff ( wff B -> wff C ).
5) [ wff ( A -> ( B -> C ) ) ]
   // Same thing. Types correspond.
   // We substitute A for A, ( B -> C ) for B. We have processed all labels,
   // the stack word is equal to the proof word, so the proof is correct.
```

Let's prove a more interesting statement: that ((A->A)->(A->A)) is a provable formula.

```
formula2 $p |- ( ( A -> A ) -> ( A -> A ) ) $=
  wa wa wa wim wim wa wa wim wa wa wim wim wa wa a1 wa wa wa a2 mp $.

 5) [ wff ( A -> ( A -> A ) ) ]
    // processing wa wa wa wim wim
12) [ wff ( A -> ( A -> A ) ) ;
      wff ( ( A -> A ) -> ( A -> A ) ) ]
    // next 7 labels
15) [ wff ( A -> ( A -> A ) ) ;
      wff ( ( A -> A ) -> ( A -> A ) ) ;
      |- ( A -> ( A -> A ) ) ]
    // next 3 labels
19) [ wff ( A -> ( A -> A ) ) ;
      wff ( ( A -> A ) -> ( A -> A ) ) ;
      |- ( A -> ( A -> A ) ) ;
      |- ( ( A -> ( A -> A ) ) -> ( ( A -> A ) -> ( A -> A ) ) ) ]
    // next 4 labels
20) [ |- ( ( A -> A ) -> ( A -> A ) ) ]
    // "mp" has 4 mandatory hypotheses (wa, wb, mp1, mp2)
    // with types (wff, wff, |-, |-), which correspond to the types of the stack items.
    // The substitution is ( A -> ( A -> A ) ) for A, ( ( A -> A ) -> ( A -> A ) ) for B.
    // We substitute into mp1, mp2; the results are equal to stack items 3 and 4.
    // We substitute into the mp word; the result is equal to the proof word,
    // so the proof is correct.
```

Note that `$p` assertions are used in proofs just like `$a` assertions: we use them as rewriting rules and don't care about the actual proof. Long proofs can be broken into smaller proofs of sub-theorems, e.g. for the previous theorem:

```
th1 $p wff ( A -> ( A -> A ) ) $= wa wa wa wim wim $.
th2 $p wff ( ( A -> A ) -> ( A -> A ) ) $= wa wa wim wa wa wim wim $.
th3 $p |- ( A -> ( A -> A ) ) $= wa wa a1 $.
th4 $p |- ( ( A -> ( A -> A ) ) -> ( ( A -> A ) -> ( A -> A ) ) ) $= wa wa wa a2 $.
formula2 $p |- ( ( A -> A ) -> ( A -> A ) ) $= wa th1 wa th2 wa th3 wa th4 mp $.
```

`$a` assertions are used to define math syntax (the recursive definition of a formula), axioms of a theory (a1, a2, a3) and inference rules (modus ponens), but also to define new objects and symbols that are abbreviations of more complex statements. Let's define disjunction $A \vee B$, which is an abbreviation for $\neg A \rightarrow B$.

```
$c \/ $. $( a constant for disjunction (\vee) $)
wor $a wff ( A \/ B ) $. $( syntax of usage, infix notation $)
${ $( one-way rewrite $)
  hy-Vee1 $e |- ( ! A -> B ) $.
  df-Vee1 $a |- ( A \/ B ) $.
$}
${ $( opposite-way rewrite $)
  hy-Vee2 $e |- ( A \/ B ) $.
  df-Vee2 $a |- ( ! A -> B ) $.
$}
```

Writing formal proofs requires a lot of practice. Most of the work of the Metamath community is focused on developing set.mm, an enormous library of proofs that starts with logic and the ZF axioms and builds up the rest of common mathematics. It is also available in MM Tool.