One way of deciding an equational theory is through normalizing its terms.
Take any equational theory with judgments of the form
$$\Gamma \vdash M = N : A.$$
This judgment states that the terms $M$ and $N$, which are both of type $A$, are equal. The context $\Gamma$ may contain typing assumptions, e.g. of the form $x : B$, which are needed to infer that $M$ and $N$ have type $A$. Thus, the terms $M$ and $N$ can be open, i.e. have free variables. This kind of judgment subsumes most type theories, e.g. the simply typed $\lambda$-calculus.
Given terms $M$ and $N$, we would like an algorithm that decides whether the judgment $\Gamma \vdash M = N : A$ is derivable in this equational theory. One way to do so is to normalize the terms of the calculus: we do so by mapping every term to some mathematical object that canonically represents its equivalence class under equality. Deciding whether $M$ is equal to $N$ then amounts to computing whether $M$ and $N$ are mapped to the same representative.
We can capture this axiomatically. Let
$$\mathsf{Tm}(\Gamma, A)$$
be the set of terms of type $A$ in context $\Gamma$. Moreover, let $\mathsf{Nf}(\Gamma, A)$ be an abstract set of "normal forms" of type $A$ in context $\Gamma$. A normalizer is then a collection of functions
$$\mathsf{nf}_{\Gamma, A} : \mathsf{Tm}(\Gamma, A) \to \mathsf{Nf}(\Gamma, A)$$
which map each term to a normal form, such that
$$\Gamma \vdash M = N : A \iff \mathsf{nf}_{\Gamma, A}(M) \equiv \mathsf{nf}_{\Gamma, A}(N) \qquad (\ast)$$
where $\equiv$ refers to syntactic identity (possibly up to $\alpha$-equivalence).
That is: to decide whether two terms are equal, it suffices to compute their normal forms and check whether they are syntactically identical. This is decidable precisely when both (a) computing the normal forms and (b) deciding syntactic equality of normal forms are computable.
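As a concrete illustration (our own sketch, not part of the original development), here is a minimal Haskell example of deciding an equational theory by normalization: the theory of monoid expressions over named variables, with associativity and the unit laws as the equations. The normal forms are flat lists of variables, and two expressions are provably equal exactly when their normal forms coincide. All names (`Tm`, `nf`, `equal`) are our own.

```haskell
-- A minimal sketch: deciding the equational theory of monoids by normalization.
-- Terms are built from a unit, variables and a binary multiplication; the
-- equations are associativity and the two unit laws.  A normal form is the
-- list of variables read off from left to right.
module MonoidNf where

data Tm = Unit | Var String | Mul Tm Tm
  deriving Show

-- Normal forms: flat lists of variables.
type Nf = [String]

-- The normalizer: flatten the tree, dropping units.
nf :: Tm -> Nf
nf Unit      = []
nf (Var x)   = [x]
nf (Mul s t) = nf s ++ nf t

-- Deciding equality: compare normal forms syntactically.
equal :: Tm -> Tm -> Bool
equal s t = nf s == nf t

-- Example: (x * 1) * (y * z)  equals  x * (y * (z * 1)).
example :: Bool
example =
  equal (Mul (Mul (Var "x") Unit) (Mul (Var "y") (Var "z")))
        (Mul (Var "x") (Mul (Var "y") (Mul (Var "z") Unit)))
```

Here both computing `nf` and comparing lists are clearly computable, so equality in this small theory is decidable.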
If the normal forms are a subset of terms, i.e. if $\mathsf{Nf}(\Gamma, A) \subseteq \mathsf{Tm}(\Gamma, A)$, the following axioms suffice:

1. If $\Gamma \vdash M = N : A$ then $\mathsf{nf}_{\Gamma, A}(M) \equiv \mathsf{nf}_{\Gamma, A}(N)$.
2. $\Gamma \vdash M = \mathsf{nf}_{\Gamma, A}(M) : A$ for every term $M$.
3. $\mathsf{nf}_{\Gamma, A}(\mathsf{nf}_{\Gamma, A}(M)) \equiv \mathsf{nf}_{\Gamma, A}(M)$, i.e. the normalizer is idempotent.

We have used $\equiv$ for syntactic equality. These axioms imply $(\ast)$. The left-to-right implication is simply Axiom 1. From right to left: if $\mathsf{nf}_{\Gamma, A}(M) \equiv \mathsf{nf}_{\Gamma, A}(N)$, we have the chain of equalities $M = \mathsf{nf}_{\Gamma, A}(M) \equiv \mathsf{nf}_{\Gamma, A}(N) = N$ by using Axiom 2 twice. Axiom 3 is superfluous, but usually satisfied by well-behaved notions of normal form.
Note, however, that normal forms need not be a subset of terms: complicated type theories may require mathematically complicated notions of normal form.
A type theory satisfies the canonicity property exactly when there are normalization functions at closed terms of ground type.
In terms of the previous definition, we only need a collection of functions
$$\mathsf{nf}_{\cdot, A} : \mathsf{Tm}(\cdot, A) \to \mathsf{Nf}(\cdot, A)$$
for each ground type $A$ (where $\cdot$ denotes the empty context), satisfying $(\ast)$, where
$$\mathsf{Nf}(\cdot, A)$$
consists of the closed introduction forms of ground type. For example, $\mathsf{Nf}(\cdot, \mathbb{N})$ consists of the numerals, i.e. all terms of the form $\mathsf{succ}(\mathsf{succ}(\cdots \mathsf{zero} \cdots))$.
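To make the canonicity statement concrete, here is a hedged sketch using a toy fragment of our own (zero, successor and addition over the natural numbers, not a language from the text): the normalizer sends every closed term to a numeral.

```haskell
-- A toy fragment illustrating canonicity at the ground type of natural
-- numbers (an assumed mini-language): every closed term evaluates to a numeral.
module CanonicitySketch where

data Tm = Zero | Suc Tm | Plus Tm Tm
  deriving Show

-- Numerals: the closed introduction forms of the natural-number type.
data Numeral = Z | S Numeral
  deriving (Eq, Show)

-- The normalizer at the empty context and ground type of natural numbers.
nf :: Tm -> Numeral
nf Zero       = Z
nf (Suc t)    = S (nf t)
nf (Plus s t) = add (nf s) (nf t)
  where
    add Z     n = n
    add (S m) n = S (add m n)

-- Example: nf (Plus (Suc Zero) (Suc (Suc Zero))) = S (S (S Z)).
example :: Numeral
example = nf (Plus (Suc Zero) (Suc (Suc Zero)))
```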
Normalization is the property of an abstract rewriting system that every term can eventually be rewritten to an irreducible term, called a normal form. Strong normalization, or termination, requires that every sequence of rewrites leads to a normal form, whereas weak normalization only requires that at least one such sequence exists.
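The distinction is easiest to see on a tiny abstract rewriting system. In the following sketch (our own example), the object `A` rewrites both to itself and to the irreducible object `B`: `A` is weakly normalizing, since the one-step sequence `A -> B` reaches a normal form, but not strongly normalizing, since `A -> A -> A -> ...` never terminates.

```haskell
-- A two-object abstract rewriting system (illustrative only):
--   A -> A    (a looping rewrite)
--   A -> B    (a rewrite to an irreducible object)
module ArsSketch where

data Obj = A | B
  deriving (Eq, Show)

-- All one-step rewrites of an object.
step :: Obj -> [Obj]
step A = [A, B]
step B = []

-- Normal forms are the objects with no outgoing rewrites.
isNormal :: Obj -> Bool
isNormal = null . step
```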
Normalization by evaluation refers to a 'semantic' technique for normalizing terms. In short, terms are interpreted in some sort of semantic structure. This structure is chosen cleverly so as to enable the extraction of normal forms from its elements. See the main article for more.
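For a flavour of the technique, here is a minimal sketch of normalization by evaluation for the untyped $\lambda$-calculus with de Bruijn indices (our own simplified illustration; the cited work treats the typed calculus and its categorical semantics). Terms are evaluated into a semantic domain of Haskell functions and neutral values, and normal forms are read back by a `quote` function. On non-normalizing untyped terms this normalizer need not terminate.

```haskell
-- A minimal sketch of normalization by evaluation for the untyped λ-calculus.
module NbESketch where

-- Syntax: de Bruijn indices.
data Tm = Var Int | Lam Tm | App Tm Tm
  deriving (Eq, Show)

-- Semantic values: closures for λ, plus "neutral" values headed by a
-- free variable (represented with de Bruijn *levels*).
data Val = VLam (Val -> Val) | VNeutral Neutral
data Neutral = NVar Int | NApp Neutral Val

-- Evaluation: interpret a term in an environment of values.
-- (The lookup is partial; it is total on well-scoped terms.)
eval :: [Val] -> Tm -> Val
eval env (Var i)   = env !! i
eval env (Lam b)   = VLam (\v -> eval (v : env) b)
eval env (App f a) = apply (eval env f) (eval env a)

apply :: Val -> Val -> Val
apply (VLam f)     v = f v
apply (VNeutral n) v = VNeutral (NApp n v)

-- Reification: read a normal-form term back from a value.  The Int counts
-- how many binders we are under, converting levels back to indices.
quote :: Int -> Val -> Tm
quote d (VLam f)     = Lam (quote (d + 1) (f (VNeutral (NVar d))))
quote d (VNeutral n) = quoteNe d n

quoteNe :: Int -> Neutral -> Tm
quoteNe d (NVar lvl) = Var (d - lvl - 1)
quoteNe d (NApp n v) = App (quoteNe d n) (quote d v)

-- The normalizer for closed terms: evaluate, then read back.
nf :: Tm -> Tm
nf t = quote 0 (eval [] t)

-- Example: nf ((λx. x) (λx. λy. x)).
example :: Tm
example = nf (App (Lam (Var 0)) (Lam (Lam (Var 1))))
```

Running `example` in GHCi yields `Lam (Lam (Var 1))`, the normal form of the applied term.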
Fiore, Marcelo. 2002. ‘Semantic Analysis of Normalisation by Evaluation for Typed Lambda Calculus’. In Proceedings of the 4th ACM SIGPLAN International Conference on Principles and Practice of Declarative Programming (PPDP ’02), 26–37. Pittsburgh, PA, USA: ACM Press. https://doi.org/10.1145/571157.571161.