Mathematical logic
Mathematical logic is a subfield of mathematics exploring the applications of formal logic to mathematics. Topically, mathematical logic bears close connections to metamathematics, the foundations of mathematics, and theoretical computer science.[1] The unifying themes in mathematical logic include the study of the expressive power of formal systems and the deductive power of formal proof systems.
Mathematical logic is often divided into the fields of set theory, model theory, recursion theory, and proof theory. These areas share basic results on logic, particularly first-order logic, and definability. In computer science (particularly in the ACM Classification) mathematical logic encompasses additional topics not detailed in this article; see Logic in computer science for those.
Since its inception, mathematical logic has both contributed to, and been motivated by, the study of foundations of mathematics. This study began in the late 19th century with the development of axiomatic frameworks for geometry, arithmetic, and analysis. In the early 20th century it was shaped by David Hilbert's program to prove the consistency of foundational theories. Results of Kurt Gödel, Gerhard Gentzen, and others provided partial resolution to the program, and clarified the issues involved in proving consistency. Work in set theory showed that almost all ordinary mathematics can be formalized in terms of sets, although there are some theorems that cannot be proven in common axiom systems for set theory. Contemporary work in the foundations of mathematics often focuses on establishing which parts of mathematics can be formalized in particular formal systems (as in reverse mathematics) rather than trying to find theories in which all of mathematics can be developed.
Subfields and scope
The Handbook of Mathematical Logic makes a rough division of contemporary mathematical logic into four areas:
set theory
model theory
recursion theory, and
proof theory and constructive mathematics (considered as parts of a single area).
Each area has a distinct focus, although many techniques and results are shared among multiple areas. The borderlines amongst these fields, and the lines separating mathematical logic and other fields of mathematics, are not always sharp. Gödel's incompleteness theorem marks not only a milestone in recursion theory and proof theory, but has also led to Löb's theorem in modal logic. The method of forcing is employed in set theory, model theory, and recursion theory, as well as in the study of intuitionistic mathematics.
The mathematical field of category theory uses many formal axiomatic methods, and includes the study of categorical logic, but category theory is not ordinarily considered a subfield of mathematical logic. Because of its applicability in diverse fields of mathematics, mathematicians including Saunders Mac Lane have proposed category theory as a foundational system for mathematics, independent of set theory. These foundations use toposes, which resemble generalized models of set theory that may employ classical or nonclassical logic.
History
Mathematical logic emerged in the mid-19th century as a subfield of mathematics independent of the traditional study of logic (Ferreirós 2001, p. 443). Before this emergence, logic was studied with rhetoric, through the syllogism, and with philosophy. The first half of the 20th century saw an explosion of fundamental results, accompanied by vigorous debate over the foundations of mathematics.
Early history
Further information: History of logic
Theories of logic were developed in many cultures in history, including China, India, Greece and the Islamic world. In 18th-century Europe, attempts to treat the operations of formal logic in a symbolic or algebraic way had been made by philosophical mathematicians including Leibniz and Lambert, but their labors remained isolated and little known.
19th century
In the middle of the nineteenth century, George Boole and then Augustus De Morgan presented systematic mathematical treatments of logic. Their work, building on work by algebraists such as George Peacock, extended the traditional Aristotelian doctrine of logic into a sufficient framework for the study of foundations of mathematics (Katz 1998, p. 686).
Charles Sanders Peirce built upon the work of Boole to develop a logical system for relations and quantifiers, which he published in several papers from 1870 to 1885. Gottlob Frege presented an independent development of logic with quantifiers in his Begriffsschrift, published in 1879, a work generally considered as marking a turning point in the history of logic. Frege's work remained obscure, however, until Bertrand Russell began to promote it near the turn of the century. The two-dimensional notation Frege developed was never widely adopted and is unused in contemporary texts.
From 1890 to 1905, Ernst Schröder published Vorlesungen über die Algebra der Logik in three volumes. This work summarized and extended the work of Boole, De Morgan, and Peirce, and was a comprehensive reference to symbolic logic as it was understood at the end of the 19th century.
Foundational theories
Concerns that mathematics had not been built on a proper foundation led to the development of axiomatic systems for fundamental areas of mathematics such as arithmetic, analysis, and geometry.
In logic, the term arithmetic refers to the theory of the natural numbers. Giuseppe Peano (1889) published a set of axioms for arithmetic that came to bear his name (Peano axioms), using a variation of the logical system of Boole and Schröder but adding quantifiers. Peano was unaware of Frege's work at the time. Around the same time Richard Dedekind showed that the natural numbers are uniquely characterized by their induction properties. Dedekind (1888) proposed a different characterization, which lacked the formal logical character of Peano's axioms. Dedekind's work, however, proved theorems inaccessible in Peano's system, including the uniqueness of the set of natural numbers (up to isomorphism) and the recursive definitions of addition and multiplication from the successor function and mathematical induction.
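The flavor of these recursive definitions can be illustrated with a short sketch. The following Python functions (the names succ, add, and mul are illustrative, not drawn from Peano's or Dedekind's notation, and ordinary integers merely stand in for the abstract natural-number structure) define addition and multiplication from the successor function alone:

def succ(n):
    # The successor operation, the only arithmetic primitive assumed here.
    return n + 1

def add(m, n):
    # Addition by recursion on n: m + 0 = m and m + succ(k) = succ(m + k).
    # (The expression n - 1 is only a Python stand-in for recursion on succ.)
    if n == 0:
        return m
    return succ(add(m, n - 1))

def mul(m, n):
    # Multiplication by recursion on n: m * 0 = 0 and m * succ(k) = (m * k) + m.
    if n == 0:
        return 0
    return add(mul(m, n - 1), m)

assert add(2, 3) == 5 and mul(2, 3) == 6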
In the mid-19th century, flaws in Euclid's axioms for geometry became known (Katz 1998, p. 774). In addition to the independence of the parallel postulate, established by Nikolai Lobachevsky in 1826 (Lobachevsky 1840), mathematicians discovered that certain theorems taken for granted by Euclid were not in fact provable from his axioms. Among these are the theorems that a line contains at least two points, and that circles of the same radius whose centers are separated by that radius must intersect. Hilbert (1899) developed a complete set of axioms for geometry, building on previous work by Pasch (1882). The success in axiomatizing geometry motivated Hilbert to seek complete axiomatizations of other areas of mathematics, such as the natural numbers and the real line. This would prove to be a major area of research in the first half of the 20th century.
The 19th century saw great advances in the theory of real analysis, including theories of convergence of functions and Fourier series. Mathematicians such as Karl Weierstrass began to construct functions that stretched intuition, such as nowhere-differentiable continuous functions. Previous conceptions of a function as a rule for computation, or a smooth graph, were no longer adequate. Weierstrass began to advocate the arithmetization of analysis, which sought to axiomatize analysis using properties of the natural numbers. The modern (ε, δ)-definition of limit and continuous functions was already developed by Bolzano in 1817 (Felscher 2000), but remained relatively unknown. Cauchy in 1821 defined continuity in terms of infinitesimals (see Cours d'Analyse, page 34). In 1858, Dedekind proposed a definition of the real numbers in terms of Dedekind cuts of rational numbers (Dedekind 1872), a definition still employed in contemporary texts.
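In modern notation, the (ε, δ)-definition says that a function \( f \) is continuous at a point \( c \) precisely when

\( \forall \varepsilon > 0 \; \exists \delta > 0 \; \forall x \, \big( |x - c| < \delta \rightarrow |f(x) - f(c)| < \varepsilon \big). \)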
Georg Cantor developed the fundamental concepts of infinite set theory. His early results developed the theory of cardinality and proved that the reals and the natural numbers have different cardinalities (Cantor 1874). Over the next twenty years, Cantor developed a theory of transfinite numbers in a series of publications. In 1891, he published a new proof of the uncountability of the real numbers that introduced the diagonal argument, and used this method to prove Cantor's theorem that no set can have the same cardinality as its powerset. Cantor believed that every set could be well-ordered, but was unable to produce a proof for this result, leaving it as an open problem in 1895 (Katz 1998, p. 807).
20th century
In the early decades of the 20th century, the main areas of study were set theory and formal logic. The discovery of paradoxes in informal set theory caused some to wonder whether mathematics itself is inconsistent, and to look for proofs of consistency.
In 1900, Hilbert posed a famous list of 23 problems for the next century. The first two of these were to resolve the continuum hypothesis and prove the consistency of elementary arithmetic, respectively; the tenth was to produce a method that could decide whether a multivariate polynomial equation over the integers has a solution. Subsequent work to resolve these problems shaped the direction of mathematical logic, as did the effort to resolve Hilbert's Entscheidungsproblem, posed in 1928. This problem asked for a procedure that would decide, given a formalized mathematical statement, whether the statement is true or false.
Set theory and paradoxes
Ernst Zermelo (1904) gave a proof that every set could be well-ordered, a result Georg Cantor had been unable to obtain. To achieve the proof, Zermelo introduced the axiom of choice, which drew heated debate and research among mathematicians and the pioneers of set theory. The immediate criticism of the method led Zermelo to publish a second exposition of his result, directly addressing criticisms of his proof (Zermelo 1908a). This paper led to the general acceptance of the axiom of choice in the mathematics community.
Skepticism about the axiom of choice was reinforced by recently discovered paradoxes in naive set theory. Cesare Burali-Forti (1897) was the first to state a paradox: the Burali-Forti paradox shows that the collection of all ordinal numbers cannot form a set. Very soon thereafter, Bertrand Russell discovered Russell's paradox in 1901, and Jules Richard (1905) discovered Richard's paradox.
Zermelo (1908b) provided the first set of axioms for set theory. These axioms, together with the additional axiom of replacement proposed by Abraham Fraenkel, are now called Zermelo–Fraenkel set theory (ZF). Zermelo's axioms incorporated the principle of limitation of size to avoid Russell's paradox.
In 1910, the first volume of Principia Mathematica by Russell and Alfred North Whitehead was published. This seminal work developed the theory of functions and cardinality in a completely formal framework of type theory, which Russell and Whitehead developed in an effort to avoid the paradoxes. Principia Mathematica is considered one of the most influential works of the 20th century, although the framework of type theory did not prove popular as a foundational theory for mathematics (Ferreirós 2001, p. 445).
Fraenkel (1922) proved that the axiom of choice cannot be proved from the remaining axioms of Zermelo's set theory with urelements. Later work by Paul Cohen (1966) showed that the addition of urelements is not needed, and the axiom of choice is unprovable in ZF. Cohen's proof developed the method of forcing, which is now an important tool for establishing independence results in set theory.[2]
Symbolic logic
Leopold Löwenheim (1915) and Thoralf Skolem (1920) obtained the Löwenheim–Skolem theorem, which says that first-order logic cannot control the cardinalities of infinite structures. Skolem realized that this theorem would apply to first-order formalizations of set theory, and that it implies any such formalization has a countable model. This counterintuitive fact became known as Skolem's paradox.
In his doctoral thesis, Kurt Gödel (1929) proved the completeness theorem, which establishes a correspondence between syntax and semantics in first-order logic. Gödel used the completeness theorem to prove the compactness theorem, demonstrating the finitary nature of first-order logical consequence. These results helped establish first-order logic as the dominant logic used by mathematicians.
In 1931, Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems, which proved the incompleteness (in a different meaning of the word) of all sufficiently strong, effective first-order theories. This result, known as Gödel's incompleteness theorem, establishes severe limitations on axiomatic foundations for mathematics, striking a strong blow to Hilbert's program. It showed the impossibility of providing a consistency proof of arithmetic within any formal theory of arithmetic. Hilbert, however, did not acknowledge the importance of the incompleteness theorem for some time.
Gödel's theorem shows that a consistency proof of any sufficiently strong, effective axiom system cannot be obtained in the system itself, if the system is consistent, nor in any weaker system. This leaves open the possibility of consistency proofs that cannot be formalized within the system they consider. Gentzen (1936) proved the consistency of arithmetic using a finitistic system together with a principle of transfinite induction. Gentzen's result introduced the ideas of cut elimination and proof-theoretic ordinals, which became key tools in proof theory. Gödel (1958) gave a different consistency proof, which reduces the consistency of classical arithmetic to that of intuitionistic arithmetic in higher types.
Beginnings of the other branches
Alfred Tarski developed the basics of model theory.
Beginning in 1935, a group of prominent mathematicians collaborated under the pseudonym Nicolas Bourbaki to publish a series of encyclopedic mathematics texts. These texts, written in an austere and axiomatic style, emphasized rigorous presentation and set-theoretic foundations. Terminology coined by these texts, such as the words bijection, injection, and surjection, and the set-theoretic foundations the texts employed, were widely adopted throughout mathematics.
The study of computability came to be known as recursion theory, because early formalizations by Gödel and Kleene relied on recursive definitions of functions.[3] When these definitions were shown equivalent to Turing's formalization involving Turing machines, it became clear that a new concept – the computable function – had been discovered, and that this definition was robust enough to admit numerous independent characterizations. In his work on the incompleteness theorems in 1931, Gödel lacked a rigorous concept of an effective formal system; he immediately realized that the new definitions of computability could be used for this purpose, allowing him to state the incompleteness theorems in generality that could only be implied in the original paper.
Numerous results in recursion theory were obtained in the 1940s by Stephen Cole Kleene and Emil Leon Post. Kleene (1943) introduced the concepts of relative computability, foreshadowed by Turing (1939), and the arithmetical hierarchy. Kleene later generalized recursion theory to higher-order functionals. Kleene and Kreisel studied formal versions of intuitionistic mathematics, particularly in the context of proof theory.
Formal logical systems
At its core, mathematical logic deals with mathematical concepts expressed using formal logical systems. These systems, though they differ in many details, share the common property of considering only expressions in a fixed formal language. The systems of propositional logic and first-order logic are the most widely studied today, because of their applicability to foundations of mathematics and because of their desirable proof-theoretic properties.[4] Stronger classical logics such as second-order logic or infinitary logic are also studied, along with nonclassical logics such as intuitionistic logic.
First-order logic
Main article: First-order logic
First-order logic is a particular formal system of logic. Its syntax involves only finite expressions, called well-formed formulas, while its semantics are characterized by the limitation of all quantifiers to a fixed domain of discourse.
Early results from formal logic established limitations of first-order logic. The Löwenheim–Skolem theorem showed that if a set of sentences in a countable first-order language has an infinite model then it has at least one model of each infinite cardinality. This shows that it is impossible for a set of first-order axioms to characterize (up to isomorphism) the natural numbers, the real numbers, or any other infinite structure. As the goal of early foundational studies was to produce axiomatic theories for all parts of mathematics, this limitation was particularly stark.
Gödel's completeness theorem (Gödel 1929) established the equivalence between semantic and syntactic definitions of logical consequence in first-order logic. It shows that if a particular sentence is true in every model that satisfies a particular set of axioms, then there must be a finite deduction of the sentence from the axioms. The compactness theorem first appeared as a lemma in Gödel's proof of the completeness theorem, and it took many years before logicians grasped its significance and began to apply it routinely. It says that a set of sentences has a model if and only if every finite subset has a model, or in other words that an inconsistent set of formulas must have a finite inconsistent subset. The completeness and compactness theorems allow for sophisticated analysis of logical consequence in first-order logic and the development of model theory, and they are a key reason for the prominence of first-order logic in mathematics.
Gödel's incompleteness theorems (Gödel 1931) establish additional limits on first-order axiomatizations. A logical system is effectively given if, given any formula in the language of the system, it is possible to decide whether or not the formula is an axiom. A logical system is sufficiently strong if it is possible to express the Peano axioms in the system. The first incompleteness theorem states that for any consistent, effectively given, sufficiently strong logical system, there exists a statement, called a Gödel sentence, which is true (in the sense that it holds for the natural numbers) but not provable within that logical system (and which indeed may fail in some non-standard models of arithmetic consistent with the logical system). When applied to first-order logic, the first incompleteness theorem implies that any consistent, effectively given, sufficiently strong first-order theory has models that are not elementarily equivalent, a stronger limitation than the one established by the Löwenheim–Skolem theorem. The second incompleteness theorem states that no consistent, sufficiently strong, effectively given axiom system for arithmetic can prove its own consistency. This has been interpreted to show that Hilbert's program cannot be completed.
Other classical logics
Many logics besides first-order logic are studied. These include infinitary logics, which allow for formulas to provide an infinite amount of information, and higher-order logics, which include a portion of set theory directly in their semantics.
The most widely studied infinitary logic is \( L_{\omega_1,\omega} \). In this logic, quantifiers may only be nested to finite depths, as in first-order logic, but formulas may have finite or countably infinite conjunctions and disjunctions within them. Thus, for example, it is possible to say that an object is a whole number using a formula of \( L_{\omega_1,\omega} \) such as
\( (x = 0) \lor (x = 1) \lor (x = 2) \lor \cdots.\)
Higher-order logics allow for quantification not only over elements of the domain of discourse, but also over subsets of the domain of discourse, sets of such subsets, and other objects of higher type. The semantics are defined so that, rather than having a separate domain for each higher-type quantifier to range over, the quantifiers instead range over all objects of the appropriate type. The logics studied before the development of first-order logic, for example Frege's logic, had similar set-theoretic aspects. Although higher-order logics are more expressive, allowing complete axiomatizations of structures such as the natural numbers, they do not satisfy analogues of the completeness and compactness theorems from first-order logic, and are thus less amenable to proof-theoretic analysis.
Another type of logic is fixed-point logic, which allows inductive definitions of the kind one writes for primitive recursive functions.
One can formally define the notion of an extension of first-order logic. This notion encompasses all the logics in this section, because they behave like first-order logic in certain fundamental ways, but it does not encompass all logics in general; for example, it does not encompass intuitionistic, modal, or fuzzy logic. Lindström's theorem implies that the only extension of first-order logic satisfying both the compactness theorem and the downward Löwenheim–Skolem theorem is first-order logic itself.
Nonclassical and modal logic
Modal logics include additional modal operators, such as an operator which states that a particular formula is not only true, but necessarily true. Although modal logic is not often used to axiomatize mathematics, it has been used to study the properties of first-order provability (Solovay 1976) and set-theoretic forcing (Hamkins and Löwe 2007).
Intuitionistic logic was developed by Heyting to study Brouwer's program of intuitionism, in which Brouwer himself avoided formalization. Intuitionistic logic specifically does not include the law of the excluded middle, which states that each sentence is either true or its negation is true. Kleene's work with the proof theory of intuitionistic logic showed that constructive information can be recovered from intuitionistic proofs. For example, any provably total function in intuitionistic arithmetic is computable; this is not true in classical theories of arithmetic such as Peano arithmetic.
Algebraic logic
Algebraic logic uses the methods of abstract algebra to study the semantics of formal logics. A fundamental example is the use of Boolean algebras to represent truth values in classical propositional logic, and the use of Heyting algebras to represent truth values in intuitionistic propositional logic. Stronger logics, such as first-order logic and higher-order logic, are studied using more complicated algebraic structures such as cylindric algebras.
Set theory
Main article: Set theory
Set theory is the study of sets, which are abstract collections of objects. Many of the basic notions, such as ordinal and cardinal numbers, were developed informally by Cantor before formal axiomatizations of set theory were developed. The first such axiomatization, due to Zermelo (1908b), was extended slightly to become Zermelo–Fraenkel set theory (ZF), which is now the most widely used foundational theory for mathematics.
Other formalizations of set theory have been proposed, including von Neumann–Bernays–Gödel set theory (NBG), Morse–Kelley set theory (MK), and New Foundations (NF). Of these, ZF, NBG, and MK are similar in describing a cumulative hierarchy of sets. New Foundations takes a different approach; it allows objects such as the set of all sets at the cost of restrictions on its set-existence axioms. The system of Kripke–Platek set theory is closely related to generalized recursion theory.
Two famous statements in set theory are the axiom of choice and the continuum hypothesis. The axiom of choice, first stated by Zermelo (1904), was shown by Fraenkel (1922) to be unprovable from the remaining axioms of Zermelo's set theory with urelements, and has come to be widely accepted by mathematicians. It states that given a collection of nonempty sets there is a single set C that contains exactly one element from each set in the collection. The set C is said to "choose" one element from each set in the collection. While the ability to make such a choice is considered obvious by some, since each set in the collection is nonempty, the lack of a general, concrete rule by which the choice can be made renders the axiom nonconstructive. Stefan Banach and Alfred Tarski (1924) showed that the axiom of choice can be used to decompose a solid ball into a finite number of pieces which can then be rearranged, with no scaling, to make two solid balls of the original size. This theorem, known as the Banach–Tarski paradox, is one of many counterintuitive results of the axiom of choice.
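Stated formally, one common version of the axiom of choice asserts the existence of a choice function \( f \) for every set \( X \) of nonempty sets:

\( \forall X \, \Big[ \varnothing \notin X \rightarrow \exists f\colon X \to \textstyle\bigcup X \;\; \forall A \in X \, \big( f(A) \in A \big) \Big]. \)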
The continuum hypothesis, first proposed as a conjecture by Cantor, was listed by David Hilbert as one of his 23 problems in 1900. Gödel showed that the continuum hypothesis cannot be disproven from the axioms of Zermelo–Fraenkel set theory (with or without the axiom of choice), by developing the constructible universe of set theory in which the continuum hypothesis must hold. In 1963, Paul Cohen showed that the continuum hypothesis cannot be proven from the axioms of Zermelo–Fraenkel set theory (Cohen 1966). This independence result did not completely settle Hilbert's question, however, as it is possible that new axioms for set theory could resolve the hypothesis. Recent work along these lines has been conducted by W. Hugh Woodin, although its importance is not yet clear (Woodin 2001).
Contemporary research in set theory includes the study of large cardinals and determinacy. Large cardinals are cardinal numbers with particular properties so strong that the existence of such cardinals cannot be proved in ZFC. The existence of the smallest large cardinal typically studied, an inaccessible cardinal, already implies the consistency of ZFC. Despite the fact that large cardinals have extremely high cardinality, their existence has many ramifications for the structure of the real line. Determinacy refers to the possible existence of winning strategies for certain two-player games (the games are said to be determined). The existence of these strategies implies structural properties of the real line and other Polish spaces.
Model theory
Main article: Model theory
Model theory studies the models of various formal theories. Here a theory is a set of formulas in a particular formal logic and signature, while a model is a structure that gives a concrete interpretation of the theory. Model theory is closely related to universal algebra and algebraic geometry, although the methods of model theory focus more on logical considerations than those fields.
The set of all models of a particular theory is called an elementary class; classical model theory seeks to determine the properties of models in a particular elementary class, or determine whether certain classes of structures form elementary classes.
The method of quantifier elimination can be used to show that definable sets in particular theories cannot be too complicated. Tarski (1948) established quantifier elimination for real-closed fields, a result which also shows the theory of the field of real numbers is decidable. (He also noted that his methods were equally applicable to algebraically closed fields of arbitrary characteristic.) A modern subfield developing from this is concerned with o-minimal structures.
Morley's categoricity theorem, proved by Michael D. Morley (1965), states that if a first-order theory in a countable language is categorical in some uncountable cardinality, i.e. all models of this cardinality are isomorphic, then it is categorical in all uncountable cardinalities.
A trivial consequence of the continuum hypothesis is that a complete theory with less than continuum many nonisomorphic countable models can have only countably many. Vaught's conjecture, named after Robert Lawson Vaught, says that this is true even independently of the continuum hypothesis. Many special cases of this conjecture have been established.
Recursion theory
Main article: Recursion theory
Recursion theory, also called computability theory, studies the properties of computable functions and the Turing degrees, which divide the uncomputable functions into sets that have the same level of uncomputability. Recursion theory also includes the study of generalized computability and definability. Recursion theory grew from the work of Alonzo Church and Alan Turing in the 1930s, which was greatly extended by Kleene and Post in the 1940s.
Classical recursion theory focuses on the computability of functions from the natural numbers to the natural numbers. The fundamental results establish a robust, canonical class of computable functions with numerous independent, equivalent characterizations using Turing machines, λ calculus, and other systems. More advanced results concern the structure of the Turing degrees and the lattice of recursively enumerable sets.
Generalized recursion theory extends the ideas of recursion theory to computations that are no longer necessarily finite. It includes the study of computability in higher types as well as areas such as hyperarithmetical theory and α-recursion theory.
Contemporary research in recursion theory includes the study of applications such as algorithmic randomness, computable model theory, and reverse mathematics, as well as new results in pure recursion theory.
Algorithmically unsolvable problems
An important subfield of recursion theory studies algorithmic unsolvability; a decision problem or function problem is algorithmically unsolvable if there is no possible computable algorithm that returns the correct answer for all legal inputs to the problem. The first results about unsolvability, obtained independently by Church and Turing in 1936, showed that the Entscheidungsproblem is algorithmically unsolvable. Turing proved this by establishing the unsolvability of the halting problem, a result with far-ranging implications in both recursion theory and computer science.
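Turing's argument can be sketched informally in Python. The names halts and diagonal below are purely illustrative: assuming, for contradiction, that a total procedure halts(prog, data) correctly reported whether prog would halt on data, the self-referential program diagonal would halt on its own source exactly when halts says it does not.

def halts(prog, data):
    # Hypothetical total decision procedure for the halting problem;
    # the diagonal argument shows no such procedure can exist.
    raise NotImplementedError

def diagonal(prog):
    # Loop forever if prog would halt when run on itself; otherwise halt.
    if halts(prog, prog):
        while True:
            pass
    return "halted"

# diagonal(diagonal) halts if and only if halts(diagonal, diagonal) is False,
# contradicting the assumption that halts always answers correctly.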
There are many known examples of undecidable problems from ordinary mathematics. The word problem for groups was proved algorithmically unsolvable by Pyotr Novikov in 1955 and independently by W. Boone in 1959. The busy beaver problem, developed by Tibor Radó in 1962, is another well-known example.
Hilbert's tenth problem asked for an algorithm to determine whether a multivariate polynomial equation with integer coefficients has a solution in the integers. Partial progress was made by Julia Robinson, Martin Davis and Hilary Putnam. The algorithmic unsolvability of the problem was proved by Yuri Matiyasevich in 1970 (Davis 1973).
Proof theory and constructive mathematics
Main article: Proof theory
Proof theory is the study of formal proofs in various logical deduction systems. These proofs are represented as formal mathematical objects, facilitating their analysis by mathematical techniques. Several deduction systems are commonly considered, including Hilbert-style deduction systems, systems of natural deduction, and the sequent calculus developed by Gentzen.
The study of constructive mathematics, in the context of mathematical logic, includes the study of systems in non-classical logic such as intuitionistic logic, as well as the study of predicative systems. An early proponent of predicativism was Hermann Weyl, who showed it is possible to develop a large part of real analysis using only predicative methods (Weyl 1918).
Because proofs are entirely finitary, whereas truth in a structure is not, it is common for work in constructive mathematics to emphasize provability. The relationship between provability in classical (or nonconstructive) systems and provability in intuitionistic (or constructive, respectively) systems is of particular interest. Results such as the Gödel–Gentzen negative translation show that it is possible to embed (or translate) classical logic into intuitionistic logic, allowing some properties about intuitionistic proofs to be transferred back to classical proofs.
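In one common formulation of the translation, double negations are inserted at atomic formulas, and disjunctions and existential quantifiers are rewritten:

\( A^N \equiv \neg\neg A \text{ (for atomic } A\text{)}, \quad (A \wedge B)^N \equiv A^N \wedge B^N, \quad (A \vee B)^N \equiv \neg(\neg A^N \wedge \neg B^N), \)

\( (A \rightarrow B)^N \equiv A^N \rightarrow B^N, \quad (\forall x\, A)^N \equiv \forall x\, A^N, \quad (\exists x\, A)^N \equiv \neg \forall x\, \neg A^N. \)

A formula is then provable in classical first-order logic exactly when its translation is provable intuitionistically.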
Recent developments in proof theory include the study of proof mining by Ulrich Kohlenbach and the study of proof-theoretic ordinals by Michael Rathjen.
Connections with computer science
Main article: Logic in computer science
The study of computability theory in computer science is closely related to the study of computability in mathematical logic. There is a difference of emphasis, however. Computer scientists often focus on concrete programming languages and feasible computability, while researchers in mathematical logic often focus on computability as a theoretical concept and on noncomputability.
The theory of semantics of programming languages is related to model theory, as is program verification (in particular, model checking). The Curry–Howard isomorphism between proofs and programs relates to proof theory, especially intuitionistic logic. Formal calculi such as the lambda calculus and combinatory logic are now studied as idealized programming languages.
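As a small illustration of the correspondence, with Python type hints standing in for the types of a typed lambda calculus (the name compose is illustrative), a program of type \( (A \to B) \wedge (B \to C) \to (A \to C) \) doubles as a proof of the corresponding propositional tautology:

from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    # Under the Curry-Howard correspondence, constructing this function is a
    # proof that implications A -> B and B -> C together yield A -> C.
    def h(x: A) -> C:
        return g(f(x))
    return h

assert compose(len, lambda n: n + 1)("logic") == 6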
Computer science also contributes to mathematics by developing techniques for the automatic checking or even finding of proofs, such as automated theorem proving and logic programming.
Descriptive complexity theory relates logics to computational complexity. The first significant result in this area, Fagin's theorem (1974), established that NP is precisely the set of languages expressible by sentences of existential second-order logic.
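For example, graph 3-colorability, a problem in NP, is defined by an existential second-order sentence over graphs with edge relation \( E \), asserting that three vertex sets cover the graph with no edge lying inside a single set:

\( \exists R \, \exists G \, \exists B \, \Big[ \forall x \, \big( R(x) \vee G(x) \vee B(x) \big) \wedge \forall x \, \forall y \, \big( E(x,y) \rightarrow \neg(R(x) \wedge R(y)) \wedge \neg(G(x) \wedge G(y)) \wedge \neg(B(x) \wedge B(y)) \big) \Big]. \)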
Foundations of mathematics
Main article: Foundations of mathematics
In the 19th century, mathematicians became aware of logical gaps and inconsistencies in their field. It was shown that Euclid's axioms for geometry, which had been taught for centuries as an example of the axiomatic method, were incomplete. The use of infinitesimals, and the very definition of function, came into question in analysis, as pathological examples such as Weierstrass' nowhere-differentiable continuous function were discovered.
Cantor's study of arbitrary infinite sets also drew criticism. Leopold Kronecker famously stated "God made the integers; all else is the work of man," endorsing a return to the study of finite, concrete objects in mathematics. Although Kronecker's argument was carried forward by constructivists in the 20th century, the mathematical community as a whole rejected it. David Hilbert argued in favor of the study of the infinite, saying "No one shall expel us from the Paradise that Cantor has created."
Mathematicians began to search for axiom systems that could be used to formalize large parts of mathematics. In addition to removing ambiguity from previously naive terms such as function, it was hoped that this axiomatization would allow for consistency proofs. In the 19th century, the main method of proving the consistency of a set of axioms was to provide a model for it. Thus, for example, non-Euclidean geometry can be proved consistent by defining point to mean a point on a fixed sphere and line to mean a great circle on the sphere. The resulting structure, a model of elliptic geometry, satisfies the axioms of plane geometry except the parallel postulate.
With the development of formal logic, Hilbert asked whether it would be possible to prove that an axiom system is consistent by analyzing the structure of possible proofs in the system, and showing through this analysis that it is impossible to prove a contradiction. This idea led to the study of proof theory. Moreover, Hilbert proposed that the analysis should be entirely concrete, using the term finitary to refer to the methods he would allow but not precisely defining them. This project, known as Hilbert's program, was seriously affected by Gödel's incompleteness theorems, which show that the consistency of formal theories of arithmetic cannot be established using methods formalizable in those theories. Gentzen showed that it is possible to produce a proof of the consistency of arithmetic in a finitary system augmented with axioms of transfinite induction, and the techniques he developed to do so were seminal in proof theory.
A second thread in the history of foundations of mathematics involves nonclassical logics and constructive mathematics. The study of constructive mathematics includes many different programs with various definitions of constructive. At the most accommodating end, proofs in ZF set theory that do not use the axiom of choice are called constructive by many mathematicians. More limited versions of constructivism limit themselves to natural numbers, number-theoretic functions, and sets of natural numbers (which can be used to represent real numbers, facilitating the study of mathematical analysis). A common idea is that a concrete means of computing the values of the function must be known before the function itself can be said to exist.
In the early 20th century, Luitzen Egbertus Jan Brouwer founded intuitionism as a philosophy of mathematics. This philosophy, poorly understood at first, stated that in order for a mathematical statement to be true to a mathematician, that person must be able to intuit the statement, to not only believe its truth but understand the reason for its truth. A consequence of this definition of truth was the rejection of the law of the excluded middle, for there are statements that, according to Brouwer, could not be claimed to be true while their negations also could not be claimed true. Brouwer's philosophy was influential, and the cause of bitter disputes among prominent mathematicians. Later, Kleene and Kreisel would study formalized versions of intuitionistic logic (Brouwer rejected formalization, and presented his work in unformalized natural language). With the advent of the BHK interpretation and Kripke models, intuitionism became easier to reconcile with classical mathematics.
See also
Logic portal
Knowledge representation and reasoning
List of computability and complexity topics
List of first-order theories
List of logic symbols
List of mathematical logic topics
List of set theory topics
Metalogic
Notes
1. Undergraduate texts include Boolos, Burgess, and Jeffrey (2002), Enderton (2001), and Mendelson (1997). A classic graduate text by Shoenfield (2001) first appeared in 1967.
2. See also Cohen 2008.
3. A detailed study of this terminology is given by Soare (1996).
4. Ferreirós (2001) surveys the rise of first-order logic over other formal logics in the early 20th century.
References
Undergraduate texts
Walicki, Michał (2011), Introduction to Mathematical Logic, Singapore: World Scientific Publishing, ISBN 978-981-4343-87-9.
Boolos, George; Burgess, John; Jeffrey, Richard (2002), Computability and Logic (4th ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-00758-0.
Crossley, J.N.; Ash, C.J.; Brickhill, C.J.; Stillwell, J.C.; Williams, N.H. (1972), What is mathematical logic?, London-Oxford-New York: Oxford University Press, ISBN 0-19-888087-1, Zbl 0251.02001.
Enderton, Herbert (2001), A mathematical introduction to logic (2nd ed.), Boston, MA: Academic Press, ISBN 978-0-12-238452-3.
Hamilton, A.G. (1988), Logic for Mathematicians (2nd ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-36865-0.
Ebbinghaus, H.-D.; Flum, J.; Thomas, W. (1994), Mathematical Logic (2nd ed.), New York: Springer, ISBN 0-387-94258-0.
Katz, Robert (1964), Axiomatic Analysis, Boston, MA: D. C. Heath and Company.
Mendelson, Elliott (1997), Introduction to Mathematical Logic (4th ed.), London: Chapman & Hall, ISBN 978-0-412-80830-2.
Rautenberg, Wolfgang (2010), A Concise Introduction to Mathematical Logic (3rd ed.), New York: Springer Science+Business Media, doi:10.1007/978-1-4419-1221-3, ISBN 978-1-4419-1220-6.
Schwichtenberg, Helmut (2003–2004), Mathematical Logic (PDF), Munich, Germany: Mathematisches Institut der Universität München.
Hedman, Shawn (2004), A first course in logic: an introduction to model theory, proof theory, computability, and complexity, Oxford: Oxford University Press, ISBN 0-19-852981-3. Covers logics in close relation to computability theory and complexity theory.
Graduate texts
Andrews, Peter B. (2002), An Introduction to Mathematical Logic and Type Theory: To Truth Through Proof (2nd ed.), Boston: Kluwer Academic Publishers, ISBN 978-1-4020-0763-7.
Barwise, Jon, ed. (1989), Handbook of Mathematical Logic, Studies in Logic and the Foundations of Mathematics, North Holland, ISBN 978-0-444-86388-1.
Hodges, Wilfrid (1997), A shorter model theory, Cambridge: Cambridge University Press, ISBN 978-0-521-58713-6.
Jech, Thomas (2003), Set Theory: Millennium Edition, Springer Monographs in Mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-44085-7.
Shoenfield, Joseph R. (2001) [1967], Mathematical Logic (2nd ed.), A K Peters, ISBN 978-1-56881-135-2.
Troelstra, Anne Sjerp; Schwichtenberg, Helmut (2000), Basic Proof Theory, Cambridge Tracts in Theoretical Computer Science (2nd ed.), Cambridge: Cambridge University Press, ISBN 978-0-521-77911-1.
Research papers, monographs, texts, and surveys
Cohen, P. J. (1966), Set Theory and the Continuum Hypothesis, Menlo Park, CA: W. A. Benjamin.
Cohen, Paul Joseph (2008) [1966]. Set theory and the continuum hypothesis. Mineola, New York: Dover Publications. ISBN 978-0-486-46921-8.
Sneed, J. D. (1971), The Logical Structure of Mathematical Physics, Dordrecht: Reidel (revised edition 1979).
Davis, Martin (1973), "Hilbert's tenth problem is unsolvable", The American Mathematical Monthly 80 (3): 233–269, doi:10.2307/2318447, JSTOR 2318447, reprinted as an appendix in Martin Davis, Computability and Unsolvability, Dover reprint 1982.
Felscher, Walter (2000), "Bolzano, Cauchy, Epsilon, Delta", The American Mathematical Monthly 107 (9): 844–862, doi:10.2307/2695743, JSTOR 2695743.
Ferreirós, José (2001), "The Road to Modern Logic-An Interpretation", Bulletin of Symbolic Logic 7 (4): 441–484, doi:10.2307/2687794, JSTOR 2687794.
Hamkins, Joel David; Löwe, Benedikt, "The modal logic of forcing", Transactions of the American Mathematical Society, to appear.
Katz, Victor J. (1998), A History of Mathematics, Addison–Wesley, ISBN 0-321-01618-1.
Morley, Michael (1965), "Categoricity in Power", Transactions of the American Mathematical Society 114 (2): 514–538, doi:10.2307/1994188, JSTOR 1994188.
Soare, Robert I. (1996), "Computability and recursion", Bulletin of Symbolic Logic 2 (3): 284–321, doi:10.2307/420992, JSTOR 420992.
Solovay, Robert M. (1976), "Provability Interpretations of Modal Logic", Israel Journal of Mathematics 25 (3–4): 287–304, doi:10.1007/BF02757006.
Woodin, W. Hugh (2001), "The Continuum Hypothesis, Part I", Notices of the American Mathematical Society 48 (6).
Classical papers, texts, and collections
Burali-Forti, Cesare (1897), A question on transfinite numbers, reprinted in van Heijenoort 1976, pp. 104–111.
Dedekind, Richard (1872), Stetigkeit und irrationale Zahlen. English translation of title: "Continuity and irrational numbers".
Dedekind, Richard (1888), Was sind und was sollen die Zahlen? Two English translations:
1963 (1901). Essays on the Theory of Numbers. Beman, W. W., ed. and trans. Dover.
1996. In From Kant to Hilbert: A Source Book in the Foundations of Mathematics, 2 vols, Ewald, William B., ed., Oxford University Press: 787–832.
Fraenkel, Abraham A. (1922), "Der Begriff 'definit' und die Unabhängigkeit des Auswahlsaxioms", Sitzungsberichte der Preussischen Akademie der Wissenschaften, Physikalisch-mathematische Klasse, pp. 253–257 (German), reprinted in English translation as "The notion of 'definite' and the independence of the axiom of choice", van Heijenoort 1976, pp. 284–289.
Frege, Gottlob (1879), Begriffsschrift, eine der arithmetischen nachgebildete Formelsprache des reinen Denkens. Halle a. S.: Louis Nebert. Translation: Concept Script, a formal language of pure thought modelled upon that of arithmetic, by S. Bauer-Mengelberg in Jean Van Heijenoort, ed., 1967. From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931. Harvard University Press.
Frege, Gottlob (1884), Die Grundlagen der Arithmetik: eine logisch-mathematische Untersuchung über den Begriff der Zahl. Breslau: W. Koebner. Translation: J. L. Austin, 1974. The Foundations of Arithmetic: A logico-mathematical enquiry into the concept of number, 2nd ed. Blackwell.
Gentzen, Gerhard (1936), "Die Widerspruchsfreiheit der reinen Zahlentheorie", Mathematische Annalen 112: 132–213, doi:10.1007/BF01565428, reprinted in English translation in Gentzen's Collected works, M. E. Szabo, ed., North-Holland, Amsterdam, 1969.[specify]
Gödel, Kurt (1929), Über die Vollständigkeit des Logikkalküls, doctoral dissertation, University Of Vienna. English translation of title: "Completeness of the logical calculus".
Gödel, Kurt (1930), "Die Vollständigkeit der Axiome des logischen Funktionen-kalküls", Monatshefte für Mathematik und Physik 37: 349–360, doi:10.1007/BF01696781. English translation of title: "The completeness of the axioms of the calculus of logical functions".
Gödel, Kurt (1931), "Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I", Monatshefte für Mathematik und Physik 38 (1): 173–198, doi:10.1007/BF01700692, see On Formally Undecidable Propositions of Principia Mathematica and Related Systems for details on English translations.
Gödel, Kurt (1958), "Über eine bisher noch nicht benützte Erweiterung des finiten Standpunktes", Dialectica. International Journal of Philosophy 12 (3–4): 280–287, doi:10.1111/j.1746-8361.1958.tb01464.x, reprinted in English translation in Gödel's Collected Works, vol II, Soloman Feferman et al., eds. Oxford University Press, 1990.[specify]
van Heijenoort, Jean, ed. (1976) [1967], From Frege to Gödel: A Source Book in Mathematical Logic, 1879–1931 (3rd printing with corrections), Cambridge, Mass: Harvard University Press, ISBN 0-674-32449-8 (pbk.).
Hilbert, David (1899), Grundlagen der Geometrie, Leipzig: Teubner, English 1902 edition (The Foundations of Geometry) republished 1980, Open Court, Chicago.
Hilbert, David (1929), "Probleme der Grundlegung der Mathematik", Mathematische Annalen 102: 1–9, doi:10.1007/BF01782335. Lecture given at the International Congress of Mathematicians, 3 September 1928. Published in English translation as "The Grounding of Elementary Number Theory", in Mancosu 1998, pp. 266–273.
Kleene, Stephen Cole (1943), "Recursive Predicates and Quantifiers", Transactions of the American Mathematical Society 53 (1): 41–73, doi:10.2307/1990131, JSTOR 1990131.
Lobachevsky, Nikolai (1840), Geometrische Untersuchungen zur Theorie der Parallellinien (German). Reprinted in English translation as "Geometric Investigations on the Theory of Parallel Lines" in Non-Euclidean Geometry, Robert Bonola (ed.), Dover, 1955. ISBN 0-486-60027-0
Löwenheim, Leopold (1915), "Über Möglichkeiten im Relativkalkül", Mathematische Annalen 76 (4): 447–470, doi:10.1007/BF01458217, ISSN 0025-5831 (German). Translated as "On possibilities in the calculus of relatives" in Jean van Heijenoort, 1967. A Source Book in Mathematical Logic, 1879–1931. Harvard Univ. Press: 228–251.
Mancosu, Paolo, ed. (1998), From Brouwer to Hilbert. The Debate on the Foundations of Mathematics in the 1920s, Oxford: Oxford University Press.
Pasch, Moritz (1882), Vorlesungen über neuere Geometrie.
Peano, Giuseppe (1889), Arithmetices principia, nova methodo exposita (Latin), excerpt reprinted in English translation as "The principles of arithmetic, presented by a new method", van Heijenoort 1976, pp. 83–97.
Richard, Jules (1905), "Les principes des mathématiques et le problème des ensembles", Revue générale des sciences pures et appliquées 16: 541 (French), reprinted in English translation as "The principles of mathematics and the problems of sets", van Heijenoort 1976, pp. 142–144.
Skolem, Thoralf (1920), "Logisch-kombinatorische Untersuchungen über die Erfüllbarkeit oder Beweisbarkeit mathematischer Sätze nebst einem Theoreme über dichte Mengen", Videnskapsselskapet Skrifter, I. Matematisk-naturvidenskabelig Klasse 6: 1–36.
Tarski, Alfred (1948), A decision method for elementary algebra and geometry, Santa Monica, California: RAND Corporation
Turing, Alan M. (1939), "Systems of Logic Based on Ordinals", Proceedings of the London Mathematical Society 45 (2): 161–228, doi:10.1112/plms/s2-45.1.161
Zermelo, Ernst (1904), "Beweis, daß jede Menge wohlgeordnet werden kann", Mathematische Annalen 59 (4): 514–516, doi:10.1007/BF01445300 (German), reprinted in English translation as "Proof that every set can be well-ordered", van Heijenoort 1976, pp. 139–141.
Zermelo, Ernst (1908a), "Neuer Beweis für die Möglichkeit einer Wohlordnung", Mathematische Annalen 65: 107–128, doi:10.1007/BF01450054, ISSN 0025-5831 (German), reprinted in English translation as "A new proof of the possibility of a well-ordering", van Heijenoort 1976, pp. 183–198.
Zermelo, Ernst (1908b), "Untersuchungen über die Grundlagen der Mengenlehre", Mathematische Annalen 65 (2): 261–281, doi:10.1007/BF01449999.
External links
Hazewinkel, Michiel, ed. (2001), "Mathematical logic", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4
Polyvalued logic and Quantity Relation Logic
forall x: an introduction to formal logic, a free textbook by P. D. Magnus.
A Problem Course in Mathematical Logic, a free textbook by Stefan Bilaniuk.
Detlovs, Vilnis, and Podnieks, Karlis (University of Latvia), Introduction to Mathematical Logic. (hyper-textbook).
In the Stanford Encyclopedia of Philosophy:
Classical Logic by Stewart Shapiro.
First-order Model Theory by Wilfrid Hodges.
In the London Philosophy Study Guide:
Mathematical Logic
Set Theory & Further Logic
Philosophy of Mathematics