Math is a lot like a game of checkers. Abstractly, an axiomatic system can be reduced to three things:
- a set of symbols (definitions)
- a set of rules for manipulating those symbols (axioms)
- a set of derived results (theorems)
For example, here is an inconsistent system, in which two core axioms contradict each other:
System X
(1) X is true (assumption)
(2) X is not true (assumption)
(3) Either X is true or pigs can fly. ('or' introduction from 1)
(4) But since X is not true (from 2), pigs can fly. ('or' elimination from 3 and 2)
(5) Therefore, since we can replace 'pigs can fly' with any other random absurdity, if a system is inconsistent, we can derive any assertion from it.
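The five-step derivation above is the classical principle of explosion (from a contradiction, anything follows). As a sketch, the whole argument collapses to a one-line proof in Lean (assuming Lean 4 syntax):

```lean
-- From a contradiction (X and ¬X), any proposition P follows,
-- including "pigs can fly".
example (X P : Prop) (h1 : X) (h2 : ¬X) : P :=
  absurd h1 h2
```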
So, if 'inconsistency' means that any result can be derived, 'consistency' just means there is at least one thing that cannot be derived from the set of assumptions. To show the consistency of a simple system consisting only of '1=0', it's easy to show we cannot derive '1 != 0': from '1=0', the most we may derive is '1=2', '1=3', '1=4', and so on.
Therefore '1=0' is only inconsistent if we introduce definitions/axioms that make it so.
Usefulness of 1=0
My next thought, though, is: "So '1=0' may be logically consistent, but is it useless? What can you do with a number system in which 1=0?"
However, consider a larger context of use. Likewise, we may not see a good use for the number "zero" taken only by itself. But when combined with other numbers in a larger system, zero is useful precisely because it is the trivial case. And the trivial case becomes a pivot point for solving an equation, given its unique properties when applied to operators.
So what is a simple real-world example? Suppose we have a ceiling fan or lamp that has a cyclic switch with four settings:
- off
- low
- medium
- high
Now we could physically replace the fan switch with one with fewer possible states (3, 2, or 1). With the smallest number of states, we have a switch that is hardwired to be either always on or always off. This is a situation that '1 = 0 (mod 1)' models best. So we could view a wire as a cyclic switch whose states 1 and 0 coincide. A collection of switches AND wires therefore reduces to a collection of only switches. Not that this is the only way to see the problem, but the benefit is that it leads to a different perspective (one that doesn't require as many different types of components).
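The fan switch and the "wire as a mod-1 switch" idea can be sketched in code. This is a hypothetical model (the `CyclicSwitch` class is my own naming, not any library's):

```python
# A cyclic switch with n states: each press advances the state mod n.
# The degenerate n = 1 case is a wire: pressing it changes nothing,
# because 1 = 0 (mod 1).

class CyclicSwitch:
    def __init__(self, n):
        self.n = n          # number of states (off/low/medium/high -> n = 4)
        self.state = 0      # start at state 0 ("off")

    def press(self):
        self.state = (self.state + 1) % self.n
        return self.state

fan = CyclicSwitch(4)       # off, low, medium, high
wire = CyclicSwitch(1)      # a wire: one state only

print([fan.press() for _ in range(5)])   # 1, 2, 3, 0, 1 -> loops at 4
print([wire.press() for _ in range(3)])  # 0, 0, 0 -> the "1 = 0" behaviour
```

With this view, a circuit of switches and wires is just a collection of `CyclicSwitch` objects with different values of `n`.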
Combining inconsistent branches into a consistent superstructure:
There can be a fork in mathematics where on one branch '1=0' is an axiom, and on another branch '1!=0' is an axiom. Both are consistent when taken individually, but they are inconsistent in relation to each other. But are these two systems truly incompatible?
Not necessarily. Both could be combined into one axiomatic superstructure, where the "loop" property of the number line is a variable.
For example, in a system where '1=0', the numbers loop at 1. In modular arithmetic, the number line loops at some finite number greater than 1. And for standard arithmetic, the loop could be placed somewhere past the largest number we need. This isn't the only approach to combining the two systems; it's just one example.
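The "loop length as a variable" superstructure can be sketched as a single addition function parameterised by the loop length `n`. This is a toy model under my own assumptions, not a full axiomatization:

```python
# One "superstructure" addition, parameterised by the loop length n.
# n = 1 recovers the 1 = 0 system; a finite n > 1 gives ordinary
# modular (clock) arithmetic; an n larger than any number in use
# behaves like the standard number line.

def add(a, b, n):
    return (a + b) % n

assert add(1, 0, 1) == 0       # in the n = 1 branch, 1 "is" 0
assert add(5, 9, 12) == 2      # clock arithmetic, n = 12
assert add(5, 9, 10**9) == 14  # huge n: indistinguishable from ordinary addition
```

Choosing a value for `n` corresponds to picking a branch of the fork; the superstructure itself makes no commitment.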
So I think the interesting results from this line of reasoning are:
1. For any statement that is derivable in one axiomatic system, there exists at least one alternate axiomatic system in which the exact opposite result is derivable. At minimum, we may take that opposite result to be the starting axiom of the new system.
2. In general, any two incompatible axiomatic systems can be brought together under one superset of axioms, simply by adding one (or more) variables to the axioms that switch results down one branch or the other. This is similar to refactoring in software engineering, where different cases can always be brought together within the same handler by introducing more variables.
3. The difference between "internal inconsistency of one system" and "external inconsistency between two systems" is arbitrary. Any subset of assumptions may be considered to be an independent axiomatic system. So, consistency problems within a single given system can likewise be resolved by the addition of (at least) one new variable. For example, considering "System X" again, we can introduce a new variable Y that easily resolves the contradiction:
System X+Y
(1') X is true if Y (assumption)
(2') X is not true if not Y (assumption)
(3') It is no longer possible to derive 'pigs can fly'.
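As a sanity check, System X+Y can be verified mechanically: treating (1') and (2') as constraints, every fixed value of Y determines X uniquely, so the contradiction (and the flying pigs) never arises. A minimal sketch, with function names of my own invention:

```python
# Check System X+Y by brute force over truth values.
# Constraints: (1') Y -> X, and (2') not Y -> not X.

def consistent(y):
    # Collect every value of X compatible with both assumptions.
    candidates = [x for x in (True, False)
                  if (not y or x) and (y or not x)]
    # Exactly one candidate means X is pinned down with no contradiction
    # (zero candidates would mean the assumptions conflict).
    return len(candidates) == 1

assert consistent(True) and consistent(False)
```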
So rather than seeing two complete systems of axioms as inconsistent, we may see both as special cases of a larger framework of axioms.
The trade-off of adding new variables to core axioms is that they increase the resolution of detail but decrease simplicity. Really, the practical problem in designing an axiomatic system isn't so much getting the system to be consistent, but preventing it from becoming too complex and unwieldy to use.
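The refactoring analogy from point 2 can be sketched as two incompatible handlers merged by adding one parameter, which plays the role of the new axiom variable. The function names here are hypothetical:

```python
# Two "incompatible" handlers: each gives the opposite result.

def handler_a(x):
    return x + 1

def handler_b(x):
    return x - 1

# One merged handler: the new parameter 'branch' (the variable Y)
# switches results down one branch or the other.

def merged(x, branch):
    return x + 1 if branch else x - 1

assert merged(10, True) == handler_a(10)
assert merged(10, False) == handler_b(10)
```

As with the axioms, the cost is that every caller of `merged` now has to carry the extra parameter around.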
3 comments:
I got all excited when I saw that 1=0. Then I was deflated again when I learned the explanation was a billion miles over my head. Ah's too dum!
:)
I was thinking of another example from an old Foghorn Leghorn joke. If "nothing" is a unit:
1/2 nothing + 1/2 nothing = 1 nothing
and
1 nothing = 0 nothings
So, with these units, 1=0. It's possible to think of other units where this would apply as well.