Sunday, July 18, 2010

Paradox: I'm lying, and that's a promise

I remember years ago, on the old "Star Trek" series, Spock told a robot that he was lying, and it made the robot's head explode.

That paradox fascinated me, because it is so simple, yet so tricky to analyze. It cleverly sets all the logic about itself in a tight loop. I wondered:

  • Does the statement have a truth value?
  • Is the statement valid?
  • If something is screwy with the statement, what exactly is it?
After giving it some thought, here is my analysis of this type of paradox:

Consider:

(1) This sentence is false.

(2) Suppose the statement at (1) is either true or false. The analysis will explore the two branches independently.
(3.a) First assume it is true. Then from (1), we can easily derive that the statement is false.
(3.b) Now assume it is false. No further deduction can be made using (1) within this branch, since any attempt to prove the truth of (1) would itself have to rely on (1) as an assumption, which we have just assumed to be false.
(4) Hence, either way, the reasoning terminates on the conclusion that the statement at (1) is false.

What's curious is that the logic stemming from the paradox is not really symmetrical.

In deriving the falsity of the statement (1), we have a valid line of reasoning (reductio ad absurdum). However, attempting to derive the opposite result -- that the statement is true -- is invalid (circular reasoning). Using (1) in any manner to prove that (1) is true assumes the conclusion. And if the statement is false, we don't even know that whatever mechanism would license the desired inference is available, since there's more than one way a statement can be false.

So then what exactly is screwy with the statement? The problem is simple, but to model this more precisely, I will need better notation.

Define a set of distinct truth values {0, 1, null}.

As an analogy, "0" will function like "false," "1" will function like "true," and just to be safe "null" is anything else that doesn't fit in the previous two categories, for example, an incomplete thought, or gibberish. The value "null" will propagate to anything it touches.
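
To make the propagation rule concrete, here's a rough sketch in Python (purely my own illustration, with None standing in for "null" and the name and_values invented for the example):

def and_values(a, b):
    # Combine two truth values from {0, 1, None}: null infects the result.
    if a is None or b is None:
        return None
    return min(a, b)  # 1 AND 1 -> 1; anything AND 0 -> 0

print(and_values(1, 1))     # 1
print(and_values(0, 1))     # 0
print(and_values(None, 1))  # None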

But how would I determine which of these values a statement maps to?
What does it mean for a mathematical statement to be "true?"

For a simple example, if I defined numbers like:

Let 1 be a primitive
Let + be an operation
Let 2 = 1 + 1
Let 3 = 1 + 1 + 1
Let 4 = 1 + 1 + 1 + 1
...

Then a statement like 2+2=4 is "true" since it's derivable from the basic definitions/axioms:

4 = 1+1+1+1 = (1+1) + (1+1) = 2+2
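
As a quick sanity check of that derivation (a toy sketch in Python, again just my own illustration; the definitions table and the unfold helper are made up for the example), each defined numeral can be represented by how many copies of the primitive 1 it abbreviates, and "2+2=4" holds exactly when both sides unfold to the same pile of 1s:

# Each defined numeral abbreviates some count of the primitive "1".
definitions = {"1": 1, "2": 2, "3": 3, "4": 4}

def unfold(expr):
    # Reduce an expression like "2+2" to the number of 1s it expands into.
    return sum(definitions[token] for token in expr.split("+"))

print(unfold("2+2") == unfold("4"))  # True: both sides unfold to 1+1+1+1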

Generally, in math, saying "a statement is true" just means that it's derivable from the given set of definitions/axioms. So more precisely:

Define a function truth_value(P) that maps an assertion P to its truth value such that:
The return value is 1, if P is derivable from the given set of definitions/axioms.
The return value is 0, if a contradiction (like 1=0) can be derived from P and the given set of definitions/axioms.
The return value is null otherwise.
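
Continuing the toy sketch from above (still only my illustration; a real truth_value would need an actual proof system rather than string splitting, and the helper is repeated here so the snippet runs on its own), for simple equations in that little number system "derivable" just means both sides unfold to the same count of 1s, and a contradiction means they unfold to different counts:

definitions = {"1": 1, "2": 2, "3": 3, "4": 4}  # numerals as counts of the primitive 1

def unfold(expr):
    # Reduce an expression like "2+2" to the number of 1s it expands into.
    return sum(definitions[token] for token in expr.split("+"))

def truth_value(statement):
    # statement is an equation like "2+2=4" in the toy number system.
    try:
        left, right = statement.split("=")
        return 1 if unfold(left) == unfold(right) else 0
    except (ValueError, KeyError):
        return None  # malformed: an incomplete thought, or gibberish

print(truth_value("2+2=4"))   # 1
print(truth_value("2+2=3"))   # 0 (from it we could derive 4 = 3, i.e. 1 = 0)
print(truth_value("2+junk"))  # None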

Also, the statement at (1) contains a semantic structure like "P is false," which operates like an inverter on the truth value. For example, if P is true, "P is false" is false. If P is false, "P is false" is true. This construction essentially just flips the truth value of P. To model it more precisely:

Define a truth function inverter(x) such that:
The return value is 1, if x = 0.
The return value is 0, if x = 1.

Note: if x is the value 1 or 0, the function is just:

inverter(x) = 1 - x.
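
In Python the whole thing is a one-liner (my sketch; the case where null, here None, propagates through is my addition, following the earlier rule):

def inverter(x):
    # Flip a binary truth value; let null (None) propagate, per the earlier rule.
    if x is None:
        return None
    return 1 - x

print(inverter(0))  # 1
print(inverter(1))  # 0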

Now the statement at (1) is only talking about truth values. And it's asserting (at least) two different things. If we let the variable S refer to the statement at (1), the assertions are:

The statement is false:

truth_value(S) = 0

and a self-reference roughly like "S = 'S is false'" sets up an identity statement about the truth values:

truth_value(S) = inverter(truth_value(S))

I can extract more things from the statement (like more complex combinations of the above two identities), but ultimately, in exploding a statement into a package of simpler assertions, all these things would be AND'ed together to form the complete thought.

But I'm only interested in showing the falsity of the entire package. It's enough to note that if any one of these components is false, any larger AND statement containing these assertions will also be false.

Now simplifying, if we let:

x = truth_value(S)

Then what is asserted at the bottom of the paradox is a set of linear equations:

x = 0
x = 1 - x
...

The "solution" implies that 1 = 0. By definition, this is a contradiction, so we can say the entire the complete package of assertions in (1) is false (in a system of binary truth values). The self-reference implicitly sets up an impossible set of constraints, so the reference is exactly what is false.

The interesting morals of the story, I think, are:

Generally a reference can bear a truth value. For example, suppose a wrecking crew shows up, and someone asks "Hey boss, where's the house we are supposed to demolish?" If the boss man points to the wrong house and says "That," it's precisely the reference that is false.

Self-references are fine, but they are assertions, as they implicitly set up an identity relation. Those assertions can be incorrect just like anything else. But in pointing a reference back to the statement containing the reference, the self-reference is effectively camouflaged so as to hardly look like an assertion at all.

Circular reasoning is a tricky issue to diagnose, and any kind of recursive definition will naturally lend itself to this problem.

3 comments:

KW said...

I noticed this one the other day:

Never end a sentence with the word "of."

Kevin said...

That is funny. The rule does exactly what it says not to do (even considering use vs mention). I heard a joke about "formal" English rules the other day: :)

"This is the grammar up with which I cannot put."

KW said...

this makes me blink