Propositional Logic: The Logic of Statements
The next system of logic that you’ll learn in a symbolic logic class is propositional logic.
(Sometimes this is the FIRST system you’ll learn since many textbooks that teach predicate logic will bypass Aristotle’s categorical logic altogether.)
Propositional logic is sometimes called “sentential logic” or “statement logic”, since it deals with logical relationships between statements taken as wholes.
More specifically, propositional logic studies ways of joining and/or modifying entire propositions (or statements or sentences … these are all synonyms for current purposes) to form more complex propositions, as well as the logical relationships and properties that arise from these ways of combining or altering statements.
In propositional logic, the simplest statements are treated as indivisible units, and this makes it fundamentally different from Aristotelian logic. This is also the key to understanding the logical features of the fragment of natural language that propositional logic is capable of modeling.
To give an example, in Aristotelian logic we would represent a statement like “All humans are mortal” with this expression: “All H are M”. The capital letters represent classes of objects: H = the class of all humans, and M = the class of all mortal beings.
These symbols don’t represent complete propositions; they represent parts of a proposition. It makes no sense to ask whether H, the class of all humans, is true or false. H doesn’t assert anything.
In propositional logic, the goal is to analyze language at the level of propositions taken as wholes. So the proposition “All humans are mortal” would be symbolized using a single capital letter that stands for the whole proposition (you could pick any letter you want; in this case either “H” or “M” would be fine). This proposition has a truth value — it makes an assertion that could be true or false.
This would be a very simple logic if all we had were simple propositions, but in natural language we’re able to combine simple propositions to make more complex propositions (called “compound” propositions), like this:
“Either I’ll pay for the meal and you’ll pay for drinks, or, if John shows up, he’ll pay for both.”
The first key observation in propositional logic is that, even though this complex sentence is composed of several distinct propositions, the entire sentence, taken as a whole, still has a single truth value. The whole sentence can be treated as a proposition which is either true or false.
Here’s the next key observation: the truth value of the whole sentence is determined by the truth values of the individual component sentences, together with the rules for interpreting logical connectives like “and”, “or” and “if…then…”.
Here’s a symbolization for this sentence:
“(M and D) or [if J then (P and Q)]”
Where
M = “I’ll pay for the meal.”
D = “You’ll pay for drinks.”
J = “John shows up.”
P = “John pays for the meal.”
Q = “John pays for drinks.”
(Sorry those last two letters are kind of arbitrary, but really they’re all arbitrary. You just need a unique symbol for each unique proposition.)
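If it helps to see the idea concretely, here is a minimal sketch in Python of how the truth value of the whole sentence is computed from the truth values of its parts. The helper name `implies` and the particular truth-value assignment are just illustrative choices, and “if…then” is read here as the material conditional, which is how classical propositional logic treats it.

```python
def implies(p: bool, q: bool) -> bool:
    # Material conditional: "if p then q" is false only when p is true and q is false.
    return (not p) or q

# One possible assignment of truth values to the simple propositions.
M = True    # "I'll pay for the meal."
D = False   # "You'll pay for drinks."
J = True    # "John shows up."
P = True    # "John pays for the meal."
Q = True    # "John pays for drinks."

# "(M and D) or [if J then (P and Q)]"
whole = (M and D) or implies(J, P and Q)
print(whole)  # True under this assignment
```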
What you learn in propositional logic is the rules for determining the truth values of the following compound claims …
- not-p
- p and q
- p or q
- If p then q
… where p and q can be any well-formed simple or compound proposition.
For example, “p and q” is true just in case BOTH p and q are true; if either p or q is false, then the statement “p and q” is false. That’s the rule for evaluating the truth values of conjunctions, statements of the form “p and q”.
(In fact, this rule is what DEFINES the meaning of the word “and” in propositional logic. This is an interesting semantic fact to ponder.)
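Here is a rough sketch of those truth-functional definitions laid out as a table, with “if p then q” read as the material conditional (the table-printing code is just my own illustration; the truth conditions are the standard classical ones):

```python
from itertools import product

# The classical truth-functional definitions of the four connectives,
# with "if p then q" read as the material conditional.
headers = ["p", "q", "not-p", "p and q", "p or q", "if p then q"]
print("  ".join(f"{h:<11}" for h in headers))
for p, q in product([True, False], repeat=2):
    row = [p, q, not p, p and q, p or q, (not p) or q]
    print("  ".join(f"{str(v):<11}" for v in row))
```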
With these rules you can analyze a compound sentence like the one above and determine the truth value of the sentence for any combination of truth values of the component sentences.
This is what it means to say that this logical system is a truth-functional logic. The truth of the whole proposition is a function of the truth of the individual component propositions. In other words, once you fix the truth values of the parts, you automatically fix the truth value of the whole.
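To make that concrete for the restaurant sentence above, a brute-force sketch like the following (again, just my own illustration) runs through all 2^5 = 32 ways of assigning truth values to M, D, J, P and Q; each assignment fixes exactly one truth value for the whole sentence:

```python
from itertools import product

def implies(p, q):
    return (not p) or q  # material conditional

def whole(M, D, J, P, Q):
    # "(M and D) or [if J then (P and Q)]"
    return (M and D) or implies(J, P and Q)

# Every one of the 32 assignments to the parts determines the whole's truth value.
for values in product([True, False], repeat=5):
    print(values, "->", whole(*values))
```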
In propositional logic you also learn how to construct proofs using various rules of inference (from premises p and q I can validly infer a conclusion r) and rules of replacement (in a proof I can always replace p with q when p and q are logically equivalent propositions).
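To give one concrete sense of what “logically equivalent” amounts to here, two forms are equivalent when they agree in truth value on every assignment. The sketch below checks this for one standard replacement pattern, De Morgan’s rule (the choice of example is mine; the paragraph above only gestures at replacement rules in general):

```python
from itertools import product

# De Morgan's rule: "not (p and q)" is logically equivalent to "(not p) or (not q)".
# Two forms are equivalent when they agree in truth value on every assignment.
equivalent = all(
    (not (p and q)) == ((not p) or (not q))
    for p, q in product([True, False], repeat=2)
)
print(equivalent)  # True -- so in a proof one form may replace the other
```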
Okay, my skeptical linguistics major is getting impatient. What does any of this tell us about natural language?
Well, let’s rephrase: what is the fragment of natural language whose logical structure propositional logic is able to model?
Answer: It’s the fragment of natural language where the semantics of propositions, and the logical relations between propositions, are determined by the truth-functional definitions of the logical connectives.
Not all of natural language behaves this way, but fragments of it certainly do. We can see, for example, that the following is a deductively valid inference:
“We know that candles need oxygen to burn. But there’s no oxygen in that room, so the candle must not be burning now.”
We can represent the logical structure of this argument like so:
1. If (the candle is burning) then (there’s oxygen in the room)
2. There is no oxygen in the room.
Therefore, the candle is not burning.
And this is one of the basic rules of inference in propositional logic (called “modus tollens” by the medieval scholars who gave Latin names to many of these common logical rules):
1. If p then q
2. not-q
Therefore, not-p
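You can check the validity of this form by brute force: look at every assignment of truth values to p and q, and confirm there is no case where both premises are true and the conclusion is false. (The little script below is just my own sketch of that check.)

```python
from itertools import product

def implies(p, q):
    return (not p) or q  # material conditional

# A counterexample would be an assignment where both premises are true
# ("if p then q" and "not-q") but the conclusion ("not-p") is false.
counterexamples = [
    (p, q)
    for p, q in product([True, False], repeat=2)
    if implies(p, q) and (not q) and p  # conclusion "not-p" is false exactly when p is true
]
print(counterexamples)  # [] -- no counterexamples, so modus tollens is valid
```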
So, there are parts of natural language where the logical structure of the language, and our reasoning within the language, can be accurately modeled using propositional logic.
NOTE: Not All Language is Truth-Functional
It’s very clear that not ALL of natural language can be modeled this way. For one thing, not all of natural language is truth-functional in the way that propositional logic requires.
For example, let’s assume that I mistakenly believe that the Seahawks beat the Patriots in the Super Bowl at the end of the 2014 season. That’s a false belief; the Patriots actually won.
Now let’s also assume that I correctly believe that the moon is not made of green cheese.
The expression “S believes that P” is not truth-functional. We can see this by comparing the following two sentences.
(1) “Kevin believes that (the Seahawks beat the Patriots in the Super Bowl).”
(2) “Kevin believes that (the moon is made of green cheese).”
In both cases, the component statement in parentheses is FALSE, right?
But by assumption, statement (1) — the statement about what I believe — is TRUE, while statement (2) is FALSE (I don’t believe the moon is made of green cheese).
Why does this show that “believes that p” is not truth-functional? Because if it were, then the truth value of the whole statement would be uniquely determined by the truth value of the component statement.
But here we have two cases where the truth value of the component is the same (they’re both false), but the truth value of the whole is different (one is true, the other is false).
So, the truth of statements involving “believes that p” is NOT determined by the truth of p.
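If you want to see the shape of that argument spelled out, here is a small sketch (the encoding is mine). If “Kevin believes that p” were truth-functional, some function from the truth value of p to the truth value of the whole belief statement would have to reproduce both examples; no such function exists, because the same input (false) would need two different outputs:

```python
# The two examples from the text, as (truth value of p, truth value of "Kevin believes that p").
observations = [
    (False, True),   # p = "the Seahawks beat the Patriots..." (false, but believed)
    (False, False),  # p = "the moon is made of green cheese" (false, not believed)
]

# Every function from {True, False} to {True, False}, written as a lookup table.
candidate_functions = [
    {True: out_true, False: out_false}
    for out_true in (True, False)
    for out_false in (True, False)
]

fits = [f for f in candidate_functions
        if all(f[p] == whole for p, whole in observations)]
print(fits)  # [] -- no truth function reproduces both observations
```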
This is just one of many examples where the semantics of expressions in natural language is NOT properly modeled by the semantics of classical propositional logic.
And it reinforces my point that formal languages like propositional logic can model aspects, or fragments, of the logical structure of natural language, but no single system can, or even attempts to, model ALL of natural language.
Still, we can use artificial languages like propositional logic to investigate and learn about the logical properties of natural language, by comparing the semantics of these formal languages with the semantics of natural language.