Rating: 8.5/10.
This is an introductory textbook on compositional semantics, which uses higher-order logic to represent the meaning of words when they are combined. This is different from lexical semantics, which is concerned with the meaning of individual words. Below are my notes.
Ch1: Lexical Meaning
Semantics deals with literal meaning, which excludes hidden / metaphorical meaning in poetry, and pragmatic implicatures. Sometimes literal meaning is difficult to process, as in center embedding, and there are cases where the literal meaning is the opposite of what it appears to mean, like "no head injury is too trivial to ignore".
Ch2: Lexical Semantics
Definition of what counts as a word is complicated by compound words, homonymy, polysemy. Review of sense relations (synonymy, hyponymy, meronymy, etc).
Ch3: Structural Ambiguity
A structural ambiguity is when the same sequence of words can be interpreted in two different ways. This excludes lexical ambiguities, so none of the words may be ambiguous in sense. What counts as a structural ambiguity depends on the syntactic framework we use, eg: whether it encodes semantic relations in structure.
One type of structural ambiguity is scope ambiguity, like "the doctor didn't leave because he was angry". Def: A takes scope over B if B is in the structural domain of A (or in generative terms, A c-commands B). Eg: in "10−(3×2)", the "−" takes scope over the "×".
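The arithmetic example can be made concrete with a small sketch (my own illustration, not from the book): operators are nodes in a tree, and an operator takes scope over everything in the subtree below it.

```python
def evaluate(tree):
    """Evaluate a nested (op, left, right) tuple bottom-up."""
    if isinstance(tree, int):
        return tree
    op, left, right = tree
    l, r = evaluate(left), evaluate(right)
    return l - r if op == "-" else l * r

# "-" takes scope over "x": the multiplication sits in its structural domain.
minus_wide = ("-", 10, ("*", 3, 2))   # 10 - (3 * 2)
times_wide = ("*", ("-", 10, 3), 2)   # (10 - 3) * 2: the other scoping
```

The two trees evaluate to 4 and 14: same string of symbols, different structures, different meanings.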
In German, “Beide Studenten kamen nicht” (“both students didn’t come”) is ambiguous, but this can’t be explained using the above scope rule without violating German syntax. Instead, explain it by movement: the verb has moved into V2 position. Reconstruction undoes this movement; then “nicht” shows the expected scope ambiguity in the structure. English doesn’t have V2 word order in most sentences, but it does in wh-questions, which can exhibit the same type of ambiguity, like “How many dogs did everyone feed?”
Not all ambiguities can be explained by movement, for example, “a student read every book”. For scoping rules to work, you need a structure like (((a student) read) every book), but this analysis doesn’t work syntactically. Instead, propose a Logical Form (LF) that’s not tied to the surface or input form, with LF-movement rules like Quantifier Raising (QR), which produces (every book ((a student) read _)).
Another ambiguity is “I am looking for a book about dogs”: this can take a specific / transparent reading, or a non-specific / opaque reading. LF can move “a book” to the outer scope to give the transparent reading. In summary, studying ambiguities is a good way to test theories of semantics.
Ch4: Introducing Extensions
Frege’s Principle of Compositionality says that the meaning of an expression is derivable from the meaning of its immediate constituents and how they’re put together. The extension of an expression is what it refers to in the real world.
What are the extensions of various words? The easiest cases are proper names and definite descriptions (eg: “the president of the United States”): their extensions are the real-world entities they refer to. The entity may not exist (eg: “the king of Canada”), but ignore this. The extension of a common noun is the set of all items it applies to, eg: the set of all tables; ignore prototype theory, which says this set is actually fuzzy. For verbs, the extension is the set of ordered pairs (or triples if the verb is ditransitive, etc) that the verb applies to.
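A toy model along these lines (all entities and facts here are invented for illustration):

```python
# Extensions in a made-up model.
ext_obama = "Obama"                      # proper name: a single entity
ext_table = {"t1", "t2", "t3"}           # common noun: set of all tables
ext_sleep = {("Paul",), ("Mary",)}       # intransitive verb: set of 1-tuples
ext_love = {("Paul", "Mary")}            # transitive verb: set of ordered pairs
ext_give = {("Paul", "Mary", "book1")}   # ditransitive verb: set of triples
```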
Generalizing, a sentence has zero valency (because all of its slots are filled), so Frege’s Generalization says the extension of a sentence is a set of 0-tuples, either {()} or {}, which is equivalent to its truth value.
In what order are the elements of the tuple? It’s determined by the syntactic hierarchy: the outermost element is first in the tuple. A question is whether all languages have the same argument ordering. On the surface it appears not, but some theories propose a universal underlying ordering and explain different surface forms using movement.
Alternatively, define thematic relations, which pair each argument with a role (agent, experiencer, etc), so the order is no longer important. Linking Theory is concerned with how thematic roles correspond to syntactic positions.
Ch5: Composing Extensions
Denote the extension of A by [A]. The meanings of the logical operators and / or / not are defined by boolean truth tables, so the extension of “A or B” is [A] or [B].
For simple sentences like “Paul sleeps”, the extension is 1 iff the tuple (Paul) is an element of the extension [sleep], the set of all things sleeping at the moment. For transitive verbs like “Paul loves Mary”, we need an extension for every constituent, including “loves Mary”; its extension is the set of all individuals that love Mary.
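In a made-up model, the composition steps look like this:

```python
sleep = {"Paul", "Mary"}                      # things sleeping right now
love = {("Paul", "Mary"), ("Mary", "John")}   # ordered pairs

# [Paul sleeps] = 1 iff Paul is in [sleep]
paul_sleeps = 1 if "Paul" in sleep else 0

# [loves Mary] = the set of individuals x such that (x, Mary) is in [love]
loves_mary = {x for (x, y) in love if y == "Mary"}
paul_loves_mary = 1 if "Paul" in loves_mary else 0
```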
Some nouns like “capital” are functional, so “the capital of Italy” extracts the unique tuple (Italy, Rome) from the extension [capital]. The word “the” forces the result to be a singleton. It’s not quite this easy, but an approximate analysis treats [the] as a function whose domain is all singletons. The Saxon genitive (“Paul’s place”) similarly extracts a single referent, coercing the noun “place” into a functional noun so that the result is unique.
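A sketch of the functional-noun analysis, with an invented model:

```python
capital = {("Italy", "Rome"), ("France", "Paris")}   # functional noun: pairs

def the(candidates):
    """[the]: defined only on singletons; returns the unique member."""
    if len(candidates) != 1:
        raise ValueError("presupposition failure: referent not unique")
    return next(iter(candidates))

capital_of_italy = the({city for (country, city) in capital
                        if country == "Italy"})
```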
Attachment ambiguity can be handled in this theory by showing that the two parses correspond to two different formulas, then constructing a model where the formulas give different results. Eg: “the woman and the children from Berlin” can be parsed ((A ∪ B) ∩ C) or (A ∪ (B ∩ C)), with A = the woman, B = the children, C = things from Berlin, and one can find values of A, B, C that make them differ. The extension of a plural NP can be a set of sets, and “the children” selects the maximal set. This allows “I have children” to be true if you have only 1 child, but this is a pragmatic consideration.
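A model where the two parses come apart (sets invented here; NP coordination treated as union, modification as intersection):

```python
woman = {"w"}               # A: the woman (not from Berlin in this model)
children = {"c1", "c2"}     # B: the children
from_berlin = {"c1", "c2"}  # C: things from Berlin

reading1 = (woman | children) & from_berlin  # [the woman and the children] from Berlin
reading2 = woman | (children & from_berlin)  # the woman, and [the children from Berlin]
# reading1 excludes the woman; reading2 includes her: a genuine ambiguity.
```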
Ch6: Quantifiers
Want to handle sentences like “every student snored” or “no fly snored”. Intuitively, “every student snored” is true if the set of students is a subset of snoring things, but need to satisfy Principle of Compositionality. Can do this by defining [every], [a], [no] in clever ways so that they resolve to the correct set operations when plugged in. Such words (whose extensions represent logical operations on sets) are called logical constants.
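One way to spell out the “clever definitions” is to treat each determiner as a relation between two sets, the noun’s extension and the VP’s extension (toy model, my own sketch):

```python
def every(A, B):
    return A <= B            # [every]: restrictor is a subset of the scope

def a(A, B):
    return bool(A & B)       # [a]: the two sets overlap

def no(A, B):
    return not (A & B)       # [no]: the two sets are disjoint

students = {"s1", "s2"}
flies = {"f1", "f2"}
snorers = {"s1", "s2", "d1"}
```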
A problem: with this technical definition, composing a quantifier NP with a VP checks the subset relation in the opposite direction from non-quantifier NPs like “Paul”. One fix is type shifting, or Montague Lifting: LIFT(Paul) = {X : Paul ∈ X}, so that simplification gives the correct truth value. Another solution (type-driven interpretation) is to attach types to extensions so that they’re either plain sets or characteristic functions from sets to 1/0; function application then reduces to the correct subset check in both cases.
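Montague Lifting, sketched with characteristic functions (Python sets can’t contain plain sets, so {X : Paul ∈ X} is represented as a predicate on sets; the model is invented):

```python
def lift(individual):
    """LIFT(Paul) = {X : Paul in X}, as a characteristic function."""
    return lambda prop: individual in prop

def every(A):
    return lambda prop: A <= prop

snore = {"Paul", "s1", "s2"}
students = {"s1", "s2"}

# After lifting, names and quantifier NPs combine with the VP the same way:
paul_snores = lift("Paul")(snore)
every_student_snores = every(students)(snore)
```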
Quantifiers may occur in object position, like “Paul loves every girl”. This can be handled by Quantifier Raising, or by passivizing to the equivalent “every girl is loved by Paul” before applying the composition principles. The copula “is” presents some challenges: in the identity use, “Biden is the president of the USA”, it has meaning, with [is] = the set of all pairs (x, x), whereas in predicative uses like “Paul is a nerd”, the words “is a” are semantically vacuous.
Ch7: Propositions
Truth-conditional semantics is insufficient because sentences like “Toronto is bigger than Ottawa” would have the same meaning as “New York is bigger than Boston”, since they’re both true. Instead, these sentences have intensional content and convey a proposition, defined as the set of possible worlds in which the sentence is true. The set of all possible worlds is called the Logical Space, and a sentence cuts this space into two parts: one where the sentence is true and one where it’s false.
Under this framework, we define the logical connectives and / or / not, along with material implication, as operations on propositions (thus, operations on sets of possible worlds). An implication is valid if it’s a tautology (true in all possible worlds).
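With propositions as sets of worlds, the connectives become set operations (four-world toy logical space, invented here):

```python
worlds = {"w1", "w2", "w3", "w4"}   # the logical space
p = {"w1", "w2"}                    # worlds where p is true
q = {"w1", "w3"}

p_and_q = p & q
p_or_q = p | q
not_p = worlds - p
p_implies_q = (worlds - p) | q      # material implication

def valid(prop):
    return prop == worlds           # tautology: true in every world
```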
However, in natural language, truth-functional semantics doesn’t fully explain the meaning of and / or. “S1 and S2” sometimes entails S1 happening before S2, so it is not equivalent to “S2 and S1”. If-then statements and the word “because” rely on causation, which doesn’t exist in truth-functional semantics. The word “or” is interpreted as inclusive in some contexts and exclusive in others; to explain this, you must apply pragmatics.
Ch8: Intensions
The intension of A is a function from possible worlds to extensions: it maps each world to [A] as evaluated in that world. For sentences, this effectively identifies a set of worlds, but without committing to any specific universe. An individual like “Paul” has a constant extension in every world; it is a rigid designator.
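A two-world sketch of intensions as world-to-extension mappings (names and worlds invented):

```python
worlds = ["w1", "w2"]

int_president = {"w1": "Ann", "w2": "Bob"}    # extension varies by world: not rigid
int_paul = {w: "Paul" for w in worlds}        # same in every world: rigid designator

def rigid(intension):
    """True iff the intension picks out the same extension in every world."""
    return len(set(intension.values())) == 1
```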
Recall sentences like “John knows that Toronto is bigger than Waterloo”, where the meaning of the complement is not truth-functional. To handle this, the meaning of such a complement is taken to be an intension rather than an extension. This applies to any verb / adjective that takes a sentence as complement.
How to handle sense relations like hypernymy? Tempting to say, eg, that the extension of “woman” is a subset of the extension of “human”, but this doesn’t work for “professor” and “adult”: even if all professors happen to be adults in the actual world, that accidental subset relation doesn’t license the entailment “John is a professor” -> “John is an adult”. Instead, hypernymy requires that the extensions are subsets in all possible worlds, which is an intensional relation.
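The subset-in-all-worlds criterion, sketched (all extensions invented; in w2 there is a non-adult prodigy professor, which breaks the accidental subset):

```python
def hyponym(int_a, int_b):
    """A is a hyponym of B iff [A]^w is a subset of [B]^w in every world."""
    return all(int_a[w] <= int_b[w] for w in int_a)

dog = {"w1": {"d1"}, "w2": {"d2"}}
animal = {"w1": {"d1", "c1"}, "w2": {"d2"}}
professor = {"w1": {"p1"}, "w2": {"p1", "prodigy"}}
adult = {"w1": {"p1"}, "w2": {"p1"}}
```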
What about lexical semantics? So far, we’ve defined the extension of “dog” to be the set of dogs, but this doesn’t let us identify which things are dogs in a new world. We can partially characterize the meaning by defining hypernymy relations, like “dog” being a subset of “animal”. Still, this doesn’t allow us to infer “X is a dog” -> “X is not a cat”. There’s no clear boundary between linguistic and world knowledge.
Hintikka semantics deals with beliefs and knowledge, in sentences like “Mary thinks that John is in Rome”. Here, Mary’s doxastic state is the set of worlds compatible with her beliefs, and the sentence is true iff this set is contained in the set of worlds where John is in Rome. A similar analysis works for “want” and “know”, with additional properties needed to allow inferences involving knowledge (“Mary knows X” implies X).
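The Hintikka analysis, sketched: an attitude sentence is true iff the proposition holds throughout the subject’s doxastic (or epistemic) state. Worlds and sets here are invented.

```python
john_in_rome = {"w1", "w2", "w3"}   # proposition: worlds where John is in Rome
dox_mary = {"w1", "w2"}             # worlds compatible with Mary's beliefs

# "Mary thinks that John is in Rome": her doxastic state entails the proposition
mary_thinks = dox_mary <= john_in_rome

def knows(epistemic_state, prop, actual_world):
    """Knowledge is factive: the actual world must be in the epistemic state,
    so "Mary knows X" implies X is true at the actual world."""
    return actual_world in epistemic_state and epistemic_state <= prop
```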
Historically, both extension and intension have been associated with meaning, but extension is problematic because you can know the meaning of an expression without knowing its extension. Typically, linguists assume extensional meaning unless there are problems with substitution, and only then invoke intensions.
Ch9: Presuppositions
Analysis of “the present king of France”: the extension is empty since France does not have a king, but the intension uses a technical trick to handle both cases (worlds where the extension is empty and worlds where it’s nonempty) in one expression.
The word “the” presupposes that the referent is unique (at least one exists and there is at most one), but what if it’s not unique? Naive analysis says that when a presupposition is not met, the statement is false, but more typically you would say the statement is misguided, which is different from false.
Some verbs like “know” are veridical: “I know X” implies X is true. The negation “I didn’t know X” also implies X is true. The problem: combined, these would imply X is always true. The solution is to allow truth value gaps, where sentences may be true, false, or undefined (when a presupposition fails). This requires modifying the previous theory slightly to allow three-way results.
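Truth value gaps, sketched with None as the third value (my own illustration):

```python
def NOT(v):
    """Negation preserves undefinedness: a gap stays a gap."""
    return None if v is None else 1 - v

def knows(p_true, believes_p):
    """[x knows p]: undefined unless p is true; then 1 iff x believes p."""
    if not p_true:
        return None   # presupposition failure: no truth value
    return 1 if believes_p else 0

# Both "I know X" and "I didn't know X" presuppose X; when X is false,
# neither sentence is true or false -- both are undefined.
```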
Another subtle problem is analyzing statements that have a presupposition and also an assertion, like “someone stopped drinking”. If we say it means two propositions, “someone used to drink” and “someone doesn’t drink now”, it’s hard to formally force these to refer to the same person.
Presuppositions can convey information that’s not already known; in this case the listener adjusts the common ground to accommodate the presupposition.