Truth-uncertainty and meaning-uncertainty

Epistemic status: just a half-baked idea, which ought to be developed into something more complete, but since I’m probably not going to do that anytime soon I figured I’d publish it now just to get it out there.

Consider a statement such as (1) below.

(1) Cats are animals.

I’m used to interpreting statements such as (1) using a certain method which I’m going to call the “truth-functional method”. Its key characteristic is, as suggested by the name, that statements are supposed to be interpreted as truth functions, so that a hypothetical being which knew everything (had perfect information) would be able to assign a truth value—true or false—to every statement. There are two problems which prevent truth values being assigned straightforwardly to statements in practice.

The first is that nobody has perfect information. There is always some uncertainty of the sort which I’m going to call “truth-uncertainty”. Therefore, it’s often (or maybe even always) impossible to determine a statement’s truth value exactly. All one can do is have a “degree of belief” in the statement, though this degree of belief may be meaningfully said to be “close to truth” or “close to falsth¹” or equally far from both. People disagree about how exactly degrees of belief should be thought about, but there’s a very influential school of thought (the Bayesian school of thought) which holds that degrees of belief are best thought about as probabilities, obeying the laws of probability theory. So, for a given statement and a given amount of available information, the goal for somebody practising the truth-functional method is to assign a degree of belief to the statement. At least inside the Bayesian school, there has been a lot of thought about how this process should work, with the result that truth-uncertainty is the relatively well-understood sort of uncertainty.
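
To make the probability machinery concrete, here is a minimal sketch in Python of a single Bayesian update. Everything in it is invented for illustration (the prior and the two likelihoods are arbitrary numbers, not anything from the discussion above); it just shows how a degree of belief, treated as a probability, gets revised by evidence via Bayes’ rule.

```python
# A degree of belief, in the Bayesian school, is a probability that
# gets revised on new evidence via Bayes' rule.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior degree of belief P(statement | evidence), given a prior
    and the probability of the evidence under each truth value."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1.0 - prior)
    return numerator / denominator

# Arbitrary illustrative numbers: start 80% confident, then observe
# evidence that is three times likelier if the statement is true.
belief = bayes_update(prior=0.8, p_evidence_if_true=0.9,
                      p_evidence_if_false=0.3)
print(belief)  # 0.923... -- closer to truth, but still short of certainty
```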

But there’s a second problem, which is that often (maybe even always) it’s unclear exactly what the statement means. To be more exact (the preceding sentence was an exemplification of itself), when you hear a statement, it’s often unclear exactly which truth function the statement is supposed to be interpreted as; and depending on which truth function it’s interpreted as, the degree of belief you assign to it will be different. This is the problem of meaning-uncertainty, and it seems to be rather less well-understood. Indeed, it’s probably not conventional to think about it as an uncertainty problem at all in the same way as truth-uncertainty. In the scenario above, where you hear a statement carrying meaning-uncertainty being made by somebody else, the typical response is to ask the statement-maker to clarify exactly what they mean (to operationalize, to use the technical term). There is of course an implicit assumption here that the statement-maker will always have a unique truth function in their mind when they make their statement; meaning-uncertainty is a problem that exists only on the receiving end, due to imperfect linguistic encoding. If the statement-maker doesn’t have a unique truth function in mind, and they don’t care to invent one, then their statement is taken as content-free, and not engaged with.

I wonder if this is the right approach. My experience is that meaning-uncertainty exists not only on the receiving end, but also very much on the sending end; I very often find myself saying things without knowing quite what I would mean by them, but nevertheless feeling that they ought to be said, that making these statements does somehow contribute to the truth-seeking process. Now I could just be motivatedly deluded about the value of my utterances, but let’s run with the thought. One thing that makes me particularly inclined towards this stance is that sometimes I find myself resisting operationalizing my statements, as if something crucial would be lost by operationalizing and restricting myself to just one truth function. To draw the analogy with truth-uncertainty: operationalization is like just saying whether a statement is true or false, rather than giving a degree of belief. Now one of the great virtues of the Bayesian school of thought (though it would be shared by any similarly well-developed school of thought on what exactly degrees of belief are) is arguably that, by making clear exactly what degrees of belief are, it makes people a lot more comfortable thinking in terms of degrees of belief rather than just true vs. false, and thus dealing with truth-uncertainty. Perhaps, then, what’s needed is some sort of well-developed concept of “meaning distributions”, analogous to degrees of belief, that would allow everybody to get comfortable dealing with meaning-uncertainty. Or perhaps this analogy is a bad one; that’s a possibility.
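
As a first stab at what a “meaning distribution” could be (this is only a sketch of one possible formalization, with the candidate readings and all numbers invented for illustration): treat a heard statement as a probability distribution over candidate truth functions, keep a conditional degree of belief under each one, and marginalize to get an overall degree of belief, instead of collapsing to a single interpretation.

```python
# A hypothetical "meaning distribution" for statement (1), "Cats are animals":
# a probability for each candidate operationalization (truth function),
# paired with a degree of belief in the statement under that reading.
# The readings and all numbers are invented purely for illustration.
meaning_distribution = {
    # reading: (P(this is the intended meaning), P(true | this meaning))
    "members of the taxon Animalia include cats": (0.7, 0.99),
    "'animals' in the everyday sense (excluding humans) include cats": (0.2, 0.95),
    "cats behave like (stereotypically wild) animals": (0.1, 0.50),
}

# Marginalize over meanings instead of operationalizing:
# overall belief = sum of P(meaning) * P(true | meaning).
overall = sum(p_meaning * p_true for p_meaning, p_true
              in meaning_distribution.values())
print(round(overall, 3))  # 0.933 with these invented numbers

# Operationalizing would collapse the distribution onto one reading,
# discarding whatever weight the alternatives carried.
```

On this picture, resisting operationalization is just refusing to throw away the rest of the meaning distribution.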

Aside 1. Just as truth-uncertainty almost always exists to some degree, I’m fairly sure meaning-uncertainty almost always exists to some degree; operationalization is never completely done. There’s a lot of meaning-uncertainty in statement (1), for example, and it doesn’t seem to go away completely no matter how much you operationalize.

Aside 2. The concept of meaning-uncertainty doesn’t seem to me to be as necessarily tied up with the truth-functional method as that of truth-uncertainty; one can imagine statements being modelled as some other sort of thing, but you’d still have to deal with exactly which example of that other sort of thing any given statement was, so there’d still be meaning-uncertainty of a sort. For example, even if you don’t see ought-statements (as opposed to is-statements) as truth-functional, you can still talk about the meaning-uncertainty of an ought-statement, if not its truth-uncertainty.

Aside 3. Another way of dealing with meaning-uncertainty might be to go around the problem, and interpret statements using something other than the truth-functional method.

Footnotes

¹ I’m inventing this word by analogy with “truth” because I get fed up with always having to decide whether to use “falsehood” or “falsity”.
