Truth-uncertainty and meaning-uncertainty

Epistemic status: just a half-baked idea, which ought to be developed into something more complete, but since I’m probably not going to do that anytime soon I figured I’d publish it now just to get it out there.

Consider a statement such as (1) below.

(1) Cats are animals.

I’m used to interpreting statements such as (1) using a certain method which I’m going to call the “truth-functional method”. Its key characteristic is, as suggested by the name, that statements are supposed to be interpreted as truth functions, so that a hypothetical being which knew everything (had perfect information) would be able to assign a truth value—true or false—to every statement. There are two problems which prevent truth values being assigned straightforwardly to statements in practice.

The first is that nobody has perfect information. There is always some uncertainty of the sort which I’m going to call “truth-uncertainty”. Therefore, it’s often (or maybe even always) impossible to determine a statement’s truth value exactly. All one can do is have a “degree of belief” in the statement, though this degree of belief may be meaningfully said to be “close to truth” or “close to falsth¹” or equally far from both. People disagree about how exactly degrees of belief should be thought about, but there’s a very influential school of thought (the Bayesian school of thought) which holds that degrees of belief are best thought about as probabilities, obeying the laws of probability theory. So, for a given statement and a given amount of available information, the goal for somebody practising the truth-functional method is to assign a degree of belief to the statement. At least inside the Bayesian school, there has been a lot of thought about how this process should work, so that truth-uncertainty is the relatively well-understood sort of uncertainty.
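As a concrete illustration of the Bayesian view, here is a minimal sketch, in Python, of a degree of belief being updated according to the laws of probability theory. The statement, the evidence and all the numbers are invented purely for illustration.

```python
# A minimal sketch of a degree-of-belief update under the laws of
# probability theory (Bayes' rule). The statement, the evidence and
# the numbers are invented purely for illustration.

def update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior degree of belief in a statement S after seeing evidence E:
    P(S|E) = P(E|S)P(S) / [P(E|S)P(S) + P(E|not-S)P(not-S)]."""
    p_evidence = p_evidence_if_true * prior + p_evidence_if_false * (1 - prior)
    return p_evidence_if_true * prior / p_evidence

# Start agnostic about a statement, then observe something that is much
# likelier if the statement is true than if it is false.
belief = update(prior=0.5, p_evidence_if_true=0.9, p_evidence_if_false=0.2)
print(round(belief, 3))  # 0.818
```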

But there’s a second problem, which is that often (maybe even always) it’s unclear exactly what the statement means. To be more exact (the preceding sentence was an exemplification of itself), when you hear a statement, it’s often unclear exactly which truth function the statement is supposed to be interpreted as; and depending on which truth function it’s interpreted as, the degree of belief you assign to it will be different. This is the problem of meaning-uncertainty, and it seems to be rather less well understood. Indeed, it’s probably not conventional to think about it as an uncertainty problem at all, in the same way as truth-uncertainty. In the scenario described above, where you hear somebody else make a statement carrying meaning-uncertainty, the typical response is to ask the statement-maker to clarify exactly what they mean (to operationalize, to use the technical term). There is of course an implicit assumption here that the statement-maker always has a unique truth function in their mind when they make their statement; meaning-uncertainty is a problem that exists only on the receiving end, due to imperfect linguistic encoding. If the statement-maker doesn’t have a unique truth function in mind, and they don’t care to invent one, then their statement is taken as content-free, and not engaged with.

I wonder if this is the right approach. My experience is that meaning-uncertainty exists not only on the receiving end, but very much on the sending end too; I very often find myself saying things without knowing quite what I would mean by them, but nevertheless feeling that they ought to be said, that making these statements somehow contributes to the truth-seeking process. Now, I could just be motivatedly deluded about the value of my utterances, but let’s run with the thought. One thing that makes me particularly inclined towards this stance is that sometimes I find myself resisting operationalizing my statements, as if something crucial were lost when I operationalize and restrict myself to just one truth function. If you draw the analogy with truth-uncertainty, operationalization is like just saying whether a statement is true or false, rather than giving a degree of belief. Now, one of the great virtues of the Bayesian school of thought (although it would be shared by any similarly well-developed school of thought on what exactly degrees of belief are) is arguably that, by making it clearer exactly what degrees of belief are, it makes people a lot more comfortable with thinking in terms of degrees of belief rather than just true vs. false, and thus with dealing with truth-uncertainty. Perhaps, then, what’s needed is some sort of well-developed concept of “meaning distributions”, analogous to degrees of belief, that will allow everybody to get comfortable dealing with meaning-uncertainty. Or perhaps this analogy is a bad one; that’s a possibility.
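To gesture at what a concept of “meaning distributions” might look like if the analogy does hold, here’s a toy sketch in Python. This formalization is just an illustration of the idea, not an established theory; the candidate interpretations and all the numbers are invented.

```python
# A toy "meaning distribution" for statement (1), "Cats are animals".
# Each candidate interpretation gets a probability of being the intended
# meaning, and its own degree of belief conditional on that meaning.
# The interpretations and all numbers are invented purely for illustration.

interpretations = [
    # (interpretation, P(this is the intended meaning), P(true | this meaning))
    ("every member of Felis catus is a biological animal", 0.6, 0.99),
    ("the everyday category 'cat' falls under the everyday category 'animal'", 0.3, 0.95),
    ("cats behave like wild, non-domesticated creatures", 0.1, 0.20),
]

# Marginalizing over meanings collapses everything into one overall degree
# of belief; the spread across interpretations is exactly the information
# that operationalizing down to a single truth function throws away.
overall = sum(p_meaning * p_true for _, p_meaning, p_true in interpretations)
print(round(overall, 3))  # 0.899
```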

Aside 1. Just as truth-uncertainty almost always exists to some degree, I’m fairly sure meaning-uncertainty almost always exists to some degree; operationalization is never completely done. There’s a lot of meaning-uncertainty in statement (1), for example, and it doesn’t seem to go away completely no matter how much you operationalize.

Aside 2. The concept of meaning-uncertainty doesn’t seem to me to be as necessarily tied up with the truth-functional method as that of truth-uncertainty is; one can imagine statements being modelled as some other sort of thing, but you’d still have to deal with exactly which instance of that other sort of thing any given statement was, so there’d still be meaning-uncertainty of a sort. For example, even if you see ought-statements, as opposed to is-statements, as non-truth-functional, you can still talk about the meaning-uncertainty of an ought-statement, if not its truth-uncertainty.

Aside 3. Another way of dealing with meaning-uncertainty might be to go around the problem, and interpret statements using something other than the truth-functional method.

Footnotes

¹ I’m inventing this word by analogy with “truth” because I get fed up with always having to decide whether to use “falsehood” or “falsity”.


On Justine Tunney and the role of the troll

(If you haven’t heard of Justine Tunney, this Gawker article provides a quick introduction. This essay isn’t about her in particular, though; her online persona is just an example of the kind of phenomenon I want to talk about.)

People on my Tumblr dashboard were talking about Justine Tunney a few days ago, and that got me thinking about the question, which I’ve often seen raised by people talking about her, of whether she’s doing some ‘long-game trolling’ or whether she really believes in the things she advocates for, like appointing Eric Schmidt CEO of America.

The answer is actually available straight from the horse’s mouth: she’s doing both of these things. But how is that possible? Isn’t trolling incompatible with seriously advocating for your beliefs, by definition?

To answer that, we have to talk about what ‘trolling’ means. Of course, the word has many senses; but the one we’re interested in is the sense in which Tunney used the word in her answer. From context, it’s pretty clear that she’s using it in the older sense, not the newer mass-media sense of ‘somebody who is mean to you over the Internet’.

Trolling, in this older sense, is an intriguing phenomenon. Of all the new words that have entered the English language to describe online behaviours—’spamming’, ‘flaming’, ‘doxxing’, etc.—’trolling’ has one of the most complex meanings. It is also remarkable in that it’s difficult to definitively state whether it has a positive or negative connotation. Spamming, flaming and doxxing are always considered negative behaviours by the people who use those words to describe them; on the other hand, there are people like those in this TV programme or this guy who proudly describe themselves as Internet trolls and seem to consider their behaviour ultimately pro-social. That said, trolling is never uncontroversially considered a pro-social behaviour. Trolling is often targeted at specific people, and the people targeted are, the vast majority of the time, not thankful for it. Even non-targeted trolling is usually considered annoying and non-constructive by many of the people who see it. Indeed, if you don’t annoy anybody with your trolling, what you’re doing won’t be considered trolling at all. Trolling is thus necessarily transgressive to a certain degree, and yet not wholly transgressive. It’s a behaviour which is on the borderline between transgressiveness and non-transgressiveness. It’s controversial. And those who value the behaviour consider the best trolls to be those who are maximally controversial.

I think borderline transgressiveness, the courting of controversy, is one of the key distinguishing characteristics of trolling. But another important one is intention. There are, in fact, some unfortunate people who go into online communities and somehow end up being perceived as incredibly annoying by around half of the community. It’s important that there is a significant segment of the community which doesn’t mind the person’s behaviour—if everyone is against them, the person will eventually get the message and either leave or change their behaviour. But when the community is divided, that’s when you get controversy. Not only will people argue with the unfortunate person, they will also argue with each other about how to treat the unfortunate person. It’s natural for people in this situation to wonder whether the unfortunate person is actually a troll; because if they were a troll, they would be aiming to create controversy, and the unfortunate person has certainly managed to do that. I’ve been a member of a few online communities over the years—not particularly many, probably, compared to some people, but a few—and I’ve seen some really extraordinary examples of this. There is one guy on a forum I used to visit (I still visit it, in fact, but the guy eventually got banned, because he just inadvertently annoyed people way too much, even though he never did anything unambiguously bad) who, if he had been trolling, was by far the best troll I have ever seen. Unfortunately, though I can’t definitively rule the possibility out, it always seemed more likely to me that that was just the way he was, which is really kind of sad. But the point is, when people on this forum talked about this guy, it was always in terms of whether he was a troll; nobody called him a troll just because he annoyed people. They recognised that intentionality was necessary.

So I think I’ll adopt this working definition of trolling: trolling is intentional borderline-transgressive online communication. Or, to put it another way, trolling is the act of tailoring one’s interactions via an online medium so as to court an ideally maximal amount of controversy. It can perhaps be compared to similarly borderline-transgressive social behaviours that don’t take place online, such as mockery, brawling or duelling.

Nothing in this definition says that trolling necessarily involves the advocacy of outrageous ideas, à la Justine Tunney. But such advocacy is one of the best ways of courting controversy, and so it is a common form of trolling; we might call it ideological trolling. And I think ideological trolling is an especially interesting kind of trolling, because I think it might be to some degree motivated differently from other kinds of trolling.

A simplistic way of understanding ideological trolling would be this: ideological trolls are motivated by the desire to be maximally controversial. Their choice of which ideas to advocate for is determined by this desire: they advocate for whichever ideas will be maximally controversial, regardless of whether they believe them or not. Since they aren’t motivated by truth-seeking, they aren’t, in general, intellectually interesting to people who seek the truth. Of course, their arguments will have to make a certain degree of sense—otherwise nobody would find them plausible and they wouldn’t succeed in courting controversy—but you would be better off hearing the same arguments from somebody else who is motivated by truth-seeking, and who would therefore be more incentivised to avoid irrational arguments and not overstate their case. If this is the correct way of understanding ideological trolling, then there’s not really any point in listening to Justine Tunney (there might be a point if nobody else were making similar arguments to hers, but there are neoreactionaries available to talk to on the Internet who make similar arguments, are definitely not trolls, and are about as non-irrational as anybody can ever be). I think that’s what makes people so interested in the question of whether Tunney is trolling or not. It’s a proxy for the question of whether it’s worth their time to give her ideas serious consideration.

But I think this way of understanding ideological trolling is too simplistic. It rests on a simplistic understanding of the role and effectiveness of rational argumentation. In this understanding, there is an assumption that the best way of getting everyone to reach the truth is for everyone to discuss things in a civil and respectful manner, state their points clearly, make it clear how the conclusions logically follow from the premises, be honest about what they believe and not believe, etc. And I think this is not the case. I’m afraid this is going to be the weakest part of this essay; I haven’t tried to develop a proper critique of this assumption yet and it would probably take me too long to get to one that I’d be satisfied with including. But in working towards this critique, here are some things that I would point to:

  • Reasoned argumentation may not always be the best way of persuading people.
  • It is often difficult to articulate exactly what you believe; you may need to search through different potential means of articulation until you find one that works for you.
  • This difficulty of articulation is to some extent built-in and unavoidable, because of the nebulosity of all concepts.
  • It is sometimes difficult to know exactly what you believe (again, this is related to nebulosity). But there may still be value in advocating for beliefs you’re not sure you have; after all, the fact that you can’t definitively rule them out is some evidence that they might be true, even if actually being sure of them would be stronger evidence.
  • Communities are susceptible to pressures for ideological conformity which are harmful to the goal of truth-seeking. Deliberate exaggeration and devil’s-advocate-playing can help mitigate these pressures.

The last point is, I think, especially relevant in the case of ideological trolling. It means that ideological trolls can serve a helpful, unique role within an online community whose members are motivated by truth-seeking: they’re the people who push the boundaries of the community’s Overton window. That’s probably why, in many of the thoughtful online communities I’ve been involved in, there have been one or two trolls. They’re needed. But pushing the Overton window isn’t the only thing trolls are good for; the ideological troll role also facilitates the introduction into the discourse of ideas, and refinements of ideas, that may be difficult to articulate. On a personal level, some people may find that they are more effective participants in the discourse in the ideological troll role.

(I don’t know if similar benefits can be claimed for other kinds of trolls; I haven’t thought about it too much. Some other kinds of trolls probably are overwhelmingly motivated by the desire for negative attention, and are not really engaging in any pro-social behaviour. My impression is that the ideological trolls are generally the only ones that tend to acquire a degree of actual respect within a community, which fits in with this.)

To go back to the question I started this essay with: I suspect that the distinction between the things she really believes and the things she advocates merely to be provocative isn’t important to Justine Tunney. She knows about the benefits of ideological trolling; she knows about the nebulosity of truth, the difficulty of drawing a boundary around what you know, and the near-certain imperfection of any one person’s or community’s set of beliefs; and she figures that it’s better to put the ideas out there than to keep them to herself merely because they might be wrong. Of course, this is just speculation; I haven’t talked to Tunney. But I think it’s a plausible account of what motivates ideological trolls like her, and if it’s not quite the right description for her, it might be the right description for others.

And to wrap things up, I have a prescription to make. If, while you’ve been reading this, you’ve thought that the ideological troll role sounds like one that could work for you… then you have my permission and encouragement to go forth and troll! In my opinion, the world could do with more people in it like Justine Tunney.