Monday, November 8, 2021

What is Misinformation?

    Misinformation seems bad, and there seems to be an awful lot of it. Because of that, we might be sympathetic to some of the following claims:

Critical thinking classes should teach students how to identify misinformation.

People who spread misinformation, even unintentionally, are blameworthy.

There ought to be financial, legal, or social consequences for knowingly spreading misinformation.

    And if we want to teach people how to identify misinformation, morally evaluate those who spread it, or prescribe consequences for spreading it, it would be useful to have a clear idea of what misinformation is. A reasonable starting point would be that misinformation is just inaccurate information, and so:

(M1)     A piece of information I is misinformation just in case I is false.

    This is appealingly simple, and it gets many paradigmatic cases of misinformation right. A libelous headline reading ‘Brandon Carey Steals Cats!’ would be false, and it would also be misinformation. But (M1) faces two serious problems. First, many pieces of information, including paradigmatic cases of misinformation, are just not the right sort of thing to be false. A deepfake video is misinformation, but videos don’t have truth values—a video can be manipulated or inauthentic, but it can’t be false. So, since (M1) requires that misinformation be false, (M1) is too narrow. Second, many pieces of misinformation are true. For example, suppose that the ministry of propaganda distributes flyers proclaiming that our glorious leader is undefeated in professional mixed martial arts. If our glorious leader has never competed in MMA, then the information on those flyers is true, but it is still misinformation.

    We can avoid both problems by instead requiring that misinformation be misleading in the sense that, whether or not the information itself is true, it will tend to produce false beliefs in those who consume it:

(M2)     A piece of information I is misinformation just in case I is misleading.

    This is an improvement. Deepfake videos can’t be false, but they are typically misleading in the sense that they will tend to lead people to falsely believe that the subject depicted in the video did whatever they are depicted doing. So, (M2) can account for pieces of visual misinformation that do not have any truth value. Similarly, while the ministry’s true-but-misleading propaganda claim poses a problem for (M1), (M2) correctly counts this as a piece of misinformation, since it will tend to produce false beliefs about our glorious leader’s combat prowess.

    But (M2) also faces a problem: some clear cases of misinformation will nevertheless tend to produce true beliefs. Suppose, for example, that I use a bunch of Twitter bots to spread rumors that a public figure that I personally dislike has committed tax fraud, even though I have no evidence at all to suggest that they have. Based on the tweets from my army of bots, several people then come to believe that this person has committed tax fraud. These tweets seem like a paradigm case of misinformation that we should teach people to be wary of, blame people for spreading, etc., and yet it is consistent with the story so far that these tweets are not misleading. If it turns out, unbeknownst to me or anyone else, that this public figure has in fact committed tax fraud, then the rumors I’ve spread are not misleading after all—the beliefs based on this information are true!

    Similarly, a fake news site motivated by ad revenue may craft thousands of headlines to maximize clicks and engagement, without any regard for whether the headlines are true. Nevertheless, some of those many headlines will coincidentally turn out to be true, and people who come to believe the content of those headlines will thereby form true beliefs. But none of this prevents these headlines from being misinformation, and so being misleading cannot be a necessary condition for being misinformation.

    So, misinformation need not be false, and it need not be misleading. What, then, do cases of misinformation have in common? I propose that the key characteristic of misinformation is that it is epistemically defective in the following sense:

(M3)     A piece of information I is misinformation just in case a belief based on I cannot be knowledge.

    (M3) has several virtues. First, it still gives the right results in the cases that (M1) gets right. If you believe the content of a false headline, that belief will not be knowledge, because knowledge requires truth. Furthermore, (M3) has all of the virtues of (M2) over (M1), since beliefs based on misleading information will also be false and so not knowledge. 

    But (M3) also avoids the problem that accidentally true misinformation poses for (M2), since a belief that is accidentally true cannot be knowledge. On the assumption that a tweet from a new account with no followers has an evidential value of approximately 0, people who truly believe that someone committed tax fraud on the basis of my Twitter bots’ tweets will not have knowledge, because their beliefs are not justified. And even if a fake news site copies the format, branding, and other conventions of a known reliable source so effectively that its readers are in fact justified in believing the contents of those headlines that turn out to be true, those readers will still not have knowledge. They will have justified, true beliefs that they are nevertheless lucky to be right about in a way that is roughly analogous to traditional Gettier cases.

    (M3) is still probably not quite right, though. If you believe that I steal cats based on a deepfake video of me doing so, that belief won’t be knowledge because it’s false. But if you instead believe that there is a video that appears to show me stealing cats, that belief is true and plausibly qualifies as knowledge. To refine (M3), I would need to find an appropriate way of distinguishing between the ways in which these two beliefs are based on misinformation, but I don’t know how to do that.

Brandon Carey

Philosophy Department

Sacramento State


  1. Hi Brandon, thanks for this nice piece.

    My two cents: I think we should stick to something closer to M1. Things that are misleading in other senses should just be called other things, and we've got lots of good words for these. I don't think we need a word that applies to every way that a message can be epistemically bad for us, and these ways don't all need to have something essential in common to achieve functional clarity. Also, I don't think it advances things much to make knowledge the fundamental concept, since conceptions of knowledge vary even more widely.

    Pointy headed remark: information as it is used in physics is factive. We speak of the information of the physical system itself. There is no such thing as false information. If Jones is not in the driveway there is no such thing as the information that Jones is in the driveway. This is obviously a technical use, but it may be worth asking whether it is worth following. Science does not seem to me to currently offer us a clear qualitative or semantic notion of information, though lots of people are trying to hammer one out.

    1. Thanks, Randy. We certainly can just have different terms for different kinds of bad information, but M1 seems to miss many paradigm cases of what people are talking about when they make the kind of practical, normative, and policy claims about misinformation I mentioned at the start. If I were writing a policy intended to limit the spread of misinformation on social media, I think it would be a mistake to use M1 as a definition of 'misinformation' in writing that policy, because the range of cases that I think that kind of policy is intended to address more closely matches the extension of M3.

      To your second point, I have nothing against a factive sense of 'information', but it's not obvious to me how it would help us to understand misinformation. If something in the neighborhood of M1 is true, then misinformation is just not information at all, and even if we moved in the direction of M2 or M3, being information in this factive sense would not be necessary for being misinformation.

    2. Thanks Brandon, I think my first point is preserved by replying that you could write a perfectly good policy by explaining that it aims to limit the spread of misinformation, deception, and what have you. I just don't see why you would select 'misinformation' as a term that should cover the other things you are worrying about. An argument might be "Well, the media tends to use this term as a catch-all, so let's go with it." But why would we want to dignify a use with such tawdry provenance? Why shouldn't we rather say that the word "misinformation" is used too promiscuously in the media, and adopt a narrower notion for the sake of clarity? I think time is better spent coming up with a term like "epistemic harm" (though sexier would be better) that encompasses all the smaller notions.

      I agree that if we went with a factive sense of the term, then none of these definitions are any good.

  2. Hi Brandon, this is very interesting, and made me think about the role of deception in the definition of misinformation.

    It seems to me that what your two counterexamples to M2 have in common is that there is deception involved, and that it is in virtue of this that they are cases of misinformation (independently of the truth-value of the beliefs they generate in the audience).

    This would go along the same lines as the definition of fake news that emphasizes the "fake" aspect of it: fake news is content that pretends to be news (i.e. information concerning recent events) but is not. The content of a given instance of fake news may be true or false, but, on this view, that's not relevant to the definition.

    I think this emphasis on the source of the intended piece of information is compatible with your M3, because beliefs based on information produced in this way can't be knowledge (from a reliabilist perspective), since it comes from an unreliable source. It would still make sense, though, to say that beliefs about this piece of intended information can be knowledge (e.g. the belief that this intended information exists).

    I’m not sure if this makes sense. Perhaps I am mixing misinformation with deception, while you are trying to keep them separated. Have you thought of the role of deception in the definition of misinformation?