William G. Lycan
University of North Carolina

[T]he perceptual model does not withstand scrutiny.
                        --David Rosenthal<1>

    What is consciousness?—to coin a question.  According to “higher-order representation” (HOR) theories of consciousness, a mental state or event is a conscious state or event just in case it (itself) is the intentional object of one of the subject’s mental representations.
    That may sound odd, perhaps crazy.  In fact, because of the richly diverse uses of the word “conscious” in contemporary philosophy of mind, it is bound to sound odd to many people.  So I must begin by specifying what I here mean by “conscious state or event” (hereafter just “state,” for convenience).

I  The explanandum

    A state is a conscious state iff it is a mental state whose subject is (directly or at least nonevidentially) aware of being in it.  For the duration of this paper, the latter biconditional is to be taken as a stipulative definition of the term “conscious state.”  I think that definition is not brutely stipulative or merely technical, because it records one perfectly good thing that is sometimes meant by the phrase, as in “a conscious memory,” “a conscious decision.”  But it is necessary to be careful and painfully explicit about usage, because the phrase “conscious state” has also been used in at least two entirely different senses, as respectively by Dretske (1993) and Block (1995).  Failure to keep these and other senses straight not only can lead but has led to severe confusion.
    To come at my target subject-matter from a slightly different angle, mental or psychological states fall roughly into three categories:  (a) States whose subjects are aware of being in them, such as felt sharp pains, or reciting a poem carefully to oneself in an effort to recall its exact words.  (b) States whose subjects are not aware of being in them, but could have been had they paid attention; you realize that you have been angry for some time but were unaware of it, or you do not notice hearing a tiny whine from your computer monitor until you concentrate on the quality of the ambient noise.  (c) States that are entirely subterranean and inaccessible to introspection, such as language- or visual-processing states and just possibly Freudian unconscious ones.  A theory is needed to explain the differences between states of these three kinds.  Such a theory is what I here shall call a theory “of consciousness”--though there is no known limit to the variety of intellectual structures that have somewhere or another been called “theories of consciousness” (see Lycan (2002)).

II  HOR theories

    A “higher-order representation” theory of consciousness in the foregoing sense is one according to which: what makes a conscious state conscious is that the state is the intentional object of, or is represented by, another of the subject’s mental states, suitably placed or employed.<2>   Given our definition of “conscious state,” that idea is not so strange after all.  On the contrary, it may now seem too obvious:  If I am aware of being in mental state M, then tautologically, my awareness itself is an awareness of, and that “of” is the “of” of intentionality; my state of awareness has M as its intentional object or representatum.  Therefore, for M to be a conscious state is (at least) for M to be represented by another of my mental states, QED.<3>
    It is easy to see how HOR theories explain the differences between the foregoing three types of mental state.  States of type (a) are states which are the objects of actual higher-order representations.  States of type (b) are ones which are not the objects of actual higher-order representations, but which could have been and perhaps come to be so.  States of type (c) are those which, for reasons of our psychological architecture, cannot be internally targeted by person-level metarepresentations (“internally” because, of course, cognitive psychologists come to represent the processing states of subjects in the course of third-person theorizing).
    In addition to the argument given two paragraphs back, and the fact that HOR theories do explain the differences between our three types of mental state, there are other features that recommend HOR theories.  I have expounded them in previous works (chiefly Lycan (1996: 15ff.)), so I shall only mention a couple of them here.  For example:  Such theories explain a nagging ambiguity and/or dialect difference regarding sensations and feeling; and they afford what I believe is the best going account of “knowing what it’s like” to experience a particular sort of sensation (Lycan (1996, Chapter 3)).
    But of course further issues arise, chiefly about the nature of the putative higher-order representations.  Notoriously, HOR theories subdivide according to whether the representations are taken to be perception-like or merely thought-like.  According to “inner sense” or “higher-order perception” (HOP) versions, a mental state is conscious just in case it is the object of a kind of internal scanning or monitoring by a quasi-perceptual faculty (Armstrong (1968, 1981); Lycan (1987, 1996)).  But “higher-order thought” (HOT) theorists contend that the state need be represented only by an “assertoric” thought to the effect that one is in that sort of state (Rosenthal (1986, 1990, 1991b, 1993, 1997); Gennaro (1996), Carruthers (1996, 2000)).
    As is hinted by my title, my main purpose in this paper is to argue that HOP versions of HOR are better motivated and more promising than HOT versions.<4>   Which I believe to be true; but I have written the paper reluctantly.  For at least ten years I have been meaning to write a piece exhibiting the superiority of HOP to HOT, but until now I have always put it off, because at bottom I regard the two views as more allies than competitors, both of them sensibly resistant to some of the craziness that is out there about consciousness.  I doubt that their differences matter very much.<5>
    (And I have suffered a strange symptom:  I have meant to keep a list of the pro-HOP and anti-HOT arguments I have occasionally thought of, but then I have always forgotten to make the list and I always forget the arguments.  I will try to dredge them up here, without complete success.  It had to be done sometime, and the quote from Rosenthal that heads this paper is inspiring.)
    Before proceeding to the task, though, we should note the main objections to HOR theories tout court.

III  Objections to HOR theories

    In particular, it is worth heading off three bad and ignorant criticisms.  These three have been scouted more than once before, but I have found that they still come up every time a HOR theory is propounded.  First is the Qualia-Creation objection:  “You say that what makes a state conscious is that the state is the intentional object of a higher-order perception or thought.  But surely the mere addition of a higher-order representation cannot bring a sensory or phenomenal quality into being.  If the original state had no qualitative or phenomenal character, then HOR theories do not in any way explain the qualitative or phenomenal character of experience, for a mere higher-order representation could hardly bring a phenomenal property into being.  So how can they claim to be theories of consciousness?”
    But it is no claim of either HOP or HOT per se to have explained anything about qualitative character; they are theories only of the distinction between mental states one is aware of being in and mental states one is not aware of being in.  Some other theory must be given of the original qualitative character of a sensory state, such as the yellowy-orangeness of an after-image, the pitch of a heard tone, or the smell of an odor.  (None of Armstrong, Rosenthal, or I has ever suggested that a mere higher-order thought or even a higher-order quasi-perception could explain the sensory core of experience.  Each of us has given an account of sensory qualities, but elsewhere.<6>)
    Second, the Regress objection:  “If the second-order representation is to confer consciousness on the first-order state, it must itself be a conscious state; so there must be a third-order representation of it, and so on forever.”
    But HOP and HOT theorists reject the opening conditional premise.  The second-order representation need not itself be a conscious state.  (Of course, it may be a conscious state, if there does happen to be a higher-order representation of it in turn.)
    Third, the Lending objection (made by some who do take the previous point against the Regress objection):  “But if the second-order representation is not itself a conscious state, how could it confer consciousness on the original state?”
    This assumes that a state gets to be a conscious state by having the property of consciousness conferred on it or lent to it by something else.  That is simply a misconception.  A conscious state is a state whose subject is aware of being in it.  Whatever accounts for a state’s having that property, there is no reason to suppose that the state inherited the property from some other item that already had it.  That S is aware of being in mental state M does not at all suggest that that holds of M because S was aware of something other than M and the awareness somehow transferred.
    Of course, HOR theories face other objections that are more serious, four in particular; I have tried to answer them in previous works.
    (1)  Some critics, such as Sydney Shoemaker (1994), have complained that HOR awards introspection too great a degree of fallibility.  Also, HOR theories predict the theoretical possibility of false positives: Could I not deploy a nonveridical second-order representation of a nonexistent sensation?--and so, I might seem to myself to be in intense pain when in fact there was no pain at all, which seems absurd.  Karen Neander (1998) has prosecuted an especially incisive version of this objection.  I have rebutted Shoemaker’s argument in Lycan (1996, pp. 17-22), and I have replied to Neander, not altogether satisfactorily, in Lycan (1998).  I argued against Shoemaker that introspection is certainly fallible to some degree; having been elaborately primed, the fraternity pledge mistakes the cold sensation produced on his bare skin by the ice cube for burning heat.  And there are reasons why the egregious sorts of false positive cited by Neander would not happen.
    (2)  Georges Rey (1983) argued that HOR comes too cheap, and that laptop computers can deploy higher-order representations of their first-order states; yet we would hardly award consciousness in any sense to a laptop.  In response, Lycan (1996, pp. 36-43) argued that consciousness in the present sense comes in degrees, and that if we suppose (as Rey’s argument must) that laptop computers have mental states in the first place, there is no reason to deny that some of those states are conscious to a very low degree.
    (3)  Carruthers (2000) has complained that HOR theories require a grade of computational complexity that lower animals and probably even human subjects do not attain.  His argument assumes that at any given time, many of our mental states are conscious states; Lycan (1999) rejoins by rejecting that assumption.
    (4)  HOR theorists’ standard reply to the Qualia-Creation objection separates matters of consciousness in the present sense (mental states one is aware of being in) from matters of qualia and qualitative character, and accordingly HOR theorists hold that a qualitative perceptual or sensory state may go entirely unnoticed; thus there are, or could be, unfelt pains, and many other sensations that are entirely nonconscious.  But to some people the idea of a sensation whose subject is unaware of having it is problematic to say the least.
    Unfelt pains and nonconscious sensations generally have been defended at length by Armstrong (1968), Palmer (1975), Nelkin (1989), Rosenthal (1991a), Lycan (1996) and others.<7>   (The matter is complicated by a dialect difference, and the dispute over nonconscious sensations is at least partly verbal.<8>)  But Joseph Levine (2001) has pushed a version of the present objection that I have not seen addressed in print, so I shall confront it here.
Levine considers three mental states involving visual redness:

my state as I deliberately stare at the [red] diskette case, my perception of a red light while driving on ‘automatic pilot,’ and the clearly unconscious state of my early visual processing system that detects a light intensity gradient.  According to HO[R], the first two states both instantiate the same qualitative character, and for the second and third states there is nothing it is like to occupy them; they are both unconscious, non-experiences.  One oddity, then, is the fact that, on HO[R], the very same qualitative feature possessed by my perception of the diskette case can be possessed by a state that is as unconscious as the intensity gradient detector.
    Furthermore, does the intensity gradient detector itself possess qualitative character?  HO[R] faces a dilemma.  To say such states have qualitative character seems to rob the notion of any significance.  To deny them qualitative character requires justification.  What one would normally say is that to have qualitative character there must be something it is like to occupy a state, and the qualitative character is what it’s like.  But the advocate of HO[R] can’t say this, since states there is nothing it is like to occupy have qualitative character on this view.   (p. 106)
    I am not sure why Levine finds it odd that the same qualitative feature possessed by his perception of the diskette case can be possessed by a state that is “as unconscious as” the intensity gradient detector.  Indeed, I am not sure how he intends the detector example in the first place, as an example of a nonconscious mental state that has a red qualitative character in virtue of its intensity-gradient detection, or rather as an example of a nonconscious psychological state that has no qualitative character.  In light of the alleged dilemma, Levine does not seem to decide between these two interpretations, though I believe he intends the second, because he is trying to bring out the absurdity he sees in the idea of a qualitative state of which one is entirely unaware.
    Moving on to the dilemma: What are its horns?  Either (a) one says that the gradient detector state has qualitative character, and thereby “seems to rob the notion of any significance,” or (b) one denies that the gradient detector state has qualitative character, which “requires justification.”  The first of those conditionals suggests that the detector is interpreted in the second way, as not being in any qualitative state, the idea being that if we say that the gradient detector state has qualitative character, we might as well say that a pancreatic state or a state of a brick has qualitative character.  I am not inclined to choose horn (a), so I turn to the second conditional.
    Since Levine himself seems to want to deny that the gradient detector state has qualitative character, I suspect what he really thinks requires justification is that the automatic-pilot visual state does have qualitative character when the detector state admittedly does not: “What one would normally say is that to have qualitative character there must be something it is like to occupy a state, and the qualitative character is what it’s like.”  The argument is then, “but the advocate of HO[R] can’t say this, since states there is nothing it is like to occupy have qualitative character on this view.”
    That argument is guilty of equivocation.  For both expressions, “qualitative character” and “what it’s like” (as well as “phenomenal character/property”), are now ambiguous and in just the same way.  Each has been used to mean the sort of qualitative property that characteristically figures in a sensory experience, such as yellowy-orangeness, pitch, or smell as mentioned above—call these “Q-properties.”  Each has also been used to mean the higher-order property of “what it’s like” for the subject to experience the relevant Q-property.
    To see that these are quite distinct, notice: (i) The higher-order “what it’s like” property is higher-order; it is a property of the relevant Q-property.  (ii) A Q-property is normally described in one's public natural language, while what it is like to experience that Q-property seems to be ineffable.  Suppose you are having a yellowy-orange after-image and Jack asks you, “How, exactly, does the after-image look to you as regards color?”  You reply, “It looks yellowy-orange.”  “But,” persists Jack, “can you tell me, descriptively rather than comparatively or demonstratively, what it's like to experience that ‘yellowy-orange’ look?”  At that point, if you are like me, words fail you; all you can say is, “It’s like…this; I can’t put it into words.”  Thus, the Q-property, the subjective color itself, can be specified in ordinary English, but what it is like to experience that Q-property cannot be.
    (The distinction, which is also nicely elaborated by Carruthers (2000), is obscured by some writers’ too quickly defining “sensory quality” in terms of “what it’s like,” and by others’ using the phrase “what it’s like” to mean merely a Q-property; Dretske (1995) and Tye (1995) do the latter.)
    I hold that Levine’s own argument further illustrates the distinction.  We champions of nonconscious sensory qualities have argued that a Q-property can occur without its subject being aware of it.  In such a case, as Levine says, there is a good sense in which it would not be like anything for the subject to experience that Q-property.  (Of course, in the Dretske-Tye sense there would be something it was like, since the Q-property itself is that.  For what it is worth, I think Levine’s usage is better.)  So even in the case in which one is aware of one's Q-property, the higher-order type of “what it’s like” requires awareness and so is something distinct from the Q-property itself.
    Thus, Levine is right that the advocate of HOR cannot say that “to have qualitative character there must be something it is like to occupy a state, and the qualitative character is what it’s like.”  And no one else should say that either, if by “qualitative character” they mean a Q-property.  For we HORists hold that one can be in a state featuring a Q-property without being aware of it, hence without there being anything it is like to be in that state.  And unless I have missed it, Levine has provided no good argument against that claim.

IV  My HOP theory

    Armstrong states the Inner Sense doctrine as follows.

Introspective consciousness is a perception-like awareness of current states and activities in our own mind.  The current activities will include sense-perception: which latter is the awareness of current states and activities of our environment and our body.  (1981, p. 61)
As I would put it, consciousness is the functioning of internal attention mechanisms directed upon lower-order psychological states and events.  I would also add an explicit element of teleology:  Attention mechanisms are devices which have the job of relaying and/or coördinating information about ongoing psychological events and processes.<9>
    But that is not all.  To count in the analysis of my consciousness in particular, the monitor must do its monitoring for me.   A monitor might have been implanted in me somewhere that sends its outputs straight to the news media, so that the whole world may learn of my first-order mental states.  Such a device would be functioning as a monitor, but as the media’s monitor rather than mine.  More importantly, a monitor functioning within one of my subordinate homunculi might be doing its distinctive job for that homunculus rather than for me; e.g., it might be serving the homunculus’ proprietary event memory rather than my own event memory.  This distinction blocks what would otherwise be obvious counterexamples to HOP as stated so far.<10>

V  Rosenthal’s objection to HOP

    To justify his curt dismissal, Rosenthal (1997, p. 740) makes a direct argument against HOP, which argument I reconstruct as follows.  (1) “Perceiving always involves some sensory quality.”  (2) If so, and if “inner sense” is a kind of perceiving, then an inner sensing must itself involve some sensory quality.  Now, (3) either the quality is just the same one as that inhering in the first-order state being sensed, or it is some distinct sensory quality.  But (4) “[w]hen we see a tomato, the redness of our sensation is not literally the same property as the redness of the tomato.”  And (5) if not, “there is no other remotely plausible candidate for the mental quality involved in being conscious of a sensory state.”  So, (6) “inner sense” is not a kind of perceiving.
    I reply (as in Lycan (1996), pp. 28-29) by granting the disanalogy and denying (2).  No HOP theorist has contended that inner sense is like external-world perception in every single respect.  Nor, in particular, should we expect inner sense to involve some distinctive sensory quality at its own level of operation.  The reason is that “outer” sense organs have the function of feature detection.  The sensory properties involved in first-order sensory states are, according to me (Lycan (1987, Ch. 8), (1996, Ch. 4)), the represented features of physical objects; e.g., the color featured in a visual perception is the represented color of a (real or unreal) physical object.  Being internal to the mind, first-order states themselves do not have ecologically significant features of that sort, and so we would not expect internal representations of first-order states to have sensory qualities representing or otherwise corresponding to such features.<11>   As Rosenthal himself puts it, “Whereas a range of stimuli is characteristic of each sensory modality, [first-order] mental states do not exemplify a single range of properties” (p. 740).
    Rosenthal will reply that if inner sense is disanalogous in that way from external perception, what is the positive analogy?  (“[O]therwise the analogy with perception will be idle” (p. 740).)  And that brings me, finally, to my announced agenda:  I mean to show that the higher-order representation of mental states is more like perception than it is like mere thought.<12>


    0.  Intuitive priority.  (I number this point “0” because it is not really an argument, but only a preludial sniff.)  One would suppose that awareness psychologically preceded thinking, in that if X is real and “aware” is taken to be factive, S can think about X only if S is independently (if not previously) aware of X.  (Of course I here mean “aware” in its ordinary, fairly general sense, not anything like direct or perceptual awareness.)  It is hard to imagine the reverse, S thinking about X and thereby becoming aware of X.  By contrast, to perceive X is precisely a way of becoming aware of X.
    No doubt Rosenthal would reply that I am inappropriately importing a model based on thinking about external objects; awareness of those does require epistemic contact prior to thought.  But it does not follow that the same holds for our awareness of our own mental states.  Perhaps when S is in an attentive frame of mind, S’s mental state M itself simply causes in S the thought that S is in M; no intermediating sort of awareness is required.  Fair enough.  (And yet, what is the role of the “attentive” frame of mind?  “Attentive” means, disposed to attend—which again suggests that S first attends, and then has thoughts about what attending has turned up.)

    1.  Phenomenology.  Wittgenstein and Ryle pooh-poohed the etymology of “introspection,” contending that the metaphor of “looking inward” to “see” one’s own mental states is a poor metaphor and an even worse philosophical picture of consciousness.  But I say it is at least a fine metaphor.  When we attend to our own mental states, it feels like that is just what we are doing: focusing our internal attention on something that is there for us to discern.  Now, from this fact about introspecting, it does not follow that the phenomenology of normal state consciousness is also one of quasi-perceptual discernment, because in normal state consciousness we are not doing any active introspecting, but are only passively and usually nonconsciously aware of our first-order states.  But even in the passive case, as Van Gulick (2001: 288-89) points out, our first-order states are phenomenologically present to our minds and not (or not just) represented by them, much as in external vision, physical objects are present to us without seeming to be represented by us.<13>   (It takes philosophical sophistication to see that vision really is representation; indeed, some philosophers still dispute the latter thesis.  So too, it takes philosophical sophistication to reject Descartes’ conviction that our own conscious states are simply and directly present to our minds, rather than represented by them.)
    Moreover, consider what happens when you have only been passively aware of being in a mental state M but are then moved, by curiosity or by someone else’s suggestion, to introspect.  You both attend more closely to M and become aware of your previously passive awareness.  But the sense of presence rather than representation remains.  As Byrne (1997: 117) observes, in such a case you do not suddenly notice one or more thoughts about M, but only the discernment of M through M’s presence to you.  (I emphasize that all this is intended as phenomenology, not as insisting that how things seem is how they are.  The phenomenology of presence may be illusory.  My claim is only that it is the phenomenology of awareness of one’s own mental state.)

    2.  Voluntary control.  Consciousness of our mental states is like perception, and unlike ordinary thought, in that we have a considerable degree of control over what areas of our phenomenal fields to make conscious.  This is particularly true of active introspecting, as I have emphasized elsewhere (Lycan (1996), (1999), (in press)).  If I ask you to attend first to the feeling in your tongue, then to the sounds you can hear, then to the right center of your visual field, and then to what you can smell right now, you can do those things at will, and in each case you will probably make a sensory state conscious that may not have been conscious before.
    Here again, that introspecting is a highly voluntary activity does not entail that ordinary passive representation of first-order states is as well.  But it seems to show that the introspectors, the monitors or scanners, are there to be mobilized and that they may function in a less deliberate way under more general circumstances.  I contend, then, that the higher-order representations that make a first-order state conscious are (etiologically) more like perceptions than they are like thoughts.  They are characteristically the outputs of an attention mechanism that is under voluntary control; thoughts are not that.
    Carruthers (2000: 212-13) has trenchantly objected to one of my previous appeals to the voluntary control of introspecting.  (That appeal (Lycan (1996: 16-17)) was made in defense of a different thesis, that our brains actually do contain mechanisms of internal attention.)  He suggests that “when we shift attention across our visual field, this is mediated by shifting the first-order attentional processes which are at work in normal perception, and which deliver for us richer contents in the attended-to region of visual space….”  Carruthers proposes that what one does in response to the command, “Shift your attention to the right center of your visual field” is exactly what one would do in response to “Without changing the direction in which you are looking, shift your visual attention to the state of the world just right of center.”  “The process can be a purely first-order one.”
    As a staunch representationalist about first-order sensory states (Lycan (1987), (1996)), I am in no position to insist on a contrast between attending to one’s Q-properties and attending to the real or putative perceptible features of external objects; I believe Q-properties just are the real or putative perceptible features of external objects.  However, notice that it is not just the intentional qualitative contents of my first-order states that I can voluntarily attend to.  I can also focus my attention, at will, on further properties of those contents, and properties of the containing state as well.  (Thus, I deny the full-blown “transparency” thesis defended by Tye (2002).)  I introspect a green patch in my visual field.  Let us grant that that is in part to detect external, physical greenness; but that property itself determines nothing about any sensory modality, much less any finer-grained mode of presentation.  I also introspect that the greenness is visual rather than something I learned of in some other way.  I can also tell (fallibly) by introspection how compelling a visual experience it is, how strongly it convinces me that there is a real green object in front of me.
    Or consider pains.  I am firmly convinced by Armstrong (1968) and Pitcher (1970) that pains are representational and have intentional objects, real or inexistent, which objects are unsalutary conditions of body parts.  But those objects are not all I can introspect about a pain.  I can also introspect its awfulness and the urgency of my desire that it cease.  (I distinguish between the Q-property of a pain, the pain’s specifically sensory core--say the throbbing character of a headache--and the pain’s affective aspect that constitutes its awfulness/hurtfulness (Lycan (1998)).  Those are not normally felt as distinct, but two different neurological subsystems are responsible for the overall experience, and they can come apart.<14>   The Q-property is what remains under morphine; what morphine blocks is the affective aspect--the desire that the pain stop, the distraction, the urge to pain-behave.  I contend that the affective components are functional rather than representational.)
    Finally, for any Q-property, I can introspect the higher-order property of what it is like to experience that Q-property.
    The moral is that I can focus my inner attention more finely (even) than on particular Q-properties, and in particular on purely mental properties that, unlike the Q-properties themselves, are not just features of the external world.
    In any case, notice that although Rosenthal makes free (and perfectly reasonable) use of the notions of introspection and introspective awareness, he has no obvious account to give of introspecting, the voluntary and substantive sort of activity I have described.  The having of thoughts on a given subject-matter is only partly, and not directly, under voluntary control.
    Rocco Gennaro has offered a HOTist reply to the current argument.  It takes the form of a dilemma:  Either (a) the relevant higher-order representation is itself conscious or (b) it is not.  Suppose (a).  HOP has no obvious advantage over HOT for this case, Gennaro says, because the HOT theorist can equally talk of a subject S’s “actively searching for” an object of the higher-order thought, or “deliberately thinking about” such an object.  (Indeed, we often do such things in what HOT theorists might call “reflection.”)  As in the anticipated reply to point 0 above, perhaps when S is in an attentive frame of mind, S’s first-order mental state itself simply causes the thought that S is in that state, with no intermediating sort of awareness required.
    Now suppose (b), that the relevant higher-order representation is not a conscious one.  Then, when S controls where to focus S’s attention, that does not seem to be the result of S’s controlling S’s unconscious higher-order representations.  Even though S does have significant voluntary control over S’s first-order perceptual states, the higher-order representations produced by the attention mechanism are in no way contributing to the voluntariness in question.  Thus, even if we agree that the relevant higher-order representations are produced by a mechanism that is perception-like, there is no reason to think that what is produced by such a mechanism is more perception-like than thought-like.  And so HOP has no advantage over HOT for case (b) either.  Net result:  No relevant advantage for HOP over HOT.
    I am not persuaded on either count.  For case (a), I do not concede that the HOT theorist can equally talk of “active searching” etc.  Compare the voluntary control of first-order attending to external objects:  At will, we can selectively attend to environmental region R and see whatever there is in R.  We do not in the same facile way control what things in the environment we have thoughts about; thought is more spontaneous and random.  The only obvious way in which we control what to have thoughts about is first to attend (perceptually) to a region R and thereby cause ourselves to have thoughts about whatever there is in R.
    The same goes for the voluntary control of attending to first-order mental states.  At will, we can selectively attend to phenomenal region R and detect whatever sensory qualia there are in R.  We do not in the same facile way control what regions of or things in the phenomenal field we have thoughts about; the only obvious way in which we control what to have thoughts about is first to attend to a region R and thereby cause ourselves to have thoughts about whatever sensory qualia there are in R.  I can try to have thoughts about contents of R only by attending to R and detecting qualia there.
    Anent case (b):  Agreed, S’s control of where to focus S’s attention is not the result of S’s controlling S’s unconscious higher-order representations, and my argument does not show that the representation produced is in its own nature and structure more perception-like.  (So far as has been shown, representations spit out by the attention mechanisms and representations that just well up thought-like from the marshmallow may be otherwise just alike--say, neural states having the structures of predicate-calculus formulas.)  But I want to say that even if the putative higher-order perceptions and higher-order thoughts are thus “intrinsically” alike, they still differ importantly and distinctively in their etiological properties:  The relevant higher-order representations are characteristically produced by the exercise of attention.  That makes them more like perceptions than like thoughts, since it is not characteristic of thoughts to be directly produced by the exercise of attention (though of course thoughts can happen to be produced by the exercise of attention, normally by way of a mediating perception).

    3.  Nonvoluntary results.  There is a sort of obverse point to be made in terms of voluntariness.  It is characteristic of external-world perception that, once the subject has exerted her/his voluntary control in directing sensory attention, e.g., chosen to look in a particular direction or to sniff the air inside a cupboard, the result is up to the world.  Perhaps what one then sees or smells is conditioned in some ways by expectation or by interest, but for the most part we must see what is there in that direction to be seen and smell whatever miasma the world has furnished.  The same is true of awareness of our own mental states.  Though there too, awareness is to some degree conditioned by expectation (recall the frat-boy type of example), the first-order states that present themselves to our attention are primarily as they are independently of our will.  If you concentrate on your right lower canine, you may find a slight ache there, or you may feel nothing there, but in either case (normally) that is up to your sensory system and what it has or has not delivered.
    Van Gulick (2001: 286-87) argues that the case of higher-order thought is not actually so different.  He reminds us that the thoughts appealed to by HOT theorists are assertoric thoughts.  “Once the assertoric requirement comes to the fore, our degree of voluntary control seems to shrink if not altogether disappear,” because it is controversial at best to suppose that we can control our beliefs.  We should agree that if I choose to direct my attention to certain nonperceptual but factual questions, I will nonvoluntarily be caused to make assertoric judgments one way or the other.  (What is my daughter’s name?—Jane.  Is foreign-language study required for a Ph.D. in philosophy at my university?—No.  How fast does light travel in a vacuum?—Around 186,000 miles per second.)
    But in those cases, I already know, or am confident of, the answers.  On questions I have not previously investigated, if I raise them and do not thereupon investigate (perceptually or otherwise), I will normally not be confronted by a fait accompli.  How many people are sitting right now in the Carolina Coffee Shop?  What piece of music is being played on our local classical radio station?  Who was Vice-President of the United States in 1880?  Occasionally, I may consider a novel question and the answer may force itself upon me, in the manner of a thought experiment.  (If we lower admissions requirements and increase the student body by 4,000, will our average student be better or worse?)  But that sort of case is not the norm, and this still distinguishes thought from perception, though perhaps the distinction is now less in nonvoluntariness per se than in the comparative range of nonvoluntariness:  For perceptual investigation, the answer comes right up nonvoluntarily almost every time, but for thought-eliciting questions the percentage is a lot smaller.

    4.  Degrees of awareness.  As Lycan (1996: 39-43) emphasized, awareness of our own mental states comes in degrees.  Given a mild pain that I have, I may be only very dimly and peripherally aware of it (assuming I am aware of it at all); or I may be more than dimly aware of it though still only mutedly; or I may be well aware of it; or I may be quite pointedly aware of it, thank you; or in a fit of hypochondria I may be agonizingly aware of it and aware of little else.  This range of possibilities is of course characteristic of external-world perception as well.  It is not characteristic of mere thought.  That is not to deny that the HOT theorist might construct a notion of degree-of-awareness; for example, the number of distinct higher-order thoughts I have about my pain might be invoked.  The point is only that the HOP theorist has such a notion already and does not need to construct one.

    5.  Epistemology.  Our awareness of our own mental states justifies our beliefs about them.  Indeed, my only justification for believing that I now have a slight ache in my tailbone and that I am hearing sounds as of a car pulling into the driveway is that I am aware of both states.  Active introspection justifies our beliefs about our mental states even more strongly, though by no means infallibly.  This is just what we should expect if awareness is a perception-like affair.  By contrast, merely having an assertoric thought to the effect that one is in state M by itself does nothing to justify the belief that one is in M, and having a metathought about that thought adds no justification either.
    Van Gulick (2001: 280-81) puts a reliabilist spin on this contrast.  We think of our perceptual capacities as reliable channels of information, and, barring evil-demon and other skeptical scenarios, for the most part they are.  We do not think of the having of thoughts, per se, as a reliable information channel, but count a thought as justified or justifying only when it is itself the output of some other reliable channel--paradigmatically a perceptual one.  Introspective awareness is like perception in that respect:  We think of our internal awareness as a reliable source of information, and (again barring skeptical scenarios) for the most part it is.

    6.  Grain.  Thoughts, as ordinarily conceived, differ from perceptual states in being more thoroughly and more discretely conceptual.  Their contents are reported in indirect discourse using complement clauses made of natural-language words, and it is fair to say that we usually think in concepts that correspond to, and are naturally expressed by, words.  But words are fairly coarse-grained representations.  Now, consider a phenomenal region that may be an object of consciousness for you at a time, say, a subarea of your visual field.  Even if it is not a particularly large region, it is rich in irregular outlines, textures, gradations of shading, and the like.  The phenomenal contents of this region would be either impossible to describe in words at all, or possible to describe only in that one could devise a description if one had gabillions of words and days or weeks of time in which to deploy them all.  (Byrne (1997: 117) refers to roughly these disjuncts, respectively, as “the inexpressibility problem” and “the problem of the unthinkable thought.”)  Unless thoughts are significantly less “conceptual” and subtler than sentences of natural languages, your consciousness of the contents of the phenomenal region cannot be constituted by a higher-order thought or set of same.  Perhaps thoughts are more subtle than sentences in some helpful respect, but I think it is up to the HOT theorist to make that case.
    Rosenthal emphasizes (1997: 742, 745) that in areas of subtle sensory discrimination, such as wine-tasting and music appreciation, an increase in conceptual sophistication often makes for new experiences, or at least for finer-grained experiences than were previously possible.  He contends, and I agree, that this is a mark in favor of HOT theories.  But it does not help him against the present argument in particular.  The nuances of wine tastes and of musical experience still outrun the verbalizable.
    One might suggest on HOT’s behalf that a conscious state need not be fully verbalizable.  Indeed Rosenthal fleetingly acknowledges the problem (1993: 210): “No higher-order thoughts could capture all the subtle variations of sensory quality we consciously experience.”  He responds, “So higher-order thoughts must refer to sensory states demonstratively, perhaps as occupying this or that position in the relevant sensory field.”  But, taken literally, that will not do; as Byrne points out (117-8), one can demonstrate only what one is already aware of, or some thing determinately related to another relevant thing one is aware of.  (Byrne adds that relaxing the requirement still further by allowing the higher-order thought to designate the first-order state by any directly referring term will not help either, because to be aware of being in a state is to be aware of it in some characterizing way from one’s first-person point of view, not just to token a directly referring name, such as “Alice,” that in fact designates the state.)

    7.  The ineffability of “what it’s like.”   What is it like to experience the phenomenal yellowy-orangeness of a yellowy-orange after-image?  As I said in section III above, we cannot generally express such things in words.  HOP explains that ineffability.  When you introspect your after-image and its phenomenal color, your quasi-perceptual faculty mobilizes an introspective concept, one that is proprietary to that introspector and (for a reason I have given elsewhere (Lycan (1996, pp. 60-61))) does not translate into English or any other natural language.  That is why you cannot say what it is like for you to experience the visual yellowy-orangeness, though you still, and rightly, feel that there is something it is like and that its being like that is a fact if only you could express it verbally.
    HOT affords no comparable explanation.  The HOT theorist would agree that the way in which you are aware of what it is like to experience the yellowy-orangeness is by deploying a higher-order representation of the after-image and its color, but s/he has no systematic explanation of the ineffability.  (Though of course it is entirely compatible with HOT that some or all of the relevant higher-order thoughts might be inexpressible in natural language.)  Rosenthal’s passing contention that higher-order thoughts must refer to sensory states demonstratively would help if it were tenable, but Byrne’s point remains, that one can demonstrate only what one is already aware of or what is determinately related to something else one is aware of.

    8.  Purely recognitional concepts.  Brian Loar (1990) and others have argued for the existence of “purely recognitional” concepts, possessed by subjects and applying to those subjects’ experiences.  Such concepts classify sensations without bearing any analytic or otherwise a priori connections to other concepts easily expressible in natural language, and without dependence on the subject’s existing beliefs about the sensation in question.  (It is in part because we have concepts of this sort that we are able to conceive of having a sensation of this kind without the sensation’s playing its usual causal role, without its representing what it usually represents, without its being physical at all, etc., and we are able to conceive of there being a body just like ours in exactly similar circumstances that is not giving rise to a sensation of this kind--all facts fallaciously made much of by opponents of materialism.)
    Carruthers (2001: section 3) notes that HOP can explain how it is possible for us to acquire such phenomenal concepts.  “For if we possess higher-order perceptual contents, then it should be possible for us to learn to recognize the occurrence of our own perceptual states immediately--or ‘straight off’--grounded in those higher-order analog contents.  And this should be possible without those recognitional concepts thereby having any conceptual connections with our beliefs about the nature or content of the states recognized, nor with any of our surrounding mental concepts.”  HOT can make no parallel claim about the acquisition of recognitional concepts, at least not straight away.  For one thing, how could we have a higher-order thought about a sensory state without already having the pertinent concept?  But in any case, merely having a thought about something is not generally recognized as a means of acquiring the concept of that thing.

    9.  HOT’s problem of sufficiency.  Rosenthal has always acknowledged a prima facie problem about higher-order thoughts that are mediated in the wrong ways.  For example, I might learn of a nonconscious mental state that I am in by noting my own behavior, or through testimony from someone else who has been observing my behavior, or through Freudian analysis; and so I could then have thoughts about that state even though it is not a conscious one.  To preëmpt that objection, Rosenthal requires of a conscious state that its subject’s higher-order thought not be the result of “ordinary inference.”
    For some reason, several commentators have reported that Rosenthal has imposed a stronger, causal requirement as well, e.g., that the higher-order thought be directly caused by the first-order state.  To my knowledge Rosenthal has never done that.  However, on the basis of a series of hypothetical cases, Francescotti (1995) contends that he should have.  Further, Francescotti argues that this gets Rosenthal in trouble, in that the causal requirement must be either too weak or too strong.
    No doubt chisholming will ensue, and perhaps Rosenthal can solve the problem.  But HOP has no such problem in the first place.

    I do not regard any or all of the foregoing as a proof that HOP is superior to HOT.  I do think I have shown that HOP can withstand considerably more scrutiny than at least the leading proponent of HOT has been prepared to allow.<15>


1.  Rosenthal (1997), p. 740.

2.  As in Lycan (2001), I ignore the dubious possibility that the state is its own intentional object, represented by itself alone.  I know of no HOR theory that does not understand a higher-order representation as being numerically distinct from its representatum.  (Though on Gennaro’s (1996) “Wide Intrinsicality View” and on Van Gulick’s (2001) “Higher-Order Global State” picture, the relevant higher-order states are not entirely distinct from the first-order states they target.)

3.  This argument is elaborated a bit in Lycan (2001).

4.  My opponent here will be what Carruthers (2000) calls “actualist” HOT, principally Rosenthal’s own version.  I shall not address Carruthers’ own “dispositionalist” HOT, because I believe it is quite a different sort of theory and incurs different sorts of objections.

5.  When presenting this paper to audiences, I have found that some listeners have been confused through not realizing how small and local the HOP-HOT dispute really is.  I regard it as a family squabble.  The body of presuppositions shared by HOP and HOT is much more controversial than is the choice between the two.

6.  E.g., Armstrong and Malcolm (1984); Rosenthal (1991a); Lycan (1996), Ch. 4.  It is true that Rosenthal and I have pressed our respective HOR views into service in helping to explain other things about consciousness more generally, including the phenomenon of its being “like” something to experience a sensory quality, but we did that only by conjoining the views in an ancillary way with other, independent theoretical apparatus.

7.  Armstrong’s famous example is that of the long-distance truck driver who is daydreaming and driving on autopilot; the driver must have perceived stop lights as red, for example, or he would not have stopped at them.  This example has become a poster child for HOP theories, but I now believe incorrectly so; I do not think it is an example of a subject who characteristically lacks higher-order perceptions.  See Lycan and Ryder (2002).

8.  See Lycan (1996), pp. 16-21.

9.  Armstrong’s term “introspective consciousness” is potentially ambiguous.  It may seem to imply activity, as in the attention mechanisms being deliberately mobilized by their owners, a case of introspecting.  But Armstrong himself means something much weaker, that includes a quite passive “reflex” consciousness.  He compares “reflex” consciousness to “reflex seeing,” the inattentive seeing that occurs most of the time (“[t]he eyes have a watching brief at all times that we are awake and have our eyes open” (1981, p. 63)).  In Armstrong and Malcolm (1984) he adds, “Normally, introspective consciousness is of a pretty relaxed sort.  The inner mental eye takes in the mental scene, but without making any big deal of it” (p. 120).

10.  One such is offered by Levine (2001):  States of the early visual system that detect intensity gradients are (obviously) not conscious states.  “[B]ut it’s not obvious that there aren’t higher-order states of the requisite sort within the computational system that carries out visual processing” (p. 106).

11.  Fred Dretske (in press) misconstrues Lycan (1996) on this point, imputing to me the view that “inner sense does not reveal qualities of the objects (the experiences) being scanned.  [Lycan]…tells us that these (first-order) experiences (of beer bottles) do not (like beer bottles) have ‘ecologically significant features’ and so our introspective ‘scanning’ of them does not represent them as having properties.”  Though the first-order states do not have ecologically significant properties, inner sense certainly does represent them as having properties of other sorts; it type-classifies them.  For a fuller exposition, see Lycan (in press).

12.  Güzeldere (1997) offers another argument specifically against my version of HOP (the view he calls “Option 2” on its first interpretation (p. 794)).  The argument is hard for me to parse, because--understandably--it is stated in terms of Armstrong’s truck-driver example, which I now reject (note 7 above).  He points out, in my view correctly, that higher-order representation of perceptual states would not help explain how those brain states constitute the representings of external objects that they do.  More generally, he warns against the fallacy of confusing features of the representer with the represented.  But these points are immanent to his diagnosis of what he and I both consider an error of Armstrong’s about the truck driver.  They do not extrapolate to HOP per se.  If Güzeldere has given any further argument against HOP itself, I have missed it.

13.  In his article (p. 279), Van Gulick does us the excellent service of listing some paradigm features of perceiving, and asking the question of which of those features are shared by the meta-mental states that HOP and HOT theorists agree make for consciousness.

14.  Besides a sensory, nociceptive system, it seems there is also an inhibitory system that on occasion damps the nociceptive signals.  See Hardcastle (1999) and the references given therein.

15.  Thanks to Zena Ryder for very detailed comments on a previous draft.


Armstrong, D.M. (1968).  A Materialist Theory of the Mind.  London: Routledge and Kegan Paul.

Armstrong, D.M. (1981).  “What is Consciousness?”  In The Nature of Mind and Other Essays.  Ithaca, NY: Cornell University Press.

Block, N.J. (1995).  “On a Confusion about a Function of Consciousness.” Behavioral and Brain Sciences 18: 227-47.

Block, N.J., O. Flanagan and G. Güzeldere (eds.) (1997). The Nature of Consciousness.  Cambridge, MA: Bradford Books / MIT Press.

Byrne, A. (1997).  “Some Like It HOT: Consciousness and Higher-Order Thoughts.”  Philosophical Studies 86: 103-29.

Carruthers, P. (1996).  Language, Thought and Consciousness.  Cambridge: Cambridge University Press.

Carruthers, P. (2000).  Phenomenal Consciousness.  Cambridge: Cambridge University Press.

Carruthers, P. (2001).  “Higher-Order Theories of Consciousness.”  In E.N. Zalta (ed.), The Stanford Encyclopedia of Philosophy <>.

Dretske, F. (1993).  “Conscious Experience.”  Mind 102: 263-83; reprinted in Block, Flanagan and Güzeldere (1997).

Dretske, F. (1995).  Naturalizing the Mind.  Cambridge, MA: Bradford Books / MIT Press.

Dretske, F. (in press).  “How Do You Know You Are Not a Zombie?”  In Gertler (in press).

Francescotti, R.M. (1995).  “Higher-Order Thoughts and Conscious Experience.”  Philosophical Psychology 8: 239-54.

Gennaro, R. (1996).  Consciousness and Self-Consciousness.  Philadelphia, PA: John Benjamins Publishing Co.

Gertler, B. (ed.) (in press).  Privileged Access and First Person Authority.  Aldershot: Ashgate Publishing Limited.

Güzeldere, G. (1997).  “Is Consciousness the Perception of What Passes In One’s Own Mind?”  In Block, Flanagan and Güzeldere (1997).

Hardcastle, V.G. (1999).  The Myth of Pain.  Cambridge, MA: MIT Press.

Levine, J. (2001).  Purple Haze.  Oxford: Oxford University Press.

Loar, B. (1990).  “Phenomenal Properties.”  In J. Tomberlin (ed.), Philosophical Perspectives: Action Theory and Philosophy of Mind.  Atascadero, CA: Ridgeview Publishing.

Lycan, W.G. (1986).  Consciousness.  Cambridge, MA: Bradford Books / MIT Press.

Lycan, W.G. (1996).  Consciousness and Experience.  Cambridge, MA: Bradford Books / MIT Press.

Lycan, W.G. (1998).  “In Defense of the Representational Theory of Qualia (Replies to Neander, Rey and Tye).”  In Tomberlin (1998).

Lycan, W.G. (1999).  “A Response to Carruthers’ ‘Natural Theories of Consciousness’.”  Psyche, Vol. 5 <>.

Lycan, W.G. (2001).  “A Simple Argument for a Higher-Order Representation Theory of Consciousness.”  Analysis 61: 3-4.

Lycan, W.G. (2002).  “The Plurality of Consciousness,” in J.M. Larrazabal and L.A. Perez Miranda (eds.), Language, Knowledge, and Representation (Kluwer Academic Publishing).  A much expanded version is forthcoming in Philosophic Exchange.

Lycan, W.G. (in press).  “Dretske’s Ways of Introspecting.”  In Gertler (in press).

Lycan, W.G., and Z. Ryder (2002).  “The Loneliness of the Long-Distance Truck Driver.”  Unpublished MS.

Neander, K. (1998).  “The Division of Phenomenal Labor: A Problem for Representational Theories of Consciousness.”  In Tomberlin (1998).

Nelkin, N. (1989).  “Unconscious Sensations.”  Philosophical Psychology 2: 129-41.

Palmer, D. (1975).  “Unfelt Pains.”  American Philosophical Quarterly 12: 289-98.

Pitcher, G. (1970).  “Pain Perception.”  Philosophical Review 79: 368-93.

Rey, G. (1983).  “A Reason for Doubting the Existence of Consciousness.”  In Davidson, Schwartz and Shapiro (eds.), Consciousness and Self-Regulation, Vol. 3.  New York: Plenum Press, pp. 1-39.

Rosenthal, D. (1986).  “Two Concepts of Consciousness.”  Philosophical Studies 49: 329-59.

Rosenthal, D. (1990).  “A Theory of Consciousness.”  Report No. 40, Research Group on Mind and Brain, Zentrum für Interdisziplinäre Forschung (Bielefeld, Germany).

Rosenthal, D. (1991a).  “The Independence of Consciousness and Sensory Quality,” in Villanueva (1991).

Rosenthal, D. (1991b).  “Explaining Consciousness.”  Unpublished MS, presented at the Washington University conference in Philosophy of Mind (December, 1991).

Rosenthal, D. (1993).  “Thinking That One Thinks.”  In M. Davies and G. Humphreys (eds.), Consciousness.  Oxford: Basil Blackwell.

Rosenthal, D. (1997).  “A Theory of Consciousness.”  In Block, Flanagan and Güzeldere (1997).  An expanded and updated version of Rosenthal (1990).

Rowlands, M. (2001).  “Consciousness and Higher-Order Thoughts.” Mind and Language 16: 290-310.

Shoemaker, S. (1994).  “Self-Knowledge and ‘Inner Sense’, Lecture II: The Broad Perceptual Model.”  Philosophy and Phenomenological Research 54: 271-90.

Tomberlin, J.E. (ed.) (1998).  Language, Mind, and Ontology (Philosophical Perspectives, Vol. 12).  Atascadero, CA: Ridgeview Publishing.

Tye, M. (1995).  Ten Problems of Consciousness.  Cambridge, MA: Bradford Books/MIT Press.

Tye, M. (2002).  “Representationalism and the Transparency of Experience.” Noûs 36: 137-51.

Van Gulick, R. (2001).  “Inward and Upward: Reflection, Introspection, and Self-Awareness.”  Philosophical Topics 28: 275-305.

Villanueva, E. (ed.) (1991).  Philosophical Issues, I: Consciousness.  Atascadero, CA: Ridgeview Publishing.