The Library of Alexandria

"Conscious Realism" and "Multimodal User Interface" theories

Jim Carnicelli's AI Blog


Thursday, 9/27/2007

"Conscious Realism" and "Multimodal User Interface" theories


I recently sent an email to Donald Hoffman, professor at the University of California, Irvine, with kudos for his book, Visual Intelligence, which has had a profound impact on my thinking about perception. Understandably, he's very busy kicking off the new school year, so I was grateful that he sent at least a brief response and a reference to his latest published paper, titled Conscious Realism and the Mind-Body Problem. Naturally, I was eager to read it.

Much of the study of how human consciousness arises stems from the assumption that consciousness is a product of physical processes in the brain. This paper starts from the opposite assumption: that "consciousness creates brain activity, and indeed creates all objects and properties of the physical world." When I first read this in the abstract, I must have largely overlooked its significance. Having read Visual Intelligence, I'm familiar with Hoffman's focus on how our minds construct the things we perceive, so I took this summary as shorthand for that construction of the contents of consciousness. It becomes apparent, though, that the claim is far more literal than I had assumed.

Hoffman begins by explaining the ubiquity of a central assumption as follows. "A goal of perception is to match or approximate true properties of an objective physical environment. We call this the hypothesis of faithful depiction (HFD)." After giving lots of examples of this assumption and reasons why it's taken for granted, Hoffman declares his rejection of it:

Now, I'll state here that most of Hoffman's claims in this paper appear logically valid and, on their face, uncontested. But I would say this one probably isn't logically supported: that evolutionary considerations require the rejection of HFD. By and large, however, the paper claims that it is not necessary to assume there is an objective physical world in order to study and understand consciousness, which seems acceptable.

The term "objective physical world" deserves some explanation. It identifies the view that there is a single reality that exists without regard to observers. If there is an apple on the table before two people, the apple really is there, whether either of them perceives it. Naturally, one would imagine that if one of them can see the apple, the other one probably can (barring obstructions), because both of them have access to information (e.g., light) reflected off the apple and into both their eyes. They may see different sides of the apple, but the apple is definitively there.

To be sure, one should not dismiss Hoffman as a fringe nut who claims there is no reality, per se -- only people and their subjective consciousnesses. He doesn't in this paper. In fact, he does appear to accept the assumption that there really is an objective reality, but holds that we don't have direct "access" to it. A classic example of this distinction is treating the table not as a solid object with straight-edged surfaces, but as a collection of atoms and, mostly, empty space -- and, as such, rough, continuously changing surfaces. In this sense, there really isn't a table; that's just a percept (or concept) we use to refer to the collection of atoms.

To help illustrate the distinction between what one perceives and the subject matter of perception, Hoffman introduces the analogy of deleting a computer file by "dragging" a file icon and "dropping" it onto a trash can icon. This action is intuitive and designed specifically as an analogy of the actual file delete operation, but it bears no resemblance to what actually goes on under the surface. In fact, even the icon is not equivalent to the file; it's merely a percept specifically designed to represent the file to the end user. By analogy, Hoffman refers to the table or the apple as merely "icons" we create in our minds to represent what most people would reflexively call "real objects". In fact, to the person who says, "no, the apple is just a bunch of atoms," Hoffman would in turn say, "the atoms are themselves icons we create."

Hoffman introduces the term "multimodal user interface", or "MUI", to summarize what consciousness is. In contrast to the view that perception is all about constructing a mental model that closely resembles reality, Hoffman claims perception is about constructing practical models that "get the job done". And just as computer designers construct icon-based interfaces to make it easier for humans to understand and practically manage information, our own minds set out to construct "practical" percepts in order to simplify what we do. But the mental models, Hoffman claims, need not bear any resemblance to what is being modeled.

To be sure, Hoffman may say the percepts -- mental models -- a conscious entity holds bear no resemblance to their referents, but he doesn't claim that there is no correlation between them. Hoffman says that user interfaces, including our own consciousnesses, by design have the following characteristics: friendly formatting, concealed causality, clued conduct, and ostensible objectivity.

That is, a user interface's "purpose" is to distill immensely complex behaviors down to practical "icons" of objects and behaviors that stand for that underlying complexity, but don't literally mirror it. Take the file-delete example. The icon on the desktop is a sufficient stand-in for a file, even though the file, a pattern of magnetic fields on a metal platter, bears no resemblance to the icon. It's a "friendly format", in this sense.

Further, the action of dragging a file and dropping it onto a trash can icon to "delete it" has its own causal chain, which conceals the true, deeply complex causal chain that actually effects the file delete operation. Yet the drag-n-drop operation and the trash can icon give an intuitive clue of what will happen if something is dropped onto it. Finally, this drag-n-drop-to-delete operation is designed to consistently do the same thing every time, engendering in the user an ostensible sense that there is an objective operation that will always happen -- even though a moment's reflection tells us that a failure in the underlying software or hardware could cause something else to happen when one drops a file icon on the trash icon.
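The interface idea sketched above can be made concrete in a few lines of code. This is purely an illustrative sketch -- all class and method names here are hypothetical, not anything from Hoffman's paper or a real file system. The point is only that the user's single gesture (`drop`) stands in for a concealed chain of underlying steps:

```python
class FileIcon:
    """A friendly format: a simple stand-in for a complex on-disk file."""
    def __init__(self, name):
        self.name = name

class TrashCan:
    """The trash can icon hints at what dropping a file onto it will do."""
    def __init__(self):
        self.log = []  # records the concealed steps, for illustration

    def drop(self, icon):
        # Concealed causality: the user performs one gesture, but
        # underneath, several steps run (stand-ins here for directory
        # updates, block deallocation, and journal writes).
        self._update_directory(icon)
        self._free_blocks(icon)
        self._write_journal(icon)
        # Ostensible objectivity: the same gesture reliably yields the
        # same outcome, barring a hardware or software failure.
        return icon.name + " deleted"

    def _update_directory(self, icon):
        self.log.append("unlink " + icon.name)

    def _free_blocks(self, icon):
        self.log.append("free blocks of " + icon.name)

    def _write_journal(self, icon):
        self.log.append("journal delete of " + icon.name)

trash = TrashCan()
print(trash.drop(FileIcon("report.txt")))  # the user's entire view of the event
print(trash.log)                           # the causal chain the icon conceals
```

The user-facing return value and the hidden log diverge by design: the "icon" level of description is useful precisely because it doesn't mirror the machinery beneath it.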

So far, I can see a practical use for this notion for people trying to understand human perception or to engender consciousness in machines. For one, the claim is that percepts do not have to bear much resemblance to their referents in the "real world"; they just have to have practical utility. An icon in a user interface only needs to be useful enough for the user to be aware of a file's existence and to do some basic things with it. Similarly, the mental percept an antelope has of a lion in the distance only needs to be useful enough to keep the antelope alive; it doesn't need to be a highly detailed representation of the lion beyond that basic utility.

This also alludes to the view that a high-fidelity representation in a computer of the "real world" doesn't make the machine that has it any more aware of what is represented. For instance, just because a self-driving car has a 3D map of the terrain out in front doesn't mean it can "see" where the road is. It's still necessary to create a practical model of how the world works that uses this 3D representation as source data -- say, an algorithm that seeks basically level ground, defined by a threshold of variation that separates level from non-level ground. If this were the message of the paper, I would say it adds genuine value: a set of concepts and terms to help steer people away from fallacious assumptions about how consciousness works and to suggest paths for further study.
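A minimal sketch of that level-ground idea, under stated assumptions: the function name, the patch representation (a list of elevation samples in meters), and the 0.1 m threshold are all my own illustrative choices, not anything from the paper. The "practical model" here ignores nearly everything in the 3D data and keeps only one useful distinction: drivable versus not.

```python
def is_level(patch, threshold=0.1):
    """Classify a patch of terrain as level ground when the spread
    between its highest and lowest elevation samples (in meters)
    stays under the threshold."""
    return max(patch) - min(patch) < threshold

# Elevation samples standing in for patches of a 3D terrain map.
road = [0.02, 0.03, 0.01, 0.02]  # nearly flat
curb = [0.02, 0.03, 0.18, 0.20]  # a step up

print(is_level(road))  # True: treated as drivable
print(is_level(curb))  # False: treated as an obstacle
```

Like the antelope's percept of the lion, this crude flat-or-not "icon" of the terrain discards almost all of the underlying detail, yet it is exactly the kind of representation that has practical utility for the task.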

But this isn't where the paper ends. It's more where it starts. In fact, this paper is less about explaining how consciousness works than about how reality works; it's metaphysics instead of epistemology. As stated earlier, it starts with the assumption that consciousness exists and that the subject of consciousness is optional. To avoid sounding like a total subjectivist, Hoffman states that:

If Hoffman accepts the idea that there is a physical, objective reality, what is it composed of? "Conscious Realism asserts the following: The objective world, i.e., the world whose existence does not depend on the perceptions of a particular observer, consists entirely of conscious agents." Honestly, I would love to say that this claim is explained, but it really isn't. Hoffman claims that humans are not the only conscious agents, but doesn't say that tables, apples, and such are conscious, per se. "According to conscious realism, when I see a table, I interact with a system, or systems, of conscious agents," which really does seem to suggest that the table is conscious, but not clearly.

This is one of the problems I have with this paper, though. Although Hoffman rejects the notion of inanimate objects as conscious in a trippy, Disney-cartoon sense, he doesn't really elaborate on what he does mean. Moreover, if a table is labeled as conscious merely to serve as a placeholder for a physical object in the objective world, what value does this add over the simpler, more intuitive conception of the table as a physical object? It almost seems as though, in order to come up with a rigorous, clean-cut, math-friendly theory of how consciousness constructs perceptions of the world, Hoffman throws the baby out with the bathwater by claiming that even though there is an objective world, it is not composed of actual objects.

I think if Hoffman were inclined to speak of "conscious realism" and "multimodal user interfaces" as tools and techniques for studying consciousness and guides to creating it, this could be a practical concept. He could say that our perceptions of reality really do reflect, if simplistically, abstractly, and practically, an actual, objective reality. By taking pains to say there isn't really one -- or that it is entirely disconnected from our ability to perceive it -- this paper seems to do something of a disservice to science:

While I can see that it is possible, perhaps, to express other branches of science in the terminology of MUIs, I don't see how it would advance our understanding of their subject matter. Gravity was well understood by Newton, yet expressing it in terms of the theory of General Relativity makes it possible to do more with the subject matter than was possible in the purely Newtonian framework. What new insights will the physicist have as a result of expressing gravity in terms of multimodal user interfaces and with reference to heavenly bodies as conscious entities? If anything, it sounds more like this extra layer would only add to the confusion people have in trying to understand already complex concepts and could even potentially take away certain practical conceptual tools. So I don't see the point.

All that said, the MUI concept does seem to add value to my own way of thinking of perception. The four functions of a good user interface listed above (friendly formatting, concealed causality, clued conduct, ostensible objectivity) seem to shout out how scientists trying to engender perception in machines should frame their goals and concepts. But the rest of Hoffman's paper, which dabbles in the philosophy of what reality is, seems to have little use for AI research.





Copyright © 2014 The Library of Alexandria. All rights reserved.
Produced in cooperation with Carnell Automation, LLC.