July 30, 2009

Hey Leon, whatcha reading? (WIT?)

I'm glad you asked. I just finished reading What Is Thought? by Eric Baum, with which I was quite impressed. Baum builds a very strong, Occam's Razor-based case for how minds, qualia, and thought emerge from the calorically expensive blobs of tissue encased in our skulls. His argument is strongly rooted in evolution, primarily the need for a brain to acquire the information to make decisions in a particular environment and in real time. The latter is important, because it means the brain relies heavily on a vast number of heuristic shortcuts, i.e., the sorts of things that make optical illusions possible. The strength of his argument also rests on what was -- to me -- a new notion, namely that compression is isomorphic to understanding.

To illustrate that, consider the planets in our solar system (add Pluto if it suits you; it doesn't change the point). If I wanted to tell you where they were, I'd have to send you a set of coordinates for all of them. If I wanted to tell you where they were a second later, I'd have to send an entirely new set of data, the same size as the initial set. Unless I know Kepler's laws. If I know them, I can send them to you first, then send you the initial positions. Then, barring perturbations, I never need to tell you where the planets are again; you can always calculate their positions from the initial data and the laws. Instead of having to convey a fixed amount of data every so often to tell you where all the planets are, I've given you a vast amount of predictive power, and the data it can generate greatly exceeds what I sent you. The compression of all that data into a finite set of laws which govern it is identical to understanding something about the universe which contains that data. I'm sure this is an old notion to some, but it was new to me, and I'm still considering how best to exploit it if I get around to any serious AI work at some point.
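The planet example can be sketched in a few lines of code. This is a toy illustration under simplifying assumptions (a single circular orbit standing in for Kepler's laws; the function and parameter names are mine, not from the book): transmitting the "law" plus initial conditions costs a handful of numbers, while transmitting raw coordinates costs two numbers per tick, forever.

```python
import math

# Toy stand-in for Kepler's laws: a planet on a circular orbit.
# The "law" is the orbit model; the "initial data" is (radius, angular
# speed, starting phase).

def position(r, omega, phase, t):
    """Reconstruct the planet's position at time t from the compressed description."""
    angle = phase + omega * t
    return (r * math.cos(angle), r * math.sin(angle))

# Option A: transmit raw coordinates for every tick -- cost grows without bound.
ticks = 1000
raw_stream = [position(1.0, 0.01, 0.0, t) for t in range(ticks)]  # 2000 numbers

# Option B: transmit the law's parameters once -- 3 numbers, unlimited predictions.
params = (1.0, 0.01, 0.0)

# The receiver now computes any position on demand; no further transmission needed.
reconstructed = position(*params, 500)
assert reconstructed == raw_stream[500]

print(f"raw stream: {2 * ticks} numbers; compressed: {len(params)} numbers")
```

The point survives the simplification: the compressed description is smaller than the data it generates, and possessing it is exactly what "understanding the orbit" means here.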

His conclusion -- for which I believe he makes a very strong case -- is that the mind is composed of a number of summarizing modules, orchestrated by or reporting to a central module, something akin to the "main" function in a C program. This is an important result, suggesting that our continuity of consciousness, the singular feel of "I" and "me", is not a mere emergent property of multiple agents, as suggested by Minsky, but is an agent itself. Interestingly, Baum suggests that it is not uncommon for the other agents to misinform, or even deliberately disinform, this central agent. Each of the summarizing agents has its own interests (such as they are, shaped by petaflops of prior evolutionary computation and a "desire" to be conserved in future computation) and they are not always best served by honesty.
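The architecture described above can be caricatured in a short sketch. To be clear, this is not Baum's model -- the module names, the world representation, and the decision rule are all invented for illustration -- but it shows the shape of the idea: summarizing modules report to a central "main" agent, and a self-interested module can bias its summary so the central agent acts on misinformation.

```python
# Toy sketch (my invention, not Baum's): summarizing modules report to a
# central agent, and one module shades its report to serve its own interests.

def visual_module(world):
    # Honest summary of what the senses report.
    return {"sees_food": world["food_nearby"]}

def hunger_module(world):
    # Self-interested reporting: exaggerates hunger to bias the decision.
    return {"hunger": min(1.0, world["hunger"] * 1.5)}

def central_agent(world, modules):
    """The 'main()' of the mind: aggregate the summaries, pick an action."""
    report = {}
    for module in modules:
        report.update(module(world))
    if report["sees_food"] and report["hunger"] > 0.5:
        return "eat"
    return "explore"

world = {"food_nearby": True, "hunger": 0.4}
# The honest hunger level (0.4) would not trigger eating; the inflated
# report (0.6) does -- the central agent acts on disinformation.
print(central_agent(world, [visual_module, hunger_module]))  # "eat"
```

The interesting part is the mismatch: the central agent's decision is coherent given its inputs, but the inputs themselves were shaded by a module's own agenda.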

There's a lot more to it, and I highly recommend the book. As Baum notes in the opening, in 1944 Schrödinger published a short book called What Is Life?, expressing everything he knew about what life must be, based on the physics of the time. Schrödinger theorized that some sort of self-referential, self-replicating information must be inherent in the smallest structure of an individual cell, since all life spends at least one moment as a single cell. Nine years later, Watson and Crick confirmed Schrödinger's theory with the structure of DNA, and a decade after that we knew the basic syntax of the genetic code.

Posted by: leoncaruthers at 08:18 AM | No Comments | Add Comment