
[Note: There is a followup to this essay here]

Ray Kurzweil pisses me off.

His name came up last night at Science Pub, which is a regular event, hosted by a friend of mine, that brings in guest speakers on a wide range of different science and technology related topics to talk in front of an audience at a large pub. There's beer and pizza and really smart scientists talking about things they're really passionate about, and if you live in Portland, Oregon (or Eugene or Hillsboro; my friend is branching out), I can't recommend them enough.

Before I can talk about why Ray Kurzweil pisses me off--or, more precisely, before I can talk about some of the reasons Ray Kurzweil pisses me off, as an exhaustive list would most surely strain my patience to write and your patience to read--it is first necessary to talk about what I call the "da Vinci effect."






Leonardo da Vinci is, in my opinion, one of the greatest human beings who has ever lived. He embodies the best in our desire to learn; he was interested in painting and sculpture and anatomy and engineering and just about every other thing worth knowing about, and he took time off of creating some of the most incredible works of art the human species has yet created to invent the helicopter, the armored personnel carrier, the barrel spring, the Gatling gun, and the automated artillery fuze...pausing along the way to record innovations in geography, hydraulics, music, and a whole lot of other stuff.

However, most of his inventions, while sound in principle, were crippled by the fact that he could not conceive of any power source other than muscle power. The steam engine was still more than two and a half centuries away; the internal combustion engine, another half-century or so after that.

da Vinci had the ability to anticipate the broad outlines of some really amazing things, but he could not build them, because he lacked one essential element whose design and operation were way beyond him or the society he lived in, both in theory and in practice.

I tend to call this the "da Vinci effect"--the ability to see how something might be possible, but to be missing one key component that's so far ahead of the technology of the day that it's not possible even to hypothesize, except perhaps in broad, general terms, how it might work, and not possible even to anticipate with any kind of accuracy how long it might take before the thing becomes reachable.






Charles Babbage's computing engines are another example of an idea whose realization was held back by the da Vinci effect.

Babbage reasoned--quite accurately--that it was possible to build a machine capable of mathematical computation. He also reasoned that it would be possible to construct such a machine in such a way that it could be fed a program--a sequence of logical steps, each representing some operation to carry out--and that on the conclusion of such a program, the machine would have solved a problem. This last bit, embodied in his later design for the Analytical Engine, differentiated his conception of a computational engine from other devices (such as the Antikythera mechanism), which were built to solve one particular problem and could not be programmed.

The technology of his time, specifically with respect to precision metalworking, meant his design for a mechanical computer was never realized in his lifetime. Today, we use devices every day that operate by principles he imagined, but they aren't mechanical; in place of gears and levers, they use gates that control the flow of electrons--something he could never have envisioned given the understanding of his era.
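The principle the Difference Engine mechanized is worth seeing concretely: a degree-n polynomial has constant n-th finite differences, so a table of its values can be extended using nothing but addition--exactly the operation a stack of geared wheels can perform. A minimal sketch (the function name and interface here are mine, purely illustrative):

```python
def difference_table(poly_values, steps):
    """Given the first values of a polynomial at x = 0, 1, 2, ...,
    extend the table by `steps` entries using repeated addition only,
    the way the Difference Engine's columns of wheels did."""
    # Build the rows of finite differences until a row has one entry.
    diffs = [list(poly_values)]
    while len(diffs[-1]) > 1:
        row = diffs[-1]
        diffs.append([row[i + 1] - row[i] for i in range(len(row) - 1)])
    # The bottom row is constant; cascade additions upward to get
    # each new table entry -- no multiplication anywhere.
    cols = [row[-1] for row in diffs]  # current "wheel" state, top to bottom
    out = list(poly_values)
    for _ in range(steps):
        for i in range(len(cols) - 2, -1, -1):
            cols[i] += cols[i + 1]
        out.append(cols[0])
    return out

# x^2 at x = 0..3, extended four more steps by addition alone:
print(difference_table([0, 1, 4, 9], 4))  # [0, 1, 4, 9, 16, 25, 36, 49]
```

The point is that the *idea* was complete; only the manufacturing tolerances of Victorian metalwork stood between this algorithm and a working machine.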




One of the speakers at last night's Science Pub was Dr. Larry Sherman, a neurobiologist and musician who runs a research lab here in Oregon that's currently doing a lot of cutting-edge work in neurobiology. He's one of my heroes1; I've seen him present several times now, and he's a fantastic speaker.

Now, when I was in school studying neurobiology, things were very simple. You had two kinds of cells in your brain: neurons, which did all the heavy lifting involved in the process of cognition and behavior, and glial cells, which provided support for the neurons, nourished them, repaired damage, and cleaned up the debris from injury or dead cells.

There are a couple of broad classifications of glial cells: astrocytes and microglia. Astrocytes, shown in green in this picture, provide a physical scaffold to hold neurons (in blue) in place. They wrap the axons of neurons in protective sheaths, and they absorb nutrients and oxygen from blood vessels, which they then pass on to the neurons. Microglia are cells that are kind of like little amoebas; they swim around in your brain locating dead or dying cells, pathogens, and other forms of debris, and eating them.

So that's the background.




Ray Kurzweil is a self-styled "futurist," transhumanist, and author. He's also a Pollyanna with little real rubber-on-road understanding of the challenges that nanotechnology and biotechnology face. He talks a great deal about AI, human/machine interface, and uploading--the process of modeling a brain in a computer such that the computer is conscious and aware, with all the knowledge and personality of the person being modeled.

He gets a lot of it wrong, but it's the last bit he gets really wrong. Not the general outlines, mind you, but certainly the timetable. He's the guy who looks at da Vinci's notebook and says "Wow, a flying machine? That's awesome! Look how detailed these drawings are. I bet we could build one of these by next spring!"

Anyway, his name came up during the Q&A at Science Pub, and I kind of groaned. Not as much as I did when Dr. Sherman suggested that a person whose neurons had been replaced with mechanical analogues wouldn't be a person any more, but I groaned nonetheless.

Afterward, I had a chance to talk to Dr. Sherman briefly. The conversation was short; only just long enough for him to completely blow my mind, make me believe that a lot of ideas about uploading are limited by the da Vinci effect, and to suggest that much brain modeling research currently going on is (in his words) "totally wrong".




It turns out that most of what I was taught about neurobiology was utterly wrong. Our understanding of the brain has exploded in the last few decades. We've learned that people can and do grow new brain cells all the time, throughout their lives. And we've learned that the glial cells do a whole lot more than we thought they did.

Astrocytes, long believed to be nothing but scaffolding and cafeteria workers, turn out to be strongly implicated in learning and cognition. They not only support the neurons in your brain; they guide the formation of new neural connections, the process by which memory and learning work. They promote the growth of new neural pathways, and they also determine (at least to some degree) how and where those new pathways form.

In fact, human beings have a greater variety of astrocyte types than other vertebrates do. According to my brief conversation with Dr. Sherman, researchers have taken human astrocytes and implanted them in developing mice, and discovered an apparent increase in the cognitive functions of those mice even though the neurons themselves were no different.

And, more recently, it turns out that microglia--the garbage collectors and scavengers of the brain--can influence high-order behavior as well.

The last bit is really important, and it involves hox genes.






A quick overview of hox genes. These are genes which control the expression of other genes, and which are involved in determining how an organism's body develops. You (and monkeys and mice and fruit flies and earthworms) have hox genes--pretty much the same hox genes, in fact--that represent an overall body plan. They do things like say "Ah, this bit will become a torso, so I will switch on the genes that correspond to forming arms and legs here, and switch off the genes responsible for making eyeballs or toes." Or "This bit is the head, so I will switch on the eyeball-forming genes and the mouth-forming genes, and switch off the leg-forming genes."

Mutations to hox genes generally cause gross physical abnormalities. In fruit flies, incorrect hox gene expression can cause the fly to sprout legs instead of antennae, or to grow wings from strange parts of its body. In humans, hox gene malfunctions can cause a number of really bizarre and usually fatal birth defects--growing tiny limbs out of eye sockets, that sort of thing.

And it appears that a hox gene mutation can result in obsessive-compulsive disorder.

And more bizarrely than that, this hox gene mutation affects the way microglia form.




Think about how bizarre that is for a minute. The genes responsible for regulating overall body plan can cause changes in microglia--little amoeba scavengers that roam around in the brain. And that change to those scavengers can result in gross high-level behavioral differences.

Not only are we not in Kansas any more, we're not even on the same continent. This is absolutely not what anyone would expect, given our knowledge of the brain even twenty years ago.

Which brings us back 'round to da Vinci.




Right now, most attempts to model the brain look only at the neurons, and disregard the glial cells. Now, there's value to this. The brain is really (really really really) complex, and just developing tools able to model billions of cells and hundreds or thousands of billions of interconnections is really, really hard. We're laying the foundation, even with simple models, that lets us construct the computational and informatics tools for handling a problem of mind-boggling scope.

But there's still a critical bit missing. Or critical bits, really. We're missing the computational bits that would allow us to model a system of this size and scope, or even to be able to map out such a system for the purpose of modeling it. A lot of folks blithely assume Moore's Law will take care of that for us, but I'm not so sure. Even assuming a computer of infinite power and capability, if you want to upload a person, you still have the task of being able to read the states and connection pathways of many billions of very small cells, and I'm not convinced we even know quite what those tools look like yet.
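The arithmetic behind that pessimism is easy to sketch. All the figures below are rough assumptions--the neuron and synapse counts are order-of-magnitude estimates from the literature, and the bytes-per-synapse number is pure guesswork--but even this neurons-only estimate, ignoring all the glial machinery the rest of this essay argues we can't ignore, lands in the tens of petabytes:

```python
# Back-of-envelope scale of a whole-brain map. Every number here is
# an assumption, not a measurement.
NEURONS = 86e9              # ~86 billion neurons (common estimate)
SYNAPSES_PER_NEURON = 1e4   # ~10,000 connections each (rough)
BYTES_PER_SYNAPSE = 64      # guessed: weight, timing, plasticity, wiring

total_synapses = NEURONS * SYNAPSES_PER_NEURON
total_bytes = total_synapses * BYTES_PER_SYNAPSE

print(f"{total_synapses:.1e} synapses")       # 8.6e+14 synapses
print(f"{total_bytes / 1e15:.0f} petabytes")  # 55 petabytes
```

And that's just storing a static snapshot of the neurons. Reading those states out of living tissue, and adding whatever state the astrocytes and microglia carry, is the part for which--as with da Vinci's power source--we don't even know what the tools look like yet.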

But on top of that, when you consider that we're missing a big part of the picture of how cognition happens--we're looking at only one part of the system, and the mechanism by which glial cells promote, regulate, and influence high-level cognitive tasks is astonishingly poorly understood--it becomes clear (at least to me, anyway) that uploading is something that isn't going to happen soon.

We can, like da Vinci, sketch out the principles by which it might work. There is nothing in the laws of physics that suggest it can't be done, and in fact I believe that it absolutely can and will, eventually, be done.

But the more I look at the problem, the more it seems to me that there's a key bit missing. And I don't even think we're in a position yet to figure out what that key bit looks like, much less how it can be built. It may be possible that when we do model brains, the model isn't going to look anything like what we think of as a conventional computer at all, much like when we built general-purpose programmable devices, they didn't look like Babbage's difference engines at all.




1 Or would be, if it weren't for the fact that he rejects personhood theory, which is something I'm still a bit surprised by. If I ever have the opportunity to talk with him over dinner, I want to discuss personhood theory with him, oh yes.


Comments

( 18 comments — Leave a comment )
(Deleted comment)
tacit
Oct. 27th, 2011 05:37 pm (UTC)
I haven't read that. It looks interesting, though.

Gender is a bit of a sticky wicket all on its own. On the one hand, things like gender dysphoria and observable differences in the wiring of men's brains and women's brains point to the notion that there are biological differences. On the other hand, brains are plastic, so it's hard to say how significant the differences we see in MRI scans are, and a lot of folks want to try to ascribe to biology what seem clearly (to me) matters of social programming. (And don't even get me started on the evolutionary psychology folks who use naive misunderstanding of things like reproductive strategies to justify cultural norms by saying "men are hardwired to want to have sex with lots of people, women are hardwired to want commitment"--an idea that sounds plausible enough to laypeople that it gets a lot of currency even though it's total rubbish, back to front.)
(Deleted comment)
tacit
Oct. 27th, 2011 06:18 pm (UTC)
For me personally, I see a lot of value in understanding the things that make me who I am. I'd love to know how much of my "me"-ness is influenced by genetics, how much by my environment and upbringing, how much through epigenetics, how much through experience, how much through choice (assuming such a thing exists, which is far from a given)...I'm curious. I think that more knowledge is always better than less knowledge, and I think that knowing these things gives us insight into the nature of the human condition.

Having said that, it is almost universally true that people who are not part of the dominant cultural mainstream often have to defend themselves or explain themselves to members of that dominant cultural mainstream. It's simply one more aspect of social privilege, and it's a bit silly to expect folks to have to do it. Like you say, it doesn't (or it shouldn't) matter from a cultural, social, or legal perspective if I'm straight because I was born that way, or if someone who is gay chooses to be gay or is genetically gay, or whether being trans is an environmental thing or a matter of biological destiny. Regardless of the reason, all these things are part of human variability, and there's more to life than conforming to whatever social norms happen to be fashionable at the moment.
apestyle
Oct. 27th, 2011 12:23 am (UTC)
I'd like to read a detailed description of what you consider personhood theory. The cursory glance on a google search seemed pallid.
ashbet
Oct. 27th, 2011 06:45 am (UTC)
I'm also curious, because there seem to be a number of conflicting, biased-source interpretations out there. And they're all mixed in with fetal-personhood legislation, which is another crazy kettle of fish altogether.

-- A <3
tacit
Oct. 27th, 2011 05:55 pm (UTC)
If you really want an in-depth overview of the philosophical and ethical ideas behind personhood theory, I highly recommend James Hughes' book Citizen Cyborg. It covers more than just personhood theory--essentially, it's a study of the ethical and social implications of transhuman ideas--but it's the best primer for personhood theory I've found so far.

There don't seem to be any good Web sites on personhood theory that I can find in Google; when I do a search, I see a lot of New Age rubbish and a lot of religious arguments, but nothing that really talks about it in the transhumanist sense.

The Reader's Digest condensed version of the transhumanist notion of personhood theory is this: The things that we think of as "inalienable human rights" are not attached only to human beings. These ideas extend to anything that has sapience and a reflexive self-awareness; that is, any entity which is capable of reasoned choice (leaving aside for a moment the debate about whether free will exists) with cognitive understanding of the effects of its actions on others is deserving of all the rights and responsibilities we ordinarily attach to people.

So, by this notion, a true "strong" AI, a non-human sapient alien, a cyborg, an animal that's been genetically modified to have human-level sapience, and a human consciousness transferred into a computer are all "people," and have equal claim to the rights and responsibilities thereof.

A lot of religious traditions, particularly Roman Catholicism, vigorously oppose this idea. The Catholic position is that humans alone are created in God's image, and it is that special and unique status that gives humans the inalienable rights we call "human rights." That's all fine and dandy when human beings are the only sapient entities we know of, but I think the time is coming when that will no longer be true. Given how long it took a lot of religious traditions to recognize the notion that people with different skin color were actually people, I'm not optimistic about the fate of the first non-human sapience we invent or discover.
apestyle
Oct. 27th, 2011 06:21 pm (UTC)
So the edge case that I think about is the infamous Terri Schiavo case. She was (probably) no longer sapient, so did she qualify as a person still?

I know. Total edge case, and probably tiresome - but I'm curious nonetheless.
ab3nd
Oct. 27th, 2011 08:53 pm (UTC)
There's actually a philosophical construct that gets used in this kind of discussion, called a P-zombie (the "P", for philosophical, distinguishes them from Voodoo zombies, or rage virus zombies). P-zombies have no cognitive understanding of the effects of their actions on others, and in fact, no cognition at all, but in all other respects act just like normal people. They pay their taxes, eat a balanced breakfast, kiss their zombie kids, etc. Obviously, though, they have no internal "I", no real "self" etc. So they are not "sapient" in any real sense of the word.

So you can set P-zombies on fire in front of their P-zombie families and cackle about it because hey, they're not even as sapient as a mouse.

If you think that's a problem, chew on this: You cannot prove to anyone (outside of yourself) that you are sapient (that is, not a P-zombie). There is no way to test that you actually think, only that you can display the outward signs that we as a society ascribe to thinking things. While those signs are mighty convincing, they are not actually proof, just suggestive evidence.

In my opinion, it's better to err on the side of "Sapient until proven otherwise". Anything that displays signs of sapience should be treated as such, until and unless someone comes up with a proof that it isn't. Proofs of non-existence being what they are, this stance means there will probably be some things that are not sapient, but get treated like they are. This may be inconvenient at times, but it beats unwittingly committing atrocities.
tacit
Oct. 27th, 2011 10:06 pm (UTC)
One of the most common objections to the notion of personhood theory is the idea that it can be used against people; "See, they're saying that anyone who doesn't have a certain minimum level of intelligence is not a person! That's eugenics!"

But it's a philosophy that applies globally rather than locally; it's usually talked about in reference to entire classes of entities rather than specific individuals. So human beings are people, even if they're asleep or injured or whatever. Sapient computers are people, even if they're being backed up. And so on.

Now, I would say that someone who's experienced massive brain trauma that has destroyed the parts of the brain responsible for reasoning and awareness is, to all intents and purposes, dead, even if that person's heart is still beating. Everything that made that person who they were is gone. But that's a bioethics issue not necessarily related to personhood theory.

Now, I think that, functionally, Terri Schiavo was dead. Still, that doesn't necessarily mean that it's a good idea to disconnect her from life support. We don't know when someone will come up with some sort of breakthrough that allows us to promote neurogenesis in a severely damaged brain or something like that. (There's another ethical issue there, of course; if you have a patient like Terri Schiavo who has experienced gross large-scale brain trauma, and you find a way to fix that brain, is the person who wakes up the same person who was injured? Probably not. Memories, behavior, even personality, are likely to be different. But that's also a whole 'nother conversation.)

For me, though, the most ironic part of the Terri Schiavo case is that every day, people who have no health insurance and whose trauma is not as bad as Terri Schiavo's are disconnected from life support simply because they don't have the money to continue it, and nobody signs petitions or engages in frantic legal scrambling to prevent that. A lot of people seem attached to the notion that as a person, you have a right to life if you're brain damaged, but not if you're uninsured.
tacit
Oct. 27th, 2011 10:07 pm (UTC)
In my opinion, it's better to err on the side of "Sapient until proven otherwise". Anything that displays signs of sapience should be treated as such, until and unless someone comes up with a proof that it isn't. Proofs of non-existence being what they are, this stance means there will probably be some things that are not sapient, but get treated like they are. This may be inconvenient at times, but it beats unwittingly committing atrocities.

Just so. I couldn't have put it better.
tripartite
Oct. 27th, 2011 03:01 am (UTC)
Ghost in the Shell and Psychology of Mind in college always made me wonder if we'd ever be able to upload our consciousness and memories to a device and/or if we could make an accurate model of a human body (or parts) to replace a failing one. Thanks for the info.
dreamsinanime
Oct. 27th, 2011 05:35 am (UTC)
Thank you for writing this! I learned quite a few things about the brain just now. ^_^
keep_up
Oct. 27th, 2011 09:09 am (UTC)
Fascinating!

I returned to University for an after-degree in Psychology early this year. One thing that really confuses me about the whole idea of uploaded consciousness is: then what? Does this plan assume we also learn to simulate the sensory input that is as responsible for human experience as actual consciousness and memories?

Most people don't seem to realize how embodied our psyche is: we can't handle sensory deprivation, intensity of our emotions depends on the body (people with spine damage experience certain emotions less intensely depending on where the injury is located and how much of their body is paralyzed), and emotional and cognitive processes are inextricably linked. What will happen to all of these mechanisms after the upload and how will their modification affect perception and cognition?
tacit
Oct. 27th, 2011 05:42 pm (UTC)
Yep, in principle, uploading a person's consciousness includes attaching some sort of sensory apparatus and, presumably, the ability to interact with the outside world.

It's interesting just how much of our emotional responses are dependent on our bodies. I've heard that quadriplegics often report an overall dulling of their emotional responses. Presumably, in a completely virtual person, this could be modeled as well, though even if it's not, I'll still take it. :) Presumably, "experiencing a change in emotional response" is preferable to "being dead."
xaotica
Oct. 27th, 2011 08:45 pm (UTC)

despite being a HCI student, i've only paid cursory attention to kurzweil... so i just read through "the age of spiritual machines" on wikipedia.

i found this amusing:
"Humans are beginning to have deep relationships with automated personalities, which hold some advantages over human partners. The depth of some computer personalities convinces some people that they should be accorded more rights."

have you seen http://en.wikipedia.org/wiki/Plug_%26_Pray ?
tacit
Oct. 27th, 2011 10:12 pm (UTC)
I haven't seen the movie, though I suspect I've seen the arguments from both sides already.

This is a subject that a lot of otherwise reasonable people seem to have a lot of trouble discussing reasonably. On the one hand, we have Kurzweil, who babbles enthusiastically about AI lovers and "transcending biology," and on the other, we have the Vatican and secularists like Francis Fukuyama talking about how augmenting human beings in any way whatsoever, even if it's via mechanisms like cochlear implants that surpass "normal" hearing, somehow "undermines human dignity" and leads to a disaster for all.

I can only imagine what the debates would look like if these folks were around during the Agrarian Revolution or the development of the steam engine.
xaotica
Oct. 27th, 2011 10:39 pm (UTC)

it seems like it's partly about what gets the person an audience. "facebook will save us all!" is a headline. "facebook is socially isolating and destroys relationships!" is a headline. "facebook invades privacy!" is a headline. "facebook is a website with some benefits and some drawbacks... some interesting technology and some privacy issues... some negative impact and some positive impact..." well, that won't get as many readers. many complex technology issues can't be easily summarized.

but maybe i'm just jealous because i don't think i could will myself to focus on the utopian aspects of technology without simultaneously thinking about the problems ;)



allburningup
Oct. 31st, 2011 06:38 pm (UTC)
...it's the last bit he gets really wrong. Not the general outlines, mind you, but certainly the timetable.

Perhaps I've misread this post, but do I understand correctly that Kurzweil pisses you off because he thinks uploading will happen a lot sooner than is reasonable to believe?
tacit
Oct. 31st, 2011 06:55 pm (UTC)
He pisses me off because he makes proclamations with little grounding in reality, little understanding of the state of the research, and a reckless optimism that's so exuberant as to be in the land of fantasy.

Not only does he get the timetable wrong (by centuries, in some cases, I suspect), but he'll say things like "Right now, tens of thousands of scientists are working on modeling the brain" when the reality is that there are about eleven, and they're all desperately underfunded. This sort of radical overstatement paints a false picture of what's actually happening, and I find that troubling.