
Teaching a Dog Calculus

This is actually a post about transhumanism and Outside Context Problems, and an epiphany I had last time I was in Chicago.

But first...

God damn did I wake up with a bad case of the hornies this morning. Jesus Christ in Heaven, I want to fuck. I want to feel soft skin against mine. I want to trace the curve of the neck with teeth and tongue. I want to hear the little intake of breath when I discover a sensitive spot. I want to rest my hand on the curve of the hip, I want to explore the roundness of breast with my fingertips. I want to run fingernails lightly up the back of the neck and see goosebumps form. Holy fuck it's distracting.

Also, when I crawled out of bed and stumbled into the bathroom this morning, I was all like "Ow! Ow! Ouch! Ow! What the hell?" Some time last night, it seems, the cat had scoured the house for every smallish, vaguely cylindrical object he could find, and hidden them all underneath the rug in the bathroom. Pens, a plastic travel tube of Advil, a small bullet vibrator, an AA battery...it was like walking on marbles. WTF?

None of that is what I'm actually here to say.




I've been thinking a great deal these days about Outside Context Problems. Put briefly, an Outside Context Problem is what happens when a group, society, or civilization encounters something so far outside its own context and understanding that it is not able even to understand the basic parameters of what it has encountered, much less deal with it successfully. Most civilizations encounter such a problem only once.

For example, you're a Mayan king. Life is pretty good for you; you've built a civilization at the pinnacle of technological achievement, you've dominated and largely pacified any competition you might have, you've created many wondrous things, and life is pretty comfortable.

Then, all at once, out of the blue, some folks clad in strange, impervious silver armor show up at your doorstep. They carry long sticks that belch fire and kill from great distances; some of them appear to have four legs; they claim to come from a place that you have never in your entire life even conceived might exist...

Civilizations that encounter Outside Context Problems end. Even if some members of the civilization survive, the civilization itself is irrevocably changed beyond recognition. Nothing like the original Native American societies exists today in any form that the pre-Columbians would recognize.

Typically, we think of Outside Context Problems in terms of situations that arise when one society has contact with another society that's radically different and technologically far more advanced. But I don't think it necessarily has to be that way.




In a sense, we are, right now, hard at work building our own Outside Context Problem, and it's going to be internal, not external.

Right now, as I type this, one of the hottest fields of biomedical research is brain mapping and modeling. I've mentioned several times in the past the research being done by a Swiss group to model a mammalian brain inside a supercomputer; such a model is essentially a neuron-by-neuron, connection-by-connection emulation of a brain in a computer. Such an emulation will, presumably, act exactly like its biological counterpart; it is the connections and patterns of information, not the physical wetware, that make a brain act the way it does.
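
If you want a concrete picture of what "neuron-by-neuron, connection-by-connection" means, here's a toy sketch in Python. It's purely illustrative--nothing to do with the Swiss group's actual software, and every number and name in it is made up--but it shows the basic idea: each neuron is just a voltage that accumulates input from its connections and fires when it crosses a threshold.

    # Toy neuron-by-neuron emulation (illustrative only). Each neuron leaks toward
    # zero, sums weighted input from whatever fired on the previous step, and
    # spikes when it crosses a threshold.
    import random

    N = 100                                                  # number of model neurons
    weights = [[random.gauss(0, 0.2) for _ in range(N)] for _ in range(N)]
    voltage = [0.0] * N
    THRESHOLD, LEAK = 1.0, 0.9

    def step(spiked_last):
        """Advance every neuron one time step; return the ones that fired."""
        fired = []
        for i in range(N):
            drive = sum(weights[j][i] for j in spiked_last)  # input from last step's spikes
            voltage[i] = voltage[i] * LEAK + drive
            if voltage[i] >= THRESHOLD:
                fired.append(i)
                voltage[i] = 0.0                             # reset after firing
        return fired

    spikes = random.sample(range(N), 5)                      # seed with a little random activity
    for t in range(100):
        spikes = step(spikes)

The actual research models vastly more biological detail than this, of course; the point is only that "emulation" means simulating the pieces and their connections, not programming the behavior directly.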

This group claims to be ten years from being able to model a human brain inside a computer. Ten years, and we may see the advent of true AI.




Let me backtrack a little. The field of AI has, so far, been disappointing. For decades, we have struggled to program computers to be smart. The problem is, we don't really know what we mean by "smart." Intelligence is not an easily defined thing, and it's not like you can sit down and break generalized, adaptive intelligence up into a sequence of steps.

Oh, sure, we've produced expert systems that can design computer chips, simulate bridges, and play chess far better than a human can. In fact, we don't even have grandmaster-level human/machine chess tournaments any more, because the machines always win. Always. Deep Blue, the supercomputer that beat human grandmaster Garry Kasparov in a much-publicized tournament, is by modern standards a cripple; ordinary desktop PCs today are more powerful.

But these are simple, iterative tasks. A chess-playing computer isn't smart. It can't do anything besides play chess, and it approaches chess as a simple iterative mathematical problem. That's about where AI has been for the last four decades.
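
For the curious, here's roughly what that "simple iterative mathematical problem" looks like: a bare-bones minimax search, sketched in Python. This is illustrative only--real chess engines pile on enormous amounts of pruning and chess-specific evaluation--and the helper functions here are made-up stand-ins, but the shape of the thing is just recursive arithmetic.

    # Bare-bones minimax: recursively score positions and pick the best number.
    # The game-specific helpers (legal_moves, apply_move, evaluate) are stand-ins.
    def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
        moves = legal_moves(position)
        if depth == 0 or not moves:
            return evaluate(position)            # static score of this position
        scores = (minimax(apply_move(position, m), depth - 1, not maximizing,
                          legal_moves, apply_move, evaluate) for m in moves)
        return max(scores) if maximizing else min(scores)

    # Toy usage: a "game" where a move adds or subtracts one and a bigger
    # number is better for the maximizing player.
    best = minimax(0, 3, True,
                   legal_moves=lambda p: [+1, -1],
                   apply_move=lambda p, m: p + m,
                   evaluate=lambda p: p)

There's no understanding anywhere in that loop; it's brute evaluation, which is exactly why it doesn't generalize beyond the game it's pointed at.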

New approaches, though, are not about programming computers to act smart. They are about taking systems which are smart--brains--and rebuilding them inside a computer. If this approach works, we will create our own Outside Context Problem.




Human brains are pretty pathetic, from a hardware standpoint. Our neurons are painfully, agonizingly slow. They are slow to respond, they are slow to fire, they are slow to reset after they have fired, and they are slow to form new connections. All these things limit our cognitive capabilities; they impose constraints on how adaptable our intelligence is, and how smart we can become.

Computers are fast. They encode new information rapidly and efficiently. Raw computing power available from a given square inch of silicon real estate doubles roughly every eighteen months. Modeling a brain in a computer removes many of the constraints; such a modeled brain can operate more quickly and more efficiently, and as more computer power becomes available, the complexity of the model--the number of neurons modeled, the richness of the interconnections between them--increases too.
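
To put rough numbers on "doubles roughly every eighteen months" (illustrative arithmetic only):

    # Compounding doublings add up fast.
    years = 15
    doublings = years * 12 / 18          # one doubling every 18 months
    print(f"{years} years ~ {doublings:.0f} doublings ~ {2 ** doublings:.0f}x the power")
    # prints: 15 years ~ 10 doublings ~ 1024x the power

A brain model riding that curve doesn't just get a little better every year; it gets better by orders of magnitude every decade.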




We humans like to make believe that we are somehow the apex of creation--and not just of creation, but of all possible creation. It pleases us to imagine that we are created in the image of some divine heavenly architect--that the universe and everything in it was made by some sapient being, that that sapient being is recognizable to us, and that that sapient being is like us. We like to tell ourselves that there is no limit to human imagination, that human intellect can understand and achieve anything, and so on.

Now, all of this is really embarrassingly self-serving. It's also easy enough to deflate. The human imagination is indeed limited, though by definition limitations in the things you can conceive of tend to be hard to see, because you...can not conceive of things you can not conceive of. (As one person once challenged me, without apparent irony: "Name something the human imagination can't conceive of!")

But it's relatively easy to find some of the boundaries of human imagination. For example:

• Imagine one apple. Just an apple, floating alone on a plain white background. Easy to do, right?
Imagine three apples, perhaps arranged in a triangle, floating in stark white nothingness. Simple, yes? Four apples. Picture four apples in your head. Got it?

Now, picture 17,431 apples in your head, each unique. Visualize all of them together, and make your mental image contain each of those apples separately and distinctly. Got it? I didn't think so.

• Imagine a cube in your head. Think of all the faces of the cube and how they fit together. Rotate the imaginary cube in your head. Got it going? Good.

Now imagine a seventeen-dimensional cube in your head. Picture what it would look like rotating through seventeen-dimensional space. Got it?


The first example indicates one particular kind of boundary on our imaginations: our limited resolving power when it comes to holding discrete images in our imagination. The second shows another boundary: our imaginations are circumscribed by the limitations of our experiences, as perceived and interpreted through finite (and, it must be said, quite limited) senses. Quantum mechanics and astrophysics often pose riddles whose math suggests behaviors we have a great deal of difficulty imagining, because our imaginations were formed through the experiences of a very limited slice of the universe: medium-sized, medium-density mass-bearing objects moving quite slowly with respect to one another. Go outside those constraints, and we may be able to understand the math, but the reality of the way these systems work is, at best, right at the threshold of the limitations of our imaginations.




Everyone who has ever owned a dog knows that dogs are capable of a surprisingly sophisticated sort of reasoning. Dogs understand that they are separate entities; they interact with other entities, such as other dogs and humans, in complex ways; they can differentiate between other living entities and non-living entities, for the most part (though I've seen dogs who are confused by television images); they have emotional responses that mirror, on a simple scale, human emotional responses; they are capable of planning, problem-solving, and analytical reasoning.

They can not, however, learn calculus.

No matter how smart your dog is, there are things it can not understand and will never understand because of the biological constraints on its brain. You will never teach a dog calculus; in fact, a dog is not capable of understanding what calculus is.

Yes, I know you think your dog is very smart. No, your dog can't learn calculus. Yes, you can too, if you set your mind to it; the point here is that there are realms of knowledge unavailable to the entire species, because all dogs, no matter how smart they may be in comparison to other dogs, lack the necessary cognitive tools to get there.

The intelligence of every organism is circumscribed in part by that organism's physical biology. And just as there are entire realms and categories of knowledge unavailable to a dog, so too are there realms of knowledge unavailable to us. What are they? I don't know; I can't see them. That's exactly the point.




To get back to the idea of artificial intelligence: A generalized AI would in many ways not be subject to the same limitations we are. One nice thing about modeled brains that isn't true of human brains is that we can easily tinker with them. The human brain is limited in the total number of neurons within it by the size and shape of the human pelvis; we can't fit larger brains through the birth canal. We have, in essence, encountered a fundamental evolutionary barrier.

Similarly, we can't easily make neurons faster; their speed is limited by the complex biochemical cascade of events which makes them fire (contrary to popular belief, neurons don't communicate via electrical signals; they change state electrochemically, by the movement of charged ions across a membrane, and the speed with which a signal travels is dependent on the speed with which ions can propagate across the membrane and then be pumped back again). Brains, in turn, are limited in how quickly they can learn new things by the speed with which neurons can grow new interconnections, which is pretty painful, really.

But a model of a brain? What if we double the number of neurons? Increase the speed at which they send signals? Increase the efficiency with which new connections form? These are all obvious and logical paths to explore.
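
In the toy sketch from earlier, those experiments are literally one-line parameter changes (again, purely illustrative):

    N = 200            # double the neuron count
    LEAK = 0.95        # voltages decay more slowly, so signals linger longer
    THRESHOLD = 0.8    # neurons fire more readily

You can't do that to a skull full of wetware.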

And the thing about generalized AI is that it's so goddamn useful. We want it, and we're working very hard toward it, because there are just so many things that our current, primitive computers are poor at that generalized AI would be good at.

And one of those things, as it happens, is likely to be improving itself.




The first generalized AI will be a watershed. Even if it isn't very smart, it can easily be put to the task of making AIs that are smarter. And smarter still. Hell, just advances in the underlying processor power of the computer beneath it--whatever that computer may look like--will probably make it smarter. Able to think faster, hold more information, remember more...and able to have whatever senses we give it, including senses our own physiology doesn't have.

The first generalized AI might not be smarter than us, but subsequent ones will, oh yes. You can bank on that. And that soon presents an Outside Context Problem.

Because how do we relate to a sapience that's smarter than we are?

In transhumanist circles, this is called a singularity--a change so profound that the people before the singularity can not imagine what life after the singularity is like.

There have been many singularities throughout human history. The development of agriculture, the Iron Age, the development of industrialization--all of these created changes so profound that a person living in a time before these things could not imagine what life after these things is like. However, the advent of smart and rapidly-improving AI is different, because it presents a singularity and an Outside Context Problem all rolled up into one.

In past singularities, the fundamental nature of human beings and human intelligence has not changed. A Bronze Age human is not necessarily dumber than an Iron Age human. Less knowledgeable, perhaps, but not dumber. The Bronze Age human could not anticipate Iron Age technology, but if they met, they would still recognize each other.

But a smarter-than-us AI is different, in the ways we are different from a dog. We would not--we cannot--understand the perception or experience of something smarter than we are, any more than a dog can understand what it means to be human. And that presents an interesting challenge indeed.

Civilizations tend not to survive contact with Outside Context Problems.




Which brings me, at last, to an epiphany that I had while I was walking with dayo in Chicago.

Transhumanism is the notion that human beings can become, with the application of intelligence and will, more than we are right now. I've talked about it a great deal in the past, and talked about some of the reasons I am a transhumanist.

But here's a new one, and I think it's important.

Strong AI is coming. It's really only a matter of time. We are learning that our own intelligence is the result of physical processes within our brain, not the result of magical supernatural forces or spirits. We are working on applying the results of this knowledge to the problem of creating things that are not-us but that are smart like us.

Now, there are several ways we can approach this. One is by creating models of ourselves in computers; another is by using advances in nanotechnology and biomedical science to make ourselves smarter, and improve the capabilities of our wet and slow but still serviceable brains.

Or, we can create something not based on us at all; perhaps by using adaptive neural networks to model increasingly complex systems in a sort of artificial evolutionary system, trying things at random and choosing the smartest of those things until eventually we create something as smart as us, but self-improving and altogether different.
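
Here's a cartoon of that "try things at random and keep the smartest" loop, sketched in Python. It's illustrative only: real neuro-evolution mutates network weights and topologies, and the smartness() test here is a made-up stand-in.

    # Toy evolutionary search: mutate candidates at random, keep whatever scores
    # best on some "smartness" test, repeat.
    import random

    TARGET = [1.0, 2.0, 3.0, 4.0, 5.0]

    def smartness(candidate):
        # stand-in fitness: how closely the candidate matches some target behavior
        return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

    population = [[random.uniform(0, 10) for _ in range(5)] for _ in range(20)]
    for generation in range(200):
        population.sort(key=smartness, reverse=True)
        survivors = population[:5]                           # keep the "smartest"
        population = survivors + [
            [gene + random.gauss(0, 0.1) for gene in random.choice(survivors)]
            for _ in range(15)                               # mutated copies of survivors
        ]
    best = max(population, key=smartness)

Nothing in that loop needs to understand what it's building.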

Regardless, we have a choice. We can make ourselves into this new whatever-it-is, or we can make something entirely independent from us.

However we make it, it will likely become our successor. Civilizations tend not to survive contact with Outside Context Problems.

If we are to be replaced--and I think, quite honestly, that that is only a matter of time as well--I would rather that we are replaced by us, by Humanity 2.0, than see us replaced by something that is entirely not-us. And I think transhumanism, refined down to its most simple essence, is the replacing of us by us, rather than by something that is not-us.


Comments

zastrazzi
May. 24th, 2008 10:40 pm (UTC)
If you haven't read the Uplift series by David Brin, you really really need to. It tackles almost every issue you've raised (albeit in a different context).

Thanks for writing this though, it's well stated.
zaiah
May. 24th, 2008 11:53 pm (UTC)
Did you care for any of the second set? I couldn't ever get into them.
redhotlips
May. 24th, 2008 10:41 pm (UTC)
My goodness, that is a lot to think about.

My airport waiting is never so productive as this.
ragnarok_2012
May. 25th, 2008 01:12 am (UTC)
As a Galactica fan, my immediate thought is "So say we all!"

Great post, Tacit. I love transhumanism, and I find myself persuaded by your argument. I would much prefer to see Humanity 2.0.
tacit
May. 27th, 2008 09:25 pm (UTC)
I sometimes think there's a thread of anti-transhumanism in BSG; the premise--that sapient machines are bad, that there are limits to the level of technology it's safe to explore--seems a bit questionable to me. I do dig the consciousness-transferral thing the Cylons have got going on, though.
6_bleen_7
May. 25th, 2008 01:21 am (UTC)
Very well said! Actually, I once owned a cat who got a B– in trigonometry.
redsash
May. 25th, 2008 12:56 pm (UTC)
Apologies to Stephen Wright
Yeah, you know how some dogs compulsively thump a hind leg when you scratch them in the right spot? Well, one day I found out that my dog would do different things if you scratched him in different places.

Eventually, I had him doing my taxes. But I hated scratching him there.

~r
mlordslittleone
May. 25th, 2008 01:33 am (UTC)
I, too, would much rather see Humanity 2.0, but I'm not sure that I'm entirely swept up in your conviction that this AI is coming.

A random thought - it wouldn't be called AI any longer though, would it? Intelligence, however created or acquired, would only be "artificial" if it had something natural to compare to. One couldn't argue that the intelligence gained by a computer truly capable of learning - of THINKING - was artificial at all. Not if it was created to mirror our brains; it would simply mean that it was a 'siliconical' computer and we were biological computers - materials aside, the intelligence itself, per your post, would be just as natural.

I'll think more on this when my brain is less pickled. Nice post - I enjoyed reading.
redsash
May. 25th, 2008 01:12 pm (UTC)
> I'm not sure that I'm entirely swept up in your conviction that this AI is coming.

I'm with tacit here. I work with the guy who came up with the high-order pattern detection algorithm behind data mining. That tech seriously scares me, and we have only just begun to explore its possibilities. It can find subtle and complex patterns in gigantic databases that humans just can't detect.

I also agree with jtroutman below that the software isn't there yet, and may not be anytime soon. I think in some ways it's a chicken-or-egg thing -- we need that watershed tacit writes of. Only a very few of us are good enough coders to attempt such a project, and our organizational systems are ill-suited to support their work.

~r
anansi133
May. 25th, 2008 03:01 am (UTC)
I often smell a buried assumption, that somehow the species Homo Sapiens is somehow unsuited to this planet. As if the civilization we were born into is naturally the highest and best example of what humans are capable of, and if the civ isn't good enough, then neither is the species. In order to somehow become adaptive to this place from which we originated, we have to change, not as individuals, not as a society, but at the DNA level, as a species.

The reason a dog can't do calculus is because there's no need for a dog to do calculus. It's not a feature of the dog's environment.

In a similar way, there's a whole set of skills that human beings don't really need to have as a species in order to live on this planet.

The civilization might want us to have these skill sets, and the civ might want us to believe that its requirements are the same as the planet's. But I sense a con job.

There's a lot for me to like about transhumanism, but I hate the idea that it's compulsory for us to somehow transcend our human selves before we can really be present to the historical moment. It sounds too much like a corporate agenda wanting more and better worker drones for itself.

Is there even such a thing as wilderness in a transhuman future?
creekracer
May. 25th, 2008 05:16 pm (UTC)
re: AI transcendence
"There's a lot for me to like about transhumansim, but I hate the idea that it's compulsory for us to somehow trancend our human selves before we can really be present to the historical moment. It sounds too much like a corporate agenda wanting more and better worker drones for itself."

I think a lot of people really do believe that we, as a species, are obligated to transcend (transcendentalists believe this, but only in the context of our "baser instincts"). This is the same kind of insistence that radical environmentalists roll with ("we have a responsibility to the planet").

So, yes, there are those who believe that we must "move up." But there are also those who simply want to survive. That's me.
merovingian
May. 25th, 2008 03:12 am (UTC)
>Similarly, we can't easily make neurons faster.

I misparsed this sentence at first!

You meant "we can't increase the speed of neurons" and that's true.

I misread it as "we can't speed up the production of neurons" and it turns out I had something interesting to say about that. You can do that! There's great research by Elizabeth Gould on the subject (article here) and it turns out when you make new neurons, you get improved mood, cognitive development, and other tasty stuff.

But then I did a double-take and realized I misread the subject.

And then I decided to comment anyway because we all want better brainz.
merovingian
May. 25th, 2008 03:13 am (UTC)
P.S. Out of Context Problem, Transhumanism, and wanting it to be us that replaces us? Awesome, well-put and agreed.
jtroutman
May. 25th, 2008 05:07 am (UTC)
Interesting and well put, as always. You do sound somewhat like Vernor Vinge, I have to say.

I disagree with the timeframe, as pretty much every single target date ever spoken by AI researchers for "we will have Y AI in X years" has been missed.

Some assessments on the computing power needed for a "real AI" from the early 1990s said we need 3 to perhaps 10 orders of magnitude greater computing power than was available at the time. Well, ~15 years later, we have managed about 3 orders of magnitude. The rate of progress of computing power is slowing down, and it is expected to continue to do so.

Another issue is that we don't have really good software and compilers to deal with multiple processors working together yet, either. Yes, there are lots of systems for dividing a problem into lots of small pieces and having each node or processor work on a small piece of a workload, but that is not the same. Additionally, currently the latency between each computing element is very high (compared to being on the same CPU) even on the best systems.

Based on the decreasing rate of computing power advancement, plus the additional complexity of modeling the brain that will appear as we start to actually do it, I think it will be 50 years or more before a "human equivalent brain" is simulated. Not that I would mind if it was sooner, of course.


tacit
May. 27th, 2008 09:36 pm (UTC)
All the AI timeframes have been missed, but I think it's interesting that in this case, the people involved aren't AI researchers. (Or, if they are, they don't see themselves that way.)

The Swiss team isn't actually setting out to create AI. Their goal is to make a dynamic model of a human brain in a computer, which I have a feeling will result in an AI, but they're not doing it for that purpose; they're doing it because if they can create a perfect, working model of a human brain all the way down to the cellular level, the idea goes, they can use it to model new psychoactive drugs and anticipate the behavior of those drugs without human trials. Though if the model has that kind of fidelity, I suspect it may, for all intents and purposes, be human.

And that raises a whole ethical can o' worms that I don't know if the researchers have considered.

They're currently using a BlueGene/L supercomputer, on which they've successfully modeled a dynamic rat neocortex in real-time. The BlueGene computers use a number of novel techniques to reduce latency between different processors. IBM's currently building the BlueGene/L's successor, the BlueGene/P, which is scheduled to go online next year; they're anticipating that it will be at least ten times faster than BlueGene/L, and possibly more, in real-world applications.

In theory, a BlueGene/L has roughly the same raw computing horsepower as a human brain, though the architecture is vastly different and the computer's nowhere near being intelligent on its own. If that's true, though, the BlueGene/P will be at least an order of magnitude more capable than a human brain in terms of raw processing capacity, which leaves plenty of overhead for emulation. :)
pstscrpt
May. 25th, 2008 03:11 pm (UTC)
Deep Blue, the supercomputer that beat human grandmaster Garry Kasparov in a much-publicized tournament, is by modern standards a cripple; ordinary desktop PCs today are more powerful.
I'm not so sure about that. This was only 11 years ago, Deep Blue included custom hardware to accelerate some of IBM's particular chess-related functions, and the rate of advance in CPUs has slowed down quite a bit since around 2003 when Intel discovered their Pentium-4's didn't get any faster at 90 nanometers.

Computers are fast.
At completely different things, sure. Neural net training is pretty slow.

Modeling a brain in a computer removes many of the constraints
And imposes a massive emulation overhead. Modeling a brain physically seems a lot more plausible than doing it in software. But if we're really good at it, we might be able to get a couple orders of magnitude more power, for an awful lot of money, but still wind up with fundamentally the same sort of thing we're modeling. A silicon brain that thinks the way we do is not necessarily going to get the advantages a computer has, or be able to conceive of anything outside of its own experience, either.

One nice thing about modeled brains that isn't true of human brains is that we can easily tinker with them.
No, we can't. Even simple software neural nets aren't something where you can really identify which part does what. And a silicon brain that operates like our brains will have a consciousness that needs to be respected just like a person.

contrary to popular belief, neurons don't communicate via electrical signals; they change state electrochemically, by the movement of charged ions across a membrane, and the speed with which a signal travels is dependent on the speed with which ions can propagate across the membrane and then be pumped back again
I guess I knew that, but that just makes the software emulation overhead much bigger, and means it can't be physically modeled in anything solid-state.

Strong AI is coming. It's really only a matter of time.
I'm not remotely convinced. It's certainly possible that a neural system isn't the only way to create thought, and that we may come up with another way. Within neural systems, though, we're talking about a massive outlay of time and money for something that's only going to be a strange, very smart person, that may or may not be inclined to act in our interests. Where's the incentive for someone to do that?
creekracer
May. 25th, 2008 05:40 pm (UTC)
re: AI
It would definitely seem that we will need to, at some point, replace all biological components with synthetic, for the simple reason that synthetics are more durable and more easily replaced than biological components, which, as we've learned (painfully), are neither durable nor easily replaced. Of course, there will be those who not only refuse the "upgrades" but deny others the right to have them, based on the belief that we somehow require bloody, mushy components to remain "human." (The soul, it is argued, cannot exist within a machine, no matter how lifelike it seems. Are transhumanists denied the right to believe they have no soul?) The issue, it would seem to me, is one of the right to life, liberty, and the pursuit of happiness. Denied the right to upgrade—to survive the deterioration of our biological bodies—we're denied the right to life.

But I digress. My reply doesn't really address your thoughts on AI, but it made me think, once again, just how inevitable upgrades must be.
tacit
May. 27th, 2008 09:42 pm (UTC)
Re: AI
I'm up for that, and I think it presents an interesting sort of Ship of Theseus problem.

Suppose biomedical nanotech reaches the stage of development at which cellular-level repair is possible. This is not outside the realm of possibility, and indeed might even happen within the lifetime of people who are alive today.

And now suppose that biomedical nanotech exists which can not only perform cellular-level repair, but can replace neurons as they die or are damaged with synthetic equivalents that are more durable, but otherwise behave the same way and are wired up in the same patterns as the dead cells they replaced.

Since brain cells die all the time (and are, mostly, not replaced), a person filled with such nanomachines would, as time goes on, gradually have parts of his brain replaced with synthetic analogues. The process would take many decades before a significant number of his brain cells had been replaced, and presumably his identity would be preserved during that time; after all, as things stand now, our identities are preserved even when those neurons are lost completely. After enough time had passed, his brain would be entirely synthetic. Would he still be the same person?
datan0de
May. 25th, 2008 07:02 pm (UTC)
Which brings me, at last, to an epiphany that I had while I was walking with dayo in Chicago.

Not to be a prick, but you realize that you and I have had that conversation and arrived at the same conclusion ("Mankind will eventually be supplanted, but we have the unique opportunity to become our own replacements.") at least a couple of years ago? :-)

Either way, great post and a wonderful summary of the non-intuitive ramifications of AI. As with most of your posts, it should be required reading for anyone not wanting to be relegated to an Amish-like "natural preserve" in the future while the rest of civilization moves on.

Something that you hinted at but didn't explicitly state is the idea of Weak Superintelligent AI vs Strong Superintelligent AI. One of the big advantages of uploading is that you're moving yourself into a substrate that can take advantage of the much more rapid advances in artificial computational hardware vs the relatively static wetware. An upload or other digital model of a human brain that takes advantage of increased processing speed without otherwise modifying its basic structure and functions is a weak superintelligent AI. It's just as smart as a human but with a faster subjective experience of time, and thus still suffers from the same fundamental limitations of human cognition. An uploaded dog that's running at a thousand times the speed of a meat dog brain would absolutely be superintelligent by canine standards, but would still be completely unable to learn calculus.

Conversely, your example of increasing the complexity and storage capacity of an upload opens up the possibility of overcoming the inherent architectural limitations of human cognition, and thus creating a strong superintelligent AI. IMHO, that's when things get really interesting...
tacit
May. 28th, 2008 08:01 pm (UTC)
Did we? Well, hmm. I blame the alcohol; it had a tendency to fritz out my clones' telemetry systems before the rev. 127 patch.

Seriously, though, the distinction between weak and strong AI is an important one, and I doubt that we'll see the arrival of weak AI without strong AI soon after. If we build an AI from a bottom-up approach by emulating a brain in a computer, questions like "what happens if we increase the number of neurons in the prefrontal cortex by ten percent?" and "what happens if we increase the number of connections between neurons dramatically?" seem to be the next logical step. I'd be very surprised if experiments like that aren't among the first things we do once we have a model that works.

So I find it unlikely that we'll end up with weak superintelligent AI but not strong superintelligent AI. And strong superintelligent AI is without question an Outside Context Problem.


cheerilyxmorbid
May. 26th, 2008 05:29 am (UTC)
Interesting post. I just skimmed it, as my brain is not up to processing all that right now. But from what I got, it makes sense.

RE: your case of the hornies. I'm home in Atlanta for the summer. I'd love to grab lunch and chat if you're up for it.
And your cat is WEIRD.
tacit
May. 27th, 2008 01:13 am (UTC)
I'm definitely up for meeting for lunch or coffee (well, hot chocolate!) or something. Whereabouts in Atlanta are you?
(Deleted comment)
tacit
May. 27th, 2008 01:12 am (UTC)
It'll be more general, though it's the family model I've been thinking about the most lately. But I do intend to talk about a wide variety of different models of relationship.