Sunday, April 20, 2008

Thoughts on Artificial Intelligence

We've all seen the movies or read the books. Artificial intelligence, however, is about to break free from the two-dimensional world. Leading scientists now say that the first artificial organisms could be created within 10 years.

Intelligence, as a working definition, is the ability to make standalone judgments based on impulses coming from one's surroundings. The hallmark of intelligence is action based on choice rather than rule. Take, for example, those eHarmony.com commercials that talk about a system for matching people by compatibility. It is advertised as an intelligent system, but really it's just a sequence of rules (i.e. if A=B, then C=D, etc.). There is no ability to discern unusual circumstances, such as a partial parameter (i.e. a person who does not fully conform to A or B, but falls somewhere in between).
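The rule-versus-choice distinction can be sketched in a few lines of code (a hypothetical toy, not eHarmony's actual system): a purely rule-based matcher has no answer for an input that falls between its categories.

```python
# Hypothetical toy matcher: rigid rules only, no judgment.
def rule_based_match(answer_a, answer_b):
    """If A = B, then 'compatible' (C = D); otherwise reject."""
    if answer_a == answer_b:
        return "compatible"
    return "incompatible"

print(rule_based_match("outgoing", "outgoing"))           # compatible
print(rule_based_match("outgoing", "somewhat outgoing"))  # incompatible
# The in-between case gets the same flat "incompatible" as a total
# mismatch; the rules cannot discern a partial parameter.
```

The point is not that the code is simple, but that no amount of extra `if` branches gives it the ability to recognize a case its author never anticipated.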

Many people confuse robotics with artificial intelligence. For example, Honda's now-famous Asimo is sometimes mistaken for an intelligent being.


In reality, Asimo is a non-intelligent robot; although he seems to interact of his own volition, every single action and reaction is controlled by line upon line of programmed code. These sub-routines do not think; they simply follow.

The key difference is that A.I. beings will ultimately be able to actively shape their own program. Imagine for a moment - you own one of those Asimo units. Let's say it's been programmed to help you carry objects up and down stairs. The program will consistently carry out the action as needed, faltering only due to mechanical issues.

Now let's say that Asimo has been upgraded with an A.I. processor. The unit no longer relies on the owner to call it. It will inspect the situation and determine whether or not assistance is necessary, based on various factors. Of course, this stage is much further away than 10 years.
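As a toy illustration (entirely hypothetical; real Asimo software is nothing like this), the difference is whether the action originates from a command or from the unit's own reading of its surroundings:

```python
# Hypothetical sketch, not actual Asimo software.
def plain_asimo(owner_called):
    # Non-A.I. unit: acts only when explicitly commanded.
    return "carry objects" if owner_called else "idle"

def ai_asimo(observations):
    # A.I.-equipped unit: weighs what it observes and decides itself.
    score = 0
    if observations.get("owner_carrying_heavy_load"):
        score += 2
    if observations.get("owner_near_stairs"):
        score += 1
    if observations.get("owner_waved_it_off"):
        score -= 3
    return "offer help" if score >= 2 else "stand by"

print(plain_asimo(owner_called=False))  # idle
print(ai_asimo({"owner_carrying_heavy_load": True,
                "owner_near_stairs": True}))  # offer help
```

Admittedly, this scoring is itself just another rule. A genuine A.I. would be able to revise those weights on its own, which is the "shaping its own program" described above.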

Still, how benign are the molecular mecha-organisms currently being developed?

Anyone who has seen 2001: A Space Odyssey knows the ugly side of A.I. gone bad.


Image: HAL9000, the fiendishly disobedient A.I. from 2001: A Space Odyssey

Many people assume that as long as they are micro-organisms, there is little threat. Maybe I've seen too many disaster movies or watched too many sci-fi shows, but isn't that when the danger is greatest? It's more difficult to control something we can't directly see or handle.

I think A.I. is a great thing and the natural next step in technological evolution, but we have to be cautious. Scientists are making bold leaps, sometimes without having 100% understanding of or control over the laboratory conditions. It's easy enough to predict and control one tiny A.I. molecule. What about groups? There are many unpredictable dynamics when dealing with groups of such organisms.


I’ve got many questions.

- What safeguards will there be against such groups co-operating to break out of their design parameters (which is totally possible with non-rule oriented beings)?
- Will these new organisms be able to interact with biological cells and non-living organic molecules?
- What about continuity? Will they be able to replicate themselves? What controls are there to prevent them from "assimilating" new organic elements and evolving beyond human control?

I don't mean to sound paranoid; it really is a great thing. However, humans are inherently prone to miscalculation, and this is one area where a miscalculation could be grave.

Alternatively, this presents a whole new area of philosophy. Recently, Java Jones did a couple of interesting philosophical posts: one about perceptions of reality and another on the physical manifestation of stored information. The upcoming A.I.'s present a cross-section of these two concepts. Soon enough we'll be sitting around pondering how neurotransmitters are stored, copied and transferred; how electrical impulses carry oxygen and other fuels for life.


Also, what about the use of A.I. technology on already existing organisms...

What will happen when A.I. is applied to more advanced organisms? Such devices have already been used to mind-control insects, fish, birds and even small mammals. The U.S. government (supposedly) already has dolphins equipped with such bio-mechanical interfaces.

Image: Electrical impulses used to control the behavior of a roach.


I believe this is torture for the animals, and I think it takes mankind "one step forward, two steps back". To be simultaneously advancing in technology and regressing in the commodification of living beings is simply not acceptable.

Mankind is at a cross-roads, much as we've been in the past. Whatever the chosen path, we can never and will never go back.

13 comments:

The Jaded NYer said...

Sweet Jesus- have we learned nothing from the Terminator movies???

when this topic comes up, I always go back to that line in Jurassic Park (spoken by Jeff Goldblum's character):

"...your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should."

And that's what I see in all these "scientific advancements"- and no one loves science more than me but come on already! There's this mentality of "let's be the first to make it bigger, better, faster" without really looking at the greater good.

Why do we need AI? To do menial tasks we don't feel like doing? Isn't it bad enough that everything on the planet is damn near computerized, I need some damn AI robot rushing to help me with my bags? Uh, no thank you, I'd rather do it myself!

Java Jones said...

Nice one Pan. One wonders about the lengths the human species will go to in our efforts to ‘improve’ on ‘natural’ systems. Of course one can argue that it is still a ‘natural’ evolutionary trend for us to go down this route (AI, etc.), but if ‘organisms’ that are part mechanical and part biological start to reproduce organically and get to be beyond human control, that would be an interesting scenario. Of course their ‘conditioning’ will determine the way they ‘think’ and act, and who knows, this may just be the way that the planet will ultimately be saved from the destructive trend that humans have put in motion and that is now snowballing with increasing pace.

On the other hand, if the 'conditioning' is what you and I would consider to be 'negative', then it would be nightmare time for any of us that may happen to be still around.

Nataleesthot said...

I can't think of any reason why the earth would need this. We are destroying ourselves all in the name of technological advancements! It's a sad state of affairs

Dili said...

The word "Cylon" comes to mind.

Are we humans really "human", or do we "Play God" and let the consequences be damned?

Antimatter said...

"Scientists are making bold leaps, sometimes without having 100% understanding of or control over the laboratory conditions." <- [citation needed] ;)

Sorry to nitpick, but you seem to be talking about two separate things, albeit things that could conceivably overlap. One is Synthetic biological life, and the other is artificial intelligence, which is a branch of computer science that could, but doesn't need to be, applied to synthetic life. I don't know of any work being done to merge AI into biological life (the animal example you give has nothing to do with AI), but there is research being done into creating biological computers. Whether it's biological, silicon based, or quantum based computing, AI typically applies to some kind of computational machine.

When it comes to micro organisms and synthetic biological life, I don't know much to comment. You raise some interesting points. I imagine that the dangers of synthetic / mutated microbial life are well understood and the necessary precautions are in place, though the potential for them to be used as weapons will always be a dark cloud.

When it comes to AI, it's already here in many forms - think games (chess, videogames), robots, chat bots (yeah they still suck) etc... There are just many different types of AI applied in various areas. I'm always amused by some of the knee-jerk reactions against AI. Not every robot scenario ends with Arnie or Yul Brynner styled killing machines! Think about machines working in hazardous environments, space exploration, performing precision surgery, or even simply helping the disabled or elderly. And robotics is just one area AI contributes to - just look at the list of applications on Wikipedia!

The current state of AI leaves us with little to worry about. The killing machines aren't on the horizon just yet. Even the biggest computer based threats today like fast spreading malicious software (which is not AI in any way) are limited by things like platform dependence and reasonably secured systems, and are unlikely to cause civilization to collapse.

While even the definition of intelligence is debatable, the aspect of AI that worries people is Strong AI, where an AI is as intelligent as a human. But think about the benefits such an AI could bring about - imagine a genius human intellect applying itself to scientific and economic problems, only without ever requiring rest, having access to volumes of data and being able to process it faster than any human could dream of. Imagine a technological singularity where the AI can keep getting smarter and smarter. It could change the world.

Of course, there's also the scary part - it could decide humans are no longer necessary. While such a consciousness could, in today's world, spread without hindrance, I don't see it being able to create a Terminator style future - technology just isn't that pervasive. Yet.

Anyway, as I've already stated, we're nowhere near creating a strong AI based on the current state of AI research. There's still a long way to go, and it's not clear if creating such a strong AI is even possible.

And if we did, there's also the ethical considerations - would using new life to work for us be equivalent to slavery? Would it be a form of life that could live alongside us? (think Blade Runner, Battlestar Galactica, The Matrix)

There's just loads and loads of things to think about relating to AI, even some that we should worry about right now. But until something gets even close to passing the Turing Test for Intelligence, most of those concerns are bridges we haven't arrived at yet. Most of the current developments in AI are wholly beneficial. And compared to possible threats of biology run amok, the present state of AI is small potatoes, though things can (and undoubtedly will) change (as an aside, self replicating nanotechnology could possibly be as big a threat as microbial threats, or bigger, though these don't necessarily need much in the way of AI either).

Science fiction has dealt with these ideas for ages; it's always a fascinating topic.

Apologies for rambling. :)

Darwin said...

I've never been too concerned with the 'omfg the robots are gonna take over' idea of a doomsday prediction, especially since I still think we're years away from seeing that happen. The one thing that does concern me about bioethics would be the whole 'patenting of a genome' thing that Craig Venter is on about. Scary stuff indeed, and unlike AI scares, this is here and now.

Antimatter said...

Oddly enough, I just read this, which is sort of tangentially related...

Pan/Thanatos said...

Jaded - That line from Jurassic Park was brilliant! You make a great point: we already rely on automation so much, what happens when it's all done by machines and something breaks down? Will anyone be left who knows how to fix it?

Java - Thanks J man. Conditioning in artificial beings, that's something to think about. Definitely right out of Blade Runner. Hopefully they'll learn better than humans when it comes to caring for their world.

Natalee - It seems more like we're doing it because we can than because we really need to. Human nature to be curious, I guess. Look at the time and effort spent on space exploration... totally unnecessary but interesting nonetheless.
That being said, space exploration doesn't have the potential to kill us =)

Dili - That brings me back to the old school sci-fi days =) Typically we don't care much for consequences, especially when it doesn't seem like they'll happen in our own lifetime.


Antimatter -
Thanks for the very well written and all-inclusive comment. The article I used is strictly about artificial life forms, but I really wanted to have a more general discussion about artificial beings. Call it a "loose transition" if you will =)
As for citations... something like the point you mention would NEVER make it into scientific literature because it would mean political suicide for the author involved (and I try to stay away from Wikipedia). From the practical point of view, I can tell you that more often than not, new confounds and side-effects are discovered in the course of the experiment.
Scientists need only present a certain amount of predicted variables to get approval for even high level experiments. With something potentially catastrophic, this "wait and see" methodology just isn't enough.
As I mentioned, robots and things like video games and simulators are not intelligent; they still rely on a strict program. This is why a script error (the program equivalent of an unaccounted-for variable) still requires debugging, or causes games to freeze and crash. An A.I. must be able to adapt on its own.
High-end space exploration systems already do this to a degree, but it's still more a set of rules with some wiggle room.
But, like you said, the days of Blade Runner are nowhere near (but still, let's not take the 1990s global warming approach to this one).
Thanks for rambling!

Darwin - Scary indeed! That's like making a preformed model that anyone with the right equipment can customize. As a parasitologist, I'm not surprised that concerns you. It should concern us all.

Antimatter - Hey thanks for that link! I actually wanted to do a post about sex with robots, which is becoming huge right now. I'll be sure to use that article when I do.

The Jaded NYer said...

dahling- that movie changed my way of thinking about science for real! Plus scared the bejeezuz out of me.

Oh, btw, TAG- you're it (see Tuesday's entry for details)

dejanae said...

dang
jaded nyer took my line

we've already taken taken technology too far

1/3 of what I used to be said...

wow, yeah, I'm already fearful of what we are doing with technology. Did you hear they are going to start putting microchips in babies? Yeah, A.I., I, Robot, shoot, even I Am Legend is making me realize we all need to take a step back and stop playing God, cause He always wins when He's ready to put an end to things.

Meese said...

As humans evolve, we will try our very best to destroy our own world.
Humans are independently thinking and functioning creatures.

Giving this power to another form of life means they are free to choose their own allegiance, be it to humans, animals or themselves. So I see no way in which we will be able to control them over time. They will overtake their creators, I think. This can be justified as an evolutionary process: they will obviously have artificial computing power and a mind of their own, which will make them better than humans. Time of the mechanical wars, I guess... or am I just paranoid?
