
 

 

 

Key questions on Common Upper Ontology

 

Anonymously posted by an AI scholar

 

The illusion of the AI mythology, illustrated

 

There is a good deal of concrete, practical evidence of the advance of AI, just as there is of the need for knowledge bases that define the entities, attributes, constraints, etc., and their relationships in information systems in a more formal language than data dictionaries do - in other words, ontologies.  Look: back in the early 19th century, the Luddites were against power looms, but neither their arguments nor their axes kept the use of power looms at bay for very long.

 

4/20/2004 2:48 PM

 


 

Hello Paul,

 

I am not sending this to the whole group, and I don't like to pick nits, but some of the things you say are just not correct, or are correct in some ways but not the ones that matter. 

 

You haven't said why a computer deemed to be "creative" would not be likely to have some form of case-based reasoning.  I assume, therefore, that you are bothered by the 'deemed "creative"' part.

 

Remember that people predicted that gasoline cars would never replace horse drawn carriages, that Fulton's steamboat would not work, that people would never "fly"? 

 

I don't know that machines will ever know that they are being creative, and I don't know that any of them are "creative" by human standards (I am assuming that the human is the one doing the deeming); but to say that people who today deem a computer "creative", or people who think that computers might at some time in the future be deemed "creative", are holding "a false sense" of any sort is rather presumptuous.

 

It reminds me of what Herb Simon e-mailed me several months before he died, when we were having an email discussion (among other things) about someone who was trying to turn off NSF support of any research in artificial intelligence - and who seemed to be getting his way, violating the peer review system among other basic NSF policies, though he was later fired, thank goodness.

 

Herb wrote, "I continue to marvel at the fact that, after 45 years, the naysayers can still be taken seriously, when they deny that computers (sometimes) think, or place that happy possibility in the distant future. I am afraid that at the outset of our adventure I greatly underestimated the emotional need many members of our species have to believe in its uniqueness. Patience! All that will pass."

 

I have never felt quite as strongly as Herb, but have taken a more agnostic approach:  I don't really care whether machines think. 

 

If they are doing tasks that relieve people of some of their cognitive burdens, analogous to the ways that physical machines throughout history have relieved people of physical tasks they previously had to do, I will be quite happy. 

 

And when computers enable achievements that humans were not able to achieve alone - as airplanes have allowed people to fly, powered machines have enabled drilling deeper into the earth than humans could, and telephones have enabled humans to make and receive calls around the world, even without wires - I am even happier. 

 

And computers already have enabled things in the cognitive domain that humans could not do, or could not do as well - in process scheduling, in logistics planning, in autonomous space probes, and in learning to find objects in astronomical sky surveys that humans could not even describe well enough to write a program for. 

 

Granted, humans are necessary to the machines in many of these cases, just as they are to physical machines; but many tasks take a team of two or more intelligent people, and we do not therefore say that only the boss of the team is doing a cognitive task.  So we should give cognitive credit to the machines.  Under those circumstances, I don't care whether the machines think or not.

 

You said:

 

We question whether there is any practical need to have "creative computers", given the grounded discussion that the computer program is an "abstract" finite state machine and has NO access to physical law or to quantum mechanical emergences.

 

I can't be quite so emphatic on this one, because I haven't thought a lot about whether we need "creative computers".  But I would bet that if they ever do exist, there will be uses for them. 

 

By the way, the model for a computer is not a finite state machine.  It is a Turing machine, as Turing told us a long time ago.  The fallacy that a computer is a finite state machine comes from the idea that it has a finite amount of information stored in it at any given time.  But it can acquire more information with the help of humans or with sensors and can add that information to its memory.  The total number of states that such a machine is able to use is unbounded.  But we shouldn't be too shocked about that, since our brains are finite in their storage too!
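To make the contrast concrete, here is a minimal sketch of a Turing machine interpreter. The transition table (a binary-increment machine) is my own illustrative example, not anything from the discussion above; the point is only that the tape is a dictionary indexed by arbitrary integer positions, so the machine can write to cells it was never given - its usable storage, unlike a finite state machine's, has no fixed bound.

```python
# Illustrative sketch only: a tiny Turing machine interpreter.
# The tape is a dict keyed by integer position, so it is unbounded
# in both directions - the machine can always move past the input
# and write new cells, which a finite state machine cannot do.

def run_turing_machine(transitions, tape_input, start, accept, blank="_"):
    """transitions: {(state, symbol): (new_state, write_symbol, move)}
    where move is -1 (left) or +1 (right)."""
    tape = {i: s for i, s in enumerate(tape_input)}
    state, head = start, 0
    while state != accept:
        symbol = tape.get(head, blank)          # unwritten cells read as blank
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += move
    cells = sorted(tape)
    return "".join(tape[i] for i in range(cells[0], cells[-1] + 1)).strip(blank)

# A hypothetical example machine: binary increment.
# Scan right to the end of the number, then carry leftward.
inc = {
    ("scan", "0"): ("scan", "0", +1),
    ("scan", "1"): ("scan", "1", +1),
    ("scan", "_"): ("carry", "_", -1),
    ("carry", "1"): ("carry", "0", -1),
    ("carry", "0"): ("done", "1", -1),
    ("carry", "_"): ("done", "1", -1),   # overflow: write a new leftmost cell
}

print(run_turing_machine(inc, "1011", "scan", "done"))  # 1011 + 1 -> 1100
print(run_turing_machine(inc, "111", "scan", "done"))   # 111 + 1 -> 1000
```

Note that the second call ("111" + 1) writes a carry into a tape cell to the left of the original input - exactly the kind of growth beyond the initially stored information described above.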