Objectivism and Artificial Intelligence



If strong AI (like Star Trek's Data) is ever achieved, then from the point of view of Objectivism, the question of rights will come up. Will Data have rights?

I'm not sure how useful the strong vs. weak AI dichotomy is. AI is fairly powerful now and is growing more powerful; we can point out its weaknesses if we want, but we are growing increasingly dependent on it.

I think the real question is, will we have rights when Artificial Intelligence is making most of our decisions for us?


  • 2 years later...

Some excerpts from a new article from Real Clear Future and some old letters about A.I. or Artificial Intelligence.

Peter

 

Artificial intelligence is not as smart as you (or Elon Musk) think. Posted Jul 25, 2017 by Ron Miller

 

In March 2016, DeepMind's AlphaGo beat Lee Sedol, who at the time was the best human Go player in the world. It represented one of those defining technological moments not unlike IBM's Deep Blue beating chess champion Garry Kasparov, or even IBM Watson beating the world's greatest Jeopardy champions in 2011. Yet these victories, as mind-blowing as they seemed to be, were more about training algorithms and using brute-force computational strength than any real intelligence. Former MIT robotics professor Rodney Brooks, who was one of the founders of iRobot and later Rethink Robotics, reminded us at the TechCrunch Robotics Session at MIT last week that training an algorithm to play a difficult strategy game isn’t intelligence, at least as we think about it with humans.

 

He explained that as strong as AlphaGo was at its given task, it actually couldn’t do anything else but play Go on a standard 19 x 19 board. He relayed a story that while speaking to the DeepMind team in London recently, he asked them what would have happened if they had changed the size of the board to 29 x 29, and the AlphaGo team admitted to him that had there been even a slight change to the size of the board, “we would have been dead.” “I think people see how well [an algorithm] performs at one task and they think it can do all the things around that, and it can’t,” Brooks explained.

 

Brute-force intelligence. As Kasparov pointed out in an interview with Devin Coldewey at TechCrunch Disrupt in May, it’s one thing to design a computer to play chess at Grand Master level, but it’s another to call it intelligence in the pure sense. It’s simply throwing computer power at a problem and letting a machine do what it does best. “In chess, machines dominate the game because of the brute force of calculation and they [could] crunch chess once the databases got big enough and hardware got fast enough and algorithms got smart enough, but there are still many things that humans understand. Machines don’t have understanding. They don’t recognize strategical patterns. Machines don’t have purpose,” Kasparov explained.
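The "brute force of calculation" Kasparov describes is, at its core, exhaustive game-tree search plus a crude evaluation function. Here is a minimal fixed-depth minimax sketch, purely illustrative: the `position` object and its methods are invented placeholders, and real engines add alpha-beta pruning, opening books, and endgame tables.

```python
# Minimal fixed-depth minimax (illustrative sketch, not a real chess engine).
# `position` is a hypothetical object exposing legal_moves(), make(), undo(),
# is_terminal(), and material_score(); these are placeholder names.

def minimax(position, depth, maximizing=True):
    """Return (score, best_move) found by brute-force search to a fixed depth."""
    if depth == 0 or position.is_terminal():
        return position.material_score(), None

    best_move = None
    best_score = float("-inf") if maximizing else float("inf")
    for move in position.legal_moves():
        position.make(move)
        score, _ = minimax(position, depth - 1, not maximizing)
        position.undo()
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move
```

Each extra ply of search multiplies the work by the branching factor, which is why this approach only overtook grandmasters once hardware, databases, and pruning tricks caught up: exactly the "brute force" Kasparov is pointing at.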

 

Gil Pratt, CEO at the Toyota Institute, a group inside Toyota working on artificial intelligence projects including household robots and autonomous cars, said in an interview at the TechCrunch Robotics Session that the fear we are hearing about from a wide range of people, including Elon Musk, who most recently called AI "an existential threat to humanity," could stem from science-fiction dystopian descriptions of artificial intelligence run amok.

“The deep learning systems we have, which is what sort of spurred all this stuff, are remarkable in how well we do given the particular tasks that we give them, but they are actually quite narrow and brittle in their scope. So I think it’s important to keep in context how good these systems are, and actually how bad they are too, and how long we have to go until these systems actually pose that kind of a threat [that Elon Musk and others talk about].”

 

Brooks said in his TechCrunch Sessions: Robotics talk that there is a tendency for us to assume that if the algorithm can do x, it must be as smart as humans. “Here’s the reason that people — including Elon — make this mistake. When we see a person performing a task very well, we understand the competence [involved]. And I think they apply the same model to machine learning,” he said.

 

Facebook’s Mark Zuckerberg also criticized Musk’s comments, calling them “pretty irresponsible,” in a Facebook Live broadcast on Sunday. Zuckerberg believes AI will ultimately improve our lives. Musk shot back later that Zuckerberg had a “limited understanding” of AI. (And on and on it goes.)

It’s worth noting, however, that Musk isn’t alone in this thinking. Physicist Stephen Hawking and philosopher Nick Bostrom also have expressed reservations about the potential impact of AI on humankind — but chances are they are talking about a more generalized artificial intelligence being studied in labs at the likes of Facebook AI Research, DeepMind and Maluuba, rather than the more narrow AI we are seeing today.

 

Brooks pointed out that many of these detractors don't actually work in AI, and suggested they don't understand just how difficult it is to solve each problem. "There are quite a few people out there who say that AI is an existential threat — Stephen Hawking, [Martin Rees], the Astronomer Royal of Great Britain…a few other people — and they share a common thread in that they don't work in AI themselves." Brooks went on to say, "For those of us who do work in AI, we understand how hard it is to get anything to actually work through product level."

 

AI could be a misnomer. Part of the problem stems from the fact that we are calling it "artificial intelligence." It is not really like human intelligence at all, which Merriam-Webster defines as "the ability to learn or understand or to deal with new or trying situations."

Pascal Kaufmann, founder of Starmind, a startup that wants to help companies use collective human intelligence to find solutions to business problems, has been studying neuroscience for the past 15 years. He says the human brain and the computer operate differently and it's a mistake to compare the two. "The analogy that the brain is like a computer is a dangerous one, and blocks the progress of AI," he says.

 

Further, Kaufmann believes we won't advance our understanding of human intelligence if we think of it in technological terms. "It is a misconception that [an algorithm] works like a human brain. People fall in love with algorithms and think that you can describe the brain with algorithms and I think that's wrong," he said.

 

When things go wrong. There are in fact many cases of AI algorithms not being quite as smart as we might think. One infamous example of AI out of control was the Microsoft Tay chatbot, created by the Microsoft AI team last year. It took less than a day for the bot to learn to be racist. Experts say that it could happen to any AI system when bad examples are presented to it. In the case of Tay, it was manipulated by racist and other offensive language, and since it had been taught to “learn” and mirror that behavior, it soon ran out of the researchers’ control.
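Tay's internals were never made public, so the following is only a rough illustration of the failure mode: a toy bot that simply replies with whichever phrases it has been fed most often gets poisoned by a flood of abusive input. Every name here is invented for the example.

```python
# Toy illustration of "learning by mirroring": the bot replies with whatever
# phrases it has seen most often, so a flood of abusive input dominates its
# output. This is NOT how Tay actually worked; it only shows the failure mode.
from collections import Counter
import random

class MirrorBot:
    def __init__(self):
        self.seen = Counter()

    def learn(self, message: str) -> None:
        self.seen[message] += 1

    def reply(self) -> str:
        if not self.seen:
            return "Hello!"
        # Sample a reply in proportion to how often each phrase was seen.
        phrases, counts = zip(*self.seen.items())
        return random.choices(phrases, weights=counts, k=1)[0]

bot = MirrorBot()
for msg in ["have a nice day"] * 3 + ["<offensive slogan>"] * 50:
    bot.learn(msg)
print(bot.reply())  # almost certainly the offensive slogan
```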

 

A widely reported study conducted by researchers at Cornell University and the University of Wyoming found that it was fairly easy to fool algorithms that had been trained to identify pictures. The researchers found that when presented with what looked like “scrambled nonsense” to humans, algorithms would identify it as an everyday object like “a school bus.”

 

What’s not well understood, according to an MIT Tech Review article on the same research project, is why the algorithm can be fooled in the way the researchers found. What we know is that humans have learned to recognize whether something is a picture or nonsense, and algorithms analyzing pixels can apparently be subject to some manipulation.
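The Cornell/Wyoming researchers generated their nonsense images with an evolutionary search, but the same brittleness is easiest to sketch with a related gradient-based trick, the fast gradient sign method. A minimal sketch, assuming a PyTorch image classifier called `model`; this illustrates the kind of pixel-level manipulation involved, not the study's exact method.

```python
# FGSM sketch: nudge every pixel slightly in the direction that increases the
# classifier's loss. The result looks unchanged to a human but can flip the
# model's prediction. Assumes `model` is a PyTorch classifier; illustrative only.
import torch
import torch.nn.functional as F

def fgsm(model, image, true_label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```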

 

Self-driving cars are even more complicated because there are things that humans understand when approaching certain situations that would be difficult to teach to a machine. In a long blog post on autonomous cars that Rodney Brooks wrote in January, he brings up a number of such situations, including how an autonomous car might approach a stop sign at a crosswalk in a city neighborhood with an adult and child standing at the corner chatting. The algorithm would probably be tuned to wait for the pedestrians to cross, but what if they had no intention of crossing because they were waiting for a school bus? A human driver could signal to the pedestrians to go, and they in turn could wave the car on, but a driverless car could potentially be stuck there endlessly waiting for the pair to cross, because it has no understanding of these uniquely human signals, he wrote.
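This failure mode is easy to reproduce in a naive rule-based controller: a rule that says "yield while pedestrians are near the crosswalk" keys off where people are standing, not what they intend, so it never terminates if they are just chatting. A toy sketch, with the car and sensor objects invented for the illustration:

```python
# Toy yielding rule that can wait forever: it tests pedestrian position,
# not intent. `car` and `sensors` are hypothetical objects for illustration.

def approach_stop_sign(car, sensors):
    car.stop()
    # Loops indefinitely if the pedestrians are waiting for a school bus
    # rather than waiting to cross.
    while sensors.pedestrians_near_crosswalk():
        car.wait(seconds=1)
    car.proceed()
```

A human driver breaks the standoff with a wave and a nod; encoding that exchange of signals is the hard part Brooks is describing.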

 

Each of these examples shows just how far we have to go with artificial intelligence algorithms. Should researchers ever become more successful at developing generalized AI, this could change, but for now there are things that humans can do easily that are much more difficult to teach an algorithm, precisely because we are not limited in our learning to a set of defined tasks. end quote

 

From: "Dennis May" To: atlantis Subject: ATL: Naturally Evolved, Man-Made, & The Supernatural Date: Sun, 15 Apr 2001 00:46:35 -0500

. . . .  Natural selection is a crude, wasteful, undirected, and inefficient process yet it has created the human mind in just a few billion years.  Can a man-made process generate something equivalent to or superior to a human mind?

. . . . My real question is still the same as it has been for over a year now: what is the Objectivist solution?  Having heard none, I assume there is none.  Will Objectivism simply retain the information in undigested form, ignore it, or give up the claim of being objective?  I suspect from what I've seen so far some combination of indigestion and ignoring it (leave it to the specialized sciences) will occur.  That will certainly wash with some people but not with many others.  The process will be greatly dragged out by those who will claim no artificial intelligence is intelligent in any case.

 

From: "William Dwyer" To: <Atlantis Subject: Re: ATL: A.I. Date: Tue, 3 Jul 2001 10:11:24 -0700. In response to Jens Hube, Debbie wrote,

> If you're going to assert that there is such a thing as life which is not biological life as we know it, then I think the burden of proof has to fall on you to prove it, not on me to disprove it.  Not being a scientist in the field of Artificial Intelligence, it is to me as though you're trying to assert the right to life of fairies --something which exists only in the imagination of man, but not in physical reality. >

 

You could probably have some form of artificial life, but in order to qualify as a living organism, it would have to possess values; it would have to pursue goals in response to its own needs.  It would have to have something to gain and/or lose by its actions. As an intelligent being possessing rights, it would also have to be capable of learning, of acting autonomously, and of grasping and applying moral principles. If it were an intelligent form of life in this respect, then, despite its being artificial, I think that it would have rights, including the right to life.  However, this is an exceedingly tall order, and one that is unlikely to be filled in the foreseeable future. 


10 hours ago, Peter said:

Some excerpts from a new article from Real Clear Future and some old letters about A.I. or Artificial Intelligence.


 

DeepMind AlphaGo cannot tie a shoelace. Lee Sedol can.

Intelligence is -general-. AlphaGo can only play Go.

If you specify one or a small number of tasks (even "mental" tasks), then a machine can most likely be built that can outdo a human -at those tasks-. John Henry found out the hard way that a steam-driven hammer can "whop down steel" better than he could.

"Well de captain he say to John Henry, I'm gonna put a steam hammer  on the job, I'm going to whop that steel on down Lord, Lord I'm going to whop  dat steel  on down, I gots a steam hammer on dat job and it whop dat steel on down"

"Well John Henry he say to de captain, a man ain't nothing but a man,  and before I let that steam hammer beat me down, I'm gonna die with dis hammer in my hand,  Lord, Lord, I'm gonna die with dis hammer in my hand"

Well, John Henry lost to the steam hammer and he did die with his hammer in his hand.


  • 5 months later...
On 7/31/2017 at 11:38 PM, BaalChatzaf said:

DeepMind AlphaGo cannot tie a shoelace. Lee Sedol can. Intelligence is -general-. AlphaGo can only play Go. If you specify one or a small number of tasks (even "mental" tasks), then a machine can most likely be built that can outdo a human -at those tasks-.

The significance of AlphaGo is how it achieved its narrow intelligence. The algorithm that produced AlphaGo is general purpose, and it computes using principles inspired by how the brain works.

It's not AlphaGo itself that is astounding, but the algorithms that produced it.

These algorithms are building blocks on the road to general intelligence; DeepMind is just using games as a way to test the technology.
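For what it's worth, the general-purpose recipe behind AlphaGo Zero and AlphaZero is roughly: play games against yourself guided by a neural network plus tree search, then retrain the network on the games you just produced. A heavily simplified sketch of that loop; `game`, `network`, and `mcts_move` are placeholders, and the real system uses Monte Carlo tree search, a deep residual network, and massive parallel self-play.

```python
# Conceptual AlphaZero-style self-play loop (heavily simplified sketch).
# `game`, `network`, and `mcts_move` are placeholders, not DeepMind's code.

def self_play_episode(game, network):
    """Play one game against itself, recording (state, move, final outcome)."""
    history, state = [], game.initial_state()
    while not game.is_over(state):
        move = mcts_move(network, game, state)   # tree search guided by the net
        history.append((state, move))
        state = game.apply(state, move)
    outcome = game.result(state)                 # +1 / 0 / -1 for the first player
    return [(s, m, outcome) for (s, m) in history]

def train(game, network, iterations=1000, games_per_iteration=100):
    for _ in range(iterations):
        examples = []
        for _ in range(games_per_iteration):
            examples.extend(self_play_episode(game, network))
        network.fit(examples)   # pull the net toward the search's own choices
    return network
```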


9 hours ago, Nerian said:

The significance of AlphaGo is how it achieved its narrow intelligence. The algorithm that produced AlphaGo is general purpose, and it computes using principles inspired by how the brain works.


Believed when seen. I worked in the field of AI from 1960 to 1965. When I went on to do other things, I became quite skeptical of the possibilities.


AlphaZero was told nothing about chess except the rules of the game, no principles of strategy, no opening book, no endgame database, nothing. It seems to have learned more about chess on its own in 4 hours than all of humanity learned about chess in several centuries. If a human did that, what IQ would be required?

AlphaGo and AlphaZero may be a first step from weak AI to strong AI, from special-purpose intelligence to general-purpose intelligence. Tell it the rules of a game (shogi, Chinese chess, whatever game), or the rules of something in real life instead of a game, and it figures out the rest.
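"Tell it the rules and it figures out the rest" amounts to the learning algorithm only ever touching a game through a small rules interface, so swapping Go for chess or shogi means swapping one object and nothing else. A sketch of what such an interface might look like; the method names are assumptions for illustration, not DeepMind's actual API.

```python
# Minimal "rules of the game" interface an AlphaZero-style learner needs.
# Method names are illustrative placeholders, not a published API.
from abc import ABC, abstractmethod

class Game(ABC):
    @abstractmethod
    def initial_state(self): ...

    @abstractmethod
    def legal_moves(self, state): ...

    @abstractmethod
    def apply(self, state, move): ...

    @abstractmethod
    def is_over(self, state) -> bool: ...

    @abstractmethod
    def result(self, state) -> int: ...  # +1, 0, or -1 from the first player's view
```

Everything above this interface (the search, the network, the training loop) stays the same whether the object underneath implements Go, chess, or shogi.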

Maybe the next step is that it figures out the rules of the game on its own and doesn't need to be told, like Capablanca at the age of four, who knew nothing about chess, watched his father play two games, and then beat him.

 


  • 4 years later...
On 12/17/2014 at 12:45 PM, Reidy said:

While I'm not a heavy-duty student of the topic, I've enjoyed the writings of Hubert Dreyfus (What Computers Can't Do and Mind Over Machine) and The Improbable Machine by Jeremy Campbell. They treat AI as a case of rationalism, the mistaken attempt to treat all intellectual activity as rule-bound deduction. Campbell draws an explicit analogy to Hayek on this point, digital AI being the counterpart of centralized economic planning.

Here's a creepy instance of almost-AI. I wonder how long the "robot" art director was trained or taught the visual language. It could be that this particular "Mash Up Art" installation was designed to deliver horror-movie output no matter what ...

 


