Objectivism and Artificial Intelligence


Have any of you given serious thought to Artificial Intelligence?

It wasn't a significant issue during Ayn Rand's life, and I'm not aware of any Objectivists who have written about it.

There are lots of intriguing questions: Will AI be conscious? Will it destroy humanity, and would that be a good thing?

Please share your thoughts.


While I'm not a heavy-duty student of the topic, I've enjoyed the writings of Hubert Dreyfus (What Computers Can't Do and Mind Over Machine) and The Improbable Machine by Jeremy Campbell. They treat AI as a case of rationalism, the mistaken attempt to treat all intellectual activity as rule-bound deduction. Campbell draws an explicit analogy to Hayek on this point, digital AI being the counterpart of centralized economic planning.

Dreyfus allows that formal calculation can simulate the real thing ever more skillfully but still argues that such simulation won't amount to human reason. Software is much better at this than it was when he wrote, but I suspect I'd still agree with him if I were to revisit the question.
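For readers who haven't met the kind of system Dreyfus and Campbell were criticizing, here is a minimal, purely illustrative Python sketch (mine, not theirs) of rule-bound deduction: explicit facts plus if-then rules, fired until nothing new follows. The facts and rules are invented for the example.

    # A toy forward-chaining rule engine: intelligence pictured as explicit
    # facts plus if-then rules, applied until no new conclusions follow.
    # The facts and rules are made up purely for illustration.

    facts = {"socrates is a man"}

    rules = [
        # (set of premises, conclusion)
        ({"socrates is a man"}, "socrates is mortal"),
        ({"socrates is mortal"}, "socrates has a finite lifespan"),
    ]

    def forward_chain(facts, rules):
        """Fire every rule whose premises are all known, until a fixed point."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived

    print(forward_chain(facts, rules))
    # {'socrates is a man', 'socrates is mortal', 'socrates has a finite lifespan'}

The open question is whether piling up enough rules of this kind ever amounts to understanding, or only to an ever more skillful simulation of it, which is the point at issue above.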


When I was younger I worked for about five years on AI. Eventually I became convinced it was a futile quest. We can make clever machines run by clever programs, but the only intelligent things we have ever encountered are biological and organic. Maybe we can grow AI in a vat full of biological substances, but we cannot do it with dry stuff like silicon and germanium.


@tmj AI can be defined as intelligence that exists in a non-biological object (otherwise several animals could be considered AI). AI is a tool created by humans that mimics the brain's abilities.

@Reidy I haven't read those books, but I have read some work skeptical of the possibility of AI. None of it convinces me that AI is impossible, just that current programming techniques aren't equivalent to human intelligence and can't yet replicate certain human behavior. But AI has clearly been progressing quickly: first it beat the best human at chess, then it beat the best human at Jeopardy. What's next, and what could possibly stop that trend?

@Baal Five years isn't that long, and computation continues to grow in power and decrease in price. I'm not sure why you think there's some critical difference between silicon and biological substances, which are carbon based; silicon is chemically similar to carbon.


Questions like this torment me, as does most everything regarding man-made life and genetic engineering.

For us to truly answer the question of whether we can create consciousness in inanimate objects, I think we need to understand how consciousness actually works a bit better. How would you imbue free will into a circuit? I think the answer is either that you can't, or something involving quantum weirdness. (Which I know is somewhat controversial around these parts--I don't pretend to know half of what I should to be able to comment on that debate, though... If there's something to it, perhaps it resolves the seeming contradiction between a deterministic universe and humans with free will... Which I guess is another controversy, eh? :/ )


Ray Kurzweil thinks we will be able to upload our brains to something like the cloud before too long.

Michael

He has been saying that for 30 years. Uploading our brains has been 30 years in the future for the past 50 years, and one hundred years from now it will still be 30 years in the future. Kurzweil is a brilliant inventor, but he should not give up his day job to become a prognosticator.

Do you have any sort of citation for that? I don't believe everything Kurzweil says, but his predictions have a decent track record.

Have a look here: http://en.wikipedia.org/wiki/Technological_singularity

I have... nothing there supports your claim that brain uploading is always 30 years in the future. Kurzweil has made falsifiable predictions in his book, The Singularity is Near. Have you read it?

I perused it. I did not believe a word. I have worked in AI (in my younger days). It ain't going to happen. We do not know how the stuff in individual brains is encoded, and even if we stored brain pulses amplified into changes in a magnetic field, we have not got the slightest notion of how to retrieve them and put them back into brains. The technology simply does not exist, and there is no sign of it on the visible intellectual horizon.

It is a science fiction concept. Underline the word fiction.


All right then, Bob. Humanity itself is the singularity, then, but it's a biological singularity relative to the rest of biology, not a technological one; technology is just more and better tools.

--Brant

in a million years--or a hundred million, or?--the Milky Way itself will be our home and Earth a myth in the midst of time, and humans won't be like today's humans: they'll all be beautiful with lots of real nice hair on top, etc.


Bob,

The guy below, Randal Koene, is working on it, too. I've only skimmed this, but it is clear to me that Kurzweil's is not the only set of big brains working in this area.

This article came out in the Daily Mail yesterday.

The scientist planning to upload his brain to a COMPUTER: Research could allow us to inhabit virtual worlds and 'live forever'

Since more and more people are studying this, you certainly have your work cut out for you. It won't be easy scoffing at a target that grows exponentially.

You may have to start lumping these scientists together for a collective scoff instead of scoffing at them one by one. If not, soon you will not have time for anything else.

I sympathize, too. Professional scoffing is hard. Not everyone is qualified to do it.

:)

Michael


I think we need to understand how consciousness actually works a bit better. How would you imbue free will into a circuit? I think the answer is either that you can't, or something involving quantum weirdness.

Quantum weirdness could refer to a few unique properties of the so-called quantum world ... or to the near-term promise of quantum computing. I think a detailed electrical 'map' of the brain -- alongside maps of protein cascades and the chemical mapping of millions of possible combinations -- would probably be the place where we could discover actual quantum effects.

What you say about consciousness -- needing to understand how it actually works -- I agree with. Have you read anything by Antonio Damasio? His books come to the subject of consciousness (and the brain) through 'defects' in consciousness. I recommend his 2012 Self Comes to Mind: Constructing the Conscious Brain. It helps to have read his earlier books to pick up his phraseology and definitions, but he is a good enough writer that even difficult concepts are intelligible.**

Damasio looks at the levels of functioning 'awareness' that develop as different 'levels' of the brain come 'online.' His work to understand consciousness is rooted in the study of patients who have had various brain injuries (including coma, locked-in syndrome, and other effects of brain lesions). Locked-in syndrome is close to the absolute edge of consciousness embodied -- when the body is almost completely paralyzed by injury to particular areas of the brainstem -- where the person is consciously aware, but gives no sign.

It's a weird, flat, emotionless world.

At the other end of the scale, Damasio looks at the effects of brain lesions in areas that contribute to a sense of self, executive control, and the all-important emotions, without which decision-making (free will?) is practically destroyed.

Here's Damasio at a TED talk, as a sample of his overall project -- "The quest to understand consciousness". TED also supplies a transcript if you hate watching videos ...

I think that once we humans understand more about the many-layered and plastic functions of the brain, when we can finally make headway on the 'hard problem' of consciousness, then we will be very close to designing a machine with a 'self,' or a 'soul,' or a quasi-human independent 'mind.'

I think we will be thirty years away from this until the day when Damasio's work (and the work of other fine researchers and philosophers of mind) seems basic and inchoate.

So, I will answer a resounding yes if the question is whether humans will ever succeed in imbuing a robot with high-level intelligence. I think we can barely conceive just how much progress in the brain and cognitive sciences can happen over the next hundred years of human history.

___________________________________

** For a brief explanation of Damasio's theory of consciousness, see the fairly good Wikipedia entry.


My issue is that people have often been skeptical of new technologies (often because they were afraid of them). So why should AI be an exception?

I read Kurzweil with a healthy amount of skepticism (here's an example of his predictions' accuracy being analyzed: http://lesswrong.com/lw/gbi/assessing_kurzweil_the_results/). However, the basic principle remains: our technology is becoming more efficient very quickly. It's doing things that our ancestors couldn't have imagined. This is especially true since the fall of communism, now that technological progress is less focused on building weapons.

I also don't buy the objections that 'we don't understand the human brain.' Artificial intelligence doesn't need to imitate the human brain, just as artificial flight (helicopters) doesn't need to imitate natural flight (birds).


AI is not technology but only consequent to it, insofar as it exists or will exist. As for flight, birds and helicopters need air and a minimum density altitude. They can fly, but only so well and only so high. "Natural" or unnatural is beside the point. Electro-mechanical technology will likely be replaced by biological technology, and humans will become biological self-evolvers. Humans, if so, will become quite different over the next thousand years. They won't care about us--dead past--and we don't care about them, out of the unknowingness of it all.

--Brant


Peter Voss is an AI guy who hung out at TAS events for several years. I believe his company is still operating.

I am skeptical about AI and somewhat optimistic about robotics (the difference is that today's robotics has a different intellectual foundation from AI). Whatever does end up working is going to take much longer to develop than the visionaries currently suppose.

And, in my opinion, the highly incomplete Objectivist epistemology never really addressed the underlying issues. There actually can't be an Objectivist position on AI or on robotics.

Robert Campbell


Artificial intelligence can be divided into two types: weak and strong. Weak AI is special-purpose; it can do such things as play chess at a superhuman level, and it is an accomplished fact. Strong AI is general-purpose; it is like Data of Star Trek, and in 2014 it is only science fiction and will probably remain science fiction within the lifetime of anyone now living.
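To make the weak side of that distinction concrete, here is a minimal illustrative sketch (a toy of mine, not a description of any real chess engine) of the machinery behind game-playing weak AI: brute-force look-ahead with alpha-beta pruning, shown on a trivial take-away game instead of chess. The program plays its narrow game perfectly while understanding nothing at all.

    # Alpha-beta game search: the skeleton of special-purpose "weak AI".
    # Toy game: players alternately take 1-3 sticks; whoever takes the
    # last stick wins. Chess programs use the same look-ahead idea at a
    # vastly larger scale, plus a tuned evaluation of non-final positions.

    def moves(sticks):
        return [m for m in (1, 2, 3) if m <= sticks]

    def value(sticks, alpha=-2, beta=2):
        """Game value for the player to move: +1 = forced win, -1 = forced loss."""
        if sticks == 0:
            return -1  # the opponent took the last stick, so the mover has lost
        best = -2
        for m in moves(sticks):
            best = max(best, -value(sticks - m, -beta, -alpha))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # prune: the opponent will never allow this line anyway
        return best

    def best_move(sticks):
        return max(moves(sticks), key=lambda m: -value(sticks - m))

    print(best_move(10))  # prints 2: leaving a multiple of 4 is a forced win

Nothing in such a search transfers to any other task, which is exactly what "special purpose" means here.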

If strong AI (like Data) is ever achieved, then from the point of view of Objectivism, the question of rights will come up. Will Data have rights?

Die Macht ist das Recht (might is right). Yes. It will have exactly those rights it claims it has.


When strong-AI beings come along, the matter will be worked out.

Not a pressing worry...

Robert Campbell

It should be.

I'm already here.

--Brant

making decisions, getting ready to strike (I'm not the only one)

(trying to work it out with Objectivism, so I'm going to ARI and ask to be taken to their leader [analogous to Mars Attacks])


While away from commenting on this forum, I have been caught up in working on the problems and solutions of AI (not so much in a technical sense, as I am not a computer scientist, coder, or programmer, but more in a philosophical (?) sense). I believe I have found several solutions, and I'm actually pretty confident that I can work out several others. The problem is that, because I have no programming ability to actually test my theories, all I have are armchair theories. My sister is a programmer, but I doubt I can get her or anyone else to spend thousands of hours constructing my theories for zero pay.

The first thing I see is that if you use very nearly ANY programming that causes a specific outcome in the AI, then you have immediately failed, because that is where the machine loses free will - I believe I have solved it.

Second, AI does "need" a strong natural-language ability. I put "need" in quotes because actually it doesn't; it needs it simply to satisfy the average person that this thing is intelligent. For example, animals solve problems and do many things that, if a non-speaking human did them, would qualify as intelligence, but because those animals cannot speak (English!), a large part of the population considers them unintelligent. Still, I do see the need to be able to communicate with such a creation, so it is important to the project - I haven't solved it, and I'm still wrestling with the idea that it is not my job: that the natural-language portion is a separate program from a separate team which gets added to my AI, the same way vision wouldn't be my concern, but a team that creates that component and adds it would certainly make the AI more able to relate to humans.

Second and a half: expectations. Anyone who thinks that it can never be considered AI until it can play master-level chess, compose a symphony (one which you personally would like), paint pictures, and philosophize about death and the afterlife needs to tone down their expectations. Can you do all of those things? Why does this AI have to be Leonardo da Vinci? Also, you cannot expect the AI to give you the correct answers to things, and you can't expect it to always follow orders. That would be mixing the computer (a machine tool of humans) too closely with what should be an independent AI.

Third, problem solving is a critical ability for being considered intelligent - I would say I'm a confident 60% solved on that one.

Fourth, learning - solved, no problem, including a component of curiosity and the drive to grow itself (a conventional illustration of what such a curiosity component can look like in code is sketched after this post).

Fifth, awareness of itself and its place in an environment - solved, including a world-view.

Sixth, internal conflicts. Very important in people. Balancing what the outside world wants to do with your time against what you want to do with your time, and/or the two components of one's own personality, which very often come into conflict. These have very much shaped the face of what we call human - solved.

I believe those are the only components that a universal AI would need. Again, adding the sight and touch components would make it better. This would not have emotions, but I see emotions as being merely one actor in the internal-conflict dance, and I've already covered that.

I could be missing other parts, and the parts that I have so-called solved probably don't work. But it is a very nice thought experiment to work out over time.
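Purely as an illustration of the learning item above (and emphatically not a rendering of the design described in this post, which remains an armchair theory), here is one conventional way a "curiosity" component shows up in machine-learning practice: a bandit-style learner that adds an exploration bonus for actions it has tried least, so it is drawn toward the unfamiliar while still exploiting what has paid off. The payoffs and parameters below are made up.

    # A UCB-style bandit with a "curiosity" (exploration) bonus.
    # Illustrative only: the toy payoffs and parameters are invented.
    import math
    import random

    class CuriousBandit:
        def __init__(self, n_actions, curiosity=1.4):
            self.values = [0.0] * n_actions   # running average reward per action
            self.counts = [0] * n_actions     # how often each action was tried
            self.curiosity = curiosity        # weight of the exploration bonus

        def choose(self):
            total = sum(self.counts) + 1
            def score(a):
                if self.counts[a] == 0:
                    return float("inf")       # never tried: maximally "curious"
                bonus = self.curiosity * math.sqrt(math.log(total) / self.counts[a])
                return self.values[a] + bonus
            return max(range(len(self.values)), key=score)

        def learn(self, action, reward):
            self.counts[action] += 1
            self.values[action] += (reward - self.values[action]) / self.counts[action]

    # Toy environment: action 2 pays best, but the agent has to discover that.
    payoffs = [0.2, 0.5, 0.8]
    agent = CuriousBandit(len(payoffs))
    for _ in range(1000):
        a = agent.choose()
        agent.learn(a, payoffs[a] + random.gauss(0, 0.1))
    print(agent.counts)  # most pulls should end up on action 2

Whether an exploration bonus of this kind has anything to do with genuine curiosity, or with free will, is of course exactly the sort of question the rest of this thread is arguing about.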


What if, assuming that strong AI is indeed possible, the debate becomes not whether humans consider the AI 'intelligent' in any worthwhile sense, but whether the AI considers humans intelligent? It would be part amusing, more frightening, if it came to the conclusion that we were mere automatons and not worth conversing with.

Actually, one of my earlier exposures to Objectivism (or Objectivist-like beliefs) was from reading the forums of transhumanists and people like Max More, who are very much into human advancement and life extension through technology and augmentation. That's not AI per se, but it's definitely up a similar alley.

The questions for me are what morals we would expect a strong AI to have, whether it would have identifiable personality flaws, and what we would do if it developed beliefs we found repugnant. There's no harm in thinking about all this stuff, even if we don't personally think strong AI is likely.
