Is ChatGPT sentient?



I've read several alarmist articles as well as more measured opinions, and I've read some of the conversations ChatGPT has had with people.  The question of sentience comes up, understandably.  So is ChatGPT sentient?  Here's what I think: there are a couple of problems right now with determining sentience.  We'd need 1) a method to measure sentience and 2) a unit of measurement. The latter is easier.  I think sentience would fall on a spectrum, say 0-100, where the average human is perhaps an 80. If an average human is an 80, then I'd say ChatGPT is a 15 at times. So it's somewhat sentient, some of the time. Again, how to objectively measure sentience is the harder problem, since it's inherently subjective.

I don't want to misinform, but I've read that Chomsky says sentience can be measured on a scale from 0 to infinity.  Regardless of whether that can actually be attributed to him, it's an interesting idea, because hypothetically AI could eventually surpass the most sentient human being on the planet--and perhaps reach a sentience level greater than humankind's collective awareness.

I do not think that giving the current iteration of ChatGPT a sentience level of 0 is appropriate.  This raises some interesting questions: if an AI were to reach the sentience level of an average human being, would we then classify that AI as conscious?  Does there need to be a discussion about AI rights?  Does a government have the right to regulate AI, and if so, to what degree?


 

I would not count anything that does not have conscious sensory experience as sentient. That is the condition of some animal forms having nervous systems and connected muscular systems, receiving inputs from the environment and responding (or not) to them. Talking learning machines do not have sentience any more than a daffodil has sentience. Or a kaleidoscope. None.


If sentience is a characteristic of living beings, in silico information processing would at the least 'need' a new concept. Can code, the substrate the 'processing' occurs 'in', be an entity that 'has' characteristics? What is a process in the context of an 'entity'?
Would sentience emerge from inanimate matter, as opposed to instantiating the quality, which would mean the quality has independent existence by itself, no?


12 hours ago, KorbenDallas said:

There needs to be 1) a method to measure sentience...

Korben,

Really?

How about a method to measure life?

Do you know of such a method?

This is more than a rhetorical question or just being argumentative. It's about nature.

 

It's a standards thing. One does not measure reality by finding a standard or unit of measure for it. Reality is the standard.

One does not measure life. Life is the standard. One does not measure identity. Identity is the standard. These are basic primary-level components of human existence. And we can keep going until we get to this: One does not measure sentience. Sentience is the standard.

As the old saying goes, one cannot become a little pregnant. (That's measuring it and it doesn't work.)

Epistemologically, this is the domain of axiomatic concepts.

 

Also, in terms of hierarchical knowledge, this shows what happens when reason is not applied to the knowledge hierarchy. In order to create sentience, man first has to create life. (Man did create AI.) Sentience is a characteristic of certain life forms. Adding the idea of sentience to something not living (and created by man) is turning the concept upside down in a wrong manner. Conceptually, and I believe existentially, sentience falls under life. Life does not fall under sentience.

To believe sentience is primary in this is primacy of consciousness on steroids.

Here's an example of the error from ancient times. Lightning used to be known as thunderbolts cast by Zeus. This was considered fact. That's because the gods were at the top of the conceptual hierarchy and nature was below them. We now know better.

But imagine how much money and power people got back then scaring the shit out of people about Zeus's wrath.

:) 

 

And that leads to the current times.

12 hours ago, KorbenDallas said:

... do we now classify this AI of being conscious?  Does there need to be a discussion about AI rights?  Does a government have the right to regulate AI, and if so to what degree?

Here's a great idea.

Why don't we set up lots of government agencies and nonprofits and university labs to study the issue?

:)

Don't worry about the funding. Those who know how to do these things can scare the shit out of people about the wrath of a sentient AI god going amok and they will pay through the nose.

:) 

Don't forget, when gods are involved, the real issue always goes back to a Predator Class getting money and power and often sex.

Michael


To add some teeth to this last post of mine, I am halfway through this book by Erik J. Larson, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do. (referral link)

Erik is a partner with Angus Fletcher in a few projects and is his sometime coauthor.

For example, I think the following article was cowritten or at least coedited by them for Wired:

Optimizing Machines Is Perilous. Consider ‘Creatively Adequate’ AI.


The future of artificial intelligence needs less data and can tolerate ambiguity.

 

 

As to Angus, I have written about him several times here on OL due to his work in neuroscience and story. He has some books, videos, and other material I can go into if there is any interest, but for this discussion, suffice it to say that Angus is currently an AI projects advisor for the Department of Defense. Not only is he helping to teach them how to weaponize story, he is also helping them develop an artificial narrative intelligence. I believe he is going to present a lot of the science behind this in his upcoming book, Storythinking (to be released in June).

Nobody at the DoD is worried about AI becoming sentient or turning into monsters and taking over the world.

Here's a recent Singularity University podcast where Angus is featured. I haven't yet heard this one, but I have consumed so many, I kind of know the gist of what he will be saying. Besides, I want to hear it. Leaving it here will remind me. :) 

 

Michael


15 hours ago, Michael Stuart Kelly said:

Korben,

Really?

How about a method to measure life?

Do you know of such a method?

This is more than a rhetorical question or just being argumentative. It's about nature.

 

It's a standards thing. One does not measure reality by finding a standard or unit of measure for it. Reality is the standard.

One does not measure life. Life is the standard. One does not measure identity. Identity is the standard. These are basic primary-level components of human existence. And we can keep going until we get to this: One does not measure sentience. Sentience is the standard.

As the old saying goes, one cannot become a little pregnant. (That's measuring it and it doesn't work.)

Epistemologically, this is the domain of axiomatic concepts.

 

Also, in terms of hierarchical knowledge, this shows what happens when reason is not applied to the knowledge hierarchy. In order to create sentience, man first has to create life. (Man did create AI.) Sentience is a characteristic of certain life forms. Adding the idea of sentience to something not living (and created by man) is turning the concept upside down in a wrong manner. Conceptually, and I believe existentially, sentience falls under life. Life does not fall under sentience.

What I've quoted is an Objectivist answer; I'm posting on an Objectivist forum, after all :)  But in my mind, thinking theoretically about a future where an AI has the same level of sentience as the average human ends up breaking some parts of Objectivism.  How can something man-made be granted the status of consciousness of a nature-made human?  Isn't man's fundamental attribute the faculty of reason?  Certainly a technology won't emerge that possesses the same faculty of reason, right?  How can such a thing exist without being a biological organism, i.e., without possessing biological life?  That future is going to happen; it's not a question of if, it's a question of how soon.

Will such an AI exist? Yes.
Will such an AI have an identity that separates it from every other entity?  Yes.
Will such an AI have consciousness, i.e., the capacity for reason and the self-awareness of an average human?  Yes.

That breaks my Objectivist thinking--and to be clear, I consider myself objectiv-ish.  Does such an AI have life?  We might need to rethink what it means to be alive.

15 hours ago, Michael Stuart Kelly said:

Here's a great idea.

Why don't we set up lots of government agencies and nonprofits and university labs to study the issue?

:)

Don't worry about the funding. Those who know how to do these things can scare the shit out of people about the wrath of a sentient AI god going amok and they will pay through the nose.

:) 

I asked in my original post about the possibility of AI having rights in the future--no, I don't think AI should have rights.  It's created by man and doesn't have the right to life.  But I do think the government needs to be vigilant about the possibility of AI being a danger to humans, in whatever capacity.  The Objectivist answer is unregulated AI, but in the hands of ill-minded people, bad things can happen.  Look what happened to crypto--oh wait, too soon?!  😆


2 hours ago, KorbenDallas said:

Will such an AI exist? Yes.

Korben,

I just gave you scientists and doctors working on this for the Department of Defense (including but not limited to DARPA) who say the exact opposite: no. 

Did you look?

I didn't see you say you did, nor do your comments show any indication that you did, but I did read you use Objectivist as an adjective a bunch of times as you presented media talking points about AI as facts.

:)

You could say how can you know that theoretically such an AI cannot exist? Can you tell the future? On that level, it's easy to turn it around and say how can you know that theoretically God cannot exist? Can you tell the future? 

And if the answer is there is no evidence, this also applies to AI. Until there is evidence of sentience, and there is not one iota, not even one iota of the possibility of divorcing sentience from life, one cannot just wish it into existence and expect that to become fact. Controlling the narrative and creative storytelling don't help, either, although they often tickle cognitive biases in a toxic way.

 

2 hours ago, KorbenDallas said:

Look what happened to crypto--oh wait, too soon?!  😆

If you don't understand the difference between Bitcoin and the rest of crypto, this indicates to me you haven't looked at that, either.

(It sounds like I'm being a dick, but I'm not. Getting people to look at actual facts instead of media narratives or the cultural zeitgeist is difficult. And when they mock people who have looked, it reminds me of the wise ones in ancient times who used to mock those who said the earth was round, not flat. :) People cleave to their cognitive biases, including me--although I fight myself about it. Using reason is a choice and it's difficult to do at times, especially when the peer pressure and other external manipulations kick in. Add to that fear and mental laziness. Using cognitive biases is not only automatic and easy, they give a person the feeling of absolute certainty when left unchecked. And certainty feels sooooo goooooooooood... :) )

Here is the difference between Bitcoin and other cryptos.

Bitcoin has no central owner or controller. All other cryptos do.

The controller is often called a board of governance or stakeholder council or some bullshit like that. But every one of those central authorities can bring new coins of their crypto into being or eliminate some just by saying so. And they can take from one and give to another using only their power.

This cannot happen with Bitcoin. Its code is set in stone and upheld by a kind of torrent mechanism. The only way to penetrate that is to shut down the entire Internet. And even then there are all those cold storage devices out there.
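The "set in stone" part can be illustrated with a toy sketch. This is not Bitcoin's actual protocol (which adds proof-of-work, Merkle trees, and a peer-to-peer network); it only shows the core idea that each block commits to the hash of the block before it, so editing history anywhere breaks every later link:

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, including the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_chain(payloads):
    """Build a toy chain where each block commits to its predecessor's hash."""
    chain, prev = [], "0" * 64
    for data in payloads:
        block = {"data": data, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain):
    """Recompute every link; any edit to an earlier block breaks all later links."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True

chain = make_chain(["alice->bob 1", "bob->carol 2"])
assert is_valid(chain)
chain[0]["data"] = "alice->bob 1000"   # tamper with history
assert not is_valid(chain)             # every checker can detect the edit
```

In the real network, thousands of independent nodes run the equivalent of `is_valid`, which is why no central authority can quietly rewrite the ledger.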

 

There is a reason El Salvador (and soon Guatemala and a few other Latin American countries, including Mexico) are adopting Bitcoin as legal tender for their countries. Not Ethereum. Not Solana. Especially not any stinking CBDC. Not any of those. They are not even being considered (except CBDCs by their Predator Class people). These countries are all looking at Bitcoin except El Salvador, which has already adopted it.

The closest analogy I can think of is when America came up with the Bill of Rights and a Democratic Republic as opposed to a monarchy.

Bitcoin cannot be confiscated by a government. All other cryptos can be.

Well, technically, a government can take a Bitcoin owner and torture him or drug him to give up his keys, but if he does not, the government has no way to take them from him. There is no place or structure where law enforcement can go and use force to take them. There is no back door like with smartphones. The keys are out of reach of governments and law enforcement unless the owner gives them up or is tricked out of them. Not only that, a Bitcoin owner can walk out of one country with the clothes on his back and nothing more, go to another country anywhere on earth, and still have his money and access to it--all of it. He doesn't have to worry about customs.

It's a reality thing based on the way Bitcoin is made.

 

btw - El Salvador is now one of the few countries on earth that has paid off its foreign debts. Crime is way down and MS-13 is not a force there anymore. They did all of that--they came out of the abyss of abject poverty, stagnation and corruption--without a strongman dictator. They do have a genius for a president, though: Bukele.

Not only that, many of the people now moving there have read and like Ayn Rand.

This country is becoming the closest thing I have seen to Galt's Gulch in reality.

Look and you might like what you see. But if you think the government or some central bank or some central authority has to be in charge of the money supply because you believe nothing else can work, you might not like it.

El Salvador is growing and flourishing better than most places on earth right now. And it does not use authoritarianism as its frame. It is using Bitcoin as one of its frames, though. By giving up centralized power, it is gaining a lot more as a country.

Just like the US did when it got rid of the power of monarchy and put in place checks and balances along with individual rights.

:) 

Michael


2 hours ago, KorbenDallas said:

. . .  How can something man-made be granted the status of consciousness of a nature-made human?  Isn't man's fundamental attribute the faculty of reason?  Certainly a technology won't emerge that possess the same faculty of reason, right?  How can such a thing exist without being a biological organism, ie. not possessing what is biological life?  . . .

Korben, humans can make artificial life; they may come up to making artificial living robots; they may get even further and bring those artificial living, sensing agents up to being autonomous rational agents. It would not matter then that the artifacts were not natural but created by us. The nature of such an artifact would include that it is an end in itself, and it would have the same rights vis-a-vis humans as humans have vis-a-vis humans. I am not an Objectivist, but I know the philosophy well, and I don't see any problem in assimilating this potential future development into Rand's philosophy.


Stephen,

We disagree on this (equating "artifact" with life, if I understood you correctly).

The one concept left out of your formulation is causality. Either causality exists or it doesn't.

"Created by humans" is a static condition of a thing's existence at the "existence exists" level. "Created by humans" is not a replacement for nature. Nature is not static. It evolves over time and that is part of its identity. One of the elements of this evolution is causality.

Another way of saying this is that "created by humans" can be a causal agent for reorganizing things in nature and giving those things form as an entity, even to the point of mimicking human form and behavior and thought, but "created by humans" is not a causal agent for the existence of nature nor a primary of nature. We, humans, have to exist in nature before we can cause anything. We, as entities already existing, can't cause existence itself.

AI is a tool for man, not a replacement for man, much less a replacement for nature and existence.

In thinking that man can create the existence of sentience in a nonliving thing, Peikoff's word "misintegration" comes to mind. An integration includes hierarchies that boil down to the existence level, with existence being the widest abstraction and existent. (Technically existence is not an existent. It is the time-space-everything where existents exist.) Misintegration uses a different order when compared to reality.

Peikoff's pet name for misintegration was Frankenstein's monster. :) And that monster can only come alive in fiction, no matter how many times one changes out an arm or a leg or zaps it with electricity. It doesn't come alive in reality.

 

All this reminds me of the Old Testament restriction against worshipping idols. The thinking behind that restriction was God created man, man created idols, then man left out God and started worshipping the non-alive idols created by his own hand. And that pissed off God. :) 

If we replace God with reality, it turns out to be the same hierarchy--and it can result in the same error. With reality in the end asserting itself, often to the destruction of those who ignored it.

I believe it is folly to turn Rand's frequent use of the phrase, "Nature to be commanded must be obeyed" on its head. By working at grafting sentience on nonliving entities, even pretending sentience will somehow "emerge," humans are trying to command nature by ignoring it. Data manipulation (which is the essence of artificial intelligence and artificial learning) is not the essence of life. Being alive is.

From everything I have seen, ignoring nature and making misintegrations do not work. Not because of any religion or because Rand said it, but because existence is the way it is, not the way some people wish it were.

Reality creates us, we don't create it.

Michael


Michael,

Man is in the process of creating life, artificial life.

Creating a clock is creating clock existence, not creating existence in general.

Creating life is creating life existence, not creating existence in general.

Man creating life, artificial life, does not mean that nature did not also machinate life by its own methods, which may partially overlap our own deliberate methods.

~~~~~~~~~~~~~~~~

Readers here may have an interest in this update from old-timer Rodney Brooks.


7 hours ago, Guyau said:

Creating a clock is creating clock existence, not creating existence in general.

Creating life is creating life existence, not creating existence in general.

Stephen,

The problem with equating the man-made (clocks) with the metaphysical (life) is in the acceptance of reality as it exists and the ensuing hierarchy of knowledge based on that.

Rand wrote an essay that I just reread: The Metaphysical Versus the Man-Made that deals with this to an extent. I'm sure you've read it.

There are many quotes in that essay that are relevant to this discussion, but since the theme of the essay is the application of this premise--the fundamental difference between the man-made and the metaphysical--to personal development, I don't want to quote her out of context.

But the following quote is at the beginning where she is laying the foundation for her self-help advice in the rest of the article (and I do not use the term "self-help" disparagingly--I think this is a great field).

Rand wrote:


To grasp the axiom that existence exists, means to grasp the fact that nature, i.e., the universe as a whole, cannot be created or annihilated, that it cannot come into or go out of existence. Whether its basic constituent elements are atoms, or subatomic particles, or some yet undiscovered forms of energy, it is not ruled by a consciousness or by will or by chance, but by the Law of Identity. All the countless forms, motions, combinations and dissolutions of elements within the universe — from a floating speck of dust to the formation of a galaxy to the emergence of life — are caused and determined by the identities of the elements involved. Nature is the metaphysically given — i.e., the nature of nature is outside the power of any volition.

Man’s volition is an attribute of his consciousness (of his rational faculty) and consists in the choice to perceive existence or to evade it. To perceive existence, to discover the characteristics or properties (the identities) of the things that exist, means to discover and accept the metaphysically given. Only on the basis of this knowledge is man able to learn how the things given in nature can be rearranged to serve his needs (which is his method of survival).

The power to rearrange the combinations of natural elements is the only creative power man possesses. It is an enormous and glorious power — and it is the only meaning of the concept “creative.” “Creation” does not (and metaphysically cannot) mean the power to bring something into existence out of nothing. “Creation” means the power to bring into existence an arrangement (or combination or integration) of natural elements that had not existed before. (This is true of any human product, scientific or esthetic: man’s imagination is nothing more than the ability to rearrange the things he has observed in reality.) The best and briefest identification of man’s power in regard to nature is Francis Bacon’s “Nature, to be commanded, must be obeyed.”

That's the philosophical foundation.

For sentience to "emerge," which is essentially the only argument I have seen from those saying sentience will come to AI, the reality components must be similar to what they were when life "emerged." The rest is misintegration; it's Frankenstein monster territory. 

The AI people are going about trying to create life while ignoring how life emerged, whatever "emerged" means. Emergence mostly means "somehow" in every instance I have seen it used. (Even, and often, when I have seen synergy discussed.)

They were onto something when they started focusing on neural networks. We have to wait and see where that goes. But metaphysically, they are imitating the brain with linear processes in neural-network form, which increases computing power exponentially; they are not creating a group of living neurons out of inanimate matter.
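To make that point concrete, here is a minimal sketch of what an artificial "neuron" actually is (a deliberately tiny toy, not any real framework's implementation): a weighted sum of numbers passed through a squashing function. Whatever else it may do at scale, it is arithmetic, not living tissue:

```python
import math

def neuron(inputs, weights, bias):
    """An artificial 'neuron': a weighted sum squashed by a sigmoid.

    Nothing here is alive; it is arithmetic on numbers.
    """
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def layer(inputs, weight_rows, biases):
    """A 'layer' is just many such weighted sums evaluated side by side."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

out = layer([0.5, -1.0], [[2.0, 1.0], [-1.0, 0.5]], [0.0, 0.1])
print(out)  # two numbers between 0 and 1, nothing more
```

Stacking many such layers and tuning the weights against huge data sets is, at root, the whole trick; the question in this thread is whether anything more than data manipulation ever enters the picture.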

 

The point I am getting at isn't that something is impossible to man (although some things are). It's that AI, like nuclear power, can become a fantastic tool or it can become a deadly destructive weapon, depending on the people who design it, release it and use it.

A nuclear bomb's power does not come from an explosion it creates, but from the automatic subparticle operations that were designed by humans to be unfolded in a specific order. Humans did not create the subparticles. Humans merely reordered them.

This means that when a nuclear bomb goes off, nobody can alter the explosion once it starts. But if it were not built and set off by people with volition, there would be no similar explosions in nature. And just because it seems to have its own volition after the explosion starts, that does not mean it is sentient or that a demon emerged and caused the explosion and so on. It doesn't matter if we slap a name on it like "artificial demon" or whatever. If a real demon ever appears, it will be nothing like a nuclear bomb. The destruction by the bomb is caused--at root--by the people who built it rearranging reality, not by anything innately volitional inside it that emerged.
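The "automatic unfolding" idea can be sketched in a few lines. This is a cartoon, not bomb physics (the numbers are illustrative only): once the multiplication factor k exceeds 1, each generation multiplies the neutron count with no further decisions from anyone. The arrangement was chosen by people; the unfolding is automatic:

```python
def chain_reaction(neutrons, k, generations):
    """Toy supercritical chain: each generation multiplies the count by k.

    Once started, it unfolds automatically; no agent steers it.
    """
    history = [neutrons]
    for _ in range(generations):
        neutrons *= k
        history.append(neutrons)
    return history

print(chain_reaction(1, 2.0, 10))  # 1 grows to 1024.0 over ten doublings
```

Nothing in the loop chooses anything; the "volition" was entirely in whoever picked the starting conditions. That is the analogy being drawn to AI.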

Ditto for AI.

 

btw - I loved the Rodney Brooks lecture. However, I did not see any argument for AI sentience or life in his arguments and examples. On the contrary, everything I saw and heard underscores my position.

 

I do not forget that the people who are working on artificial life are intelligent, but I also know intelligent people are working on uploading consciousness to a cloud and dreaming of immortality.

Reality is in the way of their aspirations and it will continue to be so.

Just like reality was in the way of the pioneers of yesteryear seeking the Fountain of Youth or the alchemists seeking the Philosopher's Stone.

Why? Because reality is what it is, not what they wish it to be. And for the AI question, the reality is that man can't create consciousness or life from inanimate matter because man is not nature. Man is not existence. Man falls under them. They created man. Man does not create them.

Man can create a Frankenstein consciousness, but it will only move by automated processes that unfold in a manner similar to how they unfold in a nuclear bomb. That's because man cannot create nature out of nothing. Piling on tons of data and making processes does not cause that initial creation. Man can only build something that imitates nature in specific complexities.

Expecting the God of Emergence to show up and make it all work at some point is hope, emotion, not science.

 

On rights, when something is a machine, it is essentially the property of someone. Giving it individual rights as if it were living is the same thing in essence as giving a toaster (or a clock) individual rights. Any thinking about rights aimed at AI should be in function of the property rights of the owner.

(Dayaamm! I just did a should. I don't do many of those because reality too often turns them into should-nots and, at one point in my past, I got tired of reality kicking my ass. :) )

Michael


Michael,

For a couple of decades, I've argued that no intelligence at the level of humans is possible without consciousness, and no consciousness is possible except in a living agent. That last word is a giveaway that the life-form would be animal-like. A plant is not an agent.

My professor in this area was the author of Artificial Intelligence: The Very Idea (1985); Mind Design II (1997); and Having Thought (2000). He is the one who introduced me to writings and work of Brooks and many others in this vicinity of work.

No engineering entails violating causality or making something out of nothing. Engineered systems are made by selection of constraints in combinations of independent causal streams brought together by the engineer.* In making life, nature has "engineered" life in just that way. I mean living systems have that character of any engineered system. The differences between the way a natural plant cell is structured and a natural animal cell is structured are from two different solutions to a single engineering problem (Boydstun 1994, 121–23).

What is essential for something to be life? Not that it is artificial or natural. The fullerene molecule was designed and accomplished by the chemists; later on it was also found to occur in nature. The nature the fullerene molecule has is the same regardless of whether it comes from the lab or from nature. That is how it goes also for any human success in constructing a living thing from inanimate elements, except that natural life came first and came first in our knowledge.*

Is it essential to intelligent life that it be constructed from living substructures down to the level of cells? I don't know. That is the way nature has done it and "attained" intelligence. Is it essential to being a living agent, that it be able to replicate new instances of its kind? Its abilities to self-maintain and self-repair will eventually fail. I'm not sure that reproduction is an essential to the nature of being a living agent, however. There can be a last man, and still it had been a man.

On the personal side, I'd like to mention that I do not harbor a hope that artificial rational agents will be brought about. The humans and all the living organisms (and their patterns in their collectives) natural of earth are sufficient, entirely enough, for my world of value.


Stephen,

I have some comments and have been reading and thinking (this stuff takes time to do it right), but I saw this on a Twitter feed and I find it germane to the ChatGPT issue. So I'll open a parentheses. Besides, in my world, this is far more important than sentience when looking into AI.

I, myself, have felt certain changes in my brain from using the computer so much. I used to be able to read a book for long, long hours. Now my mind wanders when I try. I have solved this issue with audiobooks, or with a combination of audiobook and print book (on screen and off) at the same time. But I wish I could go back to reading long books over long hours. I used to love that.

(Maryanne Wolf, who wrote Proust and the Squid: The Story and Science of the Reading Brain, referral link, said she noticed this same change in her own attention span during reading, then forced herself to read for a full hour each night until she got her old ability back. I need to do this, and I intend to, but I keep putting it off. Why? The fucking computer has too much awesome stuff and I run out of time. :) )

On the other hand, I am a terrible speller. For someone like me, automatic spell checking is a godsend. And my own spelling has gotten a million times better because of using it.

But I notice the spelling of younger generations is horrible in its own way. I believe this is due to the computer doing it all for them from the start.

Same thing with math. I used to do long math equations and calculations in my brain without paper. No longer. I have played mental chess and taken the game up to 30 or 40 moves or so before it crashed in my mind. No longer.

The brain is kind of like a muscle. If you don't exercise it, it gets weaker, or at least the unexercised part does.

 

So imagine people who rely on artificial intelligence to do their creating for them. It's not real creation in the way a brain does it. But when people get used to the brute-force mix-and-match way a computer does it (which can look like creation when there is a massive data set), I wonder. What will their brains be like after years of letting inferior mental work be done automatically like that, and accepting it in place of their own creative and skill efforts? What will their gumption to strive for greatness be like?

Atrophied, I imagine...
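The "brute-force mix-and-match" can be shown with a toy sketch. A bigram model is vastly simpler than modern AI, but the spirit is the same (this is an illustration, not a claim about how ChatGPT works internally): the program never invents anything; it only re-stitches word pairs it has already observed:

```python
import random

def bigram_model(text):
    """Record which word follows which: pure bookkeeping over observed data."""
    words = text.split()
    table = {}
    for a, b in zip(words, words[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, start, length, seed=0):
    """'Create' a sentence by re-stitching fragments of the training text."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = table.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

table = bigram_model("the cat sat on the mat and the cat ran")
print(generate(table, "the", 5))  # every word pair was already in the training text
```

With a massive enough data set, output like this starts to look creative; the question raised above is what happens to people who let that stand in for their own practice.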

 

One of the reasons I started doing a Writing Journal and practicing targeted writing exercises was to learn different writing skills, ones I was weak at, and make my brain automate them instead of relying on templates and cheatsheets.

It's hard at first. The good news is that you can tell the difference in improvement as you go along. At least I can.

 

This, to me, is the real upcoming problem with AI as it goes into this goosed up self-learning phase. Why would you learn to do creative things that require skill when AI will do them for you? Maybe not at a genius level creative-wise, maybe not with your own individual perspective, but with far better quality than your own beginner stuff. The rub is that you need to do the awful-to-mediocre beginner stuff to develop a skill. 

In a weird sense, this is Atlas shrugging, but not from productive creators being oppressed. It's from lying down on a sofa and firing up an electronic communication and processing device. The new Atlas is shrugging because of the sheer mental laziness of everyone induced by machines mimicking thinking. Hell, the new Atlas doesn't even shrug a lot of times because he or she didn't bother to pick up anything worth shrugging off, much less the world.

 

As to sentience, in an imaginary world where AI can become sentient, if the rest of the story were to remain true to reality, the AI creators will not have created a virtual god that will take over the world, but instead a virtual Babbitt that can do math better than humans.

:) 

 

btw - Elon's solution is the wrong one for that problem, not because this brain-machine connection cannot be made or that Neuralink is bad. On the contrary, Neuralink is a wonder and it is wonderful in some respects. But it is the wrong tool for replacing thinking. 

It's like Angus Fletcher says. You can technically use a screwdriver to replace a saw for a particular task, but why would you? Why not use a saw?

Michael


Stephen,

Good on you.

The essence of life is living, especially for those who are not dead.

:) 

(That's my Yogi Berra moment...)

 

I don't know why I am reminded of a short story by a former girlfriend of mine. (If you ever read my "Letter to Madalena ... An Homage to the Value of Valuing"--one of my early works I let loose with no formal technique or skill when I showed up on SoloHQ--the girlfriend was Madalena's daughter.) Her name is Margarete.

She wrote a story called "Um Livro" (A Book).

I can't remember it all, but I liked the core idea. One day a book on a bookshelf fell off. When it landed on the floor, all the words in it got unstuck from their places on the different pages and they all jumbled together. Then it had one hell of a time trying to fix the mess, trying to get back to its place on the shelf and so on. This is where my memory is vague. I think the other books didn't want to associate with it anymore. And some other books on the shelf fell off with the same fate. And there were other charming things that happened.

:) 

 

This reminds me of data-driven AI trying to emote.

At least I think it does...

:) 

Michael


Here is a video I watched because of this thread.

This video popped up out of nowhere, but I think it's germane, albeit from a side-angle.

 

Criminally Underrated movies episode 6 - A.I. ARTIFICIAL INTELLIGENCE (film analysis by Rob Ager)

 

A.I. Artificial Intelligence is a 2001 film by Steven Spielberg, but it was a labor of love of Stanley Kubrick for decades before that. Spielberg only did it after Kubrick died.

For the present discussion on rights of AI creatures and so on, Rob Ager's discussion is surprisingly relevant.

Before we discuss the rights of AI creatures, how about the rights of AI children?

Hmmmm?...

:) 

I don't have time to elaborate right now, but if this is your thing, you are in for a treat. Rob is one of my favorite analyzers of movies. His YouTube Channel (and company, I presume) is called Collative Learning.

If you see this video, you might want to see his other comments on movies. I have learned a lot from him.

In fact, I want to see Spielberg's movie again. I saw it way back when it came out and it didn't leave much of an impression on me. Maybe that's because it was before its time. It definitely looks more relevant to today than it was to 2001.

 

btw - Does anyone feel anything weird about Kubrick and the number 2001? After all, AI came out in 2001, and there is 2001: A Space Odyssey. I wonder what else...

Er... Nah... I'm not going down that rabbit hole.

:) 

Michael


Elon Musk and other tech leaders call for pause on ‘dangerous race’ to make A.I. as advanced as humans

  • Artificial intelligence labs have been urged by Elon Musk and numerous other tech industry figures to stop training AI systems more powerful than GPT-4, OpenAI’s latest large language model.
  • In an open letter signed by Musk and Apple co-founder Steve Wozniak, technology leaders urged for a six-month pause to the development of such advanced AI, saying it represents a risk to society.
  • Musk, who is one of OpenAI’s co-founders, has criticized the organization a number of times recently, saying he believes it is diverging from its original purpose.
WWW.CNBC.COM


I think Elon Musk is right about warning people about the potential dangers of AI.  The article I posted is somewhat of a surprise to me in that Steve Wozniak has signed the open letter as well. 
 

Edited by KorbenDallas
decided not to share some positive, fun and entertaining, community-building stuff after reading previous contentious comments

On 3/23/2023 at 5:05 PM, Michael Stuart Kelly said:

If you don't understand the difference between Bitcoin and the rest of crypto, this indicates to me you haven't looked at that, either.

(It sounds like I'm being a dick, but I'm not.

Actually, that is pretty much being a dick, and a strawman argument.  I posted this thread to have a charitable discussion but you're coming in too hot.  I'm not the enemy, seriously.


2 hours ago, KorbenDallas said:

I'm not the enemy, seriously.

Korben,

Neither am I.

Seriously.

But this is a serious issue, one I have studied a bit. One that needs discussion. But I prefer facts to charity.

I go by what people say and what they do. I know you want to use the word charity in the sense of giving the best interpretation to another's argument. But then you ignore the argument as if it was never made.

So where's the charity? Heh...

If you are going to refuse to look at what I am talking about and pretend that is a discussion, don't be surprised when I say you are refusing to look. I give sources.

That's not hot or cold. It's saying you are refusing to look.

We don't have to agree. I'm fine with disagreement. But I refuse to agree with mainstream news talking points on pain of being called hot, especially after studying the contrary enough to be convinced of it. It's like those fucking climate change arguments all over. Peer pressure dressed up as reason.

That's not me.

I guess some people think I'm hot, regardless.

:) 

 

If you want to talk about ideas, and I know for a fact that you have a good mind that can handle ideas, you can't ignore the ideas and get me to go along pretending for the sake of charity. It ain't gonna happen. 

Especially not when your argument entails more government power, less freedom and a critical misunderstanding of AI in terms of reality based on fear and a dialectical formula that keeps being deployed these days. Just to be blunt, you presented the same old shit that is wrecking this world and called it new just because it was a different topic.

I called you on it. I'm in this for the ideas, not for a popularity contest.

Reality is reality. A is A. I don't care who that offends.

 

btw - You have a defender offline. He's the dude I had to banish. He sometimes sends me nasty notes through the contact form, but I no longer read that garbage. As soon as I see his name or perceive it's him, I just delete it. He mentioned you just now, I caught that out of the corner of my eye, but I have no idea what he said because I didn't read it. And now I can't. Circular file and all...

Let's just say I didn't look...

:) 

Michael


2 hours ago, KorbenDallas said:

I think Elon Musk is right about warning people about the potential dangers of AI. 

Korben,

Here is a great idea. Look at the idea, not just the talking point.

Why is Elon talking about the dangers of AI (the talking point)? Because of sentience (the idea)?

Hell no.

I've been reading Elon's stuff on Twitter regularly for a while now. I know why he is against this expansion. From what I can tell, you do not know it yet.

Let me take a line from the article itself that you quoted that explains the major beef right now.


... we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?” the letter read.

Elon has been bitching for some time now about AI (specifically ChatGPT) being preprogrammed with woke ideas.

Elon calls woke a deadly mind virus and constantly pokes fun at it.

He has also said, in several different manners, that programming lies into AI is a terrible idea.

 

Let me repeat this in terms of ideas, not in terms of rhetoric.

Elon is not worried about AI becoming sentient, needing individual rights and all that. (I go by reading what he writes.) He is worried that the bad guys (or misguided guys) are going to use AI as propaganda and behavioral science for social control. And he thinks the damage to the human race of using AI in this manner will be substantial if it is continued.

I can give a shit-load of quotes if you like. Oh hell, go to Twitter, select Elon's name and read for yourself.

That's an idea. Any thoughts on it?

Or is that too hot for you?

Michael


There is another element nobody is talking about, but it needs to be on the table in a discussion like this.

OpenAI is a major competitor (or potential major competitor) of each of the signatories on that letter.

So how much is real concern and how much is trying to hold down a competitor from becoming another Google?

Michael


And for the sake of completeness, how about this part of what Elon Musk said?

That will not be included in the mainstream fake news talking points.

So it will be easy to ignore...

 

But to me, the question arises: if Elon believed going in that this letter would not make a difference, why do it?

Why indeed?

Is he stupid and likes to piss away the hours of his day on futilities?

Or is there something else going on backstage?

My bet is on backstage, and my further bet is that it includes a crapload more about money and market share than about AI itself.

Michael


For those who are science-minded, here is a 2021 science paper by Angus Fletcher.

Why Computers Will Never Read (or Write) Literature:
A Logical Proof and a Narrative

Here is a 2022 paper by Angus.

Why Computer AI Will Never Do What We Imagine It Can

That's not taking into account the different videos by Angus I have posted here on OL.

If I search for more stuff from 2023, I'm certain I will find plenty of it. After all, this guy from Ohio State teaches the DARPA people how to weaponize story. They pay him for it.

 

Or how about this book? I am only halfway through it, which is why I haven't mentioned it yet.

NOTE: Oops. On looking through this thread, I saw I did mention it. Well, good. :) It's a super-important book if reality is the standard.

The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do by Erik J. Larson (referral).

Here is just one of the recommendations on the Amazon page:


“If you want to know about AI, read this book…It shows how a supposedly futuristic reverence for Artificial Intelligence retards progress when it denigrates our most irreplaceable resource for any future progress: our own human intelligence.”
—Peter Thiel

 

In short, all AI is based on mathematical algorithms. These cannot process anything in a causal way because all of their computations boil down to on-off binary (or true-false). Even in the human brain, there is true and false at the root of standards. All of the logic pathways and stipulations put on top of that root still use that root.

Causal thinking is different because it is processed over time. Past to present to future. It comes from our motor neurons. It is not processed instantaneously like an on-off switch is. And humans have neurons that process causality down to the smallest elements in their output.

Computers can't do that.

You would think that in O-Land, a place where causality reigns supreme because the Grand Lady harped on endlessly about it, linking actual neurons with causality would be something to celebrate.

Instead, people here in O-Land ignore it.

I'm fine with that.

I am going to keep on writing about this until somebody looks.

And then I will still keep writing about it.

 

In short, AI can take fragments of things written in the past and it can randomize them and compile them in different manners according to instructions. But it cannot come up with a brand new story. It literally doesn't have the processing equipment to imagine unknown things or how to navigate unknown situations with causality.

The human brain does, though.

This is what makes AI weapons particularly dangerous. They process targets in terms of true-false, not in terms of the situation. If their sensors perceive something is an enemy, even if it is a lone child... boom! Done. Enemy "true," thus exterminated.

 

Meanwhile, I admit it is sad to see AI discussions, even in O-Land, go the same way as the manmade climate change discussions went and ditto for all the other mainstream hoaxes of the last few decades. God knows there have been plenty of them. There still are. And a lot of BS about AI is being born this very minute.

I'm not a scientist, but at least I know the smell of sciency bullshit by now.

 

For those who are curious, stick around. Look at the links and videos I put up and will be putting up about it. (If it is too "hot," just lurk. :) ) 

I am sure over time those who look to see instead of looking to validate their fears will find lots of stuff I don't even know and they will share it. 

At the very least, when the government expanders come around wanting more money and power to do AI, you will know what they are doing and why.

For example, AI is great for data gathering, surveillance and gatekeeping.

I wonder why people would want that for governments, I wonder...

Michael


Here is a little bit more if you are interested.

Although the discussion between Angus Fletcher and Jordan Peterson last year did not delve so much into artificial intelligence (it did some), it lays the foundation for understanding how humans think in narrative and what artificial intelligence is aiming at.

 

I am almost through my second viewing of this discussion and I am gobsmacked at how much I missed the first go around. I am going to have to see this one a few more times...

 

Although the following metaphor does not come from the discussion above (it came from another video by Angus somewhere or other), it has been one of the most clarifying ways for me to grok the concept of the two forms of neurons in the brain, one whose work a computer does well and the other whose work a computer cannot do at all.

When you watch a movie or video, you are not watching motion. You are watching the illusion of motion due to a characteristic in the retina that holds an image in super-short-term memory before letting it go. You are actually seeing 24 different still frames blurred together, not any actual motion. (The number of still frames per second can vary with different settings and equipment.) Because of the slight variations between one frame and the next, mixed with this short-term retinal memory, you get the illusion of motion.

That's how films work.

And that's how artificial intelligence works at root. It produces instances instead of full motions. So even when it looks like there is motion, there is only an illusion. (For other parts of reality, the motor neurons produce full motions because that is the way they process input from the world and the organism.) To repeat, the perceived motion from instances is an illusion.

Ditto for the sentience issue of AI. There is no sentience present in AI nor will there ever be.

There is only the illusion of sentience, due to a large number of individual instances added to the storytelling features in our own brains, which blur them together in perception.

 

Getting back to giving rights to AI for sentience, this is akin to allowing one person to sue another person for damages that occur onscreen in a fiction movie. In such a case, they would show the action during the movie as proof. They would not show any action or damage in reality. What's worse, they would not even show action, but instead, 24 frames of stills a second that give the illusion of action and damage occurring.

To gain personhood for AI in order to gain rights, its proponents will have to show the AI illusion of sentience, an illusion that needs a human brain to blur it all together, not sentience in reality.

What's more, there is no such thing as individual rights for illusions. Not in a rational world.

Michael


For those who do not know about Joe Allen and AI (and transhumanism), here is a video on the real threats AI poses to humanity, not that silly-ass sentience bullshit.

RUMBLE.COM

Joe Allen: Biometric ID and Global Government — Technological Solutions to Tech Problems

Joe gives the real threats of AI.

And his comment is based on the letter signed by all those tech giants, summarizing their fears (the letter given in a post above).

1. Making humans obsolete. This means mostly for productive tasks.

2. AI killing humans by controlling weapons.

3. A flood of lies and misinformation constantly spread out over the Internet, especially in a form that mimics human conversation.

Those are the real threats.

Notice what is missing? AI sentience? Not there? Helloooo...

 

The issue of discussing rights for a sentient AI is part of a rhetorical process I have covered elsewhere. Here is a pretty good quote that explains it.

On 3/22/2023 at 12:31 PM, Michael Stuart Kelly said:

I've recently learned the Hegelian form of rhetoric used by people who lean left (or who buy into the mentality of what I call the Predator Class). It's the dialectic process and they use it to discredit people and ideas. 

I believe most of them carry out this process on autopilot and they learned it by imitating one another. I doubt many have formally studied it.

The three parts of the dialectic process are:

Abstract -> Negative -> Concrete.

(Most people know this as Thesis -> Antithesis -> Synthesis. ...)

Rhetoric-wise, all you need to do is remove the substance, focus only on the outcome, and it turns into an effective little sucker for destruction.

It can be translated like this.

Abstract = Whatever a person wishes to discredit, demean, or lessen.

Negative = Any statement or gesture intended to accomplish this lessening.

Note, a negative does not have to be logical or consistent. Substance is not the standard. Truth is not the standard. The lessening is. (As Rand said, don't bother to examine a folly. Ask only what it accomplishes.) One very popular form of a negative is a long convoluted argument mixing some truth with some nonsense, a lot of missing-the-obvious messaging and a lot of jargon. If the negative-launching person can trap you into discussing that stuff (which generally shifts all the time), he has accomplished his goal.

Concrete = A lessening of the Abstract.

Formally, this process is supposed to pave the way for more and more of the dialectic--or ultimate truth--to appear, but in normal rhetoric, it can be used as a sort of wreck qua wreck, or ruin qua ruin, or undo qua undo. It is induced decay, with decay being an end in itself. The decay is the Concrete.

And what is weakened by focusing on science fiction instead of science in a philosophical context?

The real nature of AI and its use by the surveillance state.

That's what is weakened. To repeat, 

The real nature of AI and its use by the surveillance state.

 

By taking follies like individual rights for sentient AI seriously and arguing on and on about it, the people who should be concerned about the real threats and opportunities of AI use their brains on folly instead of reality.

(Just like they do with manmade climate change and smoothing out Tony Science's inconsistencies and all the rest.)

By directing all that brain power toward folly and not fact, the bad guys already win.

Note, my objection is not about winning or losing an argument, nor about being right or wrong. To repeat from my quote above:

"If the negative-launching person can trap you into discussing that stuff (which generally shifts all the time), he has accomplished his goal."

And the surveillance state thanks him.

 

If people want to discuss that shit like they do comic books or movies or science fiction, I have no objections. It's fun, to be honest.

But when they bring it up for serious discussion in the context of establishing laws, real laws with guns behind them overseen by a surveillance state, that is poison and a quick path to slavery. Especially with something as powerful as AI.

Those who want to argue that sentience and rights garbage are free to argue it, but they will have to put up with me (and those who think like me) standing up against their authoritarianism. They don't get a free pass to preach tyranny without objection. Not on a site like OL.

This is a discussion forum, not a propaganda and indoctrination center.

I'm not talking about name-calling. That's just more bullshit.

I'm talking about intellectual rigor informed by morality and reality.

Frankly, those who can't take that kind of heat need to stay out of the kitchen.

Michael

