Sam Altman out at OpenAI

Holy crap...


Chief technology officer Mira Murati appointed interim CEO to lead OpenAI; Sam Altman departs the company. Search process underway to identify permanent successor.

From the article:


Mr. Altman’s departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities. The board no longer has confidence in his ability to continue leading OpenAI.

This is nothing short of a tidal wave in the AI community.

I wonder if the OpenAI machines became sentient and attacked their creator...



Is this theater to keep AI in the hands of crony corporations?


Altman and former OpenAI president to lead advanced AI research team at Microsoft; Emmett Shear to be interim chief at OpenAI

Surprise, surprise, surprise...

OpenAI started out as an open source project.

Now some core brains of OpenAI will work directly at Microsoft on a new AI division.

And OpenAI, I fear, might go the way of Project Veritas without James O'Keefe.

We'll see.


I would not be surprised to see Elon Musk buy OpenAI, though, when the price gets right. After all, Elon was one of the founders. And he is doing his own AI thing (Grok) over at X. OpenAI and Grok might be a marriage made in heaven for him.



I won't bore you with the infighting, but a power play by the Predator Class over the future of AI unfolded in this affair, and the Predator Class won.

Surprise, surprise, surprise...


Sam Altman has been reinstated as CEO of ChatGPT developer OpenAI just five days after his sudden dismissal from the organization. The move comes after more than 700 OpenAI employees threatened to quit if Altman wasn't reinstated at the AI powerhouse.

The arrangement means most of OpenAI's board is going to be fired, if they haven't been already.

The holdouts on the board wanted to go slower in order not to create a Frankenstein monster with AI.

Altman wanted the money from rushing headlong into using AI for surveillance and control state projects of all sorts. He was toadying up to Microsoft, Gates and others of like ilk.

In their view, if they create a Frankenstein monster, well, blank-out. At least they will get more money and power, that is if their monster does not kill them off.


Should we avoid AI? No.

I advise that you use AI in limited ways that enhance your own projects. As a tool, it is great. But this is still the wild west or dot com days. So you need to think about it, not just use it. Trial and error is the correct method of working through this.

I also advise against sudden adoption of shiny new features that make you give up more and more control of your life and hand it to them. Take neural implants as an example. If you are paralyzed and an implant will give you mobility of your limbs and body, I say go for it. What a wonderful development.

But to buy goods at stores without going through a checkout line due to a microchip under your skin, or to be able to access the cloud with a brain implant? I say, hell no.

That's getting into metaphorical Soylent Green land.


I think the possibility or probability of AGI is still a big question, especially in the near future.

AI, on the other hand, is here, and it will be the biggest driver of wealth creation in the very near future. If you are 'old' enough to have seen the societal and cultural changes that the internet plus social media have wrought, we ain't seen nothing yet, baby, lol.

Patrick and Tom have a good analysis of the business/corporate side maneuverings involved. Pretty sure the Altman stuff is in the first third of the episode.



1 hour ago, tmj said:

I think the possibility or probability of AGI is still a big question, especially in the near future.


I seriously doubt this will ever become important as anything more than a discussion topic.

AI is a machine. Humans are living beings.

The engineers might be able to make something so advanced, and so self-improving, that it mimics life. But I doubt life will ever come to pass. I doubt it so much, this is as close to certainty as I can get. The premise is wrong from the ground up, starting with its design being top-down, by a species (human beings) that is alien to machines in just about every way imaginable qua living being. Fuel. Growth. Reproduction. Biology. And so on.


An analogy might help to clarify the concept.

There exist self-driving automobiles these days. Not only can cars go really fast, but they are designed for human comfort inside the cabin and come with a lot of safety features. And now AI will be integrated into them.

But in no way will a car ever become a horse.

In fact, we are now at a point in history where nobody makes that comparison anymore. Yet at one time, that was all people talked about.

I see AI following a similar trajectory.


Thanks for the vid. I will watch it.




Incidentally, when I say Frankenstein monster, I am not talking about a living creature by analogy.

I am talking more in the sense of something like nuclear power. Some people can use it to power a city, others can use it to blow up the city.

The AI fight I see is between people who want to use it to light up the city. And they are worried that a bomb is being created that will go so far out of control it blows up before it can be harnessed again.

Knowing what dirty rotten scoundrels the people in the Predator Class are, I share their concern.


