Elon Musk and Merging With Machines


Ed Hudgins

Recommended Posts

Elon Musk and Merging With Machines
By Edward Hudgins

Elon Musk seems to be on board with the argument that, as a news headline sums it up, “Humans must merge with machines or become irrelevant in AI age.” The PayPal co-founder and SpaceX and Tesla Motors innovator has, in the past, expressed concern about deep AI. He even had a cameo in Transcendence, a Johnny Depp film that was a cautionary tale about humans becoming machines.

Has Musk changed his views? What should we think?

Human-machine symbiosis

In a speech this week at the opening of Tesla in Dubai, Musk warned governments to "Make sure researchers don't get carried away; scientists get so engrossed in their work they don't realize what they are doing." But he also said that "Over time I think we will probably see a closer merger of biological intelligence and digital intelligence." In techno-speak, he told listeners that "Some high bandwidth interface to the brain will be something that helps achieve a symbiosis between human and machine intelligence." Imagine calculating a rocket trajectory just by thinking about it, since your brain and the artificial intelligence with which it links are one!

This is, of course, the vision that is the goal of Ray Kurzweil and Peter Diamandis, co-founders of Singularity University.  It is the Transhumanist vision of philosopher Max More. It is a vision of exponential technologies that could even help us live forever.

AI doubts?

But in the past, Musk has expressed doubts about AI. In July 2015, he signed onto "Autonomous Weapons: an Open Letter from AI & Robotics Researchers," which warned that such devices could “select and engage targets without human intervention.” Yes, out-of-control killer robots! But it concluded that “We believe that AI has great potential to benefit humanity in many ways … Starting a military AI arms race is a bad idea…” The letter was also signed by Diamandis, one of the foremost AI proponents. So it’s fair to say that Musk was simply offering reasonable caution.

In Werner Herzog’s documentary Lo and Behold: Reveries of a Connected World, Musk explained that "I think that the biggest risk is not that the AI will develop a will of its own but rather that it will follow the will of people that establish its utility function." He offered, "If you were a hedge fund or private equity fund and you said, 'Well, all I want my AI to do is maximize the value of my portfolio,' then the AI could decide … to short consumer stocks, go long defense stocks, and start a war." One wonders whether the AI would appreciate that, in the long run, cities left in ruins by war would harm the portfolio. In any case, Musk again seems to offer reasonable caution rather than blanket denunciations.

But in his Dubai remarks, he still seemed wary. Should he, and should we, be worried?

Why move ahead with AI?

Exponential technologies have already revolutionized communications and information and are doing the same to our biology. In the short term, human-AI interfaces, genetic engineering, and nanotech all promise to enhance our human capacities: to make us smarter, quicker of mind, healthier, and longer-lived.

In the long term, Diamandis contends that “Enabled with [brain-computer interfaces] and AI, humans will become massively connected with each other and billions of AIs (computers) via the cloud, analogous to the first multicellular lifeforms 1.5 billion years ago. Such a massive interconnection will lead to the emergence of a new global consciousness, and a new organism I call the Meta-Intelligence.”

What does this mean? If we are truly Transhuman, will we be soulless Star Trek Borgs rather than Datas seeking a better human soul? There has been much deep thinking about such questions, but I don’t know, and neither does anyone else.

In Ayn Rand’s 1937 short novel Anthem, we see an impoverished dystopia governed by a totalitarian elite. We read that “It took fifty years to secure the approval of all the Councils for the Candle, and to decide on the number needed.”

Proactionary!

Many elites today are in the throes of the “precautionary principle,” which holds that if an action or policy has a suspected risk of causing harm … the burden of proof that it is not harmful falls on those proposing the action or policy. Under this “don’t do anything for the first time” illogic, humans would never have used fire, much less candles.

By contrast, Max More offers the “proactionary principle.” It holds that we should assess risks according to available science, not popular perception; account for both risks and the costs of opportunities foregone; and protect people’s freedom to experiment, innovate, and progress.

Diamandis, More, and, let’s hope, Musk are on the same path to a future we can’t predict but which we know can be beyond our most optimistic dreams. And you should be on that path too!

Explore:

Edward Hudgins, Public Opposition to Biotech Endangers Your Life and Health. July 28, 2016.

Edward Hudgins, The Robots of Labor Day. September 2, 2015.

Edward Hudgins, Google, Entrepreneurs, and Living 500 Years. March 12, 2015.

 


I might as well post this here for future discussions.


THE HUMAN ACHIEVEMENT ALLIANCE

Exponential technologies in information, nanotech, biotech, robotics, and AI promise a future of unimaginable prosperity with longer, healthier, even transhuman lives for all. But these changes are producing radical economic, social, and moral challenges, with reactionary pushback from left and right and with calls for government controls. Worse, our increasingly nihilist culture is eroding the value and joy of productive achievement. The good news is that otherwise cynical young people do love technology. Further, the entrepreneurs creating this tech are individualists who love their work and want to prosper, but who need a better understanding of free markets if they are to achieve their goals.

A Human Achievement Alliance can meet these challenges. This initiative exploits the synergy between the values of Millennials, a new breed of entrepreneurial achievers, and friends of freedom. It offers an optimistic, exciting, empowering vision of the world as it can be and should be. In operation, it seeks to:

Celebrate and promote, through our institutions and through a Human Achievement Day, the value of achievement and the Enlightenment virtues of reason and entrepreneurship from which achievements emerge.

Raise public awareness, in coalitions, media, political circles, and the wider culture, of the potential of exponential technology and the necessity of economic liberty.

Develop cutting-edge thinking on deep issues concerning exponential technologies: Should we reject the “precautionary principle” for a “proactionary principle”? Why do robots and AI not threaten jobs? Will human-machine mergers pose ethical problems? Could we actually live 500 years?

Promote free-market public policies that remove barriers to exponential tech.

“We are all achievers, whether nurturing a child to maturity or business to profitability, writing a song, poem, business plan or dissertation, laying the bricks to a building or designing it.”

To help ensure this bright future, and for further information, contact Edward Hudgins at edward@edwardhudgins.com.


It will be a long time before "dry" technology can achieve the packing density and parallelism of wet, gooey organic material. The brain may be slow compared to electronic or even quantum devices, but its parallelism and density are not even closely approximated by "dry" (semiconductor) technology.

The much-vaunted high-density nanotechnology is still off in the future (if ever).

Nature has had 4 billion years to figure out how to make "wet" sentience. We humans are a bit behind the curve.


I suggest that neither you nor I know the truth about dry vs. wet tech vis-à-vis human life. I suspect that the folks who are investing billions of their own dollars into human-brain interface work know better than we do, though they could be wrong.

As I've also pointed out, the cost of sequencing a human genome has dropped from $100 million in 2001 to $10 million in 2007 to just over $1,000 today. That, along with innovations like the CRISPR-Cas9 gene-editing tool, suggests that manipulating our DNA will play a major role in the kinds of creatures we evolve into in the future. Also, see Diamandis's quote in my piece.
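To put that decline in perspective, here is a rough sketch that uses only the figures quoted above ($100 million in 2001, $10 million in 2007, about $1,000 today, taken as 2017) to estimate how fast the cost has been halving. The year 2017 is my assumption for "today"; the arithmetic is illustrative, not a precise model of the actual cost curve.

```python
import math

# Sequencing costs cited in the post (year -> approximate US dollars).
# Treating "today" as 2017 is an assumption for this sketch.
costs = {2001: 100_000_000, 2007: 10_000_000, 2017: 1_000}

def fold_drop(y0, y1):
    """Factor by which the cost fell between two years."""
    return costs[y0] / costs[y1]

def halving_time_years(y0, y1):
    """Implied average time (in years) for the cost to halve."""
    halvings = math.log2(costs[y0] / costs[y1])
    return (y1 - y0) / halvings

print(f"Overall drop 2001-2017: {fold_drop(2001, 2017):,.0f}x")
print(f"Implied halving time: {halving_time_years(2001, 2017):.2f} years")
```

On these figures the cost fell 100,000-fold in 16 years, i.e. it halved roughly once a year, which is faster than Moore's Law (a halving about every two years), a comparison often made about sequencing costs.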

