Damage from Artificial Intelligence to Ding

The following video by David Knight is an easy-to-grok introduction to potential AI damage that is not end-of-the-world nonsense (the "be afraid, be very afraid of the AI monster and follow us--give us more power and we will keep you safe" kind of bullshit :) ).

There is real damage AI can do, and it sure looks like it is doing it right now. We have to stay alert so this damage does not touch our individual lives.

(I don't care so much about clutching pearls over societal damage. When that comes up, control freaks start talking about giving AI rights. :) )


Regardless of what happens with AI, the stock market hype will collapse as all the FOMO investors see big investors exiting. And there are real concerns, not about it becoming self-aware and dominant, but about it imploding when it consumes too much of i…


Here are some of the things David says to look out for.

1. Accuracy, especially with chat. AI is now encountering a problem of "hallucinating." This comes from the nature of how it works. AI gets trained on data sets, meaning info that already exists online. It scans that data and chops it up, then reassembles it according to templates and algorithms. Also, it is self-learning, meaning it goes out on its own and scans more stuff, and comes up with new templates and algorithms based on statistical analysis. (It has to use stats since the computer is 1 and 0 at root. There are no conceptual connections based on meaning that are processed without numbers.)

In other words, AI puts out stuff that is often inaccurate meaning-wise, but the numbers for usage are correct. And what does AI do? It scans and saves that stuff, too. So AI puts out some garbage, lots of users publish the garbage, then AI scans that garbage as if it is legit knowledge and adds it to its own data.

So far, there has been no process developed for AI to detect AI garbage in its scans with any accuracy and consistency.

Over time, this process makes AI results weirder and weirder, in other words, inaccurate. AI starts hallucinating. :) 
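This feedback loop (AI retraining on its own output) is sometimes called "model collapse," and you can see the flavor of it in a toy sketch. The sketch below is an assumption-laden illustration, not how any real system is trained: the "model" is just a Gaussian, refit each generation on a small sample of the previous generation's output, and its spread tends to shrivel over time.

```python
import random
import statistics

def collapse_demo(generations=200, sample_size=20, seed=42):
    """Toy 'model collapse': each generation fits a Gaussian to samples
    drawn from the PREVIOUS generation's fitted Gaussian, never from the
    original 'real' data again."""
    rng = random.Random(seed)
    mu, sigma = 0.0, 1.0  # the "real data" distribution we start from
    history = [sigma]
    for _ in range(generations):
        # Generate synthetic output, then refit on that output alone.
        sample = [rng.gauss(mu, sigma) for _ in range(sample_size)]
        mu = statistics.fmean(sample)
        sigma = statistics.pstdev(sample)
        history.append(sigma)
    return history

hist = collapse_demo()
print(f"std dev: start={hist[0]:.3f}, end={hist[-1]:.6f}")
```

Each refit on a finite sample slightly underestimates the spread on average, and those small losses compound across generations, which is the "weirder and weirder" drift in miniature.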


2. An economic bubble similar to the Dot Com bubble is likely to happen soon. As AI has been overhyped, a shit-ton of money has gone into investing in it. Now that AI is underperforming relative to the hype while the investments keep rolling in, the bubble is going to pop at some point. Hell, AI is being used to make market predictions on a massive scale. This bubble is so big that if it pops, it threatens to take down the stock market with it.

Here is just one signal David talks about. Michael Burry, the guy who predicted the 2008 housing market kaboom (the subject of the movie "The Big Short"), has just put 95% of his portfolio into a bet to short the S&P 500 and Nasdaq. That's over a billion dollars under his management. (See here for corroboration.)

The AI bubble is tied up in this. Granted, there is no guarantee this bubble will pop, but the signs are not good. And Burry has been wrong in some of his predictions, but never with a bet this size: over a billion dollars of the funds he controls.


3. Surveillance. This one is easy, even if AI starts getting hopelessly inaccurate. The government (the intelligence community) and the Predator Class in general rule humans by manipulating fear, not by accuracy. So as AI spits out tons of stuff on you, me and all of us, the bad guys will format this stuff in scary terms for the public and the different sides of controversies. It doesn't matter if it is accurate or harms innocents. The people who want to rule don't give a damn. They just want their power, and AI looks like it can be effective in this scenario.


That's enough for a start.

Enjoy your day...



I have cooled on Scott Adams, but I am 100% on board with him in the following X-post:

Here's the full text.


I'm feeling AI will be less like a full transformation of civilization and more like the invention of the laser printer. 

AI has not made a joke or written a song or made a movie I care to consume. And a judge ruled you can't copyright AI work.

When AI is used, it seems to be a tool (like a laser printer) and in my experience increases my workload because I can do new things.

Some humans will have relationships with AI, but some humans are furries too. No big deal.

AI is not "smart" in any way that impresses me more than a calculator. Feels more like magic tricks than intelligence.

AI seems like demoware to me. Great for showing you what it can almost do but not practical if you were to try it yourself.

I think we overestimated it. By a lot. Because we are primed by scifi.


AI will cause some mischief, but nothing like what the fear-mongers and transhumanists have predicted.

Not even in the same category of trouble.

AI will cause trouble from its own limitations when used as a source, and it will be a great surveillance propaganda tool for bad guys.

It will also be the reason a lot more schlock gets published online.

On the good side, it is far, far, far better than Wikipedia when you want to research a topic. You have to check everything before you can use the information and be sure you are correct, but it's great for the first phase. Being able to home in on a topic with questions and statements is wonderful.


Just to give a nod to the bad stuff, though, since that is the theme of this thread, imagine how much code AI writes that is useless. I know since I tried a while back. I wanted it to give me a simple script for AutoHotkey and, man, did I get a pile of crap that doesn't work. In step-by-step form, too. :)

It might be good, though, for learning to code, just to see things formatted correctly. Since it's a crapshoot whether the code is any good, taking code produced by AI and checking it yourself is a great learning exercise.
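That checking habit can be made concrete with a few lines. A minimal sketch of the idea, in Python rather than AutoHotkey (the `word_count` helper here is hypothetical, standing in for whatever an AI hands you): treat generated code as untrusted and probe its edge cases before relying on it.

```python
# A hypothetical AI-generated helper, pasted in for review.
def word_count(text):
    """Count whitespace-separated words."""
    return len(text.split())

# Treat it as untrusted: probe the edge cases before relying on it.
checks = {
    "": 0,                 # empty string
    "hello": 1,            # single word
    "hello  world": 2,     # repeated spaces
    "  leading space": 2,  # leading whitespace
}
for text, expected in checks.items():
    actual = word_count(text)
    assert actual == expected, f"{text!r}: got {actual}, expected {expected}"
print("all checks passed")
```

If the generated code is garbage, the asserts blow up immediately, and figuring out why is exactly the learning exercise.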


Kaleidoscope decoding, I think, can be an analogous description of what LLMs (large language models) are 'doing'.

A kaleidoscope basically breaks an image into fractals and then reflects the new image 'back' to you. If you can figure out how to use computation to examine portions of the overall broken image, find similarities in the patterns, and then reverse engineer the fractals back into the original image, your computation can then be judged on the fidelity of the reverse-engineered image to the original.

Give an 'AI' a set of 'all the words' and it computes an 'answer' based on the individual 'words'. The surprising part was that 'it' found some 'rules' that seem to govern propositional language and then incorporated those back into the 'search' for the answer. The scare quotes and such feel necessary because it is hard not to anthropomorphize (consciousness-morphize) what the computation is rendering. I've heard Stephen Wolfram compare what the LLMs do to what Aristotle did when he invented logic. He sees the rules of logic as an emergent property somehow embedded in language.
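A stripped-down sketch of that "computes an 'answer' from all the words" idea is a bigram model: count which word follows which in a corpus, then generate by always picking the most frequent follower. This is strictly a toy (real LLMs use neural networks over tokens, not bare counts), but it shows the statistics-over-words core in a few lines.

```python
from collections import Counter, defaultdict

# A tiny stand-in for "all the words."
corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat saw the dog .").split()

# Count, for each word, which word follows it and how often.
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def continue_text(word, steps=4):
    """Greedy generation: always append the most frequent follower."""
    out = [word]
    for _ in range(steps):
        if word not in followers:
            break
        word = followers[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_text("the"))
```

Notice there is no 'meaning' anywhere in this, only counts, which is one way to read the point above about everything being numbers at root.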

Web3 development and implementation seems to offer a potential amelioration of the possibly 'dangerous' aspects of AIs and their use.

While I'm not familiar with Amy or Devon, the things Bret Weinstein brings to conversations are worth noting, along with healthy doses of good-faith dialogue. I just listened to this one; it seems like a pretty good conversation about web3 and such. I remember web1, lots of chat sessions on all the 'alts' lol. I sure like the sound of what web3 could/will be.



  • 2 weeks later...

One year ago, the following kind of article in the mainstream would have been impossible.

This one is on Axios

Notice that a year ago, artificial intelligence was not being introduced as a science. It was being introduced as a religion.

Now people are seeing what happens when AI eliminates humans from its processing. When it believes its own bullshit, so to speak.


AI could choke on its own exhaust as it fills the web


The internet is beginning to fill up with more and more content generated by artificial intelligence rather than human beings, posing weird new dangers both to human society and to the AI programs themselves.

What's happening: Experts estimate that AI-generated content could account for as much as 90% of information on the internet in a few years' time, as ChatGPT, Dall-E and similar programs spill torrents of verbiage and images into online spaces.

That's happening in a world that hasn't yet figured out how to reliably label AI-generated output and differentiate it from human-created content.

The danger to human society is the now-familiar problem of information overload and degradation.

AI turbocharges the ability to create mountains of new content while it undermines the ability to check that material for reliability and recycles biases and errors in the data that was used to train it.

The article goes on to say AI could threaten the jobs of content creators. Blah blah blah...

Maybe at corporations that adopt it, I guess. But not in general.

Enter capitalism. New corporations are going to crop up that prize human-created content.

And there is a natural move for people to create their own content and publish it themselves. Even if they use corporate social media platforms, these individuals gather their own audiences.

To use a metaphor, how is garbage going to threaten the existence of food?



Slightly less metaphorically , garbage threatens health if it is mis-taken as food.

Blockchain technology and its uptake in the schema of web3 can help separate the wheat from the chaff.

As a story, Exodus can be applied, in a sense, to the development and use of the interwebs.

Moses refuted all the Egyptian gods through the plagues and moved Israel into the wilderness to get ‘the Egypt’ out of the Jews before bringing them to deliverance.

web1 allowed information to move outside the sole province of the universities; the files escaped by the parting of the Red Sea. Web2 and its commercialization provided the manna, the 'something' the whiny masses felt they needed and took without considering the dependence it could create. Web3 should be the liberation/deliverance.

