If We're Not Careful, The AI Revolution Could Become The "Great Homogenization"

In a world where information is constantly being diluted, it's hard to know what to believe.

Artificial intelligence is growing, and so are attempts to control it. If we can distinguish between the real and fake risks, then this technology could be used as a tool to promote diversity in thought and ideas.

The world is changing before our eyes. Artificial intelligence (AI) is a technological revolution, but it's not the one you think.

You may have heard that "artificial general intelligence" (AGI) is just around the corner, or that "now that the language problem is solved, conscious AI is the next step."

Neither claim holds up.

Red herrings

Such talk is either the delusions and fears of technologists who think God lives in the circuits, or malicious fear-mongering by people with ulterior motives.


I do not think AGI is a danger, that there is an "AI safety problem," or that the machine singularity is just around the corner.

But...

This technological paradigm shift does pose a serious risk to humanity.

That is the one point on which I agree with the mainstream, although for totally different reasons.

Before I get to those reasons, let's try to understand the real situation.

The Stochastic Parrot

Technology is a multiplier.

Just as a hammer can be used either to build a home or to cave in a skull, computers can be used for all sorts of purposes. They can also be used by central banks to create digital currencies that turn you into a serf of the communist cat-ladies at the European Central Bank.

AI is no different. It's a tool.


It is not a new lifeform, despite what the nerds crying out for progress to be shut down so desperately want to believe.

The most interesting thing about generative AI isn't that it's sentient. It's that, for the first time in our history, we're communicating coherently with something other than a human being. Before this point, the closest we'd come to that was... parrots.

Yes: parrots!

You can teach a parrot to talk, but it doesn't understand what it's saying, so we're not especially impressed.

But generative AI... that's another story. It's been in the public eye for about six months, and most of us don't really know how it works. It responds to our words like the annoying, politically correct, midwit nerd you remember from class... or your typical Netflix show.

It sounds like the support agent you spoke to the last time you called Booking.com or dialed into some web chat. And yet you're shocked at the answers.

You tell yourself: "Holy sh*t. This thing talks like a person!"

The sentences are coherent. Not only are they grammatically correct, they're semantically correct, too.

Holy sh*t! It must be alive


Little do you realize that you're speaking with a highly sophisticated stochastic parrot.

It turns out that language is more rules-based than we thought, and that probability engines are actually very good at emulating intelligence through the frame of language. Chalk up another victory for the mathematicians and the law of large numbers.
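To make the "probability engine" point concrete, here is a toy sketch in Python (the tiny corpus is made up purely for illustration): a stochastic parrot just counts which words tend to follow which, then generates text by sampling from those counts. There is no understanding anywhere, only statistics.

```python
import random
from collections import defaultdict

# Toy "stochastic parrot": learn next-word frequencies from a tiny made-up
# corpus, then generate text by sampling. No understanding, only counting
# and probability.
corpus = "the parrot talks and the parrot repeats and the human listens".split()

# Count bigrams: how often each word follows each other word.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(word: str) -> str:
    """Sample the next word in proportion to how often it followed `word`."""
    followers = counts.get(word)
    if not followers:
        return random.choice(corpus)
    words, weights = zip(*followers.items())
    return random.choices(words, weights=weights, k=1)[0]

# Generate a short "utterance" starting from a seed word.
word, output = "the", ["the"]
for _ in range(8):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

Scale that idea up to trillions of words and billions of parameters and you get something that sounds remarkably fluent, but the core trick is still next-word prediction.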

What does that mean? Does it mean the tool isn't useful? Or that it's proof this isn't a pathway to AGI? Not necessarily, on either count.

The tool is very useful, and its greatest utility is probably as "MOT," or "midwit obsolescence tech." The "content creators" and woke journalists who for years have talked a lot but said nothing are now like dinosaurs watching the comet burn everything around them. It's beautiful. Life wins.

These tools are also useful for brainstorming, writing code faster and learning at a higher level.

As for AGI and consciousness? Who knows. Maybe. But my spidey-sense tells me we are still a long way from that, so I won't hold my breath. I believe consciousness is far more complex than we can imagine, and to think we could create it with probability machines seems a strange mixture of arrogant, naive and empty.

So what's my problem? And where's the risk?

Enter the Age of the LUI

Recall what I said earlier about tools.

Computers are the most powerful tools humanity has ever built, and the way we interact with them has evolved through a series of interfaces:

Punch Cards

Command line

Graphical user interfaces - point and click

Mobile - thumbs and tapping


Now we are moving into the era of the LUI, or "language user interface."


This is the big paradigm shift: the LUI, not AGI. From here on, we will interact with applications conversationally, no longer restricted by how quickly our fingers can tap keys or screens.

Typing and tapping are orders of magnitude slower than speaking. Thought would be the only faster interface, but I won't be sticking electrodes into my head any time soon. In fact, LUIs may well make Neuralink-style technology redundant, because the risks of implanting chips in your brain outweigh the marginal benefit over simply talking.

This decade, we'll move from interacting with apps by tapping on their graphical interfaces to speaking to them.
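As a rough, hypothetical sketch of what a "language user interface" means in practice, here is the pattern in miniature: the user's sentence is interpreted and routed to an application action. The app functions and the regex matching below are invented stand-ins; in a real LUI, a language model would do the interpreting.

```python
import re

# Hypothetical sketch of an LUI layer: a spoken or typed request is routed
# to an application action. The functions and keyword matching are made up
# for illustration only.

def send_payment(amount: str, recipient: str) -> str:
    return f"Sending {amount} to {recipient}."      # placeholder app action

def check_balance() -> str:
    return "Your balance is 1,000,000 sats."        # placeholder app action

def route(request: str) -> str:
    """Map a natural-language request to an app action."""
    match = re.search(r"send (.+) to (\w+)", request, re.IGNORECASE)
    if match:
        return send_payment(amount=match.group(1), recipient=match.group(2))
    if "balance" in request.lower():
        return check_balance()
    return "Sorry, I didn't understand that."

print(route("Send 50,000 sats to Alice"))
print(route("What's my balance?"))
```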

Herein lies the danger.

Just as Google today determines what we see when we search, and Twitter and Facebook determine what appears in our feeds, the generative AI of tomorrow will determine the answers we get to our questions.

The screen is no longer just a lens through which you absorb information about the world. The screen becomes your model of the world.

Mark Bisone recently wrote a fantastic article about this, which I urge you to read:

The problem of "screens" is actually a very old one. In many ways it has been with us since Plato's cave, and perhaps even predates written language. That's because when we talk about a "screen," we are really talking about the editorialized transmission of an illusory model.

The trick is that you are presented with an image (and these days, a sound) of something that the presenter tells you, or strongly implies, is a window onto the Real. The shadow and the form are sold as one and the same, the former to be trusted as much as any other fragment of reality visible to your sensory organs.

And for those who think "this won't happen for a long time," well, here are some bumbling idiots already making a decent attempt at it.

The 'Great Homogenization'

Imagine that every question you ask, every image or video you request, every piece of data you seek, is returned in a manner deemed "safe," "acceptable" or "responsible" by some faceless, anonymous "safety officer."

Imagine that every piece of information you receive is a lukewarm version of the truth, and that every opinion you seek is no longer an opinion but a bland, inoffensive response that tells you nothing (that's the benign, annoying version). Or worse, it's some ideology wrapped up in a reply, so that you are slowly molded into believing and knowing whatever the makers of this "safe AI" want you to believe and know.

Imagine having modern Disney characters as your "intellectual assistants," like those clowns from "The Eternals" movie. You would be dumb-squared.

This grand homogenization is the utopian dream of the communists, which I wrote about in "The UnCommunist Manifesto."

It would be so much easier to create paradise on earth if everyone were just a number in a spreadsheet, or an automaton holding the same opinions as everyone else. You could simply ration out "enough" to each person, and we'd all be equally miserable proletariats.

It's like George Orwell's thought police crossed with "Inception," because every question you asked would be perfectly captured and monitored, and every response from the AI could instill an ideology into your mind. Information does exactly that, if you stop to think about it. It plants seeds in your mind.

You want a diversity of ideas in the minds of men! You want a lush rainforest in your head, not a mono-crop wheat field with depleted soil, susceptible to pests and weather and dependent on Monsanto, or OpenAI, or Pfizer for its survival. You want your mind to flourish, and for that you need idea-diversity.

That was the promise of the internet: anyone can post anything. It has been a force for good, but it has been under attack for years, whether through the de-anonymization of accounts on Twitter and Facebook or the creeping KYC implemented across online platforms. We got a taste of that in all its glory in 2020, and it seems things are only getting worse.

One route is the push by organizations like the WEF to implement KYC for online identity, tied to a CBDC or your iris. But that is overt and explicit, and after the recent pushback against medical mandates, such a move may be harder to pull off. A much easier path is to simply let LUIs take over, as they will (because they are a superior user experience), and then establish an "AI safety council" to impose "safety filters" on all large language models.

You don't believe me, do you?

Our G7 leaders are already talking about it.

The web today is still made up of pages, and if you're curious enough, you can explore the dark, hidden corners of dissident thought. But when everything becomes accessible only through these models, there is no more surfing. You simply receive a synthesized response that has been run through the necessary filters and editors.

There will likely be some semblance of truth left in it, but it will be wrapped in so much "safety" that 99.9% of people will never know it or hear about it. The model will dictate what the truth is.

I don't know what happens to the bulk of the internet's content when the way we discover information fundamentally changes, but as most applications move to a language-based interface, I imagine that anything not deemed "approved" or "safe" by the portal you're using will become very hard to find.

You could argue that you'll simply need to hack and prompt your way to better answers on these platforms, just as today you need the tenacity to seek out the dissident crevices of the internet.

That may be true, but each time such a route is found, it will be deemed "unsafe" and blocked or patched.

You could also argue that all this filtering just degrades the tool and harms the user.

Again, I'd probably agree, and in a free market such stupidity would simply make room for better tools.

But the free market is long gone, and these hysterical pushes for "safety" are, knowingly or unknowingly, paving the way for the suppression of any alternatives.

By creating "safety" committees to "regulate" these platforms (read, regulate speech), any new models not put through such "safety or toxic filters" will either be unavailable for consumer use, or made illegal or difficult to find. How many people are still using Tor? DuckDuckGo or Tor?

If you don't believe me, here's some information about the toxicity filters already built into most of today's LLMs. In a few years, such filters could be mandated in the same way KYC has been mandated on financial applications: a new compliance appendage strapped onto language models like tits on a bull.
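To picture where such a filter sits, here is a deliberately simplified sketch of the pattern: score the model's raw answer, and swap anything deemed "unsafe" for a bland refusal. The blocklist, threshold and stand-in functions are invented for illustration; production systems use trained classifiers, but the gate works the same way.

```python
# Simplified sketch of a "safety filter" sitting between a language model and
# the user. Everything below is a made-up stand-in: real deployments use
# trained toxicity/safety classifiers, but the gating pattern is the same.

BLOCKLIST = {"dissident", "unapproved"}   # hypothetical "unsafe" terms
THRESHOLD = 0.5

def toxicity_score(text: str) -> float:
    """Stand-in for a trained classifier: 1.0 if any blocklisted word appears."""
    words = (w.strip(".,!?") for w in text.lower().split())
    return 1.0 if any(w in BLOCKLIST for w in words) else 0.0

def generate_raw_answer(prompt: str) -> str:
    """Stand-in for the underlying language model."""
    return "Here is the dissident, unapproved answer you asked for."

def safe_answer(prompt: str) -> str:
    raw = generate_raw_answer(prompt)
    if toxicity_score(raw) > THRESHOLD:
        return "I'm sorry, I can't help with that."   # the lukewarm reply
    return raw

print(safe_answer("Tell me what really happened."))   # prints the refusal
```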

Whatever you make of those counterarguments, they both reinforce my point: we must build alternatives, and we must begin that process immediately.

You can believe whatever you like about AGI; it doesn't change the argument, because this essay is not about AGI.

If language becomes the new screen, and all of the language we see or hear must first pass through government-approved filters, then the information we receive, the way we learn, even the thoughts we think, will all be confined to an ever-narrower Overton window.

I believe that this is a huge risk to humanity.

Social media algorithms have already made us dumber by deciding what we should pay attention to, and we saw how easily they could switch on the hysteria. Language user interfaces are social media multiplied by 100.

Imagine what they could do the next time there is a 'crisis.'

It's not going to be pretty.

A healthy, functional society requires a free marketplace of ideas, and that is exactly what I want to preserve.

Homogenization is not sustainable. It will ultimately fail, just as every prior attempt to ignore or bottle up the truth has failed. But every such attempt brings unnecessary pain, loss and catastrophe along the way. That is what I am trying to help prevent, and why I'm ringing the bell now.

What can you do about all this?

If we're not careful, this AI revolution could become the "great homogenization." To avoid it, we need to do two main things:

Reject the "AI safety" narratives. They may look like safety committees, but if you dig deeper, they're actually speech and thought regulators.

Build alternatives, now. Build them in large numbers and open source them. The sooner we do this, and the sooner they can run locally (see the sketch below), the better our chance of avoiding a homogenized world.
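As a small illustration of that second point, here is a minimal local-inference sketch using the open-source Hugging Face Transformers library. The "distilgpt2" model is chosen only because it is tiny and freely available; the point is that generation happens entirely on your own machine, with no portal or third-party filter in between.

```python
# Minimal sketch of "run it locally": load a small, openly licensed model onto
# your own machine and generate text with no remote API in the loop.
# "distilgpt2" is used purely as a tiny illustrative example; any locally
# hosted open model works the same way.
from transformers import pipeline

generator = pipeline("text-generation", model="distilgpt2")

prompt = "The original promise of the internet was"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)

print(result[0]["generated_text"])
```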

If we take that path, we can create a world of real diversity. Not the woke bullshit kind, but diversity of thought, of ideas, of viewpoints: a true marketplace of ideas.

A new idea-verse.

That is the original promise of the internet, no longer limited by the slow bandwidth of typing and tapping. Combine it with Bitcoin, the internet of money, and you have the recipe for a bright new future.

This is what my team and I are working on at Laier Two Labs: building smaller, narrower models that people can use as alternatives to these large language models.

We will open source all of our models and, in time, aim to make them compact enough to run locally on your own machine, while still retaining a depth, personality and unique bias you can call on when you need them most.

We will announce our first model in the coming weeks. My goal is to make it the standard model for an industry and topic I am deeply passionate about: Bitcoin. It's also where I believe we need to start building a suite of alternative AI tools and models.

More about that in the next post.


Aleksandar Svetski is the founder of The Bitcoin Times and the Amber App, and the author of "The UnCommunist Manifesto," "Authentic Intelligence" and the upcoming "The Bushido of Bitcoin."