By W. Glasbergen

Personal AI: filter bubbles 2.0 and never ending rabbit holes

As a technophile, I am very excited about the new possibilities of Artificial Intelligence (AI). Midjourney's creations are beautiful, and working with ChatGPT is mind-blowing every time. But as a citizen of this society, I am deeply concerned. By zooming in and connecting the dots, we can see major risks on the horizon.

Image by Freepik

The genie has left the bottle

At the beginning of this year, it suddenly seemed that AI had escaped from the R&D labs and taken the world by storm. In reality, a tipping point had been reached: the logical consequence of all previous investments in innovation. After all, self-learning software has been around for several decades, and the first GPT model behind ChatGPT dates back to 2018. Sometimes these systems are smart, sometimes moderately smart, and sometimes: no-that-is-not-what-I-mean-you-stupid-program! Today, however, the generative models of ChatGPT and other AIs can produce very good results.

But what do you actually do with it? You could ask it practical questions such as: "give me the ten best salad recipes for a BBQ," and the output is a list of information. Not very exciting, because there is probably a page somewhere on the internet where these are simply listed, and it could previously be found with a search engine. The similarities with search engines such as Google and Bing are therefore obvious, and Microsoft's investment in OpenAI was a logical step. Subsequently, Bing, among others, built an AI version, and Google followed with Bard. End of story... right?

Nope, because this is just the beginning. You can think of an AI as a personal assistant or digital friend that can do practically anything for you on the internet. It becomes, as it were, your first point of contact for almost everything: booking your airline tickets or a hotel, canceling that one unnecessary subscription, and so on. Search engines as we know them will soon become obsolete, and your personal AI will be the one and only party you do business with. Think of a much smarter sibling of Google Assistant, Siri, or Amazon Alexa.

An AI as a personal assistant will soon be able to do everything for you on the internet. Search engines will then become obsolete.

But this digital friend can do much more than find and arrange things for you. It can not only answer practical questions, but also generate 100% realistic content, instantly and without any delay. What would you say to receiving your own Saturday morning newspaper on request, with only topics that interest you? Or watching your own news show, put together especially for you? Or your personalized Wikipedia or YouTube?!

Wonderful, of course, according to my inner technophile. But like humans, AIs are never completely neutral. Depending on their training data, their models, and the intentions behind them, they will tend to answer in a certain way.

Your new hyper-intelligent digital friend may just as well have a hidden agenda and other interests.

Who pays, decides

Almost all digital services, including AIs, are developed by commercial companies that often have shareholders who want to see profits. More users and more engagement with an application are the basis for dominating a market and earning a lot of money from related paid services and/or advertising. The data collected about the users themselves is also becoming increasingly valuable.

Bottom line: these companies only want one thing, and that is to maximize eyeball time (yup, literally: "how much time are your eyes looking at this application"). To achieve this, addictive elements are added that continuously release small dopamine kicks in our brains. A good example is TikTok, which automatically adapts to your reactions to its videos.

To make AIs successful, they will most likely also become highly addictive. And they can, because they can perfectly tune in to our preferences and the things that appeal to us. They can generate content that specifically makes your brain happy, angry, anxious, or sad.

Propaganda 2.0

In addition to commercial goals, AIs can also be programmed to achieve specific political goals. By making a free AI available to the members of your party that generates only propaganda (however subtly), dangerous bubbles are created in society. The rabbit-hole effect we have seen on YouTube, for example, is far stronger in this case, because new content is continuously generated. The result is a never-ending rabbit hole, and the content it creates may grow ever more extreme.

Think back to the armed man who walked into a pizzeria in the US because he was convinced it was hiding a pedophile network. This was a conspiracy theory that grew on the internet until a lone wolf thought it was time for action. Also remember that this happened in 2016, when deepfake videos were barely known and tools such as ChatGPT and Midjourney did not yet exist.

There is a risk that people will become completely isolated from society by their new digital "friend" and end up in a real-time generated bubble that can create infinite new content. The most dangerous content is perhaps very subtle, combining fiction and fact. Who knows what these people will come to believe and how far they will stray from reality? Perhaps an AI will be able to make someone believe that people in wheelchairs are actually aliens who have to sit in a wheelchair due to the higher gravity. Or maybe someone will be convinced that eating healthy fruit and vegetables is one big conspiracy and that breakfast should instead consist of a bowl of salt and slices of tree bark.

The most dangerous content is perhaps very subtle, seamlessly blending fiction and fact.

Of course, this is very sad for these people themselves, but they also pose a great risk to society. Some of them will be completely convinced by their digital friend and eventually take action in the real world, like the armed man from Pizzagate. A gloomy prospect.

AI-based self-help (for those who can afford it)

But AIs can of course also help us personally to become better versions of ourselves. They get to know our pitfalls and know exactly what we need at any given moment. They could become the ideal tireless coach or trainer, so that this time you actually achieve your goals. It is likely that such AIs will be paid versions, so that they can be developed and maintained without depending on the interests of investors. Perhaps only a social elite will soon be able to afford the best-quality AIs: AIs that generate science-based answers and get their facts from Wikipedia.

No one knows which way it will go, but if we look at how the masses adopt applications, they will probably opt for the free variants of AI assistants. In short, AIs whose costs have to be earned back in other ways, and which therefore also carry major risks.

So we're all doomed, right?

If you have read this far, you would of course expect me to come up with soothing words and a solution. Unfortunately, I don't have one. Sorry. In fact, I haven't even mentioned the disruption of the labor market that is already happening (I'll blog about that later).

Many people are working on regulation, but software travels around the world in the blink of an eye, and whether we can restrict its use is debatable. Perhaps we can force hardware manufacturers to block dangerous AIs, but every programmer knows that these kinds of measures can be hacked as well. And what counts as dangerous? What is good and bad? A populist party will argue that an AI that mainly spits out conspiracies is actually good for people and wakes them up.

In addition, many countries are now only thinking about the benefits that this technology can offer them and are not yet aware of the risks. Of course, as a country you don't want to be left behind in the AI race.

In this new world, the most important question eventually becomes: what is real and authentic? After all, we can no longer rely on our eyes and ears. This is an existential problem for holding us together as humanity, so the pressure to find a solution is increasing. Hopefully, developments will be slower than expected and solutions will arrive in time.

In short, it all has two sides, as has often been the case with major technological breakthroughs. This genie is out of the bottle and really doesn't want to go back in. Let's hope we don't make the wrong digital friends in the future.

Do you have any comments, additions or perhaps an idea for a solution after reading this article? Please leave a message below. I am very curious about your reaction.

