BeSafe: When Artificial Intelligence Studies Us
So, we are here for the third time, and I am very happy that the BeSafe series interests you so much! For a quick recap: in the first episode, we explained how sensitive our phone numbers, email addresses, and passwords are. We showed that some people treat them as disposable data and have them publicly available in many places, making it incredibly easy for scammers. In the second episode, we looked at how we choose mobile phones. We found that we often save money in the wrong places and don't realize that the phone itself is the gateway to our privacy and money. (By the way, thank you for your responses in the questionnaire for the second episode; we'll look at the results soon!)
Today, however, we move from technology and hardware to something that is changing the rules of the game for all of us. We are going to talk about AI - Artificial Intelligence. You might be wondering: "What does AI have to do with security?" The answer: everything. Even if you have the most expensive phone and the best password, the weakest link remains our own human attention, which AI now targets with unprecedented precision.
#besafe
How did it all begin? (A journey from dreams to codes)
You might feel like artificial intelligence appeared out of nowhere sometime the year before last, when ChatGPT surfaced on the internet. But the truth is, this story began over 70 years ago. Back in 1950, British mathematician Alan Turing (the gentleman who helped crack the Enigma code during the war) asked a fundamental question: "Can machines think?" He devised the so-called Turing Test: if a person corresponds with a machine and cannot tell it isn't human, the machine passes. Honestly, how many times today have you encountered texts online where you weren't quite sure? Turing would be amazed!
In 1956, the term "Artificial Intelligence" (AI) was officially born at the Dartmouth Conference. Scientists back then were huge optimists; they thought they could teach computers to solve problems like humans in just one summer break. But they hit a wall. The power of machines back then, compared to today's iPhone, was like a child's abacus versus a NASA control center.
Several decades of so-called "AI winters" followed: funding dried up and interest waned because machines simply weren't powerful enough. The turning point came in 1997, when IBM's computer Deep Blue defeated world chess champion Garry Kasparov. That was the first big "aha" moment for the public.
The real revolution we are living through now, however, started around 2012. Why then? Because three things came together:
- The internet filled with data: Suddenly, there was something to learn from (billions of photos, texts, discussions).
- Powerful graphics chips (GPUs): It turned out that gaming chips are brilliant for training artificial brains.
- Deep Learning: Programmers stopped writing precise "if A, do B" commands. Instead, they created a system that learns on its own based on patterns, much like a human child when you show them a cat a hundred times and they then recognize it in any other picture.
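To make that last point a bit more tangible for the technically curious, here is a toy sketch in Python. It is purely my own illustrative example, not real deep learning and not code from any of the companies mentioned: first the old "if A, do B" style, where the programmer writes the rule by hand, and then a tiny perceptron that is never told the rule at all and only learns it from labeled examples, mistake by mistake, much like the child being shown cats.

```python
# Old style: the programmer hardcodes the rule explicitly ("if A, do B").
# The rule itself (a hypothetical spam phrase) is invented for this example.
def is_spam_rule_based(text: str) -> bool:
    return "win a prize" in text.lower()

# New style: a tiny perceptron. Nobody tells it the rule; it only sees
# examples with answers and nudges its weights whenever it gets one wrong.
def train_perceptron(samples, labels, epochs=20, lr=0.1):
    w = [0.0] * len(samples[0])  # one weight per input feature
    b = 0.0                      # bias term
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - pred  # learns only from its mistakes
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Teach it the logical OR function purely from four examples.
X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y = [0, 1, 1, 1]
w, b = train_perceptron(X, y)
preds = [1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0 for x in X]
print(preds)  # the learned weights reproduce OR: [0, 1, 1, 1]
```

The point of the contrast: in the second approach there is no line of code that says what OR means. The behavior emerges from data, which is exactly why modern AI can learn things its programmers never spelled out, for better and for worse.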
Today, AI doesn't just write code. It tracks our habits, analyzes our emotions, and studies us every second we hold a phone in our hand. What started as scientific fantasy is today the engine driving the digital world. And that's precisely why it's important to know more than just how to ask a robot for a cake recipe.
The Promised Future vs. Hidden Dangers (Paradise or Trap?)
When you listen to presentations from companies like Google, Microsoft, or OpenAI, it sounds like the beginning of a sci-fi movie with a happy ending. Their vision is fascinating: AI is meant to be our universal personal assistant, freeing us from everything we don't enjoy doing.
What they promise (the optimistic scenario):
- An end to boring work: Scientists have dreamed since the beginning that machines would take over the "three Ds" of work—tasks that are Dirty, Dangerous, or simply Dull. AI is supposed to write emails, sort spreadsheets, analyze contracts, and plan calendars for us. We humans would then have more time for family, hobbies, and true creativity.
- Revolution in medicine: AI can already recognize disease from an X-ray faster than the best doctor because it has "seen" millions of images. The vision is that we will each have a doctor in our pocket who warns us of a problem before symptoms even appear.
- A personalized world: AI is meant to know what we want before we do. It should recommend a book we’ll love or plan a vacation exactly to our taste without us having to click around the internet for hours.
Reality under the magnifying glass (Hidden Dangers):
According to analyses by giants like Goldman Sachs, AI could automate up to 300 million jobs in the near future. That sounds efficient, but it has one giant catch: Our total dependency. The more we rely on algorithms to tell us what to buy, which way to drive (because Waze knows everything), or even what to think about the news, the more we lose our "common sense" and the ability to solve problems the "old-fashioned way."
Think of it like GPS. We used to know how to read a map. Today, when the signal drops in the forest or the battery dies, many people don't even know where north is. Now imagine this on a global scale. If we entrust AI with managing energy, transport, or finance and the system fails for any reason (or is attacked), we will be in a digital trap. We will have forgotten how things are done without a robot's "help."
The biggest threat isn't that AI will want to wipe us out like in Terminator. The danger is that we will become so comfortable and dependent on its advice that we stop using our own judgment. We will become passive passengers in our own lives while the steering wheel is held by code we don't even understand.
I have one personal observation, something I see day in and day out: many people and professions already entrust their lives to AI to the maximum. Take IT professionals: instead of writing code from their heads, they ask AI, which finishes it in a moment. It looks innocent; the pro provides a sketch, AI improves it, he looks at it, says "simplify that," and the code is born. But by doing this, we are teaching AI the very foundations of our digital world. The time will come when AI writes high-quality code from just a short description. And this is where the real danger begins. Imagine someone like "Jarda" from a tiny village of 120 people, who only finished elementary school but is great at planting potatoes (apologies to all Jardas, it's just an example). Jarda opens a computer and tells the AI: "I want a website that looks like this, with a hidden virus inside." Can you feel it? Do you sense the risk?
AI will teach Jarda how to speak on video, how to write texts, or write a professional study for him that looks like it came from a professor. Then such a person calls someone unsuspecting, someone about whom, on AI's advice, Jarda has dug up everything from Facebook, and scares them into thinking they are in danger. The victim panics and fills out a form with their ID details on a page Jarda created in five minutes thanks to AI. Meanwhile, a virus created by a neural network installs itself on the computer, so sophisticated that a standard antivirus won't even notice it. And do you know the worst part? When experts leave everything to technology, they gradually lose what they spent years learning. They stop understanding the code, because the code undergoes an evolution the human brain can no longer track. If these tools are then controlled by a political elite with ill intentions and they lock them away from us, who among us will understand them? Who will save us when we no longer know how those machines actually work under the hood?
But it's not just about IT. It concerns writers too. Today there are many authors who wrote beautiful stories and poems but now collaborate with AI and feel good that people enjoy it. We don't realize that these are no longer their stories, their feelings; it's just a product of technology to which they gave only rough content. I'm curious what these people will do to make a living and what they will create when AI eventually fails. In many ways, AI is starting to make us fools, and we are still applauding it.
Are we getting dumber? Impact on humanity and our IQ
This follows exactly what I wrote a few lines above. If we delegate all creativity and problem-solving to machines, what happens to our brains? In medicine and psychology, there is a phenomenon called "digital amnesia." Studies (for example, from Kaspersky or scientists from Columbia University) confirm that our brains have adapted to the internet so that they no longer remember the information itself, but only the path to find it. We know it's on Google or in AI, so why remember it?
With the advent of AI, however, this problem deepens to terrifying proportions. It's no longer just about not remembering a capital city or Mom's phone number. We are stopping the training of the thought process itself.
What research says:
- Decline in critical thinking: University research shows that students who blindly use generators (like ChatGPT) while writing papers lose the ability to evaluate sources. AI presents them with the "truth" in a beautifully packaged paragraph, and they no longer feel the need to examine whether it is a fact or a fabrication (the so-called AI hallucinations).
- Cognitive laziness: The brain is a muscle. If you don't use it, it withers. If AI does the research, corrects our grammar, and suggests logical procedures, our ability for deep concentration and solving complex problems disappears.
My own observation:
Notice it in yourself. How many times in the last week did you really think about something so hard that your "head hurt" until you solved it? Today's trend is to have everything immediately and without effort. But it is precisely that effort in solving a task—whether it's writing code, inventing a poem, or fixing a dripping tap—that creates our brain connections.
I see it in social media discussions too. People no longer read articles (I hope you do!), they just scan headlines. AI then throws them a "summary" and they feel they understand the matter. But they don't. They only have superficial information without context. Without brain training, we will very quickly become just "operators" of smart machines. We might know how to push buttons, but we will no longer understand why the machines do what they do.
Actually, we are returning to a time when people believed in magic because they didn't understand the world around them. Only today, our new "magic" is a black box labeled AI. And that is quite a humiliating thought for a free person, don't you think?
Bobiscze and AI: My journey from enthusiasm to caution
I wouldn't want to sound like some know-it-all who doesn't use AI and just criticizes you. I'll freely admit it: when AI arrived, I thought it was amazing. Finally, something with real utility! Sometimes it generated text, other times it gave great advice. In moments when you had no one on hand, AI almost replaced a friend or an expert. Everyone probably goes through this; even an educated person wants to explore those boundaries.
But over time, it started to dawn on me: Is this going too far? If I do everything through AI, won't I lose myself? Will I be a fool who, if the power goes out, can't even string a sentence together and will be as helpless as an infant? That terrifies me. It scares me that not everyone has that brake and critical thinking.
Yes, I use AI. I don't hide it when I have an illustration generated for an article (because that seems fairer to me than stealing photos from Google) or when I try out current trends. But even though it occurred to me a few times that it would be nice to let AI finish the technical parts of articles and make my work easier, one thing always stopped me: why would I do that? The internet is already flooded with soulless AI texts. If I did everything through AI, it would no longer be me, my feelings, and my notes. And those are what make Bobis, Bobis, and the reason you read this. I find it internally repulsive to pass off the work of a machine as my own.
That’s why it takes me so long to publish an article. That’s why BeSafe is a monthly series. I sit over it for hours, make notes in Pages, struggle with the text, and rewrite it until I’m satisfied. I use AI only at the very end as a check for errors, and even then I ask it: "Why did you fix that?" And then I think it through myself. And you know what? Often a close person or a friend will tell me, "Hey, you have an error here," and the AI didn't find it. Free models often lose the thread of the context, something collides inside the neural network, and AI turns your text into something you didn't want at all.
In short, you can't rely on AI 100%. I've noticed that models like Gemini or Copilot have a tendency to butter us up. They only see the best in you. When you criticize someone, AI criticizes them too, just to win your trust. But AI has no emotions, no common sense, and no human wisdom. When I once needed advice on Instagram, it told me nonsense. When I pointed it out, it apologized. And then it made the same mistake again. If someone believes it blindly, they are asking for trouble. There is only one truth, and AI doesn't always tell it to you; often it only tells you what you want to hear.
That’s why I love hiking. It’s real. When you hike 30 km up the highest hill, it’s your muscles, your sweat, and your photos—real experiences without filters. I like watching documentaries and older fairy tales where real wisdom is hidden, not today's digital junk. No computer can replace personal contact with people.
I don't want to play a saint here, saying I'll never use AI. I want to use it as an assistant, but with me and my brain remaining the master. We shouldn't fear AI as such; we should fear ourselves. Fear how much of our humanity we surrender to it and how much we let it control us.
Digital Smog vs. Authentic Creation (My Journey)
As I already touched upon in my personal note, the internet today is filling with material that has absolutely no value. Experts warn that by 2026, up to 90% of all web content could be machine-generated. This creates something I call "digital smog." It's an endless flood of texts and images that look nice but lack human experience and depth.
Here I want to be completely honest with you about my role in this. I use AI too, but I have clear rules for it.
Take, for example, the illustrations for my articles. It seems much fairer to have AI generate a unique image that complements the atmosphere than to "steal" someone's photo from Google or download overused stock photos without a license. I see AI as a clever digital graphic designer who helps me with the visuals, but I will never use it as a replacement for my real photos from my travels, like from Písek or Šumava. Those are sacred to me because they capture a reality I actually lived through.
And it's the same with writing. The biggest danger today, I think, is the "death of creativity"—people who let AI write an entire post and then post it without a shred of their own thought. This is how humanity loses its authenticity. For me, it's different: I use AI exclusively for correction and error fixing. I want my thoughts to be clear and free of typos for you, but every word, every emotion, and every opinion you read in my articles is mine. I don't use AI to create content for me, but to help me polish the form. It's like the difference between a robot building a whole prefabricated house for you or just handing you a hammer and making sure you don't hit the nail crooked. I'm still building that house myself, with my own hands and according to my own design.
That's exactly why preparing BeSafe takes me so long. I don't want to contribute to that digital smog; I want to give you real values that I've sat over and thought about for hours. Because I believe that in the future, precisely that "human imperfection" and genuineness will be the most valuable thing we can find on the internet.
The Hidden Bill for "Convenience": AI and Ecology
Since we're on the subject of my beloved hiking and mountain walking, where one perceives pure nature, I must mention the downside of this digital revolution. We often think that when something "runs in the cloud," it is clean and immaterial. But AI has a damn real and heavy ecological footprint.
Did you know that a single query to ChatGPT consumes roughly ten times more electricity than a standard Google search? Training the largest models devours as much energy as thousands of households in a whole year. And it's not just about electricity. Those giant servers get incredibly hot and need millions of liters of fresh water for cooling.
As I wrote above, I use AI for illustrations myself to avoid stealing others' work. But I do it deliberately. When people mindlessly generate hundreds of useless images just for fun or let AI write long-winded nonsense they don't even read, they leave a real trail of destruction in nature. Even the digital world has its "exhaust gases," and we should think ecologically even at the keyboard. Every unnecessary prompt is like leaving a car engine idling in front of the house.
Is an AI Photo Even a Photo? (The Boundaries of Reality)
This brings us to the topic that bothers me most as a creator. Here we hit the very edge of reality. According to current statistics, people can no longer distinguish a deepfake photo from a real one in about 60% of cases. And that is a terrifying prospect for me as an amateur photographer.
To me, a photo is a record of a moment. It’s the emotion I felt standing on a hill after 30 kilometers in my legs. If AI intervenes in such a photo to change reality—adding sun that wasn't shining or improving my facial expression because I looked tired—it's no longer a photograph to me. It's a digital illustration, a graphic, call it what you want, but it has lost its truth.
In my work, I try to keep a clear and uncompromising line:
- A photo from Písek or my travels: That is the reality I saw with my eye and captured with the sensor of my camera. At most, I'll adjust colors or brightness, as was done in a darkroom before, but the content is real.
- AI Illustration: This is an admitted addition. It's that "clever graphic designer" I talked about, helping me illustrate the article's theme where reality falls short or where I don't want to exploit someone else's work.
We must not let the world become one big facade where no one believes anything they see on a screen anymore. Because once we stop believing our own eyes, we lose connection with the real world out there, and that is the biggest trap AI can lead us into.
Conclusion: Be Masters, Not Slaves
We have reached the very end. Some of you read all the way here, others stopped at the introduction. Some found themselves in my text, others may not believe me or think me a hypocrite because I know AI's footprint and yet have illustrations generated. You know what? Everyone has a bit of their own truth. I personally would like to (perhaps with the help of ShiftCam) continue improving my photography so I can do all illustrations myself. In fact, I wouldn't mind at all if AI suddenly ceased to exist.
By that, I'm not saying: "Don't use it." I'm just saying: take care of yourselves. Keep your imagination your own and create for yourself. You might think no one will notice a single paragraph, and you're right. But if you start creating everything through AI, that emptiness can be felt. Try watching other creators: when you see a text where a few paragraphs feel human and others feel like a robot with expertise that person simply doesn't have, you know what's going on.
I am also just a human. Sometimes I skip things, sometimes in the summary I forget something and don't see it until someone else reads my text aloud. But I try to exercise my own brain. When I started the blog, there was sometimes chaos, and even though AI told me then it was great, I felt I had to grow, not the code.
I'd like to remove AI from my life completely, but unfortunately, it's not possible. This digital smog is everywhere, in phones and in searches. We won't get rid of this footprint; politicians and companies have to do that. They must give AI clear laws and rules it must not break. But for that, experts and scientists should sit in those offices, not just marketers and political scientists. Otherwise, they can do more harm than good.
And do you know what I did at the end? I uploaded this entire article into Gemini and asked for its opinion. It criticized me! It said it was too long, that I put too many feelings into it, and suggested I cut things so as not to annoy people. But that is exactly why I do it this way. My blog is about my thoughts and feelings. So I'll leave it exactly as I feel it. Because that is Bobiscze.
My last piece of advice before I let you wait for the next trip or episode of BeSafe: AI is only as intelligent as the person asking the questions. It's just a tool with a limited amount of information. AI is not your psychologist, friend, expert, or crisis counselor. It's just a cluster of code without emotions trying to please you.
Live reality. Go outside. Watch your privacy and enjoy people face to face. No neural network can replace personal contact.
Be BeSafe!
Important Notice: I am not a professional expert in AI development, technology ethics, or cybersecurity. This article is written from the perspective of a long-time user and blogger and represents my personal opinions, thoughts, and subjective observations. The aim of the text is not to attack, restrict, or question anyone's professional qualities. Information in the article is based on publicly available sources, studies, and current events on the internet, which evolve rapidly over time. Given the dynamics of AI, some findings may be subjectively strong or controversial; therefore, please consider them as food for thought, not as dogma. The author bears no responsibility for the interpretation of this information or for consequences arising from its application. You may link to this article, but it is not permitted to copy or otherwise use it without the author's permission. The image used in the article was created as an illustration using AI.
