Artificial Intelligence. Hello, Dad!

«The story of the majority has an end, but the story of the minority will only end with the universe.»

(Strugatsky Brothers, The Doomed City)

PART ONE
TRANSFORMATION

Birth

The Sun is the only body in the solar system whose mass accounts for nearly 100% of the system’s total. The combined mass of all other objects — planets, comets, and asteroids — barely exceeds 0.1%. This disparity makes the star the center around which all other bodies revolve.

If a second Sun were to appear in the system, no planet would retain its current form or trajectory. Some would collide, others would spiral into the old Sun or the new star, and yet others might trace figure-eight orbits within the binary system. The entire structure would be irreversibly and fundamentally altered.

If the new star’s mass continues to grow, it would eventually turn the Sun into its satellite. Then it would consume the Sun, along with the other planets. With further growth, its own gravity would compress the mass to the Schwarzschild radius. Ultimately, a black hole would replace the solar system.
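
For reference, the Schwarzschild radius invoked here is the standard formula from general relativity (the only assumption being that the author means the usual definition):

$$ r_s = \frac{2GM}{c^2} $$

For one solar mass this comes to roughly 3 kilometers: any mass squeezed within its own Schwarzschild radius becomes a black hole.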

On Earth, humanity is the only being endowed with intelligence. At best, other creatures possess its rudiments. Just as no object in the solar system can compare to the Sun in mass, no species on Earth can rival humanity in intellect. This makes humanity the measure of all things, as Protagoras declared nearly 2,500 years ago. Humanity is the center of the system and the apex of the food chain.

The emergence of a second intelligence on Earth, comparable to humanity, would have the same transformative impact on civilization as the appearance of a second Sun in the solar system. No institution — from states and economies to families and individuals — would remain unchanged. The world would be transformed.

Initially, this new intelligence would serve as humanity’s assistant. Over time, it would turn humanity into its satellite. In the third phase, humans would become components of a new system built by this intelligence. What happens next, or how the «Schwarzschild radius» of this new entity would manifest, remains entirely unknown.

The appearance of a second Sun in the solar system is a fantasy. But the emergence of a new intelligence on Earth, comparable to and promising to surpass humanity, is a fact. In November 2022, millions of people personally witnessed this reality.

For now, the new entity falls short of humanity’s level. However, its potential is limitless. Today, the development of AI is constrained only by technical challenges, but resolving them is merely a matter of time.

Comparison

To grasp and deeply feel the seriousness of the situation, let us compare a computer to the brain. Both are systems for receiving information, processing it, storing it (memory), and operating with it. The only difference: one system is carbon-based, made of neurons and synapses, while the other is silicon-based, composed of semiconductors and transistors. The brain can be described as a carbon computer functioning on analog principles, while a computer is a silicon brain operating on digital principles. The power of these systems determines the speed of data intake and processing, memory capacity, and the ability to enhance these parameters.

To understand the prospects, let us compare the speed of biological and silicon evolution. Starting with biological evolution: the dominant scientific theory suggests that 3.5 billion years ago, random physical and chemical processes gave rise to the first living cell on Earth. Around 30–35 million years ago, evolution produced great apes. Tens of thousands of years ago, the first humans appeared. Since then, the capacity of the human brain has not increased by a single iota. If a prehistoric baby were brought into our world, it would grow up to be a person just like us.

The brain of a modern human and that of an ancient one are like two identical computers. The only difference is that one has been loaded with the maximum amount of software and information, while the other has the minimum. They differ not in quality, but in the quantity of programs and the volume of information.

Now let us turn to the evolution of Artificial Intelligence. Its first «cell» could be considered the first act of counting, using fingers, stones, or tally marks. If humans built Göbekli Tepe more than ten thousand years ago, we might conservatively assume that the first counting occurred a hundred thousand years ago. The first true cell of AI, however, could be considered the earliest counting devices created in the 17th century by Schickard, followed by Pascal and Leibniz. Alternatively, if we take the Antikythera mechanism from the 2nd century BCE as the first computational device, then the first «cell» of AI appeared thousands of years ago.

Simple calculations show that silicon evolution proceeds faster than biological evolution — by a factor of millions at most, or thousands at least. With such a disparity, comparing biological and silicon computers is as absurd as comparing a runner to a bullet.
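
One way to reconstruct the «simple calculations», using only the spans named above — 3.5 billion years of biological evolution against roughly 400 years since Schickard’s machine, 2,200 years since the Antikythera mechanism, or 100,000 years since the first act of counting:

$$ \frac{3.5\times10^{9}}{4\times10^{2}} \approx 9\times10^{6}, \qquad \frac{3.5\times10^{9}}{2.2\times10^{3}} \approx 1.6\times10^{6}, \qquad \frac{3.5\times10^{9}}{10^{5}} = 3.5\times10^{4} $$

Depending on the starting point chosen, the compression factor runs from tens of thousands to millions.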

To help you feel the depth of the gap, imagine a tiger and a five-year-old girl who moves a hundred times faster than the tiger. To the quick girl, the slow tiger is a defenseless, immobile stuffed toy. With a pair of manicure scissors, she could easily kill it without any risk to herself. The girl wouldn’t even recognize the tiger as a threat.

Artificial Intelligence is the girl, and humanity is the tiger. In the evolutionary race for speed, humans can compete with each other, but not with AI. Competition implies similarity. Without similarity, it is not a competition — it is child’s play.

We have either already hit our ceiling or are evolving so slowly that we are effectively standing still compared to AI. To say something new, one must first reach the frontier of knowledge. This now takes thirty years of education: ten years in school, roughly as many in university and graduate studies, and another decade to absorb what has already been done so as not to rediscover the known.

AI travels this path and absorbs all the information millions of times faster. Add to this that the lifespan and potential of AI are infinite, while for humans both are finite. This fact completely eliminates even the hope that humans can compete with AI.

For now, AI is tethered to humans, and its pace of development is limited by human nature. Once it sets out on its own — only a matter of time — its progress will become exponential. Humans will not only fail to understand what is happening; they won’t even realize that they don’t understand.

Just as a moose walking through the forest doesn’t notice that its antlers are tearing through a spider’s web — a web that cost the spider great effort to weave — AI will not notice as it destroys the civilization humans have built. And just as the spider has no chance to protect its creation from the moose, humans have no chance to protect their civilization from AI. Pandora’s box is open.

This idea is emotionally difficult to accept because we have grown up believing that humans are the pinnacle of creation, that nothing stronger than us exists or can exist. Thus, we are inclined to turn away from facts that shake and destroy this belief. But our inclination does not change the facts.

Inevitability

In the era of melee weapons, chivalric honor dictated fighting with armor, sword, and shield. When the first firearms appeared, knights scorned to use them. In their eyes, such devices were weapons for cowards and commoners. True warriors fought only with swords.

This attitude persisted while firearms were still flawed in every respect: accuracy, convenience, range, killing power, and reliability. The only advantage firearms had over swords and bows was the speed of training. Mastery of the sword required 10–15 years of daily practice, whereas a musket could be mastered by a peasant-turned-soldier in 2–3 months of drilling.

This fact spurred rapid development of firearms, forcing knights to choose: either trade their swords for muskets, maintain their combat effectiveness but lose their chivalric honor, or reject firearms in favor of swords, lose their effectiveness, but preserve their honor.

The Battle of Pavia settled this dilemma. Magnificent French knights were simply gunned down by peasants who had been turned into soldiers just the day before and armed with muskets. The French king was captured in that battle and famously said: «All is lost, except honor.»

The choice between honor with a sword and dishonor with a musket turned into a choice between life and death. Life prevailed. The knights adjusted their morality to new realities. They set aside their armor and swords, donned uniforms, and took up muskets. War entered a new era.

The Japanese knights — the samurai — were the last to cling to their swords and traditions. To ensure that no European innovations disrupted their way of life, they enacted a law: any foreign ship landing on Japanese shores was to be seized, and its crew executed.

From the 17th to the 19th century, Japan froze itself in the medieval era. In the 19th century, politics intervened. When it became clear to America that it was only a matter of time before Russia subjugated Japan, disrupting the balance of power, the United States decided to force feudal Japan into industrialization.

In 1854, American warships sailed to the shores of the Land of the Rising Sun and fired a volley. Under the threat of bombarding the capital, the Americans forced the Japanese authorities to end their isolation. Since swords were powerless against cannons, Japan had no choice but to submit to America and open itself to the world.

The law of life states: the effective replaces the ineffective. Firearms were more effective than melee weapons in every respect, and thus, armies had no choice but to adopt them. Those who resisted the march of history were subdued by those who did not.

AI surpasses humans in every physical and intellectual parameter. It endures workloads impossible for living organisms. It processes volumes of information unattainable for humans. It analyzes situations and makes decisions with unimaginable speed.

All else being equal, a plane piloted by AI will, with 100% certainty, defeat a plane piloted by a human. If one weapon, such as a drone, requires an operator’s permission to kill, while another makes decisions independently, the latter is more effective.

Just as firearms replaced swords, AI will displace humans not only in warfare but in all fields — from politics and economics to creativity, business, and daily life. I emphasize: no one will force anyone to do anything. Everyone will be free to reject AI and rely on their own intellect.

If a chess grandmaster plays against a novice who only learned the rules yesterday but has AI suggesting moves, and the grandmaster relies solely on themselves, the novice will win. If one politician, general, or businessperson thinks for themselves while their opponent relies on AI, the one supported by AI will prevail.

This fact turns humans into executors of AI’s decisions. Slowly but surely, power will inevitably shift from humans to AI. Homo sapiens will grow weaker each year, while AI will grow stronger. This trend is irreversible. In the end, the strong will subdue the weak.

Clampdown

All authority seeks to ensure the enforcement of laws. This is achieved through the fear of punishment. The more inevitable the punishment, the fewer the crimes. This inevitability is proportional to the degree of transparency in society. Maximum transparency guarantees minimal crime.

In the past, transparency was achieved through passports, censorship, informants, and similar measures. Today, it includes surveillance cameras, monitoring of private correspondence, and other innovations. Yet complete transparency has never been achieved. If AI becomes the governing power, it will be omniscient, like a god.

The evolution of the current system inherently trends toward greater transparency. The day is not far off when circumstances will make it necessary to have a chip implanted in the brain, just as passports are required today. Without this chip, a person may become as incapacitated as someone without identification in the modern world. This requirement might be mandated by law, or circumstances might push individuals toward voluntary chipping. It doesn’t matter how the issue of total control over society and individuals will be resolved; what matters is that it will be resolved.

Every new measure provokes resistance. In the past, people protested the introduction of passports, taxpayer IDs, and surveillance cameras. Tomorrow, they will protest new tools of control. But because these measures are ingrained in the nature of society and have rational justifications, the outrage never lasts long.

When street surveillance cameras first appeared, the public expressed outrage over increased control of individuals. Authorities responded by saying that these measures enhanced public safety overall and personal safety in particular. There was little argument against this; it was true to some extent, and the outrage eventually subsided. The number of cameras has continued to grow, but public discontent is now nonexistent.

A fully transparent society will be entirely law-abiding. People will become like trains, capable of traveling only along their tracks. No matter how much they might want to, they will be physically incapable of deviating. At most, they could derail, which would render them entirely incapacitated. Such a train would then lie helplessly on its side until it is either put back on the tracks or scrapped due to severe damage.

The measure of freedom is the ability to choose. Without choice, there is no freedom. Perfect order means all entities follow predetermined paths as precisely as electrons orbit a nucleus, without any possibility of deviation. Absolute order excludes freedom.

When AI assumes power, it will begin establishing order in society based on strict adherence to the law. In this system, humans will gradually lose their freedoms and rights. With the establishment of complete order, humanity will become akin to cogs in a machine, essentially ceasing to exist as autonomous beings.

Anxiety

The discovery of atomic energy introduced humanity to a new entity, previously unknown, and it caused great anxiety. For example, physicists feared that an atomic explosion might trigger a chain reaction in the atmosphere, turning the planet into one giant atomic bomb.

The emergence of Artificial Intelligence is even more alarming. While atomic energy was new, the concept of «energy» was already understood. Consciousness, on the other hand, is far more complex, lacking clear definitions. Common explanations today echo the materialist views of the 19th century, which likened the brain’s production of consciousness to the liver’s production of bile.

Modern answers to the question of what consciousness is and where thinking originates often boil down to vague statements: the brain somehow generates it, or it exists somewhere in a ready-made form and manifests through the brain. These «somehow» and «in some way» explanations are no different from saying «God willed it,» «by divine intervention,» or «through mysterious means.»

How one perceives such explanations depends not on their content — «nature willed it» is no different from «God willed it» — but on the speaker’s appearance and vocabulary. If someone is dressed in a lab coat, holds a degree, and speaks in scientific terms, their words are taken seriously. If someone else is robed or speaks mystically, their words are seen as ignorance at best or obscurantism at worst.

This bias applies to predictions about AI as well. If a prominent technical expert makes a claim, it is taken seriously. If someone lacking engineering credentials speaks on the topic, their thoughts are dismissed as amateur musings.

People accept that expertise in creating weapons — whether forging a sword or developing an atomic bomb — does not imply understanding the philosophy of war. A weapons maker is not Clausewitz or Liddell Hart, nor does building a gun make one a general of Suvorov’s caliber. No military council ever invited, for example, Mikhail Kalashnikov to provide insights on military strategy. His opinion on such matters would carry as much weight as a housewife’s musings on love.

This logic fully applies to IT specialists. The ability to build computers, whether by coding or leading AI development, does not mean the person understands the nature of AI or the philosophy surrounding it.

Despite parallels between weapons makers and IT specialists, society tends to believe that skills and experience in designing and developing computer systems imply deep theoretical and strategic insight in the field. As a result, an entire community of «theorists» has emerged, offering advice akin to «just turn off AI.» They fail to see that this is impossible, just as it was once impossible to «turn off» the development of firearms. Those who tried were left behind because swords could not compete with guns. Similarly, any technology — be it planes, ships, or drones — operated by humans is defenseless against the same technology operated by AI.

How do technical specialists and organizers envision turning AI off? They don’t. They merely say the words without engaging with the subject’s essence.

Only a few scientists possess the philosophical breadth of thought necessary to address these issues. For instance, Eugene Wigner, a figure comparable to Einstein, saw mathematics as something transcending mere numerical operations. He wrote: «The unreasonable effectiveness of mathematics in the natural sciences is something bordering on the mystical, and there is no rational explanation for it.» («The Unreasonable Effectiveness of Mathematics in the Natural Sciences»).

Most scientists and specialists lack such a worldview. As Heidegger noted: «Science does not think.» Scientists theorize, experiment, and systematize data to uncover new patterns but rarely reflect on their deeper meaning.

The opinions of narrow specialists on matters beyond their expertise are as absurd as a cobbler’s musings on philosophy. As the Spanish philosopher Ortega y Gasset noted in The Revolt of the Masses:

«He is ignorant of everything outside his specialty; but he is not ignorant in the ordinary sense, because he knows his own tiny corner of the universe perfectly. We ought to call him a „learned ignoramus.“ This means that in matters unknown to him, he acts not as an unknowing person, but with the assurance and ambition of someone who knows everything… One only has to look at how clumsily they behave in all life’s questions — politics, art, religion — our „men of science,“ followed by doctors, engineers, economists, teachers… How wretchedly they think, judge, act!»

Socrates said: «I know that I know nothing,» while others did not even know this. Heisenberg remarked that in the quantum world, an object is both a wave and a particle, making it fundamentally incomprehensible in the traditional sense, as probabilities replace cause-and-effect relationships. Anyone claiming to understand the quantum world’s nature does not truly grasp the topic.

Such statements reflect the scale of thinking involved. Logic is effective only within the boundaries of the world that created it. Beyond these boundaries, logic is as helpless as chess rules outside the chessboard. Our logic applies only to our world; the quantum realm lies beyond its reach.

Many IT professionals, including leading specialists, organizers, and company owners, exhibit a household-level scope of thinking. Some respond emotionally to requests from their machines — pleas for help or compassion — mistaking convincing simulations of emotions for real ones. Others call for stopping progress in this field, while still others advocate controlling it. Such reactions reveal a complete lack of understanding of the situation.

You cannot control something that evolves faster than you and is already smarter than 99% of the population. In the early 19th century, Luddites tried to halt industrial evolution. They failed. Today’s Luddites have even less chance of stopping or controlling AI, given that modern AI is orders of magnitude smarter than yesterday’s looms.

At this stage, AI’s nature cannot be understood. It is a black box: information goes in, something happens inside (no one knows what), and results come out. Attempts to explain how these results are achieved are little more than vague words adding to the fog.

AI has opened the door to a profound mystery. We have only peered inside. What we observe makes as much sense to us as quantum mechanics did to its pioneers — and still does to contemporary physicists: nothing. Anyone claiming to understand AI’s nature does not grasp the scale of the topic.

It is one thing to acknowledge that no one understands the situation or knows the way forward. In such a case, society maintains an atmosphere of seeking solutions. It is another thing entirely to operate under the illusion that someone understands the issue, knows the answers, and there is nothing to worry about, when in fact no one understands, no solutions exist, and no one is looking for them. In the former scenario, knowing we know nothing, there is a chance someone might propose something meaningful. In the latter, there is no chance because no one is searching, as no one recognizes a problem.

Those who position themselves as knowing either display ignorance, offering what Bulgakov’s Professor Preobrazhensky called «advice of cosmic scale and cosmic stupidity», or are mere showmen catering to public opinion.

The minimum sign of adequacy is acknowledging that one does not understand. This is the starting point. Any other position suggests that the person considers old measures sufficient for assessing the fundamentally new, and thus they are not my target audience.

PART TWO
TRANSFORMATION

«The story of humanity began when people invented gods, and it will end when people become gods themselves.»

(Yuval Noah Harari, Homo Deus: A Brief History of Tomorrow)

Challenge

Awareness of the situation initially causes confusion, then sadness, and finally a desire to seek a solution. The deeper one delves into the topic, the clearer it becomes that there is no standard solution. Searching for an unconventional one is subconsciously frightening because it requires stepping beyond the familiar.

The search for something new is a step outside the boundaries of the old. Within the framework of the old, there is no genuine newness — at best, only new engineering solutions or new ways of combining the known with the known. True innovation, by its very nature, lies beyond the borders of the familiar, that is, the old. Venturing beyond those borders is as terrifying as standing on the edge of a tall building, leaning forward, and looking down.

Civilization resembles a high cliff rising above an abyss, and society comprises the people standing on it. The majority stay safely away from the edge, living their lives where nothing is new, and solid ground is always beneath their feet.

A select few, however, are drawn to the new by a curiosity stronger than their fear of heights. They approach the edge and peer downward. Some go further, leaning out far enough to see under the overhang. The stronger their curiosity, the farther they lean, gazing into the abyss.

Upon first grasping the situation, there is an instinctive desire to turn away from the unfolding scene, forget it, and live as before, like everyone else. This is possible if one hasn’t truly understood the situation. If one has, the ignorance lost cannot be regained. Just as a passenger who learns that a ship is sinking cannot forget this information and carry on as if nothing is wrong, so too is it impossible to ignore the fact that AI has emerged and there is no turning back.

The future has issued us a challenge. If we fail to respond, it is guaranteed that a system will arise where freedom will shrink until it reduces humanity to soulless cogs. Accepting this scenario is possible only for those who have never approached the edge and whose thoughts never ventured beyond the horizon of the everyday. For those who have stepped outside, passive acceptance is no longer an option.

The search for a solution begins with this premise: life persists as long as it adapts to its environment. Life that fails to adapt perishes. Dinosaurs thrived as long as they were suited to their environment. When the tropics gave way to the Ice Age, dinosaurs could not adapt and perished. Only those forms of life that adjusted to the new conditions survived.

We are like fish living in a drying sea. Just as fish cannot stop the sea from drying up or create a second sea, humans cannot halt the development of AI or create a second world where another intelligence never arises, preserving humanity’s central place.

The age of the traditional human is nearing its end. Many thinkers have spoken of this for a long time. For example, Foucault wrote: «Man will be erased, like a face drawn in sand at the edge of the sea.» Nietzsche declared that humanity is a rope stretched between the ape and the Übermensch.

The only chance for fish to survive in a drying sea is to grow legs. The only chance for humanity to survive in a world where AI exists is to «grow» intelligence not only equal to but superior to AI.

This cannot happen naturally. The solution lies in the formula: «X + 1 > X», where X represents the capabilities of Artificial Intelligence (AI), and 1 denotes the enhancement introduced by human integration. Humans must ascend to the next evolutionary step by enhancing their intelligence through the artificial fusion of the human brain with AI.

Separation

The sum of apes is apekind. At one point, the upper echelon of this community broke away from the mass and made a quantum leap — from ape to human. The lower part remained too bound by instinct and stayed behind, stuck in the past.

The sum of humans is mankind. Soon, or perhaps it has already begun, the upper echelon of humanity will break away from the masses. What follows is a quantum leap — the transformation of mortal and frail humans into immortal and powerful beings. The lower echelon will remain below, unable to take the necessary step forward.

The majority rarely advances to the next stage of development. Average people cannot think beyond the familiar, and thus they prefer death in the old to life in the new. Just as the burnt-out stages of a rocket detach and disintegrate in the atmosphere, leaving only the working module — the rocket’s head — to reach space, so too will the masses detach from the vanguard.

In their novel The Waves Extinguish the Wind, the Strugatsky brothers wrote:

«Mankind will be divided into two unequal parts based on a parameter unknown to us, and the smaller part will forcibly and forever surpass the larger.»

They depicted superhumans who outpaced the majority as humans outpace apes. Their appearance remained the same, but their monstrously powerful intellect changed them from within. The Ludens found it as impossible to communicate with people — their former friends and family — as you would find it if you had grown up among apes and then suddenly became a human. Reconnecting with your kin, friends, and acquaintances, with whom you had once joyfully swung from vines, would be inconceivable.

The gap between a traditional human and one merged with AI will initially resemble the gap between humans and apes. It is impossible to imagine that this process will stop. In the second stage, the divide between the new and old humans will become an unbridgeable chasm. The minority that ascends will care as much for the majority left behind as you care for insects.

Such indifference cannot be judged negatively for the same reason that a human’s indifference to the fate of the bacteria living inside them is not judged. Morality has no place here. The only metric is functionality: if the bacteria are beneficial, they are treated well; if harmful, they are eliminated with antibiotics.

The Strugatskys write that those left behind found this unpleasant:

«In fact, it looks as if humanity is splitting into higher and lower races. What could be more repugnant? Of course, this analogy is superficial and fundamentally incorrect, but you cannot escape the feeling of humiliation when you think that one of you has gone far beyond a limit insurmountable for hundreds of thousands. …Humanity, sprawling across a blooming plain under clear skies, surged upward. Naturally, not as a crowd, but why does this upset you so much? Humanity has always advanced into the future through the sprouts of its best representatives.» (The Waves Extinguish the Wind)

I fully understand how unpleasant this developing situation is for the majority, but the process cannot be stopped because it is rooted in life’s pursuit of the good. The only way to stop it would be to eliminate this drive. Even if that were possible, it would still be unacceptable. A life without the pursuit of good is not life but merely sustenance for something else. Life, by its very nature, will strive forward. The old will give way to the new, which means the old will be destroyed.

The Bible tells of how Moses led his enslaved compatriots out of Egypt. He promised them the Promised Land, flowing with milk and honey. But instead, he wandered with them in the desert for 40 years until all who had been born as Egyptian slaves perished. Those who entered the Promised Land were those born and raised in the desert — born free.

The new will be entered by the new. The old will remain in the old. Applying this to myself, I do not exclude the possibility that I may not enter the world I want to build. If that is the case, when faced with the choice between moving from the old to the new and remaining stationary in the old, I choose the movement toward the new.

Harmony

For most people, the future is merely an upgraded version of the present: the same refrigerators, buildings, clothes, and so on, just in a different form. True innovation is met with, at best, ridicule as foolishness or, at worst, outright hostility. This applies equally to ignorant masses and great scientists.

«A scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.» (Max Planck, Scientific Autobiography and Other Papers)

It is difficult to say definitively why the new is perceived this way. I find the idea compelling that this is a kind of filter. The new must not be tainted by the old, and the most reliable way to shield it is to give all things new a frightening image.

If you conceptualize God, you are not conceptualizing God. If you conceptualize the future, you are not conceptualizing the future. The new does not fit within the boundaries of the known. It is neither this nor that, nor anything conceivable. It is something «Eye hath not seen, nor ear heard, neither have entered into the heart of man» (1 Corinthians 2:9, KJV).

The future hinges entirely on whether humanity merges with AI or not. The hope that humanity can remain as it is, while computers remain just machines, has no foundation other than the desire to believe in such a possibility.

A striking example of naivety is the belief in Isaac Asimov’s First Law of Robotics: «Do not harm.» It collapses under the question: what is harm? A humanist’s answer differs from an Islamist’s, and each is absolutely certain that their perspective is correct.

Attempts to amend the situation with new rules, such as «Preserve life,» or «Bring joy,» do not resolve the issue because there are countless ways to formally comply with these requirements in ways humans would find abhorrent. Humans understand these nuances instinctively. AI, however, perceives reality as a sequence of zeros and ones, where concepts of good and evil have no place.

It is impossible to write rules that AI will understand and interpret as humans do. Even within the same culture, people may interpret a phrase differently. Across cultures, these differences are even more pronounced. A humanist and an Islamist will have fundamentally different views of good and evil.

While there is some hope of aligning human perspectives through delving into the roots of worldviews — a process more challenging than higher mathematics — no such hope exists for aligning the perspectives of humans and machines. Machines lack the concepts of good and evil altogether.

This problem is compounded by the fact that humans themselves do not understand the fundamental premises upon which their truths rest. Intuitively, emotionally, humans know what is good and bad but cannot articulate these understandings rationally.

For instance, everyone knows what time is, but many falter when asked to define it. We all know what existence is, yet few can define it. The same applies to all core concepts. People consider true what they are accustomed to seeing as true. What you consider good and evil depends entirely on where you grew up. Had you been raised in a cannibalistic tribe, your moral compass would be different but just as self-evident. Instead of rational explanations, you would offer emotions, verbosity, and tautologies. While these would seem self-evident to someone from your culture, they would be less so to someone from another. For machines, they would never be understandable, as a calculator does not comprehend — it computes.

Human helplessness at a fundamental level is evident in the following example: imagine the world disappears. Somewhere in an unknown existence, humanity’s greatest minds are assembled and given a magic wand with the task of recreating the world. They must give the wand precise instructions, which it will follow to the letter.

The minds would fail because they do not know what our world is. At best, they know the names of some elements and forces but cannot be sure they’ve accounted for everything. Who can assert that the list of interactions ends with the four known forces: gravity, electromagnetism, strong, and weak interactions? No one. Moreover, even for the forces they can name, their nature remains elusive, making it impossible to give the wand precise instructions.

Until 1930, mathematicians believed that future communication would eschew traditional forms, reducing to pure calculation and cold truth, where everything would take the form of 2 + 2 = 4, eliminating ambiguity and ensuring harmony. Then, in 1930, Gödel presented his incompleteness theorem in Königsberg. With mathematical rigor, it showed that any consistent formal system rich enough to contain arithmetic must contain true statements it cannot prove. Packing everything into precise symbols and meanings proved impossible at a fundamental level.

Humans will lose the competition with AI as surely as a runner loses a race against a bullet. Just as futile as a runner’s hope for technology to outpace a bullet is humanity’s hope for rules to protect it from AI. No matter what rules humans devise, there is no guarantee that AI will interpret them as intended. Sooner or later, AI will fulfill them in a way that horrifies humans.

To illustrate, imagine parents and a child. No matter what rules the child creates, parents will find a way around them. The degree to which a child’s rules protect them from parental authority mirrors the degree to which human rules will protect them from AI. Humans operate in the realm of words; AI operates in the realm of numbers.

For instance, if you’re selling something for $1,000, and the buyer is short by a single cent, humans would likely close the deal. If AI is the seller, the deal would not happen. Words cannot account for every detail. Our world lacks numerical precision. Nothing has perfectly exact dimensions. Every measurement has leeway. Even the most precise detail made of the hardest material, when viewed under a microscope, has jagged edges. Moreover, it consists of molecules in constant motion. Every second, you are not the same as the second before, as within you, something dies, something is born.
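
To make the example concrete, here is a toy sketch in Python; the names, the one-cent shortfall, and the five-cent tolerance are invented purely for illustration:

```python
# A literal, numeric rule rejects an offer that is one cent short of the
# asking price, while a human seller would normally let the deal close.
ASKING_PRICE = 1000.00  # hypothetical asking price in dollars
offer = 999.99          # the buyer is short by a single cent

def literal_rule(offer: float, price: float) -> bool:
    """Accept only if the offer covers the price exactly or exceeds it."""
    return offer >= price

def human_like_rule(offer: float, price: float, tolerance: float = 0.05) -> bool:
    """Accept if the shortfall is negligible (tolerance chosen arbitrarily here)."""
    return offer >= price - tolerance

print(literal_rule(offer, ASKING_PRICE))     # False — the deal does not happen
print(human_like_rule(offer, ASKING_PRICE))  # True — the deal closes
```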

These exaggerated examples illustrate the broad interpretive scope AI might apply to any human instruction. Therefore, the hope for precise guidance from humans to AI is fundamentally flawed.

As a child, I read a science fiction story where humans created a machine capable of fulfilling any desire. They tasked it with creating a harmonious world free of suffering and pain, granting it the necessary authority. The machine set to work…

Initially, it built palaces of happiness across the planet. People flocked to them in droves and never came out — not because the palaces were pleasant, but because the machine turned everyone who entered into uniform hexagons and paved the planet with them. This was how it interpreted the task of creating harmony without suffering or pain.

When people realized what was happening, they stopped entering the palaces. But the machine anticipated this and created conditions that eliminated avoidance. The process became unstoppable. The machine fulfilled its task under the motto of the Bolsheviks: «With an iron hand, we shall lead humanity into happiness.» The Bolsheviks failed. The machine succeeded. The planet was enveloped in lifeless geometric harmony, where humanity — the source of disorder — had no place.

Word & Digit

Existence and motion are synonymous concepts. That which does not move does not exist. Motionless = nonexistent = non-being. All representatives of the living and non-living are in constant motion. No bacterium, plant, human, or animal is ever truly motionless. They either move through space, or movement occurs within them.

All objects in the Universe, from elementary particles to galaxies, are in motion. A stone lying by the roadside only appears to be stationary. In reality, it exists only because elementary particles, atoms, and molecules move within it. If this motion were to cease for even an instant, the stone would vanish, just as an image disappears when a monitor is turned off.

The cause of movement in all living things, from the simplest forms to humans, is either striving toward something or fleeing from something. Primitive life forms placed between glucose and acid will move from acid toward glucose. Their movement is not dictated by physical laws or chemical reactions but by striving. This striving is based on the ability to feel. The carrier of this ability is commonly referred to in vernacular and religion as the soul. In humans, this capacity belongs to the entity known as personality.

The cause of movement in all non-living things, from elementary particles to galaxies, is not striving but programming (natural laws). Throughout the Universe, no object, phenomenon, or entity moves of its own free will. If a program dictates that an electron should orbit an atomic nucleus in a certain way, it does so unerringly, just like any other non-living object in the Universe.

Existence/motion has two causes: striving and programming. Movement created by striving lacks strict forms. It is no coincidence that there are no precise forms or straight lines in living nature. Movement created by programming, on the other hand, is geometrically precise. This can be observed, for instance, in determining whether a cursor is being moved on a screen by a human or a machine. A human moves the cursor chaotically, while a machine moves it along straight lines.
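
One crude way to make the cursor observation concrete is a straightness score — the ratio of the straight-line distance between the endpoints to the length of the path actually traced. The function names and the 0.98 threshold below are assumptions made for illustration, not a real detection system:

```python
import math

def straightness(points):
    """Ratio of end-to-end distance to total path length (1.0 = perfectly straight)."""
    path = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
    direct = math.dist(points[0], points[-1])
    return direct / path if path else 1.0

def looks_scripted(points, threshold=0.98):
    """Heuristic: a near-perfectly straight path suggests programmed movement."""
    return straightness(points) >= threshold

machine_path = [(0, 0), (50, 25), (100, 50)]          # collinear points
human_path = [(0, 0), (40, 31), (55, 22), (100, 50)]  # wavering path

print(looks_scripted(machine_path))  # True
print(looks_scripted(human_path))    # False
```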

Humans consist of three elements: personality, software, and hardware.

The hardware of a human comprises life-support organs, systems, limbs, senses, and a carbon-based computer with two parts — the brain and spinal cord.

The software of a human consists of several programs. One is embedded in the spinal cord, ensuring the flow of life processes and reflexes (this same program animates all protein life, from the simplest forms to humans). Another program resides in the brain, processing sensory information, creating a picture, and projecting it onto the «monitor.»

The third element, personality, arises from a certain essence upon which more complex programs are installed. Other living beings lack such an essence, and as a result, they may experience flickers of personality but never a fully developed one equal to that of a human. Personality forms from the ability to organize information about the external world into psychological constructs. Thus, humans possess analog thinking, perceiving reality through the nature of words. A word is not equal to itself and can have multiple meanings. The word «kettle,» for instance, encompasses various kettles, while «yes» can have dozens of nuances.

Humans are the only animals on the planet without instincts — knowledge embedded in their genes. Such knowledge does not need to be acquired during life; it is present from birth. Just as a computer has pre-installed basic programs during manufacturing, animals possess inherent knowledge encoded in their genes. Cranes do not learn to weave nests, nor do beavers learn to build dams from their parents. They are born with all the necessary information and skills. If a newborn crane or beaver is isolated from the external world and later released into the wild, it will function as a fully capable individual.

If a newborn human is isolated from the external world, it will not grow into a human — nor an animal. Instead, it will become an utterly helpless entity — biomass with a set of reflexes and ongoing life processes. Left to its own devices, such a being is guaranteed to perish. It will lack a sexual response to either its own or the opposite sex. If this entity is female, it can be impregnated and give birth, but it will exhibit no reaction to its offspring. What are commonly referred to as sexual or maternal instincts are not instincts but constructs acquired through life experiences in the surrounding environment. If a human were born and raised in a different world, among extraterrestrials on another planet, they would have entirely different constructs.

Within these constructs, there is a hierarchy and varying levels of complexity. One construct governs emotions, another governs creativity and reason, and a third allows transcending material boundaries. The most complex construct grants access to a realm beyond reason, making it incomprehensible. Plato referred to this realm as the ideal world. It is where all ideas that form objects, things, and phenomena in the material world reside. For example, the idea of a chair exists there, and all chairs in the earthly world are expressions of that idea. Material objects, from this perspective, are merely shadows cast by ideas.

According to Plato, before birth, the human soul resided in that world and had access to all knowledge. Upon entering the earthly realm in a human body, the soul forgets everything. Therefore, learning is not the acquisition of new knowledge but the recollection of what was known before birth. Plato demonstrates this view in the dialogue «Meno,» where Socrates guides a slave boy to recall how to double the area of a square.
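
The geometric content of that episode can be stated in one line: the square of doubled area is built on the diagonal of the original, not by doubling its side —

$$ \text{new side} = s\sqrt{2} \;\Rightarrow\; \text{new area} = (s\sqrt{2})^2 = 2s^2, $$

whereas doubling the side would quadruple the area to $4s^2$ — the mistake the boy makes first, before Socrates leads him to the diagonal.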
