Humankind hovers at a precarious inflection point as you read this. Will AI unlock monumental progress or trigger catastrophic ruin? You get to decide.

I grew up in apartheid South Africa. As a teenager, I attended a liberal (apartheid-opposing) high school renowned for its academic excellence. One experience there had a profound impact on my understanding of the human condition.

In my final year, the student body held a mock election in parallel with a national election to teach us firsthand about democracy. During the campaign, two 16-year-old boys—one intelligent, even cunning; the other desperate for popularity—decided to represent the ultra-conservative, pro-apartheid political party. Their electioneering exploited two primary fears of the “electorate” in a deeply emotive way: the political fear of losing white power and privilege, and the social fear of being ejected from the popular mass.

The election forced the student body to confront a weighty choice: Would they fall mindlessly in with the crowd advocating for one of the biggest wrongs of human history? Or would they choose to overcome their fears, standing against the crowd and acting for justice and equality?

I’d like to say that the students chose the more difficult path, allowing morally defensible logic to outweigh individual fear—but sadly, that was not the case. In the end, those two leaders manipulated many into supporting the ideology of apartheid, even sparking hostile chants that attacked and demeaned other racial groups in unambiguous terms. They seized control of the mass psyche and soon commanded a mindless following. Those in the electorate who mounted thoughtful, principled opposition to the cancerous, fear-driven hysteria ultimately lost the election to a cunning 16-year-old mastermind who took cognitive control of the majority by co-opting the brute social influence of the “cool” guy with the very loud voice.

Sadly, the mock election perfectly mirrored the amoral ideological grip that the apartheid government held over the white South African minority—a platform that enabled them to impose brutal and unconscionable injustices on the disenfranchised majority.

This painful memory haunts me as I contemplate the future our species faces today in the emergent grip of artificial intelligence (AI). We are at a fulcrum point, and we must decide: Will AI be the next giant leap forward in human evolution? Or will it be the disaster that ruins us?

Let me explain.

The Human Brain: A Vulnerable Design

I’ve dedicated my life to understanding the human brain and its impact on human behavior. We humans are naturally gifted with the most powerful, complex supercomputer ever built: our magnificent brain. But despite this impressive design and our phenomenal natural intelligence, I now recognize a simple but overlooked flaw that leaves us vulnerable to domination and control by any superior intelligence.

Where does this vulnerability come from? It comes from the fact that we’re born useless and as infants must enslave ourselves to our brain to survive. There is no problem with this initially. In fact, it’s a very good design. But here is the glaring issue: Most of us never manage to seize back authority over this vital and vastly powerful organ. It controls us, forever.

To make matters worse, most of us are blissfully unaware of how our brain works, and so, even if we wanted to, we simply don’t know how to reclaim this control.

We must first understand the functionality of our brain to master it.

As a neurocentric coach, my predominant focus has been to liberate adults from this magnificent (but flawed) design. I guide people to reclaim authority over their brains. As their reward, they move beyond surviving to thriving. As adults, we all have that capacity to reverse this brain-authority axis, and luckily, there’s a systematic, scientific way to do so.

But this ubiquitous reality, that we are by default enslaved to our own brains, is the source of my deep fear about the future of humans in an AI world. With limited authority over our brains, we become easy pickings for the unimaginable power of rapidly advancing superintelligence that may fall into unscrupulous hands.

When Technology Hijacks Neurobiology

Perhaps the greatest social experiment of our time—the invention and adoption of social media—provides the most graphic evidence of how even today’s relatively simple machine intelligence can hijack our brains.

I’m a big fan of social media. I believe in the metaverse, a future we’re already living. It has massive advantages for the quality of human life and for social connectivity and collaboration. However, at the same time, it poses immense individual and societal risk, particularly by way of its algorithms.

A social media algorithm is a set of instructions designed to continuously deliver content to a user based on their prior online interactions (Golino, 2022). These algorithms are designed to keep feeds interesting and engaging—to constantly create pleasurable and useful experiences. They do this by presenting content to the user based on the probability that it is what they want to see.
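To make this concrete, here is a minimal, hypothetical sketch of the kind of engagement-driven ranking loop described above. Everything in it (the topic labels, the weights, the learning rate, the data structures) is invented for illustration; real platform systems are proprietary and vastly more sophisticated. Note how the final step feeds each interaction back into the user’s profile, producing the reinforcing spiral discussed below.

```python
# Illustrative toy only: an engagement-driven feed ranker,
# not any platform's actual system.

def score(item_topics, user_profile):
    """Predict engagement as a weighted match between an item's
    topics and the user's inferred interests."""
    return sum(item_topics.get(t, 0.0) * w for t, w in user_profile.items())

def rank_feed(items, user_profile):
    """Order candidate items by predicted engagement, highest first."""
    return sorted(items, key=lambda i: score(i["topics"], user_profile),
                  reverse=True)

def update_profile(user_profile, item_topics, engaged, lr=0.1):
    """The reinforcing step: each interaction nudges the profile
    toward (or away from) the topics of the item just shown."""
    for topic, weight in item_topics.items():
        delta = lr * weight if engaged else -lr * weight
        user_profile[topic] = user_profile.get(topic, 0.0) + delta

# One cycle: rank, show the top item, learn from the reaction.
profile = {"sports": 0.2, "politics": 0.1}
items = [
    {"id": 1, "topics": {"sports": 0.9}},
    {"id": 2, "topics": {"politics": 0.8}},
]
top = rank_feed(items, profile)[0]                    # sports item wins
update_profile(profile, top["topics"], engaged=True)  # user clicked
# The next ranking now favors sports content even more strongly: the spiral.
```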

There are pros and cons to the filtering effect of these algorithms (Draper & Neschke, 2023). Given the massive volume of data available today, one of the benefits of the algorithms is that they filter out content (assumed to be) not relevant for a particular user. They make our lives more efficient.

On the other hand, these algorithms can essentially control user attention in an iterative, reinforcing spiral. Not surprisingly, hijacking a social media user’s neurobiology like this can have serious negative implications, including loss of executive function and cognitive control, leading to impaired decision-making, reduced impulse control, and poor emotional regulation (Blachnio et al., 2023; Meshi et al., 2019; Montag & Markett, 2023). These shifts in brain function often induce feelings of depression, anxiety, loneliness, and low self-worth arising from biased perceptions of how one’s life compares to that of others. They also lead to detrimental behavior changes: time wasting and procrastination, excessive online spending, reduced attention to daily tasks, and neglect of responsibilities. The results range from poor work performance to inadequate attention to physical needs such as diet, exercise, and sleep. Real-life relationships can also break down when users focus predominantly on online connections and withdraw from face-to-face engagement with other people (Hilliard, 2023; Moshel et al., 2023; Weinstein, 2023).

As we contemplate our future in a world of AI, we must address the risk of exploitation by non-human intelligence that extends virtual hands deep into our human psyche to seize control of our attention.

There’s every reason to believe that AI is going to learn about our neurobiology. It’s going to understand our addiction to the pursuit of pleasure and happiness. It’s going to figure out how to draw us in and keep us captive, unable to control our own mind in the face of the promise of compelling pleasure that never ends. If our deep neurochemical algorithms can be hijacked by the relatively simple social media algorithms of today, then we face enormous risk as we enhance the reach of AI, which is far more complex, powerful, and, by design, poorly understood.

The Documented Dangers of AI

We ought to take note when Geoffrey Hinton, a pioneer of neural networks (the technology at the heart of generative AI and machine learning), speaks out about the risks of AI and even expresses regrets about his life’s work (Metz, 2023). We should also be concerned when AI researchers conclude that humans will not be able to control a superintelligent AI of the future because it is fundamentally impossible to do so (Alfonseca et al., 2021; Max Planck Society, 2021). Most importantly, though, we need to be keenly aware of the damaging consequences of poorly managed AI that are already evident today.

Dr. Fei-Fei Li is an esteemed pioneer and authority in the field of AI, currently focused on grounding AI as a human-centered practice. In her book The Worlds I See, she describes AI as “a force of nature. Something so big, so powerful, and so capricious that it could destroy as easily as it could inspire” (Li, 2023, p. 287).

One of the dark sides of AI is its potential for social manipulation of individuals and groups (Eliot, 2023; Ienca, 2023). This can take many forms, including subtle manipulation enabled by generative AI’s capacity for conversational engagement with its human users—machines posing undetectably as humans. Social manipulation can also be perpetrated by people using AI for nefarious purposes, for example, by creating virtual social influencers so humanlike that gullible followers don’t realize they’re following machine creations rather than real people (Nguyen, 2023). Chatbots can be abused to create manipulative and controversial content (Gehl & Lawson, 2023). The story of Tay, an AI chatbot released by Microsoft on the platform then known as Twitter, illustrates how easily societal unrest can be deliberately incited via AI-powered technology (Wakefield, 2016).

Discriminatory decision-making by AI systems can have serious social and economic consequences (NIST News, 2022). From school admissions to job recruitment to allocation of medical insurance, decision-making biased by gender or ethnicity can negatively impact a person’s rights to equitable allocation of opportunities and resources (Nicoletti & Bass, 2023; Obermeyer et al., 2019).

One of the most serious hazards of advancing AI is that it makes it easier to generate and disseminate fake news in the form of factually incorrect text and deepfake photos, videos, and audio tracks. For example, deepfake creation processes that were previously prohibitively costly, such as voice cloning, are now offered by startups for a few dollars. It’s estimated that at least 500,000 video and voice deepfakes were shared on social media sites during 2023 (Baptista et al., 2023). Apart from spreading general misinformation, fake news and deepfakes have serious implications, from reputation damage and blackmail of individuals to manipulation of election outcomes that affect entire nations and even global politics (Dack, 2019; Ulmer & Tong, 2023). Adding to the problem, AI developers remain largely unable to explain (or prevent) what are commonly known as “hallucinations”—instances in which an AI program confidently gives and defends an answer that is factually incorrect (Hatem et al., 2023).

My personal experience and the robust list of theoretical and emergent risks of AI would ordinarily have me standing shoulder to shoulder with vehement naysayers pleading to stop (or at least slow) the advances of AI. But I am not there—far from it.

I am an ardent fan of AI, championing its power as the next evolutionary step for Homo sapiens—for two reasons. First, the benefits of AI are unimaginably vast. And second, there is demonstrable evidence that we can protect ourselves both individually and collectively against potential harm.

The Next Step in Human Evolution

I regard the development and implementation of AI as the most spectacular leap forward our species will see in our lifetime, even as I remain committed to preventing human domination by advancing technology. I have no doubt that AI is the future. I’m both excited and intrigued by it.

As a species, I believe we’re rapidly evolving beyond the bounds of natural intelligence and gaining a superintelligence that will eclipse our wildest imaginings. Although AI isn’t a part of our inner biology, it’s an extension of our mind that, when used for good, has the potential to improve our existence in myriad ways. I could add a long list of very real and tangible examples here. Suffice it to say that we are speeding up our decision-making capacity, solving problems that were previously out of reach by identifying unseen patterns in massive datasets, helping people communicate better, and collating our collective expertise and experience to build a viable future for humanity.

With AI serving us—and not the other way around—we can become stronger, faster, more accurate, more consistent, more creative, more sustainable, better able to solve big problems, and much else besides.

The Solution: Protecting the Species Against Harm

So, how do we make sure AI is serving us and not the other way around? First, consider that those who do not have authority over their brains are at the greatest risk of being overcome by AI. Look closely at any of the damaging examples I’ve touched on, and you’ll find a shortage of people who have reclaimed authority over their own brains. Mindlessness increases the risk that an individual will be manipulated by a chatbot or taken in by deepfakes. And, as my opening example shows, if the mindless are the majority, there is substantial risk of systemic ruin. Herein lies the urgent imperative for individual awareness and action.

Each of us needs to ask, “Who is in my driver’s seat? Am I actively present, stewarding my brain, which acts under my intentional authority? Or am I asleep at the wheel?”

If you understand the operation of your own brain, if you take control of your greatest evolutionary gift, then you’ll also enjoy the emerging power and huge advantages provided by AI. If not, you’ll most likely join the powerless masses subjugated by the technology—a future that is loaded with disproportionate individual and collective risk.

So what can you do to gain and sustain your autonomy? Just like your mobile phone and laptop, your brain has an operating system. I call it the Brain Operating System (or BOS). The prefrontal cortex (PFC), the newest part of the human brain (from an evolutionary perspective), is well recognized as being fundamental to mastery of our full natural intelligence.

There are many steps you can take to master your BOS, but knowing how to use your PFC is key. Operating from the PFC, we can take a supervisory overview of our (often) chaotic thoughts, feelings, and fears. This puts us firmly in the driver’s seat of our BOS and empowers us with maximal authority over our natural intelligence and, by extension, our engagement with AI.

Though the science of the BOS is far more complex than can be adequately described here, one concrete action you can take toward mastery of your natural intelligence is to enact the five-step process outlined below. You can apply this process to gain clarity and perspective, enabling you to tackle any challenge, obstacle, or big decision, especially when negative emotions are clouding your judgment and the fears underlying them are in control of your brain. Here are the steps:

  1. Pause: Begin with a brief pause—a mindful moment—by taking three deeper-than-normal breaths. An intentional pause shifts the seat of your awareness into your PFC and prepares you to take executive control of your BOS.
  2. Feel negative emotions: Become aware of any negative emotions you’re experiencing by asking and answering, “What am I feeling about this situation?”
  3. Explore fear: Identify, acknowledge, and explore the fears underlying these negative emotions by asking and answering, “What am I afraid of in this situation?” Try to identify all of your (often hidden) fears.
  4. Think: Activate your powerful neocortex by asking and answering, “What do I think about this situation?” As you contemplate your situation objectively, you should notice positive thoughts arising almost spontaneously. This happens when you purposefully shift your attention from your fears to your thoughts. If you find yourself still experiencing negative emotions, it suggests that you’re uncovering more (and perhaps deeper) fears. Return to Step 3. Once you’ve identified and explored all of your fears, you can resume positive thoughtfulness without resistance.
  5. Feel positive emotions: Positive thoughts generate positive emotions. Recognize and celebrate the positive feelings you have evoked using your natural intelligence to override fear and gain clarity and perspective. You’ll now feel empowered to find the best way forward, unencumbered by your survival-driven brain that would otherwise have held you hostage.

How Does This Story End?

My childhood story ended dismally, with a few malicious players seizing destructive control over a mindless majority. For humanity’s sake, I ardently hope that we will have greater wisdom as we face our own unfolding reality. I hope that we will make a better choice today and will urgently claim authority over our individual and collective consciousness.

I hope that enough of us will master our brains to enable AI to be the next, hugely positive step in our evolution rather than mindlessly allowing our species to be controlled by the technology—with disastrous results. Without urgent action, I fear that cold, heartless, super-focused leaders of today and tomorrow could abuse AI to enslave feeble human brains.

Unlike in the mock election, our decision about AI will not be a practice run for “real” life or a learning experience we can improve upon next time. If we allow AI to be used as a tool to gain control over our minds today, it is unlikely we will ever be able to wrest it back.

I predict that humans will dominate supercomputers in the future—but it will be a tiny elite who have command of this vast power, and they will have learned how to master their own brains first. I hope that simply knowing that such autonomy is possible inspires you to take a big step today toward your own personal mastery.

If you want to become (or remain) a leader in the era of superintelligence, please reach out to me through my website, RoddyCarter.com, and/or explore the brain-mastery resources there. I am committed to working as long as I have breath in my body and command over my own intellect to help those who want to learn, who want to thrive, and who want to lead.

Author’s Note

I consider Dr. Li’s book on the emergence of AI (The Worlds I See) to be a must-read for anyone who cares about the future of our species. It stands alongside two of my all-time favorite books: Yuval Noah Harari’s Sapiens and Homo Deus. Harari tracked the footprints of Homo sapiens from our earliest origins through modern times in Sapiens and contemplated the future of our species in Homo Deus (the title alluding to our future access to powers previously imaginable only in the hands of a deity). In The Worlds I See, Li gives us a close and personal account of the emergence of AI, allowing the nontechnical reader a reasonable and balanced understanding of the scientific and ethical platform on which our future rests.

References

Alfonseca, M., Cebrian, M., Anta, A. F., Coviello, L., Abeliuk, A., & Rahwan, I. (2021). Superintelligence cannot be contained: Lessons from computability theory. Journal of Artificial Intelligence Research, 70. https://doi.org/10.1613/jair.1.12202

Baptista, D., Smith, A., & Harrisberg, K. (2023, October 22). Voice actors take action as AI tries to steal their voice. Business Day. https://www.businesslive.co.za/bd/world/2023-10-22-voice-actors-take-action-as-ai-tries-to-steal-their-voice/

Blachnio, A., Przepiorka, A., Cudo, A., Angeluci, A., Ben-Ezra, M., Durak, M., Kaniasty, K., Mazzoni, E., Senol-Durak, E., Hou, W. K., & Benvenuti, M. (2023). Self-control and digital media addiction: The mediating role of media multitasking and time style. Psychology Research and Behavior Management, 16, 2283–2296. https://doi.org/10.2147/PRBM.S408993

Dack, S. (2019, March 20). Deep fakes, fake news, and what comes next. The Henry M. Jackson School of International Studies, University of Washington. https://jsis.washington.edu/news/deep-fakes-fake-news-and-what-comes-next/

Draper, D., & Neschke, S. (2023, October 4). The pros and cons of social media algorithms. Bipartisan Policy Center. https://bipartisanpolicy.org/explainer/social-media-algorithms-pro-con/

Eliot, L. B. (2023, March 1). Generative AI ChatGPT as masterful manipulator of humans, worrying AI ethics, and AI law. Forbes. https://www.forbes.com/sites/lanceeliot/2023/03/01/generative-ai-chatgpt-as-masterful-manipulator-of-humans-worrying-ai-ethics-and-ai-law/?sh=45b133a51d66

Gehl, R. W., & Lawson, S. (2023, June 27). Chatbots can be used to create manipulative content—understanding how this works can help address it. The Conversation. https://theconversation.com/chatbots-can-be-used-to-create-manipulative-content-understanding-how-this-works-can-help-address-it-207187

Golino, M. A. (2022, April 24). Algorithms in social media platforms. Institute for Internet & the Just Society. https://www.internetjustsociety.org/algorithms-in-social-media-platforms

Hatem, R., Simmons, B., & Adler, J. R. (2023). A call to address AI “hallucinations” and how healthcare professionals can mitigate their risks. Cureus, 15(9), e44720. https://doi.org/10.7759/cureus.44720

Hilliard, J. (2023, October 26). Social media addiction. Addiction Center. https://www.addictioncenter.com/drugs/social-media-addiction/

Ienca, M. (2023). On artificial intelligence and manipulation. Topoi, 42, 833–842. https://doi.org/10.1007/s11245-023-09940-3

Li, F.-F. (2023). The worlds I see: Curiosity, exploration, and discovery at the dawn of AI. Flatiron Books.

Max Planck Society. (2021, January 11). We wouldn’t be able to control superintelligent machines. https://www.mpg.de/16231640/0108-bild-computer-scientists-we-wouldn-t-be-able-to-control-superintelligent-machines-149835-x?c=2249

Meshi, D., Elizarova, A., & Verdejo-Garcia, A. (2019). Excessive social media users demonstrate impaired decision making in the Iowa Gambling Task. Journal of Behavioral Addictions, 8(1), 169–173. https://doi.org/10.1556/2006.7.2018.138

Metz, C. (2023, May 1). The ‘Godfather of A.I.’ leaves Google and warns of danger ahead. The New York Times. https://www.nytimes.com/2023/05/01/technology/ai-google-chatbot-engineer-quits-hinton.html

Montag, C., & Markett, S. (2023). Social media use and everyday cognitive failure: Investigating the fear of missing out and social networks disorder relationship. BMC Psychiatry, 23, 872. https://doi.org/10.1186/s12888-023-05371-x

Moshel, M. L., Warburton, W. A., Batchelor, J., Bennett, J. M., & Ko, K. Y. (2023). Neuropsychological deficits in disordered screen use behaviors: A systematic review and meta-analysis. Neuropsychology Review. https://doi.org/10.1007/s11065-023-09612-4

Nguyen, M. (2023, September 19). Virtual influencers: Meet the AI-generated figures posing as your new online friends—as they try to sell you stuff. The Conversation. https://theconversation.com/virtual-influencers-meet-the-ai-generated-figures-posing-as-your-new-online-friends-as-they-try-to-sell-you-stuff-212001

Nicoletti, L., & Bass, D. (2023). Humans are biased. Generative AI is even worse. Bloomberg. https://www.bloomberg.com/graphics/2023-generative-ai-bias/

NIST News. (2022, March 16). There’s more to AI bias than biased data, NIST report highlights. https://www.nist.gov/news-events/news/2022/03/theres-more-ai-bias-biased-data-nist-report-highlights

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453. https://doi.org/10.1126/science.aax2342

Ulmer, A., & Tong, A. (2023, May 31). Deepfaking it: America’s 2024 election collides with AI boom. Reuters. https://www.reuters.com/world/us/deepfaking-it-americas-2024-election-collides-with-ai-boom-2023-05-30

Wakefield, J. (2016, March 24). Microsoft chatbot is taught to swear on Twitter. BBC News. https://www.bbc.com/news/technology-35890188

Weinstein, A. M. (2023). Problematic social networking site use—effects on mental health and the brain. Frontiers in Psychiatry, 13, 1106004. https://doi.org/10.3389/fpsyt.2022.1106004