
PERSPECTIVE: DeepSeek May Be the Least of Our Concerns

News of China’s DeepSeek model challenging America’s Silicon Valley and vaporizing a trillion dollars of market value in a day may well be a “Sputnik” moment for the U.S. 

But perhaps we should sound another kind of wake-up call. I’m not talking about the prospect of embedding AI in nuclear arsenals or the dystopian visions of Skynet and the robot apocalypse, but a more prosaic, short-term concern: that we’re all focusing too much on how we humans are changing and advancing artificial intelligence, and not paying enough attention to how AI is changing us.

Yes, today’s AI models promise unprecedented advances in the discovery of drugs and medical treatments, as well as breakthrough gains in productivity. And for the record, the Luddites were wrong; historically, automation has been a net benefit to society. But humans have never before dealt with a technology that rivals (and in some cases even surpasses) their own cognitive and reasoning abilities, rather than merely their physical strength, endurance, and dexterity, or their capacity to crunch and analyze vast volumes of data.

When algorithms can instantly generate content optimized for quick consumption, they create a feedback loop: shorter attention spans drive demand for simpler content, which further shortens attention spans. Psychologists call this “cognitive offloading” – the tendency to rely on external tools rather than develop internal capabilities. As machines get smarter, most of us may get dumber.

As a college educator, I see how generative AI tools have become intellectual shortcuts. Instead of wrestling with a problem, we simply prompt AI for an immediate answer or solution. But as knowledge is separated from understanding, we risk atrophying our critical thinking and reasoning – doing to them what the calculator did to our arithmetic skills, what smartphones did to our memory of phone numbers, and what GPS navigation is doing to our sense of direction.

And by making it so easy for anyone to create realistic, convincing “deepfakes,” AI lets alternate realities shatter our shared experiences and beliefs, leaving our society more fragmented, more polarized, and more easily manipulated into believing what others want us to believe. How trustworthy are news outlets when their algorithms feed us only what we want to hear and see? How can “experts” be trusted when anyone can appear to be one?

Chatbots and virtual assistants simulate social interaction without the challenges and growth opportunities of real human dialog. AI certainly didn’t create the social fragmentation and isolation wrought by all our “digital drugs,” but it’s clearly not making the problem any better.

And when you look at who’s in charge of advancing AI, the picture is not reassuring. By and large, these tech-bro billionaires are the same lovely people who brought us surveillance capitalism – that incredibly profitable business of spying on us to pander to, and capitalize upon, our biases, impulses, and desires. 

Perhaps, as the age of AI dawns, we need to resurrect a mindset from an earlier time: “caveat emptor,” or buyer beware.

Dr. Arthur O’Connor
Dr. Arthur O’Connor is an academic director of the MS in Data Science and BS in Information Systems programs at the CUNY School of Professional Studies. As the author of Organizing for Generative AI and the Productivity Revolution, Dr. O’Connor has extensively researched the organizational and economic shifts driven by AI adoption. His background as both an academic and a corporate executive offers a unique perspective on how businesses and policymakers should respond to generative AI breakthroughs.
