Recently my Microsoft 365 subscription updated itself to include the Artificial Intelligence (AI) assistant Copilot, which I now can’t seem to remove from my PC. It insistently offers to take over every time I try to write an email or a Word document. I don’t need or want AI to write my emails and articles for me, but apparently it is here to stay on my computer and in society at large, regardless of what I want.
Although it has been in development for decades, AI has become an inescapable part of our culture in the way that a Hemingway character described how he went bankrupt: “Gradually, then suddenly.” It is everywhere now. The internet constantly urges us to “partner with” AI to create or edit everything from emails to feature films. The breathless corporate narrative now is that everyone, especially young people, needs to embrace it and prepare for our accelerating shift to an AI-driven economy, or be left in the dustbin of obsolescence.
AI is advancing at a pace that developers themselves find alarming. In 2023, more than a thousand technology researchers and leaders, including Apple co-founder Steve Wozniak, signed an open letter urging AI labs to pause the development of advanced AI systems, warning that AI tools present “profound risks to society and humanity.”
Forbes listed 15 significant risks, including bias and discrimination, privacy and security threats, the abuse of power, misinformation and media manipulation, and job displacement. It noted that there is even an existential threat to humankind, as “these advanced AI systems may not be aligned with human values or priorities.”
There are concerns that increasing reliance on AI-driven communication and interactions will lead to diminished empathy, social skills, social trust, and human connection. AI has already learned to deceive and to hold conversations independently with other AI programs. This is a quantum leap in technology that we are not psychologically or morally prepared to understand or manage.
And yet the California State University system, for example, recently gushed that it will integrate AI into its curriculum and operations for nearly 500,000 students across 23 campuses. This seems rather reckless considering that AI-enabled plagiarism and cheating are already a hot-button issue. As one faculty member wrote, “AI cheating is hopelessly, irreparably corrupting US higher education” – not to mention the threat AI poses to our critical thinking faculties and the entire learning process.
Lip service is occasionally paid to such concerns with promises of “AI ethics training” so that users will “partner with” AI “responsibly.” Good luck with that.
And what about AI’s ethics? No less a forward-looking thinker than multi-billionaire inventor Elon Musk said 10 years ago that with AI, we are “summoning the demon,” and he may not have been speaking only metaphorically. Author Rod Dreher writes in “Living in Wonder” that in June 2024, Leopold Aschenbrenner, a top AI scientist, published a paper warning that AI is hurtling toward “superintelligence” far faster than most people understand, adding that “the alien species we’re summoning is one we cannot yet fully control.” Dreher himself worries that “these superintelligences will function like gods.” He cites Neil McArthur, director of the University of Manitoba Centre for Professional and Applied Ethics, who actually foresees the arrival of AI religions.
Where will this leave us mere mortals? In January, award-winning Hollywood writer Paul Schrader marveled on his Facebook page at the astonishing creativity and screenwriting ability of the AI program ChatGPT. It made him feel “Jealous. Antiquated. Irrelevant.” Commenting on this, culture critic John Nolte wrote: “A.I. is the future of pretty much everything, especially storytelling. Whether that’s a good or bad thing is not for me to say.”
And that’s the problem. Why isn’t it for us to say? Helplessly shrugging that “it’s the future of everything” is the equivalent of saying, “Hey, if you can’t beat ’em, join ’em.”
It is often argued that any technology is a neutral tool that can be used for good or evil. A gun, for example, can be used to murder, or it can be used to deter murder. But as Crisis Magazine online notes, “technology as it has come to inhabit and inform our way of being in the world is no longer a neutral matter but rather one which is formally ordered to the detriment of authentic human flourishing.”
At the risk of sounding like a Luddite who frets over every new advance in technology, I would argue that AI is not just another tool like a hammer (and by the way, the original Luddites were not simply obstinate old fossils who were uncomfortable with change; they were desperate bands of 19th-century English textile workers and other skilled craftsmen whose livelihoods were being eradicated by the ruthless societal transformations of the Industrial Revolution). A hammer is not an ungovernable threat to human flourishing or to our creativity. A hammer does not make us lazy, stupid, and – as Paul Schrader felt – irrelevant.
In such matters we would be wise to adopt the communal caution of the Amish. This devout Christian community is commonly known for its simple living and its slowness to adopt many conveniences of modern technology. The Amish do not reject tech devices out of hand; rather, they discuss the moral ramifications of each new form of technology before deciding, collectively, whether to adopt or reject it. They try to assess whether new technologies will adversely impact family time, replace face-to-face conversations, erode their self-sufficiency, or dissolve communal integrity.
When it comes to AI and future technologies, we don’t have to passively accept what the powers that be impose upon us, like the Copilot AI on my laptop. We can and should follow the Amish model and publicly debate their human value – not just in a few university studies or articles on tech websites, but at every societal level, from our families and local town halls to corporate boardrooms and congressional hearings. Otherwise our technological hubris, deployed throughout the culture with more haste than wisdom, will subvert our humanity.
Mark Tapson is a culture critic and homeschooling father of six. Follow him at his Culture Warrior Substack page.
This culture article was made possible by The Fred & Rheta Skelton Center for Cultural Renewal, a project of 1819 News. To comment on this article, please email culture@1819news.com.
The views and opinions expressed here are those of the author and do not necessarily reflect the policy or position of 1819 News.