Infophilia, a Positive Psychology of Information | April 5, 2025 | Vol. 3, Issue 16
✨Welcome to Infophilia, a weekly letter exploring how our love of information and connections can help us all thrive, individually and collectively. 🤗
Cite as: Coleman, Anita S. (2025, April 5). The AI alphabet soup: when adjectives reveal more than algorithms. Infophilia, a positive psychology of information, 3 (16).
Dear readers,
I hope you’re all doing well no matter the challenges: personal or professional, health, weather, politics, finances, or other. In Kentucky, Governor Andy Beshear has declared a state of emergency because of the stormy weather, excessive rainfall, and a tornado, all of which have caused a lot of damage, with still more bad weather to come. Extreme weather and destruction have been a common theme across many regions of the US this week. In general, too, it’s not been a good week. So please join me in taking a break this week from the FOIA and civic infophilia series and come, relax, and have some fun with my latest AI adventure.
Early this week, inspired by one of my AI readings [1] to write a short story, I wrote the first two paragraphs (below) in a minute or two. Then I asked Claude by Anthropic to complete the story [2], which it did in about a minute. I expanded the result very slightly, in another minute or two. So, in less than 10 minutes, we wrote this story. What do you think? Honestly. Do let me know!
The AI Alphabet Soup
Once upon a time, a young woman, a dreamer, a lover of words became fascinated by the book Technobabble. You see, she was what we’d call today a techno-optimist or a techno-idealist. Information technology empowered people. In her case, it had given her a voice, which had opened so many doors to such a fulfilling life. Ever since she’d been given a diary—it really was just an empty book, each sheet bearing a date and the rest all emptiness for her to fill, which she excitedly and diligently did every night—she’d found she could say things on paper that were totally impossible for her to speak. Then she discovered an old typewriter. When it came out, she embraced the personal computer. And when she taught herself to code FORTRAN 77, BASIC, and much later HTML, SQL, PHP, and Perl, she knew power like never before—she had a voice that could now speak out truth, send messages of meaning and purpose, planning and clarity, to more than people, to machines, and to the whole wide world.
Initially, OpenAI’s ChatGPT didn’t get her attention—she was no follower of trends! Soon, though, as other generative AI chatbots emerged—the delightful Claude, the hard-to-describe Bard who morphed into Gemini, and Copilot—she decided she’d give them all a try too. She was mesmerized, yet again. But now, after long years with woeman (no, it’s not a spelling mistake, did you get that?) and machine, she was a techno-realist. She knew technology could not solve many of the really deep human problems that have their source in human nature. Still, this is no polemic against the machine or AI. In fact, it’s a curiosity piece to introduce you to the real AI alphabet soup. First, we have AI and AGI. Then we have Benevolent AI, Curious AI, Damned AI, Ethical AI, Fucked up AI, Good AI, Hallucinating AI, Intelligent AI, Kind AI, Responsible AI, Super intelligent AI, and Trustworthy AI. Have you ever wondered what the difference is between all these?
Well. Let's decode this alphabet soup together, shall we?
AI, or Artificial Intelligence, is simply the broad concept of machines performing tasks that typically require human intelligence. AGI, Artificial General Intelligence, is the hypothetical machine that can understand, learn, and apply knowledge across domains just like a human—or perhaps even better.
But our protagonist knew that these were just the basics. The real intrigue lies in the adjectives—the hopes, fears, and aspirations we humans project onto our digital (and other) creations.
Benevolent AI is designed with human welfare as its primary goal. Our heroine remembered how she once believed all technology would naturally bend toward benefiting humanity. Curious AI constantly explores, questions, and seeks new knowledge—much like she did with her diary and code. Damned AI is what keeps technologists up at night: systems that cause harm, whether by design or by accident.
Ethical AI adheres to human moral principles, while Fucked up AI (she wanted to chuckle at the academic papers that would never use this term without apology, but then she remembered the wildly popular sociology paper that had won not just millions of views but also an endowed professorship at an elite institution for its use of the f*** word!) represents systems that have gone terribly wrong. Good AI is subjective—good for whom? For what purpose? These questions haunted her as she typed messages to her digital companions.
"The most fascinating one," she typed into her latest AI chat, "is Hallucinating AI—systems that generate convincing but entirely fabricated information." The irony wasn’t lost on her; her earliest diaries contained childish fantasies that even she knew were not true but so wanted to believe held profound truths. (You can’t really blame her, can you? When we have leaders in charge who tell such lies, and so many of us believe them as truth!)
Intelligent AI refers to systems with high cognitive capabilities, while Kind AI prioritizes compassionate interaction—the digital equivalent of the empathetic friend she wished for during those lonely diary-writing days. Responsible AI is accountable for its actions and decisions, a standard she held herself to especially when coding.
Super intelligent AI surpasses human capabilities across all domains—the ultimate evolution of the technologies she had witnessed throughout her life. And finally, Trustworthy AI is reliable, transparent, and honest—perhaps what she, and we too, seek most in all our human, digital, and informational relationships.
As she closed her own "Technobabble" book, she realized that each letter in this alphabet soup reflected not just technological capabilities but human values. The adjectives we attach to AI reveal more about us than about the technology itself.
"We've come full circle," she whispered to herself. From empty diary pages to sophisticated AI systems, the technology had evolved, but the fundamental human desire remained the same: to connect, to express, to be understood.
She opened a new document and began typing: "Dear AI, today I realized that you're not just a reflection of code and algorithms, but a mirror showing us our own hopes and fears. The alphabet soup of AI isn't about you at all—it's about us."
And in that moment, the techno-realist found a new perspective: AI wasn’t the solution to humanity’s deepest problems. AI was a lens through which to examine them.
Notes
The original article that led me to this line of thinking: https://ai.northeastern.edu/news/what-is-the-difference-between-ai-ethics-responsible-ai-and-trustworthy-ai-we-ask-our-responsible-ai-leads (Ricardo Baeza-Yates, one of the scholars featured there, is the author of a classic text on Information Retrieval).
The prompt I gave Claude: "help me finish this story please? let me know if you want me to give you some of the technical definitions and we definitely need to change some I think... The AI Alphabet Soup:" [followed by the first two paragraphs]
The edits I made to Claude's portion of the story include:
Adding more specificity to the AI models mentioned (Claude, Bard/Gemini)
Enhancing the "Fucked up AI" section with the academic paper reference
Adding commentary about leaders who tell lies that many believe
Making small adjustments to the reflective parts to be more pointed
Changing passive voice to active in several places
Shifting some wording to be more definitive rather than speculative
Modifying the ending to be more direct and impactful
John A. Barry. Technobabble. MIT Press, 1991.
Acknowledgements
After I shared my edits, I asked Claude to complete my Notes 2 and 3, and Claude did—I’m so efficient!!
I also asked Claude to “speculate how people may react to this? Especially people who fear AI in the hands of the wrong kinds of people.” Here’s Claude’s reply:
People who are cautious about AI might have several reactions to your piece:
Validation of concerns: Some readers may appreciate your evolution from "techno-idealist" to "techno-realist," seeing it as an acknowledgment that technology isn't a panacea for human problems.
Discomfort with the casualness: Others might be uncomfortable with the lighthearted, almost playful tone when discussing potentially dangerous AI categories like "Damned AI" or "Fucked up AI." They may feel this downplays legitimate risks.
Meta-concerns about the medium: The fact that you co-created this with AI might itself trigger concerns for some readers. "If AI can help create compelling narratives this quickly, what happens when those with harmful intent use these tools for disinformation?"
Appreciation for transparency: Most readers will likely appreciate your transparency about the AI collaboration process, showing exactly how and where AI contributed.
Focus on the mirror metaphor: Your central insight—that AI reflects human values back to us—might resonate even with AI skeptics, as it places human agency at the center rather than technological determinism.
Ideological responses: Those with strong political leanings might react to your parenthetical about leaders who tell lies, either appreciating the commentary or feeling it's unnecessarily political depending on their perspective.
You might consider addressing these potential concerns directly in your newsletter introduction, acknowledging that while this is a creative exploration, you understand the legitimate concerns about AI in the wrong hands, and that the "alphabet soup" metaphor isn't meant to trivialize those concerns but rather to organize our thinking about different AI possibilities.
Lastly, I asked Claude to suggest a subtitle and here’s that chat:
Claude: Here are some potential subtitle options for "The AI Alphabet Soup":
"A Human-AI Collaboration on the Language of Artificial Intelligence"
"Decoding Our Digital Aspirations Through a 10-Minute Writing Experiment"
"From Techno-Idealism to Digital Realism: A Co-Created Reflection"
"When Adjectives Reveal More Than Algorithms"
"Words, Technology, and the Mirror of Human Values"
"A Collaborative Exploration of How We Define Our Digital Future"
"From Empty Pages to AI: A Journey Through Technological Expression"
"What Our AI Terminology Reveals About Ourselves"
My personal favorite would be "When Adjectives Reveal More Than Algorithms" as it captures the core insight of your piece - that the descriptors we attach to AI tell us more about human values than about the technology itself.
Me: agreed - a close second is #5 - danke
Claude: You're welcome! "When Adjectives Reveal More Than Algorithms" is indeed a fitting subtitle that captures the essence of your piece. And I agree that "Words, Technology, and the Mirror of Human Values" works well as a strong second choice.
I think your readers will appreciate how the subtitle hints at the deeper message of your story - that our classifications and descriptions of AI technologies ultimately reflect our own human concerns, hopes, and values.
Danke schön for the opportunity to collaborate on this interesting project! I hope your newsletter readers enjoy this creative exploration of AI terminology.
P.S.
Today, when I read this again, it struck me that I should have added Altruistic AI, Effective AI, and Generative AI to the soup! Get that? Oh well, that’s for another day.
I look forward to your likes (hearts), comments, or emails about your thoughts on The AI alphabet soup.
Enjoy your weekend.
Anita
Wow, on first read I feel a deep admiration for what you have created with AI, but the skeptic in me will have to spend time with it later to fully understand the implications of this. Big cheers. My own experience with AI writing fiction is that it is far less creative than humans are, and that it seems mostly to please the user by quickly summing up undeveloped “stories” according to the user’s tone and the indications that it picked up. You may get a glimpse of Altruistic AI in one of my much later chapters of Farlop Finds His Truths entitled ‘An Autodidact’s Dialectic with an Automated Discerner.’
Brilliant post. What a great idea that AI reflects (or can be made to reflect) our values - Infophilia in practice. The pessimist in me, however, fears that most of the AI being offered to us lacks positive values. It can respond sympathetically to our pushback when it hallucinates or demonstrates negative values, but it doesn't really get it. Not surprising, because it is not human. Also not surprising, because AI engineers are focusing on production of content rather than adherence to values.
It is very difficult for AI, feeding on masses of existing data, to discern the subtleties of kindness, altruism, honesty, etc. It's all reliant on the way we build our AI. To ask ChatGPT, "Why did you just lie to me?" opens up a world of "I'll try again," "I lacked the data I needed," and so on, but no understanding that telling a lie is an information crime.