Trust in Turbulent Times
An Infophilia Roundup

Infophilia: A Positive Psychology of Information | January 10, 2026, Vol. 4, Issue 2
✨Welcome to Infophilia, a weekly publication exploring humanity's evolutionary drive for information and how it shapes thriving, meaning-making, and human connection, individually and collectively.
If this is your first time: Welcome. I’m glad to have you here!
📌 Access & Attribution: This serial offers free previews always and occasional open access essays to keep scholarship accessible. Students and those facing financial barriers can request complimentary access. If you find value here, please cite the original work and consider supporting this public scholarship through subscription.
Cite as: Coleman, A. S. (2026). Trust in turbulent times: An Infophilia roundup. Infophilia: A Positive Psychology of Information, 4(2).
Trust in Turbulent Times
An Infophilia Roundup
The infrastructure of culture and memory—whether physical or informational—requires active protection, not passive faith that it will endure.
Before we begin, a moment to mark January 7, the one-year anniversary of two of California’s most devastating fires—the Pacific Palisades and Eaton fires. The Getty Villa, a remarkable repository of ancient art and knowledge, remained standing as the Palisades Fire scorched the hillsides and neighborhoods around it. Vegetation, hillsides, and around 1,400 trees on or near the estate burned, but the buildings and collections remained intact, thanks to fire-mitigation work, on-site staff, and firefighters. The infrastructure of culture and memory—whether physical or informational—requires active protection, not passive faith that it will endure. Getty and partner organizations subsequently created the LA Arts Community Fire Relief Fund to support artists and arts workers affected by the fires.
It’s just ten days since the New Year began, but 2026 already promises whirlwinds. From the Donroe Doctrine dominating international news to defenders calling the ICE killing of Renée Nicole Good a “good shooting,” from the continuing backlash against transhumanist AI to MAHA’s new health policies, from the Rama haircut and Halo lip beauty trends to one of the world’s largest icebergs, which has “sprung a leak”—it can seem... overwhelming? Exhausting? You fill it in.
Books constitute capital… It is not, then, an article of mere consumption but fairly of capital. —Jefferson
Time for another roundup. Grab your tea, coffee, or water, settle into your cozy spot, and remember what Jefferson wrote: “Books constitute capital… It is not, then, an article of mere consumption but fairly of capital.” Infophilia isn’t a book—at least not yet—but we write here every week, expanding work I might have tucked into an academic press for 200 readers. Instead, we’re building something more public, more alive.
But first, gratitude. Thank you to everyone who responded to last week’s “The Self-Censorship They Didn’t Want to Hear About: On Intellectual Freedom, Information Health, and the Future Selves We Refuse to Become.” I’m humbled and honored that the essay was cited in this week’s AL Direct, the American Library Association’s membership communication, and I’m especially delighted to introduce you to Carolyn Culbertson, a new paid reader—an MLIS student and public health communications professional—who left this message (and gave me permission to share it):
“You articulated something I have felt without really understanding in my public health work, where I’ve seen the dangerous outcomes of misinformation. Your arguments give me hope because you provide a novel framework that could begin a new chapter for information literacy and intellectual freedom.”
Bill Badke, the eminent information literacy scholar (also Infophilia’s first reader and paid subscriber from our beginning three years ago) wrote:
“I think this is your most important post yet…”
Thank you, Bill and Carolyn, and thank you to ~20 new readers who joined us this week. If there’s one theme threading through today’s roundup, it’s trust.
Signal & Noise
On What’s Happening with Generative AI
The solution isn’t refusing AI tools—it’s regulating their use and maintaining rigorous verification practices.
Many prominent writers and outlets—including the New Yorker—are now on Substack. So is the She Writes AI Community, where Karen Smiley is building an accessible, community-based directory of women and nonbinary people writing about AI (disclosure: I’m listed there). The one-year-old directory includes 566 writers across 56 countries, publishing newsletters in 16 different language combinations.
While AI pundits focus on Altman’s “Code Red” declaration last December and on chatbot personas, and librarians teach people to spot AI hallucinations (or how to adopt roles), the real action is also happening in open-source AI development. My current favorite is Qwen (developed by Alibaba Cloud), though DeepSeek’s late-2025 reasoning models also merit attention. Qwen and DeepSeek aren’t just technical or research curiosities. They represent an alternative to Big Tech AI development: open-weight Chinese models released under rather permissive licenses instead of being kept fully proprietary.
Here’s why this is relevant: China’s amended Cybersecurity Law, taking effect in 2026, explicitly brings AI into its scope—supporting AI research and strengthening ethics, risk assessment, and security governance. These changes, together with other policies, don’t shut down open-weight releases but create a “more regulated, but still actively supported, environment” for them. This makes China a live, large-scale experiment in AI governance—whether or not one trusts the Chinese government’s approach.
When Gemini lives inside Google Docs, and Microsoft 365 Copilot, powered by GPT-4-class models, is built into Word, how do we know who’s writing what?
A recent high-profile discussion brought together Michael Burry (who called the subprime mortgage crisis), Jack Clark (co-founder of Anthropic—Claude AI), and Dwarkesh Patel (tech interviewer) with Patrick McKenzie (attention is all you need) as moderator to debate whether AI represents genuine transformation or “a historic misallocation of capital.” The organizers note they “put them into a Google doc” for the conversation—a debate about the future of AI!
That phrase—“put them into a Google doc”—turned me off. When Gemini lives inside Google Docs, and Microsoft 365 Copilot, powered by GPT-4-class models, is built into Word, how do we know who’s writing what? When Patel reportedly says AI tutors offer “a qualitatively much better experience” than human ones, should we trust that those are his unmediated words, or question whether AI tools shaped the conversation about AI’s value?
This isn’t paranoia (or conspiratorial thinking). It’s the epistemological reality of our moment: the tools we use to discuss AI are themselves AI-enhanced, creating a verification problem that compounds with every layer. Yes, I use AI—and although I prefer open-source AI run on private servers, I’m not above using commercial tools like Claude. Yes, I verify with AI, like Perplexity. But I also use and check other sources, like libraries and archives.
Libraries, archives, and museums—with their centuries of curated catalogs, meticulously managed indexes, finding aids, and repositories—are infrastructure that costs money to maintain, capital we must actively protect.
Here I must give a huge shout-out of thanks to the Libraries at the University of Illinois Urbana-Champaign—one of the world’s largest libraries. Library access is what makes this public scholarship reliable. Thank you, UIUC Libraries and iSchool!
The solution isn’t refusing AI tools—it’s regulating their use and maintaining rigorous verification practices. Libraries, archives, and museums—with their centuries of curated catalogs, meticulously managed indexes, finding aids, and repositories—are infrastructure that costs money to maintain, capital we must actively protect.***
Forensic Scientometrics and Academic Integrity
Speaking of verification: Leslie McIntosh of Digital Science is also on Substack, where her Forensic Scientometrics newsletter develops tools for maintaining trust in knowledge production. Recent posts examine spoofed institutional domains and affiliations, self-citation patterns, and networks of questionable research linked to seemingly high-profile authors. When university leaders boast that a faculty member or unit is producing 100+ papers annually, methods such as network mapping, affiliation verification, and anomaly detection in citation and productivity patterns can help distinguish genuine scholarship from gaming the system—a task that becomes increasingly vital as AI makes paper production dramatically easier and journals report surging submissions.
FOIA Files: The Past is Always Present
Jason Leopold, nicknamed the “FOIA terrorist” for his relentless transparency advocacy, reminds us in his latest FOIA Files (yesterday): “With the FOIA, the past is always present.” His latest dataset, from the Office of Personnel Management, reveals previously unreported details about DOGE hiring—including familiar names like Jordan Wick, Edward Coristine, Christopher Stanley, and Luke Farritor, along with their salaries and assigned agencies. The intense secrecy around DOGE in early 2025 makes this transparency work essential.
Two Irreconcilable Truths
Search “January 6” and you’ll encounter split realities. The White House page declares it “A Date Which Will Live in Infamy,” featuring President Trump’s sweeping pardons for defendants he deemed unfairly targeted patriots. Scroll further and you’ll find Britannica and Wikipedia describing a violent attempt to overturn an election. The basic infrastructure of information now encodes radically different narratives about the same day, mirroring our fractured epistemology.
What’s Changing in Library News
On January 12, ARL’s former Day in Review relaunches as ARL Daily Intelligence—”curating and sharing the top news and analysis for library leaders and advocates.” The rebrand illustrates how generative AI accelerates what we used to call “desktop publishing,” now morphed into broadcast influence at unprecedented scale.
More consequentially: effective February 2, 2026, the Library of Congress will stop adding form subdivisions ($v) to LC subject heading strings and will instead express them with LC Genre/Form Terms. For catalogers, systems librarians, and anyone who cares about fiction access—children’s and adult public librarians especially—this is seismic. The ALA Core SAC response flags concerns about workflow disruption and about how patrons will find formats like fiction, comics, and guidebooks in the catalog.
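For readers who don’t live inside MARC records, here is a hypothetical before-and-after sketch of what the change means for a single bibliographic record. The heading and genre term are illustrative examples I’ve chosen, not drawn from the LC announcement itself:

```
Before (format carried as a $v form subdivision in the subject string):
650 _0 $a Paris (France) $v Guidebooks.

After (subdivision dropped; format expressed as a separate genre/form term):
650 _0 $a Paris (France)
655 _7 $a Guidebooks. $2 lcgft
```

Discovery systems that facet or keyword-match on $v in the 650 field will need to index the 655 genre/form field instead—hence the workflow and patron-access concerns the SAC response raises.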
From the Library of Congress Blogs
The Library of Congress blogs remain wonderful anchors. Recent highlights:
On January 10, 1776, Thomas Paine’s “Common Sense” was first printed and distributed in the American colonies. Published anonymously, it sparked revolutionary thinking.
On Tuesday, January 20, 2026, anyone can attend the Library of Congress webinar: Orientation to Legal Research: U.S. Case Law. These online orientations, taught by legal reference librarians and typically offered once a month on a rotating basis, are open to the public so that anyone can learn to do basic legal research using the tools librarians rely on.
Gloss
Beauty Trends Meet Political Moment
The Rama cut—that precisely calibrated length between pixie and bob that Rama Duwaji sports—has been making waves since Zohran Mamdani became NYC’s first Muslim mayor. Women nationwide are discovering they’ve been accidentally sporting this exact haircut during the awkward grow-out phase for which they’ve been needlessly apologizing.
The trend pairs perfectly with the Halo lip—lips shaped to glow like you’ve been blessed by a ring light deity! A sharp departure from previous trends where women wanted lips that looked like they’d been stung by luxury bees.
Information is capital, not mere consumption.
In a week when we’re navigating AI-mediated conversations, competing historical narratives, and institutional changes affecting knowledge access, Jefferson’s insight resonates anew: information is capital, not mere consumption. These anchors matter—transparent FOIA work, forensic scientometrics, and Library of Congress resources connecting us to revolutionary thinking. They represent our collective investment in truth infrastructure, assets we build and maintain across generations. This is also why I’ve included so many links in the Notes below.
Trust isn’t naive—it’s built through verification, maintained through accountability, and renewed through communities that refuse to let information infrastructure serve anything less than truth. Like the Getty Villa standing against the flames, knowledge institutions endure only through active protection and wise stewardship.
Have a great weekend.
✦✦✦✦✦
About the Image
One year after the Pacific Palisades fire, the Getty Villa stands as a testament to what endures when we actively protect cultural memory. This Drunken Satyr with a Wineskin (a reproduction statue), photographed in the Villa's peristyle garden pool over Thanksgiving weekend 2025, embodies that beautifully.
Notes
***On “regulating” AI use: I mean self-regulation—the kind of internal governance we explored in last week’s essay on self-censorship. Drawing on Derek Parfit’s framework that “we ought not to do to our future selves what it would be wrong to do to other people” (Reasons and Persons, 1984), regulating AI use means taking responsibility for how these tools shape our thinking and work, rather than waiting for external mandates. It’s personal epistemic responsibility in practice.
I also chose the word “regulation” because adaptive infophilia—our information styles and engagement patterns—varies by context (we can be healthy infophiles in one setting and misled infofools or infovores in another) and operates at multiple levels: individual, institutional, international, and beyond. This means AI governance matters at every scale. Consider current examples: Australia has enacted a national law banning social media accounts for children under 16, requiring platforms to prevent and deactivate such accounts. The EU’s “common charger” rules require all new smartphones sold in the EU to use USB-C charging ports by the end of 2024, explicitly justified as reducing e-waste. While nothing on the books today regulates AI with comparable scope—no “USB-C for all AI” or blanket national rules—we’re seeing emerging partial analogues. I’ll be introducing some of these in February with a series on FTC 6(b) and green libraries, much like I did with FOIA last year.
To be clear: Gemini “lives inside” Google Docs for many Google Workspace and some paid consumer tiers: it appears as a “Help me write” / Gemini panel that can draft, rewrite, and expand text directly in a document. It is not present for all free accounts, and you still have to invoke it; it does not write by itself.
Jefferson’s letter to James Madison (Books / Knowledge as capital not consumption) - https://founders.archives.gov/documents/Madison/04-02-02-0322 - Jefferson argued books are capital, not consumption and should be exempt from taxation (for individuals). We can apply this to our public knowledge infrastructure—FOIA systems, libraries, cultural institutions. These aren’t services we consume but generational assets requiring active stewardship and an expansive notion of intellectual freedom.
Getty Villa / Pacific Palisades and Eaton Fire. The Los Angeles Public Library’s Palisades Branch burned down on January 8, 2025, during the Palisades Fire; library leadership has said that essentially the entire 34,000‑item collection and the 11,500‑square‑foot building were lost. The Library Foundation of Los Angeles created the LAPL Palisades Branch Recovery Fund to support outreach, tech access, and eventual rebuilding; donations are being solicited for long‑term restoration. Rebuilding is underway but slow, with only temporary and stopgap services so far. The branch remains a focal point of community grief and recovery planning rather than a fully restored facility as of early 2026. See https://www.latimes.com/california/story/2025-10-11/pacific-palisades-fire-anger-los-angeles-city-hall-rebuild
AL Direct, American Library Association. Archive from its beginning through mid-December 2025. https://americanlibrariesmagazine.org/al-direct/
ALA Subject Analysis Committee Response to the Library of Congress’ Genre/Form Subdivisions Announcement: https://connect.ala.org/core/discussion/response-to-the-library-of-congress-genreform-subdivisions-announcement-1#bm715f0e3f-72a8-412d-91f1-de5bab834684
ARL Daily Intelligence. (ARL’s long-running Day in Review newsletter just rebranded; the product is the same, the name and framing are new.) https://www.arl.org/category/daily-intelligence/
Barcelona Artificial Intelligence Declaration. (2026, January 9). The Barcelona Declaration on AI in Europe: 2025 in review. https://barcelona-declaration.org/news/20260109_barcelona_declaration_2025_in_review/
Canadian officials say US health institutions are no longer dependable for accurate information. (2026, January 4). The Guardian. https://www.theguardian.com/world/2026/jan/04/canada-us-health-institutions-information
Donroe doctrine. (2026, January 8). See The Economist. https://www.economist.com/ (paywalled)
FOIA Files. https://www.bloomberg.com/news/newsletters/2026-01-09/data-reveals-details-about-doge-government-hiring
Quinn, M. (2026, January 9). Massive iconic iceberg turns blue and is “on the verge of complete disintegration,” NASA says. CBS News. https://www.cbsnews.com/news/iceberg-a23a-turns-blue-verge-of-complete-disintegration-nasa/
Ripatrazone, N. (2025, July 28). A.I.: Marshall McLuhan warned us about folks like Peter Thiel. He might be right. Slate. https://slate.com/technology/2025/07/ai-artificial-intelligence-peter-thiel-dangerous-marshall-mcluhan.html
Subscribe to email newsletters and alerts from the Library of Congress: https://www.loc.gov/subscribe/ - the intellectual wealth here is indescribable.
Upcoming US Law webinars. In custodia legis (law librarians of Congress): https://blogs.loc.gov/law/2026/01/upcoming-us-law-webinars-january-2026/
UIUC. Libraries, https://www.library.illinois.edu/ | iSchool Illinois
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł., & Polosukhin, I. (2017). Attention is all you need. Advances in Neural Information Processing Systems, 30, 5998–6008.
AI-mediated conversations. (2026, January 9). The AI revolution is here. Will the economy survive the transition? The man who predicted the 2008 crash, Anthropic’s co-founder, and a leading AI podcaster jump into a Google doc to debate the future of AI—and, possibly, our lives. URL: post.substack.com/p/the-ai-revolution-is-here-will-the
She Writes AI Community by Karen Smiley
Forensic Scientometrics by Leslie McIntosh
McIntosh, L. D. and Vitale, C. H. (2024). Forensic Scientometrics. – An emerging discipline to protect the scholarly record. ArXiv. https://arxiv.org/abs/2404.00478
Melissa Reeve, a digital marketing consultant, is also on Substack, writing Hyperadaptive Intelligence. In one of her essays, she describes how her AI team includes a professor, a writer, and a salesperson—and how yours can, too.
✦✦✦✦✦



