The Self-Censorship They Didn't Want to Hear About
On Intellectual Freedom, Information Health, and the Future Selves We’re Afraid to Become

Infophilia: A Positive Psychology of Information | January 3, 2026, Vol. 4, Issue 1
✨Welcome to Infophilia, a weekly letter exploring how our love of information and connections can help us all thrive, individually and collectively. This is a living lab for positive information psychology: avant-garde research in how we engage with knowledge, meaning, and each other.
If this is your first time: Welcome. We began 2025 with Information Literacy at 50 (open access) but you can explore earlier essays on Infophilia and the Dopamine Connection, Infophilic Information Styles, and Artificial Intelligence.
📌 Access & Attribution: This serial offers free previews always and occasional open access essays to keep scholarship accessible. Students and those facing financial barriers can request complimentary access. If you find value here, please cite the original work and consider supporting the public scholarship through subscription. Proper attribution sustains this work and models healthy information engagement.
Cite as: Coleman, A. S. (2026). The self-censorship they didn’t want to hear about: On intellectual freedom, information health, and the future selves we’re afraid to become. Infophilia: A Positive Psychology of Information, 4(1).
Happy New Year, 2026.
I was invited to speak about self-censorship, then uninvited when the organizers realized that I wouldn’t be discussing book bans. This essay is what I would have said. Welcome to 2026!
In the spirit of intellectual freedom, this essay is open access. Share it, cite it, argue with it.
This is also the intellectual freedom crisis of our times: not access to information, but the capacity to construct selves capable of using that access meaningfully.
They wanted me to talk about LGBTQIA+ book challenges. Important work, certainly—work that matters deeply to communities under attack. But what’s metastasizing while we’re busy defending Maus and Gender Queer?
I wanted to talk about the self-censorship that happens when librarians pursuing AI literacy can’t quite face what they’re learning. The self-censorship of professionals who’ve spent 2024 and 2025 accumulating frameworks and competencies around AI for a future they’re half-refusing to imagine. The self-censorship of a profession so focused on external threats to intellectual freedom that we’re missing the internal colonization already underway.
The self-censorship, in other words, of becoming—the future selves we’re too afraid to become.
But that’s not the intellectual freedom narrative librarians are used to hearing. So I was uninvited.
Which is its own kind of answer. But what kind of self-censorship was I trying to name?
Here’s another story about self-censorship.
In an antiracism workshop I once taught for library staff, a white male participant said he could not identify as a ‘white man’ because he associated that label with oppression. Some time later, I learned that a woman of color colleague who had helped open doors for him was quietly pushed out, and he eventually moved into the role she once held. The story is not unusual; what stayed with me was the gap between how he narrated his identity and the racialized outcome.
Hold that contradiction. It’s a map of what philosopher Derek Parfit called treating the future self like “another person”—someone you can betray without feeling you’ve betrayed yourself.
The librarian had consumed enough progressive discourse to perform an anti-oppression identity. But performance isn’t integration. His stated beliefs couldn’t reach his enacted behavior because there was no stable self bridging them—only an information stream constructing identity moment to moment, carefully curated to avoid the future self who would have to reconcile the contradiction.
This is what colonized selfhood looks like. This is self-censorship at the level of being.
Mental health disorders have been rising dramatically, globally, since the 1990s. Depression, anxiety, and ADHD diagnoses (high‑prevalence, ‘everyday’ end of the spectrum rather than rare, severe psychiatric disorders) are sharply increasing—particularly among adults who are newly recognizing themselves as neurodivergent. We tend to frame this as brain chemistry, lifestyle failure, or the cost of modern life. Research points to a more complex mix of biological, social, and structural drivers.
The evidence for intervention is compelling: exercise reduces depression as effectively as medication in many cases (for adults with non-severe depression). Poor sleep increases the risk of later mental health difficulties. Social isolation carries mortality risks similar to smoking. We know some of what works. The lifestyle factors are clear, measurable, actionable.
So why aren’t we doing it? The standard answer: We’re weak-willed. Addicted to our phones. Victims of Silicon Valley’s attention economy, paying the price of being modern!
All true. But incomplete.
What if we’re not only avoiding information? What if we’re avoiding the person we become when we stop consuming it—the unstimulated self who must actually live with what we know?
Many disciplines study information avoidance, identity avoidance, and self-image regulation through media. Building on that work, we can use phrases such as identity-avoidant information seeking, self-avoidant information consumption, and identity-regulation through information overload, although these are not standard clinical terms. The research shows that we consume and avoid information not only to learn and make decisions but also to manage moods, protect our self-concepts, and move toward preferred identities—the versions of ourselves we can tolerate inhabiting—while steering away from feared or dissonant selves, the versions we cannot bear to construct.
Paul Schofield writes about “duties to the self,” including cases where someone deliberately ensures their future self will be powerless to act on values that future self will (reasonably) endorse. It is, in effect, a kind of structured abandonment of the future self. By analogy, you might scroll through enough content to perform the identity you want, yet never integrate it deeply enough that it would constrain your actual behavior.
Ocean Vuong, in On Earth We’re Briefly Gorgeous, writes a letter across time haunted by the selves he could become but resists—particularly the violent, masculine, trauma-repeating versions. Vuong doesn’t invoke it, but the book can be read as an act of what David Brin calls “self-preventing prophecy”: it imagines disastrous future selves and trajectories in order to avoid becoming them, and to outlive them.
In Jenny Offill’s Weather, librarian Lizzie answers apocalypse questions for a climate podcast. She accumulates fragments of catastrophe—melting ice sheets, migration patterns, survival tips—without ever quite turning them into action. Lizzie is experiencing a kind of prejudice against her future self—the self who would have to fully reckon with what she knows.
It’s 2026 now, and real librarians are pursuing generative AI literacy with the same fragmented urgency—webinars, frameworks, and “essentials” guides squeezed into the margins of their disappearing days. They know the information landscape is weaponized and their professional role is under pressure as algorithmic search and AI systems reconfigure how people discover and trust information. They scroll through trainings and best practices, often in small snatches of time, accumulating AI knowledge they struggle to fully integrate, preparing for a transformed future.
Here’s what I believe makes our moment different: the avoidance is engineered, and now it is weaponized.
Tech firms didn’t stumble into addictive design; like tobacco and ultra-processed food companies before them, they use behavioral science and neuroscience to optimize products for compulsion and reward. They A/B test notification patterns and interfaces to maximize “time on device,” effectively exploiting our dopamine-driven vulnerabilities. Former Facebook leaders now say they consciously built “social-validation feedback loops” to exploit weaknesses in human psychology and keep us hooked, even as they marketed this as engagement, connection, and stickiness.
We have an information environment that makes the unstimulated self harder and harder to inhabit. We often know that Instagram, Facebook, Twitter, TikTok, or any other social media platform isn’t really what we need; we sense we’re using it as an agent for something deeper we can’t quite name. Yet we scroll and post anyway, not despite that knowledge, but because the alternative—being with whoever we are when nothing external is mediating us—feels almost intolerable.
Constant information consumption doesn’t just distract us from who we’re becoming. It can erode our capacity for sustained attention and self-reflection, and with it our sense of being anyone coherent enough to become. We lose the psychological musculature to inhabit an unstimulated self. Then that unstimulated self becomes genuinely intolerable—not because of pre-existing pathology, but because we’ve outsourced the construction of consciousness itself.
Much of what we casually call addiction feels, from the inside, more like withdrawal: the panic of trying to exist independently of the systems that have come to scaffold (or even construct) us moment to moment.
Some philosophers describe the moral-emotional residue of harming or abandoning a future self in terms of self-directed resentment: the later self’s grievance against the earlier one. In our context, that dynamic is now being industrialized—scaled and monetized—through systems that reward present selves for offloading costs onto their futures.
Librarians have been preparing for Orwell when we’re living in Huxley.
We’ve organized against external censorship—book bans, state control, explicit suppression. We’ve built our intellectual freedom frameworks around visible enemies: school boards, politicians, would-be censors trying to remove books from shelves.
Meanwhile, we’re drowning our patrons—and ourselves—in a Huxleyan nightmare of infinite choice, weaponized engagement, and information designed to prevent the formation of selves capable of caring about books at all.***
As Neil Postman put it, Orwell feared those who would ban books. Huxley feared we wouldn’t need to ban them—that we’d be too distracted, too saturated, too fragmented to read them. Orwell feared those who would deprive us of information. Huxley feared those who would give us so much we’d be reduced to passivity and egoism.
We’ve been fighting 1984 while Brave New World quietly colonized our consciousness.
The data centers humming in the desert (and into orbit) aren’t storing banned books. They’re storing the behavioral profiles that ensure we never develop the attention span to read challenging books in the first place. The AI tools we’re frantically trying to understand aren’t censoring information—they’re replacing the cognitive labor that would make us capable of synthesizing information into independent thought.
This is also the intellectual freedom crisis of our times: not access to information, but the capacity to construct selves capable of using that access meaningfully.
And when I tried to name this at a conference on self-censorship, I was uninvited.
In My Year of Rest and Relaxation, Ottessa Moshfegh’s narrator spends a year in drugged hibernation, literally trying to skip over the unbearable self she would otherwise have to become—the one who would feel and act and choose. It reads like satire until you realize how many of us are doing a milder version: scroll until numb, binge until unconscious, consume until the future self who would have to integrate all this information simply... doesn’t arrive.
We’re not just information avoiders. We’re future-self avoiders.
And here’s the cruelest twist: that librarian who performed progressive values while enacting oppression wasn’t uniquely hypocritical. He was showing us what happens when information consumption replaces integration. His stated beliefs couldn’t reach his enacted behavior because there was no stable self bridging them. Only an information stream constructing identity moment to moment.
The self-censorship isn’t just what we refuse to say. It’s who we refuse to become.
So when adults begin to identify as neurodivergent in what health systems describe as unprecedented numbers, when ADHD and autism spectrum diagnoses rise sharply, we could read it as medicalization. Or we could ask: What if many of these adults are accurately perceiving something about their traits and about a world that has become less tolerable for those traits?
What if the diagnostic categories are proliferating because people correctly sense that the way we’re being asked to exist has become incompatible with their unaugmented human capacities—perhaps even with how their consciousness naturally operates? What if “I have ADHD” is, in part, code for “The environment requires a continuous, focused self that can delay gratification and integrate information into coherent action—and I literally cannot construct that self anymore”?
Research on ageism suggests we avoid older people because they represent our feared future self—prejudice as displaced self-avoidance. Now imagine that scaled to every future self: the one who would have to act on climate knowledge (Offill’s Lizzie), the one who would have to integrate stated values with behavior (the librarian), the one who would have to face inherited violence (Vuong).
The mental health crisis might not be primarily about individual pathology. It might be about selves trying to emerge in an environment increasingly designed to prevent their formation.
What if the rising rates aren’t just pathology but also appropriate response? The canaries in the coal mine aren’t broken; they’re the highly sensitive ones registering the toxicity.
This is where information health becomes an intellectual freedom issue.
Paul Zurkowski coined “information literacy” fifty years ago. He also noted: information infrastructure costs. Free access isn’t free—someone must pay for the enrichment, the organization, the curation that makes information usable.
We thought we were getting free information from tech platforms. Instead, we paid with our attention, our autonomy, our capacity to construct stable selves. We paid with our children’s ability to delay gratification, with our own ability to sit with boredom, with the psychological conditions that make intellectual freedom possible in the first place.*****
Intellectual freedom isn’t just about access to information. It’s about the psychological capacity to think independently, to integrate information into coherent selfhood, to construct a future self capable of autonomous judgment.
What good is a diverse collection if our patrons—if we ourselves—have been trained into present-moment selves that cannot construct the future selves who would read, integrate, and act on what we provide?
What good is protecting books from censorship if we’ve lost the attention span to read them? What good is information literacy if we’ve been trained into selves that cannot integrate information into identity? What good is intellectual freedom if the intellect itself has been colonized?
This isn’t a failure of librarianship. It’s a recognition that the librarian’s mission—organizing information for human flourishing—now requires addressing the weaponization of information itself.
Information health is the intellectual freedom question of our times.
If Schofield gives us the language of duties and self-directed resentment, Derek Parfit gives us the ontology underneath: we often treat our future selves more like other people, which means we can betray them. But what if some of us are being trained into a present self that cannot construct a future self at all? What if the information stream is so relentless, and so engineered for moment-to-moment construction, that the very capacity for becoming has been disabled?
Who have we — and our children — been prevented from becoming?
What self is trying to emerge beneath the flood? What would we discover if we could tolerate the gap—the discipline of not reaching for the next digital input, not skittering away from the future person who would have to feel and integrate and act?
In the classic marshmallow test, children who could delay gratification—resisting one marshmallow to get two later—were found to have better life outcomes decades later (recent work shows family background and trust account for much of that effect). That capacity for delayed gratification is the muscle that allows presence, that makes becoming possible.
But the test assumed a fair game—that the second marshmallow would actually come, that patience would be rewarded. We’re adults who’ve watched that promise break. And we’re adults whose capacity to wait, to wonder, to tolerate ambiguity has been steadily eroded—often quite deliberately—for profit.
Maybe the pain we’re calling mental illness is also information—a signal of misfit between a person and their environment, not just a defect or misalignment in individual brains. Is it self-directed resentment speaking—the future self we keep betraying, trying to make itself heard?
Most of us aren’t ready for that conversation—librarianship certainly wasn’t, as my uninvitation made clear.
But some of us are ready. Ready to wonder if information health is the intellectual freedom question we’ve been avoiding. Ready to ask who we’ve been prevented from becoming. Ready to sit with the possibility that we’re systematically targeted. Ready to consider that the self-censorship that matters most is who we refuse to become.
We ought not to do to our future selves what it would be wrong to do to other people. —Derek Parfit, Reasons and Persons (1984)
Wishing you a weekend of presence and wonder!
About the Images
Historically, Crystal Cove served as a film location and seaside retreat that attracted Hollywood figures, and today its restored historic cottages, operated by the Crystal Cove Conservancy in partnership with California State Parks, host overnight guests and school programs while the surrounding state park remains a popular place for hiking, biking, surfing, tidepooling, scuba diving, and swimming.
The opening image shows a rock formation with a distinctive hollow—a natural space where something might form, surrounded by mussels that have colonized the edges, with human footprints visible in the wet sand nearby (you do have to look closely!). The closing image captures the steep descent to the beach, a path that’s difficult but navigable (by foot and bicycle!) with the surroundings and destination visible below.
Notes
Details have been altered and experiences combined to protect privacy, while preserving the dynamics that matter, especially in examples drawn from my library world.
***There are some librarians engaging with Huxleyan threats: Rory Litwin (2008) here.
*****The specific claims—that we “paid with our capacity to construct stable selves,” “our children’s ability to delay gratification,” etc.—are strong interpretations that resonate with research on attention problems, social media overload, and self-regulation. Bottom line: the direct causal chain from “platform economics” to “loss of stable selfhood” is not empirically nailed down. What research does show is that we traded money for models that monetize our attention and shape our habits, with real costs to autonomy, attention, and self-regulation. Merlici, I. A., Maftei, A., & Opariuc-Dan, C. (2025). This is too much! Social media integration and adults’ psychological distress: The mediating role of cyber and place-based information overload. Behaviour & Information Technology, 44(10), 2445–2455. https://doi.org/10.1080/0144929X.2024.2406252
ALA defines intellectual freedom as “the right of every individual to both seek and receive information from all points of view without restriction,” and emphasizes “the right to think for themselves” and “self-rule.” Those definitions implicitly presuppose some psychological capacity to form, revise, and hold independent judgments, even though they do not explicitly speak of “integrating information into coherent selfhood” like I do. My statement: “Intellectual freedom isn’t just about access to information. It’s about the psychological capacity to think independently…” extends existing definitions, drawing out what “thinking for oneself” and “self-rule” mean.
Case, D. O., Andrews, J. E., Johnson, J. D., & Allard, S. L. (2005). Avoiding versus seeking: the relationship of information seeking to avoidance, blunting, coping, dissonance, and related concepts. Journal of the Medical Library Association : JMLA, 93(3), 353–362.
Schofield, P. (2015). On the existence of duties to the self (and their significance for moral philosophy). Philosophy and Phenomenological Research XC (3), May. doi: 10.1111/phpr.12034 open access version: https://www.paulschofieldphilosophy.com/uploads/1/3/3/5/133512109/schofield_duties_to_self.pdf
Brin, D. (1999) The self-preventing prophecy. URL: https://www.davidbrin.com/nonfiction/tomorrowsworld.html | George Orwell and the self-preventing prophecy. URL: https://www.davidbrin.com/nonfiction/1984.html
About AI Literacy and libraries / librarians: https://liblime.com/2025/03/17/ai-literacy-in-the-future-of-libraries-adapting-to-a-new-information-landscape/ | https://sr.ithaka.org/blog/is-ai-literacy-the-trojan-horse-to-information-literacy/ | Archambault, Susan G.; Murph, Nicole L.; and Ramachandran, Shalini, “Fostering AI literacy in undergraduates: A ChatGPT workshop case study” (2025). Librarian Publications & Presentations. 161. https://digitalcommons.lmu.edu/librarian_pubs/161
https://qz.com/1126271/facebooks-founding-president-sean-parker-admitted-how-it-exploits-human-psychology | https://www.axios.com/2017/12/15/sean-parker-facebook-was-designed-to-exploit-human-vulnerability-1513306782
https://theconversation.com/are-labels-like-autism-and-adhd-more-constraining-than-liberating-a-clinician-argues-diagnosis-has-gone-too-far-247138 | Skafle, I., Gabarron, E., & Nordahl-Hansen, A. (2024). Social media shaping autism perception and identity. Autism : the international journal of research and practice, 28(10), 2489–2502. https://doi.org/10.1177/13623613241230454 | Sensible Medicine - www.sensible-med.com/p/the-rise-of-autism-and-adhd-diagnoses | Wu, Y., Wang, L., Tao, M., Cao, H., Yuan, H., Ye, M., Chen, X., Wang, K., & Zhu, C. (2023). Changing trends in the global burden of mental disorders from 1990 to 2019 and predicted levels in 25 years. Epidemiology and psychiatric sciences, 32, e63. https://doi.org/10.1017/S2045796023000756 | Xu, Jj., Ding, Ly., Sun, Cc. et al. The burden of mental disorders, substance use disorders, and self-harm among youths globally: findings from the 2021 Global Burden of Disease study. Transl Psychiatry 15, 346 (2025). https://doi.org/10.1038/s41398-025-03533-x | Todd Nelson explicitly characterizes ageism as “prejudice against our feared future self” or “prejudice against our future selves.” See https://spssi.onlinelibrary.wiley.com/doi/abs/10.1111/j.1540-4560.2005.00402.x | Ageism: Stereotyping and Prejudice against Older Persons edited by Todd Nelson. https://cpb-us-w2.wpmucdn.com/blogs.cofc.edu/dist/0/348/files/2010/08/Nelson_Aging.pdf
The data center boom in the desert (2025). MIT Technology Review. https://www.technologyreview.com/2025/05/20/1116287/ai-data-centers-nevada-water-reno-computing-environmental-impact/ | Billionaires want data centers everywhere, including space: Astronomers and environmental scientists are skeptical. The Verge. https://www.theverge.com/ai-artificial-intelligence/845453/space-data-centers-astronomers
Zurkowski emphasized that creating usable information infrastructure is capital-intensive, speaking of “first copy costs” and the need to recognize information as an economic resource that requires investment—“the creation of a resource people can draw on is a most capital intensive activity.”
I’ve not cited the literature references here - please ask me if you need them! Aldous Huxley’s Brave New World stayed with me far more than George Orwell’s 1984. I’m deeply influenced by Postman’s Amusing Ourselves to Death as well.


