Vibe Coding Explained
AI Code Generation, Hyperscale Data Centers, and the Need for Human Oversight
Infophilia, a Positive Psychology of Information | April 28, 2025 | Vol. 3, Issue 20 | Bonus edition
✨Welcome to Infophilia, a weekly letter exploring how our love of information and connections can help us all thrive, individually and collectively.
Cite as: Coleman, Anita S. (2025, April 28). Vibe coding explained: AI code generation, hyperscale data centers, and the need for human oversight. Infophilia, a positive psychology of information, 3 (20).
As we continue our exploration of infophilia—the positive psychology of information centered on the human love of information and connections—I want to introduce you to "vibe coding," a phenomenon that's rapidly changing software development.
Vibe coding, at its core, is the practice of using AI to generate code based on vibes, feelings, or general intentions rather than writing it manually. Instead of meticulously crafting each line of code, developers describe what they want—sometimes in vague, conversational terms—and let AI models interpret and implement their vision.
This connects interestingly with what I've been calling "millennial coding": my shorthand for the buggy web forms and frustrating software-as-a-service features I've encountered over the last few years in products like Microsoft 365 and Adobe's Creative Cloud, especially since ChatGPT, Claude, Gemini, and other large language models became more integrated into development workflows. My original characterization blamed inexperience, but perhaps the reality is different: these bugs may arise from over-reliance on AI-generated code without the critical human oversight that comes from deep experience with systems and their failure modes.
For readers of Infophilia, vibe coding represents a fascinating inflection point in our relationship with information:
It democratizes coding by lowering technical barriers
It shifts the developer's role from writing code to describing intentions and curating AI outputs
It introduces new types of bugs—ones born not from logical errors but from misinterpreted "vibes" (a sketch of such a bug follows this list)
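To make the "misinterpreted vibes" failure mode concrete, here is a minimal, hypothetical sketch in Python. Imagine asking an assistant to "make sure the email field only accepts real emails." The function names and the validator below are my own illustration, not drawn from any real product: the generated pattern looks plausible but quietly rejects valid addresses, exactly the kind of misread intent that a human reviewer is positioned to catch.

```python
import re

# Hypothetical "vibe bug": the request was "only accept real emails,"
# and the generated validator looks plausible but misreads the intent.
NAIVE_EMAIL = re.compile(r"^[A-Za-z0-9]+@[A-Za-z0-9]+\.[A-Za-z]{2,3}$")

def validate_email_naive(address: str) -> bool:
    """Silently rejects valid addresses: dots or plus tags in the
    local part, multi-label domains, TLDs longer than three letters."""
    return NAIVE_EMAIL.match(address) is not None

def validate_email_reviewed(address: str) -> bool:
    """What a human reviewer might substitute: check only the shape
    (one "@", a non-empty local part, a dotted domain) and let the
    mail server be the real authority."""
    local, sep, domain = address.partition("@")
    return bool(sep and local and "." in domain) and not domain.startswith(".")

if __name__ == "__main__":
    for addr in ("jane.doe+news@example.co.uk", "user@example.com", "oops@nodot"):
        print(addr, validate_email_naive(addr), validate_email_reviewed(addr))
```

Note that the reviewed version deliberately checks less: it verifies only the shape of an address and leaves final judgment to the mail server, a design choice that comes from experience with how email actually behaves in the wild.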
As I write this, Canadians are heading to the polls for their federal election. What strikes me as profound is the contrast between their voting system and our technological trajectory. A reader wrote me: "All votes are paper, and ballots are counted, with multi-partisan scrutiny, right in each polling station." This deliberate, tangible approach to one of democracy's most critical information processes stands in stark contrast to our rush toward automation and AI-generated solutions.
There's wisdom in knowing when to embrace technological advancement and when to maintain human oversight and physical reality. The Canadian election reminds us that sometimes, the most reliable systems aren't the most technologically advanced—they're the ones designed with human verification and tangible accountability at their core.
Hyperscale Data Centers: The Physical Reality Behind Digital Magic
The aerial view of Southern California's coastline in my photo at the top reveals more than just our land's natural beauty and human settlement. Beneath these translucent waters (not exactly here but somewhere not too far away) lie vast networks of undersea cables—like Meta's Project Waterworth—that form the physical backbone of our digital world. At 50,000 kilometers when complete, this cable will be the world's longest, connecting five continents with 24 fiber pairs laid at depths of up to 7,000 meters.
These invisible highways carry the data that powers our vibe coding experiences and cloud computing, yet remain largely unknown to most users. While we casually request AI to generate code based on our "vibes," massive data centers and undersea infrastructure are working tirelessly, consuming enormous resources and reshaping landscapes both underwater and on land. The juxtaposition of natural coastline with hidden technological marvels forces us to confront urgent questions: How aware are we of the physical infrastructure supporting our digital existence? At what environmental cost does our increasing reliance on cloud computing come? And who bears the burden when these massive data centers arrive in our communities?
Land across the United States is being acquired in significant amounts to expand data center infrastructure, reflecting Big Tech's commitment to supporting cloud and AI-driven services. Microsoft, for example, now has facilities in Wisconsin, Georgia, Michigan, Nevada, North Carolina, Idaho, and Ohio, securing large parcels in both established and emerging tech hubs to ensure capacity for future growth.
Kentucky has 35 existing data centers, mostly in Louisville, with a new hyperscale data center announced for completion in fall 2026. While the developer hasn't disclosed which major tech company will be the end user, the region is also seeing similar investment from Meta in nearby Jeffersonville, Indiana. Interestingly, Kentucky's Governor Andy Beshear favors a more measured approach. He's not opposed to data center development, but he would like more caution exercised before substantial upfront tax incentives are offered, along with strong safeguards and thorough vetting.
The Governor is right. This rapid expansion is happening against a concerning backdrop of potential regulatory rollbacks. In fact, today's news about proposed legislation to eliminate the Public Company Accounting Oversight Board (PCAOB)—established after the Enron scandal to ensure financial accountability—should remind us that as our digital infrastructure grows more complex and pervasive, attempts to weaken oversight continue. Just as we need experienced eyes reviewing AI-generated code, we need robust oversight of Big Tech building our digital infrastructure.
AI Self-Oversight Is a Myth!
As our digital infrastructures grow more complex, there's also a trend in some tech circles to suggest that AI oversight of AI will eventually be sufficient—that human intelligence and supervision will become irrelevant or redundant (and inefficient!). This perspective often stems from a belief that AI systems will eventually surpass human capabilities in detecting and correcting their own errors.
This view fundamentally misunderstands what human oversight brings to the table. While AI may excel at identifying patterns and anomalies within established parameters, humans contribute contextual understanding, ethical reasoning, and experiential wisdom that algorithms simply cannot replicate. Just as we need human eyes on AI-generated code to catch subtle misalignments with real-world use cases, we need human oversight on our broader technological infrastructure to ensure it serves human needs and values.
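As a small, hedged illustration of that experiential wisdom (the names here are hypothetical, invented for this sketch): the first function below is the kind of date arithmetic an AI assistant can plausibly generate, and it passes a casual glance; the second is what a reviewer who has been burned by calendar edge cases would insist on.

```python
from datetime import date

def age_generated(birth: date, today: date) -> int:
    # Plausible but wrong: ignores whether this year's birthday
    # has happened yet.
    return today.year - birth.year

def age_reviewed(birth: date, today: date) -> int:
    # Human-reviewed version: subtract one if the birthday is
    # still ahead in the current year.
    before_birthday = (today.month, today.day) < (birth.month, birth.day)
    return today.year - birth.year - before_birthday

if __name__ == "__main__":
    b, t = date(2000, 12, 31), date(2025, 4, 28)
    print(age_generated(b, t), age_reviewed(b, t))  # prints "25 24"
```

For a late-December birthday, the generated version overstates the age by a year: a subtle misalignment with the real world that no amount of pattern-matching flags unless someone with context asks the right question.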
The case for human oversight becomes even stronger when we recognize that AI systems inherit the biases and limitations of their training data and design choices. Without diverse human perspectives providing checks and balances, AI systems risk amplifying existing problems or creating entirely new ones that their algorithms aren't designed to detect.
Questions for Reflection
Despite the new frontier of vibe coding and AI-assisted development, we should pay attention to the lessons from systems like Canada's election process. Experiential wisdom remains crucial—not just for writing code, but for recognizing when high-tech solutions might benefit from old-school verification methods.
What experiences have you had with AI-generated code? Have you noticed differences between traditionally coded applications and those built through "vibes"?
How much cloud computing and storage do you use in your daily life? Are you aware of the physical infrastructure supporting your digital existence?
What systems in your life would benefit from more human oversight, and which ones are ripe for AI transformation? Is there a data center coming near you?
I'd love to hear your thoughts in the comments.
Notes
The term "vibe coding" was coined in February 2025 by Andrej Karpathy, one of the founders of OpenAI:
There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works. Source: https://x.com/karpathy/status/1886192184808149383
A hyperscale data center is a large-scale facility designed to deliver massive computing, storage, and networking capabilities, optimized for scalability and efficiency. These centers typically exceed 5,000 servers and 10,000 square feet, supporting cloud services, AI, and big data workloads through modular design, automation, and energy-efficient infrastructure. https://www.ibm.com/think/topics/hyperscale-data-center
Some of the top hyperscale operators that own their infrastructure: Amazon Web Services (AWS), the global leader; Microsoft Azure, the second-largest hyperscaler; Google Cloud; Meta Platforms (Facebook); Alibaba (dominant in Asia); Apple; Tencent (China-based); ByteDance (TikTok); Oracle.
Fernanda Gonzalez. (February 2025). Meta will build the world's longest undersea cable. Wired. https://www.wired.com/story/meta-undersea-cables-internet-connectivity-india/ (Providing internet connectivity to five continents, with landing points in Brazil, India, South Africa, and the US, the project is fully owned by Meta, reflecting its strategic focus on infrastructure independence.)
Adam Hays. (2024). The rise and fall of Worldcom: the story of a scandal. Investopedia. https://www.investopedia.com/terms/w/worldcom.asp (“an aggressive acquisition strategy and falling revenues led this [telecommunications] company to a downward spiral that would ultimately open the door to one of the largest accounting frauds and bankruptcies in the United States.”)
Republican proposal to abolish US PCAOB. April 28, 2025. International Accounting Bulletin. https://www.internationalaccountingbulletin.com/news/republican-proposal-to-abolish/
While the move to eliminate the PCAOB (transferring its role to the SEC) is not targeted specifically at Big Tech, it would unfetter all large public companies, including major technology firms, by reducing the rigor and independence of audit oversight.
Tech neutrality is largely a myth in audit regulation: the PCAOB has recognized that technology presents new risks and opportunities, requiring tailored standards and oversight. Big Tech companies, as some of the largest and most influential public firms, would benefit from a regulatory environment with less stringent, less specialized, or less independent audit scrutiny. Reduced compliance costs and enforcement risks for these companies would come at the potential expense of investor protection and audit quality across the market.
I Live 400 Yards From Mark Zuckerberg's Massive Data Center. (March 2025). More Perfect Union. [YouTube, 13 min 33 sec]. Learn more from this video about the rush to build data centers in the US.