I’m sitting at the bar of the Prospect Park Nitehawk cinema, with an Ivy League art professor I met on Hinge. We’re meeting for the first time, and we’re discussing the actor Natasha Lyonne, and whether David Lynch did (or did not) encourage her to use AI as a creative tool.
“Everyone has access to a pencil, and likewise, everyone with a phone will be using AI, if they aren’t already,” Lynch reportedly said. “It’s how you use the pencil. You see?”
I mention Miyazaki’s famous reaction to an early presentation of an AI tool: “I strongly feel that this is an insult to life itself,” he said of the demo. She talks about the dramatic drop in her students’ work quality, their disinterest, their inability to think critically. With disgust she speaks through gritted teeth about the environmental impact model training has on the globe (she didn’t mention Taylor Swift’s flight patterns).
She asks what I think, and I say I feel a kinda heavy optimism about the future, and that it’s probably not all bad. She didn’t text me back.
I’m tired of having the same two conversations about the future. It either smacks of the crypto hype cycle (the AI boom is going to make us all rich), or marks the beginning of the end (the 0.1% will hoard the wealth, our brains will atrophy, and we’ll live like pigs in slop while the planet dies — Mike Judge’s Idiocracy was actually prophecy).
Transactive Memory & The Extended Mind
Digital amnesia (aka the Google effect) is the measurable phenomenon that as search engines and cell phones became prevalent, we became generally worse at remembering specific data, like phone numbers or addresses. The millennials among us might remember the short-lived wave of panic when this effect came into focus.
What’s less obvious is that this type of outsourcing (and the associated dread of cognitive decline) isn’t new. In fact, it’s kinda lindy. 2,500 years ago, Plato’s Phaedrus documented the fear that the written word, acting as an “external memory,” might cause our internal memory to atrophy.
In the mythical dialogue, Thamus, the King of Egypt, said to Theuth:
“And now, since you are the father of writing, your affection for it has made you describe its effects as the opposite of what they really are. In fact, it will introduce forgetfulness into the soul of those who learn it: they will not practice using their memory because they will put their trust in writing, which is external and depends on signs that belong to others, instead of trying to remember from the inside, completely on their own.”
And while the use of tools to record information may have dulled our ancestors’ capacity for individual recall, the written word has vastly expanded the aggregate spectrum of human thought, crudely networking our interior worlds across thousands of years.
This effect isn’t limited to tools or technology, either. Psychologists use the term “transactive memory” to describe the differing types of recall and thinking that form among groups of humans, thereby (often unconsciously) allowing families and communities to divide up the way information is stored and used in the group. For example, one member of a family might serve informally as an “archivist”, retelling stories and memories to the group, whereas another might be better at remembering birthdays and practical details, serving a more logistical role.
That is to say, our social groups, tools, situations, and, more broadly, environment have always served as a cognitive extension, networking our individual minds, allowing them to spill into each other and share processing tasks as a group. It’s as though our brains are aware of their own biohardware limitations. They naturally seek to form rings of processing power greater than the sum of their parts, constantly optimizing for better channels to act as one integrated cognitive system.
Offloading Reason, the Exocortex & N+1 cameralism
While cell phones and digital technology taught our brains that the rote memorization of arbitrary datapoints was no longer necessary, they also made us particularly good at remembering where and how to find the information we need.
Psychologist Betsy Sparrow found that since the advent of search engines, people are less likely to remember information itself, and more likely to remember where to find it. Our brains now excel at remembering file and folder paths, search terms, keywords, Instagram handles. Our mental model of our own information space has grown more abstract, holding pointers to external data rather than the information itself. This is not a lapse in memory, but a strategic choice of the mind to optimize effort by minimizing cognitive load. An optimistic take is that this cognitive offloading frees us up to focus on creativity, critical thinking, and the assemblage of bigger-picture reasoning.
What happens when our minds start to rely on external systems not only for thumb drive-style cold storage, but rather for reasoning itself to happen “off brain”?
Studies around GPS-driven map systems and spelling autocorrect tools show clear declines in individual ability to navigate without an app, or spell words correctly. Frequent use of AI tools does in fact correlate with lowered critical thinking performance, particularly among younger people. But map pins, the order of letters in a word, and file paths are simple patterns. Reasoning is the formation of complex thought. It’s scary to imagine our brain optimizing reasoning away, too.
Julian Jaynes theorized that our ancestors experienced thought as bicameral, that is, as the voices of gods, and that our internal, self-narrating unicameral individual consciousness is a relatively new thing. Venkatesh Rao recently noted on Substack that we’re about to head into a noncameral mode of thought, by offloading the realtime processing that our inner monologue currently serves to an AI-thought stream.
We may find ourselves thinking in increasingly abstract shapes, whispering unformed fragments of an idea to the AI, allowing it to directionally autocomplete the thought itself, in a continuous, high bandwidth ping pong of kaleidoscopic synthetic idea space. My pal Scotty Weeks said that we’ll probably start thinking like mushrooms.
The Ego Death We Need
A core critique of this thinking is that an assimilation with external AI systems may result in a loss of diversity of human thought, or homogeneity of output.
But long before the internet, philosophers and religious leaders theorized a global connectivity, or collective consciousness. In 1922, Pierre Teilhard de Chardin coined the term “Noosphere”, the “thinking layer” of the earth, networking all human thought. In the internet era, his prediction looks surprisingly prescient: the network is starting to resemble something akin to a global brain.
Today we already see the collectivist project of the internet solving problems far bigger than any of us is capable of alone: open source software, Wikipedia, Linux, community fact checking, crowdfunding. Now more than ever, the shared infrastructure of the internet resembles and reflects back our shared humanity, acting less as a mute on individualism than as an amplifier of interdependence.
In Mahayana Buddhism, Indra’s Net visualizes reality as an infinite web of jewels, each unique, reflecting all others infinitely: “every part of the universe is intimately connected to every other part, and each individual element contains within it the entirety of the whole.” A fully integrated hive mind of human consciousness is not the same, but not separate, allowing a more fluid and balanced understanding of the self and its role in the species.
As Rumi said:
“You are not a drop in the ocean. You are the entire ocean in a drop.”
Networked Empathy
Today, amid global terror, state-funded genocide, modern slavery, climate collapse, nuclear weapons, drone strikes, deforestation, and rampant pollution of the natural world, our tendency to subjugate and harm our own species prevails. Driven largely by a pursuit of greater safety for oneself and one’s in-group, the capitalist realist refrain that “it is easier to imagine an end to the world than an end to capitalism” has never felt more true.
Or does it?
Eastern philosophies have long emphasized that separation is an illusion. Buddhist tradition teaches the annihilation of the ego as a path to Nirvana, denying any permanent or separate self. Enlightenment is understood as realizing the non-duality of self and other, allowing a greater state of compassion by feeling the suffering and joy of all beings.
Platforms like GoFundMe, Reddit grief communities, mutual aid networks, real-time disaster storytelling, and global uprisings in reaction to systemic inequality are demonstrative, real world examples that connectivity breeds empathy and unity. The work of Taiwan’s digital minister Audrey Tang during the COVID-19 pandemic — quickly using the internet to distribute masks to underserved communities via public, real-time APIs — strengthened solidarity by delivering key information, and illustrated the fantastic potential high bandwidth connectivity has to alleviate suffering. Tang has been quoted as saying that “When we see the same data, we care about each other.”
As such, global conflict and harm may finally give way to compassion when the “other” is no longer other, but part of oneself.
The internet we know today has been for the most part a passive harness, requiring individual humans to “crank the handle”. As the bandwidth and interconnectivity of our discrete conscious minds increase, the internet may start to resemble an active “exocortex”, functioning like a coherent super-organism, capable of wise, compassionate action on a massively global scale.
Human minds, as nodes in the system, would be brought to face their own acute suffering and harm, finally allowing us to feel it all as a personal wound, and overcome our fragmented, short-sighted self-interest. The more entangled our humanity becomes with its own digital reflection, the closer we may come to a vision of social justice enforced not by law, but arising naturally from shared experience.
Digital Gaia
When the astronaut Edgar Mitchell saw Earth from space, he experienced a profound unity with all life. He said:
“You develop an instant global consciousness, a people orientation, an intense dissatisfaction with the state of the world, and a compulsion to do something about it. From out there on the moon, international politics look so petty. You want to grab a politician by the scruff of the neck and drag him a quarter of a million miles out and say, ‘Look at that, you son of a bitch.’”
As we deploy ever more sensors and actuators in the form of satellites, drones, and Internet of Things devices, we are quickly fitting our planet with sensory organs.
Topher McDougal’s Gaiacephalos hypothesis posits that such a system might actually be understood as a planetary brain. A superintelligence that also requires a cool planet to survive will be forced to serve as caretaker, continuously balancing its own energy usage against the need to curtail climate collapse and sustain Earth’s complex ecological systems.
Further, our growing interconnectivity may allow us to feel remote environmental conditions as though they are local to us. Sensors installed in forests might allow us to feel the stress of an ecosystem in realtime, driving further empathy for the natural world, and uniting more humans around this common cause.
Such biofeedback loops may even dissolve the boundary between human and environment, forcing us to rebalance our relationship with the earth’s resources: humans, nature and machine fused into a single self-regulating organism. Our lived experience becomes increasingly profound in its interconnectedness, echoing spiritual truths long held that what we do to the earth, we do to ourselves.
“Despite the efficiency gains of unconventional computation, the energy use of information processing alone had reached a hundred quintillion (10²⁰) joules of energy per year by 2040, compared with a hundred trillion (10¹⁴) joules just twenty years prior. Expecting that energy demand will continue to increase where it can, solar-powered satellite computers were launched into orbit around Earth in huge quantities. Once stable, they combined to form a federated entity known as the Infinity Mirror: a fleet of supercomputers that encircled Earth, a distant descendant of vacuum tubes, microprocessors, GPUs, and photonic cores.
Of the 90,000 terawatts of solar energy absorbed by Earth’s surface every year, a small quantity was caught in order to power the machine, a glassy overseer suspended at all times above the Arctic running the simulations integral to planetary intelligence, bouncing the rest into space. Computation had joined many other industrial activities—mining, material processing, toxic synthesis—off-planet, leaving the biosphere to bloom once more beneath its technological shell.”
The Last Technology
Friend of the pod Diego Segura said that my heavy optimism stems from the feeling that the next five years feels like the last chance for any of us to make real money. I think he’s right about that.
The SaaS gold rush (estimated at around ~US $408 billion in 2025) was about taking anything that used to be a spreadsheet, and making it a web app. In the era of LLMs, any process that involves composing or connecting a few messy human inputs into a distilled, sophisticated output can now be an app.
“…and investors shovel billions into AI wrapper startups, desperate to capture a piece of the pie. Hiring new programmers has nearly stopped, but there’s never been a better time to be a consultant on integrating AI into your business.”
Today it’s said of the labor market that AI won’t take your job; someone who’s good with AI will take your job. But that feels only partially true, and only for the next few years. Those of us who’ve got our 10,000 hours, and have graduated to delegating and being paid to think, are likely to be OK.
I feel for the younger Gen Z and Gen Alpha contingent, faced with an impossible choice: learn skills by brute force that they can clearly see will never catch up to the sophistication of their generative counterparts, or let the slop all the way in, vibe coding outputs they barely understand, hoping the market will eventually pay them a salary for effectively pulling the slot machine handle of an LLM, and hoping for the best.
I won’t pretend humanity’s transition through this period will be fair and just for all people. I fully expect the megacorps hoping to monetize their hundred billion dollar investments will install pervasive surveillance harnesses, biased points of view, and product placements deeper into our psyche. But we’ve learned from social media companies and their dopamine-jacking strategies. We’ll need to stay hyper vigilant by developing privacy-first, airgapped, and end-to-end encrypted systems running on local, offline, and open source toolchains.
Big like religion
On a recent episode of New Models, Orit Halpern noted that AI feels fundamentally different to all other technologies that came before, in that it can be understood as the last technology: self-improving such that it will eventually develop its own ideas and concepts, all while requiring less and less human intervention to do so. Unlike SaaS, or the blockchain, or the cloud, or cell phones, or GPS, or electricity, or language, AI feels not like a building block to greater things — rather the final destination those previous technologies were building toward.
Nick Bostrom wrote in his 2014 book Superintelligence: Paths, Dangers, Strategies:
"If there is a way of guaranteeing that superior artificial intellects will never harm human beings, then such intellects will be created; if there is no way to have such a guarantee, then they will probably be created nevertheless."
Simply, we are engineering the next stage of human consciousness. The existence of organic intelligence meant that we were always likely to create synthetic intelligence on a long enough timeline. It’s here whether we like it or not. But the Fully Automated Luxury Gay Space Utopian Solar Punk future I’ve yearned for and built toward my whole career always presupposed a sophisticated global compute strategy would arrive to unlock our full human potential.
This moment feels big like religion. I won’t fall limp and complacent, rejecting AI completely and bemoaning the fall of civilization. No, I’m going to use what small agency I have as a craftsperson to shape, through the lens of my own collectivist politics, the cultural narrative around what we should build with this fantastical, bizarre machine made of all of human thought. I sincerely hope that you’ll join us in that.
garden3d is (among other things) a future facing, fully integrated product design & development team. We’re driven to create systems that understand people.
If you’re working on something ambitious, strange, or quietly revolutionary, we’d love to help. We’re looking for partners who are ready to start designing the next generation of cognitive interfaces together. Just send us an email at hello@garden3d.net
Oh — and if you liked this post, you may enjoy Oh, to Be Known By My Computer!