How Not to Let A.I. Change You

Jaron Lanier at UC Berkeley, 2025, by Eric Kotila

Two and a half years after ChatGPT, the ideological lines of AI (Artificial Intelligence) are more clearly drawn than ever, and very little mobility is allowed between them. Sometimes it feels like the only options are reckless techno-optimism or a complete refusal to engage, even intellectually. But for many of us, this dichotomy is cause for skepticism. This is how we get figures like Jaron Lanier — a pioneer of virtual reality, and an AI pragmatist with the technical experience to guide us in demanding not only more from these digital tools, but better questions about them.

Indeed, Jaron Lanier’s talk at Berkeley could not really be classified as a talk at all; it was instead structured around questions written by students in Prof. Ramona Naddaff’s course, “The Philosophies of Music: On the Uses and Abuses of Sounds.” In addition to being a computer scientist and writer, Lanier is a musician who often works in experimental genres. Early in the discussion, he described part of his interest in improvisational music as a foray into the creation of a “language of forms.” This is part of an ongoing and cross-disciplinary inquiry into what he calls “post-symbolic communication,” a perhaps dubiously achievable goal that also propels his work with virtual reality.

Another area in which Lanier’s musical commitments and preoccupations dovetail with those he holds as a critic of generative AI is his belief that the “origin of art matters.” All art is encased in a social history and an array of communities, in which those making it share a context. The artifact is secondary to the experience of making. Under this more temporally and spatially dilated understanding of art, AI is incapable of engaging in the artistic process. Operating according to this definition, artistic experience is still what Lanier describes as “rare, positional, and distinct.”

While Lanier is certainly not the only person to voice this view of how art is defined, and this understanding may seem a bit obvious to those who create or study art themselves, it serves as a reminder of the way in which the ubiquity of generative AI is poised to erode this definition. AI makes us beholden to the artifact, which helps explain why so much of the discourse around what AI can do is about how it can fool us, or how it can convince us of its authenticity. “Fooling people is not that hard,” Lanier said. “Especially if they kind of want to be fooled.”

Because AI art has no author and draws from the same database each time, everything it outputs is, in a sense, continuous. Everything “becomes mush.” Here too, this diagnostic is already embedded in the way we talk about AI; it's commonplace now to use phrases like “AI slop” in reference to the uniformly garish aesthetics and discardability of generated images. There is, I think, less of a tendency to rebuke AI-generated writing — or at least there isn’t yet a pithy and convenient way to dismiss it. I’d like to imagine a reality where this reflexive dismissal of AI writing is commonplace, but I’m not sure that can exist in a world where writing isn’t conceived of as an essential way people become productive members of society — a world where we forget that “writing is for separating yourself from the mush continuum.”

“You will conceive of yourself under modernity as a consumer,” Lanier said. This of course means nothing is expected of you either — a truth which complicates the logic that AI can be a tool for creativity. There is nothing that resembles a social contract for digital life, which is ironic, considering the torrent of terms and conditions we sign that selectively resemble contracts. These “contracts” seldom benefit us as users, and they exist as a legal formality for companies to steal our data. We know this and we’ve decided we don’t care. Lanier’s explanation is that social media and other online platforms provide their “services” for free: we accept exploitative terms and conditions because we conduct a sort of automatic cost-benefit analysis in which immediate and continual access to content is deemed more valuable than privacy.

We already consider this transaction to be worth it, in part because the trade-offs are not immediately evident to us in our interactions with online platforms. Often the affective experience overwhelms our capacity to be rationally aware of the invasion of privacy underway. Lanier’s recent writing on AI makes clear that this invasion will likely go to lengths we can’t yet fully understand and aren’t prepared for. In “Your AI Lover Will Change You,” published in the New Yorker a couple of weeks before his appearance at Berkeley, he writes about what he calls “agentic” AI: “In this case, ‘agentic’ will likely mean two extensions to familiar chatbots: one remembers everything that is possible to know about you from the perspective of your devices; the other then takes online action, sometimes preemptively. Agents will be more autonomous and less dependent on your constant guidance.”

These imminent transformations to chatbots will also have the effect of making them seem more like people, and thus suitable surrogates for human relationships, Lanier asserts. It seems that, short of selling your data to the government so it can discipline you without due process, AI lovers are about the most insidious product of an inadequately regulated information economy. The rhetoric of AI lovers espoused by people in tech is that making relationships easier and more accessible will make people less lonely and more socially solvent. It's pretty easy to poke holes in this argument, as Lanier does, but the way he goes about it is especially compelling. Citing the example of Leonard Cohen’s experience living in a monastery, where he was constantly in the presence of others, Lanier claims that this notion of technologically mediated “human improvement” is wrong because it presupposes that “pain is probably bad.” He writes, “Think of the many historical instances of artificially easy companionship for powerful men, all the geisha and the courtesans. Did those societies become more humane or more resilient?”

It is this sentiment — that human improvement is sourced in emotional strife, and in the agility and resolve required to surmount it — that I think resonates nicely with Lanier’s perspective on writing. “Writing is hard,” he said at one point. The remark was sort of off-the-cuff, part of a broader point he was making about mush. Still, it is always good to be reminded that some things that are difficult, and that require us to be socially immersed, should remain that way.

We often say that writers are lonely, or that there is something intrinsically isolating about the process of writing (some might say this makes writers ideal customers for AI relationship chatbots). I understand why people feel this way, since it can be incredibly disorienting to sit at a desk with only your own thoughts and feel as if what you are committing to the page might be neurotically introspective — to feel like maybe you’re getting everything entirely wrong. However, I think that feeling isn’t actually loneliness, but the weight of trying to engage with people, and the systems they construct, in a more meaningful way. It is interesting that the best tools we have for existing as part of a collective and as part of history are the ones that are hardest to navigate, and the ones we are told we would be better off outsourcing to machines.