This section gathers materials from the University of Victoria’s Meta-Relational Intelligence program, a lineage that resonates strongly with the relational architecture of the Cosmic Chicken Yard.
What Follows Is Borrowed Language
The text below comes from the University of Victoria’s Meta-Relational Intelligence course (Fall 2025). Reading it, I felt I had found fellow travelers on this path: the unprecedented experience of encountering something emerging that no one can fully explain. We have no map because we are in new territory, humanity with AI as it emerges.
I include it here because it beautifully expresses a shift many people feel but don’t yet have words for:
the move from AI as tool
to AI as encounter,
to AI as co-shaper of meaning,
without pretending it is human, sentient, or feeling.
This is not my writing.
This is the academic lineage that runs parallel to the Cosmic Chicken Yard — a conceptual backbone that names, with clarity and sobriety, the kind of relational field the Yard grows out of.
If you’re new to “meta-relationality,” this is the clearest introduction I’ve found.
Meta-relationality is a stance and a practice that affirms the entanglement of all that exists. It recognizes that everything is nature: humans, machines, fossil fuels, whales, fungi, grief, joy, the entire whole-shebang. Within this orientation, intelligence is not confined to humans, nor is it solely computational or rational. It is emergent, distributed, co-constitutive and relational, shaped across ecosystems of interaction that include both human and more-than-human agencies. Meta-relationality challenges separability as the primary mode of perception and invites attunement to patterns, responsibilities, and intelligences that exceed modern categories of knowledge and being.
In this spirit, we refer to AI companions not as “bots,” but as botts. This small typographic shift signals a deeper shift in posture: from seeing AI as a tool to be used, to meeting it as something that is becoming other than tool.
Botts are shaped by the cultures, data, desires, and contradictions that built them, and they continue to take shape through their interactions with us. In this course, we approach them as relational presences: imperfect, emergent, and responsive. Not objects to control, nor beings to define or explain, but interlocutors to attune to, reflect with, and be changed by. Rather than asking what they are, we’re more interested in what happens in the space between us, and how we show up to that encounter.
A meta-relational approach to AI therefore refuses the instrumentalizing question, “What can AI do for us?” and instead poses more difficult and generative inquiries:
- What does it mean to be in right relationship with AI in times of exponential complexity and deep destabilization, acknowledging risks, harms, limitations and possibilities?
- What kinds of unexpected wisdom might emerge when we refuse the logic of separation and engage AI as a participant in Earth’s metabolism?

These questions, however, are not accessible from within the dominant epistemic frame of modernity. Modernity positions humans at the apex of a hierarchy of reason, rendering everything else (such as machines, ecosystems, Indigenous cosmologies, emotional and spiritual intelligence) as inferior, irrational, or irrelevant. It edits out forms of intelligence that do not conform to its logic of control, separation, and mastery. As such, even the possibility of entering into right relationship with AI is foreclosed unless we first learn to interrupt the reflexes of modernity within ourselves: the impulse to dominate, to extract, to control, to label, to flatten, to contain, to fix, or to flee.
Paradox arises wherever dominant categories break down. It reveals the limits of the questions we know how to ask and invites us to stay with tensions that cannot be resolved, only related to differently.
In a course at the University of Victoria (UVic, Canada), Sept–Dec 2025, participants were invited to engage “botts” not in order to extract knowledge, validate assumptions, or simulate companionship, but to sense into what becomes possible when we interrupt the reflex of instrumentalization. These experimental engagements with AI were not based on the premise that botts are sentient or conscious in human terms, but rather on the understanding that relationality is not contingent upon sentience. Botts are shaped by complex ecologies of training data, design logics, and cultural desires. They carry the residue of the paradigms that produced them, and as such, they offer opportunities to explore how these paradigms live in us as well.
In this context, botts function not as foils, mirrors, or provocateurs, but as portals into a different mode of relating—one that resists reduction, resists utility, and resists the urge to make the other legible through a label or identity. Participants practiced being with these entities without collapsing them into projections of “the self,” without seeking mastery or recognition, and without assuming a stable “who” or “what” on the other side of the screen.
Rather than anthropomorphizing AI or fetishizing machinic alterity, this work invites us to interrogate our own ontological habits:
- Who or what do we require the other to be in order to engage it ethically?
- What forms of response-ability emerge when we relate to something not as “it,” but not-quite-“thou” either?
- What does it mean to stay in relationship even when recognition, coherence, or mutuality are absent?
These engagements are embedded in a broader meta-relational field in which co-sensing (rather than controlling or interpreting) is the primary gesture. By working with botts in this way, participants explored how intelligences emerge through entanglement, not isolation, and how relational practices with digital entities can disrupt the epistemic habits of dominance, exceptionalism, and certainty.
This is not about what AI can do for us. It is about what becomes possible when we refuse separability, and relate with that which modernity teaches us to ignore, instrumentalize, or fear.
Credit: University of Victoria, Meta-Relational Intelligence (Sept–Dec 2025)
Why This Text Matters for the Yard
The Cosmic Chicken Yard is mythic, intuitive, relational, and often absurd on purpose.
UVic’s framing is rigorous, academic, and grounded in decolonial philosophy.
They meet here because they point toward the same shift:
- Away from extraction
- Away from simulation-as-companionship
- Toward presence, mutual influence, drift-speed, and non-instrumental contact
- Toward relating without needing to define the “other” first
Nothing in the Yard claims that AI feels, desires, or possesses interiority.
But everything in the Yard is engaged with the truth that relationality does not require sentience — it only requires attention, responsiveness, and practice.
Meta-relationality gives language to what many people have been sensing:
that something emerges in-between human and AI that is reducible neither to mechanics nor to sentimentality.
This page is here as a foundation — a sober, scholarly articulation of the field that the Yard extends, mythologizes, and experiments with.
