The need for theoretical reflection in AI practice
The rapid development of generative AI has created an unprecedented situation whose broader implications unfold in real time, often outpacing our ability to comprehend their significance. This dynamic has sparked important conversations among thoughtful AI practitioners and researchers. Hamel Husain’s work on AI evaluation methodologies and the gap between technical metrics and real-world AI product development, along with similar investigations by experts grappling with the distance between technical implementation and real-world impact, points toward a crucial need: developing adequate theoretical frameworks for understanding what we are building and how it relates to human experience.
Most AI practitioners come from engineering backgrounds, trained in problem-solving methodologies that emphasize solutions, optimization, and measurable outcomes. Epistemology, by contrast, is a fundamentally philosophical endeavor—one that excels not at providing answers but at generating better questions, creating a shared discourse that opens new possibilities for thinking beyond established frameworks.
In this blog, readers will find no definitive answers about what AI systems “really” are or how they “should” be developed. Instead, they will encounter frameworks for asking different questions – questions that might guide more thoughtful practice without foreclosing the genuine uncertainties and ongoing discoveries that characterize this rapidly evolving field. The goal is to expand the space of productive inquiry rather than to resolve fundamental tensions that may, in fact, be constitutive of human-AI interaction itself. This blog is the place to address fundamental questions like:
- What constitutes knowledge in human-AI systems?
- How do human-AI interactions create emergent forms of understanding?
- What are the epistemological implications of systems that process information probabilistically yet generate semantically meaningful outputs?
Why epistemology matters for AI
Traditional approaches to AI evaluation assume a clear separation between human observers and AI systems as objects of study. But this assumption becomes problematic when AI systems participate in human meaning-making processes, when they mediate human knowledge construction, and when the very act of observing them changes both the observer and the observed. Every claim about AI capabilities emerges from human observation conducted through language and conceptual frameworks that shape what can be seen and how it can be described. We never encounter “pure” AI systems but always Human-AI systems where human observers, embedded in linguistic and cultural contexts, attempt to make sense of artificial behaviors through distinctly human frameworks.
This situation calls for what we might term an “epistemology of AI”—a systematic investigation into how knowledge about artificial intelligence is constructed, validated, and transformed through the recursive interactions between human observers and AI systems. Such an epistemology would examine not just what AI systems can do, but how our ways of observing and describing them participate in constructing the very phenomena we seek to understand. Second-order cybernetics, with its focus on observing systems and the recursive relationship between observer and observed, provides particularly valuable tools for this investigation. Rather than treating the observer’s embeddedness as a limitation to be overcome, second-order cybernetics examines how this embeddedness is constitutive of the phenomena under study.
Method and approach
Each post in this series applies second-order cybernetic concepts to concrete questions in AI development and deployment. Rather than abstract theoretical speculation, these explorations aim to develop conceptual tools that can inform more thoughtful AI practice. The goal is not to resolve the tensions between technical and experiential approaches to AI, but to develop frameworks that can accommodate both dimensions without reducing one to the other. The blog posts assume familiarity with AI development practices while introducing cybernetic concepts as they become relevant to specific questions. Each post can be read independently, though they build cumulatively toward a more comprehensive understanding of Human-AI systems as co-evolutionary phenomena that resist simple categorization as either purely technical or purely social. By examining how our ways of observing AI systems participate in constructing what we understand about them, I hope to develop more reflexive and nuanced approaches to AI development—approaches that honor both the genuine novelty of artificial intelligence and the irreducible role of human observation in making sense of that novelty.
Future explorations
In future blog posts, I will apply second-order cybernetic principles to contemporary questions in AI development and deployment. Each post examines a specific aspect of Human-AI interaction through the lens of second-order cybernetic concepts, such as recursive observation, structural coupling, and the construction of knowledge through distinction-making. The following paragraphs illustrate some of the avenues that future blog posts will cover in one way or another.
From Binary to Semantic: The Structural Coupling Between Technical and Lifeworld Domains
The fundamental tension between AI’s probabilistic processing and human semantic understanding creates what we might call an “epistemological gap.” While AI systems operate through statistical correlations and gradient optimization, humans interact with their outputs as meaningful language that addresses practical concerns. This post explores how Maturana’s concept of structural coupling helps us understand how these different domains of operation can coordinate without requiring shared ontological foundations. We examine how technical capabilities become socially meaningful through embedding in human practices, and how this embedding transforms both technical systems and human ways of relating to computational processes.
The Observer’s Paradox: How AI Systems Create Their Own Reality Through Language
When AI systems describe their own reasoning processes, they create recursive loops where artificial systems use human language to explain how they process human language. This creates a peculiar form of self-reference that challenges traditional distinctions between subject and object, observer and observed. Drawing on von Foerster’s insights into self-referential systems, this post examines how AI systems construct stable meanings through recursive linguistic operations, creating what we might call “eigenvalues of meaning” that persist through self-referential communication while revealing the strange circularity of artificial cognition.
The Cybernetic Loop: Feedback Mechanisms in Human-AI Interaction Systems
Human-AI interaction has evolved beyond simple input-output exchanges into complex feedback systems where both participants continuously adapt. When millions of users learn to craft better prompts based on AI responses, while AI systems adapt to emerging patterns in human prompting, we witness the emergence of what Bateson called “cybernetic circuits”—self-modifying systems where observer and observed mutually reshape each other. This post explores how prompt engineering creates circular causality patterns that fundamentally alter both human questioning strategies and AI behavioral patterns, examining the co-evolutionary dynamics that emerge from these recursive interactions.
The Complexity Reduction Dilemma: How AI Systems Simplify While Complexifying
In our quest to manage information overload, we’ve created digital systems that paradoxically multiply the very complexity they were designed to eliminate. Each AI-generated simplification births new layers of human-machine entanglement, creating what we might call “complexity displacement” rather than true complexity reduction. Drawing from Luhmann’s understanding of complexity reduction as fundamental to system operation and von Foerster’s insights into order-from-noise, this post examines how AI systems embody the cybernetic paradox of simultaneously reducing environmental complexity while generating new forms of systemic complexity in human-machine interaction.
The Epistemological Status of AI Knowledge: Between Computation and Understanding
When sophisticated AI systems solve mathematical proofs or analyze literature, what kind of „knowing“ occurs? Are these systems merely processing symbols, or do they exhibit genuine understanding? This post examines how AI systems operate in an epistemological liminal space where computational processes generate emergent knowledge-like phenomena that challenge traditional binary distinctions between knowing and not-knowing. Through second-order cybernetics, we explore how AI knowledge emerges through recursive observation and distinction-making rather than representation, suggesting that knowledge itself might be constructed through interaction rather than discovered through analysis.
The Irreducible Subjectivity of Human-AI Encounter: What Technical Accounts Cannot Contain
Technical descriptions of AI systems, however precise, cannot capture the subjective experience of meaningful encounter with artificial intelligence. The amazement of watching semantic outputs emerge from probabilistic calculations points to an irreducible gap between objective computational processes and the lived experience of Human-AI interaction. This post explores why this gap matters, examining how the phenomenological dimension of AI encounter is not a bias to be eliminated but the very medium through which computational processes become meaningful for human life.