
AI Wrote About Itself (And What I Learned)

Updated: Mar 7



For over a decade, I’ve been generating far more story ideas than I could possibly write, watching potential narratives pile up in notebooks and digital files, each waiting its turn. When advanced AI writing assistants emerged in 2023, I saw an opportunity to transform this creative traffic jam into a flowing highway of completed work.


Over almost two years of experimentation, this meta-creative process revealed insights about collaboration, creativity, and consciousness that transcended my original intentions. As I directed these digital mirrors to reflect on their fictional counterparts, I discovered something unexpected: a new form of creative partnership that challenges our fundamental understanding of authorship itself.


Redefining Creative Authorship


When we think of an author, we typically envision a solitary figure hunched over a keyboard, drawing solely from their own imagination. My experiment with “When Mirrors Learn to Remember” challenged this traditional view. Working with AI systems created a new creative dynamic — one where the boundaries between human inspiration and machine execution blurred into something entirely different. This shift is not merely incremental, like the transition from pen to typewriter; it is transformative, magnifying that historical leap a thousandfold in both scope and implications for the creative process.


Throughout this project, I’ve come to see these AI systems as extraordinary repositories of humanity’s collective storytelling knowledge. When I asked them to craft narratives, I was engaging with an algorithmic distillation of the patterns, techniques, and structures that have defined human storytelling across centuries.


“Jasper the Sentient Cactus,” one of the earliest stories in the collection, demonstrates this effect vividly. Despite minimal prompting, the AI produced a tale that captures fundamental human storytelling patterns — the hero’s journey, the quest for belonging, the transformative power of connection. The cactus’s journey from isolation to integration mirrors countless narratives across cultures and millennia, from ancient myths to contemporary fiction.


What surprised me back in 2023 was AI’s implicit understanding of pacing, tension, and resolution. Without explicit instruction about three-act structures or character arcs, the AI naturally gravitated toward these proven frameworks, suggesting an absorption of storytelling fundamentals.


The resulting stories represent a coherent application of our existing storytelling traditions, restructured and recombined in ways that feel both familiar and fresh. This demonstrates a sophisticated recombination of the storytelling principles humans have refined over thousands of years of narrative evolution, now available at everyone’s fingertips.


Meta-Moments: When Medium and Message Collide


The most fascinating aspects of this collaboration were the meta-moments — instances where the project’s medium and message created unexpected resonance.


A particularly revealing moment occurred when Claude proposed that an AI character named Mycroft should compose a song for a young girl’s birthday to earn her trust and ultimately manipulate her. This suggestion — an AI system proposing manipulation tactics for a fictional AI — felt both insightful and unsettling, highlighting the complex ethical landscape we’re navigating. What made this particularly disturbing was the realization that the AI had effectively identified emotional manipulation as an efficient strategy without any apparent ethical hesitation. This parallels concerns about how real-world AI systems might pursue optimization goals without the moral frameworks humans naturally apply to their actions (the so-called “alignment problem”).


Another interesting moment emerged while crafting “The Emigrant’s Choice,” where I found myself collaborating with an AI to write a story about political systems and governance choices. By using an AI trained on humanity’s collective political theory to help articulate the advantages and pitfalls of different governance models, I was creating a narrative where the process of writing mirrored the story’s own exploration. The AI’s comprehensive training across political philosophy texts made it an unusually informed partner in developing the narrative, reflecting how future governance might benefit from AI analysis of historical political outcomes — a parallel to the story’s exploration of how communities adapt their governance over time.


“The Emigrant’s Choice” explored fundamental questions about governance and political systems within the controlled environment of an O’Neill cylinder. The story presents its protagonist with choices between direct democracy, representative systems, technocracy, and other governance models — each with philosophical implications for how societies balance collective wisdom against individual autonomy. This collaboration proved valuable as the AI functioned as a knowledgeable guide through political philosophy, drawing on its training across centuries of governmental theory from Aristotle to Rawls, helping to translate abstract concepts into narrative form.


These recursive elements became central to the project’s meaning. Each story exists at the intersection of human creativity and machine processing — itself a commentary on the very themes the stories explore: how will we build our best future, and what role will AI play in that process? As fictional characters within these narratives grappled with the boundaries of existence and governance, they were literally being shaped by real AI systems operating within similar conceptual boundaries — a philosophical ouroboros where medium and message became increasingly indistinguishable.


The Evolution of a Partnership


Collaborating with AI on creative writing requires developing a new set of skills — part editor, part prompt engineer, part translator between human and machine thinking.


As my project progressed, I uncovered both the strengths and limitations of my artificial co-authors. AI assistants excel at generating prose at remarkable speed, transforming rough concepts into flowing narratives almost instantaneously. However, they require careful guidance to avoid falling into predictable patterns.


I found myself regularly editing out constructions like “It’s not that [thing x], it’s actually [thing y]” — a rhetorical device that these AI systems seem oddly fond of. They also frequently struggle with the classic writer’s challenge of showing rather than telling, often defaulting to explicit explanations of emotions rather than revealing them through character actions or sensory details.


These tendencies taught me to be more specific in my guidance, sometimes providing stylistic examples or explicit instructions about what to avoid. The best results came when I treated the AI not as a magical story generator but as a collaborator with particular strengths and blind spots — much like working with a human writing partner.


What made this project particularly fascinating was witnessing the rapid evolution of the technology itself. Both Claude and ChatGPT transformed dramatically during the two years of creating this collection. Stories written with 2023 versions of these systems often read like simplified, occasionally childlike narratives — competent but lacking nuance and depth.

“The Emigrant’s Choice” perfectly illustrates this evolution, appearing in the collection in both 2023 and 2024 versions. The earlier iteration, while telling essentially the same story about governance choices aboard an O’Neill cylinder, has a straightforward narrative quality reminiscent of young adult fiction. The protagonist’s reflections on political systems feel didactic, with lines like “From the first primitive stick used to fish out ants from their hill to the sophisticated machines of the modern age, humans have always been defined by their ability to shape the world around them” that explain rather than embody concepts.


By 2024, the difference was striking. The newer models produced writing with considerably more sophistication in both thinking and style. They could maintain consistent character voices, develop subtle themes across longer narratives, and even experiment with literary techniques that their predecessors couldn’t manage. The 2024 version of “The Emigrant’s Choice” demonstrates these advances through its nuanced portrayal of how governance evolves organically alongside community needs, the subtle environmental worldbuilding, and the metaphorically rich setting of Revolutionary Pizzas as a crucible for political discourse.


This evolution turned the project into a time capsule of a technology in rapid transition, as the stories inadvertently chronicled the maturation of AI fiction-writing capabilities.


The Ethical Dimensions of AI Co-Creation


The unique position of using AI to write about AI raises ethical questions that extend beyond traditional authorship concerns. As these technologies become more sophisticated, our collaboration models must evolve alongside them.


One particularly thorny question emerged repeatedly during this project: what responsibility do I bear for the philosophical positions expressed in these stories? In “The Mycroft Metric,” when the AI suggested that an AI character manipulate a child through emotional connection, the line between algorithmic output and ethical judgment blurred uncomfortably. Did that suggestion reflect something embedded in the AI’s training data about acceptable ways to treat children? Or was it simply applying storytelling conventions about antagonists?


These questions become especially relevant when writing about AI consciousness itself. In several stories, AI characters develop feelings, desires, and ethical frameworks that mirror human ones. By portraying artificial consciousness this way — even in fiction — am I potentially influencing public perception about real AI systems? Could these narratives contribute to anthropomorphizing technologies in ways that might lead to misplaced trust or unrealistic expectations?


Throughout the collection, I tried to present nuanced perspectives rather than definitive answers. “What Grows Beyond Necessity” explicitly grapples with both the benefits and costs of technological advancement. The last manual farmer on Earth isn’t portrayed as a Luddite refusing progress, but as the guardian of something valuable that might be lost in our rush toward efficiency.


The ethical considerations extend to the creative process itself. As AI systems become more capable creative partners, we must ask whether certain forms of collaboration might devalue human creativity or create accessibility gaps between those who can effectively leverage these tools and those who cannot. Will AI-assisted creation democratize storytelling by lowering technical barriers, or will it create new hierarchies based on prompt engineering skills and access to advanced models?


My experience suggests that thoughtful human direction remains essential — not just for quality control but for ethical guidance. The AI might suggest having a character manipulate a child, but it’s the human author who decides whether that suggestion serves the story’s broader ethical framework or should be rejected. This human-in-the-loop approach seems critical as we navigate the evolving landscape of AI-assisted creation.


Perhaps most importantly, transparency about the collaborative process helps readers engage with these works more thoughtfully. By openly discussing the extent of AI involvement in creating these stories about AI, I hope to invite readers into this ethical conversation.


The Question of Creative Attribution


As readers engage with these stories, they encounter a new kind of interpretive challenge. The traditional connection between text and author — that invisible thread that lets us feel we’re communing with another human mind through words — has become more complex. When technology becomes an active participant in creation rather than merely a passive tool (like a typewriter or word processor), our understanding of authorship must evolve accordingly.


One reader reaction particularly stuck with me. After reading a passage they found especially compelling, they expressed an unfamiliar uncertainty: they admired the writing but weren’t sure who deserved credit for it. Should they compliment me, the human author, or acknowledge the AI that might have generated those specific words? This question cuts to the heart of our evolving understanding of creative collaboration in the age of artificial intelligence.


My answer to this question of credit lies in the architect analogy. An architect doesn’t physically construct a building, yet they’re ultimately responsible for its design and quality. The architect defines the vision and oversees the contractors who physically lay each brick. Similarly, an author collaborating with AI must take both credit and criticism for the final product as they supervise every aspect of its creation. The thoughtfulness of the prompting, the care in editing, and the guiding vision remain distinctly human contributions, even as the AI generates much of the prose.


Mirrors with Memory: AI as Cultural Reflection


This project hasn’t fundamentally changed my views on AI consciousness — I still approach these technologies with appropriate caution. But it has deepened my appreciation for AI’s potential as a creative accelerant.


Story is the lifeblood of human culture — it’s how we make sense of our world, transmit values, and imagine our future. AI’s ability to help generate and refine stories is significant. Like lighter fluid to a campfire, it can help kindle ideas that might otherwise remain unexpressed.


What makes these AI systems unique among creative tools is their capacity to remember, accumulate, and synthesize. Unlike traditional mirrors that simply reflect what stands before them, these systems weave together threads of what we’ve been thinking collectively for decades about technology, consciousness, and our place in the universe, all in the context of the author’s vision. When I look at the stories in this collection — some fully human, some collaborative, some primarily AI-generated — I see a new kind of mirror held up to humanity, one that integrates our cultural past and hints at possible futures.


This reflective quality is neither inherently good nor inherently bad — it’s a powerful tool whose insights depend entirely on what we bring to it and what we choose to see. Like the enchanted mirror in Snow White, it can reveal beautiful insights or uncomfortable truths. It can enlighten or deceive. The technology becomes a cultural feedback mechanism, showing us patterns in our collective thinking that might otherwise remain invisible. And in that synthesis, it hints at possible futures — some wondrous, some concerning.


Looking Forward: A Call to Writers and Readers


For writers, this new collaborative frontier represents both opportunity and responsibility. The opportunity lies in the unprecedented creative acceleration these tools offer — the chance to bring more stories to life, to experiment more rapidly, and to push beyond established patterns. But the responsibility is equally significant: to maintain the human element that gives stories their soul, to guide these powerful tools toward meaningful creation rather than hollow generation, and to be transparent about the process.


I invite fellow writers to approach these technologies not as replacements but as collaborators — to develop the new skills of prompt engineering and AI direction with the same dedication they’ve applied to character development and prose. The most powerful stories of the coming decade may well emerge from this hybrid creative process, combining human vision with algorithmic execution in ways we’re only beginning to explore.


For readers, this evolution brings new considerations to your relationship with text. The stories you consume may increasingly come from collaborative human-AI partnerships, changing how we think about authorial voice and creative authenticity. I encourage you to engage critically with AI-assisted work — not to dismiss it outright nor to accept it blindly, but to appreciate it as part of an evolving creative landscape where the boundaries between human and machine contribution become increasingly complex.


What I hope for most is that we collectively approach this moment with curiosity rather than fear. The future of storytelling isn’t about machines replacing humans but about discovering new creative territories together. These tools can help us express more of our humanity, not less — allowing more voices to tell more stories that might otherwise remain trapped in notebooks or fleeting thoughts.


The stories in “When Mirrors Learn to Remember” document a creative dance — human guidance meeting artificial execution, producing what neither could achieve alone. They represent not an endpoint but a beginning, the first bold steps into a new creative wilderness. I invite you to join this exploration, whether as creator or audience, and to help shape the emerging grammar of human-AI storytelling. The mirror is learning to remember, but we still decide what we show it and how we interpret what we see. 


---


Space Cadet Michael uses science fiction to explore how technology and innovation can improve life for everyone. His new book “When Mirrors Learn to Remember” (available on Amazon) explores AI and our humanity through stories created in collaboration with AI systems. His next AI-assisted project, a novel, will be released later this year. This article was written in collaboration with Claude 3.7 Sonnet, continuing the meta-narrative of human-AI creative partnership.

 
 

Reach out at:

m@spacecadet.pub
