How the stories we tell shape the Future of AI

💾 Back in 2010, I had the chance to design an early voice assistant, long before Alexa and Siri became household names. The project leveraged voice recognition and speech generation, and the technology, though primitive compared to today’s standards, was thrilling. But what surprised me most was that our deepest discussions weren't about algorithms or code—they revolved around storytelling, identity, and even gender.

💭 We spent countless hours debating simple yet deeply complex questions: "Who exactly is this virtual assistant? What should its name, age, gender, voice, and personality be?" Each question opened a Pandora’s box, uncovering our biases, assumptions, and cultural expectations—and those of our clients. We quickly realized our idea of an "ideal assistant" wasn’t objective at all. It was shaped by stories we'd grown up with, from sci-fi movies to cultural stereotypes. Even more challenging was recognizing the implicit gender biases we held. Why did our minds immediately imagine a female voice? Were the attributes we considered "helpful" or "friendly" reinforcing outdated gender roles?

🦄 That experience was eye-opening. It revealed how profoundly human narratives, myths, and legends shape technology, especially artificial intelligence. It taught me the importance of examining the history of our visions of the future and the unconscious expectations we carry forward.

What shapes AI more—lines of code or the stories we tell ourselves?

🤖 We typically view AI as data-driven, objective, and purely computational. But beneath every algorithm lies a story we've told ourselves for generations. Our modern visions of AI have been influenced by mythological and fictional creations like the Golem from Jewish folklore, the bronze guardian Talos from Greek myth, Mary Shelley's Frankenstein, and even HAL 9000 from "2001: A Space Odyssey." Each story carries implicit lessons and expectations about power, control, ethics, and even emotional connection.

🔮 These narratives subtly shape our everyday design decisions. We create conversational interfaces inspired by the friendly, intuitive interactions shown in old shows like "Knight Rider." We give our products soft, approachable R2-D2-like cues to counteract fears of menacing AI, driven by a desire for technology to feel safe and familiar. We strive for sleek, futuristic aesthetics reminiscent of films like "Minority Report," tapping into collective cultural expectations.

👾 Similarly, narratives push us to expect emotionally aware AI, like Samantha in the movie "Her," hoping to enrich interactions through empathy. We might overemphasize robust privacy and security, acutely aware of dystopian scenarios from "The Matrix." When imagining AI's multimodal interactions, many of us reference the versatility of fictional assistants like JARVIS from the Avengers films—expecting our AI to be capable and effortlessly supportive.

Why does all this matter? Why should we be aware of past narratives about the future of AI?

💡 Because I believe AI's true potential won’t be reached by simply replicating familiar stories or leaning solely on what we've known. Looking back, I can clearly see how cultural biases shaped that early voice assistant we designed. And even though technology has made huge strides since then, I see more than ever the need for design narratives that break conventions, challenge assumptions, and inspire genuinely new ideas.

đź‘€ Think about the narratives you might be carrying forward into your work. What myths, stories, or cultural ideas shape your designs today? What if you deliberately questioned or challenged them?

✨ AI’s future isn't shaped just by technology—it's shaped by the stories we choose to tell.
