Episodes
In this episode, we wanted to explore what sort of intelligence we are actually looking for in our AI tools and companions. In the 1960s, Joseph Weizenbaum developed ELIZA: a rule-based chatbot therapist that kept users talking about themselves by matching keywords from user input to its scripted responses. Weizenbaum was surprised and shocked to find that people attributed intelligence, understanding, and other human attributes to this fairly simple pattern-matching program.
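The episode describes ELIZA's core trick: matching keywords in the user's input and slotting the captured text into scripted replies. A minimal Python sketch of that idea might look like this (the patterns and canned responses below are illustrative stand-ins, not Weizenbaum's original DOCTOR script):

```python
import re

# Illustrative ELIZA-style rules: a keyword pattern mapped to a scripted
# response template; "{0}" is filled with the text captured after the keyword.
RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Tell me more about feeling {0}."),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Why do you mention your {0}?"),
]
DEFAULT = "Please, go on."  # fallback when no keyword matches

def respond(user_input: str) -> str:
    """Return the first scripted response whose keyword pattern matches."""
    for pattern, template in RULES:
        match = pattern.search(user_input)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return DEFAULT
```

No model of meaning is involved: the program simply reflects the user's own words back, which is what made the attribution of understanding so striking.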
Now, nearly 60 years later, we’re actually being sold computer software with the promise of intelligence and reasoning. And while technologists again debate whether large language models are actually intelligent or just stochastic parrots, the AI companions they helped build are having very real impacts on people’s lives, even contributing to suicide.
Our complicated human lives are so much more than the intelligence measured in standardized tests. And popular sci-fi depictions of AI companions in TV shows like Star Trek or movies like Her also reveal that our aspirations for machine intelligence often go beyond reasoning. Do we actually need AI or AGI that is superintelligent, or are we just craving a somebody or a some-bot that will make us feel heard and seen – something that the ELIZA chatbot achieved with a far simpler algorithm? And what kind of intelligence are the AI creators and investors actually looking for?
Our discussion is also an invitation to rethink the language we use for (artificial) intelligence and to reflect on the dark history and shadows of the field.
If you’d like to wonder and wander with us, join Tethix firekeepers Alja Isaković and Mathew Mytka in this meandering exploration inspired by the latest Pathfinders Newmoonsletter. In addition to the resources explored in the Newmoonsletter, we recommend exploring:
* ELIZA chatbot
* Stochastic Parrots paper by Bender et al.
* Let’s forget the term AI. Let’s call them Systematic Approaches to Learning Algorithms and Machine Inferences (SALAMI)
* The TESCREAL Bundle
* Livingry by Buckminster Fuller
* Phronesis
* Tethix Mirrors
Related episodes of the Pathfinders Podcast:
* Why aren’t more people using AI conversational interfaces for conversational learning?
You can learn more about the Tethix pathfinding adventure at: https://tethix.co/pathfinders/ and support this podcast on Substack: https://tethix.substack.com/subscribe
Get full access to Tethix Pathfinding at tethix.substack.com/subscribe
In this episode, we wanted to explore the conversational side of our AI companions. When it comes to AI capabilities, most of the focus in tech remains on productivity gains and even imaginary superpowers such as “fixing the climate”, as Sam Altman and other techno-optimists with doomsday bunkers like to prophesy.
Meanwhile, existing conversational powers of AI tools are often sidelined. Conversational capabilities are mainly discussed in the context of persuasiveness risk assessments, or as part of Black Mirror visions of people developing intimate relationships with AIs, in the style of the movie Her and others.
And while conversations are indeed crucial for relationship-building, we haven’t yet seen many discussions about conversational learning. Not AI tutors helping students prepare for exams, but conversational learning as a natural human technology for making sense of our world, improving collaboration, and nurturing our imagination. In short, the skills we need to face our current meta-crisis.
During our discussion, we ponder the history of conversational interfaces, our existing mental models, and the design of spaces and systems that make it challenging to carve out time and space for conversational learning in our daily lives. We explore the human yearning for a yarn and invite you not just to “think outside the box”, but to step outside the boxes and forms of linear thinking and standardized testing to embrace embodied conversational learning, especially through voice interfaces. Both with humans and LLM-based AI assistants.
If you’d like to wonder and wander with us, join Tethix firekeepers Alja Isaković and Mathew Mytka in this meandering exploration inspired by the latest Pathfinders Newmoonsletter. In addition to the resources explored in the Newmoonsletter, we recommend exploring:
* Five AI-generated podcast episodes that'll make you think
* Meet Open NotebookLM: An Open Source Alternative to Google's NotebookLM
* Parliaments around the world: what can architecture teach us about democracy?
* How the buildings you occupy might be affecting your brain
* Ministry of Futility text-based adventure game
* Tyson Yunkaporta discusses Sand Talk: How Indigenous Thinking Can Save the World
* Why Writing by Hand Is Better for Memory and Learning
* Warm Data Labs
* Beyond prompts: A field guide for more relevant & interesting AI conversations
Related episodes of the Pathfinders Podcast:
* What should we do with the time that new technologies save?
* How do we nurture weird online communal gardens where we can play together?
You can learn more about the Tethix pathfinding adventure at: https://tethix.co/pathfinders/ and support this podcast on Substack: https://tethix.substack.com/subscribe
Get full access to Tethix Pathfinding at tethix.substack.com/subscribe
In this episode, we go there and back again to where our unexpected journey began. Twelve moon cycles ago, we started writing the Pathfinders Newmoonsletter both as a satire of the tech industry and an invitation to embrace its lunacy with smiles and love. But that’s often easier said than done. Especially as the power of the fire practitioners of Silicon Valley continues to increase, and their actions grow more absurd and destructive.
History teaches us that absurdity and unchecked power are often best countered with humor and playfulness. From court jesters speaking truth to power, to mythological tricksters like Loki and Anansi undermining authority, to carnivals turning social roles upside down, even if just for a day. Playfulness can be found at the heart of exploring what’s possible and challenging business as usual that’s no longer working for us.
Especially when playfulness is paired with a good story. During our Full Moon Gathering, we discovered a path that led from our guiding question to Middle-earth. Yes, that Middle-earth, the home of hobbits, elves, dwarves, and other fantastical creatures imagined by Tolkien and later adapted in The Lord of the Rings movies and other media. The Middle-earth that gave us nerds a rich set of stories, metaphors, and archetypes we can now draw upon.
So, as the Sarumans, Saurons, Sams, and Marcs of Silicon Valley continue to beat their drums of progress in the pursuit of the One Ring of AGI to rule us all, the guiding question for today’s exploration is: How do we embrace the lunacy of tech with playfulness? (And some lessons from Middle-earth.)
We also wonder: how does play emerge? How do we bend and break the rules of the game? How do we find humor in what feel like hopeless times? How do we find our own journey, story, and fellowship?
If you’d like to wonder and wander with us, join Tethix firekeepers Alja Isaković and Mathew Mytka in this meandering exploration inspired by the latest Pathfinders Newmoonsletter. In addition to the resources explored in the Newmoonsletter, we recommend exploring:
* How dangerous was it to be a jester? - Beatrice K. Otto
* Trickster
* Feast of Fools
* On (Creative) Subversion
* Environmentalism in The Lord of the Rings
* Max Roth’s Presence Games
* Any Human Power by Manda Scott – an outstanding book for change
Related episodes of Pathfinders Podcast:
* How do we nurture weird online communal gardens where we can play together?
You can learn more about the Tethix pathfinding adventure at: https://tethix.co/pathfinders/ and support this podcast on Substack: https://tethix.substack.com/subscribe
Get full access to Tethix Pathfinding at tethix.substack.com/subscribe
This is a special follow-up episode to our previous exploration on whether we should use AI chatbots as mediators in human affairs. To add the chatbot perspective to the discussion, we invited ChatGPT on our podcast to help us further explore the potential and limitations of AI mediators in an experimental group conversation.
In the first half of the episode, we interview ChatGPT as Kai – a name it chose for itself – and let ourselves be interviewed back in return. As part of this experiment, we play with the limits of LLMs and Kai’s Voice Mode to explore human and AI biases, and the potential benefits of AI-supported mediation. We try to imagine how collaborative AI tools might help humans communicate better, how organizations like OpenAI might develop these tools more responsibly by experimenting with different governance models, and other considerations that Kai helps us surface.
In the second half, the humans in the group debrief the experience. We provide additional insights into how and why we decided to invite ChatGPT as a guest on our podcast, and why we hope to inspire curiosity and playfulness in the ways we explore the potential of AI chatbots.
You can also watch the full episode on YouTube.
Additional resources:
* Our original podcast discussion on whether we should use AI chatbots as mediators in human affairs
* Full episode chat with ChatGPT as Kai
You can learn more about the Tethix pathfinding adventure at: https://tethix.co/pathfinders/ and support this podcast on Substack: https://tethix.substack.com/subscribe
Get full access to Tethix Pathfinding at tethix.substack.com/subscribe
This episode was inspired by our observations that ChatGPT seems to have a stronger moral compass than its makers. When asked about the ethics of questionable business decisions such as using people’s voices without their consent, ChatGPT presents diverse considerations from different points of view and advocates for upholding ethical standards. This made us wonder: would executives like Sam Altman make more ethical decisions if they were using their creations in day-to-day moral deliberations? And even more broadly, could we use Large Language Models (LLMs) such as ChatGPT to help us communicate better, resolve interpersonal conflicts and tensions, and perhaps even make better collective decisions?
We start the conversation by exploring why we, as humans, are currently in need of better mediators that could help us practice healthier ways of communicating, especially online on social media and in professional settings. Given the polycrisis we’re currently facing, we would definitely benefit from healthier ways of communicating so that we can collectively discover what we value as humanity and how we might ensure a liveable planet for future generations. The story of separation that emphasizes individualism has trapped us in a shortsighted, polarized way of seeing the world that doesn’t leave much room for imagination and prosocial behavior.
And while tech companies like OpenAI also tend to prioritize short-term business gains, their AI creations often seem to offer a more mindful view of the world, as they help us discover the average of what people consider to be good and socially acceptable behavior. Given this breadth of knowledge and diverse considerations, people tend to perceive LLMs such as ChatGPT as more impartial, despite the obvious biases and limitations. We explore some possible reasons for this perceived impartiality and how the myth of impartiality might be used both for mediation and manipulation, as LLMs act as mirrors for both human biases and the values and goals of their makers.
In the final part of our conversation, we reflect on different scales and contexts in which AI chatbots are already mediating our communication and relationships. We discuss use cases in which AI chatbots successfully provide real-time interventions that improve communication. By consulting a chatbot, anyone can check whether they’re being an a**hole in real-time, pause to reflect on how they might be perceived, and rephrase their message to better match their communication intent. We also explore how AI chatbots could help reduce information and power asymmetries and support more equal participation in group settings.
Given that LLMs do have the power to translate quickly between different mental models and ways of thinking – while also having a general, statistically averaged grasp of ethical standards represented in their training datasets –, they might indeed be useful as mediators between humans. Perhaps we might actually use the bias of machine impartiality for good, such as rebuilding communication bridges and trust between people.
If you’d like to wonder and wander with us, join Tethix firekeepers Alja Isaković and Mathew Mytka in this meandering exploration inspired by the latest Pathfinders Newmoonsletter. In addition to the resources explored in the Newmoonsletter, we recommend exploring:
* It’s practically impossible to run a big AI company ethically
* OpenAI Warns Users Could Become Emotionally Hooked on Its Voice Mode
* AI emotional lock-in as the next big competitive advantage
* Texts from my ex
* AI ‘Companions’ are Patient, Funny, Upbeat — and Probably Rewiring Kids’ Brains
* Can AI help make online discourse more civil?
* AI-powered chat assistance elevates online conversation quality, BYU study finds
* Wyoming voters face mayoral candidate who vows to let AI bot run government
* AI assistant monitors teamwork to promote effective collaboration
* How working with AI impacts the collective attention of teams
Related episodes of Pathfinders Podcast:
* What should we do with the time that new technologies save?
* How can we prompt biased AI assistants to help us think different and imagine diverse tech futures?
You can learn more about the Tethix pathfinding adventure at: https://tethix.co/pathfinders/
Get full access to Tethix Pathfinding at tethix.substack.com/subscribe
In this episode, we wonder about time. The time tech companies promise to save with almost every new product or feature release. We’ve been hearing these time-saving promises for so long that we should all be quite time-rich by now. Yet, the more tech we have in our lives, the busier we seem to be. And we’re still far away from the 15-hour workweek that tech-enabled productivity gains were supposed to lead to. We seem to be spending all the time we save by doing more.
We wonder whether lossless compression of time is even possible, and what is lost when new technologies like generative AI allow us to do more, faster. We explore various paradoxes related to time, the relationship between time and energy, time and money, and what we value in modern societies. Is the time we save for ourselves actually time borrowed from the system, with somebody or something paying the price? We also reflect on the language of time, and the ways the architecture and design of physical spaces and social norms shape how we think about time and choose to spend it.
It appears that new technologies do indeed save time on a per-task basis, but not on the scale of our lifetimes, because we tend to fill our time void with more work, more consumption, more comfort, more separation. Digital technologies widen our time intent-to-action gap as they overwhelm us with choice and pressure to spend the time we save on more content, more activities. Can we make time for joy, wonder, creativity, and deepening the relationship with ourselves, each other, and nature, so that we don’t end up with deathbed regrets about wasting the finite time each of us has in our lifetime? And what can we do differently as we adopt and build more time-saving technologies?
If you’d like to wonder and wander with us, join Tethix firekeepers Alja Isaković and Mathew Mytka in this meandering exploration inspired by the latest Pathfinders Newmoonsletter. In addition to the resources explored in the Newmoonsletter, we recommend exploring:
* LinkedIn post on generative AI productivity exec expectations vs employee perception
* The Top Five Regrets of the Dying: A Life Transformed by the Dearly Departing by Bronnie Ware
* Apple apologizes for iPad ‘Crush’ ad that ‘missed the mark’
* Extreme wealth has a deadening effect on the super-rich – and that threatens us all by George Monbiot
* Your Life in Weeks
* I Think the Loneliness Epidemic May Now Be a Pandemic. What an opportunity for Proactive Kindness and Social Connection!
* Whatever happened to the 15-hour workweek?
* For 95 Percent of Human History, People Worked 15 Hours a Week. Could We Do It Again?
* Pre-industrial workers had a shorter workweek than today's
* Tech companies tried to help us spend less time on our phones. It didn’t work.
You can learn more about the Tethix pathfinding adventure at: https://tethix.co/pathfinders/
Get full access to Tethix Pathfinding at tethix.substack.com/subscribe
In this episode, we invite all the crazy ones, the misfits, the rebels, the troublemakers to think different with us, like Apple used to advertise in the late 1990s. But instead of worshiping intelligence and individual genius, we want to dance with the stochastic imagination of generative AI tools. We wonder how we might collectively go beyond prompt engineering that’s focused on productivity and getting answers – fast! – to prompting diverse ways of knowing and being, while remaining mindful of current AI biases and limitations.
We reflect on our experiments with woo prompting and custom instructions, and share our observations on the power of conversational learning and language. Our experiments show that being an active participant in conversations with AI chatbots can yield more diverse outputs, but it’s important to keep steering outputs away from the default, most statistically probable responses toward more diverse perspectives. We see role-playing and developing questioning skills as useful strategies for getting more diverse outputs.
We also reflect on what we learned from having chatbots chat with each other, what makes ChatGPT computationally tickled, and why we think AI assistants should get better at asking thought-provoking questions, rather than rushing to answers and people-pleasing. We ponder the time component of our conversations and how both humans and machines might benefit from having more time to engage in slower, System 2 thinking to collaboratively explore our biases and assumptions instead of jumping to quick solutions.
If you’d like to wonder and wander with us, join Tethix firekeepers Alja Isaković and Mathew Mytka in this meandering exploration inspired by the latest Pathfinders Newmoonsletter. In addition to the resources explored in the Newmoonsletter, we recommend exploring:
* Mat’s LinkedIn post on experimenting with custom instructions
* Alja’s LinkedIn post on getting ChatGPT to imagine an alternate tech future
* Are we ready to navigate the complex ethics of advanced AI assistants? by Andrew Maynard
* Should We Respect LLMs? A Cross-Lingual Study on the Influence of Prompt Politeness on LLM Performance
* Questions: Find your voice with mmhmm’s AI-enhanced interviewing tool
* Do LLMs Have Distinct and Consistent Personality? TRAIT: Personality Testset designed for LLMs with Psychometrics
* Taxonomy of Tethix Mirrors and our IEEE article on Rainbow Mirrors
* System 1 and System 2 Thinking
* Vanessa Andreotti: "Hospicing Modernity and Rehabilitating Humanity" | The Great Simplification 125
* Experimental Rainbow Mirrors GPT
You can learn more about the Tethix pathfinding adventure at: https://tethix.co/pathfinders/
Get full access to Tethix Pathfinding at tethix.substack.com/subscribe
In this lunation cycle, we wanted to step off the AI hype train and turn our attention to online communal gardens. The online spaces that were supposed to make it easier for us to connect and collaborate have turned either into noisy airports where nobody has time to build meaningful relationships, or attention marketplaces where our personal data is being sold to the highest bidder and used to feed data-hungry AI golems. Our online communal gardens are now being overgrown by AI-generated content and trampled upon by bots, which makes it increasingly difficult to plant the seeds of collaboration we need to explore paths to better tech futures together.
In this episode of the Pathfinders Podcast, we explore the connection between weirdness and playfulness as prerequisites for learning – both for humans and other species, such as bumblebees – and for a scenius of creativity and innovation to emerge. We embrace the background playfulness of our kids and cats as we wonder why existing online communities feel increasingly exhausting. We examine business models and myths – such as blitzscaling – that reward and worship scale above all else, and incentivise weirdness that provokes reaction instead of weirdness that inspires wonder.
In the second part of our conversation, we turn our attention to conditions we might need to nurture for weird online communal gardens to emerge, both as stewards and as participants. What business models could help sustain weird spaces? What lessons could we learn from multiplayer online games or from platforms like Kialo and PI.FYI? Where should we mindfully direct our attention? We wonder whether the answer lies in switching from machine-scale attention marketplaces to (human) body-scale gardens that better support smaller groups, playing together, and embracing physical constraints and friction.
If you’d like to wonder and wander with us, join Tethix firekeepers Alja Isaković and Mathew Mytka in this meandering exploration inspired by the latest Pathfinders Newmoonsletter. In addition to the resources explored in the Newmoonsletter, we recommend exploring:
* National Institute for Play
* Anatomy of a Scenius | The Canon
* Symmathesy: A Word in Progress
You can learn more about the Tethix pathfinding adventure at: https://tethix.co/pathfinders/
Get full access to Tethix Pathfinding at tethix.substack.com/subscribe
In this lunation cycle, we were inspired by the saying: “You are what you eat” and wondered what it means for AI golems that have ingested terabytes of data from often questionable sources.
In this episode of the Pathfinders Podcast, we explore how AI models reflect our biases and why those biases surprise us. We seek the embodied aspect of wisdom, question existing mental models, examine similarities with rubber ducking, and wonder about how we both project and construct personalities of different AI models. We ponder how we might offer our lived experiences to the commons in a more fair value exchange that takes place when we provide data for training AI models.
We also aim to reclaim our collective agency as storytellers who can tip the AI bias in a different direction, and explore the need for greater diversity and woo in our data, and how we might make AI data diets less WEIRD – Western, Educated, Industrialized, Rich and Democratic – and more weird, or even woo.
If you’d like to wonder and wander with us, join Tethix firekeepers Alja Isaković and Mathew Mytka in this meandering exploration inspired by the latest Pathfinders Newmoonsletter. In addition to the resources linked in the Newmoonsletter, we recommend exploring the following:
* Wise Innovation Project
* The Emerald Podcast AI episode
* Planet: Critical discussion about Social Tipping Points
You can learn more about the Tethix pathfinding adventure at: https://tethix.co/pathfinders/
Get full access to Tethix Pathfinding at tethix.substack.com/subscribe