Episodes

  • Geertrui Mieke de Ketelaere reflects on the uncertain trajectory of AI, whether AI is socially or environmentally sustainable, and using AI to become good ancestors.

    Mieke joined Kimberly to discuss the current trajectory of AI; uncertainties created by current AI applications; the potent intersection of humanlike AI and heightened social/personal anxiety; Russian nesting dolls (matryoshka) as an analogy for AI systems; challenges with open source AI; the current state of public literacy and regulation; the Safe AI Companion Collective; social and environmental sustainability; expanding our POV beyond human intelligence; and striving to become good ancestors in our use of AI and beyond.

    A transcript of this episode is here.

    Geertrui Mieke de Ketelaere is an engineer, strategic advisor and Adjunct Professor of AI at Vlerick Business School focused on sustainable, ethical, and trustworthy AI. A prolific author, speaker and researcher, Mieke is passionate about building bridges between business, research and government in the domain of AI. Learn more about Mieke’s work here: www.gmdeketelaere.com

  • Vaishnavi J respects youth, advises considering the youth experience in all digital products, and asserts age-appropriate design is an underappreciated business asset.

    Vaishnavi joined Kimberly to discuss: the spaces youth inhabit online; the four pillars of safety by design; age-appropriate design choices; kids’ unique needs and vulnerabilities; what both digital libertarians and abstentionists get wrong; why great experiences and safety aren’t mutually exclusive; how younger cohorts perceive harm; centering youth experiences; business benefits of age-appropriate design; KOSPA and the duty of care; implications for content policy and product roadmaps; the youth experience as digital table stakes and an engine of growth.

    A transcript of this episode is here.

    Vaishnavi J is the founder and principal of Vyanams Strategies (VYS), helping companies, civil society, and governments build healthier online communities for young people. VYS leverages extensive experience at leading technology companies to develop tactical product and policy solutions for child safety and privacy. These include product guidance, content policies, operations workflows, trust & safety strategies, and organizational design.

    Additional Resources:

    Monthly Youth Tech Policy Brief: https://quire.substack.com

  • Kathleen Walch and Ron Schmelzer analyze AI patterns and factors hindering adoption, why AI is never ‘set it and forget it’, and the criticality of critical thinking.

    The dynamic duo behind Cognilytica (now PMI) join Kimberly to discuss: the seven (7) patterns of AI; fears and concerns stymying AI adoption; the tension between top-down and bottom-up AI adoption; the AI value proposition; what differentiates CPMAI from good old-fashioned project management; AI’s Red Queen moment; critical thinking as a uniquely human skill; the DIKUW pyramid and limits of machine understanding; why you can’t sit AI out.

    A transcript of this episode is here.

    Kathleen Walch and Ron Schmelzer are the co-founders of Cognilytica, an AI research and analyst firm which was acquired by PMI (Project Management Institute) in September 2024. Their work, which includes the CPMAI project management methodology and the top-rated AI Today podcast, focuses on enabling AI adoption and skill development.

    Additional Resources:

    CPMAI certification: https://courses.cognilytica.com/

    AI Today podcast: https://www.cognilytica.com/aitoday/

  • Dr. Marisa Tschopp explores our evolving, often odd, expectations for AI companions while embracing radical empathy, resisting relentless PR and trusting in humanity.

    Marisa and Kimberly discuss recent research into AI-based conversational agents, the limits of artificial companionship, implications for mental health therapy, the importance of radical empathy and differentiation, why users defy simplistic categorization, corporate incentives and rampant marketing gimmicks, reasons for optimism, and retaining trust in human connections.

    A transcript of this episode is here.

    Dr. Marisa Tschopp is a Psychologist, a Human-AI Interaction Researcher at scip AG and an ardent supporter of Women in AI. Marisa’s research focuses on human-AI relationships, trust in AI, agency, behavioral performance assessment of conversational systems (A-IQ), and gender issues in AI.

    Additional Resources:

    The Impact of Human-AI Relationship Perception on Voice Shopping Intentions in Human Machine Collaboration Publication

    How do users perceive their relationship with conversational AI? Publication

    KI als Freundin: Funktioniert eine Chatbot-Beziehung? TV Show (German, SRF)

    Friends with AI? It’s complicated! TEDxBoston Talk

  • John Danaher assesses how AI may reshape ethical and social norms, minds the anticipatory gap in regulation, and applies the MVPP to decide against digitizing himself.

    John parlayed an interest in science fiction into researching legal philosophy, emerging technology, and society. Flipping the script on ethical assessment, John identifies six (6) mechanisms by which technology may reshape ethical principles and social norms. John further illustrates the impact AI can have on decision sets and relationships. We then discuss the dilemma articulated by the aptly named anticipatory gap, in which the effort required to regulate nascent tech is proportional to our understanding of its ultimate effects.

    Finally, we turn our attention to the rapid rise of digital duplicates. John provides examples and proposes a Minimally Viable Permissibility Principle (MVPP) for evaluating the use of digital duplicates. Emphasizing the difficulty of mitigating the risks posed once a digital duplicate is let loose in the wild, John declines the opportunity to digitally duplicate himself.

    John Danaher is a Senior Lecturer in Ethics at the NUI Galway School of Law. A prolific scholar, he is the author of Automation and Utopia: Human Flourishing in a World Without Work (Harvard University Press, 2019). Papers referenced in this episode include The Ethics of Personalized Digital Duplicates: A Minimally Viable Permissibility Principle and How Technology Alters Morality and Why It Matters.

    A transcript of this episode is here.

  • Ben Bland expressively explores emotive AI’s shaky scientific underpinnings, the gap between reality and perception, popular applications, and critical apprehensions.

    Ben exposes the scientific contention surrounding human emotion. He talks terms (emotive? empathic? not telepathic!) and outlines a spectrum of emotive applications. We discuss the powerful, often subtle, and sometimes insidious ways emotion can be leveraged. Ben explains the negative effects of perpetual positivity and why drawing clear red lines around the tech is difficult.

    He also addresses the qualitative sea change brought about by large language models (LLMs), implicit vs explicit design and commercial objectives. Noting that the social and psychological impacts of emotive AI systems have been poorly explored, he muses about the potential to actively evolve your machine’s emotional capability.

    Ben confronts the challenges of defining standards when the language is tricky, the science is shaky, and applications are proliferating. Lastly, Ben jazzes up empathy as a human superpower. While optimistic about empathic AI’s potential, he counsels proceeding with caution.

    Ben Bland is an independent consultant in ethical innovation. An active community contributor, Ben is the Chair of the IEEE P7014 Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems and Vice-Chair of IEEE P7014.1 Recommended Practice for Ethical Considerations of Emulated Empathy in Partner-based General-Purpose Artificial Intelligence Systems.

    A transcript of this episode is here.

  • Philip Rathle traverses from knowledge graphs to LLMs and illustrates how loading the dice with GraphRAG enhances deterministic reasoning, explainability and agency.

    Philip explains why knowledge graphs are a natural fit for capturing data about real-world systems. Starting with Kevin Bacon, he identifies many ‘graphy’ problems confronting us today. Philip then describes how interconnected systems benefit from the dynamism and data network effects afforded by knowledge graphs.

    Next, Philip provides a primer on how Retrieval Augmented Generation (RAG) loads the dice for large language models (LLMs). He also differentiates between vector- and graph-based RAG. Along the way, we discuss the nature and locus of reasoning (or lack thereof) in LLM systems. Philip articulates the benefits of GraphRAG including deterministic reasoning, fine-grained access control and explainability. He also ruminates on graphs as a bridge to human agency as graphs can be reasoned on by both humans and machines. Lastly, Philip shares what is happening now and next in GraphRAG applications and beyond.

    Philip Rathle is the Chief Technology Officer (CTO) at Neo4j. Philip was a key contributor to the development of the GQL standard and recently authored The GraphRAG Manifesto: Adding Knowledge to GenAI (neo4j.com), a go-to resource for all things GraphRAG.

    A transcript of this episode is here.

  • Matthew Scherer makes the case for bottom-up AI adoption, being OK with not using AI, innovation as a relative good, and transparently safeguarding workers’ rights.

    Matthew champions a worker-led approach to AI adoption in the workplace. He traverses the slippery slope from safety to surveillance and guards against unnecessarily intrusive solutions.

    Matthew then illustrates why AI isn’t great at making employment decisions, even in objectively data-rich environments such as the NBA. He also addresses the intractable problem of bias in hiring and flawed comparisons between humans and AI. We discuss the unquantifiable dynamics of human interactions and being OK with our inability to automate hiring and firing.

    Matthew explains how the patchwork of emerging privacy regulations reflects cultural norms towards workers. He invokes the Ford Pinto and Titan submersible catastrophes when challenging the concept of innovation as an intrinsic good. Matthew then makes the case for transparency as a gateway to enforcing existing civil rights and laws.

    Matthew Scherer is a Senior Policy Counsel for Workers' Rights and Technology at the Center for Democracy and Technology (CDT). He studies how emerging technologies affect workers in the workplace and labor market. Matt is also an Advisor for the International Center for Advocates Against Discrimination.

    A transcript of this episode is here.

  • Heidi Lanford connects data to cocktails and campaigns while considering the nature of data disruption, getting from analytics to AI, and using data with confidence.

    Heidi studied mathematics and statistics and never looked back. Reflecting on analytics then and now, she confirms the appetite for data has never been higher. Yet adoption, momentum and focus remain evergreen barriers. Heidi issues a cocktail party challenge while discussing the core competencies of effective data leaders.

    Heidi believes data and CDOs are disruptive by nature. But this only matters if your business incentives are properly aligned. She revels in agile experimentation while counseling that speed is not enough. We discuss how good old-fashioned analytics put the right pressure on the foundational data needed for AI.

    Heidi then campaigns for endemic data literacy. Along the way she pans just-in-time (JIT) holiday training and promotes confident decision-making as the metric that matters. Never saying never, Heidi celebrates human experts and the spotlight AI is shining on data.

    Heidi Lanford is a Global Chief Data & Analytics Officer who has served as Chief Data Officer (CDO) at the Fitch Group and VP of Enterprise Data & Analytics at Red Hat (IBM). In 2023, Heidi co-founded two AI startups, LiveFire AI and AIQScore. Heidi serves as a Board Member at the University of Virginia School of Data Science, is a Founding Board Member of the Data Leadership Collaborative, and is an Advisor to Domino Data Labs and Linea.

    A transcript of this episode is here.

  • Marianna B. Ganapini contemplates AI nudging, entropy as a bellwether of risk, accessible ethical assessment, ethical ROI, the limits of trust and irrational beliefs.

    Marianna studies how AI-driven nudging ups the ethical ante relative to autonomy and decision-making. This is a solvable problem that may still prove difficult to regulate. She posits that the level of entropy within a system correlates with risks seen and unseen. We discuss the relationship between risk and harm and why a lack of knowledge confers moral responsibility. Marianna describes how macro-level assessments can effectively take an AI system’s temperature (risk-wise). Addressing the evolving responsible AI discourse, Marianna asserts that limiting trust to moral agents is overly restrictive. The real problem is conflating trust between humans with the trust afforded any number of entities, from your pet to your Roomba. Marianna also cautions against hastily judging another’s beliefs, even when they overhype AI. Acknowledging progress, Marianna advocates for increased interdisciplinary efforts and ethical certifications.

    Marianna B. Ganapini is a Professor of Philosophy and Founder of Logica.Now, a consultancy which seeks to educate and engage organizations in ethical AI inquiry. She is also a Faculty Director at the Montreal AI Ethics Institute and a Visiting Scholar at the ND-IBM Tech Ethics Lab.

    A transcript of this episode is here.

  • Miriam Vogel disputes AI is lawless, endorses good AI hygiene, reviews regulatory progress and pitfalls, boosts literacy and diversity, and remains net positive on AI.

    Miriam Vogel traverses her unforeseen path from in-house counsel to public policy innovator. Miriam acknowledges that AI systems raise some novel questions but reiterates there is much to learn from existing policies and laws. Drawing analogies to flying and driving, Miriam demonstrates the need for both standardized and context-specific guidance.

    Miriam and Kimberly then discuss what constitutes good AI hygiene, what meaningful transparency looks like, and why a multi-disciplinary mindset matters. While reiterating the business value of beneficial AI, Miriam notes businesses are now on notice regarding their AI liability. She is clear-sighted regarding the complexity, but views regulation done right as a means to spur innovation and trust. In that vein, Miriam outlines the progress to date and the work still to come to enact federal AI policies and raise our collective AI literacy. Lastly, Miriam raises questions everyone should ask to ensure we each benefit from the opportunities AI presents.

    Miriam Vogel is the President and CEO of EqualAI, a non-profit movement committed to reducing bias and responsibly governing AI. Miriam also chairs the US National AI Advisory Committee (NAIAC).

    A transcript of this episode is here.

  • Melissa Sariffodeen contends learning requires unlearning, ponders human-AI relationships, prioritizes outcomes over outputs, and values the disquiet of constructive critique.

    Melissa artfully illustrates barriers to innovation through the eyes of a child learning to code and a seasoned driver learning to not drive. Drawing on decades of experience teaching technical skills, she identifies why AI creates new challenges for upskilling. Kimberly and Melissa then debate viewing AI systems through the lens of tools vs. relationships. An avowed lifelong learner, Melissa believes prior learnings are sometimes detrimental to innovation. Melissa therefore advocates for unlearning as a key step in unlocking growth. She also proposes a new model for organizational learning and development. A pragmatic tech optimist, Melissa acknowledges the messy middle and reaffirms the importance of diversity and critically questioning our beliefs and habits.

    Melissa Sariffodeen is the founder of The Digital Potential Lab, co-founder and CEO of Canada Learning Code, and a Professor at the Ivey Business School at Western University, where she focuses on the management of information and communication technologies.

    A transcript of this episode is here.

  • Shannon Mullen O’Keefe champions collaboration, serendipitous discovery, curious conversations, ethical leadership, and purposeful curation of our technical creations.

    Shannon shares her professional journey from curating leaders to curating innovative ideas. From lightbulbs to online dating and AI voice technology, Shannon highlights the simultaneously beautiful and nefarious applications of tech and the need to assess our creations continuously and critically. She shares powerful insights spurred by the values and questions posed in the book 10 Moral Questions: How to Design Tech and AI Responsibly. We discuss the ‘business of business,’ consumer appetite for ethical businesses, and why conversation is the bedrock of culture. Throughout, Shannon underscores the importance and joy of discovery, embracing nature, sitting in darkness, and mustering the will to change our minds, even if that means turning our creations off.

    Shannon Mullen O’Keefe is the Curator of the Museum of Ideas and co-author of the Q Collective’s book 10 Moral Questions: How to Design Tech and AI Responsibly. Learn more at https://www.10moralquestions.com/.

    A transcript of this episode is here.

  • Sarah Gibbons and Kate Moran riff on the experience of using current AI tools, how AI systems may change our behavior and the application of AI to human-centered design.

    Sarah and Kate share their non-linear paths to becoming leading user experience (UX) designers. Defining the human-centric mindset, Sarah stresses that intent is design and we are all designers. Kate and Sarah then challenge teams to resist short-term problem hunting for AI alone. This leads to an energized and frank debate about the tensions created by the broad availability of AI tools with “shitty” user interfaces, why conversational interfaces aren’t the be-all-end-all, and whether calls for more discernment and critical thinking are reasonable or even new. Kate and Sarah then discuss their research into our nascent AI mental models and emergent impacts on user behavior. Kate discusses how AI can be used for UX design along with some far-fetched claims. Finally, both Kate and Sarah share exciting areas of ongoing research.

    Sarah Gibbons and Kate Moran are Vice Presidents at Nielsen Norman Group, where they lead strategy, research, and design in the areas of human-centered design and user experience (UX).

    A transcript of this episode is here.

  • Simon Johnson takes on techno-optimism, the link between technology and human well-being, the law of intended consequences, the modern union remit and political will.

    In this sobering tour through time, Simon proves that widespread human flourishing is not intrinsic to tech innovation. He challenges the ‘productivity bandwagon’ (an economic maxim so pervasive it did not have a name) and shows that productivity and market polarization often go hand-in-hand. Simon also views big tech’s persuasive powers through the lens of OpenAI’s board debacle.

    Kimberly and Simon discuss the heyday of shared worker value, the commercial logic of automation and augmenting human work with technology. Simon highlights shareholder capitalism’s current view of labor as a cost rather than people as a resource. He underscores the need for active attention to task creation, strong labor movements and participatory political action (shouting and all). Simon believes that shared prosperity is possible. Make no mistake, however, achieving it requires wisdom and hard work.

    Simon Johnson is the Head of the Economics and Management group at MIT’s Sloan School of Management. Simon co-authored the stellar book “Power and Progress: Our 1,000-Year Struggle Over Technology and Prosperity” with Daron Acemoglu.

    A transcript of this episode is here.

  • Professor Rose Luckin provides an engaging tutorial on the opportunities, risks, and challenges of AI in education and why AI raises the bar for human learning.

    Acknowledging AI’s real and present risks, Rose is optimistic about the power of AI to transform education and meet the needs of diverse student populations. From adaptive learning platforms to assistive tools, Rose highlights opportunities for AI to make us smarter, supercharge learner-educator engagement and level the educational playing field. Along the way, she confronts overconfidence in AI, the temptation to offload challenging cognitive workloads and the risk of constraining a learner’s choices prematurely. Rose also adroitly addresses conflicting visions of human quantification as the holy grail and the seeds of our demise. She asserts that AI ups the ante on education: how else can we deploy AI wisely? Rising to the challenge requires the hard work of tailoring strategies for specific learning communities and broad education about AI itself.

    Rose Luckin is a Professor of Learner Centered Design at the UCL Knowledge Lab and Founder of EDUCATE Ventures Research Ltd., a London hub for start-ups, researchers and educators advancing evidence-based educational technology and leveraging data and AI for educational benefit. Explore Rose’s 2018 book Machine Learning and Human Intelligence (free after creating an account) and the EDUCATE Ventures newsletter The Skinny.

    A transcript of this episode is here.

  • Katrina Ingram addresses AI power dynamics, regulatory floors and ethical ceilings, inevitability narratives, self-limiting predictions, and public AI education.

    Katrina traces her career from communications to her current pursuits in applied AI ethics. Showcasing her way with words, Katrina dissects popular AI narratives. While contemplating AI FOMO, she cautions against an engineering mentality and champions the power to say ‘no.’ Katrina contrasts buying groceries with AI solutions and describes regulations as the floor and ethics as the ceiling for responsible AI. Katrina then considers the sublimation of AI ethics into AI safety and risk management, whether Sci-Fi has led us astray and who decides what. We also discuss the law of diminishing returns, the inevitability narrative around AI, and how predictions based on the past can narrow future possibilities. Katrina commiserates with consumers but cautions against throwing privacy to the wind. Finally, she highlights the gap in funding for public education and literacy.

    Katrina Ingram is the Founder & CEO of Ethically Aligned AI, a Canadian consultancy enabling organizations to practically apply ethics in their AI pursuits.

    A transcript of this episode is here.

  • Paulo Carvão discusses AI’s impact on the public interest, emerging regulatory schemes, progress over perfection, and education as the lynchpin for ethical tech.

    In this thoughtful discussion, Paulo outlines the cultural, ideological and business factors underpinning the current data economy: an economy in which the manipulation of personal data into private corporate assets is foundational. Opting for optimism over cynicism, Paulo advocates for a first principles approach to the ethical development of AI and emerging tech. He argues that regulation creates a positive tension that enables innovation. Paulo examines the emerging regulatory regimes of the EU, the US and China. Preferring progress over perfection, he describes why regulating technology for technology’s sake is fraught. Acknowledging the challenge facing existing school systems, Paulo articulates the foundational elements required of a ‘bilingual’ education to enable future generations to “do the right things.”

    Paulo Carvão is a Senior Fellow at the Harvard Advanced Leadership Initiative, a global tech executive and investor. Follow his writings and subscribe to his newsletter on the Tech and Democracy Substack.

    A transcript of this episode is here.

  • Dr. Christina Jayne Colclough reflects on AI Regulations at Work.

    In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.

  • Giselle Mota reflects on Inclusion at Work in the age of AI.

    In this capsule series, prior guests share their insights on current happenings in AI and intuitions about what to expect next.