Episodes

  • This and all episodes at: https://aiandyou.net/ .

    My guest has fundamentally reshaped the landscape of digital technology. Your experience of the Internet owes a large part of its identity to Tim O’Reilly, the founder, CEO, and Chairman of O’Reilly Media, the company that has been providing the picks and shovels of learning to the Silicon Valley gold rush for the past thirty-five years: a platform that has connected and informed the people at ground zero of the online revolution since before there was a World Wide Web, through every medium from books to blogs. The man behind that company has catalyzed and promoted the great thought movements that shaped how the digital world unfolded, such as Open Source, the principle of freedom and transparency in sharing the code that makes up the moving parts of that world – notably through the Open Source Conference, which ran from the beginning of that era for many years, was like Woodstock for developers, and where I personally presented many times. Named by Inc. magazine the “Oracle of Silicon Valley,” Tim coined the term “Web 2.0” to denote the shift towards the era where users like you and me participate by creating our own content, which turned into social media and is now just part of the digital water we swim in. His 2017 book, WTF?: What’s the Future and Why It’s Up to Us, explores the technological forces on our world and how to harness them for a better future.

    We talk about intellectual property rights in the generative AI era – Taylor Swift will make an appearance again – and Tim’s conversations with Sam Altman, parallels with the evolution of Linux, comparing incentives with social media, the future of content-generating work, and opportunities for entrepreneurial flowering.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    How do our brains produce thinking? My guest is an expert in cognitive neuroscience, the field that aims to answer that question. Paul Reber is professor of psychology at Northwestern University, Director of Cognitive and Affective Neuroscience, and head of the Brain, Behavior, and Cognition program, focusing on human memory—how the brain encodes, stores, and retrieves information. With a PhD from Carnegie Mellon, his work has been cited over 6,000 times. He has served as Associate Editor for the journal Cortex and contributed to NIH review panels. His recent projects explore applications of memory science in skill training and cognitive aging. If we want to build AI that reproduces human intelligence, we need to understand that as well as possible.

    In part 2, we talk about how to memorize something like a TED talk, the difference between human and computer memory, how humans make memories more resilient, catastrophic interference, and just how big the human brain is – and whether we can fill it up.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    How do our brains produce thinking? My guest is an expert in cognitive neuroscience, the field that aims to answer that question. Paul Reber is professor of psychology at Northwestern University, Director of Cognitive and Affective Neuroscience, and head of the Brain, Behavior, and Cognition program, focusing on human memory—how the brain encodes, stores, and retrieves information. With a PhD from Carnegie Mellon, his work has been cited over 6,000 times. He has served as Associate Editor for the journal Cortex and contributed to NIH review panels. His recent projects explore applications of memory science in skill training and cognitive aging. If we want to build AI that reproduces human intelligence, we need to understand that as well as possible.

    In part 1, we talk about distinguishing neuroscience from cognitive neuroscience, the physical structure of the brain, how we learn physical skills, comparing the brain to AI, and foundational problems in neuroscience.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    On the recent wrap-up/predictions panel we had so much fascinating discussion about AI in religion with panelist Beth Singler that I said we should have her back on the show by herself to talk about that, so here she is!

    Beth is the Assistant Professor in Digital Religions and co-lead of the Media Existential Encounters and Evolving Technology Lab at the University of Zurich, where she leads projects on religion and AI. As an anthropologist, her research addresses the human, religious, cultural, social, and ethical implications of developments in AI and robotics. She received the 2021 Digital Religion Research Award from the Network for New Media, Religion, and Digital Culture Studies. Her popular science communication work includes a series of award-winning short documentaries on AI. She is co-editor of the Cambridge Companion to Religion and AI, and author of Religion and AI: An Introduction, both published last year.

    In part 2, we talk about Roko’s Basilisk, which is a concept that changes your life the moment you find out what it is, experiences of AI saying that it’s a God, the reverse Garland test (that’s based on Ex Machina), simulation theories starting with Plato’s Cave, more chatbot priests, how Beth does research, and… Battlestar Galactica.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    On the recent wrap-up/predictions panel we had so much fascinating discussion about AI in religion with panelist Beth Singler that I said we should have her back on the show by herself to talk about that, so here she is!

    Beth is the Assistant Professor in Digital Religions and co-lead of the Media Existential Encounters and Evolving Technology Lab at the University of Zurich, where she leads projects on religion and AI. As an anthropologist, her research addresses the human, religious, cultural, social, and ethical implications of developments in AI and robotics. She received the 2021 Digital Religion Research Award from the Network for New Media, Religion, and Digital Culture Studies. Her popular science communication work includes a series of award-winning short documentaries on AI. She is co-editor of the Cambridge Companion to Religion and AI, and author of Religion and AI: An Introduction, both published last year.

    In part 1, we talk about why religion and AI is a thing and what its dimensions are, the influence of science fiction, tropes like End Times, AI used in religious roles, and the Singularity.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Continuing our exploration of AI in education, I am joined by Nick Potkalitsky, founder of Pragmatic AI Solutions and co-author of AI in Education: A Roadmap For Teacher-Led Transformation. With a doctorate in narrative and rhetorical studies, he leads AI curriculum integration at The Miami Valley School and develops pioneering AI literacy programs. His Substack “Educating AI” offers curriculum guidance and expert insights.

    We talk about how AI has landed emotionally for teachers, whether there’s a generational divide in the different reactions teachers have had to AI, how students are using AI tools and the homework problem, the changing landscape of policies in schools, how university requirements are evolving, and the teacher-led transformation of education that Nick foresees.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Virtually every issue around AI – pro, con, in-between – is reflected in education right now. And teachers are on the front lines of this disruption. So it’s especially important that UNESCO – that’s the United Nations Educational, Scientific, and Cultural Organization – has developed an AI Competency Framework for Teachers, and here to talk about that and his other work is the co-author of that framework, Mutlu Cukurova, professor of Learning and Artificial Intelligence at University College London. He investigates human-AI complementarity in education, aiming to address the pressing socio-educational challenge of preparing people for a future with AI systems that will require a great deal more than the routine cognitive skills currently prized by many education systems and traditional approaches to automation with AI. He is part of UCL's Grand Challenges on Transformative Technologies group, was named in Stanford’s Top 2% Scientists List, and is Editor of the British Journal of Educational Technology and Associate Editor of the International Journal of Child-Computer Interaction.

    In part 2, we talk about how the competency framework helps teachers use large language models, intelligent tutoring systems, the distinctions between human and machine intelligence, how to find the place to be human in a world of expanding AI capabilities, and the opportunities for teachers in this world.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Virtually every issue around AI – pro, con, in-between – is reflected in education right now. And teachers are on the front lines of this disruption. So it’s especially important that UNESCO – that’s the United Nations Educational, Scientific, and Cultural Organization – has developed an AI Competency Framework for Teachers, and here to talk about that and his other work is the co-author of that framework, Mutlu Cukurova, professor of Learning and Artificial Intelligence at University College London. He investigates human-AI complementarity in education, aiming to address the pressing socio-educational challenge of preparing people for a future with AI systems that will require a great deal more than the routine cognitive skills currently prized by many education systems and traditional approaches to automation with AI. He is part of UCL's Grand Challenges on Transformative Technologies group, was named in Stanford’s Top 2% Scientists List, and is Editor of the British Journal of Educational Technology and Associate Editor of the International Journal of Child-Computer Interaction.

    We talk about the role of UNESCO with respect to AI in education, societal and ethical issues of large language models in developing countries, the types of competencies assessed in classrooms that are affected by AI, what the AI Competency Framework for Teachers is, and how to use it.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Few institutions are under as much pressure today as journalism and news publishing, and AI features squarely in the middle of those pressures. Disinformation, social media, automated news generation, the list goes on; we’re talking about the fabric of our information society. Here to help us understand these issues is Neil Brown, former editor and vice president of the Tampa Bay Times during a period in which it won six Pulitzers, and president of the Poynter Institute for Media Studies. For over 50 years, Poynter has trained journalists and protected the ethical standards of the industry through mechanisms like the International Fact-Checking Network and the Craig Newmark Center for Ethics and Leadership. Neil spent four decades as a journalist, launched PolitiFact.com, and was co-chair of the Pulitzer Prize Board. His mission is to strengthen democracy and confront society's most complex problems by improving the value of journalism and increasing media literacy, so we are very fortunate to have him on the show to field my challenging questions!

    We talk about the use of AI in journalism, in writing stories, its effect on our writing standards, different levels of stories in journalism, and the potential use of AI in interactive news publishing.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Few institutions are under as much pressure today as journalism and news publishing, and AI features squarely in the middle of those pressures. Disinformation, social media, automated news generation, the list goes on; we’re talking about the fabric of our information society. Here to help us understand these issues is Neil Brown, former editor and vice president of the Tampa Bay Times during a period in which it won six Pulitzers, and president of the Poynter Institute for Media Studies. For over 50 years, Poynter has trained journalists and protected the ethical standards of the industry through mechanisms like the International Fact-Checking Network and the Craig Newmark Center for Ethics and Leadership. Neil spent four decades as a journalist, launched PolitiFact.com, and was co-chair of the Pulitzer Prize Board. His mission is to strengthen democracy and confront society's most complex problems by improving the value of journalism and increasing media literacy, so we are very fortunate to have him on the show to field my challenging questions!

    We talk about pressures on news organizations, the evolution of the relationship between journalism and publishing, how revenue models are changing, the impact and use of AI or psychometric analysis tools, and much more.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    In our last episode of 2024, we have our traditional end-of-year retrospective/prediction episode. We’ll be taking a look back over the year just ending and forward to 2025, but we’re not going to focus on technology, when GPT-5 is going to drop, etc. The space is already stuffed full of that sort of thing. We’re going to look at the year through an anthropological lens, for which I am rejoined by two former guests, anthropologist Beth Singler, who was in episodes 38 and 39, and philosopher John Zerilli, who was in episodes 78 and 79. Beth is Assistant Professor in Digital Religion(s) and co-lead of the Media Existential Encounters and Evolving Technology Lab at the University of Zurich, where she leads projects on religion and AI. Her most recent books are Religion and Artificial Intelligence and The Cambridge Companion to Religion and Artificial Intelligence. John is a Lecturer at the University of Edinburgh, with a PhD in cognitive science and philosophy, who has carried out research at the universities of Oxford and Cambridge. His most recent book, A Citizen’s Guide to Artificial Intelligence, was published in 2021.

    We consider how AI has been reshaping public narratives and attitudes over questions like job replacement, creativity, education, law, and religion.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Here to give us insights into some of the really cool stuff Google DeepMind is doing is Alexandra Belias, Head of Product Policy & Partnerships. She serves as a bridge between DeepMind’s product policy organization and the policy community. She previously led their international public policy work. She has an MPA in Economic Policy from LSE and is currently a tech fellow at the Harvard Carr Center for Human Rights.

    We talk about Google DeepMind's science policy, the emerging network of national AI safety institutes, the tension between regulation and innovation, AlphaFold and its successors, AlphaMissense and AlphaProteo, their SynthID watermarking detection tool, reducing contrail pollution through AI, and safety frameworks for frontier AI.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    It's tough enough being a teacher in the AI age, so can you imagine what it's like training the teachers themselves? That's what Dwayne Wood, Associate Professor at National University of San Diego, does. He is the Academic Program Director for the Educational Technology Master’s program there, so he’s front and center on the question of how teachers deal with AI in the classroom and has been working to address the current teacher shortage.

    We talk about the possible impact of AI on essential learning skills, the difference between technical and tactical competence, the in-person educational experience, and how Dwayne sees things changing in the next year.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    It's tough enough being a teacher in the AI age, so can you imagine what it's like training the teachers themselves? That's what Dwayne Wood, Associate Professor at National University of San Diego, does. He is the Academic Program Director for the Educational Technology Master’s program there, so he’s front and center on the question of how teachers deal with AI in the classroom and has been working to address the current teacher shortage.

    We talk about the relationships between teachers and students, the shifting base of fundamental skills in an AI world, the skills needed by instructional designers, how to teach effective and safe use of generative AI, and how to place the guardrails around learners using it.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    We are going big on the show this time, with astrophysicist J. Craig Wheeler, Samuel T. and Fern Yanagisawa Regents Professor of Astronomy, Emeritus, at the University of Texas at Austin, and author of the book The Path to Singularity: How Technology Will Challenge the Future of Humanity, released on November 19. He is a Fellow of the American Physical Society and Legacy Fellow of the American Astronomical Society, has published nearly 400 scientific papers, authored both professional and popular books on supernovae, and served on advisory committees for NSF, NASA, and the National Research Council. His new book, spanning the range of technologies propelling us towards the Singularity, from robots to space colonization, has a foreword by Neil deGrasse Tyson, who says, “The world is long overdue for a peek at the state of society and what its future looks like through the lens of a scientist. And when that scientist is also an astrophysicist, you can guarantee the perspectives shared will be as deep and as vast as the universe itself.”

    We talk about the evolution of Homo sapiens, high-reliability organizations, brain-computer interfaces, and transhumanism, among other topics.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    We're extending the conversation about AI in education to the front lines in this episode, with four very experienced and credentialed educators discussing their experiences and insights into AI in schools.

    Jose Luis Navarro IV is the leading coach and consultant at the Navarro Group. He previously served as a Support Coordinator, leading innovative reforms in the Los Angeles Unified School District. Zack Kleypas is Superintendent of Schools in Thorndale, Texas, and was named 2023 Texas Secondary Principal of the Year by the Texas Association of Secondary School Principals. Jeff Austin is a former high school teacher and principal who now works as a coach for Teacher Powered Schools and Los Angeles Education Partnership. And Jose Gonzalez is Chief Technology Officer for the Los Angeles County Office of Education and former Vice Mayor of the city of Cudahy near Los Angeles.

    In the conclusion, we talk about whether students need to read as much as they used to now that they have AI, fact checking, some disturbing stories about the use of AI detectors in schools, where the panel sees these trends evolving, what they’re doing to help students learn better in an AI world, and… Iron Man.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    We're extending the conversation about AI in education to the front lines in this episode, with four very experienced and credentialed educators discussing their experiences and insights into AI in schools.

    Jose Luis Navarro IV is the leading coach and consultant at the Navarro Group. He previously served as a Support Coordinator, leading innovative reforms in the Los Angeles Unified School District. Zack Kleypas is Superintendent of Schools in Thorndale, Texas, and was named 2023 Texas Secondary Principal of the Year by the Texas Association of Secondary School Principals. Jeff Austin is a former high school teacher and principal who now works as a coach for Teacher Powered Schools and Los Angeles Education Partnership. And Jose Gonzalez is Chief Technology Officer for the Los Angeles County Office of Education and former Vice Mayor of the city of Cudahy near Los Angeles.

    We talk about how much kids were using GenAI without our knowing, how to turn GenAI in schools from a threat to an opportunity, the issue of cheating with ChatGPT, the discrepancy between how many workers are using AI and how many teachers are using it, how rules get made, confirmation bias and AI, using tools versus gaining competencies, and whether teachers will quit.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Digital Humanities sounds at first blush like a contradiction in terms: the intersection of our digital, technology-centric culture and the humanities – arts, literature, and philosophy. Aren't those like oil and water? But my guest illustrates just how important this discipline is by illuminating both of those fields from viewpoints I found fascinating and very different from what we normally encounter.

    Professor Caroline Bassett is the first Director of Cambridge Digital Humanities, an interdisciplinary research center at the University of Cambridge. She is a Fellow of Corpus Christi College and researches digital technologies and cultural change with a focus on AI. She co-founded the Sussex Humanities Lab, and at Cambridge she inaugurated the Master of Philosophy in Digital Humanities and last month launched the new doctoral programme in Digital Humanities.

    In the conclusion, we talk about how technology shapes our psychology, how it enables mass movements, science fiction, the role of big Silicon Valley companies, and much more.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Digital Humanities sounds at first blush like a contradiction in terms: the intersection of our digital, technology-centric culture and the humanities – arts, literature, and philosophy. Aren't those like oil and water? But my guest illustrates just how important this discipline is by illuminating both of those fields from viewpoints I found fascinating and very different from what we normally encounter.

    Professor Caroline Bassett is the first Director of Cambridge Digital Humanities, an interdisciplinary research center at the University of Cambridge. She is a Fellow of Corpus Christi College and researches digital technologies and cultural change with a focus on AI. She co-founded the Sussex Humanities Lab, and at Cambridge she inaugurated the Master of Philosophy in Digital Humanities and last month launched the new doctoral programme in Digital Humanities.

    In part 1, we talk about what digital humanities is, how it intersects with AI, what science and the humanities have to learn from each other, Joseph Weizenbaum and the reactions to his ELIZA chatbot, Luddites, and how passively or otherwise we accept new technology. Caroline really made me see how what she calls "technocratic rationality," a way of thinking born of a technological culture accelerated by AI, reduces the novelty we can experience in the world – novelty we should certainly preserve.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Cognitive architecture deals in models of how the brain – or AI – does its magic. A challenging discipline, to say the least, and we are lucky to have a foremost cognitive architect on the show in the person of John Laird. Is cognitive architecture the gateway to artificial general intelligence?

    John is Principal Cognitive Architect and co-director of the Center for Integrated Cognition. He received his PhD from Carnegie Mellon University in 1985, working with famed early AI pioneer Allen Newell. He is the John L. Tishman Emeritus Professor of Engineering at the University of Michigan, where he was a faculty member for 36 years. He is a Fellow of AAAI, ACM, AAAS, and the Cognitive Science Society. In 2018, he was co-winner of the Herbert A. Simon Prize for Advances in Cognitive Systems.

    We talk about relationships between cognitive architectures and AGI, where explainability and transparency come in, Turing tests, where we could be in 10 years, how to recognize AGI, metacognition, and the Soar architecture.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.