Episodes

  • This and all episodes at: https://aiandyou.net/ .

    We're extending the conversation about AI in education to the front lines in this episode, with four very experienced and credentialed educators discussing their experiences and insights into AI in schools.

Jose Luis Navarro IV is the leading coach and consultant at the Navarro Group. He previously served as a Support Coordinator, leading innovative reforms in the Los Angeles Unified School District. Zack Kleypas is Superintendent of Schools in Thorndale, Texas, and was named 2023 Texas Secondary Principal of the Year by the Texas Association of Secondary School Principals. Jeff Austin is a former high school teacher and principal who now works as a coach for Teacher Powered Schools and Los Angeles Education Partnership. Jose Gonzalez is Chief Technology Officer for the Los Angeles County Office of Education and former Vice Mayor of the city of Cudahy, near Los Angeles.

In the conclusion, we talk about whether students need to read as much as they used to now that they have AI, fact-checking, some disturbing stories about the use of AI detectors in schools, where the panel sees these trends heading, what they’re doing to help students learn better in an AI world, and… Iron Man.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    We're extending the conversation about AI in education to the front lines in this episode, with four very experienced and credentialed educators discussing their experiences and insights into AI in schools.

Jose Luis Navarro IV is the leading coach and consultant at the Navarro Group. He previously served as a Support Coordinator, leading innovative reforms in the Los Angeles Unified School District. Zack Kleypas is Superintendent of Schools in Thorndale, Texas, and was named 2023 Texas Secondary Principal of the Year by the Texas Association of Secondary School Principals. Jeff Austin is a former high school teacher and principal who now works as a coach for Teacher Powered Schools and Los Angeles Education Partnership. Jose Gonzalez is Chief Technology Officer for the Los Angeles County Office of Education and former Vice Mayor of the city of Cudahy, near Los Angeles.

    We talk about how much kids were using GenAI without our knowing, how to turn GenAI in schools from a threat to an opportunity, the issue of cheating with ChatGPT, the discrepancy between how many workers are using AI and how many teachers are using it, how rules get made, confirmation bias and AI, using tools versus gaining competencies, and whether teachers will quit.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

Digital Humanities sounds at first blush like a contradiction in terms: the intersection of our digital, technology-centric culture, and the humanities, like arts, literature, and philosophy. Aren't those like oil and water? But my guest illustrates just how important this discipline is by illuminating both of those fields from viewpoints I found fascinating and very different from what we normally encounter.

Professor Caroline Bassett is the first Director of Cambridge Digital Humanities, an interdisciplinary research center at Cambridge University. She is a Fellow of Corpus Christi College and researches digital technologies and cultural change with a focus on AI. She co-founded the Sussex Humanities Lab, and at Cambridge she inaugurated the Master of Philosophy in Digital Humanities and last month launched the new doctoral programme in Digital Humanities.

    In the conclusion, we talk about how technology shapes our psychology, how it enables mass movements, science fiction, the role of big Silicon Valley companies, and much more.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

Digital Humanities sounds at first blush like a contradiction in terms: the intersection of our digital, technology-centric culture, and the humanities, like arts, literature, and philosophy. Aren't those like oil and water? But my guest illustrates just how important this discipline is by illuminating both of those fields from viewpoints I found fascinating and very different from what we normally encounter.

Professor Caroline Bassett is the first Director of Cambridge Digital Humanities, an interdisciplinary research center at Cambridge University. She is a Fellow of Corpus Christi College and researches digital technologies and cultural change with a focus on AI. She co-founded the Sussex Humanities Lab, and at Cambridge she inaugurated the Master of Philosophy in Digital Humanities and last month launched the new doctoral programme in Digital Humanities.

In part 1 we talk about what digital humanities is, how it intersects with AI, what science and the humanities have to learn from each other, Joseph Weizenbaum and the reactions to his ELIZA chatbot, Luddites, and how passively or otherwise we accept new technology. Caroline really made me see in particular how what she calls "technocratic rationality," a way of thinking born of a technological culture accelerated by AI, reduces the novelty we can experience in the world, a novelty we should certainly preserve.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Cognitive architecture deals in models of how the brain - or AI - does its magic. A challenging discipline to say the least, and we are lucky to have a foremost cognitive architect on the show in the person of John Laird. Is cognitive architecture the gateway to artificial general intelligence?

    John is Principal Cognitive Architect and co-director of the Center for Integrated Cognition. He received his PhD from Carnegie Mellon University in 1985, working with famed early AI pioneer Allen Newell. He is the John L. Tishman Emeritus Professor of Engineering at the University of Michigan, where he was a faculty member for 36 years. He is a Fellow of AAAI, ACM, AAAS, and the Cognitive Science Society. In 2018, he was co-winner of the Herbert A. Simon Prize for Advances in Cognitive Systems.

    We talk about relationships between cognitive architectures and AGI, where explainability and transparency come in, Turing tests, where we could be in 10 years, how to recognize AGI, metacognition, and the SOAR architecture.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Cognitive architecture deals in models of how the brain - or AI - does its magic. A challenging discipline to say the least, and we are lucky to have a foremost cognitive architect on the show in the person of John Laird. Is cognitive architecture the gateway to artificial general intelligence?

    John is Principal Cognitive Architect and co-director of the Center for Integrated Cognition. He received his Ph.D. from Carnegie Mellon University in 1985, working with famed early AI pioneer Allen Newell. He is the John L. Tishman Emeritus Professor of Engineering at the University of Michigan, where he was a faculty member for 36 years. He is a Fellow of AAAI, ACM, AAAS, and the Cognitive Science Society. In 2018, he was co-winner of the Herbert A. Simon Prize for Advances in Cognitive Systems.

    We talk about decision loops, models of the mind, symbolic versus neural models, and how large language models do reasoning.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    My guest today founded the United Kingdom's AI in Education initiative, but Sir Anthony Seldon is known to millions more there as the author of books about prime ministers, having just published one about Liz Truss.

    Sir Anthony is one of Britain’s leading contemporary historians, educationalists, commentators and political authors. For 20 years he was a transformative headmaster (“principal” in North American lingo) first at Brighton College and then Wellington College, one of the country’s leading independent schools. From 2015 to 2020 he served as Vice-Chancellor of the University of Buckingham. He is now head of Epsom College. He is the author or editor of over 35 books on contemporary history, including insider accounts on the last six prime ministers. In 2018 he wrote The Fourth Education Revolution, which anticipates stunning, unprecedented effects of AI on education. He was knighted in 2014 for services to education and modern political history.

    Managing to avoid nearly all the potential Truss references, I talked with him about how teachers should think about the size of the impact of AI on education, the benefits of AI to students and teachers, what the AI in Education initiative is doing, and what the best role of teachers in the classroom is in the AI age.

    All this plus our usual look at today's AI headlines!

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    How is work shifting from jobs to skills, and how do companies and individuals adapt to this AI-fueled change? I talk with Ravin Jesuthasan, co-author with Tanuj Kapilashrami of the new book, The Skills-Powered Organization: The Journey to The Next Generation Enterprise, released on October 1.

Ravin is Senior Partner and Global Leader for Transformation Services at Mercer. He is a member of the World Economic Forum's Future Skills Executive Board and of the Global Foresight Network. He is the author of the bestselling book Work without Jobs, as well as the books Transformative HR, Lead the Work, and Reinventing Jobs.

    He was featured on PBS’s documentary series “Future of Work,” has been recognized as one of the top 8 future of work influencers by Tech News, and as one of the top 100 HR influencers by HR Executive.

    In the conclusion, we talk about how AI is reshaping HR functions, including hiring, staffing, and restructuring processes, the role of AI in mentoring and augmenting work, the relationship between the future of work and the future of education, the real value of a degree today, and how AI affects privilege and inequality in the new work environment.

    All this plus our usual look at today's AI headlines!

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    How is work shifting from jobs to skills, and how do companies and individuals adapt to this AI-fueled change? I talk with Ravin Jesuthasan, co-author with Tanuj Kapilashrami of the new book, The Skills-Powered Organization: The Journey to The Next Generation Enterprise, released on October 1.

    Ravin is a futurist and authority on the future of work, human capital, and AI, and is Senior Partner and Global Leader for Transformation Services at Mercer. He is a member of the World Economic Forum's Future Skills Executive Board and of the Global Foresight Network. He is the author of the Wall Street Journal bestseller Work without Jobs, as well as the books Transformative HR, Lead the Work, and Reinventing Jobs.

    Ravin was featured on PBS’s documentary series “Future of Work,” has been recognized as one of the top 8 future of work influencers by Tech News, and as one of the top 100 HR influencers by HR Executive.

    In this first part, we talk about the impact of AI on work processes, the role of HR in adapting to these changes, and the evolving organizational models that focus on agility, flexibility, and skill-based work transitions. We also discuss AI's role in healthcare, and the importance of transferable skills in an AI-driven world.

    All this plus our usual look at today's AI headlines!

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

Artificial General Intelligence - AGI - an AI system that’s as intelligent as an average human being in all the ways that human beings are usually intelligent. Helping us understand what it means and how we might get there is Craig A. Kaplan, founder of iQ Company, where he invents advanced intelligence systems. He also founded and ran PredictWallStreet, a financial services firm whose clients included NASDAQ, TD Ameritrade, Schwab, and other well-known financial institutions. In 2018, PredictWallStreet harnessed the collective intelligence of millions of retail investors to power a top 10 hedge fund performance, and we talk about it in this episode.

    Craig is a visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.

In the conclusion of the interview, we talk about the details of the collective intelligence architecture of agents, why Craig says it’s safe, the morality of superintelligence, the risks of bad actors, and leading indicators of AGI.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

Artificial General Intelligence - AGI - an AI system that’s as intelligent as an average human being in all the ways that human beings are usually intelligent. Helping us understand what it means and how we might get there is Craig A. Kaplan, founder of iQ Company, where he invents advanced intelligence systems. He also founded and ran PredictWallStreet, a financial services firm whose clients included NASDAQ, TD Ameritrade, Schwab, and other well-known financial institutions. In 2018, PredictWallStreet harnessed the collective intelligence of millions of retail investors to power a top 10 hedge fund performance, and we talk about it in this episode.

    Craig is a visiting professor in computer science at the University of California, and earned master’s and doctoral degrees from famed robotics hub Carnegie Mellon University, where he co-authored research with the Nobel-Prize-winning economist and AI pioneer Dr. Herbert A. Simon.

    We talk about his work with Herb Simon, bounded rationality, connectionist vs symbolic architectures, jailbreaking large language models, collective intelligence architectures for AI, and a lot more!

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    We are talking about international governance of AI again today, a field that is just growing and growing as governments across the globe grapple with the seemingly intractable idea of regulating something they don’t understand. Helping them understand that is Markus Anderljung, Director of Policy and Research at the Centre for the Governance of AI in the UK. He aims to produce rigorous recommendations for governments and AI companies, researching frontier AI regulation, responsible cutting-edge development, national security implications of AI, and compute governance. He is an Adjunct Fellow at the Center for a New American Security, and a member of the OECD AI Policy Observatory’s Expert Group on AI Futures. He was previously seconded to the UK Cabinet Office as a Senior Policy Specialist.

    I know “governance” sounds really dry and a million miles away from the drama of existential threats, and jobs going away, and loss of privacy on a global scale; but governance is exactly the mechanism by which we can hope to do something about all of those things. Whenever you say, or you hear someone say, “Someone ought to do something about that,” governance is what answers that call.

    In the conclusion, we talk about verification processes, ingenious schemes to verify hardware platforms, the frontier AI safety commitments, and who should set safety standards for the industry.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    We are talking about international governance of AI again today, a field that is just growing and growing as governments across the globe grapple with the seemingly intractable idea of regulating something they don’t understand. Helping them understand that is Markus Anderljung, Director of Policy and Research at the Centre for the Governance of AI in the UK. He aims to produce rigorous recommendations for governments and AI companies, researching frontier AI regulation, responsible cutting-edge development, national security implications of AI, and compute governance. He is an Adjunct Fellow at the Center for a New American Security, and a member of the OECD AI Policy Observatory’s Expert Group on AI Futures. He was previously seconded to the UK Cabinet Office as a Senior Policy Specialist.

    I know “governance” sounds really dry and a million miles away from the drama of existential threats, and jobs going away, and loss of privacy on a global scale; but governance is exactly the mechanism by which we can hope to do something about all of those things. Whenever you say, or you hear someone say, “Someone ought to do something about that,” governance is what answers that call.

We talk about just what the Centre is, what it does and how it does it, and definitions of artificial general intelligence insofar as they affect governance – just what is the difference between training a system with 10^25 and 10^26 FLOPs, for instance? And also in this part Markus will talk about how monitoring and verification might specifically work.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Virtually everything that’s difficult about getting computers to do work for us is in getting them to understand our question or request and in our understanding their answer. How we interact with them is the problem.

    And that's where Sophie Kleber comes in. She is the UX – that’s User Experience – Director for the Future of Work at Google and an expert in ethical AI and future human-machine interaction. She deeply understands the emotional development of automated assistants, artificial intelligence, and physical spaces. Sophie develops technology that enables individuals to be their best selves. Before joining Google, Sophie held the Global Executive Creative Director role at Huge, collaborating with brands like IKEA and Thomson Reuters. She holds an MA in Communication Design and an MBA in Product Design, and is a Fulbright fellow.

In the conclusion of our interview, we talk about how she got into the user experience field, the emergence of a third paradigm of user interfaces, the future of smart homes, privacy, large language models coming to consumer devices, and brain-computer interfaces.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Virtually everything that’s difficult about getting computers to do work for us is in getting them to understand our question or request and in our understanding their answer. How we interact with them is the problem.

    And that's where Sophie Kleber comes in. She is the UX – that’s User Experience – Director for the Future of Work at Google and an expert in ethical AI and future human-machine interaction. She deeply understands the emotional development of automated assistants, artificial intelligence, and physical spaces. Sophie develops technology that enables individuals to be their best selves. Before joining Google, Sophie held the Global Executive Creative Director role at Huge, collaborating with brands like IKEA and Thomson Reuters. She holds an MA in Communication Design and an MBA in Product Design, and is a Fulbright fellow.

    We talk about the Uncanny Valley and how we relate to computers as though they were human or inhuman, and what if they looked like Bugs Bunny. We talk about the environments and situations where some people have intimate relationships with AIs, gender stereotyping in large language models, and where emotional interactions with computers help or hinder.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Teachers all over the world right now are having similar thoughts: Is AI going to take my job? How do I deal with homework that might have been done by ChatGPT? I know, because I've talked with many teachers, and these are universal concerns.

    So I'm visiting the topic of AI in education - not for the first time, not for the last. There are important and urgent issues to tackle; they become most acute at the high school level, but this episode will be useful for all levels.

The reason it's so important for me to work with schools as an AI change management consultant is that there's no need for teachers to fear for their jobs. They are doing the most important job on the planet right now because they are literally educating the generation that is going to save the world. And generative AI has not created a learning problem: it has created learning opportunities. It has not created a teaching problem; it has created teaching opportunities. It has, however, created an assessment problem, and I'll talk about that.

    Kids need their human teachers more than ever before to model for them how to deal with disruption from technology, because change will never again happen as slowly as it does today, and all of their careers will be disrupted far more than anyone's is today. No student is going to remember something ChatGPT said for the rest of their life. The teacher’s job is to focus on the qualities that the AI cannot embody—the personal interactions that occur face to face when the teacher makes that lasting impression that inspires the student.

    Let's have honest, deep, and productive conversations about these issues now. A new school year is approaching and this is the time.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Is work heading for utopia? My guest today is John Danaher, senior lecturer in law at the University of Galway and author of the 2019 book, Automation and Utopia: Human Flourishing in a World without Work, which is an amazingly broad discourse on the future of work ranging from today’s immediate issues to the different kinds of utopia – or dystopia, depending on your viewpoint – ultimately possible when automation becomes capable of replicating everything that humans do.

John has published over 40 papers on topics including the risks of advanced AI, the meaning of life in the future of work, the ethics of human enhancement, the intersection of law and neuroscience, the utility of brain-based lie detection, and the philosophy of religion. He is co-editor of Robot Sex: Social and Ethical Implications from MIT Press, and his work has appeared in The Guardian, Aeon, and The Philosopher’s Magazine.

    In the conclusion of the interview we talk about generative AI extending our minds, the Luddite Fallacy and why this time things will be different, the effects of automation on class structure, and… Taylor Swift.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

    Is work heading for utopia? My guest today is John Danaher, senior lecturer in law at the University of Galway and author of the 2019 book, Automation and Utopia: Human Flourishing in a World without Work, which is an amazingly broad discourse on the future of work ranging from today’s immediate issues to the different kinds of utopia – or dystopia, depending on your viewpoint – ultimately possible when automation becomes capable of replicating everything that humans do.

John has published over 40 papers on topics including the risks of advanced AI, the meaning of life in the future of work, the ethics of human enhancement, the intersection of law and neuroscience, the utility of brain-based lie detection, and the philosophy of religion. He is co-editor of Robot Sex: Social and Ethical Implications from MIT Press, and his work has appeared in The Guardian, Aeon, and The Philosopher’s Magazine.

    In the first part of the interview we talk about how much jobs may be automated and the methodology behind studies of that, the impact of automation on job satisfaction, what’s happening in academia, and much more.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

Helping the British Government understand AI since 2016 is our guest, Lord Tim Clement-Jones, co-founder and co-chair of Britain's All-Party Parliamentary Group on Artificial Intelligence. He is also former Liberal Democrat House of Lords spokesperson for Science, Innovation and Technology, and former Chair of the House of Lords Select Committee on Artificial Intelligence, which reported in 2018 with “AI in the UK: Ready, Willing and Able?” and a follow-up report in 2020, “AI in the UK: No Room for Complacency.” His new book, "Living with the Algorithm: Servant or Master?: AI Governance and Policy for the Future," came out in the UK in March, with a North American release date of July 18.

    In the second half, we talk about elections, including the one just held in the UK, and disinformation, what AI and robots do to the flow of capital, the effects of AI upon education and enterprise culture, privacy and making AI accountable and trustworthy.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.

  • This and all episodes at: https://aiandyou.net/ .

Helping the British Government understand AI since 2016 is our guest, Lord Tim Clement-Jones, co-founder and co-chair of Britain's All-Party Parliamentary Group on Artificial Intelligence. He is also former Liberal Democrat House of Lords spokesperson for Science, Innovation and Technology, and former Chair of the House of Lords Select Committee on Artificial Intelligence, which reported in 2018 with “AI in the UK: Ready, Willing and Able?” and a follow-up report in 2020, “AI in the UK: No Room for Complacency.” His new book, "Living with the Algorithm: Servant or Master?: AI Governance and Policy for the Future," came out in the UK in March, with a North American release date of July 18.

In this first part, Tim gives a big picture of how AI regulation has been proceeding on the global stage since before large language models were a thing, giving us the context that took us from the Asilomar Principles to today’s Hiroshima Principles, the EU AI Act, and the new ISO standard 42001 for AI. And we talk about long-term planning, intellectual property rights, the effects of the open letters that called for a pause or moratorium on model training, and much more.

    All this plus our usual look at today's AI headlines.

    Transcript and URLs referenced at HumanCusp Blog.