Episodes
-
This and all episodes at: https://aiandyou.net/ .
Our relationship with time is dysfunctional. Here to help us explore possibly the most critical effect of AI on the pace of life is Oliver Burkeman, author of the best-selling self-help book Four Thousand Weeks: Time Management for Mortals and former author of the psychology column "This Column Will Change Your Life" in The Guardian.
Most of us can attest to being severely overworked, with a shrinking amount of personal time left over. This is true despite the introduction into our lives of a huge amount of technology, from the PC to the Internet. Why have tools like email, Google, and instant messaging not reduced our workload and stress? In fact, it's not hard to believe that they are responsible for making those things worse. In which case, we must ask: what effect will unleashing AI - which accelerates everything it touches - have on our work life?
This is exactly the thought space that Oliver inhabits, and his work has made a major difference in my own life. Read Oliver's posts and subscribe to his newsletter at OliverBurkeman.com.
In the conclusion of the interview, we talk about whether this is Luddism, the influence of the Silicon Valley billionairesâ pursuit of immortality, the appropriate use of AI to save us time, and what will remain constant throughout any amount of technological evolution.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
Our relationship with time is dysfunctional. Here to help us explore possibly the most critical effect of AI on the pace of life is Oliver Burkeman, author of the best-selling self-help book Four Thousand Weeks: Time Management for Mortals and former author of the psychology column "This Column Will Change Your Life" in The Guardian.
Most of us can attest to being severely overworked, with a shrinking amount of personal time left over. This is true despite the introduction into our lives of a huge amount of technology, from the PC to the Internet. Why have tools like email, Google, and instant messaging not reduced our workload and stress? In fact, it's not hard to believe that they are responsible for making those things worse. In which case, we must ask: what effect will unleashing AI - which accelerates everything it touches - have on our work life?
This is exactly the thought space that Oliver inhabits, and his work has made a major difference in my own life. Read Oliver's posts and subscribe to his newsletter at OliverBurkeman.com.
In this first half of the interview, we talk about the parable of the rocks in the jar and how it's a pernicious lie, the psychology of perceiving life as finite, and how technology has not eased our work stress and may be making it worse through induced demand.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
Mounir Shita, CEO of Kimera Systems, is author of the upcoming book The Science of Intelligence, which contains some interesting and thought-provoking explorations of intelligence that had me thinking about Pedro Domingos' book The Master Algorithm. We talk about theories of AGI, free will, egg smashing, and Mounir's prototype smartphone app that learned how to silence itself in a movie theater!
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
There is, perhaps, no more burning topic at the moment than the future of work, and so I am particularly grateful to welcome to the show Gary Bolles, author of The Next Rules of Work and a co-founder of eParachute.com, helping job-hunters & career changers with programs inspired by the evergreen book "What Color Is Your Parachute?" written by his father. Gary's courses on LinkedIn Learning have over 1 million learners, and he is a former Silicon Valley executive and a co-founder of SoCap, the world's largest gathering of impact entrepreneurs and investors.
Gary is adjunct Chair for the Future of Work for Singularity University, and as a partner in the consulting agency Charrette, he helps organizations, communities, educators and governments develop strategies for "what's next."
In the conclusion of the interview, we talk about unbossing and holacracies, how AI will impact organizational structures, fear, FOMO, and agency, and the Singularity University.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
There is, perhaps, no more burning topic at the moment than the future of work, and so I am particularly grateful to welcome to the show Gary Bolles, author of The Next Rules of Work and a co-founder of eParachute.com, helping job-hunters & career changers with programs inspired by the evergreen book "What Color Is Your Parachute?" written by his father. Gary's courses on LinkedIn Learning have over 1 million learners, and he is a former Silicon Valley executive and a co-founder of SoCap, the world's largest gathering of impact entrepreneurs and investors.
Gary is adjunct Chair for the Future of Work for Singularity University, and as a partner in the consulting agency Charrette, he helps organizations, communities, educators and governments develop strategies for "what's next."
In the first half of the interview, we talk about the gig economy, the new rules of work, what ChatGPT did to the job market, and an interesting concept called the community operating system.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
My guest is the co-host of the Good Robot Podcast, "Where technology meets feminism." Eleanor Drage is a Senior Research Fellow at The Leverhulme Centre for the Future of Intelligence at the University of Cambridge and was named in the Top 100 Brilliant Women in AI Ethics of 2022. She is also co-author of a recent book of the same name: The Good Robot: Why Technology Needs Feminism.
In this conclusion of the interview, we talk about unconscious bias, hiring standards, stochastic parrots, science fiction, and the early participation of women in computing.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
My guest is the co-host of the Good Robot Podcast, "Where technology meets feminism." Eleanor Drage is a Senior Research Fellow at The Leverhulme Centre for the Future of Intelligence at the University of Cambridge and was named in the Top 100 Brilliant Women in AI Ethics of 2022. She is also co-author of a recent book of the same name: The Good Robot: Why Technology Needs Feminism.
We talk about all that, plus some quantum mechanics, saunas, ham, lesbian bacteria, and… well, it'll all make more sense when you listen.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
My guest is a really good role model for how a young person can carve out an important niche in the AI space, especially for people who aren't inclined to the computer science side of the field. Fiona McEvoy is author of the blog YouTheData.com, with a specific focus on the intersection of technology and society. She was named as one of "30 Influential Women Advancing AI in San Francisco" by RE•WORK, and in 2020 was honored in the inaugural Brilliant Women in AI Ethics Hall of Fame, established to recognize "Brilliant women who have made exceptional contributions to the space of AI Ethics and diversity."
We talk about her journey to becoming an influential communicator and the ways she carries that out, what it's like for young people in this social cauldron being heated by AI, and some of the key issues affecting them.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
At the end of February there was a landmark conference, held in Panama City and online: the Beneficial AGI Summit. AGI, of course, stands for Artificial General Intelligence, the Holy Grail of AI. My guest is Jerome C. Glenn, one of the organizers and sponsors, who has a long and storied history of pivotal leadership and contributions to addressing existential issues.
He is the co-founder and CEO of The Millennium Project on global futures research, was contracted by the European Commission to write the AGI paper for their Horizon 2025-2027 program, was the Washington, DC representative for the United Nations University as executive director of their American Council, and was instrumental in naming the first Space Shuttle the Enterprise, banning the first space weapon (the Fractional Orbital Bombardment System) in SALT II, and shared the 2022 Lifeboat Guardian Award with Volodymyr Zelenskyy.
He has over 50 years of futures research experience working for governments, international organizations, and private industry in Science & Technology Policy, Environmental Security, Economics, Education, Defense, Space, and much more.
In this second half we talk about approaches for actually controlling the development of AGI that were developed at the conference, the AI arms race, and… why Jerome doesn't like the term futurism.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
At the end of February there was a landmark conference, held in Panama City and online: the Beneficial AGI Summit. AGI, of course, stands for Artificial General Intelligence, the Holy Grail of AI. My guest is Jerome C. Glenn, one of the organizers and sponsors, who has a long and storied history of pivotal leadership and contributions to addressing existential issues.
He is the co-founder and CEO of The Millennium Project on global futures research, was contracted by the European Commission to write the AGI paper for their Horizon 2025-2027 program, was the Washington, DC representative for the United Nations University as executive director of their American Council, and was instrumental in naming the first Space Shuttle the Enterprise, banning the first space weapon (the Fractional Orbital Bombardment System) in SALT II, and shared the 2022 Lifeboat Guardian Award with Volodymyr Zelenskyy.
He has over 50 years of futures research experience working for governments, international organizations, and private industry in Science & Technology Policy, Environmental Security, Economics, Education, Defense, Space, and much more.
In this first half we talk about his recent work with groups of the United Nations General Assembly, and his decentralized approach to grassroots empowerment in both implementing AGI and working together to regulate it.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
How is our relationship with bots - robots and chatbots - evolving and what does it mean? We're talking with Eve Herold, who has a new book, Robots and the People Who Love Them: Holding on to our Humanity in an Age of Social Robots.
Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She writes about issues at the crossroads of science and society, and has been featured in Vice, Medium, The Boston Globe, The Wall Street Journal, Prevention, The Kiplinger Report, and The Washington Post and on MSNBC, NPR, and CNN.
In this part we talk about how robots and AI can bring out the best and the worst in us, the responsibilities of roboticists, the difference between robots having emotions and our believing that they have emotions, and how this will evolve over the next decade or more.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
How is our relationship with bots - robots and chatbots - evolving and what does it mean? We're talking with Eve Herold, who has a new book, Robots and the People Who Love Them: Holding on to our Humanity in an Age of Social Robots.
Eve is an award-winning science writer and consultant in the scientific and medical nonprofit space. She writes about issues at the crossroads of science and society, and has been featured in Vice, Medium, The Boston Globe, The Wall Street Journal, Prevention, The Kiplinger Report, and The Washington Post and on MSNBC, NPR, and CNN.
In this part we talk about how people - including soldiers in combat - get attached to AIs and robots, we discuss ELIZA, Woebot, and Samantha from the movie Her, and the role of robots in helping take care of us physically and emotionally, among many other topics.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky, where he is also the director of the Cyber Security Laboratory. Roman is here to talk about his new book, AI: Unexplainable, Unpredictable, Uncontrollable.
Roman has been central in the field of warning about the Control Problem and Value Alignment Problems of AI from the very beginning, back when doing so earned people some scorn from practitioners, yet Roman is a professor of computer science and applies rigorous methods to his analyses of these problems. It's those rigorous methods that we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI.
In this part we talk about how we should respond to the problem of unsafe AI development and how Roman and his community are addressing it, what he would do with infinite resources, and… the threat Roman's coffee cup poses to humanity.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
Returning as our first three-peat guest is Roman Yampolskiy, tenured Associate Professor of Computer Science at the University of Louisville in Kentucky, where he is also the director of the Cyber Security Laboratory. Roman is here to talk about his new book, AI: Unexplainable, Unpredictable, Uncontrollable.
Roman has been central in the field of warning about the Control Problem and Value Alignment Problems of AI from the very beginning, back when doing so earned people some scorn from practitioners, yet Roman is a professor of computer science and applies rigorous methods to his analyses of these problems. It's those rigorous methods that we tap into in this interview, because Roman connects principles of computer science to the issue of existential risk from AI.
In this part we talk about why this work is important to Roman, the dimensions of unexplainability, unpredictability, and uncontrollability, the urgency of the problems, and drill down into why today's AI is not safe and why it's getting worse.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
Artificial General Intelligence: Once upon a time, this was considered a pipe dream, a fantasy of dreamers with no sense of the practical limitations of real AI.
That was last year.
Now, AGI is an explicit goal of many enterprises, notably among them Simuli. Their CEO, Rachel St. Clair, co-founded the company with Ben Goertzel, who has also been on this show. Rachel is a Fellow of the Center for Future Mind, with a doctorate in Complex Systems and Brain Sciences from Florida Atlantic University. She researches artificial general intelligence, focusing on complex systems and neuromorphic learning algorithms. Her goal is to "help create human-like, conscious, artificial, general intelligence to help humans solve the worst of our problems."
In the conclusion, we talk about the role of sleep in human cognition, AGI and consciousness, and… penguins.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
Artificial General Intelligence: Once upon a time, this was considered a pipe dream, a fantasy of dreamers with no sense of the practical limitations of real AI.
That was last year.
Now, AGI is an explicit goal of many enterprises, notably among them Simuli. Their CEO, Rachel St. Clair, co-founded the company with Ben Goertzel, who has also been on this show. Rachel is a Fellow of the Center for Future Mind, with a doctorate in Complex Systems and Brain Sciences from Florida Atlantic University. She researches artificial general intelligence, focusing on complex systems and neuromorphic learning algorithms. Her goal is to "help create human-like, conscious, artificial, general intelligence to help humans solve the worst of our problems."
In part 1 we talk about markers for AGI, distinctions between it and narrow artificial intelligence, self-driving cars, robotics, and embodiment, and… disco balls.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
Since I published my first book on AI in 2017, the public conversation and perception of the existential risk - risk to our existence - from AI has evolved and broadened. I talk about how that conversation has changed from Nick Bostrom's Superintelligence, the "hard take-off" and what that means, and through to the tossing about of cryptic signatures like p(doom) and e/acc, which I explain and critique.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System.
In part two we talk about the psychology of combat decisions, AI and strategic defense, and nuclear conflict destabilization.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
Increasing AI in weapons: is this a good thing (more selective targeting, fewer innocents killed) or bad (risk of losing control in critical situations)? It's hard to decide where to stand, and many people can't help but think of Skynet and don't get further. Here to help us pick through those arguments, calling from Munich is my guest, Frank Sauer, head of research at the Metis Institute for Strategy and Foresight and a senior research fellow at the Bundeswehr University in Munich. He has a Ph.D. from Goethe University in Frankfurt and is an expert in the field of international politics with a focus on security. His research focuses on the military application of artificial intelligence and robotics. He is a member of the International Committee for Robot Arms Control. He also serves on the International Panel on the Regulation of Autonomous Weapons and the Expert Commission on the responsible use of technologies in the European Future Combat Air System.
In this first part we talk about the ethics of autonomy in weapons systems and compare human to machine decision making in combat.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.
-
This and all episodes at: https://aiandyou.net/ .
Literally writing the book on AI is my guest Peter Norvig, who is coauthor of the standard text, Artificial Intelligence: A Modern Approach, used in 135 countries and 1500+ universities. Peter is a Distinguished Education Fellow at Stanford's Human-Centered AI Institute and a researcher at Google. He was head of NASA Ames's Computational Sciences Division and a recipient of NASA's Exceptional Achievement Award in 2001. He has taught at USC, Stanford, and Berkeley, from which he received a PhD in 1986 and the distinguished alumni award in 2006.
Heâs also the author of the worldâs longest palindromic sentence.
In this second half of the interview, we talk about how the rise in prominence of AI in the general population has changed how he communicates about AI, his feelings about the calls for a slowdown in model development, his thinking about general intelligence in large language models, and AI Winters.
All this plus our usual look at today's AI headlines.
Transcript and URLs referenced at HumanCusp Blog.