Episodes

  • “I think it's very early for us to see how AI is going to impact us all, especially documentary filmmakers. And so I embrace technology, and I encourage everyone as filmmakers to do so. We're looking at how AI is facilitating filmmakers to tell stories, create more visual worlds. I think that right now we're in the play phase of AI, where there's a lot of new tools and you're playing in a sandbox with them to see how they will develop.

    I don't think that AI has developed to the extent that it is in some way dramatically changing the film industry as we speak, but in the next two years, it will. We have yet to see how it will. As someone who creates films, I always experiment, and then I see what it is that I'd like to take from that technology as I move forward.”

    Sharmeen Obaid-Chinoy is an Oscar and Emmy award-winning Canadian-Pakistani filmmaker whose work highlights extraordinary women and their stories. She earned her first Academy Award in 2012 for her documentary Saving Face, about the Pakistani women targeted by brutal acid attacks. Today, Obaid-Chinoy is the first female film director to have won two Oscars by the age of 37. In 2023, it was announced that Obaid-Chinoy would direct the next Star Wars film starring Daisy Ridley. Her most recent project, co-directed alongside Trish Dalton, is the new documentary Diane von Fürstenberg: Woman in Charge, about the trailblazing Belgian fashion designer who invented the wrap dress 50 years ago. The film had its world premiere as the opening night selection at the 2024 Tribeca Festival on June 5th and premiered on June 25th on Hulu in the U.S. and Disney+ internationally. A product of Obaid-Chinoy's incredibly talented female filmmaking team, Woman in Charge provides an intimate look into Diane von Fürstenberg's life and accomplishments and chronicles the trajectory of her signature dress from an innovative fashion statement to a powerful symbol of feminism.

    www.hulu.com/movie/diane-von-furstenberg-woman-in-charge-95fb421e-b7b1-4bfc-9bbf-ea666dba0b02
    https://www.disneyplus.com/movies/diane-von-furstenberg-woman-in-charge/1jrpX9AhsaJ6
    https://socfilms.com

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • “Having worked in this space for seven years, really since the inception of deepfakes in late 2017, for some time it was possible with just a few hours a day to really be on top of the key technical developments. It's now truly global. AI-generated media have really exploded, particularly in the last 18 months, but they've been bubbling under the surface for some time in various different use cases. The disinformation and deepfakes in the political sphere really match some of the fears held five, six years ago, which at the time were more speculative. The fears around how deepfakes could be used in propaganda efforts, in attempts to destabilize democratic processes, to try and influence elections have really reached a fever pitch. Up until this year, I've always really said, ‘Well, look, we've got some fairly narrow examples of deepfakes and AI-generated content being deployed, but it's nowhere near on the scale or the effectiveness required to actually have that kind of massive impact.’ This year, it's no longer a question of are deepfakes going to be used; it's now how effective are they actually going to be? I'm worried. I think a lot of the discourse around gen AI and so on is very much you're either an AI zoomer or an AI doomer, right? But for me, I don't think we need to have this kind of mutually exclusive attitude. I think we can look at different use cases. There are really powerful and quite amazing use cases, but those very same baseline technologies can be weaponized if they're not developed responsibly with the appropriate safety measures, guardrails, and understanding from the people using and developing them. So it is really about that balancing act for me. And a lot of my research over the years has been focused on mapping the evolution of AI-generated content as a malicious tool.”

    Henry Ajder is an advisor, speaker, and broadcaster working at the frontier of the generative AI and the synthetic media revolution. He advises organizations on the opportunities and challenges these technologies present, including Adobe, Meta, The European Commission, BBC, The Partnership on AI, and The House of Lords. Previously, Henry led Synthetic Futures, the first initiative dedicated to ethical generative AI and metaverse technologies, bringing together over 50 industry-leading organizations. Henry presented the BBC documentary series, The Future Will be Synthesised.
    www.henryajder.com
    www.bbc.co.uk/programmes/m0017cgr

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • How is artificial intelligence redefining our perception of reality and truth? Can AI be creative? And how is it changing art and innovation? Does AI-generated perfection detach us from reality and genuine human connection?

    Henry Ajder is an advisor, speaker, and broadcaster working at the frontier of the generative AI and the synthetic media revolution. He advises organizations on the opportunities and challenges these technologies present, including Adobe, Meta, The European Commission, BBC, The Partnership on AI, and The House of Lords. Previously, Henry led Synthetic Futures, the first initiative dedicated to ethical generative AI and metaverse technologies, bringing together over 50 industry-leading organizations. Henry presented the BBC documentary series, The Future Will be Synthesised.

    “Having worked in this space for seven years, really since the inception of deepfakes in late 2017, for some time it was possible with just a few hours a day to really be on top of the key technical developments. It's now truly global. AI-generated media have really exploded, particularly in the last 18 months, but they've been bubbling under the surface for some time in various different use cases. The disinformation and deepfakes in the political sphere really match some of the fears held five, six years ago, which at the time were more speculative. The fears around how deepfakes could be used in propaganda efforts, in attempts to destabilize democratic processes, to try and influence elections have really reached a fever pitch. Up until this year, I've always really said, ‘Well, look, we've got some fairly narrow examples of deepfakes and AI-generated content being deployed, but it's nowhere near on the scale or the effectiveness required to actually have that kind of massive impact.’ This year, it's no longer a question of are deepfakes going to be used; it's now how effective are they actually going to be? I'm worried. I think a lot of the discourse around gen AI and so on is very much you're either an AI zoomer or an AI doomer, right? But for me, I don't think we need to have this kind of mutually exclusive attitude. I think we can look at different use cases. There are really powerful and quite amazing use cases, but those very same baseline technologies can be weaponized if they're not developed responsibly with the appropriate safety measures, guardrails, and understanding from the people using and developing them. So it is really about that balancing act for me. And a lot of my research over the years has been focused on mapping the evolution of AI-generated content as a malicious tool.”
    www.henryajder.com
    www.bbc.co.uk/programmes/m0017cgr

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • “When AI takes over our information sources and pollutes them to a certain point, we'll stop believing that there is any such thing as truth anymore. ‘We now live in an era in which the truth is behind a paywall and the lies are free.’ One thing people don't realize is that the goal of disinformation is not simply to get you to believe a falsehood. It's to demoralize you into giving up on the idea of truth, to polarize us around factual issues, to get us to distrust people who don't believe the same lie. And even if somebody doesn't believe the lie, it can still make them cynical. I mean, we've all had friends who don't even watch the news anymore. There's a chilling quotation from Hannah Arendt about how when you always lie to someone, the consequence is not necessarily that they believe the lie, but that they begin to lose their critical faculties, that they begin to give up on the idea of truth, and so they can't judge for themselves what's true and what's false anymore. That's the scary part, the nexus between post-truth and autocracy. That's what the authoritarian wants: not necessarily to get you to believe the lie, but to give up on truth, because when you give up on truth, then there's no blame, no accountability, and they can just assert their power. There's a connection between disinformation and denial.”

    Lee McIntyre is a Research Fellow at the Center for Philosophy and History of Science at Boston University and a Senior Advisor for Public Trust in Science at the Aspen Institute. He holds a B.A. from Wesleyan University and a Ph.D. in Philosophy from the University of Michigan. He has taught philosophy at Colgate University, Boston University, Tufts Experimental College, Simmons College, and Harvard Extension School (where he received the Dean’s Letter of Commendation for Distinguished Teaching). Formerly Executive Director of the Institute for Quantitative Social Science at Harvard University, he has also served as a policy advisor to the Executive Dean of the Faculty of Arts and Sciences at Harvard and as Associate Editor in the Research Department of the Federal Reserve Bank of Boston. His books include On Disinformation and How to Talk to a Science Denier and the novels The Art of Good and Evil and The Sin Eater.

    https://leemcintyrebooks.com
    www.penguinrandomhouse.com/books/730833/on-disinformation-by-lee-mcintyre
    https://mitpress.mit.edu/9780262545051/
    https://leemcintyrebooks.com/books/the-art-of-good-and-evil/
    https://leemcintyrebooks.com/books/the-sin-eater/

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • How do we fight for truth and protect democracy in a post-truth world? How does bias affect our understanding of facts?

    Lee McIntyre is a Research Fellow at the Center for Philosophy and History of Science at Boston University and a Senior Advisor for Public Trust in Science at the Aspen Institute. He holds a B.A. from Wesleyan University and a Ph.D. in Philosophy from the University of Michigan. He has taught philosophy at Colgate University, Boston University, Tufts Experimental College, Simmons College, and Harvard Extension School (where he received the Dean’s Letter of Commendation for Distinguished Teaching). Formerly Executive Director of the Institute for Quantitative Social Science at Harvard University, he has also served as a policy advisor to the Executive Dean of the Faculty of Arts and Sciences at Harvard and as Associate Editor in the Research Department of the Federal Reserve Bank of Boston. His books include On Disinformation and How to Talk to a Science Denier and the novels The Art of Good and Evil and The Sin Eater.

    “When AI takes over our information sources and pollutes them to a certain point, we'll stop believing that there is any such thing as truth anymore. ‘We now live in an era in which the truth is behind a paywall and the lies are free.’ One thing people don't realize is that the goal of disinformation is not simply to get you to believe a falsehood. It's to demoralize you into giving up on the idea of truth, to polarize us around factual issues, to get us to distrust people who don't believe the same lie. And even if somebody doesn't believe the lie, it can still make them cynical. I mean, we've all had friends who don't even watch the news anymore. There's a chilling quotation from Hannah Arendt about how when you always lie to someone, the consequence is not necessarily that they believe the lie, but that they begin to lose their critical faculties, that they begin to give up on the idea of truth, and so they can't judge for themselves what's true and what's false anymore. That's the scary part, the nexus between post-truth and autocracy. That's what the authoritarian wants: not necessarily to get you to believe the lie, but to give up on truth, because when you give up on truth, then there's no blame, no accountability, and they can just assert their power. There's a connection between disinformation and denial.”

    https://leemcintyrebooks.com
    www.penguinrandomhouse.com/books/730833/on-disinformation-by-lee-mcintyre
    https://mitpress.mit.edu/9780262545051/
    https://leemcintyrebooks.com/books/the-art-of-good-and-evil/
    https://leemcintyrebooks.com/books/the-sin-eater/

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • “I think one very big example of this phenomenon is computational irreducibility: the idea that even though you know the rules by which something operates, that doesn't immediately tell you everything about what the system will do. You might have to follow a billion steps in the actual operation of those rules to find out what the system does.

    There's no way to jump ahead and just say, "the answer will be such and such." Well, computational irreducibility, in a sense, goes against the hope, at least, of, for example, mathematical science. A lot of the hope of mathematical science is that we'll just work out a formula for how something is going to operate. We don't have to kind of go through the steps and watch it operate. We can just kind of jump to the end and apply the formula. Well, computational irreducibility says that that isn't something you can generally do. It says that there are plenty of things in the world where you have to kind of go through the steps to see what will happen.

    In a sense, even though that's kind of a bad thing for science, it says that there's sort of limitations on the extent to which we can use science to predict things. It's sort of a good thing, I think, for leading one's life because it means that as we experience the passage of time, in a sense, that corresponds to the sort of irreducible computation of what we will do.

    That sort of tells one that the passage of time has a meaningful effect. There's something where you can't just jump to the end and say, "I don't need to live all the years of my life. I can just go and say the result will be such and such." No, actually, there's something sort of irreducible about that actual progression of time and the actual living of those years of life, so to speak. So that's one of the enriching aspects of this concept of computational irreducibility. It's a pretty important concept. People right now will think of it as this kind of geeky scientific idea, but in the future, it's going to be a pivotal kind of thing for understanding how one should conduct the future of human society.”
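
    The "no way to jump ahead" point has a classic concrete illustration in Wolfram's own work: the Rule 30 cellular automaton. The minimal Python sketch below is ours, not from the episode; it shows how a one-line update rule can produce behavior with no known closed-form shortcut, so the only way to learn the state after n steps is to run all n steps.

```python
# Rule 30 cellular automaton: a one-line update rule whose long-run
# behavior is computationally irreducible -- as far as anyone knows,
# finding the state after n steps requires running all n steps.

def rule30_step(cells):
    """One step of Rule 30: new = left XOR (center OR right), wrapping at the edges."""
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

def evolve(cells, steps):
    """Run the automaton forward -- step by step, with no formula to jump ahead."""
    for _ in range(steps):
        cells = rule30_step(cells)
    return cells

# Start from a single "on" cell and print a few rows of the evolution.
state = [0] * 31
state[15] = 1
for _ in range(5):
    print("".join("#" if c else "." for c in state))
    state = rule30_step(state)
```

    Rule 30 is the example Wolfram himself reaches for most often: despite the triviality of the rule, the pattern it grows from a single cell is complex enough that its center column has been used as a source of randomness.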

    Stephen Wolfram is a computer scientist, mathematician, and theoretical physicist. He is the founder and CEO of Wolfram Research, the creator of Mathematica, Wolfram|Alpha, and the Wolfram Language. He received his PhD in theoretical physics at Caltech by the age of 20 and in 1981, became the youngest recipient of a MacArthur Fellowship. Wolfram authored A New Kind of Science and launched the Wolfram Physics Project. He has pioneered computational thinking and has been responsible for many discoveries, inventions and innovations in science, technology and business.

    www.stephenwolfram.com
    www.wolfram.com
    www.wolframalpha.com
    www.wolframscience.com/nks/
    www.amazon.com/dp/1579550088/ref=nosim?tag=turingmachi08-20
    www.wolframphysics.org
    www.wolfram-media.com/products/what-is-chatgpt-doing-and-why-does-it-work/

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • How can computational language help decode the mysteries of nature and the universe? What is ChatGPT doing and why does it work? How will AI affect education, the arts and society?

    Stephen Wolfram is a computer scientist, mathematician, and theoretical physicist. He is the founder and CEO of Wolfram Research, the creator of Mathematica, Wolfram|Alpha, and the Wolfram Language. He received his PhD in theoretical physics at Caltech by the age of 20 and in 1981, became the youngest recipient of a MacArthur Fellowship. Wolfram authored A New Kind of Science and launched the Wolfram Physics Project. He has pioneered computational thinking and has been responsible for many discoveries, inventions and innovations in science, technology and business.

    “I think one very big example of this phenomenon is computational irreducibility: the idea that even though you know the rules by which something operates, that doesn't immediately tell you everything about what the system will do. You might have to follow a billion steps in the actual operation of those rules to find out what the system does.

    There's no way to jump ahead and just say, "the answer will be such and such." Well, computational irreducibility, in a sense, goes against the hope, at least, of, for example, mathematical science. A lot of the hope of mathematical science is that we'll just work out a formula for how something is going to operate. We don't have to kind of go through the steps and watch it operate. We can just kind of jump to the end and apply the formula. Well, computational irreducibility says that that isn't something you can generally do. It says that there are plenty of things in the world where you have to kind of go through the steps to see what will happen.

    In a sense, even though that's kind of a bad thing for science, it says that there's sort of limitations on the extent to which we can use science to predict things. It's sort of a good thing, I think, for leading one's life because it means that as we experience the passage of time, in a sense, that corresponds to the sort of irreducible computation of what we will do.

    That sort of tells one that the passage of time has a meaningful effect. There's something where you can't just jump to the end and say, "I don't need to live all the years of my life. I can just go and say the result will be such and such." No, actually, there's something sort of irreducible about that actual progression of time and the actual living of those years of life, so to speak. So that's one of the enriching aspects of this concept of computational irreducibility. It's a pretty important concept. People right now will think of it as this kind of geeky scientific idea, but in the future, it's going to be a pivotal kind of thing for understanding how one should conduct the future of human society.”

    www.stephenwolfram.com
    www.wolfram.com
    www.wolframalpha.com
    www.wolframscience.com/nks/
    www.amazon.com/dp/1579550088/ref=nosim?tag=turingmachi08-20
    www.wolframphysics.org
    www.wolfram-media.com/products/what-is-chatgpt-doing-and-why-does-it-work/

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • “Generative AI, particularly Large Language Models, they seem to be engaging in conversation with us. We ask questions, and they reply. It seems like they're talking to us. I don't think they are. I think they're playing a game very much like a game of chess. You make a move and your chess computer makes an appropriate response to that move. It doesn't have any other interest in the game whatsoever. That's what I think Large Language Models are doing. They're just making communicative moves in this game of language that they've learned through training on vast quantities of human-produced text.”

    Keith Frankish is an Honorary Professor of Philosophy at the University of Sheffield, a Visiting Research Fellow with The Open University, and an Adjunct Professor with the Brain and Mind Programme in Neurosciences at the University of Crete. Frankish mainly works in the philosophy of mind and has published widely on topics such as human consciousness and cognition. Profoundly inspired by Daniel Dennett, Frankish is best known for defending an “illusionist” view of consciousness. He is also the editor of Illusionism as a Theory of Consciousness and a co-editor of, among other works, The Cambridge Handbook of Cognitive Science.

    www.keithfrankish.com
    www.cambridge.org/core/books/cambridge-handbook-of-cognitive-science/F9996E61AF5E8C0B096EBFED57596B42
    www.imprint.co.uk/product/illusionism

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • Is consciousness an illusion? Is it just a complex set of cognitive processes without a central, subjective experience? How can we better integrate philosophy with everyday life and the arts?

    Keith Frankish is an Honorary Professor of Philosophy at the University of Sheffield, a Visiting Research Fellow with The Open University, and an Adjunct Professor with the Brain and Mind Programme in Neurosciences at the University of Crete. Frankish mainly works in the philosophy of mind and has published widely on topics such as human consciousness and cognition. Profoundly inspired by Daniel Dennett, Frankish is best known for defending an “illusionist” view of consciousness. He is also the editor of Illusionism as a Theory of Consciousness and a co-editor of, among other works, The Cambridge Handbook of Cognitive Science.

    “Generative AI, particularly Large Language Models, they seem to be engaging in conversation with us. We ask questions, and they reply. It seems like they're talking to us. I don't think they are. I think they're playing a game very much like a game of chess. You make a move and your chess computer makes an appropriate response to that move. It doesn't have any other interest in the game whatsoever. That's what I think Large Language Models are doing. They're just making communicative moves in this game of language that they've learned through training on vast quantities of human-produced text.”

    www.keithfrankish.com
    www.cambridge.org/core/books/cambridge-handbook-of-cognitive-science/F9996E61AF5E8C0B096EBFED57596B42
    www.imprint.co.uk/product/illusionism

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • “We and all living beings thrive by being actors in the planet’s regeneration, a civilizational goal that should commence and never cease. We practiced degeneration as a species and it brought us to the threshold of an unimaginable crisis. To reverse global warming, we need to reverse global degeneration.”

    Can we really end the climate crisis in one generation? What kind of bold collective action, technologies, and nature-based solutions would it take to do it?

    Paul Hawken is a renowned environmentalist, entrepreneur, author, and activist committed to sustainability and transforming the business-environment relationship. A leading voice in the environmental movement, he has founded successful eco-friendly businesses, authored influential works on commerce and ecology, and advised global leaders on economic and environmental policies. As the founder of Project Regeneration and Project Drawdown, Paul leads efforts to identify and model solutions to reverse global warming, showcasing actionable strategies. His pioneering work in corporate ecological reform continues to shape a sustainable future. He is the author of eight books, including Regeneration: Ending the Climate Crisis in One Generation.

    https://regeneration.org
    https://paulhawken.com
    https://drawdown.org
    https://regeneration.org/nexus

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • Can we really end the climate crisis in one generation? What kind of bold collective action, technologies, and nature-based solutions would it take to do it?

    Paul Hawken is a renowned environmentalist, entrepreneur, author, and activist committed to sustainability and transforming the business-environment relationship. A leading voice in the environmental movement, he has founded successful eco-friendly businesses, authored influential works on commerce and ecology, and advised global leaders on economic and environmental policies. As the founder of Project Regeneration and Project Drawdown, Paul leads efforts to identify and model solutions to reverse global warming, showcasing actionable strategies. His pioneering work in corporate ecological reform continues to shape a sustainable future. He is the author of eight books, including Regeneration: Ending the Climate Crisis in One Generation.

    “We and all living beings thrive by being actors in the planet’s regeneration, a civilizational goal that should commence and never cease. We practiced degeneration as a species and it brought us to the threshold of an unimaginable crisis. To reverse global warming, we need to reverse global degeneration.”

    https://regeneration.org
    https://paulhawken.com
    https://drawdown.org
    https://regeneration.org/nexus

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • “So, New York City will ultimately build a seawall that it estimates will cost somewhere in the order of 120 billion dollars. And, you know, the fact is that many cities in the United States will not be able to afford that, especially smaller ones and especially southern ones.

    A part of planning for this needs to include thinking about managed retreat from highly vulnerable areas. When people leave, it erodes the tax base of the community that supports schools, undermines the real estate market and the value of property, and it can lead to a spiral of economic decline that can be really dangerous for the people who remain. This can really hollow out a community and that's an enormous challenge to deal with, but one way to deal with it is to try to keep the resources and infrastructure in a community proportional to the population that's utilizing it and to maintain some energy and prosperity and vitality. So, I think a lot of places in the United States need to plan to get smaller, which is really the antithesis of the American philosophy of economic growth.

    If you want to keep your community intact, you could move together, or you could move to a place where your neighbors have also moved or something like that. That's the kind of new idea that is being batted around that can help keep communities coherent.”

    Abrahm Lustgarten is an investigative reporter, author, and filmmaker whose work focuses on human adaptation to climate change. His 2010 Frontline documentary The Spill, which investigated BP’s company culture, was nominated for an Emmy. His 2015 longform series Killing the Colorado, about the draining of the Colorado River, was nominated for a Pulitzer Prize. Lustgarten is a senior reporter at ProPublica and contributes to publications like The New York Times Magazine and The Atlantic. His research on climate migration influenced President Biden’s creation of a climate migration study group. This is also the topic of his newly published book, On The Move: The Overheating Earth and the Uprooting of America, in which he explores how climate change is uprooting American lives.

    https://abrahm.com
    https://us.macmillan.com/books/9780374171735/onthemove

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • An estimated one in two people will experience degrading environmental conditions this century and will be faced with the difficult question of whether to leave their homes. Will you be among those who migrate in response to climate change? If so, where will you go?

    Abrahm Lustgarten is an investigative reporter, author, and filmmaker whose work focuses on human adaptation to climate change. His 2010 Frontline documentary The Spill, which investigated BP’s company culture, was nominated for an Emmy. His 2015 longform series Killing the Colorado, about the draining of the Colorado River, was nominated for a Pulitzer Prize. Lustgarten is a senior reporter at ProPublica and contributes to publications like The New York Times Magazine and The Atlantic. His research on climate migration influenced President Biden’s creation of a climate migration study group. This is also the topic of his newly published book, On The Move: The Overheating Earth and the Uprooting of America, in which he explores how climate change is uprooting American lives.

    “So, New York City will ultimately build a seawall that it estimates will cost somewhere in the order of 120 billion dollars. And, you know, the fact is that many cities in the United States will not be able to afford that, especially smaller ones and especially southern ones.

    A part of planning for this needs to include thinking about managed retreat from highly vulnerable areas. When people leave, it erodes the tax base of the community that supports schools, undermines the real estate market and the value of property, and it can lead to a spiral of economic decline that can be really dangerous for the people who remain. This can really hollow out a community and that's an enormous challenge to deal with, but one way to deal with it is to try to keep the resources and infrastructure in a community proportional to the population that's utilizing it and to maintain some energy and prosperity and vitality. So, I think a lot of places in the United States need to plan to get smaller, which is really the antithesis of the American philosophy of economic growth.

    If you want to keep your community intact, you could move together, or you could move to a place where your neighbors have also moved or something like that. That's the kind of new idea that is being batted around that can help keep communities coherent.”

    https://abrahm.com
    https://us.macmillan.com/books/9780374171735/onthemove

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • “There’s a lot of greenwashing going on these days. It is great marketing. And that was really the reason why I wrote this book. I had started to see the patterns. You can start to tell the companies that are genuinely doing it from the companies that are just talking about it. So that was one indicator, you know: a company that would send out a press release about their goals and what they anticipated doing in the next 5 to 10 years was very different from companies who had said, you know what, this is what we've achieved. The term ‘regenerative’ started coming into the lexicon in 2017, 2018. And regenerative means to regenerate, means to bring life into something. To sustain means to keep the status quo. And regenerative looks at things from a very holistic lens. You know, it's like if you're going to run a regenerative farm, all the different components of the farm and the ecosystem ideally come from within the ecosystem.”

    Esha Chhabra has written for national and international publications over the last 15 years, focusing on global development, the environment, and the intersection of business and impact. Her work has been featured in The New York Times, The Economist, The Guardian, and other publications. She is the author of Working to Restore: Harnessing the Power of Business to Heal the Earth.

    www.eshachhabra.com
    www.beacon.org/Working-to-Restore-P2081.aspx

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • What is regenerative business? How can we create a business mindset that addresses social, economic and environmental issues?

    Esha Chhabra has written for national and international publications over the last 15 years, focusing on global development, the environment, and the intersection of business and impact. Her work has been featured in The New York Times, The Economist, The Guardian, and other publications. She is the author of Working to Restore: Harnessing the Power of Business to Heal the Earth.

    “There’s a lot of greenwashing going on these days. It is great marketing. And that was really the reason why I wrote this book. I had started to see the patterns. You can start to tell the companies that are genuinely doing it from the companies that are just talking about it. So that was one indicator, you know, a company that would send out a press release about their goals and what they anticipated doing in the next 5 to 10 years was very different from a company that said, you know what, this is what we've achieved. The term 'regenerative' started coming into the lexicon in 2017, 2018. And regenerative means to regenerate, to bring life into something. To sustain means to keep the status quo. And regenerative looks at things through a very holistic lens. You know, if you're going to run a regenerative farm, all the different components of the farm ideally come from within the ecosystem.”

    www.eshachhabra.com
    www.beacon.org/Working-to-Restore-P2081.aspx

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • “I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.”

    Dr. Raphaël Millière is Assistant Professor in Philosophy of AI at Macquarie University in Sydney, Australia. His research primarily explores the theoretical foundations and inner workings of AI systems based on deep learning, such as large language models. He investigates whether these systems can exhibit human-like cognitive capacities, drawing on theories and methods from cognitive science. He is also interested in how insights from studying AI might shed new light on human cognition. Ultimately, his work aims to advance our understanding of both artificial and natural intelligence.

    https://raphaelmilliere.com
    https://researchers.mq.edu.au/en/persons/raphael-milliere

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • How can we ensure that AI is aligned with human values? What can AI teach us about human cognition and creativity?

    Dr. Raphaël Millière is Assistant Professor in Philosophy of AI at Macquarie University in Sydney, Australia. His research primarily explores the theoretical foundations and inner workings of AI systems based on deep learning, such as large language models. He investigates whether these systems can exhibit human-like cognitive capacities, drawing on theories and methods from cognitive science. He is also interested in how insights from studying AI might shed new light on human cognition. Ultimately, his work aims to advance our understanding of both artificial and natural intelligence.

    “I'd like to focus more on the immediate harms that the kinds of AI technologies we have today might pose. With language models, the kind of technology that powers ChatGPT and other chatbots, there are harms that might result from regular use of these systems, and then there are harms that might result from malicious use. Regular use would be how you and I might use ChatGPT and other chatbots to do ordinary things. There is a concern that these systems might reproduce and amplify, for example, racist or sexist biases, or spread misinformation. These systems are known to, as researchers put it, “hallucinate” in some cases, making up facts or false citations. And then there are the harms from malicious use, which might result from some bad actors using the systems for nefarious purposes. That would include disinformation on a mass scale. You could imagine a bad actor using language models to automate the creation of fake news and propaganda to try to manipulate voters, for example. And this takes us into the medium term future, because we're not quite there, but another concern would be language models providing dangerous, potentially illegal information that is not readily available on the internet for anyone to access. As they get better over time, there is a concern that in the wrong hands, these systems might become quite powerful weapons, at least indirectly, and so people have been trying to mitigate these potential harms.”

    https://raphaelmilliere.com
    https://researchers.mq.edu.au/en/persons/raphael-milliere

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • “Bookselling captured my imagination and my heart as soon as I started working at the bookstore because I could see the potential for this great, amazing community-oriented work. Of course, it's a thrill to be around books, to meet authors, to read all this stuff, and to spend all day with people who love books, but what I think I really fell in love with was the sense of community, the people behind it, and the way a bookstore can really be an engine for positive social change within its community and in a broader sense as well. My whole nonfiction book project started with a tweet thread. It was about how every bookseller has to be prepared to have this discussion: a customer comes in, and they're like, this book is 50 percent off on Amazon. Why should I buy it here? So, I don't think about it quite as withholding from Amazon as much as contributing to these local community-oriented businesses.
    The thing that unites my poetry and the nonfiction writing is my main obsession as a writer. It's the question of, how do you live meaningfully in late capitalism? As corporations and global capitalist forces take over the world, what does it mean to try to have a meaningful human life? I think the proliferation of objects might reflect that. A lot of what we do in this world is collect objects, and regardless of whether it's good or bad, you build a nest. I think that in Picture Window in particular, I wanted to write about the domestic in a way that I hadn't before. And then the pandemic happened, so I was forced into this weird, uneasy, claustrophobic domesticity. When your attention is so focused within your own home and within your own family, every object in your house takes on a new resonance. So, when a tennis ball that you've never seen somehow shows up in your house, that's weird. It's poetic. It feels dreamlike.”

    Danny Caine is the author of the poetry collections Continental Breakfast, El Dorado Freddy's, Flavortown, and Picture Window, as well as the books How to Protect Bookstores and Why and How to Resist Amazon and Why. His poetry has appeared in The Slowdown, Lit Hub, Diagram, HAD, and Barrelhouse.  He's a co-owner of The Raven Bookstore, Publisher's Weekly's 2022 Bookstore of the Year.

    www.dannycaine.com
    www.ravenbookstore.com

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • What is the future of literature in the age of generative AI? How can bookstores build community and be engines for positive social change? What does it mean to try to have a meaningful human life?

    Danny Caine is the author of the poetry collections Continental Breakfast, El Dorado Freddy's, Flavortown, and Picture Window, as well as the books How to Protect Bookstores and Why and How to Resist Amazon and Why. His poetry has appeared in The Slowdown, Lit Hub, Diagram, HAD, and Barrelhouse.  He's a co-owner of The Raven Bookstore, Publisher's Weekly's 2022 Bookstore of the Year.

    “Bookselling captured my imagination and my heart as soon as I started working at the bookstore because I could see the potential for this great, amazing community-oriented work. Of course, it's a thrill to be around books, to meet authors, to read all this stuff, and to spend all day with people who love books, but what I think I really fell in love with was the sense of community, the people behind it, and the way a bookstore can really be an engine for positive social change within its community and in a broader sense as well. My whole nonfiction book project started with a tweet thread. It was about how every bookseller has to be prepared to have this discussion: a customer comes in, and they're like, this book is 50 percent off on Amazon. Why should I buy it here? So, I don't think about it quite as withholding from Amazon as much as contributing to these local community-oriented businesses.
    The thing that unites my poetry and the nonfiction writing is my main obsession as a writer. It's the question of, how do you live meaningfully in late capitalism? As corporations and global capitalist forces take over the world, what does it mean to try to have a meaningful human life? I think the proliferation of objects might reflect that. A lot of what we do in this world is collect objects, and regardless of whether it's good or bad, you build a nest. I think that in Picture Window in particular, I wanted to write about the domestic in a way that I hadn't before. And then the pandemic happened, so I was forced into this weird, uneasy, claustrophobic domesticity. When your attention is so focused within your own home and within your own family, every object in your house takes on a new resonance. So, when a tennis ball that you've never seen somehow shows up in your house, that's weird. It's poetic. It feels dreamlike.”

    www.dannycaine.com
    www.ravenbookstore.com

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast

  • “A lot of our work is comparative. We look at background behavior. Is there a burst of new activity? We zoom in on that and ask why it is suddenly appearing and why it didn't appear before. Imagine one day you wake up and you find water in a pot is boiling, and you want to understand why the water is boiling. If you go at it one molecule at a time, it's not giving you the big picture of what is going on. We've probably all done this: you take milk, stick it in the fridge, too lazy to go to the grocery, so you just leave it there. On the 11th day, the milk's gone bad. Why did that happen on the 11th day? All you could see was the macro level; you couldn't see the individual pieces of milk. This is a new area of physics, exactly the same as how shock waves appear: a wave that builds up so quickly that there's no precursor. Using the data we collect online, we have a tool for making predictions of when we expect shocks to arise and what shape they'll have. So the reason we went for a systems-level view is because you can't understand water boiling one molecule at a time.”

    How can physics help solve messy, real world problems? How can we embrace the possibilities of AI while limiting existential risk and abuse by bad actors?

    Neil Johnson is a physics professor at George Washington University. His new initiative in Complexity and Data Science at the Dynamic Online Networks Lab combines cross-disciplinary fundamental research with data science to attack complex real-world problems. His research interests lie in the broad area of Complex Systems and ‘many-body’ out-of-equilibrium systems of collections of objects, ranging from crowds of particles to crowds of people and from environments as distinct as quantum information processing in nanostructures to the online world of collective behavior on social media.

    https://physics.columbian.gwu.edu/neil-johnson
    https://donlab.columbian.gwu.edu

    www.creativeprocess.info
    www.oneplanetpodcast.org
    IG www.instagram.com/creativeprocesspodcast