Episodes

  • On this episode, I’m joined by Senator Mike Rounds, US Senator for South Dakota and Co-Chair of the Senate AI Caucus, to discuss how the US can regulate AI responsibly while fostering innovation. With his extensive experience in both state and federal government, Senator Rounds shares his insights into the Bipartisan Senate AI Working Group and its roadmap for AI policy.

    Key Takeaways:

    (01:23) The Bipartisan Senate AI Working Group aims to balance AI regulation and innovation.

    (05:07) Why intellectual property protections are essential in AI development.

    (07:27) National security implications of AI in weapons systems and defense.

    (09:19) The potential of AI to revolutionize healthcare through faster drug approvals.

    (10:55) How AI can aid in detecting and combating biological threats.

    (15:00) The importance of workforce training to mitigate AI-driven job displacement.

    (19:05) The role of community colleges in preparing the workforce for an AI-driven future.

    (24:00) Insights from international collaboration on AI regulation.

    Resources Mentioned:

    Senator Mike Rounds Homepage - https://www.rounds.senate.gov/

    GUIDE AI Initiative - https://www.rounds.senate.gov/newsroom/press-releases/rounds-introduces-artificial-intelligence-policy-package

    Medshield - https://www.linkedin.com/company/medshield-llc

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • In this episode, I’m joined by Charity Rae Clark, Vermont Attorney General, and Monique Priestley, Vermont State Representative. They have been instrumental in shaping Vermont’s legislative approach to data privacy and AI. We dive into the challenges of regulating AI to keep citizens safe, the importance of data minimization and the broader implications for society.

    Key Takeaways:

    (02:10) “Free” apps and websites take your data as payment.

    (08:15) The Data Privacy Act includes stringent provisions to protect children online.

    (10:05) Protecting consumer privacy and reducing security risks.

    (15:29) Vermont’s legislative journey includes educating lawmakers.

    (18:45) Innovation and regulation must be balanced for future AI development.

    (23:50) Collaboration and education can overcome intense pressure from lobbyists.

    (30:02) AI’s potential to exacerbate discrimination demands regulation.

    (36:15) Deepfakes present a growing threat.

    (42:40) Consumer trust could be lost due to premature releases of AI products.

    (50:10) The necessity of a strong foundation in data privacy. 

    Resources Mentioned:

    Charity Rae Clark - https://www.linkedin.com/in/charityrclark/

    Monique Priestley - https://www.linkedin.com/in/mepriestley/

    Vermont - https://www.linkedin.com/company/state-of-vermont/

    “The Age of Surveillance Capitalism” by Shoshana Zuboff - https://www.amazon.com/Age-Surveillance-Capitalism-Future-Frontier/dp/1610395697

    “Why Privacy Matters” by Neil Richards - https://www.amazon.com/Why-Privacy-Matters-Neil-Richards/dp/0190940553

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • Dive into the tangled web of AI and copyright law with Keith Kupferschmid, CEO of the Copyright Alliance, as he reveals how AI companies navigate legal responsibilities and examines what creators can do to safeguard their intellectual property in an AI-driven world.

    Key Takeaways:

    (02:00) The Copyright Alliance represents over 15,000 organizations and 2 million individual creators.

    (05:12) Two potential copyright infringement settings: during the ingestion process and the output stage.

    (06:00) There have been 17 or 18 AI copyright cases filed recently.

    (08:00) Fair Use in AI is not categorical and is decided on a case-by-case basis.

    (13:32) AI companies often shift liability to prompters, but both can be held liable under existing laws.

    (15:00) Creators should clearly state their licensing preferences on their works to protect themselves.

    (17:50) Current copyright laws are flexible enough to adapt to AI without needing new legislation.

    (20:00) Market-based solutions, such as licensing, are crucial for addressing AI copyright issues.

    (27:34) Education and public awareness are vital for understanding copyright issues related to AI.

    Resources Mentioned:

    Keith Kupferschmid - https://www.linkedin.com/in/keith-kupferschmid-723b19a/

    Copyright Alliance - https://copyrightalliance.org

    U.S. Copyright Office - https://www.copyright.gov

    Getty Images Licensing - https://www.gettyimages.com

    National Association of Realtors - https://www.nar.realtor

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • The future of AI lies at the intersection of technology and ethics. How do we navigate this complex landscape? Today, I’m joined by Maria Luciana Axente, Head of Public Policy and Ethics at PwC UK and Intellectual Forum Senior Research Associate at Jesus College Cambridge, who offers key insights into the ethical implications of AI.

    Key Takeaways:

    (03:56) The importance of integrating ethical principles into AI.

    (08:22) Preserving humanity in the age of AI.

    (12:19) Embedding value alignment in AI systems.

    (15:59) Fairness and voluntary commitments in AI.

    (21:01) Participatory AI and including diverse voices.

    (24:05) Cultural value systems shaping AI policies.

    (26:25) The importance of reflecting on AI’s impact before implementation.

    (27:48) Learning from other industries to govern AI better.

    (28:59) AI as a socio-technical system, not just technology.

    Resources Mentioned:

    Maria Luciana Axente - https://www.linkedin.com/in/mariaaxente/

    PwC UK - https://www.linkedin.com/company/pwc-uk/

    Jesus College Cambridge - https://www.linkedin.com/company/jesus-college-cambridge/

    PwC homepage - https://www.pwc.co.uk/

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • Can AI spark new creative revolutions? On this episode, I’m joined by Lianne Baron, Strategic Partner Manager for Creative Partnerships at Meta. Lianne unveils how AI is not just a tool but a transformative force in the creative landscape, emphasizing the irreplaceable value of human imagination. We explore the rapid pace of innovation, the challenges of embracing new tech, and the exciting future of idea generation and delivery.

    Key Takeaways:

    (03:50) Embrace AI's changes; it challenges traditional methods.

    (05:13) AI speeds up the journey from imagination to delivery.

    (07:15) The move to cinematic quality sparks excitement and fear.

    (08:30) Education is key in democratizing AI for all.

    (15:00) Risk of bias without diverse voices in AI development.

    (17:15) Ideas, not skills, are the new currency in AI.

    (26:16) Imagination and human experience are irreplaceable by AI.

    (29:11) AI can democratize storytelling, sharing diverse narratives.

    (33:00) AI breaks down barriers, fostering new creative opportunities.

    (36:20) Understanding authenticity is crucial in an AI-driven world.

    Resources Mentioned:

    Lianne Baron - https://www.linkedin.com/in/liannebaron/

    Meta - https://www.meta.com/

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • The potential of AI is transforming industries, but how do we regulate this rapidly evolving technology without stifling innovation?

    On this episode, I’m joined by Professor Zico Kolter, Professor and Director of the Machine Learning Department at Carnegie Mellon University and Chief Expert at Bosch USA, who shares his insights on AI regulation and its challenges.

    Key Takeaways:

    (02:41) AI innovation outpaces legislation. 

    (04:00) Regulating technology vs. its usage is crucial. 

    (06:36) AI is advancing faster than ever. 

    (11:14) Companies must prevent AI misuse. 

    (15:30) Bias-free algorithms are not feasible. 

    (21:34) Human interaction in AI decisions is essential. 

    (27:49) The competitive environment benefits AI development. 

    (32:26) A regulation that everyone readily accepts has likely gotten something wrong.

    (37:52) Regulations should adapt to technological changes. 

    (42:49) AI developers aim to benefit people.

    (45:16) Human-in-the-loop AI is crucial for reliability. 

    (46:30) Addressing gaps in AI systems is critical.

    Resources Mentioned:

    Zico Kolter - https://www.linkedin.com/in/zico-kolter-560382a4/

    Carnegie Mellon University - https://www.linkedin.com/school/carnegie-mellon-university/

    Bosch USA - https://www.linkedin.com/company/boschusa/

    EU AI Act - https://ec.europa.eu/digital-strategy/our-policies/eu-regulatory-framework-artificial-intelligence_en

    OpenAI - https://www.openai.com/

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I’m joined by Professor Paul Rainey to discuss the evolutionary principles applicable to AI development and the potential risks of self-replicating AI systems. Paul is Director of the Department of Microbial Population Biology at the Max Planck Institute for Evolutionary Biology in Plön; Professor at ESPCI in Paris; Fellow of the Royal Society of New Zealand; a Member of EMBO & European Academy of Microbiology; and Honorary Professor at Christian Albrechts University in Kiel. 

    Key Takeaways:

    (00:04) Evolutionary transitions form higher-level structures.

    (00:06) Eukaryotic cells parallel future AI-human interactions.

    (00:08) Major evolutionary transitions inform AI-human interactions.

    (00:11) Algorithms can evolve with variation, replication and heredity.

    (00:13) Natural selection drives complexity.

    (00:18) AI adapts to selective pressures unpredictably.

    (00:21) Humans risk losing autonomy to AI.

    (00:25) Societal engagement is needed before developing self-replicating AIs.

    (00:30) The challenge of controlling self-replicating systems.

    (00:33) Interdisciplinary collaboration is crucial for AI challenges.

    Resources Mentioned:

    Max Planck Institute for Evolutionary Biology

    Professor Paul Rainey - Max Planck Institute

    Max Planck Research Magazine - Issue 3/2023

    Paul Rainey’s article in The Royal Society Publishing

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • In this episode, I’m joined by Jaap van Etten, CEO and Co-Founder of Datenna, the leading provider of techno-economic intelligence in China. Jaap’s unique background as a diplomat turned entrepreneur provides invaluable insights into the intersection of AI, innovation and policy.

    Key Takeaways:

    (01:30) Transitioning from diplomat to tech entrepreneur.

    (05:23) Key differences in AI approaches between China, Europe and the US.

    (07:20) The Chinese entrepreneurial mindset and its impact on innovation.

    (10:03) China’s strategy in AI and the importance of being a technological leader.

    (17:05) Challenges and misconceptions about China’s technological capabilities.

    (23:17) Recommendations for AI regulation and international cooperation.

    (30:19) Jaap’s perspective on the future of AI legislation.

    (35:12) The role of AI in policymaking and decision-making.

    (40:54) Policymakers need scenario planning and foresight exercises to keep up with rapid technological advancements.

    Resources Mentioned:

    Jaap van Etten - https://www.linkedin.com/in/jaapvanetten/

    Datenna - https://www.linkedin.com/company/datenna/

    https://www.nytimes.com/2006/05/15/technology/15fraud.htm

    http://www.china.org.cn/english/scitech/168482.htm 

    https://en.wikipedia.org/wiki/Hanxin 

    https://www.linkedin.com/pulse/china-marching-forward-artificial-intelligence-jaap-van-etten/ 

    https://github.com/Kkevsterrr/geneva 

    https://geneva.cs.umd.edu 

    https://www.grc.com/sn/sn-779.pdf

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I’m joined by Dr. Abhinav Valada, Professor and Director of the Robot Learning Lab at the University of Freiburg, to explore the future of robotics and the essential regulations needed for their integration into society.

    Key Takeaways:

    (00:00) The potential economic impact of AI. 

    (03:37) The distinction between perceived and actual AI capabilities. 

    (04:24) Challenges in training robots with real-world data. 

    (08:51) Limitations of current AI reasoning capabilities. 

    (13:16) The importance of conveying robot intent for collaboration. 

    (17:33) The need for specific guidelines for robotic systems. 

    (21:00) Mandating AI ethics courses in Germany. 

    (25:10) Collaborative robots and workforce implications. 

    (30:00) Privacy issues in human-robot interaction.

    (35:02) The importance of pilot programs for autonomous vehicles. 

    (39:00) International collaboration in AI legislation. 

    (40:38) Inclusion of diverse voices in robotics research.

    Resources Mentioned:

    Dr. Abhinav Valada - https://www.linkedin.com/in/avalada/

    University of Freiburg - https://www.linkedin.com/company/university-of-freiburg/

    EU AI Act - https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence

    Robot Learning Lab, University of Freiburg - https://www.researchgate.net/lab/Robot-Learning-Lab-Abhinav-Valada

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • Striking a balance between artificial intelligence innovation and regulation is crucial for leveraging its benefits while safeguarding against risks. On this episode, I’m joined by Congressman Buddy Carter, U.S. Representative for Georgia's 1st District, to explore the complexities of AI regulation and its impact on healthcare and other sectors.

    Key Takeaways:

    (01:48) President Biden's Executive Order on AI aims to set new standards.

    (04:34) AI's potential in healthcare, including telehealth and drug development.

    (05:47) Legal implications for doctors not using available AI technologies.

    (07:55) AI could speed up the drug development process.

    (10:52) The need for constantly updated AI standards.

    (11:56) Debate on creating a separate regulatory body for AI.

    (14:03) Importance of including diverse voices in AI regulation.

    (16:57) Federal preemption of state and local AI laws to avoid regulatory patchwork.

    Resources Mentioned:

    Buddy Carter - https://www.linkedin.com/in/buddycarterga/

    President Biden's Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

    EU AI Act - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

    Section 230 of the Communications Decency Act - https://www.eff.org/issues/cda230

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I am joined by Daniel Colson, Executive Director of the AI Policy Institute, to consider some pressing issues. Daniel shares his insights into the risks, opportunities and future directions of AI policy.

    Key Takeaways:

    (02:15) Daniel analyzes President Biden's recent executive order on AI.

    (04:13) Differentiating risks in AI technologies and their applications.

    (08:52) Concerns about the open-sourcing of AI models and abuse potential.

    (16:45) The importance of inclusive discussions in AI policymaking.

    (19:25) Challenges and risks of regulatory capture in the AI sector.

    (26:45) Balancing innovation with regulation.

    (33:14) The potential for AI to transform employment and the economy.

    (37:52) How AI's rapid evolution challenges our role as the dominant thinkers and prompts careful deliberation on its impact.

    Resources Mentioned:

    Daniel Colson - https://www.linkedin.com/in/danieljcolson/

    AI Policy Institute - https://www.linkedin.com/company/aipolicyinstitute/

    AI Policy Institute | Website - https://www.theaipi.org/

    #AIRegulation #AISafety #AIStandard

  • On this episode of Regulating AI, I sit down with Professor Effy Vayena, Chair of Bioethics and Associate Vice President of Digital Transformation and Governance of the Swiss Federal Institute of Technology (ETH) and Co-Director of Stavros Niarchos Foundation Bioethics Academy. Together we delve deep into the world of AI, its ethical challenges, and how thoughtful regulation can ensure equitable benefits.

    Key Takeaways:

    (03:45) The importance of developing and using technology in ways that meet ethical standards.

    (10:31) The necessity of agile regulation and continuous dialogue with tech developers.

    (13:19) The concept of regulatory sandboxes for testing policies in a controlled environment. 

    (17:07) Balancing AI innovation with patient privacy and data security.

    (24:14) Strategies to ensure AI benefits reach marginalized communities and promote health equity.

    (35:10) Considering the global impact of AI and the digital divide.

    (41:06) Including and educating the public in AI regulatory processes.

    (44:04) The importance of international collaboration in AI regulation.

    Resources Mentioned:

    Professor Effy Vayena - https://www.linkedin.com/in/effy-vayena-467b1353/

    Swiss Federal Institute of Technology (ETH) - https://www.linkedin.com/school/eth-zurich/

    ETH Zurich - https://ethz.ch/en.html

    European Union’s AI Act - https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai

    U.S. FDA guidelines on AI in medical devices - https://www.fda.gov/medical-devices/software-medical-device-samd/artificial-intelligence-and-machine-learning-software-medical-device

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • The integration of AI into healthcare is not only transforming the way we diagnose, treat and manage patient care but is also redefining the roles of doctors. Join me as I sit down with Dr. Brennan Spiegel to explore how AI is revolutionizing the medical field. Brennan is a Professor of Medicine and Public Health; George and Dorothy Gourrich Chair in Digital Health Ethics; Director of Health Services Research; Director, Graduate Program in Health Delivery Science; Cedars-Sinai Site Director, Clinical and Translational Science Institute; and Editor-in-Chief, Journal of Medical Extended Reality.

    Key Takeaways:

    (03:00) Balancing AI benefits with concerns about algorithmic bias and fairness.

    (05:47) Evaluating AI for implicit bias in mental health applications.

    (08:03) The need for standardized guidance and rigorous oversight in AI applications.

    (10:03) Ensuring data transmitted between AI providers and health systems is HIPAA compliant.

    (16:42) The evolving role of doctors in the context of AI integration.

    (21:22) The importance of traditional knowledge alongside AI in medical practice.

    (24:44) International collaboration and standardized approaches to AI in healthcare.

    Resources Mentioned:

    Dr. Brennan Spiegel - https://www.linkedin.com/in/brennan-spiegel-md-mshs-2938a4142/

    Cedars-Sinai - https://www.linkedin.com/company/cedars-sinai-medical-center/

    Brennan Spiegel on X - https://x.com/BrennanSpiegel

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • In this episode, I welcome Carmel Shachar, Faculty Director of the Health Law and Policy Clinic and Assistant Clinical Professor of Law at Harvard Law School Center for Health Law and Policy Innovation. We delve into how AI is shaping the future of healthcare, its profound impacts and the vital importance of thoughtful regulation. The interplay between AI and healthcare is increasingly critical, pushing the boundaries of medicine while challenging our regulatory frameworks.

    Key Takeaways:

    (00:00) AI’s challenges in balancing patient data needs.

    (03:09) The revolutionary potential of AI in healthcare innovation.

    (04:30) How AI is driving precision and personalized medicine.

    (06:19) The urgent need for healthcare system evolution.

    (09:00) Potential negative impacts of poorly implemented AI.

    (12:00) The unique challenges posed by AI as a medical device.

    (15:10) Minimizing regulatory handoffs to enhance AI efficacy.

    (18:00) How AI can reduce healthcare disparities.

    (20:00) Ethical considerations and biases in AI deployment.

    (25:00) AI’s growing impact on healthcare operations and management.

    (30:00) Enhancing patient-physician communication with AI tools.

    (39:00) Future directions in AI and healthcare policy.

    Resources Mentioned:

    Carmel Shachar - https://www.linkedin.com/in/carmel-shachar-7b3a8525/

    Harvard Law School Center for Health Law and Policy Innovation - https://www.linkedin.com/company/harvardchlpi/

    Carmel Shachar's Faculty Profile at Harvard Law School - https://hls.harvard.edu/faculty/carmel-shachar/

    Precision Medicine, Artificial Intelligence and the Law Project - https://petrieflom.law.harvard.edu/research/precision-medicine-artificial-intelligence-and-law

    Petrie-Flom Center Blog - https://blog.petrieflom.law.harvard.edu/

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I welcome Ari Kaplan, Head Evangelist of Databricks, a leading data and AI company. We discuss the intricacies of AI regulation, how different regions, like the US and EU, are addressing AI’s rapid development, and the importance of industry perspectives in shaping effective legislation.

    Key Takeaways:

    (04:42) Insights on the rapid advancements in AI technology and legislative responses.

    (10:32) The role of tech leaders in shaping AI policy and bridging knowledge gaps.

    (13:57) Open-source versus closed-source AI — Ari Kaplan advocates for transparency.

    (16:56) Ethical concerns in AI across different countries.

    (21:21) The necessity for both industry-specific and overarching AI regulations.

    (25:09) Automation’s potential to improve efficiency also raises employment risk.

    (29:17) A balanced, educational approach in the age of AI is crucial.

    (32:45) Risks associated with generative AI and the importance of intellectual property rights.

    Resources Mentioned:

    Ari Kaplan - https://www.linkedin.com/in/arikaplan/

    Databricks - https://www.linkedin.com/company/databricks/

    Unity Catalog Governance Value Levers - https://www.databricks.com/blog/unity-catalog-governance-value-levers

    President Biden’s Executive Order on AI - https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

    EU AI Act Information - https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • In this episode, I welcome Nicolas Kourtellis, Co-Director of Telefónica Research and Head of Systems AI Lab at Telefónica Innovación Digital, a company of the Telefonica Group. Nicolas shares his expert insights on the pivotal role of AI in revolutionizing telecommunications, the challenges of AI regulation and the innovative strides Telefónica is making toward sustainable and ethical AI deployment. 

    Imagine a world where every device you own not only connects seamlessly but also intelligently adapts to your needs. This isn’t just a vision for the future; it’s the reality AI is creating today in telecommunications.

    Key Takeaways:

    (00:00) AI research focuses and applications in telecommunications.

    (03:24) AI’s role in optimizing network systems and enhancing user privacy is critical.

    (06:00) How Telefónica uses AI to improve customer service through AI chatbots.

    (12:03) The ethical considerations and sustainability of AI models.

    (16:08) Democratizing AI to make it accessible and beneficial for all users.

    (18:09) Designing AI systems with privacy and security from the start.

    (27:00) The challenges and opportunities AI presents for the workforce.

    (30:25) The potential of 6G and its reliance on AI technologies.

    (32:16) The integral role of AI in future technological advancements and network optimizations.

    (39:35) The societal impacts of AI in telecommunications.

    Resources Mentioned:

    Nicolas Kourtellis - https://www.linkedin.com/in/nicolas-kourtellis-3a154511/

    Telefónica Innovación Digital - https://www.linkedin.com/company/telefonica-innovacion-digital/

    Telefonica Group - https://www.linkedin.com/company/telefonica/

    You can find all of Nicolas’ publications on his Google Scholar page: http://scholar.google.com/citations?user=Q5oWwiQAAAAJ 

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode of the Regulating AI Podcast, I'm joined by Dr. Irina Mirkina, Innovation Manager and AI Lead at UNICEF's Office of Innovation. An AI strategist, speaker, and expert for the European Commission, Dr. Mirkina brings a wealth of experience from academia, the private sector, and now, the humanitarian sector. Today’s discussion focuses on AI for social good.

    Key Takeaways:

    (03:31) The role of international organizations like UNICEF in shaping global AI regulations.

    (07:06) Challenges of democratizing AI across different regions to overcome the digital divide.

    (10:28) The importance of developing AI systems that cater to local contexts.

    (13:23) The transformative potential and limitations of AI in personalized education.

    (16:37) Engaging vulnerable populations directly in AI policy discussions.

    (20:47) UNICEF's use of AI in addressing humanitarian challenges.

    (25:10) The role of civil society in AI regulation and policymaking.

    (33:50) AI's risks and limitations, including issues of open-source management and societal impact.

    (38:57) The critical need for international collaboration and standardization in AI regulations.

    Resources Mentioned:

    Dr. Irina Mirkina - https://www.linkedin.com/in/irinamirkina/

    UNICEF Office of Innovation - https://www.unicef.org/innovation/

    Policy Guidance on AI for Children by UNICEF - https://www.unicef.org/globalinsight/media/2356/file/UNICEF-Global-Insight-policy-guidance-AI-children-2.0-2021.pdf

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I’m joined by Professor Angela Zhang, Associate Professor of Law at the University of Hong Kong and Director of the Philip K. H. Wong Center for Chinese Law. We delve into the complexities of AI regulation in China, exploring how the government’s strategies impact both the global market and internal policies.

    Key Takeaways:

    (02:14) The introduction of China’s approach to AI regulation.

    (06:40) Discussion on the volatile nature of Chinese regulatory processes.

    (10:26) How China’s AI strategy impacts international relations and global standards.

    (13:32) Angela explains the strategic use of law as an enabler in China’s AI development.

    (18:53) High-level talks between the US and China on AI risk have not led to substantive actions.

    (22:04) The US’s short-term gains from AI chip restrictions on China may lead to long-term disadvantages as China becomes self-sufficient and less cooperative.

    (24:13) Unintended consequences of the Chinese regulatory system.

    (29:19) Angela advocates for a slower development of AI technology to better assess and manage risks before they become unmanageable.

    Resources Mentioned:

    Professor Angela Zhang - http://www.angelazhang.net

    High Wire by Angela Zhang - https://global.oup.com/academic/product/high-wire-9780197682258

    Article: The Promise and Perils of China’s Regulation - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676

    Research: Generative AI and Copyright: A Dynamic Perspective - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4716233

    Research: The Promise and Perils of China's Regulation of Artificial Intelligence - https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4708676

    High Wire Book Trailer - https://www.youtube.com/watch?v=u6OPSit6k6s

    Purchase High Wire by Angela Zhang - https://www.amazon.com/High-Wire-Regulates-Governs-Economy/dp/0197682251

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I am thrilled to sit down with Congressman Joseph Morelle, who represents New York's 25th Congressional District and serves on the House Appropriations Committee. As an influential voice in the dialogue on artificial intelligence, Congressman Morelle shares his deep insights into AI's potential and challenges, particularly concerning legislation and societal impacts.

    Key Takeaways:

    (02:13) Congressman Morelle's extensive experience in AI legislation and its implications.

    (04:27) Deep fakes and their growing threat to privacy and integrity.

    (07:13) Introducing federal legislation against non-consensual deep fakes.

    (14:00) Urgent need for social media platforms to enforce their guidelines rigorously.

    (19:46) The No AI Fraud Act and protecting individual likeness in AI use.

    (23:06) The importance of adaptable and 'living' statutes in technology regulation.

    (32:59) The critical role of continuous education and skill adaptation in the AI era.

    (37:47) Exploring the use of AI in Congress to ensure unbiased, culturally appropriate policymaking and data privacy.

    Resources Mentioned:

    Congressman Joseph Morelle - https://www.linkedin.com/in/joe-morelle-8246099/

    No AI Fraud Act - https://www.congress.gov/bill/118th-congress/house-bill/6943/text?s=1&r=9

    Preventing Deep Fakes of Intimate Images Act - https://www.congress.gov/bill/118th-congress/house-bill/3106

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard

  • On this episode, I welcome Dr. Sethuraman Panchanathan, Director of the U.S. National Science Foundation and a professor at Arizona State University. Sethuraman shares personal insights on the transformative power of artificial intelligence and the importance of democratizing this technology to ensure it benefits humanity as a whole.

    Key Takeaways:

    (00:21) AI’s pivotal role in enhancing speech-language services.

    (01:28) Introduction to Sethuraman’s visionary leadership at NSF.

    (02:36) NSF’s significant AI investment of over $820 million.

    (06:19) The shift toward interdisciplinary AI research at NSF.

    (10:26) NSF’s initiative to launch 25 AI Institutes for innovation.

    (18:26) Emphasis on AI democratization through education and training.

    (25:11) The NSF ExpandAI program boosts AI in minority-serving institutions.

    (30:21) Focus on ethical AI development to build public trust.

    (40:10) AI’s transformative applications in healthcare, agriculture and more.

    (42:45) The importance of ethical guardrails in AI’s development.

    (43:08) Advancing AI through international collaborations.

    (44:53) Lessons from a career in AI and advice for the next generation.

    (50:19) Motivating young researchers and entrepreneurs in AI.

    (52:24) Advocating for AI innovation and accessibility for everyone.

    Resources Mentioned:

    Dr. Sethuraman Panchanathan - https://www.linkedin.com/in/drpanch/

    U.S. National Science Foundation | LinkedIn - https://www.linkedin.com/company/national-science-foundation/

    U.S. National Science Foundation | Website - https://www.nsf.gov/

    Arizona State University - https://www.linkedin.com/school/arizona-state-university/

    ExpandAI Program - https://new.nsf.gov/funding/opportunities/expanding-ai-innovation-through-capacity-building

    Dr. Sethuraman Panchanathan’s NSF Profile - https://www.nsf.gov/staff/staff_bio.jsp?lan=spanchan

    NSF Regional Innovation Engines - https://new.nsf.gov/funding/initiatives/regional-innovation-engines

    National AI Research Resource (NAIRR) - https://new.nsf.gov/focus-areas/artificial-intelligence/nairr

    NSF Focus on Artificial Intelligence - https://new.nsf.gov/focus-areas/artificial-intelligence

    NSF AI Research Funding - https://new.nsf.gov/funding/opportunities/national-artificial-intelligence-research

    GRANTED Initiative for Broadening Participation in STEM - https://new.nsf.gov/funding/initiatives/broadening-participation/granted

    Thanks for listening to the Regulating AI: Innovate Responsibly podcast. If you enjoyed this episode, please leave a 5-star review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

    #AIRegulation #AISafety #AIStandard