Episodes
-
In this episode, BABL AI CEO Dr. Shea Brown interviews Aleksandr Tiulkanov, an expert in AI compliance and digital policy. Aleksandr shares his fascinating journey from being a commercial contracts lawyer to becoming a leader in AI policy at Deloitte and the Council of Europe. 🚀
🔍 What’s in this episode?
The transition from legal tech to AI compliance.
Key differences between the Council of Europe’s Framework Convention on AI and the EU AI Act.
How the EU AI Act fits into Europe’s product safety legislation.
The challenges and confusion around conformity assessments and AI literacy requirements.
Insights into Aleksandr’s courses designed for governance, risk, and compliance professionals.
🛠️ Aleksandr also dives into practical advice for preparing for the EU AI Act, even in the absence of finalized standards, and the role of frameworks like ISO 42001.
📚 Learn more about Aleksandr’s courses: https://aia.tiulkanov.info
🤝 Follow Aleksandr on LinkedIn: https://www.linkedin.com/in/tyulkanov/
Check out the babl.ai website for more stuff on AI Governance and Responsible AI!
-
In this episode of Lunchtime BABLing, Dr. Shea Brown, CEO of BABL AI, is joined by Jeffery Recker and Bryan Ilg to tackle one of the most pressing questions of our time: How will AI impact the future of work?
From fears of job displacement to the rise of entirely new roles, the trio explores:
🔹 How AI will reshape industries and automate parts of our jobs.
🔹 The importance of upskilling to stay competitive in an AI-driven world.
🔹 Emerging career paths in responsible AI, compliance, and risk management.
🔹 The delicate balance between technological disruption and human creativity.
📌 Whether you're a seasoned professional, a student planning your career, or just curious about the future, this episode has something for you.
👉 Don’t miss this insightful conversation about navigating the rapidly changing job market and preparing for a future where AI is a part of nearly every role.
🎧 Listen on your favorite podcast platform or watch the full discussion here. Don't forget to like, subscribe, and hit the notification bell to stay updated on the latest AI trends and insights!
-
🎙️ Lunchtime BABLing Podcast: What Will a Trump Presidency Mean for AI Regulations?
In this thought-provoking episode, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker and CSO Bryan Ilg to explore the potential impact of a Trump presidency on the landscape of AI regulation. 🚨🤖
Key topics include:
Federal deregulation and the push for state-level AI governance.
The potential repeal of Biden's executive order on AI.
Implications for organizations navigating a fragmented compliance framework.
The role of global AI policies, such as the EU AI Act, in shaping U.S. corporate strategies.
How deregulation might affect innovation, litigation, and risk management in AI development.
This is NOT a political podcast—we focus solely on the implications for AI governance and the tech landscape in the U.S. and beyond. Whether you're an industry professional, policymaker, or tech enthusiast, this episode offers essential insights into the evolving world of AI regulation.
-
Welcome to a special Lunchtime BABLing episode, BABL Deep Dive, hosted by BABL AI CEO Dr. Shea Brown and Chief Sales Officer Bryan Ilg. This in-depth discussion explores the fundamentals and nuances of AI assurance—what it is, why it's crucial for modern enterprises, and how it works in practice.
Dr. Brown breaks down the concept of AI assurance, highlighting its role in mitigating risks, ensuring regulatory compliance, and building trust with stakeholders. Bryan Ilg shares key insights from his conversations with clients, addressing common questions and challenges that arise when organizations seek to audit and assure their AI systems.
This episode features a detailed presentation from a recent risk conference, offering a behind-the-scenes look at how BABL AI conducts independent AI audits and assurance engagements. If you're a current or prospective client, an executive curious about AI compliance, or someone exploring careers in AI governance, this episode is packed with valuable information on frameworks, criteria, and best practices for AI risk management.
Watch now to learn how AI assurance can protect your organization from potential pitfalls and enhance your reputation as a responsible, forward-thinking entity in the age of AI!
-
👉 Lunchtime BABLing listeners can save 20% on all BABL AI online courses using coupon code "BABLING20".
📚 Courses Mentioned:
1️⃣ AI Literacy Requirements Course: https://courses.babl.ai/p/ai-literacy-for-eu-ai-act-general-workforce
2️⃣ EU AI Act - Conformity Requirements for High-Risk AI Systems Course: https://courses.babl.ai/p/eu-ai-act-conformity-requirements-for-high-risk-ai-systems
3️⃣ EU AI Act - Quality Management System Certification: https://courses.babl.ai/p/eu-ai-act-quality-management-system-oversight-certification
4️⃣ BABL AI Course Catalog: https://babl.ai/courses/
🔗 Follow us for more: https://linktr.ee/babl.ai
In this episode of Lunchtime BABLing, CEO Dr. Shea Brown dives into the "AI Literacy Requirements of the EU AI Act," focusing on the upcoming compliance obligations set to take effect on February 2, 2025. Dr. Brown explains the significance of Article 4 and discusses what "AI literacy" means for companies that provide or deploy AI systems, offering practical insights into how organizations can meet these new regulatory requirements.
Throughout the episode, Dr. Brown covers:
AI literacy obligations for providers and deployers under the EU AI Act.
The importance of AI literacy in ensuring compliance.
An overview of BABL AI’s upcoming courses, including the AI Literacy Training for the general workforce, launching November 4.
-
In this episode of Lunchtime BABLing, hosted by Dr. Shea Brown, CEO of BABL AI, we're joined by frequent guest Jeffery Recker, Co-Founder and Chief Operating Officer of BABL AI. Together, they dive into an interesting question in the AI world today: Will AI really replace our jobs?
Drawing insights from a recent interview with MIT economist Daron Acemoglu, Shea and Jeffery discuss the projected economic impact of AI and what they believe the hype surrounding AI-driven job loss will actually look like. With only 5% of jobs expected to be heavily impacted by AI, is the AI revolution really what everyone thinks it is?
They explore themes such as the overcorrection in AI investment, the role of responsible AI governance, and how strategic implementation of AI can create competitive advantages for companies. Tune in for an honest and insightful conversation on what AI will mean for the future of work, the economy, and beyond.
If you enjoy this episode, don't forget to like and subscribe for more discussions on AI, ethics, and technology!
-
Welcome back to another insightful episode of Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker dive into a fascinating discussion on how the NIST AI Risk Management Framework could play a crucial role in guiding companies like Deloitte through Federal Trade Commission (FTC) investigations.
In this episode, Shea and Jeffery focus on a recent complaint filed against Deloitte regarding its automated decision system for Medicaid eligibility in Texas, and discuss how adherence to established frameworks could have mitigated the issues at hand.
📍 Topics discussed:
Deloitte’s Medicaid eligibility system in Texas
The role of the FTC and the NIST AI Risk Management Framework
How AI governance can safeguard against unintentional harm
Why proactive risk management is key, even for non-AI systems
What companies can learn from this case to improve compliance and oversight
Tune in now and stay ahead of the curve! 🔊✨
👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.
-
In the second part of our in-depth discussion on the EU AI Act, BABL AI CEO Dr. Shea Brown and COO Jeffery Recker continue to explore the essential steps organizations need to take to comply with this groundbreaking regulation. If you missed Part One, be sure to check it out, as this episode builds on the foundational insights shared there.
In this episode, titled "Where to Get Started with the EU AI Act: Part Two," Dr. Brown and Mr. Recker dive deeper into the practical aspects of compliance, including:
Documentation & Transparency: Understanding the extensive documentation and transparency measures required to demonstrate compliance and maintain up-to-date records.
Challenges for Different Organizations: A look at how compliance challenges differ for small and medium-sized enterprises compared to larger organizations, and what proactive steps can be taken.
Global Compliance Considerations: Discussing the merits of pursuing global compliance strategies and the implications of the EU AI Act on businesses operating outside the EU.
Enforcement & Penalties: Insight into how the EU AI Act will be enforced, the bodies responsible for oversight, and the significant penalties for non-compliance.
Balancing Innovation with Regulation: How the EU AI Act aims to foster innovation while ensuring that AI systems are human-centric and trustworthy.
Whether you're a startup navigating the complexities of AI governance or a large enterprise seeking to align with global standards, this episode offers valuable guidance on how to approach the EU AI Act and ensure your AI systems are compliant, trustworthy, and ready for the future.
🔗 Key Topics Discussed:
What documentation and transparency measures are required to demonstrate compliance?
How can businesses effectively maintain and update these records?
How will the EU AI Act be enforced, and which bodies are responsible for its oversight and implementation?
What are the biggest challenges you foresee in complying with the EU AI Act?
What resources or support mechanisms are being provided to businesses to help them comply with the new regulations?
How does the EU AI Act balance the need for regulation with the need to foster innovation and competitiveness in the AI sector?
What are the penalties for non-compliance, and how will they be determined and applied?
What guidelines should entities follow to ensure their AI systems are human-centric and trustworthy?
What proactive measures can entities take to ensure their AI systems remain compliant as technology and regulations evolve?
How do you see the EU AI Act evolving in the future, and what additional measures or amendments might be necessary?
👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.
-
In this episode of Lunchtime BABLing, BABL AI CEO Dr. Shea Brown is joined by COO Jeffery Recker to kick off a deep dive into the EU AI Act. Titled "Where to Get Started with the EU AI Act: Part One," this episode is designed for organizations navigating the complexities of the new regulations.
With the EU AI Act officially in place, the discussion centers on what businesses and AI developers need to do to prepare. Dr. Brown and Mr. Recker cover crucial topics including the primary objectives of the Act, the specific aspects of AI systems that will be audited, and the high-risk AI systems requiring special attention under the new regulations.
The episode also tackles practical questions, such as how often audits should be conducted to ensure ongoing compliance and how much of the process can realistically be automated. Whether you're just starting out with compliance or looking to refine your approach, this episode offers valuable insights into aligning your AI practices with the requirements of the EU AI Act.
Don't miss this informative session to ensure your organization is ready for the changes ahead!
🔗 Key Topics Discussed:
What are the primary objectives of the EU AI Act, and how does it aim to regulate AI technologies within the EU?
What impact will this have outside the EU?
What specific aspects of AI systems will need conformity assessments for compliance with the EU AI Act?
Are there any particular high-risk AI systems that require special attention under the new regulations?
How do you assess and manage the risks associated with AI systems?
What are the key provisions and requirements of the Act that businesses and AI developers need to be aware of?
How are we ensuring that our AI systems comply with GDPR and other relevant data protection regulations?
How often should these conformity assessments be conducted to ensure ongoing compliance with the EU AI Act?
📌 Stay tuned for Part Two where we continue this discussion with more in-depth analysis and practical tips!
👍 If you found this episode helpful, please like and subscribe to stay updated on future episodes.
#AI #EUAIACT #ArtificialIntelligence #Compliance #TechRegulation #AIAudit #LunchtimeBABLing #BABLAI
-
Welcome back to Lunchtime BABLing! In this episode, BABL AI CEO Dr. Shea Brown and Bryan Ilg delve into the crucial topic of "Building Trust in AI."
Episode Highlights:
Trust Survey Insights: Bryan shares findings from a recent PwC trust survey, highlighting the importance of trust between businesses and their stakeholders, including consumers, employees, and investors.
AI's Role in Trust: Discussion on how AI adoption impacts trust and the bottom line for organizations.
Internal vs. External Trust: Insights into the significance of building both internal (employee) and external (consumer) trust.
Responsible AI: Exploring the need for responsible AI strategies, data privacy, bias and fairness, and the importance of transparency and accountability.
Practical Steps: Tips for businesses on how to bridge the trust gap and effectively communicate their AI governance and responsible practices.
Join us as we explore how businesses can build a trustworthy AI ecosystem, ensuring ethical practices and fostering a strong relationship with all stakeholders.
If you enjoyed this episode, please like, subscribe, and share your thoughts in the comments below!
-
Join us for an insightful episode of "Lunchtime BABLing" as BABL AI CEO Shea Brown and VP of Sales Bryan Ilg dive deep into New York City's Local Law 144, a year after its implementation. This law mandates the auditing of AI tools used in hiring for bias, ensuring fair and equitable practices in the workplace.
Episode Highlights:
Understanding Local Law 144: A breakdown of what the law entails, its goals, and its impact on employers and AI tool providers.
Year One Insights: What has been learned from the first year of compliance, including common challenges and successes.
Preparing for Year Two: Key considerations for organizations as they navigate the second year of compliance. Learn about the nuances of data sharing, audit requirements, and maintaining compliance.
Data Types and Testing: Detailed explanation of historical data vs. test data, and their roles in bias audits.
Practical Advice: Decision trees and strategic advice for employers on how to handle their data and audit needs effectively.
This episode is packed with valuable information for employers, HR professionals, and AI tool providers to ensure compliance with New York City's AI bias audit requirements. Stay informed and ahead of the curve with expert insights from Shea and Bryan.
🔗 Don't forget to like, subscribe, and share! If you're watching on YouTube, hit the like button and subscribe to stay updated with our latest episodes. If you're tuning in via podcast, thank you for listening! See you next week on Lunchtime BABLing.
-
In this insightful episode of Lunchtime BABLing, BABL AI CEO Shea Brown and COO Jeffery Recker dive deep into Colorado's pioneering AI Consumer Protection Law. This legislation marks a significant move at the state level to regulate artificial intelligence, aiming to protect consumers from algorithmic discrimination.
Shea and Jeffery discuss the implications for developers and deployers of AI systems, emphasizing the need for robust risk assessments, documentation, and compliance strategies. They explore how this law parallels the EU AI Act, focusing particularly on discrimination and the responsibilities laid out for both AI developers and deployers.
Listeners, don't miss the chance to enhance your understanding of AI governance with a special offer from BABL AI: Enjoy 20% off all courses using the coupon code "BABLING20."
Explore our courses here: https://courses.babl.ai/
For a deeper dive into Colorado's AI law, check out our detailed blog post: "Colorado's Comprehensive AI Regulation: A Closer Look at the New AI Consumer Protection Law". Don't forget to subscribe to our newsletter at the bottom of the page for the latest updates and insights.
Link to the blog here: https://babl.ai/colorados-comprehensive-ai-regulation-a-closer-look-at-the-new-ai-consumer-protection-law/
Timestamps:
00:21 - Welcome and Introductions
00:43 - Overview of Colorado's AI Consumer Protection Law
01:52 - State vs. Federal Initiatives in AI Regulation
04:00 - Detailed Discussion on the Law's Provisions
07:02 - Risk Management and Compliance Techniques
09:51 - Importance of Proper Documentation
12:21 - Developer and Deployer Obligations
17:12 - Strategies for Public Disclosure and Risk Notification
20:48 - Annual Impact Assessments
22:44 - Transparency in AI Decision-Making
24:05 - Consumer Rights in AI Decisions
26:03 - Public Disclosure Requirements
28:36 - Final Thoughts and Takeaways
Remember to like, subscribe, and comment with your thoughts or questions. Your interaction helps us bring more valuable content to you!
-
🎙️ Welcome back to Lunchtime BABLing, where we bring you the latest insights into the rapidly evolving world of AI ethics and governance! In this episode, BABL AI CEO Shea Brown and VP of Sales Bryan Ilg delve into the intricacies of the newly released NIST AI Risk Management Framework, with a specific focus on its implications for generative AI technologies.
🔍 The conversation kicks off with Shea and Bryan providing an overview of the NIST framework, highlighting its significance as a voluntary guideline for governing AI systems. They discuss how the framework's "govern, map, measure, manage" functions serve as a roadmap for organizations to navigate the complex landscape of AI risk management.
📑 Titled "NIST AI Risk Management Framework: Generative AI Profile," this episode delves deep into the companion document that focuses specifically on generative AI. Shea and Bryan explore the unique challenges posed by generative AI in terms of information integrity, human-AI interactions, and automation bias.
🧠 Shea provides valuable insights into the distinctions between AI, machine learning, and generative AI, shedding light on the nuanced risks associated with generative AI's ability to create content autonomously. The discussion delves into the implications of misinformation and disinformation campaigns fueled by generative AI technologies.
🔒 As the conversation unfolds, Shea and Bryan discuss the voluntary nature of the NIST framework and explore strategies for driving industry-wide adoption. They examine the role of certifications and standards in building trust and credibility in AI systems, emphasizing the importance of transparent and accountable AI governance practices.
🌐 Join Shea and Bryan as they navigate the complex terrain of AI risk management, offering valuable insights into the evolving landscape of AI ethics and governance. Whether you're a seasoned AI practitioner or simply curious about the ethical implications of AI technologies, this episode is packed with actionable takeaways and thought-provoking discussions.
🎧 Tune in now to stay informed and engaged with the latest advancements in AI ethics and governance, and join the conversation on responsible AI development and deployment!
-
In this episode of the Lunchtime BABLing Podcast, Dr. Shea Brown, CEO of BABL AI, dives into the intricacies of the EU AI Act alongside Jeffery Recker, the COO of BABL AI. Titled "The EU AI Act: Prohibited and High-Risk Systems and why you should care," this conversation sheds light on the recent passing of the EU AI Act by the parliament and its implications for businesses and individuals alike.
Dr. Brown and Jeffery explore the journey of the EU AI Act, from its proposal to its finalization, outlining the key milestones and upcoming steps. They delve into the categorization of AI systems into prohibited and high-risk categories, discussing the significance of compliance and the potential impacts on businesses operating within the EU.
The conversation extends to the importance of understanding biases in AI algorithms, the complexities surrounding compliance, and the value of getting ahead of the curve in implementing necessary measures. Dr. Brown offers insights into how BABL AI assists organizations in navigating the regulatory landscape, emphasizing the importance of building trust and quality products in the AI ecosystem.
Key Topics Covered:
Overview of the EU AI Act and its journey to enactment
Differentiating prohibited and high-risk AI systems
Understanding biases in AI algorithms and their implications
Compliance challenges and the importance of early action
How BABL AI supports organizations in achieving compliance and building trust
Why You Should Tune In:
Whether you're a business operating within the EU or an individual interested in the impact of AI regulation, this episode provides valuable insights into the evolving regulatory landscape and its implications. Dr. Shea Brown and Jeffery Recker offer expert perspectives on navigating compliance challenges and the importance of ethical AI governance.
Don't Miss Out:
Subscribe to the Lunchtime BABLing Podcast for more thought-provoking discussions on AI, ethics, and governance. Stay tuned for upcoming episodes and join the conversation on critical topics shaping the future of technology.
-
Join us in this latest episode of the Lunchtime BABLing Podcast, where Shea Brown, CEO of BABL AI, shares invaluable insights from a live webinar Q&A session on carving out a niche in AI Ethics Consulting. Dive deep into the world of AI ethics, algorithm auditing, and the journey of building a boutique firm focused on ethical risk, bias, and effective governance in AI technologies.
In This Episode:
Introduction to AI Ethics Consulting: Shea Brown introduces the session, providing a backdrop for his journey and the birth of BABL AI.
Journey of BABL AI: Discover the challenges and milestones in creating and growing an AI ethics consulting firm.
Insights from the Field: Shea shares his experiences and learnings from auditing algorithms for ethical risks and navigating the evolving landscape of AI ethics.
Live Q&A Highlights: Audience questions range from enrolling in AI ethics courses, the role of lawyers in AI audits, to the importance of philosophy in AI ethics consulting.
Advice on Career Pivoting: Shea offers advice on pivoting into AI ethics consulting, highlighting the importance of understanding regulatory requirements and finding one’s niche.
Auditing Process Explained: Get a high-level overview of the auditing process, including the distinction between assessments and formal audits.
Building a Career in AI Ethics: Discussion on the demand for AI ethics consulting, networking strategies, and the interdisciplinary nature of audit teams.
Key Takeaways:
The essential blend of skills needed in AI ethics consulting.
Insights into the challenges and opportunities in the field of AI ethics.
Practical advice for individuals looking to enter or pivot into AI ethics consulting.
Don’t miss this opportunity to learn from one of the pioneers in AI ethics consulting. Whether you’re new to the field or looking to deepen your knowledge, this episode is packed with insights, experiences, and advice to guide you on your journey.
Listeners can use coupon code "FREEFEB" to get our "Finding Your Place in AI Ethics Consulting" course for free. Link on our Website.
Lunchtime BABLing listeners can use coupon code "BABLING" to save 20% on all our course offerings.
-
Welcome to another enlightening episode of Lunchtime BABLing, proudly presented by BABL AI, where we dive deep into the evolving world of artificial intelligence and its governance. In this episode, Shea is thrilled to bring you a series of exciting updates and educational insights that are shaping the future of AI.
What's Inside:
1. BABL AI Joins the NIST Consortium: We kick off with the groundbreaking announcement that BABL AI has officially become a part of the prestigious NIST consortium. Discover what this means for the future of AI development and governance, and how this collaboration is set to elevate the standards of AI technologies and applications.
2. Introducing ISO 42001: Next, Shea delves into the newly announced ISO 42001, a comprehensive framework that promises to redefine AI governance. Join Shea as he explores the high-level components of this auditable framework, shedding light on its significance and the impact it's poised to have on the AI industry.
3. Aligning Education with Innovation: We also explore how BABL AI’s online courses are perfectly aligned with the NIST AI framework, ISO 42001, and other pivotal regulations and frameworks. Learn how our educational offerings are designed to empower you with the competencies needed to navigate and excel in the complex landscape of AI governance. Whether you're a professional looking to enhance your skills or a student eager to enter the AI field, our courses offer invaluable insights and knowledge that align with the latest standards and frameworks.
-
Sign up for free for our online course "Finding Your Place in AI Ethics Consulting" during the month of February 2024.
🌍 In this news episode of Lunchtime BABLing, Shea dives deep into the complex world of AI regulatory compliance on a global scale. As the digital frontier expands, understanding and adhering to AI regulations becomes crucial for businesses and technologists alike. This episode offers a high-level guide on what to consider for AI regulatory compliance globally.
🔍 Highlights of This Episode:
EU AI Act: Your Compliance Compass - Discover how the European Union's AI Act serves as a holistic framework that can guide you through 95% of global AI compliance challenges.
Common Ground in Global AI Laws - Shea explores the shared foundations across various AI regulations, highlighting the common themes across global regulatory requirements.
Proactive Mindset Shift - The importance of shifting corporate mindsets towards proactive risk management in AI cannot be overstated. We discuss why companies must start establishing Key Performance Indicators (KPIs) now to identify and mitigate risks before facing legal consequences.
NIST's Role in Measuring AI Risk - Get insights into how the National Institute of Standards and Technology (NIST) is developing methodologies to quantify risk in AI systems, and what this means for the future of AI.
🚀 Takeaway:
This episode is a must-listen for anyone involved in AI development, deployment, or governance. Whether you're a startup or a multinational corporation, aligning with global AI regulations is imperative. Lunchtime BABLing will provide you with the knowledge and strategies to navigate this complex landscape effectively, ensuring your AI solutions are not only innovative but also compliant and ethical.
👉 Subscribe to our channel for more insights into AI technology and its global impact. Don't forget to hit the like button if you find this episode valuable and share it with your network to spread the knowledge.
#AICompliance #EUAIAct #AIRegulation #RiskManagement #TechnologyPodcast #AIethics #GlobalAI #ArtificialIntelligence
-
Sign up for free during the month of February for our online course "Finding Your Place in AI Ethics Consulting."
Link here: https://courses.babl.ai/p/finding-your-place-ai-ethics-consulting
Lunchtime BABLing listeners can save 20% off all our online courses by using coupon code "BABLING."
Link here: https://babl.ai/courses/
🤖 Welcome to another engaging episode of Lunchtime BABLing! In this episode, we delve into the intricate world of AI ethics with a special focus on its socio-technical aspects.
🎙️ Join our host, Shea Brown, as he welcomes a distinguished guest, Borhane Blili-Hamelin, PhD. Together, they explore some thought-provoking parallels between implementing AI ethics in industry and research environments. This discussion promises to shed light on the challenges and nuances of applying ethical principles in the fast-evolving field of artificial intelligence.
🔍 The conversation is not just theoretical but is grounded in ongoing research. Borhane Blili-Hamelin and Leif Hancox-Li's joint work, which was a highlight at the NeurIPS 2022 workshop, forms the basis of this insightful discussion. The workshop, held on November 28 and December 5, 2022, provided a platform for presenting their findings and perspectives.
Link to paper here: https://arxiv.org/abs/2209.00692
💡 Whether you're a professional in the field, a student, or just someone intrigued by the ethical dimensions of AI, this episode is a must-watch! So, grab your lunch, sit back, and let's BABL about the socio-technical side of AI ethics.
👍 Don't forget to like, share, and subscribe for more insightful episodes of Lunchtime BABLing. Your support helps us continue to bring fascinating topics and expert insights to your screen.
📢 We love hearing from you! Share your thoughts on this episode in the comments below. What are your views on AI ethics in industry versus research? Let's keep the conversation going!
🔔 Stay tuned for more episodes by hitting the bell icon to get notified about our latest uploads.
#LunchtimeBABLing #AIethics #SocioTechnical #ArtificialIntelligence #EthicsInAI #NeurIPS2022 #AIResearch #IndustryVsResearch #TechEthics
-
📺 About This Episode:
Join us on a riveting journey into the heart of AI integration in the business world in our latest episode of Lunchtime BABLing, where we talk about "What Things Should Companies Consider When Implementing AI." Host Shea Brown, CEO of BABL AI, teams up with Bryan Ilg, our VP of Sales, to unravel the complexities and opportunities presented by AI in the modern business landscape.
In this episode, we dive deep into the nuances of AI implementation, shedding light on often-overlooked aspects such as reputational and regulatory risks, and the paramount importance of trust and effective governance. Shea and Bryan offer their expert insights into the criticality of establishing robust AI governance frameworks and enhancing existing strategies to stay ahead in this rapidly evolving domain.
Whether you're a business owner, an executive, or simply intrigued by the ethical and practical dimensions of AI in business, this episode is packed with valuable insights and actionable advice.
🔗 Stay Connected:
Hit that like and subscribe button for more enlightening episodes.
Tune into our podcast across various platforms for your on-the-go AI insights.
👋 Thank you for joining us on Lunchtime BABLing as we explore the intricate dance of AI, business, and ethics. Can't wait to share more in our upcoming episodes!
-