Episodes
-
Join Allen Firstenberg and Linda Lawton of Two Voice Devs as they record live from Google I/O 2025! As the conference nears its end, they dive deep into the groundbreaking announcements in generative AI, discussing the latest advancements and what they mean for developers, especially those working in conversational AI.
This episode explores the new and updated models that are set to redefine content creation:
Lyria: Google's innovative streaming audio generation API, its unique WebSocket-based approach, and the fascinating possibilities (and challenges!) of dynamic music creation, including its potential for YouTube content and the ever-present copyright questions surrounding AI-generated media. (A rough streaming-client sketch follows this list.)
Veo 3: The video generation powerhouse, now enhanced with synchronized audio and voice, realistic lip-sync for characters (yes, even cartoon animals!), and improvements in "world physics." They also tackle the implications of its pricing for professional and individual creators.
Imagen 4: Discover the highly anticipated improvements in text generation within images, including stylized fonts and potential for other languages.
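For developers curious what a streaming client for an API like Lyria might look like, here is a minimal TypeScript sketch using the ws package. The endpoint URL, auth header, and message shapes are placeholders for illustration, not the actual Lyria API surface; check Google's documentation for the real connection details.

```typescript
// Minimal sketch of a WebSocket streaming-audio client in Node.js.
// ASSUMPTIONS: the endpoint URL, auth header, and message payloads are
// placeholders for illustration only; consult the Lyria documentation
// for the real connection URL, auth scheme, and message schema.
import WebSocket from "ws";
import { createWriteStream } from "node:fs";

const ENDPOINT = "wss://example.googleapis.com/v1/music:stream"; // placeholder
const out = createWriteStream("clip.pcm");

const ws = new WebSocket(ENDPOINT, {
  headers: { Authorization: `Bearer ${process.env.ACCESS_TOKEN}` },
});

ws.on("open", () => {
  // Ask the service to start generating music from a text prompt.
  ws.send(JSON.stringify({ prompt: "upbeat lo-fi for a coding montage" }));
});

ws.on("message", (data: unknown, isBinary: boolean) => {
  if (isBinary) {
    out.write(data as Buffer); // raw audio chunk, appended to the file
  } else {
    console.log("server event:", String(data)); // status / metadata message
  }
});

ws.on("close", () => out.end());
```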
Allen and Linda also share some early creations with these new models.
Whether you're building the next great voice app, creating dynamic content, or just curious about the cutting edge of AI, this episode offers a developer-focused perspective on the future of generative media.
00:00:00: Introduction to Two Voice Devs at I/O 2025
00:00:50: I/O 2025: New Generative AI Models Overview
00:01:20: Lyria: Streaming Audio Generation and Documentation Challenges
00:03:00: Lyria's Practical Use Cases & Generative AI Copyright Questions
00:10:00: Veo 3: Video Generation with Synchronized Audio and Voice Features
00:12:10: Veo 3 Pricing and Cost Implications for Developers
00:14:20: Imagen 4: Improved Text Generation in Images
00:17:40: Professional Use Cases for Veo and Imagen
00:19:10: Flow: The New Professional Studio System for Creators
00:22:00: Gemini Ultra Tiered Pricing and Regional Restrictions
00:24:20: Concluding Thoughts and Call to Action
#GoogleIO2025 #GenerativeAI #AIModels #Lyria #Veo3 #Imagen4 #FlowAI #TwoVoiceDevs #VoiceTech #ConversationalAI #AIDevelopment #MachineLearning #ContentCreation #YouTubeCreators #GoogleAI #VertexAI #GeminiUltra #CopyrightAI #TechPodcast
-
Recorded live from the podcast space at I/O, Allen Firstenberg and Roya dive into the overwhelming, yet incredibly exciting, world of AI announcements permeating the conference this year.
They discuss the pervasive theme of AI augmenting human intelligence rather than replacing it, exploring concrete examples across various domains. From breakthroughs in mathematics with AlphaProof to the efficiency gains of the new Gemma 3 model (running on small devices with a tiny memory footprint and reduced environmental impact), they cover the cutting edge of AI research and application.
Discover how tools like AI Co-Scientist and NotebookLM are revolutionizing research and productivity (including generating podcasts from your notes!), the advancements in Gemini's audio output for more natural and multilingual conversations, and the potential for an intelligence explosion with AlphaEvolve. Allen and Roya also unpack the fascinating Gemini Diffusion model's application to text and code generation and the critical role of AI in healthcare with the AMIE model.
The conversation wouldn't be complete without tackling the big question: the AGI (Artificial General Intelligence) debate. Is it coming soon, or is it still a distant concept? Join Allen and Roya for their perspectives straight from the heart of Google I/O.
Tune in to get a developer's perspective on the future of AI driven by the latest announcements from Google I/O!
00:00 - Intro & AI Everywhere at I/O
01:36 - The Core Theme: AI Augments Humans
01:55 - AI in Math: AlphaProof
04:30 - Gemma 3: Small, Efficient, Open Models
07:07 - AI for Researchers: AI Co-Scientist & NotebookLM
10:05 - Enhanced Audio: Gemini Voice & Translation
12:09 - AlphaEvolve: Feedback Loops & Intelligence Explosion
14:15 - Gemini Diffusion: Diffusion for Text & Code
21:11 - AI in Healthcare: The AMIE Model
22:08 - The AGI Debate: Is it Coming?
#GoogleIO #IO2025 #AI #MachineLearning #DeepLearning #GeminiAI #GemmaAI #DiffusionModels #NotebookLM #HealthcareAI #AGI #ArtificialGeneralIntelligence #TwoVoiceDevs #TechPodcast #Developers #ConversationalAI
-
The buzz from Google I/O 2025 is deafening, especially about the new smart glasses announcement! On this episode of Two Voice Devs, Allen Firstenberg and Noble Ackerson — former Google Glass Explorers themselves — dive deep into their first impressions of Google's Project Astra / Android XR / Gemini glasses prototype.
Drawing on their unique experience from the early days of Glass, Allen and Noble discuss the evolution of wearable computing, the collision of conversational AI (Gemini) and spatial computing (Android XR), and what this new device means for the future.
They share their thoughts on the hardware design, the user interface (is it Gemini, Android XR, or both?), and critically examine the product strategy compared to Glass and other devices like the Apple Vision Pro. Most importantly for developers, they ponder the crucial question: what is the developer story here? Is Google providing the necessary tools and documentation, or are we repeating past mistakes?
Tune in for a candid, experienced perspective on Google's latest foray into smart glasses and whether this iteration truly builds on the lessons learned from the past.
0:00:30 - Introduction: Google IO buzz and the glasses question
0:01:16 - Remembering Google Glass: First impressions & the "art of the possible"
0:02:35 - From Glass to Assistant: The evolution of ubiquitous computing
0:03:42 - The Collision: Conversational AI meets Spatial Computing
0:03:58 - First Impressions: Trying on the new Google glasses prototype at IO
0:04:25 - How Glass Shaped Us: Focusing on human factors and product strategy
0:05:44 - The "If You Build It They Will Come" Trap: Why problem-solving is key
0:07:48 - Contrasting with Apple Vision Pro & the "Start with VR" concern
0:09:14 - Breaking Down the Stack: Hardware, Android XR, and Gemini
0:14:24 - Hardware Deep Dive: Weight, balance, optics, and the lower display decision
0:18:38 - UI/Interaction Discussion: Gemini's role, gestures, voice/tap inputs
0:19:37 - The Developer Story: Lack of clarity and need for APIs/documentation
0:27:55 - Rapid Fire: Best thing & Biggest Irk point about the prototype
0:32:16 - The Big Question: Would we buy one today?
0:33:08 - Final Thoughts: Value proposition and learning from Glass
#AndroidXR #Gemini #GoogleGlass #GoogleIO #IO2025 #ProjectAstra #SmartGlasses #WearableTech #SpatialComputing #ConversationalAI #VoiceFirst #VoiceDevs #GlassExplorers #TechPodcast #DeveloperLife #HumanComputerInteraction #ProductStrategy #Google #GoogleDeepMind #DeepMind
-
Join us on Two Voice Devs as Allen Firstenberg talks with Rizel Scarlett, Tech Lead for Open Source Developer Relations at Block. Rizel shares her fascinating journey from psychology student to software engineer and now a leader in developer advocacy, highlighting her passion for teaching and creative problem-solving.
The conversation dives deep into Block's innovative open source work, particularly their AI agent called Goose, which leverages the Model Context Protocol (MCP). Rizel explains what MCP is, describing it as something like an SDK or API for AI agents, and discusses the excitement around its potential to democratize coding and other tools for developers and non-developers alike, sharing compelling use cases like automating tasks in Google Docs and interacting with Blender.
However, the discussion doesn't shy away from the critical challenges facing MCP, especially concerning security. Rizel addresses concerns about trusting community-built MCP servers, potential vulnerabilities, and mitigation strategies like allow lists and building internal, vetted servers. They also explore the complexities of exposing large APIs, the demand for local AI for privacy, the current limitations of local models, and the user experience of installing and trusting MCP plugins.
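As a purely illustrative sketch of the allow-list mitigation discussed here (the registry entries and version pinning below are invented for the example, not part of MCP itself):

```typescript
// Illustrative only: a client-side allow list of vetted MCP servers.
// The entries and version pinning are hypothetical examples, not part of
// the Model Context Protocol specification.
const vettedServers = new Map<string, string>([
  // server name -> version your security review approved
  ["internal-docs-server", "1.4.2"],
  ["speech-mcp", "0.9.0"],
]);

function isAllowed(name: string, version: string): boolean {
  return vettedServers.get(name) === version;
}

// Refuse to launch anything that has not been explicitly vetted.
if (!isAllowed("random-community-server", "0.0.1")) {
  console.warn("Blocked: server is not on the vetted allow list.");
}
```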
Rizel shares examples of promising MCP servers, including those focused on "long-term memory" and, notably, a speech/voice-controlled coding server, bringing the conversation back to the show's roots in voice development and accessibility, touching upon the concept of temporary disability.
The episode concludes by reflecting on whether MCP is currently a "small, beginner solution" being hyped as a "massive, full-featured" one, the need for more honest conversations about its limitations, and the ongoing efforts within the community and companies like Block to improve the protocol, including discussions around official registries and easier installation methods like deep links.
Tune in for a candid look at the exciting, yet challenging, landscape of AI agents, MCP, and open source development.
More Info:
* Goose - https://github.com/block/goose
* Pieces for Developers - https://pieces.app/features/mcp
* Speech MCP - https://glama.ai/mcp/servers/@Kvadratni/speech-mcp
[00:00:48] Meet Rizel Scarlett & Her Career Journey (Psychology to Dev Advocacy)
[00:03:54] Introducing Block & Its Mission (Square, Cash App, etc.)
[00:04:58] Block's Open Source Division and the Goose AI Agent
[00:05:48] Diving into the Model Context Protocol (MCP)
[00:07:56] What is MCP? (SDK for Agents) & Exciting Use Cases (Democratization, non-developers)
[00:10:36] Major Security Concerns with MCP (Trust, vulnerabilities, typo squatting)
[00:11:48] Mitigation Strategies & Authentication (Allow Lists, Internal Servers, Vetting)
[00:17:59] The Current State of MCP: An Infancy Protocol
[00:20:09] Complexity & Context Window Challenges with MCP Servers
[00:23:14] User Demand for Local AI & Data Privacy
[00:25:31] User Experience of MCP Plugin Installation & Trust
[00:28:42] Examples of Useful MCP Servers (Pieces, Computer Controller, Speech)
[00:31:18] The Power of Voice-Controlled Coding (Accessibility, temporary disability)
[00:33:59] MCP: Hype vs. Reality & The Need for Honest Conversations
[00:36:00] Efforts to Improve MCP (Committees, Registries, Deep Links)
#developer #programming #tech #opensource #block #ai #aiagent #llm #mcp #modelcontextprotocol #devrel #developeradvocacy #security #cybersecurity #privacy #localai #remoteai #accessibility #voicecoding #rizelscarlett #gooseai
-
How do you know if a Large Language Model is good for your specific task? You benchmark it! In this episode, Allen speaks with Amy Russ about her fascinating career path from international affairs to data, and how that unique perspective now informs her work in LLM benchmarking.
Amy explains what benchmarking is, why it's crucial for both model builders and app developers, and how it goes far beyond simple technical tests to include societal, cultural, and ethical considerations like preventing harms.
Learn about the complex process involving diverse teams, defining fuzzy criteria, and the technical tools used, including data versioning and prompt template engines. Amy also shares insights on how to get involved in open benchmarking efforts and where to find benchmarks relevant to your own LLM projects.
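To make the prompt-template and versioning ideas concrete, here is a toy benchmarking loop in TypeScript. The callModel function and the tiny dataset are hypothetical stand-ins rather than any particular benchmarking framework:

```typescript
// A toy benchmark harness: a versioned prompt template plus a scoring pass.
// callModel is a hypothetical stand-in for whatever LLM client you use.
type BenchCase = { input: string; expected: string };

const PROMPT_TEMPLATE_V2 = "Answer in one word.\nQuestion: {question}";

const cases: BenchCase[] = [
  { input: "What is the capital of France?", expected: "Paris" },
  { input: "What is 2 + 2?", expected: "4" },
];

async function callModel(prompt: string): Promise<string> {
  return "Paris"; // placeholder: swap in a real model call here
}

async function runBenchmark(): Promise<void> {
  let correct = 0;
  for (const c of cases) {
    const prompt = PROMPT_TEMPLATE_V2.replace("{question}", c.input);
    const answer = await callModel(prompt);
    if (answer.trim().toLowerCase() === c.expected.toLowerCase()) correct++;
  }
  // Record the score next to the prompt-template and dataset versions so
  // results stay comparable as either one changes.
  console.log(`template=v2 accuracy=${correct}/${cases.length}`);
}

runBenchmark();
```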
Whether you're building models or using them in your applications, understanding benchmarking is key to finding and evaluating the best AI for your needs.
Learn More:
* ML Commons - https://mlcommons.org/
Timestamps:
00:18 Amy's Career Path (From Diplomacy to Data)
02:46 What Amy Does Now (Benchmarking & Policy)
03:38 Defining LLM Benchmarking
05:08 Policy & Societal Benchmarking (Preventing Harms)
07:55 The Need for Diverse Benchmarking Teams
09:55 Technical Aspects & Tooling (Data Integrity, Versioning)
10:50 Prompt Engineering & Versioning for Benchmarking
12:48 Preventing Models from Tuning to Benchmarks
15:30 Prompt Template Engines & Generating Prompts
17:10 Other Benchmarking Tools & Testing Nuances
19:10 Benchmarking Compared to Traditional QA
21:45 Evaluating Benchmark Results (Human & Metrics)
23:05 The Challenge of Establishing an Evaluation Scale
23:58 How to Get Started in Benchmarking (Volunteering, Organizations)
25:20 Open Benchmarks & Where to Find Them
26:35 Benchmarking Your Own Model or App
28:55 Why Benchmarking Matters for App Builders
29:55 Where to Learn More & Follow Amy
Hashtags:
#LLM #Benchmarking #AI #MachineLearning #GenAI #DataScience #DataEngineering #PromptEngineering #ModelEvaluation #TechPodcast #Developer #TwoVoiceDevs #MLCommons #QA
-
Join Allen Firstenberg from Google Cloud Next 2025 as he sits down with Ankur Kotwal, Google's Global Head of Cloud Advocacy. In this episode of Two Voice Devs, Allen and Ankur dive deep into the world of Developer Relations (DevRel) at Google, discussing its crucial role as a bridge connecting Google's product teams and engineers with the global developer community.
Ankur shares his fascinating personal journey, from coding BASIC as a child alongside his developer dad to leading a key part of Google Cloud's developer outreach. They explore the ever-evolving landscape of technology, using the metaphor of "waves" – from early desktop computing and the internet to mobile apps and the current tidal wave of AI and "vibe coding."
This conversation offers valuable insights for all developers navigating the pace of technological change. Discover what Developer Relations is and how it serves as that essential bridge, functioning bidirectionally (both outbound communication and inbound feedback). Learn about the importance of community programs like Google Developer Experts (GDEs), and how developers can effectively connect with DevRel teams to share their experiences and help shape the future of products. Ankur and Allen also reflect on the need for continuous learning, understanding underlying tech layers, and the shared passion that drives innovation in our industry.
Whether you're a long-time developer or just starting out, learn how to ride the waves, connect with peers, and make your voice heard in the developer ecosystem by engaging with the DevRel bridge.
More Info:
* Google Developers Program: https://goo.gle/google-for-developers
Timestamps:
00:49 - Ankur's Role as Global Head of Cloud Advocacy
01:48 - The Bi-directional Nature of Developer Relations
02:34 - Ankur's Journey into Tech and DevRel
09:47 - What is Developer Relations? (The DevRel Bridge Explained)
12:06 - The Value of Community and Google Developer Experts (GDEs)
14:08 - Allen's Motivation for Being a GDE
18:24 - Riding the Waves of Technological Change (AI, Vibe Coding)
20:37 - The Importance of Understanding Abstraction Layers
25:41 - How Developers Can Engage with the DevRel Bridge
30:50 - Providing Feedback: Does it Make a Difference?
Hashtags:
#DeveloperRelations #DevRel #GoogleCloud #CloudAdvocacy #DeveloperCommunity #TechEvolution #AI #ArtificialIntelligence #VibeCoding #GoogleGemini #SoftwareDevelopment #Programming #Google #GoogleCloudNext #GoogleDevRel #GDG #GDE #TwoVoiceDevs #Podcast #Developers
-
Join Allen Firstenberg and Alice Keeler, the Two Voice Devs, live from Day 1 of Google Cloud Next 2025 in Las Vegas! In this episode, recorded amidst the energy of the show floor, Allen and Alice dive into the major announcements and highlights impacting developers, especially those interested in AI and conversational interfaces.
Alice, known as the "Queen of Spreadsheets" and a Google Developer Expert for Workspace and AppSheet, shares her unique perspective on using accessible tools like Apps Script for real-world solutions, contrasting it with the high-end tech on display.
They unpack the new suite of generative AI models announced, including Veo for video, Chirp 3 for audio, Lyria for music generation, and updates to Imagen, all available on Vertex AI. They recount the breathtaking private premiere at Sphere, discussing how Google DeepMind's cutting-edge AI enhanced the classic Wizard of Oz film, expanding and interpolating scenes that never existed – and connect this advanced technology back to tools developers can use today.
A major focus is the new Agent Builder, a tool poised to revolutionize how developers create multimodal AI agents capable of natural voice, text, and image interactions, demonstrated through exciting examples. They discuss the accessibility of this tool for developers of all levels and its potential to automate tedious tasks and create entirely new user experiences.
Plus, they touch on the new Agent to Agent Protocol for complex AI workflows, updates to AI Studio, and the production readiness of the Gemini 2.0 Live API.
Get a developer's take on the biggest news from Google Cloud Next 2025 Day 1 and a look ahead to the developer keynote.
More Info:
* Google Developers Program: https://goo.gle/google-for-developers
* Next 2025 Announcements: https://cloud.google.com/blog/topics/google-cloud-next/google-cloud-next-2025-wrap-up
00:00:31 Welcome to Google Cloud Next 2025
00:01:18 Meet Alice Keeler: Math Teacher, GDE, and Apps Script Developer
00:03:44 Apps Script: Accessible Development & Real-World Solutions
00:05:40 Cloud Next 2025 Day 1 Keynote Highlights
00:06:18 New Generative AI Models: Veo (Video), Chirp 3 (Audio), Lyria (Music), Imagen Updates
00:09:00 The Sphere Experience & DeepMind's Wizard of Oz AI Enhancement
00:14:00 From Hollywood Magic to Public Tools: Vertex AI Capabilities
00:16:30 Agent Builder: The Future of AI Agents & Accessible Development
00:23:37 Agent to Agent Protocol: Enabling Complex AI Workflows
00:25:20 Other Developer News: AI Studio Revamp & Gemini 2.0 Live API
00:26:30 Connecting with Experts & Discovering What's Next
#GoogleCloudNext #GCNext #LasVegasSphere #SphereLasVegas #TwoVoiceDevs #AI #GenerativeAI #VertexAI #Gemini #AgentBuilder #AppsScript #Developers #LowCode #NoCode #AIInEducation #AIDevelopment #ConversationalAI #VoiceAI #MachineLearning #WizardOfOz
-
Following up on our recent conversation about the Model Context Protocol (MCP), Mark and Allen take a step deeper from a developer's perspective. While still in the shallow end, they explore the TypeScript SDK, the MCP Inspector tool, and the Smithery.ai registry to understand how developers define and host MCP servers and tools.
They look at code examples for both local (Standard IO) and potentially remote (Streamable HTTP) server implementations, discussing how tools, resources, and prompts are registered and interact. They also touch on the challenges of configuration, authentication, and the practical messy realities encountered when trying to use MCP tools in clients like Claude Desktop.
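For a flavor of the code discussed, here is a minimal local (Standard IO) server sketch written against the MCP TypeScript SDK. The import paths and method signatures reflect the SDK at the time of writing and may shift, so treat it as approximate and check the SDK README:

```typescript
// A minimal local MCP server over standard I/O, sketched against the
// MCP TypeScript SDK (approximate; check the SDK README for current APIs).
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

const server = new McpServer({ name: "demo-server", version: "0.1.0" });

// Register a single tool; resources and prompts are registered in a
// similar declarative style.
server.tool(
  "add",
  { a: z.number(), b: z.number() },
  async ({ a, b }) => ({
    content: [{ type: "text", text: String(a + b) }],
  })
);

// Standard I/O transport is the "local server" model discussed here; a
// remote server would use the streamable HTTP transport instead.
const transport = new StdioServerTransport();
await server.connect(transport);
```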
This code dive generates more questions than answers about the practical hosting models, configuration complexities, and the vision for MCP in the AI ecosystem. Is it the USB-C of AI tools, or more like a 9-pin serial port needing detailed manual setup? Join Mark and Allen as they navigate the current state of MCP code and ponder its future role.
If you have insights into these complexities or are building with MCP, they'd love to hear from you!
00:40 Following up on the previous MCP episode
01:20 Reconsidering MCP's purpose and metaphors
03:25 Practical challenges with clients (like Claude Desktop) and configuration
05:00 Discussing future AI interfaces and app integration
09:15 Understanding Local vs. Remote MCP servers and hosting models
12:10 Comparing MCP setup to early web development (CGI)
13:20 Diving into the MCP TypeScript SDK code (Standard IO, HTTP transports)
23:00 Running a local MCP server and using the Inspector tool
23:50 Code walkthrough: Defining tools, resources, and prompts
31:15 Exploring remote (HTTP) connection options in the Inspector
32:30 Introducing Smithery.ai as a potential MCP registry
33:45 Navigating the Smithery registry and encountering configuration confusion
36:15 Analyzing server source code vs. registry listings - Highlighting discrepancies
44:30 Reflecting on the current practical usability and complexity of MCP
46:10 Analogy: MCP as a serial port vs. USB-C
#ModelContextProtocol #MCP #AIDevelopment #DeveloperTools #Programming #TypeScript #APIs #ToolsForAI #LLMTools #TechPodcast #SoftwareDevelopment #TwoVoiceDevs #AI #GenerativeAI #Anthropic #Google #LangChain #Coding #AIAPI
-
Join Allen Firstenberg and Michal Stanislawek on Two Voice Devs as they dive deep into the Model Context Protocol (MCP), a proposal from Anthropic that's gaining traction in the AI landscape. What exactly is MCP, and is it the key to seamless integration of external services with large language models?
In this insightful discussion, Allen and Michal unravel the complexities of MCP, exploring its potential to solve integration pain points, its current implementation challenges with local "servers," and the crucial missing pieces like robust authentication and monetization. They also discuss the implications of MCP for AI applications, compare it to established protocols, and ponder its relationship with Google's newly announced Agent to Agent (A2A) protocol.
Is MCP a game-changer that will empower natural language interaction with all kinds of software, from Blender to Slack? Or are there fundamental hurdles to overcome before it reaches its full potential? Tune in to get a developer's perspective on this evolving technology and understand its possible future in the world of AI.
Timestamps:
00:00:55: What is MCP and what does it stand for?
00:02:35: What pain points is MCP trying to solve?
00:04:35: The local nature of current MCP "servers" and its implications.
00:07:15: MCP as a communication protocol and the concept of "tools."
00:08:35: The potential for MCP server discovery and the lack thereof currently.
00:10:25: Security and trust concerns with local MCP servers.
00:13:30: The intended architecture of MCP and the local server model.
00:16:35: The absence of built-in authentication and authorization in MCP.
00:18:35: MCP as a standardized framework and the "plugin" analogy.
00:20:35: MCP's role in defining "AI apps."
00:22:35: The need for a registry component for broader adoption.
00:23:35: What MCP clients currently exist and the breadth of adoption.
00:26:25: MCP and its application in the context of AI agents.
00:29:25: What is still needed for widespread adoption of remote MCP servers?
00:35:15: The concept of an MCP "meta server" or aggregator.
00:38:55: How does Google's Agent to Agent (A2A) protocol fit in?
00:41:45: The debate between MCP servers and specialized AI agents.
00:43:15: The right level of abstraction for tool definitions.
00:46:05: The future evolution of MCP and the importance of experimentation.
#MCP #ModelContextProtocol #AI #LargeLanguageModels #LLM #Anthropic #Claude #ClaudeDesktop #ClaudeOS #Google #Agent2Agent #A2A #GeminiOS #ServerClient #AIAgents #Developer #TechPodcast #TwoVoiceDevs #APIs #SoftwareIntegration #FutureofAI
-
Following up on last week's captivating discussion, Allen Firstenberg and Noble Ackerson dive deeper into the world of Generative UI. Explore real-world examples of its potential pitfalls and discover how Noble is tackling these challenges through innovative approaches.
This episode unveils the power of dynamically adapting user interfaces based on preferences and intent, ultimately aiming for outcome-focused experiences that seamlessly guide users to their goals. Inspired by the insightful quotes from Arthur C. Clarke ("Any sufficiently advanced technology is indistinguishable from magic") and Larry Niven ("Any sufficiently advanced magic is indistinguishable from technology"), we explore how fine-tuning Large Language Models (LLMs) can bridge this gap.
Noble shares a practical demonstration of a smart home dashboard leveraging Generative UI and then delves into the crucial technique of fine-tuning LLMs. Learn why fine-tuning isn't about teaching new knowledge but rather new patterns and vocabulary to better understand domain-specific needs, like rendering accessible and effective visualizations. We demystify the process, discuss essential hyperparameters like learning rate and training epochs, and explore the practicalities of deploying fine-tuned models using tools like Google Cloud Run.
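To ground the hyperparameter discussion, here is a hypothetical supervised fine-tuning configuration in TypeScript. The field names are illustrative stand-ins (loosely inspired by Vertex AI-style tuning jobs) rather than an exact SDK surface:

```typescript
// Illustrative only: the shape of a supervised fine-tuning job for the
// generative-UI use case. Field names are hypothetical stand-ins, not a
// specific SDK; check your platform's tuning docs (e.g. Vertex AI).
interface TuningConfig {
  baseModel: string;              // the pre-trained model being adapted
  trainingDataUri: string;        // JSONL of prompt/response pairs that teach
                                  // a pattern, not new knowledge
  epochCount: number;             // passes over the data; too many overfits
  learningRateMultiplier: number; // smaller means gentler weight updates
}

const config: TuningConfig = {
  baseModel: "gemma-2b-it",                            // example open model
  trainingDataUri: "gs://my-bucket/ui-patterns.jsonl", // hypothetical path
  epochCount: 3,
  learningRateMultiplier: 0.5,
};

// One training example: teach the model the rendering vocabulary of your
// dashboard, such as returning an accessible chart spec.
const example = {
  prompt: "Show living room energy usage as an accessible chart.",
  response: JSON.stringify({ component: "BarChart", ariaLabel: "Energy usage" }),
};

console.log(config, example);
```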
Join us for an insightful conversation that blends cutting-edge AI with practical software engineering principles, revealing how seemingly magical user experiences are built with careful technical considerations.
Timestamps:
0:00:00 Introduction and Recap of Generative UI
0:03:20 Demonstrating Generative UI Pitfalls with a Smart Home Dashboard
0:05:15 Dynamic Adaptation and User Intent
0:11:30 Accessibility and Customization in Generative UI
0:13:30 Encountering Limitations and the Need for Fine-Tuning
0:17:50 Introducing Fine-Tuning for LLMs: Adapting Pre-trained Models
0:19:30 Fine-Tuning for New Patterns and Domain-Specific Understanding
0:20:50 The Role of Training Data in Supervised Fine-Tuning
0:23:30 Generalization of Patterns by LLMs
0:24:20 Exploring Key Fine-Tuning Hyperparameters: Learning Rate and Training Epochs
0:30:30 Demystifying Supervised Fine-Tuning and its Benefits
0:33:30 Saving and Hosting Fine-Tuned Models: Hugging Face and Google Cloud Run
0:36:50 Integrating Fine-Tuned Models into Applications
0:38:50 The Model is Not the Product: Focus on User Value
0:39:40 Closing Remarks and Teasing Future Discussions on Monitoring
Hashtags:
#GenerativeUI #AI #LLM #LargeLanguageModels #FineTuning #MachineLearning #UserInterface #UX #Developers #Programming #SoftwareEngineering #CloudComputing #GoogleCloudRun #GoogleGemini #GoogleGemma #HuggingFace #AIforDevelopers #TechPodcast #TwoVoiceDevs #ArtificialIntelligence #TechMagic
-
Allen and Noble dive deep into the fascinating world of Generative UI, a concept that goes beyond simply using AI to design interfaces and explores the possibility of UIs dynamically generated in real-time by AI LLMs, tailored to individual user needs and context. Noble, a returning Google Developers Expert in AI, clarifies the crucial distinction between generative UI and AI-aided UI generation. They discuss potential applications like dynamic menus and personalized settings, while also tackling the challenges around predictability, usability, and the role of established design patterns. Discover how agents, constrained within defined boundaries, can power this technology and the current limitations when it comes to generating complex UI components. Join the conversation as they explore the cutting edge of how AI could revolutionize the way we interact with software.
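As a rough sketch of the "agents constrained within defined boundaries" idea (all names here are hypothetical, not a specific framework): the model may only propose components from a registry, and anything outside it is rejected rather than rendered.

```typescript
// Hypothetical "agent on rails" for generative UI: the model may only pick
// from a registry of approved components, keeping the output predictable.
type UISpec = { component: string; props: Record<string, unknown> };

const componentRegistry: Record<string, (props: any) => string> = {
  LineChart: (p) => `<line-chart title="${p.title}"></line-chart>`,
  StatCard: (p) => `<stat-card label="${p.label}" value="${p.value}"></stat-card>`,
};

// Stand-in for a model call that returns structured output (for example via
// function calling or JSON mode in your LLM of choice).
async function proposeUI(userIntent: string): Promise<UISpec> {
  return { component: "StatCard", props: { label: userIntent, value: 21.5 } };
}

async function renderFor(userIntent: string): Promise<string> {
  const spec = await proposeUI(userIntent);
  const factory = componentRegistry[spec.component];
  if (!factory) {
    // The "rails": anything outside the registry is rejected, not rendered.
    return "<p>Sorry, I can't build that view.</p>";
  }
  return factory(spec.props);
}

renderFor("Living room temperature").then(console.log);
```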
Timestamps:
00:00:00 - Introduction and Noble's return as a Google Developers Expert in AI
00:02:00 - Defining Generative UI and distinguishing it from AI-aided design
00:03:30 - Exploring potential examples of Generative UI based on user needs and context
00:04:45 - The difference between traditional static UIs and dynamic generative UIs
00:06:45 - How LLMs can be leveraged for real-time UI generation
00:07:15 - The overlap and distinction between Generative UI and Conversational UI
00:08:30 - Challenges of Generative UI: Predictability and guiding users
00:09:30 - The importance of maintaining established UX patterns in Generative UI
00:12:30 - Traditional UI limitations and the promise of personalized generative UIs
00:14:00 - Context-specific information access and adapting to user roles
00:15:30 - An example of Generative UI in a business intelligence dashboard
00:17:00 - A six-stage pipeline for how Generative UI systems might work
00:19:00 - The concept of "agents on rails" in the context of UI generation
00:20:30 - The reasoning and tool-calling aspects of generative UI agents
00:22:30 - Tools as the core of UI generation and component recognition challenges
00:24:30 - Demonstrating the dynamic generation of UI components (charts)
00:27:30 - Exploring interactions and limitations of the generative UI demo
00:29:15 - The "hallucination" of UI components and the need for fine-tuning
00:31:30 - Conclusion and future discussion on component fine-tuning
#GenerativeUI #AI #LLM #UserInterface #UX #AIDesign #DynamicUI #TwoVoiceDevs #GoogleDevelopersExperts #TechPodcast #SoftwareDevelopment #WebDevelopment #AIAgents
-
DeepSeek AI is turning heads, achieving incredible results with older hardware and clever techniques! Join Allen and Roya as they unravel the secrets behind DeepSeek's success, from their unique attention mechanisms to their cost-effective AI training strategies. But is all as it seems? They also tackle the controversies surrounding DeepSeek, including accusations of data plagiarism and concerns about censorship. This episode is a must-listen for anyone interested in the future of AI!
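If you want to experiment along with the episode, DeepSeek's open models can be run locally through Ollama; here is a minimal TypeScript sketch that queries Ollama's local REST endpoint (the model tag is an assumption, use whichever DeepSeek variant you pulled):

```typescript
// Minimal sketch: query a locally running DeepSeek model through Ollama's
// REST API (Ollama listens on localhost:11434 by default). The model tag is
// an assumption; use whatever variant you pulled with `ollama pull`.
async function askDeepSeek(prompt: string): Promise<string> {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      model: "deepseek-r1:7b", // assumed local tag
      prompt,
      stream: false,           // return one JSON object instead of a stream
    }),
  });
  const data = await res.json();
  return data.response;        // Ollama returns the generated text here
}

askDeepSeek("Explain mixture-of-experts in one sentence.").then(console.log);
```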
Timestamps:
0:00 Why DeepSeek is creating buzz
1:06 Unveiling DeepSeek's Two Key Models
2:59 Understanding the Power of Attention
4:12 What is the latent space?
5:55 The nail salon example: Multi-Head Attention Explained
10:02 The doctor/cook/police analogy: Mixture of Experts Explained
13:51 AI vs. AI: DeepSeek's Cost-Saving Training Method
16:01 Hallucinations: Is AI Training Too Risky?
20:59 What are Reasoning Models and Why Do They Matter?
26:53 LLMs are pattern systems explained
28:22 How DeepSeek is using old GPUs
32:53 OpenAI vs. DeepSeek: The Data Plagiarism Debate
39:32 Political Correctness: The Challenge of Guardrails in AI
42:16 Why Open Source is Crucial for the Future of AI
43:20 Run DeepSeek locally with Ollama
43:56 Final Thoughts
Hashtags: #DeepSeek #AI #LLM #Innovation #TechNews #Podcast #ArtificialIntelligence #MachineLearning #Ethics #OpenAI #DataPrivacy #Censorship #TwoVoiceDevs #DeepLearning #ReasoningModel #AIRevolution #ChinaTech
-
Amazon has announced Alexa Plus, powered by large language models (LLMs), and developers are buzzing with anticipation (and a healthy dose of skepticism!). Join Mark Tucker and Allen Firstenberg, your Two Voice Devs, as they dissect the news, explore the potential of the AI-native SDKs, and debate whether this overhaul will reignite the spark for Alexa development.
In this deep dive, we cover:
* The basics of Alexa Plus: What it is, who gets it for free, and how it differs from classic Alexa skills.
* The fate of classic Alexa skills: Are they migrating, evolving, or being left behind? We explore how current skills might benefit from AI enhancements.
* Alexa's New AI SDKs (Alexa+):
** Action SDK: Turn your existing APIs into voice experiences. Is it all about selling stuff?
** WebAction SDK: Integrate your website with Alexa using low-code workflows. But how does it really work?
** Multi-Agent SDK: Surface your existing bots and agents through Alexa. What's the difference between these and existing Alexa skills?
* The Big Questions: Personalization, monetization, notifications, handling hallucinations, response times, identity, and more!
* And finally, our predictions! Will Alexa Plus make developing for Alexa fun again? Mark and Allen give their takes!
Whether you're a seasoned Alexa developer or just curious about the future of voice interfaces, this episode is packed with insights, questions, and a healthy dose of developer humor. Subscribe to Two Voice Devs for more cutting-edge discussions on voice technology!
More Info:
* https://developer.amazon.com/en-US/blogs/alexa/alexa-skills-kit/2025/02/new-alexa-announce-blog
Timestamps:
0:00:00 Introduction
0:01:00 Alexa Plus Overview
0:02:00 Pricing & Classic Skills
0:05:00 Access & Availability
0:06:00 Alexa AI SDKs
0:12:00 Action SDK
0:21:00 WebAction SDK
0:27:00 Multi-Agent SDK
0:31:00 Big Questions for Developers
0:36:00 Will Alexa Be Fun Again?
0:41:00 Response Times & Notifications
0:45:00 Multimodal Experiences
0:46:00 Conclusion
#Alexa #AlexaPlus #VoiceDevelopment #AI #LLM #Amazon #Skills #VoiceFirst #Podcast #Developer #Tech #ArtificialIntelligence #TTS #ASR #ConversationalAI
-
Allen Firstenberg and Linda Lawton dive deep into the power of Google's Imagen 3 Editing API. Discover how to effortlessly edit and enhance images, opening up a world of creative possibilities for developers!
* Learn how the In-Painting/In-Filling feature can quickly remove wires from an image, add highlights, correct shading on AI-generated images, and more (a rough API sketch follows this list).
* Explore how to create your own 3D-printed objects from scratch using AI.
* Discover how you can reference images to put models or products into a specific scene.
* Learn how to use the Out-Painting feature to extend images beyond their original boundaries, transforming portraits into landscapes and beyond.
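For a rough idea of what an in-painting request against the Vertex AI predict endpoint can look like, here is a hedged TypeScript sketch. The request-body fields for reference images, masks, and edit modes are recalled from the Imagen editing docs and should be verified against the current API reference before use:

```typescript
// Rough sketch of an in-painting call to an Imagen editing model on Vertex AI
// via the :predict REST endpoint. The request-body field names below are
// recalled from the editing docs and may be out of date; verify them against
// the current Imagen API reference before relying on this.
const PROJECT = "my-project";              // assumption: your GCP project id
const REGION = "us-central1";
const MODEL = "imagen-3.0-capability-001"; // editing-capable Imagen model

async function removeWires(accessToken: string, imageB64: string, maskB64: string) {
  const url =
    `https://${REGION}-aiplatform.googleapis.com/v1/projects/${PROJECT}` +
    `/locations/${REGION}/publishers/google/models/${MODEL}:predict`;

  const body = {
    instances: [{
      prompt: "clear sky with no power lines",
      referenceImages: [
        { referenceType: "REFERENCE_TYPE_RAW", referenceId: 1,
          referenceImage: { bytesBase64Encoded: imageB64 } },
        { referenceType: "REFERENCE_TYPE_MASK", referenceId: 2,
          referenceImage: { bytesBase64Encoded: maskB64 } },
      ],
    }],
    parameters: { editMode: "EDIT_MODE_INPAINT_REMOVAL", sampleCount: 1 },
  };

  const res = await fetch(url, {
    method: "POST",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(body),
  });
  return res.json(); // predictions[].bytesBase64Encoded holds the edited image
}
```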
Also, be prepared for some unexpected and hilarious AI hallucinations along the way as Allen tries to zoom out from an image multiple times! Plus, the duo discusses the ethical implications of AI-generated content and how creatives can leverage these tools to enhance their own artwork.
Don't miss this exciting exploration of Imagen 3 and its potential to revolutionize image manipulation for developers and creators alike!
Timestamps:
00:00:00 Introduction
00:00:55 Imagen 3 Editing API
00:04:36 In-Painting/In-Filling
00:04:52 Generating 3D Models
00:09:00 Vertex AI Studio
00:10:15 Imagen and Gemini Together
00:13:14 Generating Images with Reference Images
00:20:11 Out-Painting
00:31:00 Ethical Implications
#Imagen3 #AI #ImageEditing #GoogleAI #VertexAI #VertexAISprint #MachineLearning #DeveloperTools #GenerativeAI #GenAI #3DPrinting #AIArt
-
Are you building AI models and systems? Then you need to understand AI ethics! In this episode of Two Voice Devs, Allen Firstenberg welcomes Parul, a Senior Production Engineer at Meta, to dive deep into the world of AI ethics. Learn why fairness and bias are critical considerations for developers, and discover practical techniques to mitigate bias in your AI systems.
Parul shares her experiences and passion for AI ethics, detailing how biases in training data and system design can lead to unfair or even harmful outcomes. This episode provides concrete examples, actionable advice, and valuable resources for developers who want to build more ethical and equitable AI.
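Since demographic parity comes up as a concrete fairness constraint, here is a small self-contained TypeScript illustration of computing per-group selection rates and their gap (the records are made-up example data):

```typescript
// Demographic parity check: compare the positive-outcome (selection) rate
// across groups; a large gap suggests the system treats groups unequally.
// The records below are made-up illustration data.
type Outcome = { group: string; approved: boolean };

const records: Outcome[] = [
  { group: "A", approved: true },  { group: "A", approved: true },
  { group: "A", approved: false },
  { group: "B", approved: true },  { group: "B", approved: false },
  { group: "B", approved: false },
];

function selectionRates(data: Outcome[]): Map<string, number> {
  const totals = new Map<string, { pos: number; n: number }>();
  for (const r of data) {
    const t = totals.get(r.group) ?? { pos: 0, n: 0 };
    t.n += 1;
    if (r.approved) t.pos += 1;
    totals.set(r.group, t);
  }
  return new Map([...totals].map(([g, t]) => [g, t.pos / t.n]));
}

const rates = selectionRates(records);
const gap = Math.max(...rates.values()) - Math.min(...rates.values());
console.log(rates, "demographic parity difference:", gap); // 0 means parity
```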
More Info:
* Fairlearn: https://fairlearn.org/
* AIF360: https://aif360.readthedocs.io/en/stable/
* what-if tool: https://pair-code.github.io/what-if-tool/
Timestamps:
00:00:00 Introduction
00:00:20 Guest Introduction: Parul, Meta
00:02:22 What is AI Ethics?
00:06:13 Why is AI Ethics Important?
00:08:15 AI Systems vs. AI Models
00:09:52 Examples of Bias in AI Systems
00:12:23 Minimizing Biases: Developer Responsibility
00:14:53 Tips for Minimizing Unfairness and Biases
00:19:40 Fairness Constraints: Demographic Parity
00:23:17 The Bigger Picture: Roles & Responsibilities
00:29:23 Monitoring: Bias Benchmarks
00:32:00 Open Source Frameworks for AI Ethics
00:34:02 Call to Action & Closing
#AIethics #Fairness #Bias #MachineLearning #ArtificialIntelligence #Developers #OpenSource #EthicalAI #TwoVoiceDevs #TechPodcast #DataScience #AIdevelopment
-
Are you overwhelmed by the sheer number of Large Language Models (LLMs) available? Choosing the right LLM for your project isn't about picking the most popular one – it's about understanding your specific needs and rigorously evaluating your options.
In this episode of Two Voice Devs, Allen Firstenberg and guest host Brad Nemer, a seasoned product manager, dive deep into the world of LLM evaluation. They go beyond the marketing buzz and explore practical tools and strategies for making informed decisions.
Whether you're a developer, a product manager, or just curious about the practical applications of LLMs, this episode provides invaluable insights into making the right choices for your projects. Don't get caught up in the hype – learn how to evaluate LLMs effectively!
More Info:
https://www.udacity.com/blog/2025/01/how-to-choose-the-right-ai-model-for-your-product.html
[00:00:00] Introduction: Meet Brad Nemer
[00:00:38] Brad's Journey to Product Management & AI
[00:03:12] Collaboration with Noble Ackerson and the LLM Evaluation Challenge
[00:05:23] The Role of a Product Manager
[00:07:43] How Product Managers Relate to Engineering
[00:13:46] Exploring Evaluation Tools: Hugging Face
[00:16:58] Exploring Evaluation Tools: Chatbot Arena (Human Evaluation)
[00:20:30] Chatbot Arena: Code Generation Evaluation
[00:24:43] Evaluating LLMs: Beyond Chatbots and Truth
[00:26:11] Exploring Evaluation Tools: Artificial Analysis (Quality, Speed, Price)
[00:28:47] Exploring Evaluation Tools: Galileo (Hallucination Report)
[00:31:16] Case Study: DeepSeek and the Importance of Contextual Evaluation
[00:34:53] The Future of LLM Testing and Quality Assurance
[00:37:49] Wrap-Up & Contact Information
#LLM #LargeLanguageModels #AIEvaluation #ProductManagement #TechTalk #TwoVoiceDevs #HuggingFace #GenAI #GenerativeAI #ChatbotArena #ArtificialAnalysis #Galileo #DeepSeek #ChatGPT #Gemini #Mistral #Claude #ModelSelection #AIdevelopment #SoftwareDevelopment #Testing #QA #RAG #MachineLearning #NLP #Coding #TechPodcast #YouTubeTech #Developers
-
Google's white paper on AI Agents has sparked debate – are they truly the next leap in AI, or just large language models dressed up with new terminology? Join Allen and Mark of Two Voice Devs as they dive into the details, exploring the potential of Google's framework while also critically examining its shortcomings. They analyze the core components of agents – models, tools, and orchestration – highlighting the value of defining tools as capable of taking actions. But they also raise key questions about the blurry line between models and agents, the confusing definitions of extensions and functions, and the critical omission of authentication and identity considerations. This episode is a balanced take on a fascinating and complex topic, offering developers valuable insights into the evolution of AI systems.
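To make the model / tools / orchestration split concrete, here is a stripped-down, hypothetical agent loop in TypeScript; the modelDecide function is a stand-in for a real LLM call with tool calling enabled, not an API from the white paper:

```typescript
// A toy agent: the model decides, tools act, and the loop is the orchestration
// layer. modelDecide is a hypothetical stand-in for an LLM call with tool
// calling enabled.
type Decision =
  | { kind: "tool"; name: string; args: Record<string, string> }
  | { kind: "answer"; text: string };

const tools: Record<string, (args: Record<string, string>) => Promise<string>> = {
  // A "tool" takes an action in the outside world (faked here).
  getWeather: async ({ city }) => `Sunny and 22°C in ${city}`,
};

async function modelDecide(history: string[]): Promise<Decision> {
  // Placeholder logic: a real model would reason over the whole history.
  return history.length === 1
    ? { kind: "tool", name: "getWeather", args: { city: "Rotterdam" } }
    : { kind: "answer", text: "It's sunny in Rotterdam, so no umbrella needed." };
}

// Orchestration: call the model, run requested tools, feed results back.
async function runAgent(userMessage: string): Promise<string> {
  const history = [userMessage];
  for (let step = 0; step < 5; step++) {
    const decision = await modelDecide(history);
    if (decision.kind === "answer") return decision.text;
    const tool = tools[decision.name];
    if (!tool) return `The model asked for an unknown tool: ${decision.name}`;
    history.push(`tool:${decision.name} -> ${await tool(decision.args)}`);
  }
  return "Gave up after too many steps.";
}

runAgent("Do I need an umbrella today?").then(console.log);
```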
Key Moments:
[00:00:20] The core definition of agents: A promising start, or too broad?
[00:05:08] Model vs. Orchestration: Understanding the decision-making layers.
[00:17:33] "Tools" Unpacked: Exploring actions, extensions, and functions
[00:25:14] The crucial gap: Authentication, Identity, and User context.
[00:29:36] Reasoning techniques: ReAct, Chain of Thought, and Tree of Thought explained.
[00:35:41] The model-agent debate: Where is the boundary line?
[00:37:45] Setting the stage for Gemini 2.0?
[00:39:06] A valuable discussion starter, but with room to grow.
Hashtags:
#AIAgents #GoogleAI #LLM #GenerativeAI #AIInnovation #TechAnalysis #TwoVoiceDevs #AIDevelopment #AIArchitecture #SoftwareEngineering #DeveloperPodcast #GeminiAI #MachineLearning #DeepLearning #AITools #Authentication #TechDiscussion #BalancedTech
-
Join Allen Firstenberg as he welcomes Lee Mallon, a first-time guest host, for an in-depth discussion about the future of development, user experiences, and the exciting potential of AI-driven personalization! Lee shares his journey from coding on a Toshiba MX 128k to becoming CTO of Yalpop, a company reinventing learning through personalized experiences. This isn't just another AI hype-cast – it's a deep dive into how we can shift our mindset to truly put users at the center of our development process, leveraging new tech to create delightful and efficient experiences.
Lee and Allen discuss everything from the limitations of current recommendation engines to the emerging potential of AI agents and just-in-time interfaces. This is a must-watch for any developer looking to stay ahead of the curve and build truly impactful applications.
#AI #ArtificialIntelligence #GenAI #GenerativeAI #Personalization #UserExperience #UX #Development #WebDev #FutureOfTech #LLMs #LargeLanguageModels #AIagents #MachineLearning #SoftwareDevelopment #Programming #WebDevelopment #TwoVoiceDevs #Podcast #TechPodcast #Innovation #Code #Coding #Developer #TechTrends #UserCentricDesign #Web4 #NoCode #LowCode #DigitalTransformation
-
Hold onto your keyboards, folks! AI is shaking up the software engineering world, and in this electrifying episode of Two Voice Devs, Allen and Mark are diving headfirst into the chaos. We're not just talking about the theory – we're getting real about how AI coding tools are actually impacting developers right now. Is this the end of coding as we know it, or the dawn of a new era of software creation?
More Info:
* https://newsletter.pragmaticengineer.com/p/how-ai-will-change-software-engineering
* https://addyo.substack.com/p/the-70-problem-hard-truths-about
[00:00:00] Introduction: Meet Allen and Mark and hear about their busy start to the year.
[00:00:39] The Trigger: Discover the article from The Pragmatic Engineer that sparked this conversation about the role of AI in software engineering.
[00:02:16] Addressing the Panic: We discuss the common fear: is AI going to steal developer jobs?
[00:03:34] Key Article Points: Allen breaks down the seven key areas of the article: how developers are using AI, the "70% Problem," and more.
[00:04:43] Design Patterns & Craftsmanship: Mark discusses how AI-driven development relates to established software patterns and developer craftsmanship.
[00:07:44] The Knowledge Paradox: Unpack the key difference in how senior and junior developers use AI and the potential issues it raises.
[00:10:06] AI vs. Stack Overflow: We explore the differences between getting code from AI and from community platforms like Stack Overflow.
[00:12:49] Personal Experiences: Allen and Mark share how they're actually using AI tools in their coding workflows.
[00:17:09] AI Usage Patterns: Discussing the "constant conversation", "trust but verify", and "AI first draft" patterns.
[00:20:55] The 70% Problem Revisited: Is AI just getting us part way there?
[00:23:24] AI as a Team Member: Exploring the idea of AI as a pair programming partner and whether it's actually helping.
[00:24:41] Trusting your Experience: the importance of listening to the gut feeling of an experienced developer when AI-generated code "feels" wrong.
[00:26:06] Programming Languages are Easy for AI: The simplicity and consistency of programming grammars.
[00:27:47] Is English the New Programming Language?: We debate the idea that natural language is becoming as important as coding and discuss what "programming" really means.
[00:30:36] The Problem with Trying to Make Programming Easy: Historical attempts to make programming easier are revisited.
[00:32:37] Programming vs the Rest of the Job: The core job of a software developer is more than just programming and writing code.
[00:37:21] Quality & Craftsmanship in the Age of AI: We explore what will make software stand out in the future and how crafting great software still matters.
[00:40:27] AI for Personal Software: Could AI drive a renaissance in personal software, similar to the spreadsheet?
[00:42:53] The Importance of AI Literacy: Mastering AI development is the new skill to make developers even more valuable.
[00:43:47] Closing Thoughts: The essential skills of developers remain crucial as we move into the future of AI driven coding.
[00:44:59] Call to Action: We encourage you to join the conversation and share your thoughts on AI and software development.
This isn't just another tech discussion – it's a high-stakes debate about the soul of software engineering. Will AI become our greatest ally, or our ultimate replacement? Tune in to find out!
#AIApocalypse #CodeRevolution #SoftwareEngineering #ArtificialIntelligence #Coding #Programming #Developers #TechPodcast #TwoVoiceDevs #MachineLearning #AICoding #FutureofCode #TechDebate #DeveloperSkills #CodeCraft #AIvsHuman #CodeNewbie #SeniorDev #JuniorDev #TechTrends
-
Join Mark and Allen, your favorite Two Voice Devs, as they explore the exciting (and sometimes frustrating!) world of Gemini 2.0's search grounding capabilities and how to use it with LangChainJS! Allen shares his recent holiday project: a deep dive into Google's latest AI tools, including the revamped search grounding feature, and how he made it work seamlessly across Gemini 1.5 and 2.0. We'll show you the code and demonstrate the difference between responses with and without search grounding, using real-world examples. Learn how to build your own powerful, grounded AI applications and stay ahead of the curve in the rapidly changing AI landscape!
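For a taste of the kind of code Allen walks through, here is a hedged TypeScript sketch of a Gemini model in LangChainJS with and without search grounding. The { googleSearch: {} } tool shape mirrors the Gemini 2.0 REST API; exactly how it is bound has varied between @langchain/google-genai releases, so verify against the current docs:

```typescript
// Hedged sketch: the same Gemini model in LangChainJS, once plain and once
// with Google Search grounding attached. The { googleSearch: {} } tool shape
// mirrors the Gemini 2.0 REST API; whether bindTools passes it through
// unchanged depends on your @langchain/google-genai version (assumption).
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";

const baseModel = new ChatGoogleGenerativeAI({
  model: "gemini-2.0-flash",
  apiKey: process.env.GOOGLE_API_KEY,
});

// Bind the built-in Google Search tool to get grounded answers.
const groundedModel = baseModel.bindTools([{ googleSearch: {} }]);

const question = "Who won the Nobel Prize for Physics in 2024?";

const ungrounded = await baseModel.invoke(question);
const grounded = await groundedModel.invoke(question);

console.log("without grounding:\n", ungrounded.content);
console.log("with grounding:\n", grounded.content);
// Grounded responses also carry citation/grounding metadata in the response,
// which is what the episode uses to annotate sources.
```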
In this episode, you'll discover:
[00:00:00] Introduction to Two Voice Devs and what we've been up to
[00:00:24] Allen discusses tackling bug fixes and updates with Gemini 2.0 and LangChain
[00:00:51] The new Gemini 2.0 Search Grounding Tool: what's new? What does it mean to be "agentic"?
[00:02:13] Allen dives into the Google Search Tool, understanding the differences between 1.5 and 2.0, and building a layer for easy use in LangChain
[00:03:06] Allen walks us through the code! The magic of setting up a model with or without search capabilities in LangChainJS
[00:04:48] Using output parsers and annotating your results in LangChainJS
[00:05:53] Similarities between Perplexity's results, and how LangChainJS handles output
[00:06:46] Running the same query with and without grounding, and the dramatic difference in the response (Who won the Nobel Prize for Physics in 2024?)
[00:08:26] A closer look at how LangChainJS presents its source references and how to use them in your projects.
[00:12:55] Taking advantage of tools that Google is providing
[00:13:20] The goal of keeping backward compatibility for developers
[00:15:39] Exploring how this is a version of RAG and how that compares to using external data sources
[00:16:50] What are data sources in VertexAI and how they relate?
[00:19:14] What is the cost? How is Google pricing the search capability?
[00:20:59] More to come soon from Allen with LangChainJS!
Don't miss this deep dive into cutting-edge AI development! Like, subscribe, and share if you find this information helpful!
#Gemini #LangChain #LangChainJS #AI #ArtificialIntelligence #GoogleAI #VertexAI #SearchGrounding #RAG #RetrievalAugmentedGeneration #LLM #LargeLanguageModels #OpenSource #TwoVoiceDevs #Programming #Coding #GoogleSearch #DataScience #MachineLearning #Innovation #TechPodcast #TechVideo