Episodes
-
In this episode, Noam Rubin, a Software Developer at Vanta, reveals how his team uses data-driven strategies to design, test, and improve cutting-edge AI features. Learn how customer insights, rapid prototyping, and iterative development transform raw ideas into tools that make compliance and security easier for businesses everywhere.
Chapters:
00:00 - Introduction
02:47 - The process of building AI products at Vanta
04:51 - The role of customer feedback in product development
06:59 - Integrating AI into security and compliance workflows
08:06 - Using data specifications to guide product development
10:10 - Collaborating with subject matter experts to refine AI models
12:14 - Iterative testing and refining AI features
14:10 - Quality control and ensuring AI accuracy
16:00 - The importance of dogfooding and internal feedback loops
18:23 - Scaling AI features and rolling them out to wider audiences
20:50 - Educating engineers and democratizing AI at Vanta
22:20 - Key lessons learned from building AI products
24:12 - Maintaining AI quality through continuous feedback
26:00 - The future of AI in business and product development -
In this episode of High Agency, former OpenAI researcher Stan Polu shares his journey from AI research to founding Dust, an enterprise AI platform. Stan offers a contrarian view on the future of AI, suggesting we may be hitting a plateau in model capabilities since GPT-4. He discusses why startups should focus on product-market fit before investing in GPUs, shares practical lessons for building AI products, and predicts increased competition between AI labs and API developers.
Chapters:
00:00 - Introducing Dust: an enterprise AI platform
06:07 - From Stripe to OpenAI: Stan's journey
10:29 - Why research wasn't enough: building Dust
15:10 - Best practices for building an AI product
20:50 - Is prompt engineering here to stay?
23:40 - Understanding language models and their limitations
32:56 - Predictions for AI in 2025
39:53 - Measuring progress toward AGI
42:26 - The true value of AI technology
--------------------------------------------------------------------------------------------------------------------------------------------------
Humanloop is the LLM evals platform for enterprises. We give you the tools that top teams use to ship and scale AI with confidence. To find out more go to humanloop.com -
In this episode, we explore how Replicate is breaking down barriers in AI development through its open-source platform. CEO Ben Firshman shares how Replicate enables developers without machine learning expertise to run AI models in the cloud.
00:00 Introduction
00:29 Overview of Replicate
03:13 Replicate's user base
05:45 Enterprise use cases and lowering the AI barrier
07:45 The complexity of traditional AI deployment
10:24 Simplifying AI with Replicate's API
13:50 ControlNets and the challenges of image models
19:42 Fragmentation in AI models: images vs. language
25:05 Customization and multi-model pipelines in production
26:33 Learning by doing: skills for AI engineers
28:44 Applying AI in governments
31:12 Iterative development and co-evolution of AI specs
33:13 Final reflections on AI hype
35:18 Conclusion
--------------------------------------------------------------------------------------------------------------------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com -
How do you build AI tools that actually meet users’ needs? In this episode of High Agency, Raza speaks with Lorilyn McCue, the driving force behind Superhuman’s AI-powered features. Lorilyn lays out the principles that guide her team’s work, from continuous learning to prioritizing user feedback. Learn how Superhuman’s "learning-first" approach allows them to fine-tune features like Ask AI and AI-driven summaries, creating practical solutions for today’s professionals.
00:00 - Introduction
04:20 - Overview of Superhuman
06:50 - Instant Reply and Ask AI
10:00 - Building On-Demand vs. Always-On AI Features
13:45 - Prompt Engineering for Effective Summarization
22:35 - The Importance of Seamless AI Integration in User Workflows
25:10 - Developing Advanced Email Search with Contextual Reasoning
29:45 - Leveraging User Feedback
32:15 - Balancing Customization and Scalability in AI-Generated Emails
36:05 - Approach to Prioritization
39:30 - Real-World Use Cases: The Versatility of Current AI Capabilities
43:15 - Learning and Staying Updated in the Rapidly Evolving AI Field
46:00 - Is AI Overhyped or Underhyped?
49:20 - Final Thoughts and Closing Remarks
--------------------------------------------------------------------------------------------------------------------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com -
This week on High Agency, Raza Habib is joined by Chroma founder Jeff Huber. They cover the evolution of vector databases in AI engineering and challenge common assumptions about RAG. Jeff shares insights from Chroma's development, including the team's focus on developer experience and observations about real-world usage patterns. They also get into whether we can expect a super AI any time soon and what is overhyped and underhyped in the industry today.
00:00 - Introduction
02:30 - Why vector databases matter for AI
06:00 - Understanding embeddings and similarity search
12:00 - Chroma early days
15:45 - Problems with existing vector database solutions
19:30 - Workload patterns in AI applications
23:40 - Real-world use cases and search applications
27:15 - The problem with RAG terminology
31:45 - Dynamic retrieval and model interactions
35:30 - Email processing and instruction management
39:15 - Context windows vs vector databases
42:30 - Enterprise adoption and production systems
45:45 - The journey from GPT-3 to production AI
48:15 - Internal vs customer-facing applications
51:00 - Advice for AI engineers
--------------------------------------------------------------------------------------------------------------------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com -
In this episode of High Agency podcast, Peter Gostev shares his experiences implementing LLMs at NatWest and Moonpig. He discusses creating an AI strategy, talks about challenges in deploying LLMs in large organizations, and shares thoughts on underappreciated AI developments.
00:00 - Introduction
00:44 - OpenAI dev day reactions
03:47 - Using AI to automate customer service
10:43 - Impact of AI products
13:41 - Who are the users of LLMs?
14:47 - Challenges building with AI in a large enterprise
21:22 - AI use cases at Moonpig
24:34 - How to create an AI strategy
28:10 - Underappreciated AI developments
--------------------------------------------------------------------------------------------------------------------------------------------------
Humanloop is an LLM evals platform for enterprises. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com -
In this episode of High Agency, we're joined by Surojit Chatterjee, former CPO of Coinbase and now CEO of Ema. Surojit unveils his audacious plan to create universal AI employees and revolutionize the Fortune 1000 workforce. Drawing from his career at tech giants like Google and Coinbase, he shares how these experiences fueled his vision for Ema. Surojit dives into the challenges of building AI agents, explores the concept of artificial humans, and predicts how this technology could transform the future of SaaS.
(00:00:00) Introduction and Surojit’s background
(00:03:00) Founding story of Ema (Universal AI Employee)
(00:04:53) How the Universal AI Employee works
(00:08:39) Ema’s data integration and security
(00:12:57) AI employee use cases in enterprises
(00:15:02) Challenges with building AI agents
(00:16:45) Evaluations, hallucinations, customizing models
(00:19:52) Artificial human metaphor
(00:25:42) AI employee vs humans
(00:31:25) Advice for AI builders
(00:37:14) Is AI overhyped or underhyped?
(00:39:28) How the business model of SaaS will change
--------------------------------------------------------------------------------------------------------------------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com -
Hamel Husain is a seasoned AI consultant and engineer with experience at companies like GitHub, DataRobot, and Airbnb. He is a trailblazer in AI development, known for his innovative work in literate programming and AI-assisted development tools. Shawn Wang (aka Swyx) is the host of the Latent Space podcast, the author of the essay 'Rise of the AI Engineer,' and the founder of the AI Engineer World Fair. In this episode, Hamel and Swyx share their unique insights on building effective AI products, the critical importance of evaluations, and their vision for the future of AI engineering.
Chapters
00:00 - Introduction and recent AI advancements
06:14 - The critical role of evals in AI product development
15:33 - Common pitfalls in AI product development
26:33 - Literate programming: A new paradigm for AI development
39:58 - Answer AI and innovative approaches to software development
51:56 - Integrating AI with literate programming environments
58:47 - The importance of understanding AI prompts
01:00:37 - Assessing the current state of AI adoption
01:07:10 - Challenges in evaluating AI models
--------------------------------------------------------------------------------------------------------------------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com -
Raz Nussbaum is a Senior Product Manager in AI at Gong — the leading AI platform for revenue teams. He is an absolute legend when it comes to building and scaling AI products that genuinely deliver value. In this episode, he opens up about what it takes to build successful AI products in an era where things change at lightning speed.
Chapters
00:00 - Introduction
01:16 - How LLMs Changed Product Development at Gong AI
08:32 - Including Product Managers in Development Process
13:05 - Testing and Monitoring Pre vs Post-deployment
17:53 - New Challenges in the Face of Generative AI
19:39 - Shipping Fast and Interacting with the Market
23:25 - What's Next For Gong AI
25:13 - The Psychology of Trusting AI
28:19 - Is AI Overhyped or Underhyped?
--------------------------------------------------------------------------------------------------------------------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com -
In this episode, we dive deep into the world of AI-assisted creative writing with James Yu, founder of Sudowrite. James shares the journey of building an AI assistant for novelists, helping writers develop ideas, manage complex storylines, and avoid clichés. James gets into the backlash the company faced when they first released Story Engine and how they're working to build a community of users.
00:00 - Introduction and Background of Sudowrite
02:26 - The Early Days: Concept, Skepticism, and User Adoption
05:20 - Sudowrite's Interface, Features, and User Base
10:23 - Developing and Iterating Features in Sudowrite
17:29 - The Evolution of Story Bible and Writing Assistance
24:27 - Challenges in Maintaining Coherence and AI-Assisted Writing
29:12 - Evaluating AI Features and the Role of Prompt Engineering
33:35 - Handling Tropes, Clichés, and Fine-Tuning for Author Voice
40:43 - The Controversy and Future of AI in Creative Work
51:37 - Predictions for AI in the Next Five Years
--------------------------------------------------------------------------------------------------------------------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com -
In this episode, LiveKit CEO Russ d'Sa explores the critical role of real-time communication infrastructure in the AI revolution. From building voice demos to powering OpenAI's ChatGPT, he shares insights on technical challenges around building multimodal AI on the web and what new possibilities are opening up.
00:00 - Introduction and Background
01:34 - The Evolution of AI and Lessons for Founders
05:20 - Timelines and Technological Progress
10:32 - Overview of LiveKit and Its Impact on AI Development
13:39 - Why LiveKit Matters for AI Developers
19:08 - Partnership with OpenAI
21:25 - Challenges in Streaming and Real-Time Data Transmission
30:07 - Building a global network for AI communication
37:21 - Real-world applications of LiveKit in AI systems
40:55 - Future of AI and the Concept of Abundance
43:38 - The Irony of Wealth in an Age of AI
I hope you enjoy the conversation and if you do, please subscribe!
--------------------------------------------------------------------------------------------------------------------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com -
This week we’re talking to Lin Qiao, former PyTorch lead at Meta and current CEO of Fireworks AI. We discuss the evolution of AI frameworks, the challenges of optimizing inference for generative AI, the future of AI hardware, and open-source models. Lin shares insights on PyTorch design philosophy, how to achieve low latency, and the potential for AI to become as ubiquitous as electricity in our daily lives.
Chapters:
00:00 - Introduction and PyTorch Background
04:28 - PyTorch's Success and Design Philosophy
08:20 - Lessons from PyTorch and Transition to Fireworks AI
14:52 - Challenges in Gen AI Application Development
22:03 - Fireworks AI's Approach
24:24 - Technical Deep Dive: How to Achieve Low Latency
29:32 - Hardware Competition and Future Outlook
31:21 - Open Source vs. Proprietary Models
37:54 - Future of AI and Conclusion
I hope you enjoy the conversation and if you do, please subscribe!
--------------------------------------------------------------------------------------------------------------------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com -
In this episode of High Agency, we speak to Paras Jain, CEO of the AI video generation startup Genmo. Paras shares insights from his experience working on autonomous vehicles, why he chose academia over an offer from Tesla, and the research-minded approach that has led to Genmo's rapid success.
Chapters:
(00:00) Introduction
(01:52) Lessons from selling an AI company to Tesla
(07:01) Working within GPU constraints and transformer architecture
(11:18) Moving from research to startup success
(14:36) Leading the video generation industry
(16:05) Training diffusion models for videos
(19:36) Evaluating AI video generation
(24:06) Scaling laws and data architecture
(28:34) Issues with scaling diffusion models
(33:09) Business use cases for video generation models
(36:43) Potential and limitations of video generation
(40:59) Ethical training of video models -
In this week’s episode of the High Agency podcast, Humanloop Co-Founder and CEO Raza Habib sat down with Eddie Kim, co-founder and Head of Technology at Gusto, and guest host Ali Rowghani to discuss how Gusto has applied AI to revolutionize ops-heavy processes like payroll and HR admin. Eddie also explains why Gusto is choosing to build, rather than buy, the majority of its GenAI tech stack.
Chapters
00:00 - Introduction and Background
02:15 - Overview of Gusto's Business
05:59 - Operational Complexity and AI Opportunities
08:51 - Build vs. Buy: Internal vs. External AI Tools
10:07 - Prioritizing AI Use Cases
13:53 - Human-in-the-Loop Approach
19:39 - Centralized AI Team and Approach
22:53 - Measuring ROI from AI Initiatives
32:25 - AI-Powered Reporting Feature
38:46 - Code Generation and Developer Tools
42:52 - Impact of AI on Companies and Society
47:22 - AI Safety and Risks
49:54 - Closing Thoughts
I hope you enjoy the conversation and if you do, please subscribe!
--------------------------------------------------------------------------------------------------------------------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com/podcast -
In this episode, we sit down with Michael Royzen, CEO and co-founder of Phind. Michael shares insights from his journey in building the first LLM-based search engine for developers, the challenges of creating reliable AI models, and his vision for how AI will transform the work of developers in the near future.
Tune in to discover the groundbreaking advancements and practical implications of AI technology in coding and beyond.
I hope you enjoy the conversation and if you do, please subscribe!
--------------------------------------------------------------------------------------------------------------------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com -
Jason Liu is a true Renaissance Man in the world of AI. He began his career working on traditional ML recommender systems at tech giants like Meta and Stitch Fix and quickly pivoted to LLM app development after ChatGPT's release in late 2022. As the creator of Instructor, a Python library that structures LLM outputs for RAG applications, Jason has made significant contributions to the AI community. Today, Jason is a sought-after speaker, course creator, and Fortune 500 advisor.
In this episode, we cut through the AI hype to explore effective strategies for building valuable AI products and discuss the future of AI across industries.
Chapters:
00:00 - Introduction and Background
08:55 - The Role of Iterative Development and Metrics
10:43 - The Importance of Hyperparameters and Experimentation
18:22 - Introducing Instructor: Ensuring Structured Outputs
20:26 - Use Cases for Instructor: Reports, Memos, and More
28:13 - Automating Research, Due Diligence, and Decision-Making
31:12 - Challenges and Limitations of Language Models
32:50 - Aligning Evaluation Metrics with Business Outcomes
35:09 - Improving Recommendation Systems and Search Algorithms
46:05 - The Future of AI and the Role of Engineers and Product Leaders
51:45 - The Raptor Paper: Organizing and Summarizing Text Chunks
I hope you enjoy the conversation and if you do, please subscribe!
--------------------------------------------------------------------------------------------------------------------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com -
If you need to understand the future trajectory of AI, Logan Kilpatrick can help you do just that, having seen the frontier at both OpenAI and Google.
Logan led developer relations at OpenAI before leading product on Google AI Studio. He's been closer than anyone to developers building with LLMs and has seen behind the curtain at two frontier labs.
Logan and I talked about:
🔸 What it was like joining OpenAI the day ChatGPT hit 1 million users
🔸 What you might expect from GPT-5
🔸 Google's latest innovations and the battle with OpenAI
🔸 How can you stay ahead and achieve real ROI
🔸 Logan's insights into the form factor of AI and what will replace chatbots
Chapters:
00:00 - Introduction
01:50 - OpenAI and the Release of ChatGPT
07:43 - Characteristics of Successful AI Products and Teams
10:00 - The Rate of Change in AI
12:22 - The Future of AI and the Role of Systems
13:47 - ROI in AI and Challenges with Cost
18:07 - Advice for Builders and the Potential of Fine-Tuning
20:52 - The Role of Prompt Engineering in AI Development
25:27 - The Current State of Gemini
34:07 - Future Form Factors of AI
39:34 - Challenges and Opportunities in Building AI Startups
I hope you enjoy the conversation and if you do, please subscribe!
--------------------------------------------------------------------------------------------------------------------------------------------------
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to humanloop.com -
I'm excited to share this conversation with Max Rumpf, the founder of Sid.AI. I wanted to speak to Max because retrieval-augmented generation (RAG) has become core to building AI applications, and he knows more about RAG than anyone I know.
We get deep into the challenges of building RAG systems, and the episode is full of technical detail and practical insights.
We cover:
00:00 - Introduction to Max Rumpf and SID.ai
03:39 - How SID.ai's RAG approach differs from basic tutorials
07:30 - Challenges of document processing and chunking strategies
13:07 - Retrieval techniques and hybrid search approaches
15:06 - Discussion on knowledge graphs and their limitations
20:58 - Reranking in RAG systems and performance improvements
32:14 - Impact of longer context windows on RAG systems
35:10 - The future of RAG and information retrieval
39:47 - Recent research papers on AI and hallucination detection
42:04 - Value-augmented sampling for language model alignment
43:11 - Future trends and investment opportunities in AI
43:50 - SEO optimization for LLMs and its potential as a business
45:20 - Closing thoughts and wrap-up
I hope you enjoy the conversation and if you do, please subscribe!
-
In this episode, I had the pleasure of speaking with Wade Foster, the founder and CEO of Zapier. We discussed Zapier's journey with AI, from their early experiments to the company-wide AI hackathon they held in March. Wade shared insights on how they prioritize AI projects, the challenges they've faced, and the opportunities they see in the AI space. We also talked about the future of AI and how it might impact the way we work.
-
In this episode, I chatted with Shawn Wang about his upcoming AI engineering conference and what an AI engineer really is. It's been a year since he penned the viral essay "The Rise of the AI Engineer," and we discuss whether this new role will endure, the makeup of the optimal AI team, and trends in machine learning.
The Rise of the AI Engineer Blog Post: https://www.latent.space/p/ai-engineer
Chapters
00:00 - Introduction and background on Shawn Wang (Swyx)
03:45 - Reflecting on the "Rise of the AI Engineer" essay
07:30 - Skills and characteristics of AI Engineers
12:15 - Team composition for AI products
16:30 - Vertical vs. horizontal AI startups
23:00 - Advice for AI product creators and leaders
28:15 - Tools and buying vs. building for AI products
33:30 - Key trends in AI research and development
41:00 - Closing thoughts and information on the AI Engineer World Fair Summit
Humanloop is an Integrated Development Environment for Large Language Models. It enables product teams to develop LLM-based applications that are reliable and scalable. To find out more go to https://hubs.ly/Q02yV72D0