Episodes

  • Rosa Lin is the founder of Tolstoy [www.tolstoy.ai], which specializes in extracting data from documents. As I learned, this is a much tougher problem than traditional OCR! It requires a combination of deep learning and classic CV methods. Rosa also talks about her fascinating background as a journalist and her experience going through Y Combinator.

    For more about this podcast, visit www.yaoshiang.com/podcast.html.

    For the video version including visual examples of Tolstoy's work, visit https://www.youtube.com/watch?v=QtHEXvcGGRs&t=9s.

    0:26: The problems Tolstoy solves: extracting data from documents like emails, news articles, forms, and handwritten notes and then running NLP algorithms to classify and summarize.

    02:54: Typical customers: tech startups, news organizations, utilities, energy companies, legal firms, and educational institutions.

    05:05: First walk-through of a use case: Digitizing articles for The Wall Street Journal (with images showing why off-the-shelf OCR failed).

    07:19: Specifics of why OCR fails: multiple articles on a single page, columns, images, headings, and handwriting.

    09:18: Training a custom model to deal with columns, with visuals showing how Tolstoy works better than Google Cloud Vision.

    11:30: A classic computer vision algorithm for identifying paragraphs.
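
    The episode doesn’t name the exact algorithm, but a hedged sketch of a typical classic-CV recipe for this kind of layout analysis is below (OpenCV; the file name, kernel size, and area threshold are placeholders, not Tolstoy’s pipeline):

      # Rough classic-CV sketch for finding text blocks/paragraphs on a page.
      # Not Tolstoy's actual pipeline; thresholds and file name are placeholders.
      import cv2

      image = cv2.imread("page.png", cv2.IMREAD_GRAYSCALE)

      # Binarize so ink is white on black, which morphology expects.
      _, binary = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

      # Dilate so nearby characters and lines merge into solid paragraph blobs.
      kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (15, 9))
      blobs = cv2.dilate(binary, kernel, iterations=2)

      # Each connected blob becomes a candidate paragraph bounding box.
      contours, _ = cv2.findContours(blobs, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
      for contour in contours:
          x, y, w, h = cv2.boundingRect(contour)
          if w * h > 500:                      # ignore tiny specks
              print("paragraph candidate:", (x, y, w, h))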

    12:30: Transfer learning with modern Convolutional Neural Networks to identify images vs. text.
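
    And a minimal transfer-learning sketch of the image-vs-text idea, assuming a generic pretrained backbone rather than Tolstoy’s actual model (the regions/ folder layout is hypothetical):

      # Hedged transfer-learning sketch: pretrained CNN reused to classify page
      # regions as "image" vs. "text". The data folder layout is hypothetical.
      import torch
      import torch.nn as nn
      from torchvision import datasets, models, transforms

      # Pretrained ImageNet backbone with its weights frozen.
      backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
      for param in backbone.parameters():
          param.requires_grad = False
      backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # new trainable head

      transform = transforms.Compose([
          transforms.Resize((224, 224)),
          transforms.ToTensor(),
      ])
      # Expects regions/image/*.png and regions/text/*.png (placeholder paths).
      dataset = datasets.ImageFolder("regions", transform=transform)
      loader = torch.utils.data.DataLoader(dataset, batch_size=16, shuffle=True)

      optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
      criterion = nn.CrossEntropyLoss()
      for crops, labels in loader:                          # one pass for illustration
          optimizer.zero_grad()
          loss = criterion(backbone(crops), labels)
          loss.backward()
          optimizer.step()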

    13:38: Second walk-through of a use case: a classification task for a utility company to help find lead pipes.

    15:20: Can you spot the handwritten word “lead”?

    17:50: Tips for building products around inevitably imprecise ML models.

    19:37: Rosa’s personal journey from biology and journalism to entrepreneurship and ML.

    22:49: Seeing the promise of AI in 2015 while at the World Bank and starting an AI hobbyist club.

    26:25: How training in journalism translated to the skills required for entrepreneurship.

    28:40: Rosa’s experience with Y Combinator (YC W17).

  • A true “aha” conversation! Learn how deep learning techniques from natural language processing (NLP) are applied to drug discovery, specifically protein-protein interactions. Includes a quick-and-dirty primer on just enough biology to understand the training data A-Alpha Bio uses for its ML models.

    For more episodes, visit https://yaoshiang.com/podcast.html.

    Show Notes:

    0:37: The basics of synthetic biology for machine learning practitioners.

    0:50: What are proteins, and why do they matter?

    1:50: A protein is a string drawn from an alphabet of 20 amino acids… which means it starts looking like a Natural Language Processing problem.

    2:35: DeepMind’s AlphaFold and Meta FAIR’s ESMFold: taking a string of amino acids as input and predicting the 3D structure of the protein.

    6:23: Where AlphaFold got its training data: the Protein Data Bank.

    8:07: A-Alpha Bio’s product: AlphaSeq.

    10:45: The source of the name “A-Alpha Bio”: yeast mating types (a and alpha).

    11:36: Applications of synthetic biology: pharmaceuticals and agriculture.

    15:00: Applying ML to predict protein-protein interactions.

    20:30: !!! The actual ML techniques applied: treating proteins as strings and applying NLP architectures (RNNs, LSTMs, Attention, and Transformers).
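
    A toy sketch of that framing, not A-Alpha Bio’s actual architecture (the sizes, sequences, and the LSTM choice are assumptions): tokenize each amino-acid letter and let a standard sequence model score a protein pair for binding.

      # Hypothetical sketch: proteins as strings fed to an NLP-style model.
      # Not A-Alpha Bio's actual architecture; sizes and sequences are made up.
      import torch
      import torch.nn as nn

      AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"                      # the 20-letter alphabet
      TOKEN = {aa: i + 1 for i, aa in enumerate(AMINO_ACIDS)}    # 0 is padding

      def encode(seq: str, max_len: int = 64) -> torch.Tensor:
          """Map an amino-acid string to a fixed-length tensor of token ids."""
          ids = [TOKEN[aa] for aa in seq[:max_len]]
          ids += [0] * (max_len - len(ids))
          return torch.tensor(ids)

      class InteractionModel(nn.Module):
          """Embed two protein sequences with a shared LSTM and score binding."""
          def __init__(self, dim: int = 32):
              super().__init__()
              self.embed = nn.Embedding(len(AMINO_ACIDS) + 1, dim, padding_idx=0)
              self.lstm = nn.LSTM(dim, dim, batch_first=True)
              self.head = nn.Linear(2 * dim, 1)

          def represent(self, ids: torch.Tensor) -> torch.Tensor:
              _, (h, _) = self.lstm(self.embed(ids))     # final hidden state
              return h[-1]

          def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
              pair = torch.cat([self.represent(a), self.represent(b)], dim=-1)
              return torch.sigmoid(self.head(pair))      # predicted binding probability

      model = InteractionModel()
      a = encode("MKTAYIAKQR").unsqueeze(0)               # toy sequences
      b = encode("GAVLIPFMWST").unsqueeze(0)
      print(model(a, b))                                  # untrained score in (0, 1)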

    22:50: Framing protein generation as a discrete optimization problem.
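
    A toy sketch of the discrete-optimization idea (the scoring function below is a dummy stand-in for a trained interaction model, and greedy mutation is just one possible search strategy):

      # Toy sketch of discrete optimization over sequences (not the actual method):
      # greedily mutate one position at a time, keeping changes the scorer likes.
      import random

      AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

      def score(seq: str) -> float:
          """Stand-in for a trained model that scores a candidate protein."""
          return sum(1.0 for aa in seq if aa in "ILVF") / len(seq)   # dummy objective

      def hill_climb(seq: str, steps: int = 200) -> str:
          best, best_score = seq, score(seq)
          for _ in range(steps):
              pos = random.randrange(len(best))
              candidate = best[:pos] + random.choice(AMINO_ACIDS) + best[pos + 1:]
              s = score(candidate)
              if s > best_score:                          # keep only improving mutations
                  best, best_score = candidate, s
          return best

      print(hill_climb("MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"))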

    28:30: The insights behind why applying ML would work.

    31:20: The rise of deep learning in the field of computational biology.

    32:50: Ryan’s journey into machine learning and data science.

    35:20: Advice for deep learning practitioners interested in applying ML to biology.

    Additional papers covering the topic of ML in biology:

    https://www.nature.com/articles/s41586-021-03819-2 - The AlphaFold paper.

    https://pubmed.ncbi.nlm.nih.gov/35830864/ - A broad overview of deep learning in biology.

    https://pubmed.ncbi.nlm.nih.gov/35862514/ - A paper out of the Baker lab in which the authors use deep learning to design proteins from scratch.

    https://pubmed.ncbi.nlm.nih.gov/35099535/ - From Charlotte Deane’s lab with collaborators from Roche, this paper presents a deep learning approach to rapidly and accurately model the structure of antibody CDR3 loops. One of the papers mentioned in the review above.

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9129155/ - This is recent work from A-Alpha; this paper doesn’t include any ML but does include some great examples of AlphaSeq data and how it can be applied.

    The YouTube version of this podcast is available at https://www.youtube.com/watch?v=k2OzeRQIXMs.

  • Ciro Greco has built ML systems used at many name-brand retailers. In this episode, he gives us tips on getting value out of ML at “reasonable scale” companies with NLP and information retrieval. The concept of “reasonable scale” was one he returned to, and he clearly has a very nuanced understanding of that segment and how it differs from the hyperscale tech giants. He also brings advanced ideas from NLP, like embeddings, to e-commerce personalization.

    For more episodes, visit https://yaoshiang.com/podcast.html.

    Show Notes:

    1:36: Key differences in applying ML at “reasonable scale” companies like major retailers, where you can’t just “big-data” your way out of problems, compared to the hyperscale tech giants.

    3:22: The basics of personalization: suggestions, search, recommendations, and categories.

    4:38: A non-obvious challenge: how to personalize for non-logged-in users without a profile who visit infrequently.

    9:00: Different incentives for reasonable-scale vs. hyperscale companies.

    9:44: Getting your data right: data ingestion, data practices, organizing teams around data, transforming data, and infrastructure for flexible data access, so that you can keep developers productive with finite resources.

    11:23: Learning from experience that data (replayability and replicability) is more important than modeling.

    12:58: Lessons from presenting at top-tier conferences: so many published papers come from the hyperscale companies.

    14:19: Taking session data and catalog data to create a “product to vector” embedding to personalize an experience.
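
    A rough sketch of the “product to vector” idea, assuming a word2vec-style approach over sessions (the SKUs and sessions below are invented, not Ciro’s actual data or code):

      # Hypothetical prod2vec sketch: sessions as "sentences" of product IDs.
      # Requires gensim (pip install gensim); the sessions below are made up.
      from gensim.models import Word2Vec

      sessions = [
          ["sku_1", "sku_2", "sku_3"],
          ["sku_2", "sku_3", "sku_7"],
          ["sku_1", "sku_7", "sku_9"],
          ["sku_3", "sku_9", "sku_2"],
      ]

      # Skip-gram embedding of products that co-occur within sessions.
      model = Word2Vec(sessions, vector_size=16, window=3, min_count=1, sg=1, epochs=50)

      # Products viewed in similar contexts end up close together,
      # which can drive "you may also like" suggestions.
      print(model.wv.most_similar("sku_2", topn=3))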

    19:20: Requirements on how to sell: salespeople must communicate to the “people who write the check” that data integration is a first-class citizen, not a downstream task, in order to achieve ROI.

    21:09: Dynamics of regulatory and privacy issues, and how to tackle them as an organization.

    24:10: Ciro talks about his personal journey into ML, starting with a PhD in neuroscience and linguistics.

    25:46: Early challenges in applying deep learning to NLP.

    26:22: The “a ha” moment that led to Ciro’s first startup delivering search products.

    27:55: Changes in the role of a data scientist over the past decade: from PhDs tackling problems with very little tooling to today’s abundance of tools, plus a shift towards understanding products and customers.

    For the video version of this podcast, visit https://www.youtube.com/watch?v=F3e0UPqenwo.

  • Yihua Liao is Head of Data Science at Netskope, a next-generation cybersecurity firm. Yihua talks about using both CV and NLP to create novel cybersecurity features. Yihua Liao’s PhD research was on security and machine learning, and he previously worked at Microsoft, Facebook, Uber, and his own startup.

    For more information about this podcast, visit https://yaoshiang.com/podcast.html.

    Show Notes:

    00:24 - How Netskope addresses cybersecurity.

    00:57 - Netskope’s unique approach to cybersecurity through network traffic routing.

    02:51 - The prior approach to cybersecurity: a focus on the physical perimeter and firewalls.

    03:44 - A unique application of Image Classification in cybersecurity: identifying sensitive documents like driver’s licenses so CISOs (chief information security officers) can set security rules for them.

    07:45 - Challenges of building Image Classifiers #1: High quality data.

    08:45 - Challenges of building Image Classifiers #2: Managing false positives and false negatives (precision and recall).

    09:15 - Challenges of building Image Classifiers #3: Managing latency (15 ms) for a real-time use case.

    10:38 - An application of NLP (natural language processing) in cybersecurity: classifying phishing websites.
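
    The episode doesn’t detail the model, but as a hedged illustration of NLP-based phishing classification (the page snippets and labels below are invented, not Netskope data), a simple bag-of-words baseline looks like this:

      # Toy NLP baseline for classifying phishing vs. benign pages.
      # Text samples and labels are invented; not Netskope's model.
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.linear_model import LogisticRegression
      from sklearn.pipeline import make_pipeline

      pages = [
          "verify your account password immediately or it will be suspended",
          "quarterly earnings report and shareholder letter",
          "confirm your bank login at this secure link now",
          "team offsite agenda and travel logistics",
      ]
      labels = [1, 0, 1, 0]                      # 1 = phishing, 0 = benign

      model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
      model.fit(pages, labels)

      print(model.predict(["urgent: confirm your password to keep your account"]))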

    13:46 - Optimizing LLMs (Large Language Models) through quantization and distillation.
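
    And a rough illustration of the two optimization techniques mentioned, using tiny stand-in models rather than Netskope’s actual LLMs:

      # Illustrative sketch of quantization and distillation in PyTorch.
      # The tiny models below are stand-ins, not Netskope's production models.
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      teacher = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 2))
      student = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 2))

      # 1) Post-training dynamic quantization: Linear weights become int8.
      quantized_teacher = torch.quantization.quantize_dynamic(
          teacher, {nn.Linear}, dtype=torch.qint8
      )

      # 2) Distillation step: the student matches the teacher's softened predictions.
      def distillation_loss(x: torch.Tensor, temperature: float = 2.0) -> torch.Tensor:
          with torch.no_grad():
              soft_targets = F.softmax(teacher(x) / temperature, dim=-1)
          student_log_probs = F.log_softmax(student(x) / temperature, dim=-1)
          return F.kl_div(student_log_probs, soft_targets, reduction="batchmean")

      optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
      x = torch.randn(8, 128)                    # a fake batch of feature vectors
      optimizer.zero_grad()
      loss = distillation_loss(x)
      loss.backward()
      optimizer.step()
      print(float(loss))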

    14:45 - How Yihua got into ML.

    16:10 - How ML has evolved over the past 15 years.

    Notes:

    https://www.netskope.com/

    https://www.netskope.com/blog/enhancing-security-with-ai-ml

    A video version of this episode is available at https://www.youtube.com/watch?v=F3e0UPqenwo.

  • Ted tells us about applying machine learning to the field of baseball cards! 33% of Americans have trading cards, making this a very large addressable market. Learn some tips on scrappy ways to launch an app, and how similarity search powers one of the killer features of the CollX app.
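
    As a bare-bones illustration of the similarity-search idea (the card names and embedding vectors are invented; this is not CollX’s actual system), embed each catalog card and return the nearest neighbors of a scanned card’s embedding:

      # Toy similarity search over card embeddings; data and names are made up.
      import numpy as np

      # Pretend each catalog card has already been embedded by some image model.
      catalog = {
          "card_a": np.array([0.9, 0.1, 0.0]),
          "card_b": np.array([0.1, 0.8, 0.1]),
          "card_c": np.array([0.7, 0.2, 0.1]),
      }

      def top_matches(query: np.ndarray, k: int = 2):
          """Return the k catalog cards with highest cosine similarity to the query."""
          def cosine(a, b):
              return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
          scored = sorted(catalog.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
          return [(name, cosine(query, vec)) for name, vec in scored[:k]]

      # Embedding of a photo the user just scanned (invented numbers).
      print(top_matches(np.array([0.8, 0.15, 0.05])))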

    Key Moments:

    Building an application that works around the potential errors of an ML model (15:10).

    The data and ML behind his trading card valuation model, especially when recent transactions don’t exist (18:30).

    Dealing with the latency inherent in ML and networking through the concept of “building lists” (18:25).

    Early work on product search (24:00).

    Working with bad training data and adding a “wizard behind the curtains” to deliver value while labeling data (26:18).

    More UX techniques to reduce perceived latency (28:00).

    Helping users understand that ML models are not 100% accurate (30:45).

    Advice for entrepreneurs trying to launch an app (35:20).

    A video version of this episode with visuals is available at https://www.youtube.com/watch?v=RX9xIYnn2v4.

    To learn more about this podcast, visit https://yaoshiang.com/podcast.html.

  • In this episode, I interview my colleague Tom Rikert at Masterful AI. Tom is building the "AutoML 2.0" platform for computer vision. We talk about the product for the first 10 minutes, and then spend some time learning about his work at MIT CSAIL, which got him into robotics and computer vision, as well as his experiences selling a startup to Google and his time as a venture capitalist at Andreessen Horowitz and Nextworld Capital.

    To learn more about this podcast, visit https://yaoshiang.com/podcast.html.

    For a video version of this episode, visit https://www.youtube.com/watch?v=g-bPAPZRlBc.

  • Welcome to the Building Things with Machine Learning Podcast.

    Every episode, I’ll be interviewing someone who is building really interesting products using machine learning.

    Our focus is really on applications:

    Medical diagnostics

    Autonomous vehicles & advanced driver assistance systems (ADAS)

    Geospatial analytics

    Media and Content analysis

    Manufacturing

    Logistics

    And AEC (Architecture / Engineering / Construction)

    What you won’t get are coding tips or research papers. ML developers are definitely part of our audience, but so are product managers, marketers, and entrepreneurs - anyone who wants to see how machine learning is being used in action.

    I wanted to start this podcast because building things with machine learning is a different discipline than traditional software development.

    Some big differences:

    A developer has a lot less control than in traditional software.

    ML is also used to solve many problems in the real world, where it must be paired with physical sensors, actuators, and robotics.

    Finally, ML is never perfect - it’s always a game of probabilities - so it requires a different way of thinking than the traditional concept of software, where it’s realistic to aim to stamp out every single bug and defect. In ML, you have to build applications while accepting that they will never be 100% accurate.

    I hope you get something out of the “Building Things with Machine Learning” podcast!

    If you enjoy this podcast, please give us a rating on your podcast store. It helps others find our podcast.