Episodes
-
In this episode of Generative AI 101, we explore the example-heavy world of few-shot prompting. Imagine describing a dish you love to a chef and then handing them several sample recipes that show exactly what you mean. That's few-shot prompting: multiple examples, maximum guidance. We'll see how this technique can turn your AI into a translation whiz or a sentiment analysis expert with just a few well-chosen examples. We'll also discuss the pros and cons of this approach.
Connect with Emily Laird on LinkedIn
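To make the idea concrete, here is a minimal Python sketch of a few-shot prompt for sentiment classification; the example reviews are made up, and the commented-out complete() call is a placeholder for whichever LLM client you use, not a specific API.

```python
# Few-shot prompting: show the model several labeled examples,
# then ask it to continue the pattern on a new input.

EXAMPLES = [
    ("The checkout process was painless and fast.", "positive"),
    ("I waited forty minutes and nobody answered.", "negative"),
    ("The product works, though the manual is confusing.", "mixed"),
]

def build_few_shot_prompt(new_review: str) -> str:
    lines = ["Classify the sentiment of each review as positive, negative, or mixed.", ""]
    for review, label in EXAMPLES:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {new_review}")
    lines.append("Sentiment:")  # the model is expected to fill in the label
    return "\n".join(lines)

prompt = build_few_shot_prompt("Shipping was slow, but support made up for it.")
print(prompt)
# response = complete(prompt)  # placeholder for your LLM client of choice
```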
-
In this episode of Generative AI 101, we dive into the art of one-shot prompting. Imagine teaching someone a recipe with just one perfect example instead of a whole cookbook. That's one-shot prompting—minimal input, maximum efficiency. We'll explore how this technique can turn your AI into a translation whiz or a sentiment analysis expert with just one example. We'll also discuss the pros and cons of this approach, compare it with zero-shot prompting, and share vivid real-world scenarios. Tune in to discover how one-shot prompting can make your AI sharper and more effective than ever.
Connect with Emily Laird on LinkedIn
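For comparison, a one-shot version of the same pattern, here applied to translation as teased above; again a rough sketch, with the actual model call left to whatever client you prefer.

```python
# One-shot prompting: a single worked example pins down the format,
# then the model is asked to apply it to new input.

def build_one_shot_prompt(sentence: str) -> str:
    return (
        "Translate English to French.\n\n"
        "English: Where is the train station?\n"
        "French: Où est la gare ?\n\n"
        f"English: {sentence}\n"
        "French:"
    )

print(build_one_shot_prompt("I would like a coffee, please."))
# Send the string to whichever LLM client you use; the single example
# is usually enough to establish the expected output format.
```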
-
Join us on Generative AI 101 as we demystify zero-shot prompting—a technique that lets LLMs perform tasks without prior examples. Imagine whipping up a dish you've never heard of with no recipe; that's zero-shot prompting in action. We’ll explore how AI models leverage vast pre-existing knowledge to answer questions, classify sentiments, and more, all without specific training. But it's not all smooth sailing; we'll also discuss quirks like the "Reversal Curse," where LLMs struggle with reversed statements.
The Reversal Curse Paper
Connect with Emily Laird on LinkedIn
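A zero-shot prompt strips the examples away entirely, leaving only the task description; this tiny sketch just builds the string and assumes you send it to a model yourself.

```python
# Zero-shot prompting: no examples, just a clear task description.
# The model relies entirely on what it learned during pre-training.

def build_zero_shot_prompt(review: str) -> str:
    return (
        "Classify the sentiment of the following review as "
        "positive, negative, or neutral. Answer with one word.\n\n"
        f"Review: {review}\n"
        "Sentiment:"
    )

print(build_zero_shot_prompt("The battery died after two days."))
```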
-
In this episode of Generative AI 101, we continue with the art of crafting the perfect prompt for AI. Much like brewing the perfect cup of coffee, the right balance of persona, output format, context, examples, and instructions can transform a bland response into a rich, engaging answer. Tune in as Emily breaks down these five key elements, provides real-world examples, and shares tips to elevate your prompt engineering skills from average to exceptional. Connect with Emily Laird on LinkedIn
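As a rough illustration of those five elements, here is one way to assemble persona, output format, context, examples, and instructions into a single prompt; the field names and coffee-themed content are illustrative, not a canonical template.

```python
# One possible way to organize the five elements of a prompt:
# persona, output format, context, examples, and instructions.

def assemble_prompt(persona, output_format, context, examples, instructions):
    example_text = "\n".join(f"- {e}" for e in examples)
    return (
        f"{persona}\n\n"
        f"Context:\n{context}\n\n"
        f"Examples of the tone we want:\n{example_text}\n\n"
        f"Task:\n{instructions}\n\n"
        f"Output format:\n{output_format}"
    )

prompt = assemble_prompt(
    persona="You are a friendly barista who explains coffee to beginners.",
    output_format="Three short bullet points, no jargon.",
    context="The reader has never used a pour-over before.",
    examples=["Think of the grind like sugar: finer dissolves faster."],
    instructions="Explain how water temperature changes the taste of coffee.",
)
print(prompt)
```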
-
In this episode of Generative AI 101, we uncover the art of crafting perfect prompts for AI. Think of it as giving precise instructions to a master chef. We'll break down the anatomy of a prompt into three key components: instructions, context, and constraints. By comparing effective and ineffective examples, we show you how to transform vague requests into specific, actionable prompts. From enhancing customer support interactions to generating insightful data reports, you'll learn how well-structured prompts can elevate AI responses. Tune in for practical tips and interactive exercises that will make your prompts sing.
Connect with Emily Laird on LinkedIn
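A hypothetical before-and-after makes the three components easier to see: the same request written as a vague one-liner and then restructured with explicit instructions, context, and constraints.

```python
# A vague prompt versus one with explicit instructions, context, and constraints.

vague_prompt = "Write something about our sales."

structured_prompt = (
    # Instructions: what to do
    "Summarize the key trends in the attached quarterly sales figures.\n"
    # Context: what the model needs to know
    "The audience is the customer-support team, who have not seen the raw numbers.\n"
    # Constraints: the boundaries of the answer
    "Keep it under 150 words, use plain language, and end with one action item."
)

print(vague_prompt, structured_prompt, sep="\n\n")
```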
-
In this episode of Generative AI 101, we dig into the world of prompt engineering. Think of it as giving your AI precise instructions, like ordering the perfect Starbucks beverage. We’ll explore how writing effective prompts can turn an AI into a super-smart assistant, capable of generating compelling text, aiding customer support, and analyzing large amounts of data. From overcoming writer’s block to handling customer queries efficiently, prompt engineering is the unsung hero that maximizes AI performance. Tune in to discover the art and science behind guiding AI to produce innovative and useful outputs.
Credit to Simon Willison, who coined the related term "prompt injection" in 2022.
Connect with Emily Laird on LinkedIn
-
Join us as we explore the world of Meta's LLaMA in this episode of Generative AI 101. From its origins in 2013 with Facebook AI Research (FAIR), led by AI visionary Yann LeCun, to the groundbreaking release of the LLaMA models in 2023, we'll trace how Meta has pushed the boundaries of AI. Discover the evolution from the initial LLaMA leak that took the AI world by storm to the refined, ethically designed LLaMA 2, and finally, the powerful and efficient LLaMA 3 released in 2024. Tune in to understand how Meta is not just keeping pace but setting new standards in the AI landscape.
Connect with Emily Laird on LinkedIn
-
Get ready to meet Microsoft Copilot, the AI assistant that's redefining productivity. First launched as Bing Chat in February 2023, Copilot has evolved into a versatile tool embedded in Microsoft 365. Powered by OpenAI’s GPT-4 and Microsoft’s Prometheus model, Copilot excels at everything from drafting emails and analyzing data to creating stunning PowerPoint presentations. Despite its initial quirks, Copilot has become an indispensable ally in the digital workspace, thanks to Microsoft’s hefty investments and fine-tuning efforts.
Connect with Emily Laird on LinkedIn
-
Bonjour, tech-savvy wanderers! In this episode of Generative AI 101, we're diving fork-first into Mistral AI, the French powerhouse in the world of artificial intelligence. Founded by ex-Google DeepMind and Meta researchers, Mistral AI offers cutting-edge models that are shaking up the industry. From the versatile Mistral Large and Small to the sharp Mistral Embed and open-source Mistral 7B, these models are giving giants like GPT-4 a run for their money. We'll also explore their Mixtral series, the crème de la crème of AI, and the charming Le Chat chatbot. Connect with Emily Laird on LinkedIn
-
In this episode of Generative AI 101, we explore the world of Anthropic's Claude AI—a chatbot born from the minds of ex-OpenAI siblings and backed by tech giants like Google and Amazon. Picture Claude as the cool, thoughtful cousin of ChatGPT, capable of handling 200,000 tokens at once. With the latest iteration, Claude 3.5 Sonnet, this AI sets new benchmarks in graduate-level reasoning, broad knowledge, and coding proficiency. Whether you need an advanced model for automation or a speed demon for instant translation, Claude is redefining what smart, safe AI can do.
Connect with Emily Laird on LinkedIn
-
In this episode of Generative AI 101, we explore the slick, high-octane world of Google's Gemini. Think of it as the James Bond of AI—sharp, sophisticated, and always ahead of the curve. We’ll dish out the inside scoop on Gemini’s cutting-edge tech, from its inception to its rise as a superstar in the AI universe. Get ready to explore its suave capabilities, from powering chatty virtual assistants to mastering the nuances of human language and crunching mountains of data like it’s nothing. Plus, we’ll sprinkle in some juicy tidbits about Gemini’s meteoric rise, its flair for languages, and the latest bells and whistles.
Connect with Emily Laird on LinkedIn
-
In this episode of Generative AI 101, we explore the modern marvel that is ChatGPT. Discover what "GPT" stands for and how this "Generative Pre-trained Transformer" operates, processing text like a high-powered engine. Learn what ChatGPT can do, from generating human-like responses to engaging in multi-language conversations and analyzing vast text data. Tune in to explore the incredible capabilities and the global impact of ChatGPT.
Connect with Emily Laird on LinkedIn
-
In this episode of Generative AI 101, go on an insider’s tour of a large language model (LLM). Discover how each component, from the transformer architecture and positional encoding to the multi-head attention layers and feed-forward neural networks, contributes to creating intelligent, coherent text. We’ll explore tokenization and resource management techniques like mixed-precision training and model parallelism. Join us for a fascinating look at the complex, finely-tuned process that powers modern AI, turning raw text into human-like responses.
Connect with Emily Laird on LinkedIn
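As a toy illustration of one of those components, here is scaled dot-product attention written out with NumPy; the dimensions are tiny and made up, and real multi-head attention adds learned projections, masking, and many heads on top of this.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Toy single-head attention: weights = softmax(QK^T / sqrt(d_k)), output = weights @ V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                            # similarity of each query to each key
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V                                         # weighted mix of the values

rng = np.random.default_rng(0)
seq_len, d_k = 4, 8                                            # 4 tokens, 8-dimensional vectors (made-up sizes)
Q, K, V = (rng.normal(size=(seq_len, d_k)) for _ in range(3))
print(scaled_dot_product_attention(Q, K, V).shape)             # (4, 8): one output vector per token
```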
-
In this episode of Generative AI 101, we explore the intricate process of training Large Language Models (LLMs). Imagine training a brilliant student with the entire internet as their textbook—books, academic papers, Wikipedia, social media posts, and code repositories. We’ll cover the stages of data collection, cleaning, and tokenization. Learn how transformers, with their self-attention mechanisms, help these models understand and generate coherent text. Discover the training process using powerful GPUs or TPUs and techniques like distributed and mixed precision training. We'll also address the challenges, including the need for computational resources and ensuring data diversity. Finally, understand how fine-tuning these models for specific tasks makes them even more capable.
Connect with Emily Laird on LinkedIn
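A heavily simplified sketch of a single training step in PyTorch, assuming a toy model and random token IDs in place of a real tokenized corpus; it shows the next-token objective and the mixed-precision machinery in miniature, not an actual LLM training loop (which adds distributed training, schedules, checkpointing, and far larger models).

```python
import torch
from torch import nn

vocab_size, d_model, seq_len, batch = 1000, 64, 16, 8   # toy sizes, nowhere near real LLM scale
model = nn.Sequential(nn.Embedding(vocab_size, d_model), nn.Linear(d_model, vocab_size))
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

use_amp = torch.cuda.is_available()                      # mixed precision is only useful on a GPU
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

tokens = torch.randint(0, vocab_size, (batch, seq_len))  # stand-in for cleaned, tokenized text
inputs, targets = tokens[:, :-1], tokens[:, 1:]          # objective: predict the next token

optimizer.zero_grad()
with torch.autocast(device_type="cuda" if use_amp else "cpu", enabled=use_amp):
    logits = model(inputs)                               # (batch, seq_len - 1, vocab_size)
    loss = loss_fn(logits.reshape(-1, vocab_size), targets.reshape(-1))
scaler.scale(loss).backward()                            # scale gradients to avoid fp16 underflow
scaler.step(optimizer)
scaler.update()
print(float(loss))
```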
-
In this episode of Generative AI 101, we trace the evolution of Large Language Models (LLMs) from their early, simplistic beginnings to the sophisticated powerhouses they are today. Starting with basic models that struggled with coherence, we'll see how the introduction of transformers in 2017 revolutionized the field. Discover how models like GPT-2 and GPT-3 brought human-like text generation to new heights, and learn about the advancements in GPT-4, which offers even greater accuracy and versatility. Join us to understand the incredible journey of LLMs, from data training to fine-tuning, and how they've transformed our digital interactions.
Connect with Emily Laird on LinkedIn
-
In this episode of Generative AI 101, we explore Large Language Models (LLMs) and their significance. Imagine chatting with an AI that feels almost human—you're likely interacting with an LLM. These models, trained on massive datasets, understand and generate text with impressive accuracy. With billions of parameters, they handle a wide range of tasks from chatbots and virtual assistants to sentiment analysis and document summarization.
Connect with Emily Laird on LinkedIn
-
In this episode of Generative AI 101, we explore the core techniques and methods in Natural Language Processing (NLP). Starting with rule-based approaches that rely on handcrafted rules, we move to statistical models that learn patterns from vast amounts of data. We'll explain n-gram models and their limitations before diving into the revolution brought by machine learning, where algorithms like Support Vector Machines (SVMs) and decision trees learn from annotated datasets. Finally, we arrive at deep learning and neural networks, particularly Transformers, which enable advanced models like BERT and GPT-3 to understand context and generate human-like text.
Connect with Emily Laird on LinkedIn
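Before the jump to neural methods, a tiny bigram model over a made-up corpus shows the statistical idea, and its one-word-of-context limitation, in a few lines of Python.

```python
from collections import defaultdict, Counter

# A tiny bigram model: estimate P(next word | current word) by counting
# adjacent word pairs. Its weakness is obvious: only one word of context.

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigram_counts = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    bigram_counts[current][nxt] += 1

def next_word_probs(word):
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))   # {'cat': 0.25, 'mat': 0.25, 'dog': 0.25, 'rug': 0.25}
print(next_word_probs("sat"))   # {'on': 1.0}
```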
-
In this episode of Generative AI 101, we break down the fundamental concepts of Natural Language Processing (NLP). Imagine trying to read a book that's one long, unbroken string of text—impossible, right? That’s where tokenization comes in, breaking text into manageable chunks. We’ll also cover stemming and lemmatization, techniques for reducing words to their root forms, and explain the importance of stop words—the linguistic background noise. Finally, we’ll explore Named Entity Recognition (NER), which identifies key names and places in text. These basics form the foundation of NLP, making our interactions with technology smoother and more intuitive.
Connect with Emily Laird on LinkedIn
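A short sketch of those steps using NLTK, one common choice among several; it assumes the relevant NLTK data packages (tokenizer, stop words, WordNet, POS tagger, and NER models) have already been downloaded, and the sample sentence is made up.

```python
import nltk
from nltk.corpus import stopwords
from nltk.stem import PorterStemmer, WordNetLemmatizer

# One-time setup: use nltk.download() to fetch the tokenizer, stop-word,
# WordNet, POS-tagger, and named-entity-chunker data packages.

text = "Emily Laird was explaining tokenization to listeners in New York."

tokens = nltk.word_tokenize(text)                       # break the text into word-level chunks
stemmer, lemmatizer = PorterStemmer(), WordNetLemmatizer()
stems = [stemmer.stem(t) for t in tokens]               # crude root forms: "explaining" -> "explain"
lemmas = [lemmatizer.lemmatize(t) for t in tokens]      # dictionary-based root forms
stop_set = set(stopwords.words("english"))
content = [t for t in tokens if t.lower() not in stop_set]  # drop the linguistic background noise

entities = nltk.ne_chunk(nltk.pos_tag(tokens))          # Named Entity Recognition over POS-tagged tokens
print(stems, lemmas, content, sep="\n")
print([subtree for subtree in entities if hasattr(subtree, "label")])  # e.g. PERSON, GPE subtrees
```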
-
In this episode of Generative AI 101, we journey through the captivating history of Natural Language Processing (NLP), from Alan Turing's pioneering question "Can machines think?" to the game-changing advancements of modern AI. Discover how NLP evolved from early rule-based systems and statistical methods to the revolutionary introduction of machine learning, deep learning, and OpenAI's GPT-3. Tune in to understand how these milestones have transformed machines' ability to understand and generate human language, making our tech experiences smoother and more intuitive.
Connect with Emily Laird on LinkedIn
-
Let's explore Natural Language Processing (NLP). Picture this: you’re chatting with your phone, asking it to find the nearest pizza joint, and it not only understands you but also provides a list of places with mouth-watering photos. That’s NLP in action. We'll explain how NLP allows machines to interpret and respond to human language naturally, like teaching a robot to be a linguist. Discover its key applications, from virtual assistants and machine translation to sentiment analysis and healthcare. Tune in to learn why NLP is the magic making our interactions with technology smoother and more intuitive.
Connect with Emily Laird on LinkedIn
-