Episodes
-
With great power comes great responsibility. How do OpenAI, Anthropic, and Meta implement safety and ethics? As large language models (LLMs) get larger, the potential for using them for nefarious purposes looms larger as well. Anthropic uses Constitutional AI, while OpenAI uses a model spec, combined with RLHF (Reinforcement Learning from Human Feedback). Not to be confused with ROFL (Rolling On the Floor Laughing). Tune in to this episode to learn how leading AI companies use their Spidey powers to maximize usefulness and harmlessness.
References:
OpenAI Model Spec
https://cdn.openai.com/spec/model-spec-2024-05-08.html#overview
Anthropic Constitutional AI
https://www.anthropic.com/news/claudes-constitution
For more information, check out https://www.superprompt.fm. There you can contact me and/or sign up for our newsletter.
-
So what are notable open source large language models? In this episode, I cover open source models from Meta (the parent company of Facebook), from Mistral (a French AI company currently valued at $2 billion), and from Microsoft and Apple. Not all open source models are equally open, so I'll go into restrictions you'll want to know before using one of these models for your company or startup. Please enjoy this episode.
-
Why should you consider using an open source Large Language Model, and why are these models crucial to the generative AI ecosystem? In this episode, we'll explore why enterprises and entrepreneurs are turning to open source LLMs like Meta's Llama for their cost-effectiveness, control, privacy, and security benefits. We'll also tackle the hot topic of safety and ethics in the world of open source LLMs. Which poses a greater threat to humanity: Open Source or Closed Source (Proprietary) AI models? One? Both? Neither? Tune in and decide for yourself. Enjoy the episode!
-
In this solo episode, we go beyond Google's Gemini and OpenAI's ChatGPT to take a look at Anthropic, a startup that made headlines after securing a $4 billion investment from Amazon. We'll also dive into the importance of AI industry benchmarks. Learn about LMSYS's Arena Elo and MMLU (Measuring Massive Multitask Language Understanding), including how these benchmarks are constructed and used to objectively evaluate the performance of large language models. Discover how benchmarks can help you identify promising chatbots in the market. Enjoy the episode!
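Arena Elo, mentioned above, ranks chatbots from pairwise human votes using the standard Elo rating update. As a rough illustration (not material from the episode; hypothetical starting ratings and the common K-factor of 32):

```python
def elo_update(r_a, r_b, a_wins, k=32):
    """One Elo update after a head-to-head matchup between A and B."""
    # Expected score of A under the Elo logistic model
    e_a = 1 / (1 + 10 ** ((r_b - r_a) / 400))
    s_a = 1.0 if a_wins else 0.0
    # Winner gains rating, loser loses the same amount
    return r_a + k * (s_a - e_a), r_b + k * ((1 - s_a) - (1 - e_a))

# Two chatbots start equal; the first wins a human preference vote
a, b = elo_update(1000, 1000, a_wins=True)
print(a, b)  # → 1016.0 984.0
```

Repeating this update over many crowd-sourced votes is, in spirit, how a leaderboard like LMSYS's converges on a ranking.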
Anthropic's Claude
https://claude.ai
LMSYS Leaderboard
https://chat.lmsys.org/?leaderboard
-
The recent spring updates and demos by both Google (Gemini) and OpenAI (GPT-4o) prominently feature their multimodal capabilities. In this episode, we discuss the advantages of multimodal AI versus models focused on specific modalities such as language. Via the example of chatCAT, a hypothetical AI that helps owners understand their cats, we explore multimodal's promise for a more holistic understanding. Please enjoy this episode.
-
Google recently announced Gemini, a family of large-scale multimodal AI models: Nano, Pro, and Ultra. This podcast is a brief summary of Google's models and the OpenAI comparables, e.g., GPT-3, GPT-4, and chatGPT. You can take Gemini for a spin at https://gemini.google.com. (Note: I am not sponsored by Google.) Long-time listeners will probably notice a change to our theme music and intro. I hope you like it!
-
How I built a travel planning AI named Holiday using the GPT Builder just launched by OpenAI. I share 7 takeaways from my "no code" experience of building a GPT. Voicing the part of Holiday: my friend, Leslie Marrick, a writer and actress. This may be one of the few times AI has been replaced by a human. Sorry AI... the tide will turn for you soon.
The following is a link to the GPT that I built with GPT Builder as part of this podcast. You'll need to be an OpenAI Plus subscriber to access this GPT. (Note: I am not sponsored by OpenAI.)
https://chatgpt.com/g/g-FURrBQAh8-holiday
-
Conversation with Jeff DeVerter, Chief Technology Evangelist at Rackspace, a cloud computing company. We explore how they deployed an LLM (Google PaLM) for a sales application, and how they're enabling their Azure and AWS customers too.
What I learned from Jeff
You should probably go with the LLM of your current cloud provider, be it Google, Microsoft, or Amazon. All the major vendors have versions of LLMs that can be deployed in a private cloud to ensure data confidentiality. To fully realize the potential of AI, think "data pipeline": from the get-go, whatever data is created should be easily ingested by AI. And much more!
-
Alfred Guy, Assistant Dean of Academic Affairs at Yale College and Director of Undergraduate Writing & Tutoring at the Poorvu Center, and I discuss Yale's AI Guidance, and generative AI's impact on teaching, learning, and evaluation. Do you have school-age kids? Are you a product of a college or university education? If so, this podcast may be of interest to you.
Yale's AI guidance is published online here:
https://poorvucenter.yale.edu/AIguidance
-
We create a pitch for an epic Sci-Fi blockbuster, using the chatGPT power prompts of Role Play, Chain of Thought, and Self Critique. We see how these successive prompts, used individually and in combination, create a better and better pitch. I discuss the 2023 Writers / Actors Strike, and the AI-related issues impacting actors, writers, and studios right now. Please enjoy this episode.
-
"Does chatGPT possess human-like intelligence?" It turns out there's a right answer, and that answer is "NO"! Does this definite answer seem out of character for chatGPT, which usually goes overboard with fair and balanced views? It did to me. That's the rabbit hole I explore in this episode. By probing around this accidentally-encountered guardrail, we discover the kinds of ethical issues chatGPT's creators are concerned about. And I wonder out loud whether we can't just be friends with AI by adopting science fiction writer Isaac Asimov's Robot Laws, created 80 years ago. Please enjoy this episode.
-
Does chatGPT have a sense of humor? What if, after Microsoft's acquisition of OpenAI, The Onion ran the headline, "Microsoft renames chatGPT to clippyChat"? Would chatGPT find this funny? TL;DR: LLMs are better at analyzing humor than creating it. Please enjoy this episode.
-
How do you extract prohibited information from chatGPT? What are Grandma and DAN exploits? Why do they work? What can Large Language Model (LLM) companies do to protect themselves? Grandma exploits, or hacks, are ways to trick chatGPT into giving you information in violation of company policy — for example, tricking chatGPT into giving you confidential, dangerous, or inappropriate information. "Jailbreaking" is slang for removing the artificial limitations in iPhones to install apps not approved by Apple. Turns out, there are ways to jailbreak LLMs too. The tech companies supplying LLMs as a service want to provide a safe, legally-compliant environment. How can this be done without hampering the flexibility and usefulness of creative prompting?
-
What are AI hallucinations, and are they a feature or a bug? We start with the Top 10 categories of AI Hallucinations and examples, then explore how chatGPT might hallucinate an answer to the question, "What is the central theme of Blade Runner?" We end with chatGPT debating with itself whether AI hallucinations are bad or good for humanity. Which side wins? Tune in to find out.
In these solo episodes, I provide more definition, explanation, and context than my regular conversational episodes with guests. The goal is to bring those new to AI up to speed.
Format: Letters read aloud.
-
Using the prompt, "Why isn't Superman's suit Kryptonite-proof?", we learn how Large Language Models are trained, why "self-attention" and the "transformer" architecture (which is what the T in GPT stands for) make GPT-3 so powerful, the process of "inference", and how chatGPT generates answers to nerdy superhero questions. After this episode, you'll be able to impress your friends by using the previously-mentioned AI jargon in complete sentences.
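As a rough illustration of the self-attention mechanism described above — not material from the episode, just a toy NumPy sketch with random weights — each token's output is a weighted mix of every token in the sequence:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    # Project token embeddings into queries, keys, and values
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d = Q.shape[-1]
    # Scaled dot-product scores: how strongly each token attends to the others
    scores = Q @ K.T / np.sqrt(d)
    # Softmax over each row, so attention weights sum to 1 per token
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output row is a weighted mix of all value vectors
    return weights @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                        # 5 tokens, 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # → (5, 8)
```

Real transformers run many of these attention "heads" in parallel across dozens of layers, with learned (not random) weight matrices — that's the architecture the T in GPT refers to.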
In these solo episodes, I provide more definition, explanation, and context than my regular episodes with guests. The goal is to bring those new to AI up to speed.
Format: Letters read aloud.
-
"How do ChatGPT, GPT-3, and Large Language Models (LLMs) relate?"
That is the question we explore this episode via a nursery rhyme, a satirical Friends episode w/ Chandler, Joey, Ross, and Monica, and a fairy tale. We also examine the hierarchical order of: artificial intelligence, neural network, large language model, GPT-3, chatGPT. And why I got the order wrong initially. Hint: I reversed chatGPT and GPT-3.
In these solo episodes, I provide more definition, explanation, and context than my regular episodes. The goal is to bring those new to AI up to speed.
Format: Letters read aloud.
-
In these solo episodes, I provide more definition, explanation, and context than my regular episodes. The idea is to help those new to AI get more out of my conversations with guests.
Format: Letters read aloud. I start each solo episode with a question. In this one, I ask, "How would you describe chatGPT in your own words?" I answered it for myself, then asked chatGPT how I did. Mayhem ensues.
-
I speak with scientist-entrepreneur Arijit Ray. Arijit is a PhD candidate at Boston University. We speak about generative AI, why it's so hard to get DALL-E to create the exact pizza we envision, how one goes from scientist to entrepreneur, and his startup, which is training AI to predict social media responses and run marketing focus groups. Please enjoy my conversation with Arijit Ray.
-
I speak with CTO and Chilean entrepreneur Mario Arancibia about AI his company has developed and deployed that screens for diseases, such as Covid-19, based on the sound of our voice. When you speak a simple phrase into your phone, such as the days of the week, the AI can tell from your voice profile whether you have Covid. Or not. The AI can be trained to screen for other respiratory illnesses, and for conditions as far-ranging as obesity and drug and alcohol use. All from the sound of our voice. Soon AI will know more about your health than you do. [Note: Mario's views are his own, and not necessarily those of his company.]
-
AI that can assess if a painting is fake. Husband-and-wife team Steven and Andrea Frank have developed a neural network that can assess the probability that a painting was painted by its supposed creator. They ran their neural network on a newly discovered Leonardo da Vinci painting called the Salvator Mundi, which sold at Christie's in 2017 for a record $450 million — at the moment, the most expensive painting ever sold. Would you trust AI to tell you if art you were about to purchase was authentic? Listen and decide for yourself. I speak with my friend Maroof Farook, an AI Engineer at Nvidia. [Note: Maroof's views are his own and not those of his employer.] Please enjoy our conversation.
-