Episodes
-
Tom and Nate catch up on the happenings in AI. Of course, we're focused on the biggest awards available to us as esteemed scientists (or something close enough) -- the Nobel Prizes! What does it mean for the trajectory of AI that Hinton and Hassabis now carry added scientific weight? Honestly, it feels like a sinking ship.
Some links:
* Schmidhuber tweet: https://x.com/SchmidhuberAI/status/1844022724328394780
* Hinton "I'm proud my student fired Sam": https://x.com/Grady_Booch/status/1844145422824243290
00:00 Introduction
04:43 Criticism of AI-related Nobel Prize awards
09:06 Geoffrey Hinton's comments on winning the Nobel Prize
18:14 Debate on who should be credited for current AI advancements
25:53 Changes in the nature of scientific research and recognition
34:44 Changes in AI safety culture and company dynamics
37:27 Discussion on AI scaling and its impact on the industry
42:21 Reflection on the ongoing AI hype cycle
Retort on YouTube: https://www.youtube.com/@TheRetortAIPodcast
Retort on Twitter: https://x.com/retortai
Retort website: https://retortai.com/
Retort email: mail at retortai dot com
-
Tom and Nate catch up on recent events (before the OpenAI o1 release) and opportunities in transparency/policy. We recap the legendary scam of Matt from the IT department, why disclosing the outcomes of a process is not enough, and more. This is a great episode on understanding why the process a technology was birthed from is just as important as the outcome!
Some links:
* Nathan's post on Model Specs for regulation https://www.interconnects.ai/p/a-post-training-approach-to-ai-regulation
* Nathan's post on inference spend https://www.interconnects.ai/p/openai-strawberry-and-inference-scaling-laws
Send your questions to mail at retortai dot com
-
Tom and Nate catch up on core themes of AI after a somewhat unintended summer break. We discuss the moral groundings and philosophy of what we're building, our travels, The Anxious Generation, AGI obsessions, an update on AI Ethics vs. AI Safety, and plenty more in between.
As always, contact us at mail at retortai dot com
Some links we mention in the episode:
* The Emotional Dog and its Rational Tail https://motherjones.com/wp-content/uploads/emotional_dog_and_rational_tail.pdf
* The Anxious Generation https://www.amazon.com/Anxious-Generation-Rewiring-Childhood-Epidemic/dp/0593655036
* Shadow Lake Lodge https://shadowlakelodge.com/
* Recent Dwarkesh Podcast https://www.dwarkeshpatel.com/p/joe-carlsmith
-
Tom and Nate catch up on the rapidly evolving (and political) space of AI regulation. We cover CA SB 1047, recent policing of data scraping, presidential appointees, antitrust intention vs. implementation, FLOP thresholds, and everything else touching the future of large ML models.
Nate's internet cut out, so this episode ends a little abruptly. Reach out with any questions to mail at retortai.com
Some links:
- Night Comes to the Cumberlands https://en.wikipedia.org/wiki/Night_Comes_to_the_Cumberlands
- Hillbilly Elegy https://en.wikipedia.org/wiki/Hillbilly_Elegy
- Wired piece on data https://www.wired.com/story/youtube-training-data-apple-nvidia-anthropic/
- Nate's recent piece on AI regulation https://www.interconnects.ai/p/sb-1047-and-open-weights
00:00 Intro
01:19 Training Data and the Media
03:43 Norms, Power, and the Limits of Regulation
08:52 OpenAI's Business Model
12:33 Antitrust: The Essential Tool for Governing AI
17:11 Users as Afterthoughts
20:07 Depoliticizing AI
26:14 "Breaking Bad" & the AI Parallel
28:11 The "Little Tech" Agenda
31:03 Reframing the Narrative of Big Tech
32:20 "The Lean Startup" & AI's Uncertainty
-
Tom and Nate revisit one of their old ideas -- AI through the lens of public health infrastructure, and especially alignment. Sorry about Tom's glitchy audio; I figured out after the fact that he was talking into the microphone at the wrong angle. Regardless, here are some links for this week:
- Data foundry for AI https://scale.com/blog/scale-ai-series-f
- Information piece on Scale AI ($) https://www.theinformation.com/articles/why-a-14-billion-startup-is-now-hiring-phds-to-train-ai-from-their-living-rooms?shared=168f685a864ca709
- ChatGPT compounding math: https://chatgpt.com/share/2c19a357-acb2-441d-8203-946b74ce785c
Contact us at mail at retortai dot com
00:00 Intro
00:39 Chicago's Tech Scene and the "The Bear"
01:22 AI and Public Health: A New Framework
08:17 Lessons for AI from Sanitation Infrastructure
12:58 The Mental Health Impact of Generative AI
23:28 Aligning AI with Diverse Societal Values
27:06 Power Dynamics in AI's Development
33:02 The Need for a Neutral AI Research Body (NAIRR)
36:57 New Regulations for a New Era of AI
41:05 Outro: Join the Conversation
-
Tom and Nate caught up last week (sorry for the editing delay) on the two big views of the AI future: Apple Intelligence and Situational Awareness (nationalistic AI doom prevention). One of our best episodes; here are the links:
* The Kekulé Problem https://en.wikipedia.org/wiki/The_Kekul%C3%A9_Problem
* Truth and Method https://en.wikipedia.org/wiki/Truth_and_Method
* Situational Awareness https://situational-awareness.ai/
00:00 A Hypothetical Life: From Germany to AGI
01:20 Leopold Aschenbrenner: Situational Awareness and Extrapolation
02:01 The Retort: Apple vs. Doomsday AI
03:40 Credentials and Social Choice Theory
05:14 Dissecting "Situational Awareness": Hype vs. Reality
07:16 The Limits of Language Models: Are They Really Intelligent?
11:04 Apple's Vision: AI for Consumers, Not Conquerors
13:53 Silicon Valley Myopia and the Geopolitics of AI
18:25 Beyond Benchmarks: The Scientist vs. The Engineer
22:04 What is Intelligence? The Narrowness of Human Fixation
24:32 A Growing Disrespect for Language?
27:40 The Power of Talking to Language Models
32:50 Language: Representation or Revelation?
38:54 The Future of Meaning: Will AI Obliterate Art?
45:32 A Vision for AI as Public Infrastructure
-
Tom and Nate catch up on the many recent AI policy happenings: California's "anti open source" SB 1047 bill, the Senate AI roadmap, Google's search snafu, OpenAI's normal nonsense, and reader feedback! A bit of a mailbag. Enjoy.
00:00 Murky waters in AI policy
00:33 The Senate AI Roadmap
05:14 The Executive Branch Takes the Lead
08:33 California's Senate AI Bill
22:22 OpenAI's Two Audiences
28:53 The Problem with OpenAI Model Spec
39:50 A New World of AI Regulation
A bunch of links...
Data and society whitepaper: https://static1.squarespace.com/static/66465fcd83d1881b974fe099/t/664b866c9524f174acd7931c/1716225644575/24.05.18+-+AI+Shadow+Report+V4.pdf
https://senateshadowreport.com/
California bill:
https://www.hyperdimensional.co/p/california-senate-passes-sb-1047
https://legiscan.com/CA/text/SB1047/id/2999979
Data walls:
https://www.interconnects.ai/p/the-data-wall
Interconnects Merch:
https://interconnects.myshopify.com/
-
Tom and Nate discuss two major OpenAI happenings from the last week. The first is the popular one, the chat assistant, and what it reveals about OpenAI's worldview. We pair this with a discussion of OpenAI's new Model Spec, which details their RLHF goals: https://cdn.openai.com/spec/model-spec-2024-05-08.html
This is a monumental week for AI. The product transition is complete; we can't just be researchers anymore.
00:00 Guess the Donkey Kong Character
00:50 OpenAI's New AI Girlfriend
07:08 OpenAI's Business Model and Responsible AI
08:45 GPT-2 Chatbot Thing and OpenAI's Weirdness
12:48 OpenAI and the Mystery Box
19:10 The Blurring Boundaries of Intimacy and Technology
22:05 Rousseau's Discourse on Inequality and the Impact of Technology
26:16 OpenAI's Model Spec and Its Objectives
30:10 The Unintelligibility of "Benefiting Humanity"
37:01 The Chain of Command and the Paradox of AI Love
45:46 The Form and Content of OpenAI's Model Spec
48:51 The Future of AI and Societal Disruptions
-
Tom and Nate discuss the shifting power landscape in AI. They try to discern what is special about Silicon Valley's grasp on the ecosystem and what other types of power (e.g. those in New York and Washington DC) will do to mobilize their influence.
Here's the one Tweet we referenced on the FAccT community: https://twitter.com/KLdivergence/status/1653843497932267520
00:00: Introduction and Cryptozoologists
02:00: DC and the National AI Research Resource (NAIRR)
05:34: The Three Legs of the AI World: Silicon Valley, New York, and DC
11:00: The AI Safety vs. Ethics Debate
13:42: The Rise of the Third Entity: The Government's Role in AI
19:42: New York's Influence and the Power of Narrative
29:36: Silicon Valley's Insularity and the Need for Regulation
36:50: The Amazon Antitrust Paradox and the Shifting Landscape
48:20: The Energy Conundrum and the Need for Policy Solutions
56:34: Conclusion: Finding Common Ground and Building a Better Future for AI
-
Tom and Nate cover the state of the industry after Llama 3. Is Zuck the best storyteller in AI? Is he the best CEO? Are CEOs doing anything other than buying compute? We cover what it means to be successful at the highest level this week.
Links:
Dwarkesh interview with Zuck https://www.dwarkeshpatel.com/p/mark-zuckerberg
Capuchin monkey https://en.wikipedia.org/wiki/Capuchin_monkey
00:00 Introductions & advice from a wolf
00:45 Llama 3
07:15 Resources and investment required for large language models
14:10 What it means to be a leader in the rapidly evolving AI landscape
22:07 How much of AI progress is driven by stories vs resources
29:41 Critiquing the concept of Artificial General Intelligence (AGI)
38:10 Misappropriation of the term AGI by tech leaders
42:09 The future of open models and AI development
-
Tom and Nate catch up after a few weeks off the pod. We discuss what it means for open models to keep getting bigger and arriving faster. In some ways, this disillusionment is a great way to zoom out to the big picture. These models are coming. These models are getting cheaper. We need to think about risks and infrastructure more than open vs. closed.
00:00 Introduction
01:16 Recent developments in open model releases
04:21 Tom's experience viewing the total solar eclipse
09:38 The Three-Body Problem book and Netflix
14:06 The Gartner Hype Cycle
22:51 Infrastructure constraints on scaling AI
28:47 Metaphors and narratives around AI risk
34:43 Rethinking AI risk as public health problems
37:37 The "one-way door" nature of releasing open model weights
44:04 The relationship between the AI ecosystem and the models
48:24 Wrapping up the discussion in the "trough of disillusionment"
We've got some links for you again:
- Gartner hype cycle https://en.wikipedia.org/wiki/Gartner_hype_cycle
- MSFT Supercomputer https://www.theinformation.com/articles/microsoft-and-openai-plot-100-billion-stargate-ai-supercomputer
- Safety is about systems https://www.aisnakeoil.com/p/ai-safety-is-not-a-model-property
- Earth day history https://www.earthday.org/history/
- For our loyal listeners http://tudorsbiscuitworld.com/
-
Tom and Nate catch up on the ridiculousness of Nvidia GTC, the lack of trust in AI, and some important taxonomies and politics around governing AI. Safety institutes, reward model benchmarks, Nathan's bad joke delivery, and all the normal good stuff in this episode! Yes, we're also sick of the Taylor Swift jokes, but they get the clicks.
The Taylor moment: https://twitter.com/DrJimFan/status/1769817948930072930
00:00 Intros and discussion on NVIDIA's influence in AI and the Bay Area
09:08 Mustafa Suleyman's new role and discussion on AI safety
11:31 The shift from performance to trust in AI evaluation
17:31 The role of government agencies in AI policy and regulation
24:07 The role of accreditation in establishing legitimacy and trust
32:11 Grok's open source release and its impact on the AI community
39:34 Responsibility and accountability in AI and social media platforms
-
Tom and Nate sit down to discuss Claude 3 and some updates on what it means to be open. Not surprisingly, we get into debating some different views. We cover Dune 2's impact on AI and have a brief giveaway at the end. Cheers!
More at retortai.com. Contact us at mail at retortai dot com.
Some topics:
- The pace of progress in AI and whether it feels meaningful or like "progress fatigue" to different groups
- The role of hype and "vibes" in driving interest and investment in new AI models
- Whether the value being created by large language models is actually just being concentrated in a few big tech companies
- The debate around whether open source AI is feasible given the massive compute requirements
- The limitations of "open letters" and events with Chatham House rules as forms of politics and accountability around AI
- The analogy between the AI arms race and historical arms races like the dreadnought naval arms race
- The role of narratives, pop culture, and "priesthoods" in shaping public understanding of AI
Chapters & transcript partially created with https://github.com/FanaHOVA/smol-podcaster.
00:00 Introduction and the spirit of open source
04:32 Historical parallels of technology arms races
10:26 The practical use of language models and their impact on society
22:21 The role and potential of open source in AI development
28:05 The challenges of achieving coordination and scale in open AI development
34:18 Pop culture's influence on the AI conversation, specifically through "Dune"
-
This week Tom and Nate cover all the big topics through the big-picture lens: Sora, Gemini 1.5's context length, Gemini's bias backlash, the Gemma open models. It was a busy week in AI. We come to the conclusion that we can no longer trust many of these big companies to do much. We are the gladiators playing to the crowd of AI. This was a great one; I'm proud of one of Tom's all-time best jokes.
Thanks for listening, and reach out with any questions.
-
A metaphor episode! We are trying to figure out how much the Waymo incident is or is not about AI. We bring back our Berkeley roots and talk about Bay Area traditions around distributed technology. Scooters and robots are not safe in this episode, sadly. Here's the link to the Verge piece Tom read from: https://www.theverge.com/2024/2/11/24069251/waymo-driverless-taxi-fire-vandalized-video-san-francisco-china-town
-
... and you should too. We catch up this week on all things Apple Vision Pro and how these devices will intersect with AI. It really turned more into a commentary on the future of society, and how various technologies may or may not tap into our subconscious.
The only link we've got for you is DeepDream: https://en.wikipedia.org/wiki/DeepDream
-
Wow, one of our favorites. This week Tom and Nate have a lot to cover. We cover AI2's new OPEN large language models (OLMo) and all that means, the alchemical model merging craze powering waifu factories, model weight leaks from Mistral, the calling card for our loyal fans, and more topics.
We have a lot of links you'll enjoy as you go through it:
The Mistral leak: https://huggingface.co/miqudev/miqu-1-70b/discussions/10
Writing on model merging: https://www.interconnects.ai/p/model-merging
Writing on open LLMs: https://www.interconnects.ai/p/olmo
The original Mechanical Turk: https://en.wikipedia.org/wiki/Mechanical_Turk
This Waifu Does Not Exist: https://thisanimedoesnotexist.ai/
The Warriors film: https://www.youtube.com/watch?v=--gdB-nnQkU
The Waifu Research Department: https://huggingface.co/waifu-research-department
-
We recovered this episode from the depths of lost podcast recordings! We carry on, and Tom tells the story of his wonderful sociology-turned-AI Ph.D. at Berkeley. This comes with plenty of great commentary on the current state of the field and striving for impact. We cover the riverbank of Vienna, the heart of the sperm whale, and deep life lessons.
-
This week Tom and Nate catch up on two everlasting themes of ML: compute and evaluation. We chat about AI2, Zuck's GPUs, evaluation as procurement, NIST comments, neglecting reward models, and plenty of other topics. We're on the tracks for 2024 and waiting for some things to happen. Links for what we covered this week:
Zuck interview on The Verge
Saturday Night Live's George Washington during the Revolutionary War
NIST RFI
Sam Altman's uncomfortable proposition
-
We're excited to bring you something special today! Our first crossover episode brings some fresh energy to the podcast. Tom and Nate are joined by Jordan Schneider of ChinaTalk (a popular Substack-based publication covering all things China: https://www.chinatalk.media/). We cover lots of great ground here, from the economics of Hirschman to the competition from France. All good Patriots should listen to this episode, as we give a real assessment of where the competition lies on the U.S.'s path to commercializing AI. Enjoy our best effort at a journal club!