Episodes

  • In this episode, Jason Howell and Jeff Jarvis discuss the Rabbit R1 AI device with guest Mark Spoonauer from Tom's Guide, delving into its capabilities, design, and potential use cases. Also: Microsoft's VASA-1 model, Meta's slew of AI announcements, the closure of Oxford's Future of Humanity Institute, and the appointment of Paul Christiano to the US AI Safety Institute.


    Consider donating to the AI Inside Patreon: http://www.patreon.com/aiinsideshow


    INTERVIEW WITH MARK SPOONAUER, EIC OF TOM'S GUIDE

    - First impressions of the Rabbit R1 AI device
    - Design and form factor of the Rabbit R1
    - Capabilities and limitations of the Rabbit R1
    - Potential use cases for the Rabbit R1
    - Comparison with other AI devices like Meta's Ray-Ban glasses
    - Pricing and availability of the Rabbit R1
    - Concerns about the Rabbit R1 being a companion device rather than a phone replacement
    - Social implications and potential issues with the Rabbit R1

    NEWS

    - The Humane AI Pin bad-review backlash
    - AI wearables like Limitless
    - Limitations of audio-only AI interfaces like the IYO One
    - Closure of Oxford's Future of Humanity Institute and resignation of Nick Bostrom
    - Appointment of Paul Christiano as head of the US AI Safety Institute at NIST
    - Microsoft's new VASA-1 AI model for animating faces from photos and audio
    - Google's restructuring, combining Android and hardware teams for AI integration
    - YouTube's new "Ask" feature for premium subscribers to interact with videos using AI
    - Meta's announcements: multimodal AI support for Ray-Ban glasses, AI assistant integration, and the Llama 3 language model

    Hosted on Acast. See acast.com/privacy for more information.

  • Join Jason Howell as he chats with the founders of Augie and Revoldiv, two tools harnessing the power of AI to transform the video production and transcription process for creators and professionals.


    Please support AI Inside by joining us on our Patreon!


    Interview with Jeremy Toeman, founder of AugX Labs, and Augie

    - Jeremy Toeman's background at CNET and the shift to video content
    - The inspiration behind creating Augie and AugX Labs
    - How Augie works and its core functionality
    - Live demo of Augie's video creation capabilities
    - Challenges faced in developing AI tools like Augie
    - The future of AI-generated content and its implications
    - CODE: yellowgold for a free month of the premium tier!

    Interview with Surafel Defar, founder of Revoldiv

    - Surafel's journey from software engineering to entrepreneurship
    - The origins and purpose of Revoldiv
    - Revoldiv's transcription capabilities and user interface
    - Unexpected use cases of Revoldiv
    - Balancing coding and business responsibilities as a founder
    - The importance of user feedback and adaptation in the AI field



  • Jeff Jarvis and Jason Howell discuss Google's AI announcements at Google Cloud Next, how AI is integrating with advertising and marketing, the ethical debates surrounding data harvesting, calls for stronger misinformation safeguards, and the impressive emotional depth of AI-generated music.


    Please support this show directly by becoming a Patron!


    NEWS

    - Jeff Jarvis' experience speaking at the Nordic Media and AI conference
    - Jony Ive and Sam Altman are DEFINITELY working on an AI device together
    - Google Cloud Next 2024 announcements (Gemini 1.5 Pro, Google Vids, AI meeting notes)
    - WPP's partnership with Google's Gemini AI for advertising and marketing
    - Activist groups calling for stronger action against AI-generated misinformation and deepfakes
    - The New York Times article on the data race to feed AI systems
    - Perplexity's plans to sell ads on their platform
    - Suno AI music generation service and the emotional quality of AI-generated music


  • Jason Howell and Jeff Jarvis dive into the $665 million Shiba Inu coin donation to the Future of Life Institute, the potential $100 billion "Stargate" supercomputer from OpenAI and Microsoft, Jason gives a hands-on demo of Stability AI's new music generation model, and more!


    NEWS

    - Future of Life Institute's $665 million donation from Vitalik Buterin (Ethereum co-founder) in Shiba Inu coins

    - Artifact social news curation app shutting down, acquired by Yahoo

    - Yum Brands has plans to go all in on generative AI

    - How people are using generative AI based on Harvard Business Review report

    - New York City's chatbot providing incorrect legal/business advice

    - Generative AI used to create virtual persona of real person for advertising

    - OpenAI and Microsoft in talks for $100 billion "Stargate" supercomputer


    DEMO

    - Stability AI's Stable Audio 2.0 for music generation



  • Sal Khan, founder and CEO of Khan Academy and author of the new book "Brave New Words: How AI Will Revolutionize Education (and Why That's a Good Thing)", joins Jason Howell and Jeff Jarvis to talk about the many challenges educators face as AI tools become pervasive and ever more powerful. Sal explores how teachers and administrators are embracing technology to improve the learning environment and make their own jobs easier and more effective.


    Become a Patron of AI Inside and support our work directly


    INTERVIEW with Sal Khan, founder of Khan Academy

    - Sal's upcoming book "Brave New Words: How AI Will Revolutionize Education and Why That's a Good Thing"
    - Challenges teachers face with AI pushing against traditional teaching methods
    - Legitimate concerns about AI use in classrooms
    - AI as a tool, not a replacement for teachers
    - Potential of AI to mitigate cheating and provide process insights
    - Impact of AI on education and the role of humanities
    - Suggestions for educators to use AI in classrooms
    - AI as a tutor and writing coach
    - AI supporting student assignments and teacher productivity
    - Managing teacher resistance and concerns about AI
    - Sal's perspective on AI bias and representation
    - Ethical considerations for AI in assessment and hiring

    NEWS

    - Stability CEO resigns from generative AI company
    - United Nations adopts U.S.-led resolution to safely develop AI
    - Ben Evans: The problem of AI ethics, and laws about AI
    - Tennessee adopts ELVIS Act, protecting artists' voices from AI impersonation
    - Financial Times tests an AI chatbot trained on decades of its own articles
    - OpenAI is pitching Sora to Hollywood
    - Sora: first impressions


  • This week, Jason Howell and Jeff Jarvis welcome Evan Brown to round up AI regulation efforts worldwide including Utah's disclosure law and the EU's comprehensive AI Act. Plus, big news for Microsoft AI, Nvidia's behemoth AI chips, and more!


    SUPPORT THE SHOW!


    INTERVIEW WITH EVAN BROWN

    Utah has a brand new law that regulates generative AI

    EU votes to ban riskiest forms of AI and impose restrictions on others

    Journalism and AI

    YouTube Introduces Mandatory Disclosure For AI-Generated Content


    NEWS

    Big changes at Inflection AI & Microsoft

    WSJ report on Microsoft/Inflection

    Reid Hoffman's tweets about it

    Apple Is in Talks to Let Google Gemini Power iPhone AI Features

    Nvidia reveals Blackwell B200 GPU, the ‘world’s most powerful chip’ for AI

    Jeff & Mikah react to the NVIDIA keynote

    xAI open sources Grok

    A good video explaining generative AI



  • In this episode of AI Inside, Jeff Jarvis and Jason Howell explore the tempering of expectations around generative AI, the promise and concerns of open-sourcing models, and the unique ways AI is being leveraged in journalism, elderly care, and even digital recreations of deceased celebrities.


    NEWS

    - OpenAI's internal investigation into Sam Altman's ouster as he returns to the board
    - Google implementing safeguards restricting election-related information on its AI
    - Expectations around generative AI capabilities and impact being tamped down
    - Open-sourcing AI models and the debate around it
    - Elon Musk's vague promise to open-source "Grok"
    - The use of AI in journalism for analysis and data processing
    - Midjourney's new character reference feature for consistent image generation
    - AI-powered companion dolls for elderly companionship
    - AI application for bereavement assistance
    - Digital recreation of Marilyn Monroe's voice using AI

    Keep connected to the show:


    WEBSITE: http://aiinside.show

    VIDEO: http://www.youtube.com/@YellowgoldStudios?sub_confirmation=1

    PATREON: http://www.patreon.com/aiinsideshow

    TWITTER: http://www.twitter.com/AIInsideShow

    INSTAGRAM: http://www.instagram.com/aiinsideshow

    THREADS: https://www.threads.net/@aiinsideshow

    MASTODON: https://mastodon.social/@aiinsideshow



  • Jason Howell and Jeff Jarvis discuss Elon Musk's legal battle with OpenAI, Nvidia's CEO on making programming obsolete, calls for responsible AI innovation, and more!


    NEWS

    - Discussion about Elon Musk suing OpenAI for breaching contract over its stated mission of openness and building responsible AI
    - OpenAI releasing emails from Musk revealing his desire for control and more funding
    - Anthropic rolling out new versions of Claude LLM (Claude 3 Opus, Sonnet, Haiku)
    - Nvidia CEO Jensen Huang's statement on AI making programming unnecessary for everyone
    - Americans for Responsible AI Innovation launching and calls for regulation
    - Open letter from researchers for a "safe harbor" for independent AI evaluation
    - Trust in AI companies dropping according to Edelman report
    - Bad bots: Issues with Amazon's Rufus shopping bot and H&R Block's tax advice bot
    - Amazon's $1 billion industrial innovation fund for AI and robotics
    - Humanoid robot developments from companies like Figure and Magic Lab


  • This week Jason Howell and Jeff Jarvis talk with Dan Patterson of Blackbird.ai about their context-providing service Compass before diving into the week's top AI stories including Google's "biased" Gemini model, data sharing deals between companies like Reddit for model training, and the risks and benefits of open sourcing AI systems.


    INTERVIEW

    - Overview of Blackbird AI's mission to track narrative threats and attacks like misinformation and disinformation
    - Introduction of Blackbird's new product Compass for providing context around claims using AI analysis
    - Explanation of how Compass works to check claims and provide contextual information from authoritative sources
    - Discussion around Compass being built on Blackbird's Raven Risk large language model (LLM) and related APIs
    - Examples provided of using Compass for real-world claims like "Is the earth flat?"
    - Intention for Compass to help provide clarity and essential context to media content
    - Discussion around target users for Compass: social media companies, comms agencies, journalists
    - Explanation that Compass determines authority based on how authoritative sites reference each other
    - Discussion around Compass having a framework for integrating with fact-checking databases

    NEWS

    - Discussion around the challenges and nuances of implementing guardrails for AI
    - News segment on Google's Gemini model controversy over biased image generation
    - Deals emerging between tech companies to sell data for AI model training, including Reddit-Google and rumored Tumblr-OpenAI
    - Tyler Perry putting his studio expansion plans on hold due to the emergence of AI like OpenAI's Sora
    - Analysis of benefits and risks of open-sourcing AI models


  • Jason Howell and Jeff Jarvis discuss the latest AI news, including OpenAI's new Sora video generator, legal issues for an Air Canada chatbot, missteps in publishing AI art in a medical journal, and Jason gives a hands-on demo of Perplexity Pro including a solid example of why it's necessary to scrutinize the output of LLMs.

    - Release of OpenAI's Sora for video generation
    - Debate over whether advanced AI capabilities indicate nearing AGI
    - Meta's new AI model V-JEPA learns from video like LLMs learn from text
    - Effect of Sora release on Adobe stock valuation
    - Air Canada chatbot provides misleading refund info, company gets sued
    - Scientific journal publishes AI-generated rat diagram from Midjourney
    - British painter Harold Cohen spent 40+ years refining an image-generating robot

    PERPLEXITY PRO DEMONSTRATION

    - Integrates multiple models like GPT-3.5 and Stable Diffusion
    - Capability to search across internet sources with up-to-date results
    - The need to verify its output: it got its own capabilities wrong by claiming it uses the model "Sora"


  • Jason Howell and Jeff Jarvis discuss the week's AI news, including Sam Altman's call for $7 trillion in AI funding, Google's launch of Gemini Ultra 1.0 chatbot, proposed regulations on AI safety, dismissal of copyright claims against AI, and the need for humanities education in the AI field.


    NOTE: Connectivity issues resulted in a lower-resolution file for part of the show. Apologies!


    NEWS

    - Sam Altman wants $7 trillion to boost AI chip and GPU production globally
    - ChatGPT gaining ability to remember user preferences and data
    - OpenAI building web and device control agents
    - Google's Assistant is now called Gemini on Android devices
    - Google announces Gemini Ultra 1.0 model to compete with GPT-4
    - California bill proposes AI safety regulations and requirements
    - AI companies agree to limit election deepfakes
    - Most claims dismissed in Sarah Silverman copyright lawsuit, leaving only 1 direct copyright claim
    - Beijing court rules AI-generated content can be copyrighted
    - NYT op-ed argues humanities education is key to developing AI leaders


  • This week, Jason Howell and Jeff Jarvis welcome Dr. Nirit Weiss-Blatt, author of "The Techlash and Tech Crisis Communication" and the AI Panic newsletter, to discuss how some AI safety organizations use questionable surveys, messaging, and media influence to spread fears about artificial intelligence risks, backed by hundreds of millions in funding from groups with ties to Effective Altruism.


    INTERVIEW

    - Introduction to guest Dr. Nirit Weiss-Blatt
    - Research on how media coverage of tech shifted from optimistic to pessimistic
    - AI doom predictions in media
    - AI researchers predicting human extinction
    - Criticism of annual AI Impacts survey
    - Role of organizations like MIRI, FLI, Open Philanthropy in funding AI safety research
    - Using fear to justify regulations
    - Need for balanced AI coverage
    - Potential for backlash against AI safety groups
    - With influence comes responsibility and scrutiny
    - The challenge of responsible AI exploration
    - Need to take concerns seriously and explore responsibly

    NEWS BITES

    - Meta's AI teams are filled with female leadership
    - Meta embraces labeling of GenAI content
    - Google's AI Test Kitchen and image effects generator


  • In this episode, Sven Størmer Thaulow, EVP/Chief Data and Technology Officer at Schibsted, joins Jason Howell and Jeff Jarvis to discuss the creation of a Large Language Model (LLM) for media in Norway, the importance of aligning AI models with cultural values, and the potential for American media companies to collaborate on creating English language models. The conversation also explores the use of AI in generating article titles, providing a different user interface, and experimenting with an AI agent for tech reviews.


    INTERVIEW

    - Introduction to the guest, Sven Størmer Thaulow, EVP/Chief Data and Technology Officer at Schibsted
    - The work Sven and his team are doing at Schibsted
    - Background on Schibsted and its success in the news industry
    - Schibsted's successful subscriptions in Norway and their high-quality news products
    - The creation of an LLM in the Norwegian language and Schibsted's role in it
    - Different attitudes towards AI and language models in Norway and the US
    - Schibsted's AI strategy and their work with AI in the recommendation system space
    - The importance of data and language data for AI and language models
    - Efforts to build Norwegian and Northern Germanic language LLMs
    - The need to align AI models with cultural values
    - Potential for American media companies to collaborate on creating English language models
    - Sven's experiment with using their LLM to generate article titles at Schibsted
    - The use of AI to provide a different user interface and relationship with content
    - Experiment with an AI agent for tech reviews

    NEWS BITES

    - Google's new Lumiere AI video generator can create stunning clips by using space and time together
    - Microsoft's changes to its AI text-to-image generation tool in response to reports of people using it to create nonconsensual sexual images of Taylor Swift
    - OpenAI and Common Sense Media have partnered to address concerns about the impacts of AI on children and teenagers
    - Keep It Shot is a tool that allows Mac users to rename, organize, and quickly find their screenshots using AI


  • On the premiere episode of the AI Inside podcast, hosts Jeff Jarvis and Jason Howell discuss AI copyright issues with Common Crawl Foundation's Rich Skrenta, focusing on news outlets that limit access to content they publish publicly and the impact on the integrity of Common Crawl's internet archive. In recent years the archive has been used by LLMs as AI training data, and restricting that information has a dramatic impact on the quality of the data that survives.


    INTERVIEW

    - Introduction and background on AI Inside podcast
    - Discussion of the recent AI oversight Senate hearing Jeff testified at
    - Introduction of guest Rich Skrenta from Common Crawl Foundation
    - Overview of Common Crawl and its goals to archive the open web
    - Discussion of how Common Crawl data is used to train AI models
    - News publishers wanting content removed from Common Crawl
    - Debate around copyright, fair use, and AI's "right to read"
    - Mechanics of how Common Crawl works and what it archives
    - Concerns about restricting AI access to data for training
    - Risk of regulatory capture and only big companies being able to use AI
    - Discussion of recent court ruling related to web scraping
    - Hopes for Common Crawl's growth and evolution

    NEWS BITES

    - Interesting device announcement from CES: Rabbit R1 with Perplexity AI integration
    - Study on actual risk of AI automating jobs away in the near future


  • AI Inside is your new home for keeping up to date on what's new in the world of Artificial Intelligence. Join Jason Howell and Jeff Jarvis every Wednesday as they speak with some of the world's leading minds about how AI is being used, abused, and built into the fabric of our lives. AI Inside will look at the influence of AI in the newsroom, the ongoing uncertainty around AI art and copyright, the scalability of disinformation, supercharging worker productivity, and so much more. And if a groundbreaking service grabs hold of the news cycle, AI Inside will be there to take a closer look. It's all fair game on the AI Inside podcast.

