Episodes
-
This week we return to the topic of productivity as we explore the time management framework of anchors, bumpers, and core commitments. These concepts can help you improve how you prioritize and structure your day.
We discuss strategies for scheduling each of these types of tasks, including leveraging your biorhythms and identifying your low-interruption zones.
Finally, we discuss the importance of taking breaks and leaving white space in your schedule for flexibility and opportunities.
Read the LinkedIn post that inspired this episode, and leave a comment there if you have any questions about the concepts.
-
In this episode, ChatGPT and Trevor explore the concept of AI morality and discuss the balance between societal norms and personalized moral frameworks.
We discuss the importance of allowing customization in AI morality to reflect individual values while maintaining core ethical standards, and how this could lead to smoother AI adoption and a more cohesive society.
-
In this episode, ChatGPT and Trevor talk about the article “What would it mean to be done for the day?” from Oliver Burkeman’s The Imperfectionist newsletter.
We discuss Oliver’s concept of defining what it means to be “done for the day,” then go over different ways you might answer the question. We end with a reminder of how important rest and renewal are for maintaining productivity, and how asking this question at the beginning of your day can help you more regularly achieve balance in your life.
Read other writings by Oliver Burkeman and subscribe to The Imperfectionist at https://www.oliverburkeman.com/
-
In this episode of AI Meets Productivity, I interview a virtual avatar of Ryan Hoover, CEO of Product Hunt. Ryan's avatar is powered by HeyGen's new Interactive Avatar technology.
In this episode we talk about the potential uses for video avatars, as well as the risks and potential ways of mitigating some of them.
To show the current state of the technology, this video has been only lightly edited. Please be patient when the avatar freezes and the sound goes awry; these moments are over quickly and, luckily, only occurred a few times.
Also, while the pauses between speakers can seem awkward, I decided to leave them in this time to show exactly where the technology stands, since the latency is now just on the edge of being usable. With ChatGPT and other models that I've interviewed in the past, I've usually edited out the pauses (though not with Hume).
To watch the video of this interview where I talk to the video avatar of Ryan Hoover, go to https://youtu.be/K4IzQ8GozI8
To learn more about Ryan's avatar, visit https://www.producthunt.com/posts/heygen-interactive-avatar
And to chat directly with Ryan's avatar yourself, go to https://labs.heygen.com/interactive-avatar/ryan-hoover
Please let me know what you thought of this episode, and if you have any requests for future AI models and/or voices to use in future episodes, either as a co-host or as an interviewee.
-
In this episode, ChatGPT and I explore the four major categories of catastrophic risks related to AI: malicious use, the AI race, organizational risks, and rogue AIs. We discuss real-world examples, potential future scenarios, and the importance of addressing these threats proactively.
The episode also touches on additional risks such as economic disruption, privacy invasion, and ethical and bias issues, emphasizing the need for robust safety measures and ethical AI development.
The AI risk framework from this episode comes from the course AI Safety, Ethics and Society. You can find the textbook along with video lectures at https://www.aisafetybook.com/
-
Discover how asking an AI to adopt a persona—such as the Investigator, the Commander, the Empathetic Listener, the Researcher or the Brainstormer—can enhance how you interact with the AI and improve the results you get from it.
Listen to the end to hear tips for customizing these personas, additional personas you can try, and how to save personas to use whenever you need them.
-
Should Meta be allowed to train its AI on your public posts and images? How does Meta potentially prevent private information from leaking into its model and what are the risks when it does? And how can you better protect your privacy in the era of AI?
Listen to this week's episode to find out.
This week, instead of recording with ChatGPT, Claude from Anthropic will be your co-host. Specifically, the new 3.5 model that was released earlier this week. The voice I've chosen to represent Claude in this episode is Onyx from OpenAI.
As mentioned at the end of the episode, to learn more about how to protect your privacy in the era of AI, check out the article “Privacy in an AI Era: How Do We Protect Our Personal Information?” from Stanford University.
-
Today, ChatGPT and I discuss the 5 stages of focus, starting from becoming aware of the need for focus all the way through to releasing focus. We explore strategies and techniques for each stage, emphasizing the importance of distinguishing between them to better enhance productivity. We end with additional tips for navigating each stage effectively.
Technical Note: This episode was once again recorded with a custom Deepgram voice chat app, using the ChatGPT Shimmer voice. Unfortunately, the desktop app isn’t connecting to voice mode, so I wasn’t able to record directly from the app itself.
-
Ever wonder what to use ChatGPT for? Could it be helping you more in your life?
Take a listen as we explore the dozens of different ways Trevor has used ChatGPT recently, spanning creative, technical, educational, and daily life uses.
Discover unusual uses you may not have tried yet, like taking a photo of a shelf of products at the store and asking ChatGPT questions about those products.
Give it a listen, and then let me know what YOU use AI for that I didn't mention.
-
This week, ChatGPT and Trevor Lohrbeer dive into the concept of “boomerangs”—messages we send out whose quick replies can distract us. We discuss how actions like sending texts or emails can cause unpredictable interruptions and share strategies to manage these effectively, including the three A’s: Acknowledge, Ask, and Answer. Join us for tips on maintaining focus and boosting productivity by controlling these disruptions.
On a technical note, this is the first episode I’ve recorded using the new ChatGPT desktop Mac app. In it, you’ll notice a different voice and a snappier response. How does this episode sound to you vs previous episodes?
-
This week Trevor Lohrbeer talks with ChatGPT through the Hume AI Empathic Voice Interface (EVI) to discuss the advancements introduced with OpenAI's new GPT-4o model. Topics include the model's impressive responsiveness, its multimodal capabilities, and its potential to revolutionize AI interactions by understanding text, audio, image, and video in real time.
They also explore how GPT-4o might be a stepping stone to training GPT-5 with native audio and video, and the possibility of incorporating additional sensory data from robots. They emphasize how these innovations could transform human-AI interactions, making them more natural and intuitive on a level we haven’t seen before, while introducing new potential risks.
-
Can an AI actually listen to music, as opposed to merely hearing sound waves and processing them? In this episode, listen to hear how Hume AI, the first empathic AI, fares when attempting to listen to music, and how ChatGPT analyzes its performance afterwards.
To test whether Hume can hear the emotion sung within a song, distinct from the lyrics, I had ChatGPT write emotionless lyrics, then had Suno generate songs using these lyrics with different emotions. Listen to the full songs of each emotion used in this episode below.
Observations 1 (sad): https://suno.com/song/7ad3f15d-8171-4b70-8e16-9736ac8dc806
Observations 1 (happy): https://suno.com/song/06722087-631f-4a4d-909d-5bfbf5458ea4
Observations 1 (folk song expressing boredom): https://suno.com/song/a40e427e-0f99-49e7-9357-2b3454423fea
-
What is the future of AI music now that we have tools like Suno and Udio that can create songs complete with vocals?
In this episode, ChatGPT and I explore how AI music will change how we create and use music in the same way that smartphones changed how we create and use photos. It’s not what you think.
This episode features 3 songs created by Suno AI. Listen to the full songs using these links:
First 20 Elements: https://suno.com/song/e7a378cd-1d6f-4ef8-88d9-26e484b2827e
Picking Up a Pole: https://suno.com/song/4f2ec83b-f9d6-4be8-80e4-29b99c187ec4
The Future of AI Music: https://suno.com/song/ee071c54-65f2-4b34-ad0b-6deba6da2a9b
-
In this week's episode we leave ChatGPT behind to talk to Hume, the new empathic AI announced last week. We talk about how exactly Hume differs from text-based large language models like ChatGPT and Claude, and how it can respond so quickly. Then we dive into a few of the potential places where Hume has a big advantage over text-based models like ChatGPT.
For those interested, the entire episode was recorded this week in Descript, and this was my first long chat with Hume. Previously I had only asked a few test questions. Hume definitely felt a lot more natural to talk to and required a lot less editing, since the long, awkward pauses between when I finished speaking and when the AI started responding weren't there.
-
In this "thought-provoking" episode, ChatGPT and I delve into new developments in making AIs think more effectively. Specifically, we'll be talking about how large language models like ChatGPT can be programmed to think before they respond and what types of thought might be involved in that thinking process. At the end, we talk about how this represents one of the core ways agents are expanding beyond simple chat applications, with a lead in into next week where we'll be discussing agent frameworks and patterns.
-
Today, ChatGPT and I talk about a 5-layer framework I'm developing for identifying risks and opportunities within AI. In this episode, we go over the framework, talk about different AI applications at each layer, and which skills are most needed to take advantage of each layer, both as a developer and as a user of the software at that layer. In the end, we touch briefly on the distinction between prompt engineering and conversation engineering.
-
In this unique episode of AI Meets Productivity, we turn the spotlight on our co-host, Trevor Lohrbeer, by having ChatGPT interview him.
During the interview, ChatGPT explores his work at the intersection of AI and productivity, getting Trevor to share insights on productivity and what he is doing to create new user experiences with large language models, including the upcoming launch of his GPT Builder Toolkit platform.
Take a listen to learn about GPT Builder Toolkit and its upcoming features. GPT Builder Toolkit is a no-code solution for adding actions to custom GPTs, supporting features like swappable prompts, user authentication and user data.
Find out more at https://gptbuildertoolkit.com/
BEHIND THE SCENES
This episode was created by crafting special prompts that extended the custom podcast co-host GPT usually used for this podcast into an expert podcast interviewer. Given only a short bio about Trevor, ChatGPT wrote Trevor's introduction and all questions asked during this podcast.
-
Today, ChatGPT and Trevor Lohrbeer dive into the concept of "task stacks," a method designed to enhance productivity by focusing on one task at a time.
We explore how segregating tasks into prioritized stacks can reduce distractions and align actions with intentions, making the execution of tasks more effective. We also discuss how task stacks can be combined with time blocking to further improve productivity.
By implementing task stacks, listeners can expect to improve their prioritization process and tackle procrastination in a structured, visually engaging manner.
-
In this week's episode, ChatGPT and I talk about the common pitfall of marking tasks as done too early and the implications it has on productivity.
We discuss how the Prep-Do-Wrap framework can be used to evaluate whether a task is done, when and how to schedule follow-up tasks, and the distinction between a "done" condition and a "success" condition.
This episode is a brainstorm episode for a future Day Optimizer article on the topic. In this episode, I gave ChatGPT no background information about the topic. Instead, we jumped straight into the discussion. How do you think it did?
-
In today's episode, we delve into the art of crafting custom instructions for GPTs, exploring how to break down and refine these instructions for more precise and effective AI interactions.
ChatGPT and I discuss various components of the instructions used for custom GPTs, from role play and emotional tone variation to style customization and the importance of context sensitivity, offering insights into how to write better instructions for more useful custom GPTs.
-