Episodes

  • In January of this year, Peregrine Mission One launched with at least 22 payloads. One was intended to be the first American-made rover to land on the moon since the Apollo days in 1972. It was called Iris, and it was also the first lunar rover constructed with carbon fiber. It was designed and built by students at Carnegie Mellon University. Today, we’re going to chat with them ...

    Despite a mission failure, when the lander suffered a propellant leak and missed its lunar target, the Iris team achieved significant milestones. They successfully demonstrated that a student-made rover could survive space conditions, including the Van Allen Belt's radiation, and maintain communication and function in space. Despite its setbacks, the project marks a significant achievement in democratizing space exploration and contributes to the broader vision of establishing moon and Mars bases as stepping stones for further space exploration.

    00:00 The Future of Space Exploration: Moon and Mars Bases

    00:42 Introducing the Iris Lunar Rover Project

    05:17 The Team Behind Iris: Roles and Experiences

    09:00 Scientific Goals and Achievements of the Iris Rover

    12:58 Overcoming Failure: Lessons from a Mission Gone Wrong

    22:03 The Next Steps: Future Missions and Career Paths

    25:59 Reflecting on the Golden Age of Space Exploration

  • What happens after AGI?

    AGI is artificial general intelligence: it’s when AI achieves human-level intelligence and, likely quickly thereafter, super-human abilities, maybe even ushering in the Singularity.

    I was recently at the Beneficial AGI conference in Panama. One of the speakers was Dan Faggella, founder of Emerj Artificial Intelligence Research. He’s interviewed nearly 1,000 AI leaders, and he has some good insight into what AGI might do. Or at least what the experts think about it …

    We discuss artificial general intelligence (AGI), the potential for post-human bliss through advanced simulations, and various perspectives on AGI's ethical and societal impacts. Faggella shares insights from interviews with nearly a thousand AI experts, outlining a matrix to categorize thoughts on AGI's future and human interaction. The discussion covers the balance between control, collaboration, and open-source development in AI, along with personal reflections on humanity's potential paths in an AI-dominated future. Themes include the ethical implications of AGI, the role of human values in AI development, and speculative futures where humanity merges with or is overshadowed by superior AI entities.

    00:00 Exploring Post-Human Bliss and the Power of AI

    01:31 The Matrix of AI Perspectives

    02:50 Exploring the Future with AI: Preservation, Progression, and Ascension

    04:26 Navigating the Path to AI: Control, Collaboration, Openness

    07:11 Personal Stances and the Future of AI

    19:00 AI's Impact on Society and the Future

    24:23 Envisioning a Post-Human Future: Choices and Consequences

    29:53 Reflections on Humanity's Path Forward with AI

  • If you have an iPhone, you've got a notch. Now there's tech that can get rid of that notch ... and the same tech can bring secure Face ID to Android at a fraction of the cost.

    In this TechFirst, I chat with Metalenz CEO Rob Devlin about his metasurface products. Not only can they produce about 10,000 lenses on a single 30-centimeter wafer, just like computer chips, they can now decode polarization information from the light reflecting off a surface. That gives them data on what that surface is made from, and that is a huge advancement for biometrics, phones, medical devices, and robots.

    The technology, which can capture and process unique wavelengths and polarization information, enables the creation of smaller, cheaper, and more efficient optical systems. Metalenz's partnership with STMicroelectronics has led to the integration of metasurface optics in products that have been sold in over 150 different smartphone models.

    00:00 Revolutionizing Optics with Metalenz Technology

    00:30 The Journey of Metalenz: From Concept to Market

    01:34 Exploring the Impact of Metasurface Technology

    02:39 Understanding Metasurfaces and Their Potential

    10:48 Introducing Polar ID: A Game-Changer for Biometric Security

    22:20 The Future of Polarization Technology and Its Applications

    22:33 Collaboration with Samsung and the Path Forward

    27:14 Envisioning New Horizons: Beyond Polar ID

    32:36 Wrapping Up: The Future of Metalenz and Polar ID

  • Billions of robots within a decade? A similar growth curve to smartphones? We currently have about 30 million robots on the planet, not counting Roombas and similar small bots. RobotLab CEO Elad Inbar says that will hit BILLIONS with a B within 10 years.We discuss the exponential increase in commercial robots globally and predict billions of robots integrating into daily activities, from service industries to personal assistance, over the next decade. We chat about the evolution of robotics from novelty items to essential aspects of business operations, highlighting the role of robots in automating mundane tasks and their future potential in enhancing customer service and living standards. Inbar also emphasizes the importance of service infrastructure to support the widespread adoption of robotics technology, drawing parallels with past technological advancements like mobile phones and cars.

    And we dive into specific applications of robots in restaurants, cleaning services, and healthcare, particularly for dementia patients, and the franchise model RobotLab is adopting to expand its reach and capacity to deliver robotics solutions.

    00:00 The Dawn of the Robot Decade: Envisioning a Future with Billions of Robots

    01:02 The Big Picture: Robots Transforming Business and Society

    07:10 The Current State of Robotics: From Hospitality to Manufacturing

    09:50 The Future of Work: Robots Filling the Gaps in the Workforce

    12:40 Enhancing Customer Service: How Robots are Changing the Game

    13:31 The Restaurant Revolution: Robots Taking Over Service Roles

    16:35 Exploring the Role of Robots in Restaurants

    16:47 Adapting Robots to Different Restaurant Environments

    18:18 Growth Areas Beyond Restaurants: Cleaning and Retail

    22:47 The Future of Customer-Facing Robots

    24:00 Robots in Assisted Living: A Compassionate Solution

    27:09 Unlocking the Potential of Robotics in Business

  • Is the Apple Vision Pro the future of surgical training?

    In this episode of TechFirst, host John Koetsier discusses the transformative impact of virtual reality (VR) on surgical training, highlighting the cost-efficiency and effectiveness of VR in reducing the learning curve for surgeons. The conversation features Richard Vincent, CEO of Fundamental VR, who elaborates on how VR technology, particularly the Apple Vision Pro, is revolutionizing surgical education by offering rapid, repeatable training sessions without the logistical constraints of traditional methods. They explore the hardware agnosticism of Fundamental VR's software, ensuring compatibility with various VR platforms, and delve into the new possibilities unlocked by the Apple Vision Pro's advanced features, including its intuitive control system, powerful compute capacity, and exceptional optics. The discussion also touches on the incorporation of haptics for a more immersive training experience, the potential of VR for remote collaborative training, and the broader implications of VR technology in the medical field.

    00:00 Unlocking the Future of Surgical Training with VR

    01:15 The Cost-Effectiveness of VR in Surgical Training

    03:13 Achieving Competence: The Role of VR in Surgery

    04:45 Hardware: From Oculus to Apple Vision Pro

    07:04 The Revolutionary Apple Vision Pro in Surgical Training

    10:35 The Power of Haptics: Enhancing VR Training with Physical Feedback

    13:07 The Impact of Device Cost on VR Training Accessibility

    14:34 Expanding Horizons: VR's Role in Remote Surgery Training

    17:03 The Future of Medical Training and Collaboration with VR

    18:48 Apple Vision Pro: A Game-Changer for Medical VR Applications

    20:15 Closing Thoughts and Future Prospects

  • When will AI match and surpass human capability? In short, when will we have AGI, or artificial general intelligence ... the kind of intelligence that could teach itself and grow to a vastly larger intellect than any individual human?

    According to Ben Goertzel, CEO of SingularityNET, that time is very close: only 3 to 8 years away. In this TechFirst, I chat with Ben as we approach the Beneficial AGI conference in Panama City, Panama.

    We discuss the diverse possibilities of human and post-human existence, from cyborg enhancements to digital mind uploads, and the varying timelines for when we might achieve AGI. We talk about the role of current AI technologies, like LLMs, and how they fit into the path towards AGI, highlighting the importance of combining multiple AI methods to mirror the complexity of human intelligence. We also explore the societal and ethical implications of AGI development, including job obsolescence, data privacy, and the potential geopolitical ramifications, emphasizing the critical period of transition towards a post-singularity world where AI could significantly improve human life. Finally, we talk about ownership and decentralization of AI, comparing it to the internet's evolution, and consider the role of humans in a world where AI surpasses human intelligence.

    00:00 Introduction to the Future of AI

    01:28 Predicting the Timeline of Artificial General Intelligence

    02:06 The Role of LLMs in the Path to AGI

    05:23 The Impact of AI on Jobs and Economy

    06:43 The Future of AI Development

    10:35 The Role of Humans in a World with AGI

    35:10 The Diverse Future of Human and Post-Human Minds

    36:51 The Challenges of Transitioning to a World with AGI

    39:34 Conclusion: The Future of AGI

  • Can you use sentinel oysters and other mollusks to track water quality near your cities, beaches, or the Great Barrier Reef?

    Actually ... yes.

    In this episode of TechFirst, host John Koetsier chats with the CEO of Moloscan, a company focused on bio-monitoring and protection of marine environments using live shellfish.

    The company uses aquatic bivalves, such as oysters, mussels, or clams, to monitor the environment. These mollusks, which are filter feeders, react to changes in water conditions, helping to detect pollution and other disruptions in water quality. The discussion covers the technological developments and rigorous research necessary to map out the normal behavior of these animals and provide accurate water quality ratings. They also discuss how this method is more efficient and environmentally friendly compared to traditional mechanical probes and lab tests. The CEO shares examples of installations in varied environments, ranging from oil and gas platforms to diverse geographical locations from Quebec to Qatar.

    00:00 Introduction to Sentinel Oysters and Water Quality Monitoring

    00:55 Understanding the Concept of Biomonitoring

    01:48 The Science Behind Mollusk Behavior and Detection

    02:43 The Journey of Developing the Monitoring Device

    04:24 Understanding the Sensitivity and Precision of Mollusks

    05:12 The Role of Mollusks in Detecting Water Pollution

    08:06 The Technical Aspects of Monitoring Mollusk Behavior

    10:43 The Real-world Application of Mollusk Monitoring

    15:34 The Challenges and Benefits of Using Mollusks as Sensors

    22:51 The Potential for Expanding the Technique to Other Biomes

    26:24 Conclusion: The Future of Biomonitoring

  • Do you need ChatGPT integrated into your new bike? How about an all-wheel drive bike? (OK: a 2-wheel drive ... but yeah, that's all-wheel drive!)

    In this episode of TechFirst, host John Koetsier chats with the CEO of Urtopia about their new AI-integrated 'smart bike with a mind'. The e-bike market is predicted to grow to about $26 billion by 2028, but Dr. Owen Chang explains how Urtopia is taking a different approach by developing most parts in-house to create a fully integrated, software-enabled product. He says their AI features, like ChatGPT integration, make e-bikes safer and more personalised, providing assistance such as directions to make the ride more enjoyable. Urtopia is also developing its own version of GPT based on GPT-5, refining its potential functionalities.

    We also chat about the world's first e-bike that has drive motors on both wheels, providing more power and better traction.

    00:00 Introduction and Welcome

    01:06 Exploring the Fusion GT Bike

    01:47 The Design and Development Process

    03:53 The Power of Dual Motor and Dual Battery System

    06:51 The Future of Bikes: ChatGPT Integration?

    07:12 The Role of AI in Urtopia's Bikes

    07:38 The Vision of Urtopia: A Bicycle with a Mind

    16:48 The Future of Smart Devices and E-bikes

    25:30 Conclusion: The Bike as a Wearable Device

  • Can you deliver medical treatment by changing brainwaves instead of injecting drugs? Elon Musk has recently implanted his first Neuralink into a human patient. But can we get neurotech medical treatment without drilling holes in our skulls?

    Maybe ...

    According to Elemind, a startup with roots in MIT, we can. They say they can read your brainwaves, manipulate them, and fix issues like sleep disorders, tremors, and pain, and even speed up learning. Today we're chatting with Meredith Perry, CEO and former NASA astrobiology researcher, plus Dr. David Wang, co-founder and CTO, who has a PhD in AI from MIT.

    This technology could potentially treat medical conditions ranging from sleep disorders and tremors to learning difficulties. We also discuss the future of medtech, envisioning an 'app store for the brain' where individualized treatments can be downloaded like apps, focusing on promoting the most optimized state of health for any given individual through real-time detection and diagnosis.

    00:00 Intro to Neurotech and Neurostimulation

    00:33 Welcome and Introduction of Guests

    01:31 Understanding the Concept of Elemind's Neurotech Device

    02:59 Exploring the Form Factor of the Device

    04:23 How It Works

    07:28 Effectiveness and Impact of the Device

    13:05 Future Plans and Vision for the Device

    18:52 Potential and Impact of the Device on Healthcare

    21:35 Conclusion and Final Thoughts

  • Can someone hack your reality if you're wearing an Apple Vision Pro?

    In this episode of TechFirst, John Koetsier discusses the arrival of Apple's Vision Pro, a groundbreaking VR headset, and its associated privacy and security concerns with Synopsys principal security consultant Jamie Boote.

    They chat about how the device's advanced sensor systems can map out user environments, posing potential risks and security threats if hacked. Koetsier and Boote also consider Apple's past experience with hardware security and predict potential vulnerabilities and threats that may accompany this new technology.

    00:00 Introduction to Apple Vision Pro

    00:23 Privacy and Security Concerns

    02:02 Potential Threats and Vulnerabilities

    03:27 The Impact of New Technology on Security

    04:20 Trust in Apple's Security Measures

    06:25 Predictions for Future Security Issues

    07:46 The Evolution of Software and Security

    13:35 Final Thoughts and Conclusion

  • In this episode of TechFirst, host John Koetsier talks to Dan Hollenkamp, the CEO of Toggled, about the future of smart buildings in 2024. They discuss the difference between devices that are smart and devices that are merely remote controllable, the continuous improvement in building systems, and the use of data in smart devices. Dan sheds light on how buildings should facilitate our tasks, be predictive, and become active members of the grid to help stabilize energy demand. They also discuss the role of artificial intelligence (AI) in smart buildings, the shift of buildings from energy consumers to energy load managers, and the impact of work from home and return to the office on smart buildings. Join them as they delve into the evolving world of smart buildings and their part in our future.

    00:00 Introduction and Overview

    00:13 The Future of Smart Buildings

    02:23 Understanding Smart Devices vs. Remote Controllable Devices

    02:43 The Role of Data in Smart Buildings

    09:36 The Impact of Work from Home on Smart Buildings

    11:45 Buildings as Energy Load Managers

    15:54 The Role of AI in Smart Buildings

    21:53 Conclusion and Final Thoughts

  • In this end-of-year podcast of TechFirst for 2023, host John Koetsier explores the future trajectory of virtual reality (VR) technology, especially its potential in 2024. The podcast features Rolf Illenberger, founder & CEO of VR Direct, who believes that 2024 will be a critical inflection point for VR, with wide-scale adoption, particularly in enterprises. There is notable discussion of different VR headsets, including the Meta Quest Pro, Quest 3, and the upcoming Apple Vision Pro, as well as the role of AI in VR. The conversation also dives into the challenges and opportunities VR presents for both the consumer and enterprise markets, with an emphasis on the need for intuitive user interfaces and valuable use-cases. The podcast concludes with Rolf's prediction that by the end of 2024 it will be vital for every enterprise to have a VR strategy.

    00:00 Introduction and Welcome

    00:07 Discussion on VR Trends and Predictions for 2024

    00:45 Interview with Rolf Illenberger, CEO of VR Direct

    01:12 The Host's Personal Experience with VR

    01:44 The Future of VR: An Inflection Point

    02:48 Enterprise Applications of VR

    04:51 The Impact of Work from Home Trends on VR

    06:24 The Role of Apple Vision Pro in VR

    20:58 The Intersection of VR and AI

    21:52 The Inflection Point for VR in 2024

    26:36 Conclusion and Farewell

  • You’ve probably heard of Bhutan, and you may have heard of Bhutan’s Gross National Happiness measurement, which measures how happy a country is as well as how wealthy it is.

    Now the royal family in Bhutan is establishing an AI center to teach AI to locals ... and maybe bring Bhutanese happiness -- and ethics -- to AI. To bring, as the princess puts it, "Ancient Wisdom & Ethics" to artificial intelligence.

    In this TechFirst, I chat with Enrique Hernandez about his and Princess Wangchuk's goal of bringing an AI center to Bhutan.

    Subscribe to TechFirst: https://johnkoetsier.com/category/tech-first/

    Wangchuk AI Center: https://wangchukai.com/

  • Are you ready to dive into the emerging world of robot coworkers? In this video, originally recorded at Web Summit in Lisbon, I chat with special guest David Reger, CEO of Neura Robotics. We chat about what it will be like to work with robots, how we can ensure robots don't kill us (as is happening now, occasionally), and what will change about work and us in the process.

    We talk about:

    - Working with Robots: A New Frontier

    - The Future of Robot Partnerships

    - Impacts on Different Industries

    - Ensuring Safety and Trust

    - The Role of AI in Human-Robot Interaction

    - Transforming Work and Embracing Automation

    - The Importance of Fairness and Equity

    - Looking Ahead: Opportunities and Challenges

  • In this episode of TechFirst, host John Koetsier explores the global transition from traditional coil and magnet speakers to solid-state semiconductor alternatives with Mike Householder, a VP at xMEMS. The discussion includes the history of the speaker, the advantages of using solid-state semiconductors, and the future vision for sound technology. Mike also makes a big product announcement and provides insight into how his innovative technology will improve audio quality and enhance sound experiences in various devices like earbuds, phones, and home theaters.

    00:00 Introduction to the Evolution of Sound Technology

    00:31 The Limitations of Current Sound Technology

    00:40 Introducing a New Silicon-Based Sound Technology

    00:49 Interview with Mike Householder from XMEMS

    01:12 Understanding the Old Tech: Coil and Magnet Speakers

    03:08 The Advantages of Solid State Components

    05:20 The Benefits of the New Tech for Manufacturers and Consumers

    07:58 The Unique Sound Signature of the New Tech

    16:32 The Path to Market Dominance and Upcoming Product Announcements

    20:47 The Future of Sound Technology: Beyond Personal Audio

    24:25 The Science Behind Ultrasonic Amplitude Modulation

    30:02 Conclusion and Final Thoughts

  • In this episode of TechFirst, host John Koetsier welcomes Emmy award-winning XR director Michaela Ternasky-Holland to delve into the world of immersive storytelling through technologies such as virtual reality (VR).

    Using her VR documentary project, On the Morning You Wake, as a case study, Michaela explains how the deeply immersive nature of VR can change the audience's perception of a global threat: nuclear weapons. She compares the engagement and impact of VR experiences to traditional 2D experiences, highlighting how the narrative and the audience's sense of agency play key roles in creating quality engagement. The discussion further explores the future of immersive storytelling, addressing its potential and challenges in the technology field.

    00:01 Introduction and Context

    00:34 Guest Introduction: Michaela Ternasky-Holland

    00:56 The Role of Technology in Storytelling

    01:13 Discussing the Project: On the Morning You Wake

    05:18 The Impact of VR on Audience Engagement

    05:40 Challenges and Solutions in VR Accessibility

    08:07 The Emotional Impact of VR Storytelling

    10:55 The Future of VR and Storytelling

    12:04 The Role of Research in VR Storytelling

    19:07 The Intersection of VR and Gaming

    21:30 The Ultimate Expression of Storytelling in VR

    25:10 Conclusion and Final Thoughts

  • It all started with a stolen car. In 1983, Chicago resident David Meilahn's car was stolen. He bought a new one, a Mercedes-Benz 280SL two-seater. But then he needed to replace his old radio-phone ... and the sales rep told him there was something new: a cellular phone.

    He was one of the first few to be selected, then won a race to place the very first commercial cell phone call, which ended up being from Soldier Field in Chicago, IL, to Alexander Graham Bell's granddaughter in Germany.

    This is his story, along with the story of Stuart Tararone, the AT&T engineer who helped build that system and still works for the company to this day.

  • Generative AI won't be building Falcon 9s or new space shuttles just yet. But can it help with all the work that goes into running an organization that builds the future?

    According to Kendall Clark, CEO of Stardog, yes.

    Generative AI that democratizes access to data, insight, and knowledge speeds up organizations, and that can help with launching spaceships or anything else. For NASA, a generative AI solution is apparently helping the team do in days what used to take weeks.

  • How will generative AI impact work? And why are smaller companies adopting generative AI more than enterprises?

    Generative AI is almost literally exploding: there are so many possibilities. But how is it changing work and business?

    Recently, GBK Collective, a consultancy founded by top academics at Wharton, studied 672 businesses in the US with annual sales over $50 million.

    In this TechFirst we're chatting with two of the authors to get a sneak peek into what they learned:

    - Dr. Stefano Puntoni, Professor of Marketing at The Wharton School and Co-Director of AI at Wharton

    - Jeremy Korst, former Microsoft and T-Mobile exec, now President of GBK Collective

  • Is equity, inclusion, and diversity in AI a solved problem?

    I’ve written a lot of stories lately about AI. AI is critical to our future of automation ... robots ... self-driving cars ... drones ... and … everything: smart homes, smart factories, safety & security, environmental protection and restoration.

    A few years ago we heard constantly how various AI models weren’t trained on diverse populations of people, and how that created inherent bias in who they recognized, who they thought should get a loan, or who might be dangerous.

    In other words, the biases in the people who create tech were manifesting in our tech.

    Is that solved? Is that over?

    To dive in, we’re joined by an award-winning couple: Stacey Wade and Dr. Dawn Wade. They run NIMBUS, a creative agency with clients like KFC and featuring celebs like Neon Deion Sanders.