Episodes

  • Why is the process of creating a new and highly beneficial pharmaceutical drug so difficult? What are the hurdles researchers face when trying to identify a new drug? Why are side effects so challenging to predict? Why are pharmaceutical drugs so expensive?

    In today's episode, we dive deep into drug discovery, and see how AI is already changing the process from the ground up, leading to what can only be described as a revolutionary acceleration in pharmaceutical precision and personalization.

    Timestamps

    0:00 Intro

    0:48 The Dream of Medicines

    1:53 The Incredible Challenges of Finding New Medicines

    8:36 DeepMind and AlphaFold2

    14:33 How AI Is Impacting Drug Discovery

    22:58 Outro



  • Something borderline magical happens around the age of 5…

    Squiggly black lines on a piece of paper or screen suddenly transform from being meaningless shapes into something incredibly powerful.

    I’m experiencing the magic as a parent right now. My 5-year-old is learning to read, and it’s one of the most beautiful things I’ve ever seen in my life.

    His mind is making connections he never knew were possible, and the sense of empowerment he feels is contagious, leaving me ready to jump for joy every time he reads something on his own.

    Equally impressive is how quickly it’s all come together. In the span of just a few weeks, I’ve watched him go from occasionally recognizing letters in his name to reading full sentences with confidence.

    Here’s the thing — it really hasn’t been that quick. This is just the latest stage in a process that’s been happening deep inside his head since 6 months of age.

    I want you to picture a 6-month-old baby, blowing raspberries, babbling, and experimenting with different sounds.

    Besides being frustratingly cute, these simple actions are laying the foundation for what’s to come - learning to read.

    You see, the brain is undergoing a rapid transformation. Neurons are firing and connecting at a blistering pace, forming pathways that will one day allow the child to make sense of the squiggles and lines we call letters.

    But it’s not just neuronal connections being made — the orofacial muscles are getting stronger, and the facial skeleton itself is making adjustments as teeth begin to come in, allowing for more sophisticated babbling. The child isn’t just making sounds anymore — they’re beginning to understand that these sounds have some meaning.

    They’re entering the world of the alphabetic principle.

    At its core, the alphabetic principle is the understanding that letters of the alphabet represent specific sounds in spoken language (languages like Chinese and Japanese use different but equally fascinating writing systems). It's the realization that these sounds, or phonemes, can be blended together to form words.

    A “phoneme” is the smallest unit of sound in a language, the kind of unit that can distinguish one word from another.

    For example, the letter "B" makes the "buh" sound, while the letter "A" can make the "ah" sound. By saying “buh”, “ah”, “t”… you’re pronouncing the phonemes of the word “bat”.

    When you hear a child saying “mama”, “dada”, “bruhbruh”, you’re hearing them associate phonemes with meaning: their parents and siblings, and other familiar people and things around them.

    But this is just the beginning of the alphabetic principle, and it only gets cooler from here.

    The Pre-alphabetic Stage

    The pre-alphabetic stage runs from birth to about 3 years old, although children are all different and there is some wiggle room in these age ranges.

    I want you to picture a toddler, around 3 years old, flipping through the pages of a colorful book. At this pre-alphabetic stage, they’re not making the connection between letters and sounds. Instead, they’re developing what’s known as phonological awareness - the ability to recognize and manipulate the sounds in spoken language.

    Specifically, they’re learning to identify syllables, clusters of syllables, and phonemes.

    One of the most fun methods to help a child identify syllables is to have them place their hand under their chin and then say a word. Every time the mandible drops, that’s a syllable.

    Go ahead and do it yourself right now. It really is kind of fun.

    “Mama” has two syllables, “papa” has two syllables, but “mom” and “dad” only have one syllable.

    As this is happening, the auditory cortex is processing the sounds the child hears, both from their own mouth and from the parent or teacher with them. This is crucial for distinguishing between different phonemes, and helps create powerful connections throughout the brain.

    At the same time, Broca's area in the frontal cortex is developing, which will eventually support the ability to produce speech sounds and engage in phonological processing.

    For a 3 year old, the orofacial muscles are also undergoing significant changes. The lips, tongue, and jaw are learning to work together to produce a wide range of sounds. This is absolutely essential for articulating the precise sounds needed for speech and, later, for reading.

    Partial Alphabetic Stage

    As the child enters preschool between the ages of 3 and 4, they move into the partial alphabetic stage.

    This is where they begin to recognize some letters and their corresponding sounds. Typically this lines up with the letters in their name. For example, if their name starts with an ‘S’, they’ll start to match the phoneme /s/ with the letter.

    At the neurological level, the visual word form area (VWFA) in the occipito-temporal region is becoming more specialized in recognizing letters. This is where the brain processes the visual features of letters and words, allowing the child to identify and distinguish between different characters.

    But the child’s knowledge is far from complete. They may confuse similar-looking letters like "b" and "d" or struggle to blend sounds together smoothly.

    The orofacial muscles are also continuing to develop during this stage, allowing them to produce more precise speech sounds, which is crucial for accurately articulating the sounds associated with each letter.

    Full Alphabetic Stage

    By the time they reach kindergarten, they’ve entered the full alphabetic stage.

    At this point, they’ve mastered most letter-sound relationships, save a few here and there.

    In the brain, the temporoparietal junction, or TPJ, and inferior frontal gyrus, or IFG, are now far more active during phonological processing and decoding.

    Decoding is where you look at a word, and break it down into its phonemes and syllables.

    The TPJ is involved in mapping sounds onto letters, matching them up in the child’s mind.

    The IFG helps to do this too, but it also processes the motor planning of speech. As the child learns to decode words by sounding them out letter by letter, the IFG helps to process and produce the necessary speech sounds.

    The orofacial muscles have also developed significantly by this stage, meaning the child can now produce all the sounds needed for speech and reading, including more complex consonant blends like "bl" or "st."

    Of course, this is assuming they have their front teeth. My son is currently missing his, which makes for some seriously adorable reading sounds.

    This full alphabetic stage is where my son is at right now, and the progress he’s made by blending phonemes is really cool to watch in real time.

    Something else you’ll see at this stage is something that doesn’t get nearly the attention it deserves — children beginning to “encode”.

    Encoding is the exact opposite of “decoding”: instead of breaking a word down into its phonemes, the child learns to spell words from their sounds.

    For example, if a child was looking at a picture of a dog, the ability to sound out the phonemes — /d/ /o/ /g/, and then turn that into the letters d - o - g, would be encoding.

    Looking at the word “dog” and then sounding out each individual phoneme would be decoding.
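
    If it helps to see the two directions side by side, here’s a toy sketch in Python. It’s purely my own illustration (real English letter-sound rules are far messier than a one-to-one table), but it shows that encoding and decoding are the same mapping run in opposite directions:

        # Hypothetical one-to-one letter-to-phoneme table, for illustration only
        PHONEMES = {"d": "/d/", "o": "/o/", "g": "/g/", "b": "/b/", "a": "/a/", "t": "/t/"}

        def decode(word):
            """Word -> sounds: what a child does when sounding out 'dog'."""
            return [PHONEMES[letter] for letter in word]

        def encode(phonemes):
            """Sounds -> letters: spelling a word from its phonemes."""
            reverse = {sound: letter for letter, sound in PHONEMES.items()}
            return "".join(reverse[p] for p in phonemes)

        print(decode("dog"))                  # ['/d/', '/o/', '/g/']
        print(encode(["/d/", "/o/", "/g/"]))  # 'dog'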

    Consolidated Alphabetic Stage

    As the child progresses through elementary school, somewhere around the age of 7 they enter what’s known as the consolidated alphabetic stage.

    At this point, reading becomes increasingly fluent and automatic.

    They move away from decoding individual letters to recognizing larger chunks of words, such as prefixes, suffixes, and syllables.

    In the brain, a structure known as the putamen, along with the cerebellum, is now more involved in automatizing reading skills.

    This builds on what are known as “sight words”.

    During the full alphabetic stage, the child begins seeing certain words a whole lot more than other words. For example, the word “this” is everywhere. You don’t want to waste time and cognitive effort trying to decode it, so instead the child learns to just recognize it, and keep reading without slowing down.

    Over time, more words naturally become “sight words”, and others are actually encouraged by teachers to become “sight words”.

    Then, you also have the cerebellum contributing to the coordination and timing of the various processes involved in reading, such as eye movements, phonological processing, and articulation.

    The child’s orofacial muscles are now fully developed, allowing them to produce all the sounds of their language with ease. This automaticity in speech production further supports fluent reading, as they can quickly and accurately articulate the words they’re reading.

    It’s amazing how the human brain adapts itself to reading. I truly believe that reading is one of the most important things anyone can do. It fosters creativity and empowers you to see the world through others’ eyes.

    Part of my goal with this Substack is to inspire others to read more, and if you’re anything like me, breaking down processes like this can be super helpful in motivating you to do that.

    Plus it’s just cool.




  • I want you to imagine that the year is 2030, and you are a nurse entering the room of a patient who’s recovering from abdominal surgery…

    As you step through the door, the lighting is soft and adjustable, automatically adapting to the time of day and the patient's sleep cycle.

    You approach the bedside and glance at the large, wall-mounted display. The intelligent monitoring system greets you with a summary of the patient's condition. It highlights that the patient's vital signs have been stable throughout the night, with no significant abnormalities detected.

    The AI-powered system has been continuously analyzing the patient's heart rhythm, respiration, blood pressure, and oxygen saturation, providing real-time insights and alerts if any concerning patterns emerge.

    You remember when you had to manually check and record each vital sign every few hours. Now, the intelligent monitoring system does this seamlessly, allowing you to focus more on patient care and less on documentation.

    Next to the bed, you spot the patient's IV line, connected to an advanced infusion pump. The pump is controlled by an intelligent agent that precisely regulates the flow rate and dosage of medications, ensuring optimal pain management and preventing any potential medication errors. In the past, you had to double-check each medication and rate, but now you can trust the system to administer the correct dosage at the right time.

    As you check the patient's surgical incision, you notice a small, non-invasive device attached to the skin nearby. The device uses advanced sensors to detect any signs of infection, such as changes in temperature or skin color, and alerts the medical team if there's a potential complication. This early warning system has dramatically reduced the incidence of post-operative infections in your hospital.

    You gently reposition the patient to prevent pressure ulcers, and as you do so, the smart bed automatically adjusts to distribute the pressure evenly. The bed also monitors the patient's movement and can alert you if the patient is at risk of falling.

    Before leaving the room, you take a moment to review the patient's electronic health record on the bedside tablet. The intelligent system has already updated the record with the latest vital signs, medications administered, and your observations. It also suggests potential interventions based on the patient's condition and evidence-based guidelines.

    With a satisfied smile, you quietly exit the room, knowing that your patient is in good hands, monitored and cared for by a seamless collaboration of human expertise and artificial intelligence.

    Healthcare, along with society at large, is changing. At the center of this change are advancements being made in artificial intelligence and specific applications of AI known as intelligent agents.

    These days we’re all familiar with artificial intelligence, but few seem to properly understand the potential that comes with its advancement. Even if research were to stop today, the progress that’s already been made is enough to revolutionize everything around you, and that includes everything inside of healthcare.

    Our goal today is to understand what an intelligent agent is, the different types of intelligent agents, the varying degrees of autonomy, and the incredible potential they have to reshape healthcare from the ground up.

    If you stick with me until the end, we’ll do some responsible speculation, as I like to call it, and discuss what I personally believe to be the inevitable outcome of intelligent agent development — hive intelligence. We’ll view it through the lens of healthcare, but the implications of its existence will reach far beyond the realm of medicine.

    With that said, let’s start by asking ourselves a simple question: what is an intelligent agent?

    What Is Agency?

    Agency refers to the capacity of an entity to act in the world.

    It’s the ability to make choices, take actions, and shape one's own life and environment.

    An agent is:

    * Autonomous: capable of making decisions and acting on them without external influences

    * Intentional: acts with purpose

    * Rational: capable of reasoning

    * Morally Responsible: because agents are rational and capable of making choices, they’re often held morally responsible for the consequences of their actions

    You are an agent. I am an agent. Healthcare workers of all types are agents.

    So what do we mean when we say “intelligent agent”?

    What Are Intelligent Agents?

    An intelligent agent is a software program or computer system designed to operate independently within an environment to accomplish predefined goals without requiring constant human guidance or intervention.

    Put simply, they’re software programs with a degree of agency, designed to achieve a goal with as little human intervention as possible.

    They:

    * Exhibit Goal-Directed Behavior: intelligent agents are designed to pursue specific objectives

    * Make Decisions: they select actions based on their current state, available information, and their predefined goals

    * Interact With Their Environment: they need to do so in order to gather information and execute actions

    * Adapt and Learn: advanced intelligent agents can learn from their experiences and adapt their strategies to improve their performance over time

    The “degree of agency” is predefined for an intelligent agent, but not for you. Philosophers would likely pick that statement apart, but for our conversation today it’s a perfectly reasonable way to frame things.

    Yet there are obviously similarities between us and intelligent agents, and that’s intentional. Intelligent agents are meant to be extensions of us as people, and occasionally literal replacements. They navigate a digital world constructed by humans, so it’s only fitting that they’d resemble us at times.

    To achieve human goals without the help of humans, they need to have specific properties, otherwise they couldn’t do anything meaningful in our world.

    These essential properties are:

    * Autonomy: the ability to operate independently and make decisions without direct human control

    * Reactivity: the ability to respond in a timely manner to changes in the environment

    * Proactivity: the ability to take initiative and exhibit goal-directed behavior

    * Social Ability: the ability to interact and communicate with other agents or humans
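
    To make those properties concrete, here’s a minimal sense-think-act loop in Python. Everything in it (the class name, the fake sensor, the thresholds) is my own toy example rather than a real medical system, but each property has a home in the loop:

        import random

        class VitalsAgent:
            """Toy monitor: watches a simulated heart-rate feed and alerts when needed."""

            def __init__(self, low=50, high=110):
                self.low, self.high = low, high  # its predefined goal: keep readings in range
                self.history = []

            def perceive(self):
                # Reactivity: sample the environment (here, a fake sensor)
                return random.gauss(75, 20)

            def decide(self, reading):
                # Autonomy and proactivity: pursue the goal without being asked
                self.history.append(reading)
                if reading < self.low or reading > self.high:
                    return f"ALERT: heart rate {reading:.0f} out of range"
                return None

            def act(self, message):
                # Social ability: communicate with humans or other agents
                if message:
                    print(message)

        agent = VitalsAgent()
        for _ in range(10):  # the sense-think-act loop
            agent.act(agent.decide(agent.perceive()))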

    Most of those are non-controversial, but autonomy tends to worry people a bit.

    How much agency do intelligent agents have?

    It’s an important question that comes with multiple answers…

    What Is Autonomy?

    When the goal is to offload tasks from human beings to a machine or software program, autonomy is essential.

    But there are varying degrees of autonomy in AI systems:

    Low Autonomy

    * These systems operate independently in limited, well-defined scenarios but require human intervention for most tasks.

    * Robotic Vacuum Cleaners: these devices can navigate and clean floors independently but require humans to empty the dustbin, maintain the device, and set up cleaning schedules

    * Automated Teller Machines (ATMs): these can perform basic transactions, such as cash withdrawals and balance inquiries, but require human intervention for tasks like refilling cash, resolving errors, or handling complex customer issues

    Moderate Autonomy

    * These systems can perform a range of tasks without human control but may still require human input for certain decisions or situations.

    * Self-Driving Cars In Controlled Environments: autonomous vehicles that operate in limited, well-mapped areas, such as university campuses or industrial parks, can navigate and make decisions independently but may require human intervention in unexpected situations or when faced with new obstacles

    * Chatbots for Customer Support: chatbots can handle a wide range of customer questions and provide solutions based on predefined scripts or knowledge bases. However, they may need human assistance for complex or unique issues that fall outside their programmed expertise

    High Autonomy

    * These systems can operate independently in complex, dynamic environments and may even be capable of setting their own sub-goals in pursuit of an overarching objective.

    * Advanced Autonomous Vehicles: self-driving cars designed to operate in complex, real-world environments, such as city streets or highways, can make decisions independently, adapt to changing traffic conditions, and handle a wide range of scenarios without human intervention. But the technology is far from perfect, and humans need to be paying attention and ready to take over at a moment’s notice

    * Autonomous Trading Systems: trading algorithms that can analyze vast amounts of financial data, make investment decisions, and execute trades in real time without human input. These systems can adapt to changing market conditions and continuously optimize their strategies to maximize returns. Still, humans need to keep a watchful eye, because these systems are far from perfect

    Full Autonomy

    * The highest level of autonomy, where a system can perform all tasks and make all decisions independently, without any human intervention or supervision.

    * A fully autonomous system would be capable of setting its own goals, learning from its experiences, adapting to new situations, and continuously improving its performance.

    * We haven’t achieved this level of autonomy yet, but it’s something researchers are actively pursuing around the world.

    * Examples of hypothetical fully autonomous systems include:

    * Artificial general intelligence (AGI) that can match or surpass human intelligence across a wide range of domains, from scientific research to creative tasks

    * Fully autonomous driving, where the vehicle manages every aspect of driving with precision and humans don’t need to pay attention at all, trusting the vehicle to respond in the best possible way to whatever conditions arise

    * Fully autonomous space exploration vehicles that can navigate, collect data, and make discoveries on distant planets or moons without direct human control
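
    One way to picture this spectrum is as a dial on how confident an agent must be before it acts without a human. The little Python sketch below is my own simplification (real systems weigh far more than a single confidence score), but it shows how moving that dial shifts a system between the levels above:

        def triage(confidence, autonomy_threshold):
            """Decide who handles a case under a given autonomy level."""
            if confidence >= autonomy_threshold:
                return "agent handles it"
            return "escalate to a human"

        cases = [0.95, 0.70, 0.40]  # the agent's confidence in three hypothetical cases
        for level, threshold in [("low", 0.99), ("moderate", 0.80), ("high", 0.50)]:
            print(level, [triage(c, threshold) for c in cases])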

    Intelligent agents don’t need to be highly autonomous in all situations. For many tasks, low to moderate autonomy is all that’s needed.

    For many other tasks, though, high to full autonomy is undeniably the desired goal.

    Driving is inherently dangerous for the person in charge of the vehicle and those around them. A fully autonomous vehicle would always respond better than a human, delivering the best possible outcome for all parties involved.

    In medicine, the overall objective is to prevent an issue from developing in the first place, and to ensure the best possible patient outcome if it shows up despite preventative efforts.

    If intelligent agents can help us achieve this goal, the question isn’t whether we utilize them, but how much autonomy we give them, assuming the technology exists to do so.

    Types of Intelligent Agents

    Intelligent agents have been part of medicine for decades.

    After all, it’s just software and automation, not bedside robots and artificial superintelligence.

    If you took a quick glance around a patient’s room, you’d see intelligent agents in the monitoring system analyzing raw data from blood pressure cuffs and pulse oximeter probes, and in heart rate detection, which analyzes ECG signals to determine the heart rate and detect abnormalities.

    Intelligent agents assist care coordination systems, predictive modeling systems, and patient triage systems, which weigh the severity of a patient’s condition, the likelihood of successful treatment, the potential years of life saved, and the associated costs to make decisions that maximize the overall utility or benefit to the patient population.

    Where Improvement Is Needed and How to Do It

    Intelligent agents are everywhere in healthcare, so the questions we’re really asking are, “where are improvements needed, and how realistic is it to expect them to come by the end of the decade?”

    It would be incredible to have an AGI working inside of healthcare systems, acting both digitally and through embodied robots to ensure the best possible patient outcomes in everything from emergency medicine and surgery all the way to hospice care.

    But that’s not going to happen in 5 years, even if companies such as OpenAI achieve AGI by the end of the decade.

    Regulatory hurdles exist, as does the not-so-small issue of building patient trust while simultaneously eliminating hundreds of thousands to millions of human jobs.

    So where are improvements needed and how can we make that happen?

    * Better Patient Monitoring and Early Warning Systems

    * We can develop more advanced sensors that can continuously monitor a patient's vital signs and other important health measurements

    * We can make intelligent agents better at analyzing patient data in real-time and spotting small changes that might mean the patient is getting worse or having complications

    * We can use machine learning to personalize the monitoring system for each patient based on their unique characteristics and risk factors

    * Simpler Paperwork and Administrative Tasks

    * We can make intelligent agents better at understanding and accurately typing up clinical notes and patient conversations using natural language processing and voice recognition

    * We can create smarter algorithms to automate tasks like coding, billing, and processing insurance claims, reducing the administrative workload for healthcare workers

    * We can connect more robust intelligent agents with electronic health record systems to make it easier to enter, find, and share data across different healthcare settings

    * Personalized Treatment Planning and Decision Support

    * We can improve the knowledge and reasoning abilities of intelligent agents so they can provide more accurate and specific recommendations for diagnosing, treating, and planning patient care

    * We can use advanced machine learning techniques, like deep learning and reinforcement learning, to help intelligent agents learn from large amounts of patient data and clinical experiences

    * We can develop AI systems that can clearly explain the reasons behind their recommendations, building trust and acceptance among healthcare workers

    * Smart Resource Allocation and Workflow Optimization

    * We can create intelligent agents that can analyze healthcare operations data, such as patient flow, resource use, and staffing patterns, to identify problems and inefficiencies

    * We can develop optimization algorithms that can assign resources, like hospital beds, operating rooms, and healthcare staff, based on real-time demand and capacity limits

    * We can implement intelligent scheduling and task management systems that can adapt to changing priorities and workload, ensuring healthcare workers' time and skills are used in the best way possible

    * Proactive Patient Engagement and Remote Monitoring

    * We can create intelligent agents that can provide personalized patient education, self-management support, and virtual coaching based on each patient's needs and preferences

    * We can integrate remote monitoring technologies, like wearable devices and home-based sensors, with intelligent agents to allow continuous tracking of a patient's health status and adherence to treatment plans

    * We can design intelligent chatbots and virtual assistants that can sort through patient concerns, provide basic health information, and guide patients to appropriate care resources, reducing the workload on healthcare workers

    Everything just mentioned is possible through technology being developed today by the hundreds of startups and large tech companies involved in AI research.

    Realistically achieving them by 2030 is a different story altogether, and will require several significant pieces to be in place, such as:

    * Making sure all healthcare data can be easily and safely shared and worked with across different systems and settings

    * Healthcare organizations, tech companies, and schools working together to make intelligent agents better at helping in healthcare

    * Having clear rules and guidelines to make sure intelligent agents are used safely, responsibly, and openly in healthcare

    * Giving healthcare workers the training and knowledge they need to use intelligent agents well in their jobs

    * Starting small test projects and slowly putting more capable intelligent agents into use to show how they can help in real healthcare situations and to build trust with healthcare workers and patients

    There’s an important caveat to this though — this is what’s possible through improvements being made to low and moderate levels of autonomous intelligent agents.

    If advancements are made with highly autonomous intelligent agents, or how the agents communicate and respond to one another, the potential is staggering…

    Hive Intelligence In Medicine

    An agent swarm refers to a collective of intelligent agents that work together to achieve a common goal or solve complex problems. The concept is inspired by the behavior of swarms in nature, such as ants, bees, or birds, where simple individual actions lead to sophisticated group behavior.

    If we look at this in the context of artificial intelligence, an agent swarm consists of multiple autonomous agents that can interact with each other and their environment. Each agent has its own set of rules, capabilities, and objectives, but they can also communicate and collaborate with other agents in the swarm.

    The agents don’t necessarily need to have high autonomy either; instead, they only need to be able to communicate and react appropriately to one another.

    Agent swarms display:

    * Decentralized Decision-Making: there is no central control or hierarchy; instead, each agent makes decisions based on local information and interactions with other agents

    * Emergent Behavior: the collective behavior of the swarm emerges from the interactions and decisions of individual agents, often resulting in complex patterns and problem-solving capabilities that surpass those of any single agent

    * Adaptability and Resilience: agent swarms can adapt to changing environments and challenges by learning from experience and adjusting their behavior accordingly. They are also resilient to individual agent failures, as the swarm can continue functioning even if some agents malfunction or are removed

    * Scalability: agent swarms can scale efficiently to handle larger problems or environments by adding more agents to the swarm, without requiring significant changes to the individual agent design or rules
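
    Here’s a tiny Python sketch of that flavor of decentralization. It’s my own toy gossip model, not a production swarm, but it shows simple local rules producing a global result: each agent repeatedly averages its estimate with a few random peers, and the whole swarm converges on a shared answer with no central controller:

        import random

        def swarm_consensus(n_agents=20, rounds=30, peers=3):
            # Each agent starts with its own noisy estimate of some shared quantity
            estimates = [random.uniform(0, 100) for _ in range(n_agents)]
            for _ in range(rounds):
                updated = []
                for i in range(n_agents):
                    # Local information only: ask a few random peers, no central controller
                    sample = random.sample(range(n_agents), peers)
                    neighborhood = [estimates[j] for j in sample] + [estimates[i]]
                    updated.append(sum(neighborhood) / len(neighborhood))
                estimates = updated
            return estimates

        final = swarm_consensus()
        print(f"spread across the swarm after gossiping: {max(final) - min(final):.2f}")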

    Agent swarms are exciting to think about, but it’s also important to stay grounded and understand where and when they could be useful, and where and when they’re a waste of resources.

    They’re extraordinarily complicated to develop from an engineering perspective, needing to be highly coordinated, communicative, scalable, adaptable, robust, resilient, and trustworthy.

    There are plenty of real world situations where an agent swarm is overkill, and a much simpler system would work just as well, if not better.

    With that said, the healthcare ecosystem is the perfect fit for such an intelligent agent system.

    Personalized and Precision Medicine

    Think of everything that impacts human health outcomes, such as diet, sleep, exercise, genetics, the microbiome, age, race, sex, and so on.

    With smart tech powered by intelligent agents that are part of a hive network, raw data on heart rate, blood oxygen levels, weight, body fat percentage, and more can become available to healthcare workers inside and outside of the hospital system when needed, such as:

    * Primary Care Physicians

    * Dietitians

    * Physical Therapists

    * Dermatologists

    * Optometrists/Ophthalmologists

    * First Responders

    * Nurses

    * Medical Assistants

    * Medical Doctors

    * Doctors of Osteopathic Medicine

    * Radiologists

    * Cardiologists

    * Neurologists

    * Gastroenterologists

    Electronic health records could be constantly updated, leaving healthcare workers with the information they need at any given time.

    Patient privacy could be managed by this hive network, ensuring medical records and data are secure and only viewable when necessary.

    As robust neural networks continue to discover information within medical data that’s invisible to human experts, difficult-to-connect links between events, foods, medications, and family history can be made in real time, providing healthcare providers with a likely diagnosis and an up-to-date treatment plan based on the most recent published medical literature.

    The Future of Medicine

    To help drive this home, I want you to picture this…

    It's a crisp autumn morning in 2035, and Sarah, a 45-year-old woman, is going about her daily routine. As she prepares breakfast, her smart ring, a sleek and unobtrusive device, continuously monitors her vital signs, analyzing her heart rate, blood pressure, and oxygen levels.

    Suddenly, Sarah experiences a sharp pain in her chest and feels short of breath. Unbeknownst to her, she's experiencing a silent heart attack. But her smart ring, powered by advanced intelligent agents, immediately detects the abnormal physiological patterns and sends an alert to the local emergency services.

    Within seconds, an ambulance is dispatched, guided by an intelligent navigation system that analyzes real-time traffic data and patient information to determine the fastest and safest route. The emergency responders, equipped with augmented reality glasses, receive a stream of vital information about Sarah's condition, medical history, and location.

    As the ambulance races towards Sarah's home, the intelligent agents in the healthcare hive mind are already hard at work. The system analyzes Sarah's electronic health records, including her recent meals logged through her smart fridge and the chiropractic adjustments she received last week for back pain. The agents quickly identify potential risk factors and generate a preliminary diagnosis.

    When the paramedics arrive at Sarah's doorstep, they have a wealth of information at their fingertips. The intelligent agents guide them through stabilizing Sarah's condition, providing step-by-step instructions and monitoring her response to treatment in real-time.

    Sarah is swiftly transported to the nearest hospital, where a team of doctors and nurses, assisted by the healthcare hive mind, is already prepared for her arrival. The intelligent agents have analyzed Sarah's data and generated a personalized treatment plan, taking into account her unique medical profile and the latest evidence-based guidelines.

    In the operating room, surgeons use advanced robotic systems that are seamlessly integrated with the intelligent agent swarm. The agents provide real-time guidance and precision control, enabling the surgeons to perform a minimally invasive procedure to restore blood flow to Sarah's heart.

    Throughout the surgery, the intelligent agents continuously monitor Sarah's vital signs, adjusting the anesthesia and medication dosages to optimize her safety and comfort. The system also predicts potential complications and suggests preventive measures, ensuring a smooth and successful operation.

    As Sarah recovers in the hospital, the healthcare hive mind continues to work tirelessly behind the scenes. Intelligent agents monitor her progress, analyze her response to treatment, and adapt her care plan accordingly. They also communicate with Sarah's family, providing regular updates and answering their questions through natural language interfaces, similar to ChatGPT.

    When Sarah is ready to be discharged, the intelligent agents create a personalized rehabilitation program, taking into account her specific needs and preferences. The system connects Sarah with a network of support services, including virtual coaching, remote monitoring, and peer support groups, to help her manage her recovery at home.

    Back in her daily life, Sarah's smart ring and other devices continue to work in harmony with the healthcare hive mind. The intelligent agents analyze her activity levels, sleep patterns, and dietary habits, providing personalized recommendations to optimize her health and prevent future incidents.

    Everything, Everywhere, All At Once

    This would be happening with every citizen, everywhere throughout the United States healthcare system.

    Millions of intelligent agents, working together, analyzing large amounts of data from hundreds of millions of citizens.

    Through robust intelligent agent analysis, early detection of Alzheimer’s, cancers, and cardiovascular disease would skyrocket.

    Healthcare workers would no longer be understaffed and overworked, now able to focus solely on patient care when the situation best calls for a human touch.

    This is the future we all have at our fingertips, although much needs to be done before it can happen.

    Patient privacy needs to be guaranteed, medical liability needs to be clearly defined, and the energy demands need to be met without destroying the planet.

    Still, even if a small fraction of this future actually happens, we’re all about to experience a complete change in healthcare that will feel like something out of a science fiction novel.

    Let’s work together to make sure it isn’t a horror novel.



  • It’s different this time, it really is.

    Most healthcare workers are either too busy to pay attention to what’s happening in AI today, or are stuck in the past and have only experienced low-quality AI.

    In today’s episode, we’re going to discuss why this is a mistake.

    Healthcare workers aren’t immune to the impact of AI, and they need to start preparing for significant changes to their job description in the next 5 years.

    ____

    Timestamps

    0:00 Why You Should Care

    8:17 FDA Hurdles

    9:39 History of Neural Networks: 1940s - Present

    19:12 AI & Medicine: 1940s - Present

    28:32 How LLMs Can Impact Frontline Medical Workers

    35:42 The Incredible Potential of Autonomous AI Agents In Medicine

    ____

    References

    Dermatology

    https://www.nature.com/articles/s41598-021-96707-8

    https://news.mit.edu/2021/artificial-intelligence-tool-can-help-detect-melanoma-0402

    https://www.jidonline.org/article/S0022-202X(20)31201-X/fulltext

    https://www.frontiersin.org/articles/10.3389/fmed.2020.00100/full

    https://www.mdpi.com/1660-4601/18/24/13409

    https://journals.lww.com/idoj/fulltext/2023/14060/artificial_intelligence_in_diagnostic_dermatology_.4.aspx

    https://arxiv.org/abs/2311.01009

    https://www.mdpi.com/2079-9292/12/6/1342

    Ophthalmology

    https://onlinelibrary.wiley.com/doi/10.1111/ceo.13381

    https://journals.lww.com/ijo/Fulltext/2020/68070/Insights_into_the_growing_popularity_of_artificial.22.aspx

    https://bjo.bmj.com/content/105/2/158

    https://www.dovepress.com/artificial-intelligence-in-ophthalmic-surgery-current-applications-and-peer-reviewed-fulltext-article-OPTH

    https://www.frontiersin.org/journals/digital-health/articles/10.3389/fdgth.2022.889445/full

    https://ieeexplore.ieee.org/document/9716741

    https://ieeexplore.ieee.org/document/10340746

    https://www.cureus.com/articles/164004-artificial-intelligence-in-ophthalmology-a-comparative-analysis-of-gpt-35-gpt-4-and-human-expertise-in-answering-statpearls-questions#!/

    https://ieeexplore.ieee.org/document/9674065

    Pathology

    https://www.frontiersin.org/articles/10.3389/fmed.2019.00185/full

    https://gut.bmj.com/content/70/6/1183

    https://www.mdpi.com/2075-4418/13/19/3115

    https://www.mdpi.com/2072-6694/11/11/1673

    https://www.nature.com/articles/s41571-019-0252-y

    https://pathsocjournals.onlinelibrary.wiley.com/doi/10.1002/path.6168

    https://jcp.bmj.com/content/74/2/73

    https://ieeexplore.ieee.org/document/9745795

    Cardiology

    https://www.sciencedirect.com/science/article/pii/S0735109719302360?via%3Dihub

    https://www.ahajournals.org/doi/10.1161/JAHA.119.012788

    https://www.emjreviews.com/interventional-cardiology/congress-review/26886-2-j090122/

    https://pubs.rsna.org/doi/10.1148/ryct.2021200512

    https://www.nature.com/articles/s41569-018-0104-y

    https://www.sciencedirect.com/science/article/pii/S0735109719302360?via%3Dihub

    https://arxiv.org/abs/2310.12630

    Neurology

    https://link.springer.com/article/10.1007/s00415-019-09518-3

    https://linkinghub.elsevier.com/retrieve/pii/S0377123721001490

    https://www.degruyter.com/document/doi/10.1515/revneuro-2021-0101/html

    https://pubs.rsna.org/doi/10.1148/radiol.2018181928

    https://jkns.or.kr/journal/view.php?doi=10.3340/jkns.2022.0130

    https://journals.salviapub.com/index.php/gmj/article/view/3158

    https://www.eurekaselect.com/article/136533



  • Radiologists are in short supply. Backlogs are continuing to build. With an aging population and burnout from COVID making things worse, something needs to change or outcomes for patients will only get worse.

    In today’s episode, we discuss how artificial intelligence is assisting radiologists and their midlevels with workflow, and soon with diagnostics as well.

    With hundreds of startups around the world using synthetic data generation to train AI models with a level of precision never before seen, the natural question to ask is, how will this impact radiologists within the next decade?

    ____

    Timestamps

    0:00 Intro

    1:30 The Problems Facing Radiologists

    14:37 The Hurdle of the FDA

    18:05 Improving Radiology Workflow

    21:15 The Data Availability Struggle

    22:44 De-identifying Data

    23:26 Synthetic Data

    29:48 MONAI

    30:37 The Beginning of the Future

    38:24 Ophthalmology, Cardiology, Neurology

    41:45 Last Thoughts

    ____

    References

    World Population

    * https://www.worldometers.info/world-population/us-population/

    Healthcare Provider Statistics

    * https://www.ama-assn.org/press-center/press-releases/ama-president-sounds-alarm-national-physician-shortage

    * https://www.statista.com/statistics/186269/total-active-physicians-in-the-us/

    * https://www.bls.gov/oes/current/oes291071.htm

    * https://www.aanp.org/about/all-about-nps/np-fact-sheet#:~:text=There%20are%20more%20than%20385%2C000,NPs)%20licensed%20in%20the%20U.S.&text=More%20than%2039%2C000%20new%20NPs,academic%20programs%20in%202021%2D2022.

    * https://www.bls.gov/oes/current/oes291229.htm

    * https://www.statista.com/statistics/209424/us-number-of-active-physicians-by-specialty-area/

    * https://www.zippia.com/radiology-assistant-jobs/demographics/

    * https://www.google.com/search?q=can+an+rpa+interpret+radiology+exams&oq=can+an+rpa+interpret+radiology+exams&gs_lcrp=EgZjaHJvbWUyBggAEEUYOTIHCAEQIRigATIHCAIQIRigAdIBCDk0NzVqMGo0qAIAsAIA&sourceid=chrome&ie=UTF-8

    * https://radiologybusiness.com/sponsored/1073/mmp/topics/healthcare-management/business-intelligence/radiology-assistants-users

    Radiology Backlog

    * https://radiologybusiness.com/topics/healthcare-management/healthcare-economics/large-volume-radiologist-reporting-backlogs-urgent-global-issue#:~:text=The%20problem%20persisted%20at%20the,99%25)%20for%20chest%20radiographs.

    * https://www.rsna.org/news/2022/may/global-radiologist-shortage

    * https://www.aidence.com/articles/workload-in-radiology/



  • The next 5 years are going to be incredible. Still, it’s important to stay grounded and not feed into the hype.

    Chances are good that you’ve heard people question whether or not ChatGPT will replace doctors. The easy answer here is — no.

    But other models from Google can and will.

    In this episode, Justin discusses:

    * The shortage of Health Care Providers in the United States

    * Health Care Provider dependency on the medical version of Wikipedia — UpToDate

    * Google’s AI models: Med-PaLM 2, MedLM, and AMIE

    * The use of AI as a collaborator

    * The most likely jobs within healthcare to be impacted negatively in the next 5 years

    * How patients will interact with AI models, leading to better healthcare outcomes

    ____

    Episode Timeline

    0:00 Intro

    1:14 The Topics Discussed Today

    4:15 Use This Episode Timeline to Scan Ahead to What Applies to You

    4:42 The Shortage of Doctors In the US

    8:22 The Purpose of Physician Associates and Nurse Practitioners

    12:39 The Stresses of Being a Healthcare Provider

    14:04 Wikipedia for Doctors

    18:17 Brief Intro to LLMs

    22:29 Meet the Mind-Blowing Med-PaLM 2

    27:46 Meet the New UpToDate — MedLM

    30:17 Meet Your New Doctor — AMIE

    33:39 The Problem for PAs and NPs

    39:47 The Future of Medicine In Your Pocket

    ____

    References

    Provider Statistics

    * https://www.ama-assn.org/press-center/press-releases/ama-president-sounds-alarm-national-physician-shortage

    * https://www.statista.com/statistics/186269/total-active-physicians-in-the-us/

    * https://www.bls.gov/oes/current/oes291071.htm

    * https://www.aanp.org/about/all-about-nps/np-fact-sheet#:~:text=There%20are%20more%20than%20385%2C000,NPs)%20licensed%20in%20the%20U.S.&text=More%20than%2039%2C000%20new%20NPs,academic%20programs%20in%202021%2D2022.

    * https://www.bls.gov/oes/current/oes291229.htm

    World Population

    * https://www.worldometers.info/world-population/us-population/

    Schooling References

    * https://extension.harvard.edu/blog/how-to-become-a-physician-assistant/#:~:text=To%20become%20a%20PA%2C%20you,for%20two%20to%20three%20years.

    * https://uthscsa.edu/medicine/education/ume/outreach/become-doctor

    * https://www.usmle.org/

    * https://en.wikipedia.org/wiki/Physician_assistant

    * https://en.wikipedia.org/wiki/Nurse_practitioner

    * https://www.aanp.org/news-feed/explore-the-variety-of-career-paths-for-nurse-practitioners

    * https://www.usmle.org/scores-transcripts/examination-results-and-scoring#:~:text=*%20USMLE%20Step%201%20score%20reporting,of%20a%20three%2Ddigit%20score.

    Artificial Intelligence Models

    * https://sites.research.google/med-palm/

    * https://blog.research.google/2024/01/amie-research-ai-system-for-diagnostic_12.html

    * https://cloud.google.com/vertex-ai/generative-ai/docs/medlm/overview#medlm-versus-palm

    * https://www.nature.com/articles/s41586-023-06291-2






  • Honestly, this is more of a rant than an actual podcast episode, but hopefully it’s valuable to you.

    This is a crucible moment — generative AI is coming for creatives everywhere.

    Assuming our species survives, our only choice is to navigate the new landscape or give up and hope AI will provide us with meaning.

    Ask yourself: in a world where everyone can generate anything they want, what can someone do to stand out from the pack?

    The answer: tell incredible stories.



  • This is a free preview of a paid episode. To hear more, visit justincottle.substack.com

    In this special episode, found only on Substack, we dissect the book “Killing Floor” by Lee Child.

    We discuss:

    * How thriller novels can benefit video content creators

    * The requirements of the thriller genre

    * What makes Lee Child’s writing style different among thriller authors

    * Excerpts of “Killing Floor”

    * How Lee Child’s writing style can benefit script writing…

  • After working with human body donors over the last 10 years, I’ve learned a few life lessons.

    #1 - Confront Your Mortality Often

    #2 - Take Your Time

    #3 - Every Body Is Different






  • Thinking In Systems: A Primer by Donella H. Meadows

    Systems Thinking for Business and Management: Principles and Practice by Professor Umit S Bititci and Dr Agnessa Spanellis

    Systems Thinking for Beginners: Learn the essential systems thinking skills to navigate an increasingly complex world for effective problem solving and decision making by Henry M. Burton




