Episodes

  • In this episode of Intel on AI host Amir Khosrowshahi talks with Jeff Lichtman about the evolution of technology and mammalian brains.

    Jeff Lichtman is the Jeremy R. Knowles Professor of Molecular and Cellular Biology at Harvard. He received an AB from Bowdoin and an M.D. and Ph.D. from Washington University, where he worked for thirty years before moving to Cambridge. He is now a member of Harvard’s Center for Brain Science and director of the Lichtman Lab, which focuses on connectomics—mapping neural connections and understanding their development.

    In the podcast episode Jeff talks about why researching the physical structure of the brain is so important to advancing science. He goes into detail about Brainbow—a method he and Joshua Sanes developed to illuminate and trace the “wires” (axons and dendrites) connecting neurons to each other. Amir and Jeff discuss how the academic rivalry between Santiago Ramón y Cajal and Camillo Golgi drove pioneering neuroscience research. Jeff describes his remarkable research taking nanometer slices of brain tissue, creating high-resolution images, and then digitally reconstructing the cells and synapses to get a more complete picture of the brain. The episode closes with Jeff and Amir discussing theories about how the human brain learns and what technologists might discover from the grand challenge of mapping the entire nervous system.

    Academic research discussed in the podcast episode:

    Principles of Neural Development
    The reorganization of synaptic connexions in the rat submandibular ganglion during post-natal development
    Development of the neuromuscular junction: Genetic analysis in mice
    A technicolour approach to the connectome
    The big data challenges of connectomics
    Imaging Intracellular Fluorescent Proteins at Nanometer Resolution
    Stimulated emission depletion (STED) nanoscopy of a fluorescent protein-labeled organelle inside a living cell
    High-resolution, high-throughput imaging with a multibeam scanning electron microscope
    Saturated Reconstruction of a Volume of Neocortex
    A connectomic study of a petascale fragment of human cerebral cortex
    A Canonical Microcircuit for Neocortex
  • In this episode of Intel on AI host Amir Khosrowshahi and co-host Mariano Phielipp talk with Chelsea Finn about machine learning research focused on giving robots the capability to develop intelligent behavior.

    Chelsea is an Assistant Professor in Computer Science and Electrical Engineering at Stanford University, where her IRIS (Intelligence through Robotic Interaction at Scale) lab is closely associated with the Stanford Artificial Intelligence Laboratory (SAIL). She received her Bachelor's degree in Electrical Engineering and Computer Science at MIT and her PhD in Computer Science at UC Berkeley, where she worked with Pieter Abbeel and Sergey Levine.

    In the podcast episode Chelsea explains the difference between supervised learning and reinforcement learning. She goes into detail about new kinds of reinforcement learning algorithms that help robots learn more autonomously. Chelsea talks extensively about meta-learning—the concept of helping robots learn to learn—and her efforts to advance model-agnostic meta-learning (MAML). The episode closes with Chelsea and Mariano discussing the intersection of natural language processing and reinforcement learning. The three also talk about the future of robotics and artificial intelligence, including the complexity of setting up robotic reward functions for seemingly simple tasks.
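
    The inner/outer loop behind MAML can be sketched in a few lines. Below is a toy, first-order version on one-dimensional linear-regression tasks; the task family, learning rates, and single-weight model are illustrative choices for this sketch, not Chelsea's actual implementation.

```python
import numpy as np

def loss_grad(w, x, y):
    # Gradient of mean squared error 0.5*(w*x - y)^2 w.r.t. scalar weight w.
    return np.mean((w * x - y) * x)

def maml_step(w, tasks, inner_lr=0.1, outer_lr=0.1):
    """One meta-update: adapt per task with one gradient step (inner loop),
    then move the shared initialization using gradients evaluated at the
    adapted weights (a first-order approximation of the MAML outer loop)."""
    meta_grad = 0.0
    for x, y in tasks:
        w_adapted = w - inner_lr * loss_grad(w, x, y)   # inner-loop adaptation
        meta_grad += loss_grad(w_adapted, x, y)         # outer-loop signal
    return w - outer_lr * meta_grad / len(tasks)

rng = np.random.default_rng(0)
w = 0.0
for _ in range(500):
    # Sample a batch of tasks y = a*x with slopes drawn around 1.0.
    tasks = []
    for _ in range(4):
        a = rng.normal(1.0, 0.3)
        x = rng.uniform(-1, 1, 20)
        tasks.append((x, a * x))
    w = maml_step(w, tasks)

# After meta-training, w sits near the center of the task family, so a
# single inner-loop step adapts quickly to any new task.
print(round(w, 2))
```

    The point of the sketch is the structure, not the toy problem: the initialization is optimized so that one gradient step of adaptation works well across the whole task distribution.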

    Academic research discussed in the podcast episode:

    Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks
    Meta-Learning with Memory-Augmented Neural Networks
    Matching Networks for One Shot Learning
    Learning to Learn with Gradients
    Bayesian Model-Agnostic Meta-Learning
    Meta-Learning with Implicit Gradients
    Meta-Learning Without Memorization
    Efficiently Identifying Task Groupings for Multi-Task Learning
    Three scenarios for continual learning
    Dota 2 with Large Scale Deep Reinforcement Learning
    ProtoTransformer: A Meta-Learning Approach to Providing Student Feedback

  • In this episode of Intel on AI host Amir Khosrowshahi talks with Joshua Tucker about using artificial intelligence to study the influence social media has on politics.

    Joshua is a professor of politics at New York University with affiliated appointments in the Department of Russian and Slavic Studies and the Center for Data Science. He is also the director of the Jordan Center for the Advanced Study of Russia and co-director of the Center for Social Media and Politics. He was a co-author and editor of an award-winning policy blog at The Washington Post and has published several books, most recently as co-editor of Social Media and Democracy: The State of the Field, Prospects for Reform from Cambridge University Press.

    In the podcast episode, Joshua discusses his background in researching mass political behavior, including the Color Revolutions in Eastern Europe. He talks about how his field of study changed after working with his then PhD student Pablo Barberá (now a professor at the University of Southern California), who proposed a method whereby researchers could estimate people's partisanship based on the social networks in which they had enmeshed themselves. Joshua describes the limitations researchers often face when trying to study data on various platforms, the challenges of big data, utilizing NYU’s Greene HPC Cluster, and the impact that the leak of the Facebook Papers had on the field. He also describes findings regarding people who are more prone to share material from fraudulent media organizations masquerading as news outlets, and how researchers like Rebekah Tromble (Director of the Institute for Data, Democracy and Politics at George Washington University) are working with government entities like the European Union on balancing public research with data privacy. The episode closes with Amir and Joshua discussing disinformation campaigns in the context of the Russo-Ukrainian War.

    Academic research discussed in the podcast episode:

    Birds of the Same Feather Tweet Together: Bayesian Ideal Point Estimation Using Twitter Data
    Tweeting From Left to Right: Is Online Political Communication More Than an Echo Chamber?
  • In this episode of Intel on AI host Amir Khosrowshahi talks with Ron Dror about breakthroughs in computational biology and molecular simulation.

    Ron is an Associate Professor of Computer Science in the Stanford Artificial Intelligence Lab, leading a research group that uses machine learning and molecular simulation to elucidate biomolecular structure, dynamics, and function, and to guide the development of more effective medicines. Previously, Ron worked on the Anton supercomputer at D.E. Shaw Research after earning degrees in the fields of electrical engineering, computer science, biological sciences, and mathematics from MIT, Cambridge, and Rice. His groundbreaking research has been published in journals such as Science and Nature, presented at conferences like Neural Information Processing Systems (NeurIPS), and won awards from the Association for Computing Machinery (ACM) and other organizations.

    In the podcast episode, Ron talks about his work with several important collaborators, his interdisciplinary approach to research, and how molecular modeling has improved over the years. He goes into detail about the generation-over-generation advancements made in the Anton supercomputer, including its software, and his recent work at Stanford with molecular dynamics simulations and machine learning. The podcast closes with Amir asking detailed questions about Ron and his team’s recent paper concerning RNA structure prediction that was featured on the cover of Science.
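
    The molecular dynamics simulations discussed here all rest on the same core loop: integrate Newton's equations of motion with a symplectic scheme such as velocity Verlet. A minimal sketch for a single 1-D harmonic oscillator follows (the spring constant, mass, and time step are arbitrary illustrative values; machines like Anton run the same idea for millions of interacting atoms).

```python
def velocity_verlet(x, v, force, m=1.0, dt=0.01, steps=1000):
    """Integrate Newton's equations for one particle with velocity Verlet."""
    f = force(x)
    for _ in range(steps):
        x += v * dt + 0.5 * (f / m) * dt * dt    # position update
        f_new = force(x)
        v += 0.5 * (f + f_new) / m * dt          # velocity update (averaged force)
        f = f_new
    return x, v

k = 1.0  # spring constant of the toy harmonic potential
x, v = velocity_verlet(1.0, 0.0, force=lambda x: -k * x)

# Total energy 0.5*k*x^2 + 0.5*m*v^2 should stay near its initial value
# of 0.5 -- near-conservation of energy over long runs is the hallmark of
# symplectic integrators and why MD codes use them.
energy = 0.5 * k * x**2 + 0.5 * v**2
print(round(energy, 3))
```
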

    Academic research discussed in the podcast episode:

    Statistics of real-world illumination
    The Role of Natural Image Statistics in Biological Motion Estimation
    Surface reflectance recognition and real-world illumination statistics
    Accuracy of velocity estimation by Reichardt correlators
    Principles of Neural Design
    Levinthal's paradox
    Potassium channels
    Structural and Thermodynamic Properties of Selective Ion Binding in a K+ Channel
    Scalable Algorithms for Molecular Dynamics Simulations on Commodity Clusters
    Long-timescale molecular dynamics simulations of protein structure and function
    Parallel random numbers: as easy as 1, 2, 3
    Biomolecular Simulation: A Computational Microscope for Molecular Biology
    Anton 2: Raising the Bar for Performance and Programmability in a Special-Purpose Molecular Dynamics Supercomputer
    Molecular Dynamics Simulation for All
    Structural basis for nucleotide exchange in heterotrimeric G proteins
    How GPCR Phosphorylation Patterns Orchestrate Arrestin-Mediated Signaling
    Highly accurate protein structure prediction with AlphaFold
    ATOM3D: Tasks on Molecules in Three Dimensions
    Geometric deep learning of RNA structure
  • In this episode of Intel on AI host Amir Khosrowshahi, assisted by Dmitri Nikonov, talks with Jean Anne Incorvia about the use of new physics in nanocomputing, specifically with spintronic logic and 2D materials.

    Jean is an Assistant Professor and holds the Fellow of Advanced Micro Devices Chair in Computer Engineering in the Department of Electrical and Computer Engineering at The University of Texas at Austin, where she directs the Integrated Nano Computing Lab.

    Dmitri is a Principal Engineer in Components Research at Intel. He holds a Master of Science in Aeromechanical Engineering from the Moscow Institute of Physics and Technology and a Ph.D. from Texas A&M. Dmitri works in the discovery and simulation of nanoscale logic devices and manages joint research projects with multiple universities. He has authored dozens of research papers in the areas of quantum nanoelectronics, spintronics, and non-Boolean architectures.

    In the episode Jean talks about her background in condensed matter physics and solid-state electronics. She explains how magnetic properties and atomically thin materials, like graphene, can be leveraged at nanoscale for beyond-CMOS computing. Jean goes into detail about domain wall magnetic tunnel junctions and why such devices might have a lower energy cost than the modern process of encoding information in charge. She sees these new types of devices as compatible with CMOS computing and part of a larger journey toward beyond-von Neumann architecture that will advance the evolution of artificial intelligence, neural networks, deep learning, machine learning, and neuromorphic computing.

    The episode closes with Jean, Amir, and Dmitri talking about the broadening definition of quantum computing, existential philosophy, and AI ethics.

    Academic research discussed in the podcast episode:

    Being and Time
    Cosmic microwave background radiation anisotropies: Their discovery and utilization
    Nanotube Molecular Wires as Chemical Sensors
    Visualization of exciton transport in ordered and disordered molecular solids
    Nanoscale Magnetic Materials for Energy-Efficient Spin Based Transistors
    Lateral Inhibition Pyramidal Neural Network for Image Classification
    Magnetic domain wall neuron with lateral inhibition
    Maximized Lateral Inhibition in Paired Magnetic Domain Wall Racetracks for Neuromorphic Computing
    Domain wall-magnetic tunnel junction spin–orbit torque devices and circuits for in-memory computing
    High-Speed CMOS-Free Purely Spintronic Asynchronous Recurrent Neural Network
  • In this episode of Intel on AI hosts Amir Khosrowshahi and Santiago Miret talk with Alán Aspuru-Guzik about the chemistry of computing and the future of materials discovery.

    Alán is a professor of chemistry and computer science at the University of Toronto, a Canada 150 Research Chair in theoretical chemistry, a CIFAR AI Chair at the Vector Institute, and a CIFAR Lebovic Fellow in the Bio-inspired Solar Energy Program. Alán also holds a Google Industrial Research Chair in quantum computing and is the co-founder of two startups, Zapata Computing and Kebotix.

    Santiago Miret is an AI researcher in Intel Labs who has an active research collaboration with Alán. Santiago studies the intersection of AI and the sciences, as well as the algorithmic development of AI for real-world problems.

    In the first half of the episode, the three discuss accelerating molecular design and building next generation functional materials. Alán talks about his academic background with high performance computing (HPC) that led him into the field of molecular design. He goes into detail about building a “self-driving lab” for scientific experimentation, which, coupled with advanced automation and robotics, he believes will help propel society to move beyond the era of plastics and into the era of materials by demand. Alán and Santiago talk about their research collaboration with Intel to build sophisticated model-based molecular design platforms that can scale to real-world challenges. Alán talks about the Acceleration Consortium and the need for standardization research to drive greater academic and industry collaborations for self-driving laboratories.

    In the second half of the episode, the three talk about quantum computing, including developing algorithms for quantum dynamics, molecular electronic structure, molecular properties, and more. Alán talks about how a simple algorithm based on thinking of the quantum computer like a musical instrument is behind the concept of the variational quantum eigensolver, which could hold promising advancements alongside classical computers. Alán, Amir, and Santiago close the episode by talking about the future of research, including projects at DARPA, oscillatory computing, quantum machine learning, quantum autoencoders, and how young technologists entering the field can advance a more equitable society.
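
    The variational quantum eigensolver idea can be sketched classically: prepare a parameterized trial state, measure its energy, and let a classical optimizer search the parameters for the minimum, which bounds the ground-state energy from above. In this toy sketch the "quantum computer" is a 2x2 NumPy matrix, the Hamiltonian is made up for illustration, and the optimizer is a crude parameter scan.

```python
import numpy as np

H = np.array([[1.0, 0.5],
              [0.5, -1.0]])   # made-up Hermitian "molecular" Hamiltonian

def ansatz(theta):
    # Single-qubit variational ansatz: a rotation applied to |0>.
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(theta):
    # The quantity a real VQE estimates by repeated measurement: <psi|H|psi>.
    psi = ansatz(theta)
    return psi @ H @ psi

# Crude classical outer loop: scan theta and keep the minimum.
thetas = np.linspace(0, 2 * np.pi, 2001)
best = min(thetas, key=energy)

exact_ground = np.linalg.eigvalsh(H)[0]
print(round(energy(best) - exact_ground, 3))   # → 0.0
```

    The division of labor is the point: the quantum device only evaluates the energy of a trial state, while a classical computer steers the parameters, which is why VQE is seen as promising for near-term hybrid quantum-classical systems.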

    Academic research discussed in the podcast episode:

    The Hot Topic: What We Can Do About Global Warming
    Energy, Transport, & the Environment
    Scalable Quantum Simulation of Molecular Energies
    The Harvard Clean Energy Project: Large-Scale Computational Screening and Design of Organic Photovoltaics on the World Community Grid
    Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules
    Optimizing Memory Placement using Evolutionary Graph Reinforcement Learning
    Neuroevolution-Enhanced Multi-Objective Optimization for Mixed-Precision Quantization
    Organic molecules with inverted gaps between first excited singlet and triplet states and appreciable fluorescence rates
    Simulated Quantum Computation of Molecular Energies
    Towards quantum chemistry on a quantum computer
    Gerald McLean and Marcum Jung and others with the concept of the variational quantum eigensolver
    Experimental investigation of performance differences between coherent Ising machines and a quantum annealer
    Quantum autoencoders for efficient compression of quantum data
  • In this episode of Intel on AI host Amir Khosrowshahi and Milena Marinova talk about using artificial intelligence for professional learning.

    Milena is currently the Vice President of Data and AI Solutions at Microsoft. At the time of recording this podcast (April 2021), Milena was the visionary and driving force behind the award-winning AI calculus tutoring application Aida and its capabilities platform in the AI Products & Solutions Group, which she founded and led at Pearson. Bringing over 15 years of experience and knowledge in machine learning, neural networks, computer vision, and the commercialization of new technologies, Milena’s background includes an MBA from IMD in Lausanne, Switzerland and a B.Sc. with Honors in Computer Science from Caltech. She is a passionate advocate for innovation and has been a Venture Partner with Atlantic Bridge Capital, helping with AI investments and portfolio companies. Milena is also a co-founder and advisor to several startups in Europe and the US and has previously held management positions at the startup incubator Idealab, as well as executive roles at Intel.

    In the podcast episode Amir and Milena discuss some of the challenges of developing artificial intelligence products, going from academic research into commercial deployment, and the importance of data policy by design. Milena describes some of the lessons she’s learned over the years.

    Academic research discussed in the podcast episode:

    Learning from Data
    The Multi-Armed Bandit Problem: Decomposition and Computation
    Programmable Neural Logic
    Bubble Blinders: The Untold Story of the Search Business Model
    Regulating Innovation (conference panel)
    Intel RealSense Stereoscopic Depth Cameras
    Smart Robots: From the Lab to the World (podcast)
    Artificial Intelligence: A Modern Approach
    Self-supervised learning: The dark matter of intelligence
  • In this episode of Intel on AI host Amir Khosrowshahi and Luis Ceze talk about building better computer architectures, molecular biology, and synthetic DNA.

    Luis Ceze is the Lazowska Professor in the Paul G. Allen School of Computer Science and Engineering at the University of Washington, Co-founder and CEO at OctoML, and Venture Partner at Madrona Venture Group. His research focuses on the intersection between computer architecture, programming languages, machine learning, and biology. His current research focus is on approximate computing for efficient machine learning and DNA-based data storage. He co-directs the Molecular Information Systems Lab (misl.bio) and the Systems and Architectures for Machine Learning lab (sampl.ai). He has co-authored over 100 papers in these areas and has had several papers selected as IEEE Micro Top Picks and CACM Research Highlights. His research has been featured prominently in media outlets including The New York Times, Popular Science, MIT Technology Review, and The Wall Street Journal. He is a recipient of an NSF CAREER Award, a Sloan Research Fellowship, a Microsoft Research Faculty Fellowship, the 2013 IEEE TCCA Young Computer Architect Award, the 2020 ACM SIGARCH Maurice Wilkes Award, and the UIUC Distinguished Alumni Award.

    In the episode, Amir and Luis talk about DNA storage, which has the potential to be a million times denser than solid state storage today. Luis goes into detail about the process he and fellow researchers at the University of Washington along with a team from Microsoft went through in order to store the high-definition music video “This Too Shall Pass” by the band OK Go onto DNA. Luis also discusses why enzymatic synthesis of DNA might potentially be environmentally sustainable, the advancements being made in similarity searches, and his role in creating the open source Apache TVM project that aims to use machine learning to find the most efficient hardware and software combination optimizations. Amir and Luis end the episode talking about why multi-technology systems with electronics, photonics, molecular systems, and even quantum components could be the future of compute.
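
    The density argument behind DNA storage starts from packing two bits into each of the four nucleotides. The toy encoder/decoder below uses a common textbook bit-to-base mapping for illustration; real pipelines, including the UW/Microsoft work, add addressing, error correction, and sequence constraints (such as avoiding long runs of the same base).

```python
# Two bits per nucleotide: 00→A, 01→C, 10→G, 11→T (illustrative mapping).
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Turn bytes into a DNA strand: 8 bits per byte → 4 bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Invert the mapping: bases back to bits, bits back to bytes."""
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"OK Go")
print(strand)          # 4 bases per byte, 20 bases total
print(decode(strand))  # → b'OK Go'
```

    Since each base occupies on the order of a cubic nanometer, this two-bits-per-molecule packing is where the enormous theoretical density advantage over solid-state media comes from.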

    Academic research discussed in the podcast episode:

    The biologic synthesis of deoxyribonucleic acid
    Towards practical, high-capacity, low-maintenance information storage in synthesized DNA
    DNA Hybridization Catalysts and Catalyst Circuits
    A simple DNA gate motif for synthesizing large-scale circuits
    A DNA-Based Archival Storage System
    Random access in large-scale DNA data storage
    Landscape of Next-Generation Sequencing Technologies
    Clustering Billions of Reads for DNA Data Storage
    Demonstration of End-to-End Automation of DNA Data Storage
    High density DNA data storage library via dehydration with digital microfluidic retrieval
    Probing the physical limits of reliable DNA data retrieval
    Stabilizing synthetic DNA for long-term data storage with earth alkaline salts
    Molecular-level similarity search brings computing to DNA data storage
    DNA Data Storage and Near-Molecule Processing for the Yottabyte Era
  • In this episode of Intel on AI host Amir Khosrowshahi talks with Stephen Wolfram about the current state of artificial intelligence. Stephen is the founder and CEO of Wolfram Research, maker of the Wolfram Mathematica software system and WolframAlpha computational knowledge engine, author of A New Kind of Science, and most recently originator of the Wolfram Physics Project, which is a collaborative effort to find the fundamental theory of physics.

    In the podcast episode, Stephen talks about the computational universe and the idea that, under the Principle of Computational Equivalence, even simple programs can exhibit sophisticated behavior, but that these abilities are perceived as useless to humans and therefore underexplored. He discusses the need for shared computational languages that will allow people and machines to mine the wealth of available historic data so that it can be translated into usable knowledge.
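
    A concrete example of the simple programs Stephen has long studied is rule 30, an elementary cellular automaton: a one-line update rule that produces intricate, seemingly random structure from a single cell. A minimal sketch (the grid width, step count, and wrap-around boundary are arbitrary choices for display):

```python
def step(cells, rule=30):
    """One update of an elementary cellular automaton: each cell's new value
    is the rule-table bit indexed by its (left, center, right) neighborhood."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

row = [0] * 31
row[15] = 1                     # start from a single "on" cell
for _ in range(15):
    print("".join("█" if c else " " for c in row))
    row = step(row)
```

    Running this prints the famous triangular pattern whose center column passes statistical randomness tests, despite the rule itself fitting in one line.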

    Amir and Stephen talk about a number of subjects during their two-hour conversation, including Immanuel Kant, Noam Chomsky, whether aliens might perceive a completely different part of physical reality than humans, encoding values for AI content ranking, and why Stephen left academia to develop his own research institute. Stephen discusses his predictions about the limitations of quantum computing, the potential of computing at the molecular scale, and what comes after semiconductor processing. He also explains why Einstein’s theory of relativity and spacetime is misunderstood. Amir asks Stephen to explain how multiway systems and the biology of neuroscience can be viewed in harmony.

    Academic research discussed in the podcast episode:

    Critique of Pure Reason
    A Review of B. F. Skinner’s Verbal Behavior
    Perceptrons
    Workshop on Environments for Computational Mathematics
    A programming language
    Modern Cellular Automata: Theory and Applications
    Space and Time
    Gravitation
    My Time with Richard Feynman
    Some Relativistic and Gravitational Properties of the Wolfram Model
    The Wolfram Physics Project: A One-Year Update
    Multicomputation with Numbers: The Case of Simple Multiway Systems
    Algorithms for Inverse Reinforcement Learning
    Spiders are much smarter than you think
    Molecular Computation of Solutions to Combinatorial Problems
    A Learning Algorithm for Boltzmann Machines
    The Computational Brain
  • In this episode of Intel on AI host Amir Khosrowshahi, assisted by Dmitri Nikonov, talks with Ian Young about Intel’s long-term research to develop more energy-efficient computing based on exploratory materials and devices as well as non-traditional architectures.

    Ian is a Senior Fellow at Intel and the Director of Exploratory Integrated Circuits in Components Research. Ian was one of the key players in the advancement of dynamic and static random-access memory (DRAM, SRAM), and the integration of the bipolar junction transistor and complementary metal-oxide-semiconductor (CMOS) gate into a single integrated circuit (BiCMOS). He developed the original Phase Locked Loop (PLL) based clocking circuit in a microprocessor while working at Intel, contributing to massive improvements in computing power.

    Dmitri is a Principal Engineer in Components Research at Intel. He works in the discovery and simulation of nanoscale logic devices and manages joint research projects with multiple universities. Both Ian and Dmitri have authored dozens of research papers, many together, in the areas of quantum nanoelectronics, spintronics, and non-Boolean architectures.

    In the podcast episode, the three talk about moving beyond CMOS architecture, which is limited by current density and heat. By exploring new materials, the hope is to make significant improvements in energy efficiency that could greatly expand the performance of deep neural networks and other types of computing. The three discuss the possible applications of ferroelectric materials, quantum tunneling, spintronics, non-volatile memory and computing, and silicon photonics.

    Ian talks about some of the current material challenges he and others are trying to solve, such as meeting operational performance targets and creating pristine interfaces, which mimic some of the same hurdles Intel executives Gordon Moore, Robert Noyce, and Andrew Grove faced in the past. He describes why he believes low-voltage, magneto-electric spin orbit (MESO) devices with quantum multiferroics (materials with coupled magnetic and ferroelectric order) have the most potential for improvement and wide-spread industry adoption.

    Academic research discussed in the podcast episode:

    A PLL clock generator with 5 to 110 MHz of lock range for microprocessors
    Clock generation and distribution for the first IA-64 microprocessor
    CMOS scaling trends and beyond
    Overview of beyond-CMOS devices and a uniform methodology for their benchmarking
    Benchmarking of beyond-CMOS exploratory devices for logic integrated circuits
    Tunnel field-effect transistors: Prospects and challenges
    Scalable energy-efficient magnetoelectric spin–orbit logic
    Beyond CMOS computing with spin and polarization
    Optical I/O technology for tera-scale computing
    Device scaling considerations for nanophotonic CMOS global interconnects
    Coupled-oscillator associative memory array operation for pattern recognition
    Convolution inference via synchronization of a coupled CMOS oscillator array
    Benchmarking delay and energy of neural inference circuits
  • In this episode of Intel on AI host Amir Khosrowshahi and Yoshua Bengio talk about structuring future computers on the underlying physics and biology of human intelligence. Yoshua is a professor at the Department of Computer Science and Operations Research at the Université de Montréal and scientific director of the Montreal Institute for Learning Algorithms (Mila). In 2018 Yoshua received the ACM A.M. Turing Award with Geoffrey Hinton and Yann LeCun.

    In the episode, Yoshua and Amir discuss causal representation learning and out-of-distribution generalization, the limitations of modern hardware, and why current models require exponentially increasing amounts of data and compute yet yield only slight improvements. Yoshua also goes into detail about equilibrium propagation—a learning algorithm that bridges machine learning and neuroscience by computing gradients closely matching those of backpropagation. Yoshua and Amir close the episode by talking about academic publishing, sharing information, and the responsibility to make sure artificial intelligence (AI) will not be misused in society, before touching briefly on some of the projects Intel and Mila are collaborating on, such as using parallel computing for the discovery of synthesizable molecules.

    Academic research discussed in the podcast episode:

    Computing machinery and intelligence
    A quantitative description of membrane current and its application to conduction and excitation in nerve
    From System 1 Deep Learning to System 2 Deep Learning
    The Consciousness Prior
    BabyAI: A Platform to Study the Sample Efficiency of Grounded Language Learning
    Equilibrium Propagation: Bridging the Gap between Energy-Based Models and Backpropagation
    A deep learning theory for neural networks grounded in physics
  • In this episode of Intel on AI host Amir Khosrowshahi and Melanie Mitchell talk about the paradox of studying human intelligence and the limitations of deep neural networks. Melanie is the Davis Professor of Complexity at the Santa Fe Institute, former professor of Computer Science at Portland State University, and the author/editor of six books and numerous scholarly papers in the fields of artificial intelligence, cognitive science, and complex systems, including Complexity: A Guided Tour and Artificial Intelligence: A Guide for Thinking Humans.

    In the episode, Melanie and Amir discuss how intelligence emerges from the substrate of neurons and why being able to perceive abstract similarities between different situations via analogies is at the core of cognition. Melanie goes into detail about deep neural networks using spurious statistical correlations, the distinction between generative and discriminative systems and machine learning, and the theory that a fundamental part of the human brain is trying to predict what is going to happen next based on prior experience. She also talks about creating the Copycat software, the dangers of artificial intelligence (AI) being easy to manipulate even in very narrow areas, and the importance of getting inspiration from biological intelligence.

    Academic research discussed in the podcast episode:

    Gödel, Escher, Bach: an Eternal Golden Braid
    Fluid Concepts and Creative Analogies: Computer Models Of The Fundamental Mechanisms Of Thought
    A computational model for solving problems from the Raven’s Progressive Matrices intelligence test using iconic visual representations
    A Framework for Representing Knowledge
    On the Measure of Intelligence
    The Abstraction and Reasoning Corpus (ARC)
    Human-level concept learning through probabilistic program induction
    Why AI is Harder Than We Think
    We Shouldn’t be Scared by ‘Superintelligent A.I.’ (New York Times opinion piece)
  • In this episode of Intel on AI host Amir Khosrowshahi and Bruno Olshausen talk about neuroscience and the future of computing. Bruno is a professor at UC Berkeley with appointments in the Helen Wills Neuroscience Institute and the School of Optometry. He is also the director of the Redwood Center for Theoretical Neuroscience, which brings the fields of physics, mathematics, engineering, and neuroscience together to study how networks of neurons in the brain process information.

    In the episode, Bruno and Amir discuss research about recording large populations of neurons, hyperdimensional computing, and discovering new types of engineering principles. Bruno talks about how in order to understand intelligence and its underpinnings, we have to understand the origins of intelligence and perceptual psychology outside of mammalian brains. He points to the sophisticated visual system of jumping spiders as inspiration for developing systems that use low energy in a small form factor. By better understanding the origins of perception and other biophysical structures, Bruno theorizes the artificial intelligence field may evolve beyond image recognition tasks of current neural networks. Bruno and Amir close the episode by talking about the elementary units of computation, the idea of “listening to silicon” as proposed by Carver Mead, neuromorphic computing, and what the future of research might hold.
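
    The hyperdimensional computing Bruno and Amir discuss can be sketched with random ±1 vectors: "bind" a role to a filler by elementwise multiplication, "bundle" several bindings by summing, and recover a noisy filler by unbinding and nearest-neighbor cleanup. The concept names and the 10,000-dimension choice below are illustrative, not tied to any specific system from the episode.

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10_000   # high dimensionality makes random vectors nearly orthogonal

def rand_hv():
    """A random hypervector: D components drawn uniformly from {-1, +1}."""
    return rng.choice([-1, 1], size=D)

# Atomic concept vectors (roles and fillers).
color, shape = rand_hv(), rand_hv()
red, circle = rand_hv(), rand_hv()

# Encode the record {color: red, shape: circle}: bind each role to its
# filler by elementwise multiplication, then bundle by summing and taking
# the sign.
record = np.sign(color * red + shape * circle)

# Query "what is the color?": unbinding with the role vector yields a noisy
# version of `red`, cleaned up by a nearest-neighbor search over known vectors.
noisy = record * color
candidates = {"red": red, "circle": circle}
best = max(candidates, key=lambda name: candidates[name] @ noisy)
print(best)   # → red
```

    Because binding is its own inverse and unrelated random vectors are nearly orthogonal, the correct filler dominates the dot-product comparison even though the record superimposes several bindings in one vector.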

    Academic research discussed in the podcast episode:

    Spatially Distributed Local Fields in the Hippocampus Encode Rat Position
    Beyond inspiration: Three lessons from biology on building intelligent machines
    The Chinese Room Argument
    Digital tissue and what it may reveal about the brain
    Principles of Neural Design (Bruno calls this a “must read”)
    Experiencing and Perceiving Visual Surfaces
    Analog VLSI Implementation of Neural Systems
    OIM: Oscillator-based Ising Machines for Solving Combinatorial Optimisation Problems
  • In this episode of Intel on AI host Abigail Hing Wen, Intel AI Tech Evangelist and New York Times best-selling author, passes the hosting mantle to Amir Khosrowshahi, Intel Vice President. The two talk about lessons learned from guests across Season 2 of the podcast and what the AI of tomorrow might be.

    Abigail shares some of her own exciting next steps. Amir discusses his background studying neurobiology and theoretical physics, his research in computational neuroscience and mammalian visual systems at UC Berkeley, his work at Intel following the acquisition of Nervana, and his plans for hosting Season 3 of the podcast.

    Follow Abigail on Twitter: twitter.com/abigailhingwen
    Follow Amir on Twitter: twitter.com/khosra

  • In this episode of Intel on AI guest US Congresswoman Robin Kelly talks with host Abigail Hing Wen, Intel AI Tech Evangelist and New York Times best-selling author, about artificial intelligence (AI) and the United States government.

    Congresswoman Kelly talks about how she became involved in AI policy, introducing a bipartisan resolution with Will Hurd (R-Texas) to create a national AI strategy, and educating fellow members of Congress about the field. The two also talk about the importance of training new talent so that America stays competitive in a global market and why ethics in AI is crucial when considering regulation.

    Follow Congresswoman Kelly on Twitter: twitter.com/reprobinkelly
    Follow Abigail on Twitter: twitter.com/abigailhingwen

  • In this episode of Intel on AI guest Aviv Regev, Executive Vice President of Genentech Research and Early Development, talks with host Abigail Hing Wen, Intel AI Tech Evangelist and New York Times best-selling author, about biomedicine and artificial intelligence (AI).

    The two discuss Aviv’s work on circuitry in cells, the future of experimental biology, why increasing the diversity of data is key to creating algorithms that can find patterns in genomic variants, and how strengthening global networks will help society better prepare for the next pandemic. Hear more from Aviv in a special episode of Genentech’s science podcast “Studying the Symphony of Cells.”

    Follow Genentech on Twitter: twitter.com/genentech
    Follow Abigail on Twitter: twitter.com/abigailhingwen
    Learn more about Intel’s work in AI: intel.com/ai

  • In this episode of Intel on AI guest Terah Lyons, Executive Director of Partnership on AI, talks with host Abigail Hing Wen, Intel AI Tech Evangelist and New York Times best-selling author, about her previous work as Policy Advisor to the United States Chief Technology Officer Megan Smith in President Barack Obama’s Office of Science and Technology Policy, her thoughts on the role policymakers should play in the field of artificial intelligence (AI), and the ongoing efforts of the Partnership on AI.

    The two discuss how organizations can align their values and prioritize incentives around developing AI that helps workers, the importance of measuring such outcomes, and why practical frameworks for AI can help people outside the field better understand the benefits of AI.

    Follow Terah on Twitter: twitter.com/terahlyons
    Follow Abigail on Twitter: twitter.com/abigailhingwen
    Learn more about Intel’s work in AI: intel.com/ai

  • In this episode of Intel on AI guest Colin Murdoch, Senior Business Director at DeepMind, talks with host Abigail Hing Wen, Intel AI Tech Evangelist and New York Times best-selling author, about text-to-speech system WaveNet, the recent breakthrough with AlphaFold, the potential for artificial intelligence to solve energy challenges, and how Google incorporates cutting-edge research into a number of its services.

    The two also discuss examples like AlphaGo, GraphNet, advancements in Android products, and what the future of artificial general intelligence might look like.

    Follow DeepMind on Twitter: twitter.com/DeepMind
    Follow Abigail on Twitter: twitter.com/abigailhingwen
    Learn more about Intel’s work in AI: intel.com/ai

  • In this episode of Intel on AI guest Alice Xiang, Head of Fairness, Transparency, and Accountability Research at the Partnership on AI, talks with host Abigail Hing Wen, Intel AI Tech Evangelist and New York Times best-selling author, about algorithmic fairness—the study of how algorithms might systemically perform better or worse for certain groups of people and the ways in which historical biases or other systemic inequities might be perpetuated by algorithmic systems.

    The two discuss the lofty goals of the Partnership on AI, why being able to explain how a model arrived at a specific decision is important for the future of AI adoption, and the proliferation of criminal justice risk assessment tools.

    Follow Alice on Twitter: twitter.com/alicexiang
    Follow Abigail on Twitter: twitter.com/abigailhingwen
    Learn more about Intel’s work in AI: intel.com/ai

  • In this episode of Intel on AI guest Rana el Kaliouby, Ph.D., cofounder and CEO of Affectiva, and author of Girl Decoded: A Scientist’s Quest to Reclaim Our Humanity by Bringing Emotional Intelligence to Technology, talks with host Abigail Hing Wen, Intel AI Tech Evangelist and New York Times best-selling author, about emotional intelligence (EQ)—a person’s ability to sense emotional and cognitive states and behaviors and to adapt in real time based on that information.

    The two talk about Rana’s journey to founding Affectiva with MIT professor Rosalind Picard, Sc.D., future implementations of EQ in technology, such as customer service and autonomous driving, and why such systems need clearly defined data policies.

    Follow Rana on Twitter: twitter.com/kaliouby
    Follow Abigail on Twitter: twitter.com/abigailhingwen
    Learn more about Intel’s work in AI: intel.com/ai