Episodes

  • Andy and Dave discuss the latest in AI news and research, including the signing of the 2022 National Defense Authorization Act, which contains a number of provisions related to AI and emerging technology [0:57]. The Federal Trade Commission wants to tackle data privacy concerns and algorithmic discrimination and is considering a wide range of options to do so, including new rules and guidelines [4:50]. The European Commission proposes a set of measures to regulate digital labor platforms in the EU. Engineered Arts unveils Ameca, a gray-faced humanoid robot with “natural-looking” expressions and body movements [7:07]. And DARPA launches its AMIGOS project, aimed at automatically converting training manuals and videos into augmented reality environments [13:16]. In research, scientists at Bar-Ilan University in Israel upend conventional wisdom on neural responses by demonstrating that the duration of the resting time (post-excitation) can exceed 20 milliseconds, that the resting period is sensitive to the origin of the input signal (e.g., left versus right), and that the neuron has a sharp transition from the refractory period to full responsiveness, without an intermediate stutter phase [15:30]. Researchers at Victoria University use brain cells to play Pong via electric signals and demonstrate that the cells learn much faster than current neural networks, reaching after 10 or 15 rallies the level of play that computer-based AIs reach only after 5,000 rallies [19:37]. MIT researchers present evidence that ML is starting to look like human cognition, comparing various aspects of how neural networks and human brains accomplish their tasks [24:34]. And OpenAI creates GLIDE, a 3.5-billion-parameter text-to-image generation model that generates even higher-quality images than DALL-E, though it still has trouble with “highly unusual” scenarios [29:30]. The Santa Fe Institute publishes The Complex Alternative: Complexity Scientists on the COVID-19 Pandemic, 800 pages on how complexity interwove through the pandemic [33:50]. And Chris Peter uses an algorithm to create a short movie after it watched Hitchcock’s Vertigo 20 times [35:22].

    Please visit our website to explore the links mentioned in this episode.

    https://www.cna.org/CAAI/audio-video

  • Andy and Dave discuss the latest in AI news and research, starting with the US Department of Defense creating the new position of Chief Digital and AI Officer, subsuming the Joint AI Center, the Defense Digital Service, and the office of the Chief Data Officer [0:32]. Member states of UNESCO adopt the first-ever global agreement on the ethics of AI, which includes recommendations on protecting data, banning social scoring and mass surveillance, tools to help monitor and evaluate AI systems, and protecting the environment [3:26]. European Digital Rights and 119 civil society organizations launch a collective call for an AI Act to articulate fundamental rights (for humans) regarding AI technology and research [6:02]. The Future of Life Institute releases Slaughterbots 2.0: “if human: kill()” ahead of the 3rd session in Geneva of the Group of Governmental Experts discussing lethal autonomous weapons systems [7:15]. In research, Xenobots 3.0, the living robots made from frog cells, demonstrate the ability to replicate themselves kinematically, at least for a couple of generations (extended to four generations by using an evolutionary algorithm to model ideal structures for replication) [12:23]. And researchers from DeepMind, Oxford, and Sydney demonstrate the ability to collaborate with machine learning algorithms to discover new results in mathematics (in knot theory and representation theory), though another researcher pushes back on the utility of the claims [17:57]. And finally, Dr. Mike Stumborg joins Dave and Andy to discuss research in Human-Machine Teaming, why it’s important, and where the research is going [21:44].

  • Andy and Dave discuss the latest in AI news and research, [0:53] starting with OpenAI’s announcement that it is making GPT-3 generally available through its API, though developers still require approval for production-scale applications (a brief sketch of an API call follows this summary). [3:09] For DARPA’s Gremlins program, two Gremlin Air Vehicles “validated all autonomous formation flying positions and safety features,” and one of the autonomous aircraft demonstrated airborne recovery to a C-130. [4:54] After three years, DARPA announces the winners of its Subterranean Challenge, awarding prizes to teams operating in the “real world” and in virtual space. [7:03] The Defense Information Systems Agency releases its Strategic Plan for 2022 through 2024, which includes plans to employ AI capabilities for defensive cyber operations. [8:08] The Department of Defense announces a new cloud initiative to replace the failed JEDI contract, with invitations to Amazon, Microsoft, Google, and Oracle to bid. [11:52] In research, DeepMind, Google Brain, and World Chess Champion Vladimir Kramnik join forces to peer into the guts of AlphaZero, with initial results showing strong evidence for the existence of “human-understandable concepts of surprising complexity” within the neural network. [17:48] Andrea Roli, Johannes Jaeger, and Stu Kauffman pen a white paper on how organisms come to know the world, and from these observations derive fundamental limits on artificial general intelligence. [20:34] MIT Press makes available an elementary introduction to Bayesian Models of Perception and Action, by Wei Ji Ma, Konrad Paul Kording, and Daniel Goldreich. [23:40] And finally, Sam Bendett and Jeff Edmonds drop by for a chat on the latest and greatest in Russian AI and autonomy, including an update on recent military expos and other AI-related events happening in Russia.
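    For readers curious what “generally available through its API” looked like in practice, here is a minimal sketch of a completion request using OpenAI’s Python package from that era; the prompt and parameter values are illustrative assumptions, not details from the episode.

    ```python
    # Minimal sketch of a GPT-3 completion request (openai python package, v0.x era).
    import openai

    openai.api_key = "YOUR_API_KEY"  # granted once OpenAI approves your account

    response = openai.Completion.create(
        engine="davinci",  # a base GPT-3 model exposed by the API at the time
        prompt="Summarize the DARPA Gremlins program in one sentence:",
        max_tokens=60,
        temperature=0.7,
    )
    print(response.choices[0].text.strip())
    ```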

    https://www.cna.org/CAAI/audio-video

  • Andy and Dave discuss the latest in AI news and research, including the Defense Innovation Unit releasing Responsible AI Guidelines in Practice, which seeks to ensure tech contractors adhere to the Department of Defense’s existing ethical principles for AI [0:53]. “Meta” (the Facebook re-brand) announces that it will end its use of facial recognition software and delete data on more than a billion people, though it will retain the technology for other products in its metaverse [3:12]. Australia’s information and privacy commissioners order Clearview AI to stop collecting facial biometrics from Australian citizens and to destroy all existing data [5:16]. The U.S. Marine Corps releases its Talent Management 2030 report, which describes the need for more cognitively mature Marines and seeks to “leverage the power of AI” and to be “at the vanguard of service efforts to operationalize AI” [7:39]. DOD releases its 2021 Report on Military and Security Developments Involving the People’s Republic of China, which describes China’s use of AI technology in influence operations, the digital silk road, military capabilities, and more [10:46]. A competition using unrestricted adversarial examples at the 2021 Conference on Computer Vision and Pattern Recognition includes as co-authors several members of the Army Engineering University of the People’s Liberation Army [11:43]. Research from Okinawa and Australia demonstrates that deep reinforcement learning can produce accurate quantum control, even with noisy measurements, using a small particle moving in a double-well [14:31]. MIT Press makes available a nearly 700-page book, Algorithms for Decision Making, organized around four sources of uncertainty (outcome, model, state, and interaction) [18:01]. And Dr. Amanda Kerrigan and Kevin Pollpeter join Andy and Dave to discuss their latest research on what China is doing with AI technology, including a bi-weekly newsletter on the topic and a preliminary analysis of China’s view of Intelligent Warfare [20:06].

    https://www.cna.org/CAAI/audio-video

  • Andy and Dave discuss the latest in AI news and research, including: NATO releases its first AI strategy, which includes the announcement of a one-billion-euro “NATO innovation fund.” [0:52] Military research labs in the US and UK collaborate on autonomy and AI in a combined demonstration, integrating algorithms and automated workflows into military operations. [2:58] A report from CSET and MITRE finds that the Department of Defense already has a number of AI and related experts, but that the current system hides this talent. [6:45] The National AI Research Resource Task Force partners with Stanford’s Human-Centered AI and the Stanford Law School to publish Building a National AI Research Resource: A Blueprint for the National Research Cloud. [6:45] And in a trio of “AI fails,” a traffic camera in the UK mistakes a woman for a car and issues a fine to the vehicle’s owner; [9:10] the Allen Institute for AI introduces Delphi as a step toward developing AI systems that behave ethically (though it sometimes thinks that it’s OK to murder everybody if it creates jobs); [10:07] and a WSJ report reveals that Facebook’s automated moderation tools were falling far short on accurate identification of hate speech and videos of violence and incitement. [12:22] Ahmed Elgammal from Rutgers teams up with Playform to compose two movements for Beethoven’s Tenth Symphony, for which the composer left only sketches before he died. And finally, Andy and Dave welcome Dr. Heather Wolters and Dr. Megan McBride to discuss their latest research on the Psychology of (Dis)Information, with a pair of publications, one providing a primer on key psychological mechanisms, and another examining case studies and their implications.

    The Psychology of (Dis)information: A Primer on Key Psychological Mechanisms: https://www.cna.org/CNA_files/PDF/The%20Psychology-of-(Dis)information-A-Primer-on-Key-Psychological-Mechanisms.pdf

    The Psychology of (Dis)information: Case Studies and Implications: https://www.cna.org/CNA_files/PDF/The-Psychology-of-(Dis)information-Case-Studies-and-Implications.pdf

    Follow the link below to visit our website and explore the links mentioned in the episode.

    https://www.cna.org/CAAI/audio-video
  • Welcome to Season 5.0 of AI with AI! Andy and Dave discuss the latest in AI news and research, including: The White House calls for an AI “bill of rights” and issues a request for information. In its 4th year, Nathan Benaich and Ian Hogarth publish their State of AI Report, 2021. [1:50] OpenAI uses reinforcement learning from human feedback and recursive task decomposition to improve algorithms’ abilities to summarize books. [3:14] IEEE Spectrum publishes a paper that examines the diminishing returns of deep learning, questioning the long-term viability of the technology. [5:12] In related news, Nvidia and Microsoft release a 530-billion-parameter language model, the Megatron-Turing Natural Language Generation model (MT-NLG). [6:54] DeepMind demonstrates the use of a GAN in improving high-resolution precipitation “nowcasting.” [10:05] Researchers from Waterloo, Guelph, and IIT Madras publish research on deep learning that can identify early warning signals of tipping points. [11:54] Military robot maker Ghost Robotics creates a robot dog with a rifle, the Special Purpose Unmanned Rifle, or SPUR. [14:25] And Dr. Larry Lewis joins Dave and Andy to discuss the latest report from CNA on Leveraging AI to Mitigate Civilian Harm, which describes the causes of civilian harm in military operations, identifies how AI could protect civilians from harm, and identifies ways to lessen the infliction of suffering, injury, and destruction overall. [16:36]

    Follow the link below to visit our website and explore the links mentioned in the episode.

    https://www.cna.org/CAAI/audio-video
  • Andy and Dave discuss the latest in AI news and research, including: The UK government releases its National AI Strategy, a 10-year plan to make the country a global AI superpower [1:28]. Stanford University’s One Hundred Year Study on AI Project releases its second report, Gathering Strength, Gathering Storms, assessing developments in AI between 2016 and 2021 around fourteen framing questions. [4:57] The UN High Commissioner for Human Rights calls for a moratorium on the sale and use of AI systems that pose serious risks to human rights until adequate safeguards are put into place. [10:07] Jack Poulson at Tech Inquiry maps out US government use of AI-based weapons and surveillance, using publicly available information. [12:07] Researchers at Hebrew University examine the potential of single cortical neurons as deep artificial neural networks, finding that a deep neural network with 5 to 8 layers is necessary to approximate the computation of a single neuron. [16:10] Researchers at Stanford review the different architectures of neuronal circuits in the human brain, identifying different circuit motifs. [20:02] Other research at Stanford shows the ability to image and track moving non-line-of-sight objects using a single optical path (shining a laser through a keyhole). [22:05] And researchers at MIT, Nvidia, and Technion demonstrate that a neural network can identify the number and activity of people in a room solely by examining a blank wall in the room. [26:33] Nils Thuerey’s research group publishes Physics-Based Deep Learning, introducing physical models into deep learning to reconcile data-centered viewpoints with physical simulations. [30:34] Ori Cohen compiles the Machine and Deep Learning Compendium, an open resource (GitBook) on over 500 topics, with summaries, links, and articles. [32:21] The Allen Institute for AI releases a web tool that converts PDF papers into HTML for more rapid web publishing of scientific papers. [33:20] And the Museum of Wild and Newfangled Art: This Show is Curated by a Machine invites viewers to ponder why they think an AI chose the works within. [34:43]

  • Andy and Dave discuss the latest in AI news and research, including:

    [1:28] Researchers from several universities establish the AIMe registry, a community-driven reporting platform for providing information and standards of AI research in biomedicine.

    [4:15] Reuters publishes a report with insight into examples at Google, Microsoft, and IBM, where ethics reviews have curbed or canceled projects.

    [8:11] Researchers at the University of Tübingen create an AI method for significantly accelerating super-resolution microscopy, which makes heavy use of synthetic training data.

    [13:21] The US Navy establishes Task Force 59 in the Middle East, which will focus on the incorporation of unmanned and AI systems into naval operations.

    [15:44] The Department of Commerce establishes the National AI Advisory Committee, in accordance with the National AI Initiative Act of 2020.

    [19:02] Jess Whittlestone and Jack Clark publish a white paper on Why and How Governments Should Monitor AI Development, with predictions of the types of problems that will occur with inaction.

    [19:02] The Center for Security and Emerging Technology publishes a series of data snapshots related to AI research, drawn from over 105 million publications.

    [23:53] In research, Google Research (Brain Team) and the University of Montreal take a broad look at deep reinforcement learning research and find discrepancies between conclusions drawn from point estimates (fewer runs, due to high computational costs) and those drawn from more thorough statistical analysis, calling for a change in how performance in deep RL is evaluated (a toy illustration follows this item).
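    To illustrate the statistical point, here is a toy sketch (with made-up numbers, not the paper’s data or code): a point estimate over a handful of runs hides how uncertain the reported score is, while a simple bootstrap interval makes the uncertainty visible.

    ```python
    # Toy illustration: point estimate vs. bootstrap interval over few RL runs.
    import numpy as np

    rng = np.random.default_rng(0)
    scores = rng.normal(loc=1.0, scale=0.6, size=3)  # 3 runs, as in many papers

    point_estimate = scores.mean()

    # Bootstrap a 95% confidence interval by resampling the runs.
    boot_means = [rng.choice(scores, size=scores.size, replace=True).mean()
                  for _ in range(10_000)]
    lo, hi = np.percentile(boot_means, [2.5, 97.5])

    print(f"point estimate: {point_estimate:.2f}")
    print(f"95% bootstrap CI: [{lo:.2f}, {hi:.2f}]")  # wide, with only 3 runs
    ```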

    [30:13] Quebec AI Institute publishes a survey of post-hoc interpretability methods for neural natural language processing.

    [31:39] MIT Technology Review dedicates its Sep/Oct 2021 issue to The Mind, with articles all about the brain.

    [32:05] Katy Börner publishes Atlas of Forecasts: Modeling and Mapping Desirable Futures, showing how models, maps, and forecasts inform decision-making in education, science, technology, and policy-making.

    [33:16] DeepMind, in collaboration with University College London, offers a comprehensive introduction to modern reinforcement learning, with 13 lectures (~1.5 hours each) on the topic.

    Follow the link below to visit our website and explore the links mentioned in the episode.

    https://www.cna.org/CAAI/audio-video

    CNA Careers Page: https://www.cna.org/careers/
  • Andy and Dave were recently interviewed on the AI Today podcast.

    On the AI Today podcast we regularly interview thought leaders who are implementing AI and cognitive technology at various companies and agencies. However, in this episode hosts Kathleen Walch and Ron Schmelzer interview Andy Ilachinski and David Broyles, hosts of the AI with AI podcast. On their podcast, they explore the latest breakthroughs in artificial intelligence and autonomy, as well as their military implications. So naturally, we discussed with them some of the biggest trends they see emerging in AI today, some of the challenges to AI adoption, especially in military applications, and some of the surprising insights and trends they have seen over the 4 years they have hosted their podcast.

  • Andy and Dave discuss the latest in AI news and research, including:

    0:57: The Allen Institute for AI and others come together to create a publicly available “COVID-19 Challenges and Directions” search engine, building on the corpus of COVID-related research.

    5:06: Researchers with the University of Warwick perform a systematic review of test accuracy for the use of AI in image analysis of breast cancer screening and find that most (34 of 36) AI systems were less accurate than a single radiologist, and all were less accurate than a consensus of two or more radiologists (among other concerning findings).

    10:19: A US judge rejects an appeal for the AI system DABUS to be listed as the inventor on a patent, noting that US federal law requires an “individual” to be an inventor, and the legal definition of an “individual” is a natural person.

    17:01: The US Patent and Trademark Office uses machine learning to analyze the history of AI in patents.

    19:42: BCS publishes Priorities for the National AI Strategy, as the UK seeks to set global AI standards.

    20:42: In research, MIT, Northeastern, and U Penn explore the challenges of discerning emotion from a person’s facial movements (which depends largely on context), and highlight the reasons why emotion-recognition algorithms will struggle with this task.

    28:02: GoogleAI uses diffusion models to generate high-fidelity images; the approach slowly adds noise to corrupt the training data, and then uses a neural network to reverse that corruption (a toy sketch follows this item).
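    As a toy sketch of that recipe (an assumed minimal denoising-diffusion setup on 1-D data, not Google’s actual model): the forward process mixes data with Gaussian noise according to a schedule, and a small network is trained to predict the noise that was added.

    ```python
    # Minimal denoising-diffusion sketch: corrupt data, learn to undo it.
    import torch
    import torch.nn as nn

    T = 100                                    # number of noise steps
    betas = torch.linspace(1e-4, 0.02, T)      # noise schedule
    alpha_bar = torch.cumprod(1.0 - betas, 0)  # cumulative signal fraction

    def q_sample(x0, t, noise):
        """Forward process: corrupt clean data x0 to noise level t."""
        ab = alpha_bar[t].view(-1, 1)
        return ab.sqrt() * x0 + (1 - ab).sqrt() * noise

    # Tiny denoiser: sees the noisy sample plus the timestep as a feature.
    denoiser = nn.Sequential(nn.Linear(17, 128), nn.ReLU(), nn.Linear(128, 16))
    opt = torch.optim.Adam(denoiser.parameters(), lr=1e-3)

    for step in range(1000):
        x0 = torch.rand(32, 16)                # stand-in training batch
        t = torch.randint(0, T, (32,))
        noise = torch.randn_like(x0)
        xt = q_sample(x0, t, noise)
        inp = torch.cat([xt, t.float().view(-1, 1) / T], dim=1)
        loss = ((denoiser(inp) - noise) ** 2).mean()  # predict the added noise
        opt.zero_grad(); loss.backward(); opt.step()
    ```

    Generation then runs the learned reversal step by step, starting from pure noise.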

    35:07: Springer-Verlag makes AI for a Better Future, by Bernd Carsten Stahl, available for open access.

    36:19: Thomas Smith, co-founder of Gado Images, chats with GPT-3 about the COVID-19 pandemic and finds that it provides some interesting responses to his questions.

    Follow the link below to visit our website and explore the links mentioned in the episode.

    https://www.cna.org/CAAI/audio-video
  • Andy and Dave discuss the latest in AI news and research, including:

    0:46: The GAO releases a more extensive report on US Federal agency use of facial recognition technology, including the purposes for which agencies use it.

    3:24: The US Department of Homeland Security Science and Technology Directorate publishes its AI and ML Strategic Plan, with an implementation plan to follow.

    5:39: Ada Lovelace Institute, AI Now Institute, and Open Government Partnership publish a global study on Algorithmic Accountability for the Public Sector, which focuses on accountability mechanisms stemming from laws and policy.

    9:04: Research from North Carolina State University suggests that, with proper regulation, the benefits of autonomous vehicles will outweigh the risks.

    13:18: Research Section Introduction

    14:24: Researchers at the Allen Institute for AI and the University of Washington demonstrate that artificial agents can learn generalizable visual representations during interactive gameplay, embodied within an environment (AI2-THOR); agents demonstrated knowledge of the principles of containment, object permanence, and concepts of free space.

    19:37: Researchers at Stanford University introduce BEHAVIOR (Benchmark for Everyday Household Activities in Virtual, Interactive, and ecOlogical enviRonments), which establishes benchmarks for simulation of 100 activities that humans often perform at home.

    24:02: A survey examines the dynamics of research communities and AI benchmarks, suggesting, among other things, that hybrid, multi-institution, and persevering communities are the ones more likely to improve state-of-the-art performance.

    28:54: Springer-Verlag makes Representation Learning for Natural Language Processing available online.

    32:09: Terry Sejnowski and Stephen Wolfram publish a three-hour discussion on AI and other topics.

    Follow the link below to visit our website and explore the links mentioned in the episode.

    https://www.cna.org/CAAI/audio-video
  • Andy and Dave discuss the latest in AI news, including an overview of Tesla’s “AI Day,” which, among other things, introduced the Dojo supercomputers specialized for ML, the HydraNet single deep-learning model architecture, and a “humanoid robot,” the Tesla Bot. Researchers at Brown University introduce neurograins, grain-of-salt-sized wireless neural sensors, using nearly 50 of them to record neural activity in a rodent. The Associated Press reports on the flaws in ShotSpotter’s AI gunfire detection system, and one case in which such evidence sent a man to jail for almost a year before a judge dismissed the case. The Department of the Navy releases its Science and Technology Strategy for Intelligent Autonomous Systems (publicly available), including an Execution Plan (available only through government channels). The National AI Research Resource Task Force extends its deadline for public comment in order to elicit more responses. The Group of Governmental Experts on Certain Conventional Weapons holds its first 2021 session for the discussion of lethal autonomous weapons systems; their agenda has moved on to promoting a common understanding and definition of LAWS. And Stanford’s Center for Research on Foundation Models publishes a manifesto, On the Opportunities and Risks of Foundation Models, seeking to establish high-level principles on massive models (such as GPT-3) upon which many other AI capabilities build. In research, Georgia Institute of Technology, Cornell University, and IBM Research AI examine how the “who” in Explainable AI (e.g., people with or without a background in AI) shapes the perception of AI explanations. And Alvy Ray Smith pens the book of the week, with A Biography of the Pixel, examining the pixel as the “organizing principle of all pictures, from cave paintings to Toy Story.”

    Follow the link below to visit our website and explore the links mentioned in the episode.

    https://www.cna.org/CAAI/audio-video

  • Andy and Dave discuss the latest in AI news, including Codex, an upgraded version of the OpenAI system that powers GitHub’s Copilot, which can not only complete code but create it as well (based on natural-language inputs from its users). The National Science Foundation is providing $220 million in grants to 11 new National AI Research Institutes (including two fully funded by the NSF). A new DARPA program seeks to explore how AI systems can share their experiences with each other, in Shared-Experience Lifelong Learning (ShELL). The Senate Committee on Homeland Security and Governmental Affairs introduces two AI-related bills: the AI Training Act (to establish a training program to educate the federal acquisition workforce), and the Deepfake Task Force Act (to task DHS to produce a coordinated plan on how a “digital content provenance” standard might assist with decreasing the spread of deepfakes). And the Inspectors General of the NSA and DoD partner to conduct a joint evaluation of NSA’s integration of AI into signals intelligence efforts. In research, DeepMind creates the Perceiver IO architecture, which works across a wide variety of input and output spaces, challenging the idea that different kinds of data need different neural network architectures. DeepMind also publishes PonderNet, which learns to adapt the amount of computation based on the complexity of the problem, rather than the size of the inputs (a minimal sketch of the idea follows this summary). Research from MIT uses the corpus of US patents to predict the rate of technological improvements for all technologies. The European Parliamentary Research Service publishes a report on Innovative Technologies Shaping the 2040 Battlefield. Quanta Magazine publishes an interview with Melanie Mitchell, which includes a deeper discussion of her research on analogies. And Springer-Verlag makes available for free An Introduction to Ethics in Robotics and AI (by Christoph Bartneck, Christoph Lütge, Alan Wagner, and Sean Welsh).
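    A minimal sketch of the PonderNet-style adaptive-computation idea (an assumed toy architecture with a recurrent cell and a halting head, not DeepMind’s exact formulation): at each step the network emits a prediction and a halting probability, and training weights the per-step losses by the induced halting distribution, so easy inputs learn to stop early.

    ```python
    # Toy PonderNet-style loop: per-step predictions plus halting probabilities.
    import torch
    import torch.nn as nn

    class PonderStep(nn.Module):
        def __init__(self, dim=32):
            super().__init__()
            self.cell = nn.GRUCell(dim, dim)
            self.out = nn.Linear(dim, 1)    # task prediction head
            self.halt = nn.Linear(dim, 1)   # per-step halting probability

        def forward(self, x, h):
            h = self.cell(x, h)
            return self.out(h), torch.sigmoid(self.halt(h)), h

    def ponder_forward(step, x, max_steps=10):
        """Run up to max_steps; collect outputs and the halting distribution."""
        h = torch.zeros(x.size(0), 32)
        outputs, p_halt = [], []
        not_halted = torch.ones(x.size(0))
        for n in range(max_steps):
            y, lam, h = step(x, h)
            lam = lam.squeeze(-1)
            # P(halt exactly at step n) = lam_n * prod_{m<n} (1 - lam_m)
            p_halt.append(not_halted * lam)
            not_halted = not_halted * (1 - lam)
            outputs.append(y.squeeze(-1))
        return outputs, p_halt  # train on the halting-weighted task loss

    outputs, p_halt = ponder_forward(PonderStep(), torch.randn(4, 32))
    ```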

    Follow the link below to visit our website and explore the links mentioned in the episode.

    https://www.cna.org/CAAI/audio-video

  • Andy and Dave welcome the hosts of the weekly podcast AI Today, Kathleen Walch and Ronald Schmelzer. On AI Today, Kathleen and Ron discuss topics related to how AI is making impacts around the globe, with a focus on having discussions with industry and business leaders to get their thoughts and perspectives on AI technologies, applications, and implementation challenges. Ron and Kathleen also co-founded Cognilytica, an AI research, education, and advisory firm. The four podcast hosts discuss a variety of topics, including the origins of the AI Today podcast, AI trends in industry and business, AI winters, and the importance of education.

    Related Links

    CPMAI Methodology: https://www.cognilytica.com/cpmai/

    Cognilytica website: https://www.cognilytica.com/

    AI in Government community: https://www.aiingovernment.com/

    Cognilytica: @Cognilytica

    Kathleen Walch: @Kath0134

    Ron Schmelzer: @rschmelzer

  • Andy and Dave discuss the latest in AI news, including a story from MIT Technology Review (which echoes observations made previously on AI with AI) that “hundreds of AI tools have been built to catch COVID. None of them helped.” DeepMind has used its AlphaFold program to identify the structure for 98.5 percent of roughly 20,000 human proteins and will make the information publicly available. The Pentagon makes use of machine learning algorithms to create decision space in the latest of its Global Information Dominance Experiments. An Australian court rules that AI systems can be “inventors” under patent law (but not “owners”), and South Africa issues the world’s first patent to an “AI System.” The United States Special Operations Command put 300 of its personnel through a unique six-week crash course in AI, featuring leaders such as former Google CEO Eric Schmidt and former Defense Secretary Ash Carter. And President Biden nominates Stanford professor Ramin Toloui, who has experience with AI technologies and impacts, as an Assistant Secretary of State for business. In research, DeepMind develops agents capable of “open-ended learning” in XLand, an environment with diverse tasks and challenges. A survey from the Journal of AI Research finds that AI researchers have varying amounts of trust in different organizations, companies, and governments. The Journal of Strategic Studies dedicates an issue to Emerging Technologies, with free access. Mine Cetinkaya-Rundel and Johanna Hardin make Introduction to Modern Statistics openly accessible, with an option to purchase a print version (with proceeds going to OpenIntro, a US-based nonprofit). And Iyad Rahwan curates a collection of evil AI cartoons.

    Follow the link below to visit our website and explore the links mentioned in the episode.

    https://www.cna.org/CAAI/audio-video
  • Andy and Dave kick off Season 4.0 of AI with AI with a discussion on social media bots. CNA colleagues Meg McBride and Kasey Stricklin join to discuss the results of their recent research efforts, in which they explored the national security implications of social media bots. They describe the types of activities that social media bots engage in (distributing, amplifying, distorting, hijacking, flooding, and fracturing), how these activities might evolve in the near future, the legal frameworks (or lack thereof), and the implications for US special operations forces and the broader national security community.

    Follow the link below to visit our website and explore the links mentioned in the episode.

    https://www.cna.org/CAAI/audio-video
  • Andy and Dave discuss the latest in AI news and research, including the new DARPA FENCE program (Fast Event-based Neuromorphic Camera and Electronics), which seeks to create event-based cameras that only focus on pixels that have changed in a scene (a toy sketch of the event-camera idea follows this summary). NIST proposes an approach for reducing the risk of bias in AI and invites the public to comment and help improve it. Researchers from the University of Colorado, Boulder use a machine learning model to learn physical properties in electronics building blocks (such as clumps of silicon and germanium atoms), as a way to predict how larger electronics components will work or fail. Researchers in South Korea create an artificial skin that mimics human tactile recognition and couple it with a deep learning algorithm to classify surface structures (with an accuracy of 99.1%). A survey from IE University shows, among other things, that 75% of people surveyed in China support replacing parliamentarians with AI, while in the US, 60% were opposed to the idea. A scientist uses machine learning to learn Rembrandt’s style and then recreate missing pieces of the painter’s “The Night Watch.” Researchers at Harvard, San Diego, Fujitsu, and MIT present methodical research demonstrating how classification neural networks are susceptible to small 2D transformations and shifts, image crops, and changes in object colors. The GAO releases a report on Facial Recognition Technology, surveying 42 federal agencies, and finds a general lack of accountability in the use of the technology. The WHO releases a report on Ethics and Governance of AI for Health. In rebuttal to DeepMind’s “Reward is enough” paper, Roitblat and Byrnes pen separate essays on why “Reward is not enough.” An open-access book by Wang and Barabási looks at the Science of Science. Julia Schneider and Lena Ziyal join forces to provide a comical essay on AI: We Need to Talk, AI. And the National Security Commission on AI holds an all-day summit on Global Emerging Technology.
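    A toy sketch of the event-camera idea (illustrative assumptions throughout; not DARPA’s design): instead of sending full frames, report only the pixels whose brightness changed beyond a threshold, with a polarity for each change.

    ```python
    # Toy event-camera model: emit (row, col, polarity) only for changed pixels.
    import numpy as np

    def events(prev_frame, frame, threshold=0.2):
        """Compare log intensities and return events for significant changes."""
        diff = np.log1p(frame.astype(float)) - np.log1p(prev_frame.astype(float))
        rows, cols = np.nonzero(np.abs(diff) > threshold)
        polarity = np.sign(diff[rows, cols]).astype(int)  # +1 brighter, -1 darker
        return list(zip(rows, cols, polarity))

    rng = np.random.default_rng(1)
    f0 = rng.integers(0, 255, size=(4, 4))
    f1 = f0.copy()
    f1[2, 3] = 255          # only one pixel changes between frames
    print(events(f0, f1))   # -> a single event, not a full frame
    ```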

    Follow the link below to visit our website and explore the links mentioned in the episode.

    https://www.cna.org/CAAI/audio-video
  • In COVID-related AI news, Andy and Dave discuss survey results from Algorithmia, which show that IT directors at large companies are looking to spend more money on AI/ML projects due to the pandemic. In regular AI news, the bipartisan Future of Defense Task Force releases its 2020 report, which includes the suggestion of using the Manhattan Project as a model to develop AI technologies. The US and UK sign an agreement to work together on trustworthy AI. Facebook AI releases Dynabench as a way to dynamically benchmark the performance of machine learning algorithms. Amsterdam and Helsinki launch AI registers that explain how they use algorithms, in an effort to increase transparency. In research, the Allen Institute for AI, University of Washington, and University of North Carolina publish research on X-LXMERT (learning cross-modality encoder representations from transformers), which trains a transformer on both text and images so that it can generate images from scratch given a description (e.g., “a large clock tower in the middle of a town”). Researchers at Swarthmore College and Los Alamos National Labs demonstrate the challenges that neural networks of various sizes have in learning Conway’s Game of Life (a minimal sketch of the Game of Life update rule follows this summary). Maria Jeansson, Claudio Sanna, and Antoine Cully create a stunning visual infographic on “automated futures” technologies. And Joshua Epstein, a longtime expert in agent-based modeling, delivers the European Social Simulation Association Award keynote speech.
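    For context on what the networks had to learn, the complete update rule fits in a few lines (a minimal NumPy sketch, not the paper’s code): every cell counts its eight neighbors, and survival or birth depends only on that count.

    ```python
    # One step of Conway's Game of Life on a toroidal grid.
    import numpy as np

    def life_step(grid):
        """Apply one Game of Life update to a 2-D binary grid (wrapping edges)."""
        # Count the 8 neighbors of every cell by summing shifted copies.
        neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                        if (dy, dx) != (0, 0))
        # Alive next step: exactly 3 neighbors, or 2 neighbors and alive now.
        return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

    glider = np.zeros((8, 8), dtype=int)
    glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
    print(life_step(glider))
    ```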

  • Andy and Dave discuss the latest in AI news, including a report that the Israel Defense Forces used a swarm of small drones in mid-May in Gaza to locate, identify, and attack Hamas militants, using Thor, a 9-kilogram quadrotor drone. A paper in the Journal of the American Medical Association examines an early warning system for sepsis and finds that it misses most cases (67%) and frequently issues false alarms (results that the developer contests). A new bill, the Consumer Safety Technology Act, directs the US Consumer Product Safety Commission to run a pilot program to use AI to help in safety inspections. A survey from FICO on The State of Responsible AI (2021) shows, among other things, a disinterest in the ethical and responsible use of AI among business leaders (with 65% of companies saying they can’t explain how specific AI model predictions are made, and 22% of companies having an AI ethics board to consider questions on AI ethics and fairness). In a similar vein, a survey from the Pew Research Center and Elon University’s Imagining the Internet Center finds that 68% of respondents (from across 602 leaders in the AI field) believe that AI ethical principles will NOT be employed by most AI systems within the next decade; the survey includes a summary of the respondents’ worries and hopes, as well as some additional commentary. GitHub partners with OpenAI to launch Copilot, a “programming partner” that uses contextual cues to suggest new code. Researchers from Stanford University, UC San Diego, and MIT introduce Physion, a visual and physical prediction benchmark that measures predictions about commonplace real-world physical events (such as objects colliding, dropping, rolling, or toppling like dominoes). CSET releases a report on Machine Learning and Cybersecurity: Hype and Reality, finding that it is unlikely that machine learning will fundamentally transform cyber defense. Bengio, LeCun, and Hinton join together to pen a white paper on the role of deep learning in AI, not surprisingly eschewing the need for symbolic systems. And Aston Zhang, Zachary C. Lipton, and Alexander J. Smola release the latest version of Dive into Deep Learning, now over 1,000 pages and living only as an online version.

    Follow the link below to visit our website and explore the links mentioned in the episode.

    https://www.cna.org/CAAI/audio-video