Episodes

  • For the final (for now?) episode of AI with AI, Andy and Dave discuss the latest in AI news and research, including a political declaration from the US Department of State on the responsible military use of AI and autonomy. NATO begins work on an AI certification standard. The IEEE introduces a new program that provides free access to AI ethics and governance standards. Reported in February, but performed in December, a joint Department of Defense team performed 12 flight tests (over 17 hours) in which AI agents piloted Lockheed Martin’s X-62A VISTA, an F-16 variant. Andy provides a rundown of a large number of recent ChatGPT-related stories. Wolfram “explains” how ChatGPT works. Paul Scharre publishes Four Battlegrounds: Power in the Age of AI. And to come full circle: we began this podcast 6 years ago with the story of AlphaGo beating the world champion, so we close the podcast with news that a non-professional Go player, Kellin Pelrine, beat a top AI system 14 games to 1, having discovered a “not super-difficult” method for humans to beat the machines. A heartfelt thanks to you all for listening over the years!

  • Andy and Dave discuss the latest in AI news and research, including the update of the Department of Defense Directive 3000.09 on Autonomy in Weapon Systems. NIST releases the first version of its AI Risk Management Framework. The National AI Research Resource (NAIRR) Task Force publishes its final report, in which it details its plans for a national research infrastructure, as well as its request for $2.6 billion over 6 years to fund the initiatives. DARPA announces the Autonomous Multi-domain Adaptive Swarms-of-Swarms (AMASS) program, a much larger effort (aiming for thousands of autonomous entities) than its previous OFFSET program. And finally, from the Naval Postgraduate School’s Energy Academic Group, Kristen Fletcher and Marina Lesse join to discuss their research and efforts in autonomous systems and maritime law and policy, to include a discussion about the DoDD 3000.09 update and the high-altitude balloon incident.

    https://www.cna.org/our-media/podcasts/ai-with-ai


  • Andy and Dave discuss the latest in AI news and research, starting with an education program that teaches US Air Force personnel the fundamentals of AI, tailored to three audiences: leaders, developers, and users. The US Equal Employment Opportunity Commission unveils its draft Strategic Enforcement Plan to target AI-based hiring bias. The US Department of State establishes the Office of the Special Envoy for Critical and Emerging Technology to bring “additional technology policy expertise, diplomatic leadership, and strategic direction to the Department’s approach to critical and emerging technologies.” Google calls in its founders, Larry Page and Sergey Brin, to help it respond to the potential threat from ChatGPT and other AI technology. Researchers from Northwestern University publish research demonstrating that ChatGPT can write fake research-paper abstracts that pass plagiarism checkers, and that human reviewers were only able to correctly identify 68% of the generated abstracts. Wolfram publishes an essay on a way to combine the computational powers of ChatGPT with Wolfram|Alpha. CheckPoint Research demonstrates how cybercriminals can use ChatGPT for nefarious exploits (including people without any experience in creating malicious tools). Researchers at Carnegie Mellon demonstrate that full-body tracking is now possible using only WiFi signals, with performance comparable to image-based approaches. Microsoft introduces VALL-E, a text-to-speech AI model that can mimic anyone’s voice from only three seconds of sample audio. The Cambridge Handbook of Responsible AI is the book of the week, with numerous essays on the philosophical, ethical, legal, and societal challenges that AI brings; Cambridge has made the book open-access online. And finally, Sam Bendett joins for an update on the latest AI and autonomy-related information from Russia and Ukraine.

  • Andy and Dave discuss the latest in AI and autonomy news and research, including a report from Stanford’s Institute for Human-Centered AI that assesses progress (or lack thereof) in implementing the three pillars of America’s strategy for AI innovation. The Department of Energy is offering a total of $33M for research in leveraging AI/ML for nuclear fusion. China’s Navy appears to have launched a naval mothership for aerial drones. China is also set to introduce regulation on “deepfakes,” requiring users to give consent and prohibiting the technology for fake news, among many other things. Xiamen University and other researchers publish a “multidisciplinary open peer review dataset” (MOPRD), aiming to provide ways to automate the peer-review process. Google executives issue a “code red” for Google’s search business over the success of OpenAI’s ChatGPT. New York City schools have blocked student and teacher access to ChatGPT unless it involves the study of the technology itself. Microsoft plans to launch a version of Bing that integrates ChatGPT into its answers. And the International Conference on Machine Learning bans authors from using AI tools like ChatGPT to write scientific papers (though it still allows the use of such systems to “polish” writing). In February, an AI from DoNotPay will likely be the first to represent a defendant in court, telling the defendant what to say and when. In research, the UCLA Departments of Psychology and Statistics demonstrate that analogical reasoning can emerge from large language models such as GPT-3, showing a strong capacity for abstract pattern induction. Research from Google Research, Stanford, UNC Chapel Hill, and DeepMind shows that certain abilities emerge only in large language models with a sufficient number of parameters and a large enough dataset. And finally, John H. Miller publishes Ex Machina through the Santa Fe Institute Press, examining the topic of Coevolving Machines and the Origins of the Social Universe.

    https://www.cna.org/our-media/podcasts/ai-with-ai

  • Andy and Dave discuss the latest in AI news and research, including the release of the US National Defense Authorization Act for FY2023, which includes over 200 mentions of “AI” and many more requirements for the Department of Defense. DoD has also awarded its cloud-computing contracts, not to one company, but to four: Amazon, Google, Microsoft, and Oracle. At the end of November, the San Francisco Board of Supervisors voted to allow the police force to use robots to administer deadly force; however, after a nearly immediate backlash from a “No Killer Robots” campaign, the board passed a revised version of the policy in early December that prohibits police from using robots to kill people. Israeli company Elbit unveils its LANIUS drone, a “drone-based loitering munition” that can carry lethal or non-lethal payloads and appears to have many functions similar to the “slaughterbots,” except for autonomous targeting. Neuralink shows the latest updates on its research into a brain-chip interface for humans, with demonstrations of a monkey manipulating a mouse cursor with its thoughts; the company also faces a federal investigation into possible animal-welfare violations. DeepMind publishes AlphaCode in Science, a story we covered back in February. DeepMind also introduces DeepNash, an autonomous agent that can play Stratego. OpenAI unleashes ChatGPT, a spin-off of GPT-3 optimized for answering questions through back-and-forth dialogue. Meanwhile, Stack Overflow, a website for programmers, temporarily bans users from sharing responses generated by ChatGPT, because the output of the algorithm might look good but has “a high rate of being incorrect.” Researchers at the Weizmann Institute of Science demonstrate that, with a simple neural network, it is possible to reconstruct a “large portion” of the actual training samples. Nomic provides an interactive map to explore over 6M images from Stable Diffusion. Steve Coulson creates “AI-assisted comics” using Midjourney. Stay tuned for AI Debate 3 on 23 December 2022. And the video of the week, from Ricard Solé at the Santa Fe Institute, explores mapping the cognition space of liquid and solid brains.

    https://www.cna.org/our-media/podcasts/ai-with-ai

  • Andy and Dave discuss the latest in AI news and research, including the introduction of a lawsuit against Microsoft, GitHub, and OpenAI for allegedly violating copyright law by reproducing open-source code using AI. The Texas Attorney General files a lawsuit against Google alleging unlawful capture and use of the biometric data of Texans without their consent. DARPA flies the final flight of ALIAS, an autonomous system outfitted on a UH-60 Black Hawk. And Rafael’s DRONE DOME counter-UAS system wins Pentagon certification. In research, Meta publishes work on Cicero, an AI agent that combines large language models with strategic reasoning to achieve human-level performance in Diplomacy. Meta researchers also publish work on ESMFold, an AI algorithm that predicts the structures of some 600 million proteins, “mostly unknown.” And Meta also releases (then takes down, due to misuse) Galactica, a 120B-parameter language model for scientific papers. In a similar but less turbulent vein, Explainpaper lets users upload a paper, highlight confusing text, and ask queries to get explanations. CRC Press publishes online for free Data Science and Machine Learning: Mathematical and Statistical Methods, a thorough text at the upper-level undergraduate or graduate level. And finally, the video of the week features Andrew Pickering, Professor Emeritus of sociology and philosophy at the University of Exeter, UK, with a video on the cybernetic brain and his book of the same name, published in 2011.

    https://www.cna.org/our-media/podcasts/ai-with-ai


  • Andy and Dave once again welcome Sam Bendett, research analyst with CNA’s Russia Studies Program, to the podcast to discuss the latest unmanned and autonomous systems news from the Russia-Ukraine conflict. The group discusses the use and role of commercial quadcopters, the recent Black Sea incident involving unmanned systems, and the supply of Iranian systems to Russia. They also discuss the Wagner Group’s research and development center and its potential role in the conflict.

    Will Ukraine deploy lethal autonomous drones against Russia? https://www.newscientist.com/article/2344966-will-ukraine-deploy-lethal-autonomous-drones-against-russia/

    PMC Wagner Center: https://www.euronews.com/2022/11/04/russias-wagner-paramilitary-group-opens-first-official-hq-in-st-petersburg

    Russia's Lancet: https://www.forbes.com/sites/davidhambling/2022/11/04/russian-videos-reveal-new-details-of-loitering-munitions/

    Coordinated drone attack at Sevastopol: https://defense-update.com/20221030_coordinated-drone-attack-targets-the-russian-black-sea-fleet-at-sevastopol.html

    Iranian supply of drones to Russia: https://www.npr.org/2022/11/05/1134523148/ukraine-russia-war-iran-drones

    Russia's "brain drain" problem: https://www.maravipost.com/russia-vs-ukraine-the-major-brain-drain-amid-conflict/

  • Andy and Dave discuss the latest in AI-related news and research, including a bill from the EU that will make it easier for people to sue AI companies for harm or damages caused by AI-related technologies. The US Office of Science and Technology Policy releases a Blueprint for an AI Bill of Rights, which further lays the groundwork for potential legislation. The US signs the AI Training for the Acquisition Workforce Act into law, requiring federal acquisition officials to receive training on AI and requiring OMB to work with GSA to develop the curriculum. Various top robotics companies pledge not to add weapons to their technologies and to work actively to prevent their robots from being used for such purposes. Tesla reveals its Optimus robot at its AI Day. DARPA will hold a proposal session on 14 November for its AI Reinforcements effort. OpenAI makes DALL-E available to everybody, and Playground offers access to both DALL-E and Stable Diffusion. OpenAI also makes available the results of an NLP Community Metasurvey, conducted with New York University, providing AI researchers’ views on a variety of AI-related efforts and trends. And Nathan Benaich and Ian Hogarth release the State of AI Report 2022, which covers everything from research to politics and safety, as well as some specific predictions for 2023. In research, DeepMind uses AlphaZero to explore matrix multiplication and discovers a slightly faster algorithm for multiplying 4x4 matrices. Two research efforts look at turning text into video: Meta discusses Make-A-Video, which turns text prompts into video by leveraging text-to-image generators like DALL-E, and Google Brain discusses Imagen Video (along with Phenaki, which produces long videos from a sequence of text prompts). Foundations of Robotics is the open-access book of the week, from Damith Herath and David St-Onge. And the video of the week addresses AI and the application of AI in force structure, with LtGen (ret) Groen, Dr. Sam Tangredi, and Mr. Brett Vaughan joining the discussion for a symposium at the US Naval Institute.

  • Andy and Dave discuss the latest in AI news and research, starting with a publication from the UK’s National Cyber Security Centre, providing a set of security principles for developers implementing machine learning models. Gartner publishes the 2022 update to its “AI Hype Cycle,” which qualitatively plots the position of various AI efforts along the hype cycle. PromptBase opens its doors, promising to provide users with better “prompts” for text-to-image generators (such as DALL-E) to generate “optimal images.” Researchers explore the properties of vanadium dioxide (VO2), which demonstrates volatile memory-like behavior under certain conditions. Meta AI announces a nascent ability to decode speech from a person’s brain activity without surgery (using EEG and MEG). Unitree Robotics, a Chinese tech company, is producing its AlienGo robotic dog, which can carry up to 11 pounds and perform other actions. Researchers at the University of Geneva demonstrate that transformers can build world models with fewer samples, for example generating “pixel perfect” predictions of Pong after 120 games of training. DeepMind demonstrates the ability to teach a team of agents to play soccer by controlling them at the level of joint torques and combining that with longer-term goal-directed behavior, with the agents demonstrating jostling for the ball and other behaviors. Researchers at the University of Illinois Urbana-Champaign and MIT demonstrate a composable diffusion model to tweak and improve the output of text-to-image transformers. Google Research publishes results on AudioLM, which generates “natural and coherent continuations” given short audio prompts. And Michael Cohen, Marcus Hutter, and Michael Osborne publish a paper in AI Magazine arguing that dire predictions about the threat of advanced AI may not have gone far enough in their warnings, offering a series of assumptions on which their arguments depend.

    https://www.cna.org/our-media/podcasts/ai-with-ai

  • Andy and Dave discuss the latest in AI news and research, starting with DARPA moving into Phase 2 of its No Manning Required Ship (NOMARS) program, having selected Serco Inc. for its Defiant ship design. The UK releases a roadmap on automated vehicles, Connected & Automated Mobility 2025, and describes new legislation that will place liability for the actions of self-driving vehicles onto manufacturers, not the occupants. The DOD’s Chief Digital and AI Office is preparing to roll out Tradewinds, an open solutions marketplace geared toward identifying new technologies and capabilities. The US bans NVIDIA and AMD from selling or exporting certain types of GPUs (mostly for high-end servers) to China and Russia. A report in Nature examines the “reproducibility crisis” involving machine learning in scientific articles, identifying eight types of “data leaks” in research that give cause for concern. Google introduces a new AI image noise reduction tool that greatly advances the state of the art for low-light and low-resolution images, using RawNeRF, which applies the earlier neural radiance fields approach to raw image data. Hakwan Lau and Oxford University Press make available for free In Consciousness We Trust: The Cognitive Neuroscience of Subjective Experience. And Sam Bendett joins Andy and Dave to discuss the latest from Russia’s Army 2022 Expo and other recent developments around the globe.

    https://www.cna.org/our-media/podcasts/ai-with-ai

  • Andy and Dave discuss the latest in AI and autonomy news and research, including an announcement that the Federal Trade Commission is exploring rules for cracking down on harmful commercial surveillance and lax data security, with the public having an opportunity to share input during a virtual public forum on 8 September 2022. The Electronic Privacy Information Center (EPIC), with help from Caroline Kraczon, releases The State of State AI Policy, a catalog of AI-related bills that state and local governments passed, introduced, or saw fail during the 2021-2022 legislative session. In robotics, Xiaomi introduces CyberOne, a 5-foot 9-inch robot that can identify “85 types of environmental sounds and 45 classifications of human emotions.” Meanwhile, at the recent Russian arms fair Army-2022, a developer showed off a robot dog with a rocket-propelled grenade strapped to its back. NIST releases the second draft of its AI Risk Management Framework, making it available for review and comment. DARPA launches the SocialCyber project, a hybrid-AI project aimed at helping to protect the integrity of open-source code. BigScience launches BLOOM (BigScience Large Open-science Open-access Multilingual Language Model), a “bigger than GPT-3” model covering 46 languages, created by a group of over 1,000 AI researchers, that anyone can download and tinker with for free. Researchers at MIT develop artificial synapses that shuttle protons, resulting in synapses 10,000 times faster than biological ones. China’s Comprehensive National Science Center claims that it has developed “mind-reading AI” capable of measuring loyalty to the Chinese Communist Party. Researchers at the University of Sydney demonstrate that human brains detect deepfakes better than people’s conscious judgments, by examining results directly from neural activity. Researchers at the University of Glasgow combine AI with human vision to see around corners, reconstructing 16x16-pixel images of simple objects that the observer could not directly see. Google AI publishes research on Minerva, using language models to solve quantitative reasoning problems and dramatically improving on the state of the art. Researchers from MIT, Columbia, Harvard, and Waterloo publish work on a neural network that solves, explains, and generates university math problems “at a human level.” CSET makes available the Country Activity Tracker for AI, an interactive tool on tech competitiveness and collaboration. And a group of researchers at UC Merced’s Cognitive and Information Sciences Program make available Neural Networks in Cognitive Science.

    https://www.cna.org/our-media/podcasts/ai-with-ai

  • Andy and Dave discuss the latest in AI news and research, including an announcement from DeepMind that it is freely providing a database of 200+ million protein structures as predicted by AlphaFold. Researchers at the Max Planck Institute for Intelligent Systems demonstrate how a robot dog can learn to walk in about one hour using a Bayesian optimization algorithm. A chess-playing robot breaks the finger of a seven-year-old boy during a chess match in Moscow. A bill with the Senate Armed Services Committee would require the Department of Defense to accelerate the fielding of new technology to defeat drone swarms. The Chief of Naval Operations Navigation Plan 2022 aims to add 150 uncrewed vessels by 2045. The text-to-image transformer DALL-E is now available in beta. Researchers at Columbia University use an algorithm to identify possible state variables from the observation of systems (such as a double pendulum) and discover “alternate physics”; the algorithm discovers the intrinsic dimension of the observed dynamics and identifies a candidate set of state variables, but in most cases, the scientists found it difficult (if not impossible) to decode those variables to known phenomena. Wolfram Media and Etienne Bernard make Introduction to Machine Learning: Mathematica available for free. And Jeff Edmonds and Sam Bendett join for a discussion on their latest report, Russian Military Autonomy in Ukraine: Four Months In – a closer look at the use of unmanned systems by both Russia and Ukraine.

    https://www.cna.org/our-media/podcasts/ai-with-ai

  • Dr. Anya Fink from CNA’s Russia Studies program joins the podcast to discuss the impacts of global sanctions on Russia’s technology and AI sector.

    Report

    CNA: A Technological Divorce: The impact of sanctions and the end of cooperation on Russia’s technology and AI sector.

  • Andy and Dave discuss the latest in AI news and research, including an update from DARPA on its Machine Common Sense program, demonstrating rapidly adapting to changing terrain, carrying dynamic loads, and understanding how to grasp objects [0:55]. The Israeli military fields new tech from Camero-Tech that allows operators to ‘see through walls,’ using pulse-based ultra-wideband micro-power radar in combination with an AI-based algorithm for tracking live targets [5:01]. In autonomous shipping [8:13], the Suzaka, a cargo ship powered by Orca AI, makes a nearly 500-mile voyage “without human intervention” for 99% of the trip; the Prism Courage sails from the Gulf of Mexico to South Korea “controlled mostly” by HiNAS 2.0, a system by Avikus, a subsidiary of Hyundai; and Promare’s and IBM’s Mayflower Autonomous Ship travels from the UK to Nova Scotia. In large language models [10:09], a Chinese research team unveils a 174 trillion parameter model, Bagualu (‘alchemist pot’) and claims it runs an AI model as sophisticated as a human brain (not quite, though); Meta releases the largest open-source AI language model, with OPT-66B, a 66 billion parameter model; and Russia’s Yandex opens its 100 billion parameters YaLM to public access. Researchers from the University of Chicago publish a model that can predict future crimes “one week in advance with about 90% accuracy” (referring to general crime levels, not specific people and exact locations), and also demonstrate the potential effects of bias in police response and enforcement [13:32]. In a similar vein, researchers from Berkeley, MIT, and Oxford publish attempts to forecast future world events using the neural network system Autocast, and show that forecasting performance still comes in far below a human expertise baseline [16:37]. Angelo Cangelosi and Minoru Asada provide the (graduate) book of the week, with Cognitive Robotics.

  • Andy and Dave discuss the latest in AI news and research, starting with the Department of Defense releasing its Responsible AI Strategy. In the UK, the Ministry of Defence publishes its Defence AI Strategy. The Federal Trade Commission warns policymakers about relying on AI to combat online problems and instead urges them to develop legal frameworks to ensure AI tools do not cause additional harm. YouTuber Yannic Kilcher trains an AI on 4chan’s “infamously toxic” Politically Incorrect board, creating a predictably toxic bot, GPT-4chan; he then uses the bot to generate 15,000 posts on the board, quickly drawing condemnation from the academic community. Google suspends and then fires an engineer who claimed that one of its chatbots, LaMDA, had achieved sentience; former Google employees Gebru and Mitchell write an opinion piece saying they warned this would happen. For the Fun Site of the Week, a mini version of DALL-E comes to Hugging Face. And finally, IBM researcher Kush Varshney joins Andy and Dave to discuss his book, Trustworthy Machine Learning, which provides AI researchers with practical tools and concepts for developing machine learning systems.

    Visit CNA.org to explore the links mentioned in this episode.

  • CNA colleagues Kaia Haney and Heather Roff join Andy and Dave to discuss Responsible AI. They discuss the recent Inclusive National Security seminar on AI and National Security: Gender, Race, and Algorithms. The keynote speaker, Elizabeth Adams, spoke on the challenges that society faces in integrating AI technologies in an inclusive fashion, and she identified ways in which consumers of AI-enabled products can ask questions and engage on the topic of inclusivity and bias. The group also discusses the many challenges that organizations face in operationalizing these ideas, including a revisit of the findings from recent medical research, which found that an algorithm was able to identify the race of a subject from X-rays and CAT scans, even with identifying features removed.

    Inclusive National Security Series: AI and National Security: Gender, Race and Algorithms

    Inclusive National Security webpage

    Sign up for the InclusiveNatSec mailing list here.

  • Andy and Dave discuss the latest in AI news and research, starting with an announcement that DoD will be updating its Directive 3000.09 on Autonomy in Weapon Systems, with the new Emerging Capabilities Policy Office leading the way [1:25]. The DoD names Diane Staheli as the new chief for Responsible AI [5:19]. NATO launches an AI strategic initiative, Horizon Scanning, to better understand AI and its potential military implications [6:31]. China unveils an “autonomous drone carrier” ship, though Dave questions the use of the terms “unmanned” and “autonomous” [8:59]. Stanford University and its Human-Centered AI institute build on their initiative for foundation models by releasing a call to the community to develop norms on the release of foundation models [10:42]. DECIDE-AI continues to develop its reporting guidelines for early-stage clinical evaluation of AI decision-support systems [14:39]. The Army successfully demonstrates four waves of seven drones, launched by a single operator, during EDGE 22 [18:31]. Researchers from Zhejiang University and the Hong Kong University of Science and Technology demonstrate a swarm of fully autonomous micro flying robots, able to navigate and communicate as a swarm with fully onboard perception, localization, and control [19:58]. Google Research introduces a new text-to-image generator, Imagen, which uses diffusion models to increase the size and photorealism of an image [24:20]. Researchers discover that an AI algorithm can identify race from X-ray and CT images, even when correcting for variations such as body-mass index, but can’t explain why or how [31:21]. And Sonantic uses AI to create the voice lines for Val Kilmer in the new movie Top Gun: Maverick [34:18].

    RSVP for AI and National Security: Gender, Race, and Algorithms at 12:00 pm on June 7.

  • Andy and Dave discuss the latest in AI news and research, starting with the European Parliament adopting the final recommendations of the Special Committee on AI in a Digital Age (AIDA), finding that the EU should not always regulate AI as a technology but should use intervention proportionate to the type of risk, among other recommendations [1:31]. Synchron enrolls the first patient in the US clinical trial of its brain-computer interface, Stentrode, which does not require drilling into the skull or open brain surgery; it is, at present, the only company to receive FDA approval to conduct clinical trials of a permanently implanted BCI [4:14]. Meta AI releases its 175B-parameter transformer, Open Pre-trained Transformers (OPT), for open use, including the codebase used to train and deploy the model and the team’s logbook of issues and challenges [6:25]. In research, DeepMind introduces Gato, a “single generalist agent” that, with a single set of weights, is able to complete over 600 tasks, including chatting, playing Atari games, captioning images, and stacking blocks with a robotic arm; one DeepMind scientist used the results to claim that “the game is over” and it’s all about scale now, to which others respond that using massive amounts of data as a substitute for intelligence is perhaps “alt intelligence” [8:48]. In the opinion essay of the week, Steve Johnson asks, “AI is mastering language, should we trust what it says?” [18:07]. Daedalus’s Spring 2022 issue focuses on AI and Society, with nearly 400 pages and over 25 essays on a variety of AI-related topics [19:06]. And finally, Professor Ido Kanter from Bar-Ilan University joins to discuss his latest neuroscience research, which suggests a new model for how neurons learn, using dendritic branches [20:48].

    RSVP for AI and National Security: Gender, Race, and Algorithms at 12:00 pm on June 7.

    Apply: Sr. Research Specialist (Artificial Intelligence Research) - ESDA Division

    Further Reading

  • Andy and Dave discuss the latest in AI news and research, including a report from the Government Accountability Office recommending that the Department of Defense improve its AI strategies and other AI-related guidance [1:25]. Another GAO report finds that the Navy should improve its approach to uncrewed maritime systems, particularly its lack of accounting for the full costs to develop and operate such systems, and recommends the Navy establish an “entity” with oversight of the portfolio [4:01]. The Army is set to launch a swarm of 30 small drones during the 2022 Experimental Demonstration Gateway Exercise (EDGE 22), which will be the largest group of air-launched effects the Army has tested [5:55]. DoD announces its new Chief Digital and AI Officer, Dr. Craig Martell, former head of machine learning at Lyft and formerly of the Naval Postgraduate School [7:47]. And the National Geospatial-Intelligence Agency (NGA) takes over operational control of Project Maven’s GEOINT AI services [9:55]. Researchers from Princeton and the University of Chicago create a deep learning model of “superficial face judgments,” that is, how humans form impressions of what people are like based on their faces; the researchers note that their dataset deliberately reflects bias [12:05]. Researchers from MIT, Cornell, Google, and Microsoft present STEGO (self-supervised transformer with energy-based graph optimization), a new method for completely unsupervised label assignment to images, allowing the algorithm to find consistent groupings of labels in a largely automated fashion [18:35]. And elicit.org provides a “research discovery” tool, leveraging GPT-3 to provide insights and ideas on research topics [24:24].

    Careers: https://us61e2.dayforcehcm.com/CandidatePortal/en-US/CNA/Posting/View/1624

    RSVP for AI and National Security: Gender, Race, and Algorithms at 12:00 pm EST on June 7th at https://www.eventbrite.com/e/ai-and-national-security-gender-race-and-algorithms-tickets-332642301077?aff=Podcast