Episodes

  • 00:24:05

    AI with AI: Curiosity Killed the Poison Frog, Part II

    Andy and Dave briefly discuss the results from the Group of Governmental Experts meetings on Lethal Autonomous Weapons Systems in Geneva; the Pentagon releases its Unmanned Systems Integrated Roadmap 2017-2042; Google announces Dataset Search, a curated pool of datasets available on the internet; California endorses a set of 23 AI Principles in conjunction with the Future of Life Institute; and registration for the Neural Information Processing Systems (NIPS) 2018 conference sells out in just under 12 minutes. Researchers at DeepMind announce a Symbol-Concept Association Network (SCAN) for learning abstractions in the visual domain in a way that mimics human vision and word acquisition. DeepMind also presents an approach to "catastrophic forgetting," using a Variational Autoencoder with Shared Embeddings (VASE) method to learn new information while protecting previously learned representations. Researchers from the University of Maryland and Cornell demonstrate the ability to poison the training data set of a neural net image classifier with innocuous poison images. Research from the University of South Australia and Flinders University attempts to link personality with eye movements. Research from OpenAI, Berkeley, and Edinburgh looks at curiosity-driven learning across 54 benchmark environments (including video games and physics engine simulations), showing that agents learn to play many Atari games without using any rewards, that rally-making behavior emerges in two-player Pong, and more. Finally, Andy shares an interactive app that allows users to “play” with a Generative Adversarial Network (GAN) in a browser; “Franken-algorithms” by Andrew Smith is the paper of the week; “Autonomy: The Quest to Build the Driverless Car” by Burns and Shulgan is the book of the week; and for the videos of the week, Major Voke offers thoughts on AI in the Command and Control of Airpower, and Jonathan Nolan releases “Do You Trust This Computer?”

  • 00:30:37

    AI with AI: Curiosity Killed the Poison Frog, Part I

    Andy and Dave briefly discuss the results from the Group of Governmental Experts meetings on Lethal Autonomous Weapons Systems in Geneva; the Pentagon releases its Unmanned Systems Integrated Roadmap 2017-2042; Google announces Dataset Search, a curated pool of datasets available on the internet; California endorses a set of 23 AI Principles in conjunction with the Future of Life Institute; and registration for the Neural Information Processing Systems (NIPS) 2018 conference sells out in just under 12 minutes. Researchers at DeepMind announce a Symbol-Concept Association Network (SCAN) for learning abstractions in the visual domain in a way that mimics human vision and word acquisition. DeepMind also presents an approach to "catastrophic forgetting," using a Variational Autoencoder with Shared Embeddings (VASE) method to learn new information while protecting previously learned representations. Researchers from the University of Maryland and Cornell demonstrate the ability to poison the training data set of a neural net image classifier with innocuous poison images. Research from the University of South Australia and Flinders University attempts to link personality with eye movements. Research from OpenAI, Berkeley, and Edinburgh looks at curiosity-driven learning across 54 benchmark environments (including video games and physics engine simulations), showing that agents learn to play many Atari games without using any rewards, that rally-making behavior emerges in two-player Pong, and more. Finally, Andy shares an interactive app that allows users to “play” with a Generative Adversarial Network (GAN) in a browser; “Franken-algorithms” by Andrew Smith is the paper of the week; “Autonomy: The Quest to Build the Driverless Car” by Burns and Shulgan is the book of the week; and for the videos of the week, Major Voke offers thoughts on AI in the Command and Control of Airpower, and Jonathan Nolan releases “Do You Trust This Computer?”

  • 00:40:13

    AI with AI: There are FOUR ELEPHANTS!

    Andy and Dave discuss the latest developments in OpenAI’s AI team that competed against human players in Dota 2, a team-based tower defense game. Researchers published a method for probing Atari agents to understand where the agents were focusing when learning to play games (and to understand why they are good at games like Space Invaders, but not at Ms. Pac-Man). A DeepMind AI can match health experts when spotting eye diseases from optical coherence tomography (OCT) scans; it uses two networks to segment the problems, which also allows a way for the AI to indicate which portion of the scans prompted the diagnosis. Research from Germany and the UK showed that children may be especially vulnerable to peer pressure from robots; the experiments replicated Asch’s social experiments from the 1950s, but, interestingly, adults did not show the same vulnerability to robot peer pressure. Research from Rosenfeld, Zemel, and Tsotsos showed that “minor” perturbations in images (such as shifting the location of an elephant) can cause misclassifications to occur, again highlighting the potential for failures in image classifiers. Andy recommends “The Seven Tools of Causal Inference with Reflections on Machine Learning” by Pearl; Algorithms for Reinforcement Learning by Szepesvari is available online; Robin Sloan has a novel, Sourdough, with much use of AI and robots; Wolfram has an interview on the computational universe; a new documentary on AI looks at the life and role of Geoffrey Hinton; and Josh Tenenbaum examines the issues of “Growing a Mind in a Machine.”

  • 00:28:55

    AI with AI: Enter the Dragonfly

    In breaking news, Andy and Dave discuss the Convention on Conventional Weapons meeting on lethal autonomous weapons systems (LAWS) at the United Nations, where more than 70 countries are participating in the sixth meeting since 2014. Highlights include the priorities for discussion, as well as the UK delegation's role and position. The Pentagon’s AI programs get a boost in the defense budget. DARPA announces the Automating Scientific Knowledge Extraction (ASKE) project, with the lofty goal of building an AI tool that can automatically generate, test, and refine its own scientific hypotheses. Google employees react to and protest the company’s secret, censored search engine (Dragonfly) for China. The Electronic Frontier Foundation releases a white paper on Mitigating the Risks of Military AI, which includes applications outside of the “kill chain.” And Brookings releases the results of a survey that asks people whether AI technologies should be developed for warfare.

  • 00:41:47

    AI with AI: How I Learned to Stop Worrying and Love AI

    The Director of CNA’s Center for Autonomy and AI, Dr. Larry Lewis, joins Dave for a discussion on understanding and mitigating the risks of using autonomy and AI in war. They discuss some of the commonly voiced risks of autonomy and AI, both as applied to war and in general application, including: that AI will destroy the world; that AI and lethal autonomy are unethical; lack of accountability; and lack of discrimination. Having examined the underpinnings of these risks, Larry and Dave move on to practical descriptions and identifications of the risks of using AI and autonomy in war, including the context of military operations, the supporting institutional development (including materiel, training, and test & evaluation), as well as the law and policy that govern their use. They wrap up with a discussion about the current status of organizations and thought leaders in the Department of Defense and the Department of the Navy.

  • 00:39:57

    AI with AI: I Have No Eyes and I Must Meme

    In breaking news, Andy and Dave discuss the Dota 2 competition between the OpenAI Five team of AIs and a top (99.95th percentile) human team, where the humans won one game in a series of three; the Pentagon signs an $885M AI contract with Booz Allen; MIT builds Cheetah 3, a “blind” robot that has no visual sensors but can climb stairs and maneuver in a space with obstacles; Tencent Machine Learning trains AlexNet in just 4 minutes on ImageNet (breaking the previous record of 11 minutes); researchers at MIT Media Lab have developed a machine-learning model to perceive human emotions; and the 2018 Conference on Uncertainty in AI (UAI) may have been held 7-10 August in Monterey, CA – we’re not certain (but what is certain is that Dave will never tire of these jokes). In other news, IBM Watson reportedly recommended cancer treatments that were “unsafe and incorrect,” and Amazon’s Rekognition software incorrectly identifies 28 lawmakers as crime suspects, about which Andy and Dave yet again highlight the dangerous gap in AI between expectations and reality. Lipton (CMU) and Steinhardt (Stanford) identify “troubling trends” in machine learning research and scientific scholarship. The Institute for Theoretical Physics in Zurich describes SciNet, a neural network that can discover physical concepts (such as the motion of a damped pendulum). A paper by Kott and Perconti makes an empirical assessment of forecasting military technology on the 20-30 year horizon, and finds the forecasts are surprisingly accurate (65-87%). "The Elements of Statistical Learning: Data Mining, Inference, and Prediction" is available online. Andy recommends the Ellison classic story, I Have No Mouth, and I Must Scream, and finally, a video by Percy Liang at Stanford discusses ways of evaluating machine learning for AI.

  • 00:18:42

    AI with AI: People for the Ethical Tasking of AIs

    Continuing in a discussion of recent topics, Andy and Dave discuss research from Johns Hopkins University, which used supervised machine learning to predict toxicity of chemicals (the results of which beat animal tests). DeepMind probes toward general AI by exploring AI’s abstract reasoning capability; in their tests, they found that systems did OK (75% correct) when problems used the same abstract factors, but that AI systems fared very poorly if the testing differed from the training set (even minor variations such as using dark-colored objects instead of light-colored objects) – in a sense, suggesting that deep neural nets cannot “understand” problems they have not been explicitly trained to solve. Research from Spyros Makridakis demonstrated that existing traditional statistical methods outperform a variety of popular machine-learning methods (with better accuracy and lower computation requirements), suggesting the need for better benchmarks and standards when discussing the performance of machine learning methods. Andy and Dave also discuss two reports from the Center for a New American Security, on Technology Roulette and Strategic Competition in an Era of AI, the latter of which highlights that the U.S. has not yet experienced a true “Sputnik moment.” Research from MIT, McGill, and Masdar IST defines and visualizes skill sets required for various occupations, and how these contribute to a growing disparity between high- and low-wage occupations. The conference proceedings of ALIFE 2018 (nearly 700 pages) are available for the 23-27 July event. Finally, The Art of Future Warfare project features a collection of “war stories from the future,” and over 50 videos are available from the 2018 International Joint Conference on AI.

  • 00:34:45

    AI with AI: Mission SHRIMPossible

    In breaking news, Andy and Dave discuss the “Future of Life” pledge that various AI tech leaders have signed, promising not to develop lethal autonomous weapons; DARPA announces its Artificial Intelligence Exploration (AIE) program, to provide “unique funding opportunities;” DARPA also announces a Short-Range Independent Microrobotic Platform (SHRIMP) program, which seeks to develop multi-functional tiny robots for use in natural and critical disaster scenarios; GoodAI announces the finalists in the “General AI Challenge,” which produced a series of conceptual papers; and a report from the UK’s Parliament examines the issues surrounding the government’s use of drones. Then, in deeper topics, Andy and Dave discuss various attempts to use AI to predict the FIFA World Cup 2018 champion (all of which failed), which includes a discussion on the appropriate types of questions to which AI is amenable, and also includes an obligatory Star Trek reference. Baidu announces ClariNet, which performs text-to-speech synthesis within one neural network (as opposed to multiple networks).

  • 00:26:17

    AI with AI: Russian AI Kryptonite

    CNA’s expert on Russian AI and autonomous systems, Samuel Bendett, joins temporary host Larry Lewis (again filling in for Dave and Andy) to discuss Russia’s pursuits with the militarization of AI and autonomy. The Russian Ministry of Defense (MOD) has made no secret of its desire to achieve technological breakthroughs in IT and especially artificial intelligence, marshalling extensive resources for a more organized and streamlined approach to information technology R&D. The MOD is overseeing a significant public-private partnership effort, calling for its military and civilian sectors to work together on information technologies, while hosting high-profile events aiming to foster dialogue between its uniformed and civilian technologists. For example, Russian state corporation Russian Technologies (Rostec), with extensive ties to the nation’s military-industrial complex, has overseen the creation of a company with the ominous name – Kryptonite. The company’s name – the one vulnerability of a superhero – was unlikely to have been picked by accident. Russia’s government is working hard to see that the Russian technology sector can compete with American, Western, and Asian hi-tech leaders. This technology race is only expected to accelerate, and Russian achievements merit close attention.

    Learn more about CNA's Center for Autonomy and Artificial Intelligence at www.cna.org/CAAI.

  • 00:41:54

    AI with AI: Terminator or Data? Policy and Safety for Autonomous Weapons

    This week Andy and Dave take a respite from the world of AI. In the meantime, Larry Lewis hosts Shawn Steene from the Office of the Secretary of Defense. Shawn manages DOD Directive 3000.09 – US military policy on autonomous weapons – and is a member of the US delegation to the UN’s CCW meetings on Lethal Autonomous Weapon Systems (LAWS). Shawn and Larry discuss U.S. policy, what DOD Directive 3000.09 actually means, and how the future of AI could more closely resemble the android Data than SKYNET from the Terminator movies. That leads to a discussion of some common misconceptions about artificial intelligence and autonomy in military applications, and how these misconceptions can manifest themselves in the UN talks. With Data having single-handedly saved the day in the eighth and tenth Star Trek movies (First Contact and Nemesis, respectively), perhaps Star Trek should be required viewing for the next UN meeting in Geneva.

    Larry Lewis is the Director of the Center for Autonomy and Artificial Intelligence at CNA. His areas of expertise include lethal autonomy, reducing civilian casualties, identifying lessons from current operations, security assistance, and counterterrorism.

  • 00:31:00

    AI with AI: Debater of the AI-ncients, Part 2 (Dota 2)

    In the second part of this epic podcast, Andy and Dave continue their discussion with research from MIT, Vienna University of Technology, and Boston University, which uses human brainwaves and hand gestures to instantly correct robot mistakes. The research combines electroencephalogram (EEG, brain signals) and electromyogram (EMG, muscle signals) to allow a human (without training) to provide corrective input to a robot while it performs tasks. On a related topic, MIT’s Picower Institute for Learning and Memory demonstrated the rules for human brain plasticity, by showing that when one synapse connection strengthens, the immediately neighboring synapses weaken; while suspected for some time, this research showed for the first time how this balance works. Then, research from Stanford and Berkeley introduces Taskonomy, a system for disentangling task transfer learning. This structured approach maps out 25 different visual tasks to identify the conditions under which transfer learning works from one task to another; such a structure would allow data in some dimensions to compensate for the lack of data in other dimensions. Next up, OpenAI has developed an AI tool for spotting photoshopped photos, by examining three types of manipulation techniques (splicing, copy-move, and removal), as well as local noise features. Researchers at Stanford have used machine learning to recreate the periodic table of elements after providing the system with a database of chemical formulae. And finally, Andy and Dave wrap up with a selection of papers and other media, including CNAS’s AI: What Every Policymaker Needs to Know; a beautifully done tutorial on machine learning; The Quest for Artificial Intelligence by Nilsson; Non Serviam by Lem; IPI’s Governing AI; the US Congressional Hearing on the Power of AI; and Twitch Plays Robotics.

  • 00:36:24

    AI with AI: Debater of the AI-ncients, Part 1 (Dota)

    In breaking news, Andy and Dave discuss a potentially groundbreaking paper on the scalable training of artificial neural nets with adaptive sparse connectivity; MIT researchers unveil the Navion chip, which is only 20 square millimeters in size, consumes 24 milliwatts of power, can process real-time camera images at up to 171 frames per second, and can be integrated into drones the size of a fingernail; the Chair of the Armed Services Subcommittee on Emerging Threats and Capabilities convened a roundtable on AI with subject matter experts and industry leaders; the IEEE Standards Association and MIT Media Lab launched the Council on Extended Intelligence (CXI) to build a “new narrative” on autonomous technologies, including three pilot programs, one of which seeks to help individuals “reclaim their digital identity;” and the Foundation for Responsible Robotics, which wants to shape the responsible design and use of robotics, releases a report on Drones in the Service of Society. Then, Andy and Dave discuss IBM’s Project Debater, the follow-on to Watson that engaged in a live, public debate with humans on 18 June. IBM spent 6 years developing Project Debater’s capabilities, producing over 30 technical papers and benchmark datasets, and the system can debate nearly 100 topics. Project Debater uses three pioneering capabilities: data-driven speech writing and delivery, listening comprehension, and the ability to model human dilemmas. Next up, OpenAI announces OpenAI Five, a team of 5 AI algorithms trained to take on a human team in the tower defense game, Dota 2; Andy and Dave discuss the reasons for the impressive achievement, including that the 5 AI networks do not communicate with each other, and that coordination and collaboration naturally emerge from their incentive structures. The system uses 256 Nvidia graphics cards and 128,000 processor cores; it has taken on (and won) a variety of human teams, but OpenAI plans to stream a match against a top Dota 2 team in late July.

  • 00:45:05

    AI with AI: The AIth Sense - I See WiFi People

    In breaking news, Andy and Dave discuss the recently unveiled Wolfram Neural Net Repository with 70 neural net models (as of the podcast recording) accessible in the Wolfram Language; Carnegie Mellon and STRUDEL announce the Code/Natural Language (CoNaLa) Challenge with a focus on Python; Amazon releases its DeepLens video camera, which enables deep learning tools; and the Computer Vision and Pattern Recognition 2018 conference in Salt Lake City. Then, Andy and Dave discuss DeepMind’s Generative Query Network, a framework where machines learn to turn 2D scenes into 3D views, using only their own sensors. MIT’s RF-Pose trains a deep neural net to “see” people through walls by measuring radio frequencies from WiFi devices. Research at the University of Bonn is attempting to train an AI to predict future results based on current observations (with the goal of “seeing” 5 minutes into the future), and a healthcare group at Google Brain has been developing an AI to predict when a patient will die, based on a swath of historical and current medical data. The University of Wyoming announced DeepCube, an “autodidactic iteration” method from McAleer that allows solving a Rubik’s Cube without human knowledge. And finally, Andy and Dave discuss a variety of books and videos, including The Next Step: Exponential Life, The Machine Stops, and a TED Talk from Max Tegmark on getting empowered, not overpowered, by AI.

  • 00:44:00

    AI with AI: How to Train Your DrAIgon (for good, not for bad)

    In recent news, Andy and Dave discuss a Brookings report on the view of AI and robots based on internet search data; a Chatham House report on AI anticipates disruption; Microsoft computes the future with its vision and principles on AI; the first major AI patent filings from DeepMind are revealed; biomimicry returns, with IBM using “analog” synapses to improve neural net implementation and Stanford University researchers developing an artificial sensory nervous system; and Berkeley DeepDrive provides the largest self-driving car dataset for free public download. Next, the topic of “hard exploration games with sparse rewards” returns, with a Deep Curiosity Search approach from the University of Wyoming, where the AI gets more freedom and reward from exploring (“curiosity”) than from performing tasks as dictated by the researchers. From Cognition Expo 18, work from Martinez-Plumed attempts to “Forecast AI,” but largely highlights the challenges in making comparisons due to the neglected, or unreported, aspects of developments, such as the data, human oversight, computing cycles, and much more. From the Google AI Blog, researchers improve deep learning performance by finding and describing the transformation policies of the data, and using that information to increase the amount and diversity of the training dataset. Then, Andy and Dave discuss attempts to use drone surveillance to identify violent individuals (for good reasons only, not for bad ones). And in a more sporty application, “AI enthusiast” Chintan Trivedi describes his efforts to train a bot to play a soccer video game by observing his playing. Finally, Andy recommends an NSF workshop report, the book Artificial Intelligence: Foundations of Computational Agents, Permutation City, and over 100 video hours of the CogX 2018 conference.

  • 00:48:26

    AI with AI: Game of Drones - AI Winter Is Coming

    In breaking news, Andy and Dave discuss Google’s decision not to renew the contract for Project Maven, as well as their AI Principles; the Royal Australian Air Force holds a biennial Air Power Conference with a theme of AI and cyber; the Defense Innovation Unit Experimental (DIUx) releases its 2017 annual report; China holds a Defense Conference on AI in cybersecurity; and Nvidia’s new Xavier chip packs $10k worth of power into a $1299 box. Next, Andy and Dave discuss a benevolent application of adversarial attack methods, with a “privacy filter” for photos that is designed to stop AI face detection (reducing detection from nearly 100 percent to 0.5 percent). MIT used AI in the development of nanoparticles, training neural nets to “learn” how a nanoparticle’s structure affects its behavior. Then the remaining topics dip deep into the philosophical realm, starting with a discussion on empiricism and the limits of gradient descent, and how philosophical concepts of empiricist induction compare with critical rationalism. Next, the topic of a potential AI Winter continues to percolate with a viral blog from Piekniewski, leading into a paper from Berkeley/MIT that finds a 4-15% reduction in accuracy for CIFAR-10 classifiers on a new set of similar training images (bringing into doubt the robustness of these systems). Andy shares a possibly groundbreaking paper on “graph networks” that provides a new conceptual framework for thinking about machine learning. And finally, Andy and Dave close with some media selections, including Blood Music by Greg Bear and Swarm by Frank Schatzing.

  • 00:41:14

    AI with AI: Detective Centaur and the Curse of Footstep Awareness

    Andy and Dave didn’t have time to do a short podcast this week, so they did a long one instead. In breaking news, they discuss the establishment of the Joint Artificial Intelligence Center (JAIC), yet another Tesla autopilot crash, Geurts defending the decision to dissolve the Navy’s Unmanned Systems Office, and Germany’s publication of a paper describing its stance on autonomy in weapon systems. Then, Andy and Dave discuss DeepMind’s approach to using YouTube videos to train an AI to learn “hard exploration games” (with sparse rewards). In another “centaur” example, facial recognition experts perform best when combined with an AI. University of Manchester researchers announce a new footstep-recognition AI system, but Dave pulls a Linus and has a fit of “footstep awareness.” In other recent reports, Andy and Dave discuss another example of biomimicry, where researchers at ETH Zurich have modeled the schooling behavior of fish. And in brain-computer interface research, a noninvasive BCI system was co-trained with tetraplegics to control avatars in a racing game. Finally, they round out the discussion with a mention of ZAK Inc and its purported general AI, a book on How People and Machines are Smarter Together, and a video on deep reinforcement learning.

  • 00:35:10

    AI with AI: Shiny Heart Reflecting in the Dark Lights Up (SHRDLU)

    In breaking news, Andy and Dave discuss a few cracks that seem to be appearing in Google’s Duplex demonstration; more examples of the breaking of Moore’s Law; a Princeton effort to advance the dialogue on AI and ethics; India joins the global AI sabre-rattling; the UK Ministry of Defence launches an AI hub/lab; and the U.S. Navy dissolves its secretary-level unmanned systems office. Andy and Dave then discuss a demonstration of “zero-shot” learning, by which a robot learns to do a task by watching a human perform it once. The work reminds Andy of the early natural language “virtual block world” SHRDLU, from the 1970s. In other news, the research team that designed Libratus (a world-class poker-playing AI) announced they had developed a better AI that, more importantly, is also computationally orders of magnitude less expensive (using a 4-core CPU with 16 GB of memory). Next, researchers at Intel and the University of Illinois Urbana-Champaign have developed a convolutional neural net that significantly improves low-ISO image quality while shooting at faster shutter speeds; Andy and Dave both found the results for improving low-light images to be quite stunning. Finally, after yet another round of a generative adversarial example (in which Dave predicts the creation of a new field), Andy closes with some recommendations on papers, books, and videos, including Galatea 2.2 and The Space of Possible Minds.

  • 00:40:06

    AI with AI: Super-AI Reveals Answer to Everything: IDK, LUL

    In a review of the latest news, Andy and Dave discuss: the White House’s “plan” for AI, the departure of employees from Google due to Project Maven, another Tesla crash, the first AI degree for undergraduates at CMU, and Boston Dynamics’ jumping and climbing robots. Next, two AI research topics have implications for neuroscience. First, Andy and Dave discuss AI research at DeepMind, which showed that an AI trained to navigate between two points developed “grid cells,” very similar to those found in the mammalian brain. And second, another finding from DeepMind on “meta-learning” suggests that dopamine in the human brain may have a more integral role in meta-learning than previously thought. In another example of “AI-chemy,” Andy and Dave discuss the looming problem of (lack of) explainability in health care (with implications for many other areas, such as DoD), and they also discuss some recent research on adding an option for an AI to defer a decision with “I Don’t Know” (IDK). After a quick romp through the halls of AI-generated DOOM, the two discuss a recent proof that reveals the fundamental limits of scientific knowledge (so much for super-AIs). And finally, they close with a few media recommendations, including “The Book of Why: The New Science of Cause and Effect.”

  • 00:36:53

    AI with AI: Better Lying Through Alchemy

    In a review of the most recent news, Andy and Dave discuss the latest information on the fatal self-driving Uber accident; the AI community reacts (poorly) to Nature’s announcement of a new closed-access section on machine learning; on-demand self-driving cars will be coming soon to north Dallas; and the Chinese government is adding AI to the high school curriculum with a mandated textbook. For more in-depth topics, Andy and Dave discuss the latest information from DARPA’s Lifelong Learning Machines (L2M) project, which has announced its initial teams and topics, seeking to generate “paradigm-changing approaches” as opposed to incremental improvements. Next, they discuss an experiment from OpenAI that provides visibility into a dialogue between two AIs on a topic, one of which is lying. This discussion segues into recent comparisons of the field of machine learning to the ancient art of alchemy. Dave avoids using the word “alcheneering,” but thinks that “AI-chemy” might be worth considering. Finally, after a discussion on a couple of photography-related developments, they close with a discussion on some papers and videos of interest, including the splash of Google’s new “Turing-test-beating” Duplex assistant for conducting natural conversations over the phone.

  • 00:26:16

    AI with AI: Nuclear War and/or Better French Fries

    Andy and Dave discuss a couple of recent reports and events on AI, including the Sixth International Conference on Learning Representations (ICLR). Next, Edward Ott and fellow researchers have applied machine learning to replicate chaotic attractors, using "reservoir computing." Andy describes the reasons for his excitement in seeing how far out this technique is able to predict a 4th-order nonlinear partial differential equation. Next, Andy and Dave discuss a few adversarial attack-related topics: a single-pixel attack for fooling deep neural network (DNN) image classifiers; an Adversarial Robustness Toolbox from IBM Research Ireland, which provides an open-source software library to help researchers defend DNNs against adversarial attacks; and the susceptibility of the medical field to fraudulent attacks. The BAYOU project takes another step toward giving AI the ability to program new methods for implementing tasks. And Uber Labs releases source code that can train a DNN to play Atari games in about 4 hours on a *single* 48-core modern desktop! Finally, after a review of a few books and videos, including Paul Scharre's new book "Army of None," Andy and Dave conclude with a discussion on potatoes.