Episodes

  • In this term's Strachey Lecture, Professor Monika Henzinger gives an introduction to differential privacy with an emphasis on differentially private algorithms that can handle changing input data. Decisions are increasingly automated using rules that were learnt from personal data. Thus, it is important to guarantee that the privacy of the data is protected during the learning process. To formalize the notion of an algorithm that protects the privacy of its data, differential privacy was introduced. It is a rigorous mathematical definition to analyze the privacy properties of an algorithm – or the lack thereof. In this talk I will give an introduction to differential privacy with an emphasis on differentially private algorithms that can handle changing input data. (An illustrative statement of the formal definition is sketched at the end of this entry.)

    Monika Henzinger is a professor of Computer Science at the Institute of Science and Technology Austria (ISTA). She holds a PhD in computer science from Princeton University (New Jersey, USA), and has been the head of research at Google and a professor of computer science at EPFL and the University of Vienna.
    Monika Henzinger is an ACM and EATCS Fellow and a member of the Austrian Academy of Sciences and the German National Academy of Sciences Leopoldina. She has received several awards, including an honorary doctorate from TU Dortmund University, two ERC Advanced Grants, the Leopoldina Carus Medal, and the Wittgensteinpreis, the highest science award of Austria.

    The Strachey Lectures are generously supported by OxFORD Asset Management.
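
    For readers who want the formal statement behind the abstract, here is the standard (ε-)differential privacy guarantee in a minimal sketch (notation ours, not taken from the lecture): a randomised algorithm M is ε-differentially private if, for any two datasets D and D' differing in one individual's record,

        \[
          \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S]
          \quad\text{for every set of outputs } S .
        \]

    The relaxed (ε, δ) variant adds a small additive slack δ to the right-hand side.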

  • It’s said that Henry Ford’s customers wanted “a faster horse”. If Henry Ford were selling us artificial intelligence today, what would the customer call for, “a smarter human”? That’s certainly the picture of machine intelligence we find in science fiction narratives, but the reality of what we’ve developed is far more mundane.
    Car engines produce prodigious power from petrol. Machine intelligences deliver decisions derived from data. In both cases the scale of consumption enables a speed of operation that is far beyond the capabilities of their natural counterparts. Unfettered energy consumption has consequences in the form of climate change. Does unbridled data consumption also have consequences for us?
    If we devolve decision making to machines, we depend on those machines to accommodate our needs. If we don’t understand how those machines operate, we lose control over our destiny. Much of the debate around AI makes the mistake of seeing machine intelligence as a reflection of our intelligence. In this talk we argue that to control the machine we need to understand the machine, but to understand the machine we first need to understand ourselves.

    Neil Lawrence is the inaugural DeepMind Professor of Machine Learning at the University of Cambridge, where he leads the University’s flagship mission on AI, AI@Cam. He has been working on machine learning models for over 20 years. He recently returned to academia after three years as Director of Machine Learning at Amazon. His main interest is the interaction of machine learning with the physical world. This interest was triggered by deploying machine learning in the African context, where ‘end-to-end’ solutions are normally required. This has inspired new research directions at the interface of machine learning and systems research; this work is funded by a Senior AI Fellowship from the Alan Turing Institute. He is interim chair of the advisory board of the UK’s Centre for Data Ethics and Innovation and a member of the UK’s AI Council. Neil is also a visiting professor at the University of Sheffield and the co-host of Talking Machines.

    The Strachey Lectures are generously supported by OxFORD Asset Management.

  • An introduction to algorithmic aspects of symmetry and similarity, ranging from the fundamental complexity-theoretic "Graph Isomorphism Problem" to applications in optimisation and machine learning. Symmetry is a fundamental concept in mathematics, science and engineering, and beyond. Understanding symmetries is often crucial for understanding structures. In computer science, we are mainly interested in the symmetries of combinatorial structures. Computing the symmetries of such a structure is essentially the same as deciding whether two structures are the same ("isomorphic"). Algorithmically, this is a difficult task that has received a lot of attention since the early days of computing. It is a major open problem in theoretical computer science to determine the precise computational complexity of this "Graph Isomorphism Problem". (A toy brute-force isomorphism check is sketched below as an illustration of the problem.)
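
    As a toy illustration of what "deciding isomorphism" means computationally, here is a minimal brute-force sketch in Python; it tries every vertex permutation, so it is exponential and only meant to convey the problem, not the sophisticated algorithms the lecture is about. The representation and names are our own:

        from itertools import permutations

        def are_isomorphic(edges_g, edges_h, n):
            """Brute-force check: do the two simple graphs on vertices 0..n-1
            become identical under some relabelling of the vertices?"""
            g = {frozenset(e) for e in edges_g}
            h = {frozenset(e) for e in edges_h}
            if len(g) != len(h):
                return False
            for perm in permutations(range(n)):
                relabelled = {frozenset((perm[u], perm[v])) for u, v in g}
                if relabelled == h:
                    return True
            return False

        # A 4-cycle and a relabelled 4-cycle are isomorphic; a 4-vertex path is not.
        cycle   = [(0, 1), (1, 2), (2, 3), (3, 0)]
        cycle_b = [(0, 2), (2, 1), (1, 3), (3, 0)]
        path    = [(0, 1), (1, 2), (2, 3)]
        print(are_isomorphic(cycle, cycle_b, 4))  # True
        print(are_isomorphic(cycle, path, 4))     # False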

  • An overview of work on probabilistic soft logic (PSL), an SRL framework for large-scale collective, probabilistic reasoning in relational domains, and a description of recent work which integrates neural and symbolic (NeSy) reasoning. Our ability to collect, manipulate, analyze, and act on vast amounts of data is having a profound impact on all aspects of society. Much of this data is heterogeneous in nature and interlinked in a myriad of complex ways. From information integration to scientific discovery to computational social science, we need machine learning methods that are able to exploit both the inherent uncertainty and the innate structure in a domain. Statistical relational learning (SRL) is a subfield that builds on principles from probability theory and statistics to address uncertainty while incorporating tools from knowledge representation and logic to represent structure. In this talk, I’ll overview our work on probabilistic soft logic (PSL), an SRL framework for large-scale collective, probabilistic reasoning in relational domains. I’ll also describe recent work which integrates neural and symbolic (NeSy) reasoning. I’ll close by highlighting emerging opportunities (and challenges!) in realizing the effectiveness of data and structure for knowledge discovery. (An illustrative sketch of a PSL-style rule is given at the end of this entry.)

    Bio:

    Lise Getoor is a Distinguished Professor in the Computer Science & Engineering Department at UC Santa Cruz, where she holds the Jack Baskin Endowed Chair in Computer Engineering. She is founding Director of the UC Santa Cruz Data Science Research Center and is a Fellow of ACM, AAAI, and IEEE. Her research areas include machine learning and reasoning under uncertainty, and she has extensive experience with machine learning and probabilistic modeling methods for graph and network data. She has over 250 publications, including 13 best paper awards. She has served as an elected board member of the International Machine Learning Society, on the Computing Research Association (CRA) Board, as Machine Learning Journal Action Editor, Associate Editor for the ACM Transactions on Knowledge Discovery from Data, JAIR Associate Editor, and on the AAAI Executive Council. She is a Distinguished Alumna of the UC Santa Barbara Computer Science Department and received the UC Santa Cruz Women in Science & Engineering (WISE) award. She received her PhD from Stanford University in 2001, her MS from UC Berkeley, and her BS from UC Santa Barbara, and was a professor at the University of Maryland, College Park from 2001 to 2013.

    The Strachey Lectures are generously supported by OxFORD Asset Management.
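
    To give a flavour of the kind of rule PSL grounds and reasons over, here is a minimal illustrative Python sketch of the Łukasiewicz relaxation that PSL's hinge-loss formulation builds on; the rule, soft truth values and weight below are invented for illustration and are not from the talk:

        # Łukasiewicz relaxation: truth values are reals in [0, 1].
        def l_and(a, b):
            return max(0.0, a + b - 1.0)

        def distance_to_satisfaction(body, head):
            """How far the grounded rule body -> head is from being satisfied."""
            return max(0.0, body - head)

        # Hypothetical grounding of: Friend(anna, bob) & Votes(anna, p) -> Votes(bob, p)
        friend_anna_bob = 0.9   # illustrative soft truth values
        votes_anna_p    = 0.8
        votes_bob_p     = 0.4

        body    = l_and(friend_anna_bob, votes_anna_p)          # = 0.7
        penalty = distance_to_satisfaction(body, votes_bob_p)   # = 0.3
        weight  = 2.0                                           # illustrative rule weight
        print(weight * penalty)  # ~0.6, this rule's contribution to the objective

    MAP inference in PSL then minimises the weighted sum of such penalties over all grounded rules, which is a convex optimisation problem.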

  • There has been a proliferation of technological developments in the last few years that are beginning to improve how we perceive, attend to, notice, analyse and remember events, people, data and other information. These include machine learning, computer vision, advanced user interfaces (e.g. augmented reality) and sensor technologies. A goal of being augmented with ever more computational capabilities is to enable us to see more and, in doing so, make more intelligent decisions. But to what extent are the new interfaces enabling us to become more super-human? What is gained and lost through our reliance on ever pervasive computational technology? In my lecture, I will cover the latest developments in technological advances, such as conversational interfaces, data visualisation, and augmented reality. I will then draw upon relevant recent findings in the HCI and cognitive science literature that demonstrate how our human capabilities are being extended but also struggling to adapt to the new demands on our attention. Finally, I will show their relevance to investigating the physical and digital worlds when trying to discover or uncover new information.

  • Mixed Signals: audio and wearable data analysis for health diagnostics. Wearable and mobile devices are very good proxies for human behaviour. Yet, making the inference from the raw sensor data to individuals’ behaviour remains difficult. The list of challenges is very long: from collecting the right data and using the right sensor, respecting resource constraints, identifying the right analysis techniques, labelling the data, limiting privacy invasion, to dealing with heterogeneous data sources and adapting to changes in behaviour.

  • The advantages of computing for society are tremendous. But while new technological developments emerge, we also witness a number of disadvantages and unwanted side-effects: from the speed with which fake news spreads to the formation of new echo-chambers and the enhancement of polarization in society. It is time to reflect upon the successes and failures of collective rationality, particularly as embodied in modern mechanisms for mass information-aggregation and information-exchange. What can the study of the social and epistemic benefits and costs, posed by various contemporary mechanisms for information exchange and belief aggregation, tell us? I will use Logic and Philosophy to shed some light on this topic. Ultimately, we look for an answer to the question: how can we ensure that truth survives the information age?

  • As AI technologies enter our everyday lives at an ever-increasing pace, there is a greater need for AI systems to work synergistically with humans. This requires AI systems to exhibit behavior that is explainable to humans. Synthesizing such behavior requires AI systems to reason not only with their own models of the task at hand, but also about the mental models of the human collaborators. At a minimum, AI agents need approximations of the human's task and goal models, as well as the human's model of the AI agent's task and goal models. The former will guide the agent to anticipate and manage the needs, desires and attention of the humans in the loop, and the latter will allow it to act in ways that are interpretable to humans (by conforming to their mental models of it), and to be ready to provide customized explanations when needed. Using several case studies from our ongoing research, I will discuss how such multi-model reasoning forms the basis for explainable behavior in human-aware AI systems.

  • Innovation is the main event of the modern age, the reason we experience both dramatic improvements in our living standards and unsettling changes in our society. Forget short-term symptoms like Donald Trump and Brexit: it is innovation itself that explains them and that will itself shape the 21st century for good and ill. Yet innovation remains a mysterious process, poorly understood by policy makers and businessmen, hard to summon into existence to order, yet inevitable and inexorable when it does happen.

  • Medicine and Physiology in the Age of Dynamics: Newton Abraham Lecture 2020. Lecture by Professor Alan Garfinkel (2019-2020 Newton Abraham Visiting Professor, University of Oxford; Professor of Medicine (Cardiology) and Integrative Biology and Physiology, University of California, Los Angeles).

  • Can we build on our understanding of supervised learning to define broader aspects of the intelligence phenomenon? Strachey Lecture delivered by Leslie Valiant. Supervised learning is a cognitive phenomenon that has proved amenable to mathematical definition and analysis, as well as to exploitation as a technology. The question we ask is whether one can build on our understanding of supervised learning to define broader aspects of the intelligence phenomenon. We regard reasoning as the major component that needs to be added. We suggest that the central challenge therefore is to unify the formulation of these two phenomena, learning and reasoning, into a single framework with a common semantics. Based on such semantics one would aim to learn rules with the same success that predicates can be learned, and then to reason with them in a manner that is as principled as conventional logic offers. We discuss how Robust Logic fits such a role. We also discuss the challenges of exploiting such an approach for creating artificial systems with greater power, for example, with regard to common sense capabilities, than those currently realized by end-to-end learning.

  • Professor Leslie Kaelbling (MIT) gives the 2019 Strachey Lecture. The Strachey Lectures are generously supported by OxFORD Asset Management. We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in 'the factory' (that is, at engineering time) and in 'the wild' (that is, when the robot is delivered to a customer). I will share some general thoughts about the strategies for robot design and then talk in detail about some work I have been involved in, both in the design of an overall architecture for an intelligent robot and in strategies for learning to integrate new skills into the repertoire of an already competent robot.

  • In his 1950 paper "Computing Machinery and Intelligence" Alan Turing estimated that sixty people working for fifty years should be able to program a computer (running at 1950 speed) to have human-level intelligence. AI researchers have spent orders of magnitude more effort than that and are still not close. Why has AI been so hard, and what are the problems that we might work on in order to make real progress to human-level intelligence, or even the superintelligence that many pundits believe is just around the corner? This talk will discuss those steps we can take, what aspects we really still do not have much of a clue about, what we might be currently getting completely wrong, and why it all could be centuries away. Importantly, the talk will make distinctions between research questions and barriers to technology adoption from research results, with a little speculation on things that might go wrong (spoiler alert: it is the mundane that will have the big consequences, not the Hollywood scenarios that the press and some academics love to talk about).

  • This talk is about the experience of providing privacy when running analytics on users’ personal data. The two-sided market of Cloud Analytics emerged almost accidentally, initially from click-through associated with users' responses to search results, and was then adopted by many other services, whether web mail or social media. The business model seen by the user is of a free service (storage and tools for photos, video, social media etc). The value to the provider is untrammelled access to the user's data over space and time, allowing upfront income from the ability to run recommenders and targeted adverts, to background market research about who is interested in what information, goods and services, when and where. The value to the user is increased personalisation. This all comes at a cost, both in privacy (and the risk of loss of reputation or even money) for the user, and in the price of running highly expensive data centres for the providers, plus increased cost in bandwidth and energy consumption (mobile network costs & device battery life). The attack surface of our lives expands to cover just about everything. This talk will examine several alternative directions in which this may evolve in the future. Firstly, we look at a toolchain for traditional cloud processing which offers privacy through careful control of the lifecycle of access to data, processing, and production of results by combining several relatively new techniques. Secondly, we present a fully decentralized approach, on low-cost home devices, which can potentially lead to a large reduction in the risks of loss of confidentiality.

  • Stroustrup discusses the development and evolution of C++, one of the most widely used programming languages ever. The development of C++ started in 1979. Since then, it has grown to be one of the most widely used programming languages ever, with an emphasis on demanding industrial uses. It was released commercially in 1985 and evolved through one informal standard (“the ARM”) and several ISO standards: C++98, C++11, C++14, and C++17. How could an underfinanced language without a corporate owner succeed like that? What are the key ideas and design principles? How did the original ideas survive almost 40 years of development and 30 years of attention from a 100+ member standards committee? What is the current state of C++ and what is likely to happen over the next few years? What are the problems we are trying to address through language evolution?

  • Éva Tardos, Department of Computer Science, Cornell University, gives the 2017 Ada Lovelace Lecture on 6th June 2017. Selfish behaviour can often lead to suboptimal outcomes for all participants, a phenomenon illustrated by many classical examples in game theory. Over the last decade we have developed a good understanding of how to quantify the impact of strategic user behaviour on overall performance in many games (including traffic routing as well as online auctions). In this talk we will focus on games where players use a form of learning that helps them adapt to the environment, and consider two closely related questions: what are broad classes of learning behaviours that guarantee that game outcomes converge to the quality guaranteed by the price of anarchy, and how fast is this convergence? Or, asking these questions more broadly: what learning guarantees high social welfare in games when the game or the population of players is dynamically changing? (A toy sketch of such learning dynamics is given below.)
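
    As a concrete toy example of the kind of adaptive learning the lecture considers, here is a small Python sketch of two players running multiplicative-weights ("Hedge") updates in a made-up two-route congestion game; the costs, learning rate and horizon are ours, purely for illustration:

        import random

        ROUTES, ETA, ROUNDS = 2, 0.1, 2000   # two routes, step size, horizon (illustrative)

        def cost(route, load):
            # Congestion cost: each route gets slower as more players use it;
            # route 1 also carries a fixed extra delay of 0.5.
            return load if route == 0 else 0.5 + load

        def pick(weights):
            return random.choices(range(ROUTES), weights=weights)[0]

        weights = [[1.0, 1.0], [1.0, 1.0]]   # one weight vector per player
        total_cost = 0.0
        for _ in range(ROUNDS):
            choices = [pick(w) for w in weights]
            loads = [choices.count(r) for r in range(ROUTES)]
            for i, w in enumerate(weights):
                total_cost += cost(choices[i], loads[choices[i]])
                for r in range(ROUTES):
                    # Cost player i would have paid on route r, holding the other player fixed.
                    load_r = loads[r] if choices[i] == r else loads[r] + 1
                    w[r] *= (1.0 - ETA) ** cost(r, load_r)   # Hedge: down-weight costly routes
                s = sum(w)
                for r in range(ROUTES):
                    w[r] /= s                                # renormalise for numerical safety
        print("average cost per player per round:", total_cost / (2 * ROUNDS))

    Such no-regret dynamics are one of the "broad classes of learning behaviours" for which price-of-anarchy bounds are known to extend to the learning outcomes.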

  • Professor Kraus will show how combining machine learning techniques for human modelling, human behavioural models, formal decision-making and game theory approaches enables agents to interact well with people. Automated agents that interact proficiently with people can be useful in supporting, training or replacing people in complex tasks. The inclusion of people presents novel problems for the design of automated agents’ strategies. People do not necessarily adhere to the optimal, monolithic strategies that can be derived analytically. Their behaviour is affected by a multitude of social and psychological factors.  In this talk I will show how combining machine learning techniques for human modelling, human behavioural models, formal decision-making and game theory approaches enables agents to interact well with people. Applications include intelligent agents.
     
    The Strachey Lectures are generously supported by OxFORD Asset Management.

  • Professor Zoubin Ghahramani gives a talk on probabilistic modelling from its foundations to current areas of research at the frontiers of machine learning. Probabilistic modelling provides a mathematical framework for understanding what learning is, and has therefore emerged as one of the principal approaches for designing computer algorithms that learn from data acquired through experience. Professor Ghahramani will review the foundations of this field, from basics to Bayesian nonparametric models and scalable inference. He will then highlight some current areas of research at the frontiers of machine learning, leading up to topics such as probabilistic programming, Bayesian optimisation, the rational allocation of computational resources, and the Automatic Statistician. (Bayes' rule, the identity at the heart of this framework, is written out at the end of this entry.)

    The Strachey lectures are generously supported by OxFORD Asset Management.
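
    For reference, the identity at the core of the probabilistic modelling framework described above is Bayes' rule, relating a prior over parameters θ, the likelihood of data D, and the posterior (a standard statement, in our notation):

        \[
          p(\theta \mid D) \;=\; \frac{p(D \mid \theta)\, p(\theta)}{p(D)},
          \qquad p(D) = \int p(D \mid \theta)\, p(\theta)\, \mathrm{d}\theta .
        \]

    Scalable inference and probabilistic programming are largely about computing or approximating this posterior when the integral in the denominator is intractable.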

  • Students taking undergraduate (first) degrees in Computer Science, Computer Science & Philosophy and Maths & Computer Science undertake a Group Design Practical as a compulsory part of the course. The Group Design Practical, which runs from January, sees teams of four to six undergraduate students battling it out with their chosen project. Many of the challenges were set or sponsored by industry partners, which in 2016 included Research, Oxford Asset Management, Bloomberg and Metaswitch. The students’ work culminated in an exhibition and formal presentation, held in the Department on 9 May. In the video, current students discuss their experiences of the Group Design Practical.

  • Professor Andrew Hodges, author of 'Alan Turing: The Enigma', talks about Turing's work and ideas, from the definition of computability and the universal machine to the prospect of Artificial Intelligence. In 1951, Christopher Strachey began his career in computing. He did so as a colleague of Alan Turing, who had inspired him with a 'Utopian' prospectus for programming. By that time, Turing had already made far-reaching and futuristic innovations, from the definition of computability and the universal machine to the prospect of Artificial Intelligence. This talk will describe the origins and impacts of these ideas, and how wartime codebreaking allowed theory to turn into practice. After 1951, Turing was no less innovative, applying computational techniques to mathematical biology. His sudden death in 1954 meant the loss of most of this work, and its rediscovery in modern times has only added to Turing's iconic status as a scientific visionary seeing far beyond his short life.
    Andrew Hodges is the author of Alan Turing: The Enigma (1983), which inspired the 2014 film The Imitation Game.
    The Strachey Lectures are generously supported by OxFORD Asset Management.