Episodes

  • A recording of the third and final of Professor Hurka's rescheduled lectures, series title "Knowledge and Achievement: Their Value, Nature, and Public Policy Role". We were honoured to welcome Professor Thomas Hurka to Oxford to deliver the 2023 Annual Uehiro Lectures in Practical Ethics.

  • A recording of the second of Professor Hurka's rescheduled lectures, series title "Knowledge and Achievement: Their Value, Nature, and Public Policy Role". We were honoured to welcome Professor Thomas Hurka to Oxford to deliver the 2023 Annual Uehiro Lectures in Practical Ethics.


  • A recording of the first of Professor Hurka's rescheduled lectures, series title "Knowledge and Achievement: Their Value, Nature, and Public Policy Role". We were honoured to welcome Professor Thomas Hurka to Oxford to deliver the 2023 Annual Uehiro Lectures in Practical Ethics.

  • In the last of the three 2022 Annual Uehiro Lectures in Practical Ethics, Professor Peter Railton explores how we might "programme ethics into AI". Recent, dramatic advances in the capabilities of artificial intelligence (AI) raise a host of ethical questions about the development and deployment of AI systems. Some of these are questions long recognized as being of fundamental moral concern, which may arise in particularly acute forms with AI—matters of distributive justice, discrimination, social control, political manipulation, the conduct of warfare, personal privacy, and the concentration of economic power. Other questions, however, concern issues that are more specific to the distinctive kind of technological change AI represents. For example, how should we contend with the possibility that artificial agents might emerge with capabilities beyond human comprehension or control? But regardless of whether or when the threat of such “superintelligence” becomes realistic, we are now facing a situation in which partially-intelligent AI systems are increasingly being deployed in roles that involve relatively autonomous decision-making carrying real risk of harm. This urgently raises the question of how such partially-intelligent systems could become appropriately sensitive to moral considerations.

    In these lectures I will attempt to take some first steps in answering that question, which is often put in terms of “programming ethics into AI”. However, we don’t have an “ethical algorithm” that could be programmed into AI systems and that would enable them to respond aptly to an open-ended array of situations where moral issues are at stake. Moreover, the current revolution in AI has provided ample evidence that system designs based upon the learning of complex representational structures and generative capacities have acquired higher levels of competence, situational sensitivity, and creativity in problem-solving than systems based upon pre-programmed expertise. Might a learning-based approach to AI be extended to the competence needed to identify and respond appropriately to the moral dimensions of situations?

    I will begin by outlining a framework for understanding what “moral learning” might be, seeking compatibility with a range of conceptions of the normative content of morality. I then will draw upon research on human cognitive and social development—research that itself is undergoing a “learning revolution”—to suggest how this research enables us to see at work components central to moral learning, and to ask what conditions are favorable to the development and working of these components. The question then becomes whether artificial systems might be capable of similar cognitive and social development, and what conditions would be favorable to this. Might the same learning-based approaches that have achieved such success in strategic game-playing, image identification and generation, and language recognition and translation also achieve success in cooperative game-playing, identifying moral issues in situations, and communicating and collaborating effectively on apt responses? How far might such learning go, and what could this tell us about how we might engage with AI systems to foster their moral development, and perhaps ours as well?

  • In the second of the three 2022 Annual Uehiro Lectures in Practical Ethics, Professor Peter Railton explores how we might "programme ethics into AI". Recent, dramatic advances in the capabilities of artificial intelligence (AI) raise a host of ethical questions about the development and deployment of AI systems. Some of these are questions long recognized as being of fundamental moral concern, which may arise in particularly acute forms with AI—matters of distributive justice, discrimination, social control, political manipulation, the conduct of warfare, personal privacy, and the concentration of economic power. Other questions, however, concern issues that are more specific to the distinctive kind of technological change AI represents. For example, how should we contend with the possibility that artificial agents might emerge with capabilities beyond human comprehension or control? But regardless of whether or when the threat of such “superintelligence” becomes realistic, we are now facing a situation in which partially-intelligent AI systems are increasingly being deployed in roles that involve relatively autonomous decision-making carrying real risk of harm. This urgently raises the question of how such partially-intelligent systems could become appropriately sensitive to moral considerations.

    In these lectures I will attempt to take some first steps in answering that question, which is often put in terms of “programming ethics into AI”. However, we don’t have an “ethical algorithm” that could be programmed into AI systems and that would enable them to respond aptly to an open-ended array of situations where moral issues are at stake. Moreover, the current revolution in AI has provided ample evidence that system designs based upon the learning of complex representational structures and generative capacities have acquired higher levels of competence, situational sensitivity, and creativity in problem-solving than systems based upon pre-programmed expertise. Might a learning-based approach to AI be extended to the competence needed to identify and respond appropriately to the moral dimensions of situations?

    I will begin by outlining a framework for understanding what “moral learning” might be, seeking compatibility with a range of conceptions of the normative content of morality. I then will draw upon research on human cognitive and social development—research that itself is undergoing a “learning revolution”—to suggest how this research enables us to see at work components central to moral learning, and to ask what conditions are favorable to the development and working of these components. The question then becomes whether artificial systems might be capable of similar cognitive and social development, and what conditions would be favorable to this. Might the same learning-based approaches that have achieved such success in strategic game-playing, image identification and generation, and language recognition and translation also achieve success in cooperative game-playing, identifying moral issues in situations, and communicating and collaborating effectively on apt responses? How far might such learning go, and what could this tell us about how we might engage with AI systems to foster their moral development, and perhaps ours as well?

  • In the first of the three 2022 Annual Uehiro Lectures in Practical Ethics, Professor Peter Railton explores how we might "programme ethics into AI". Recent, dramatic advances in the capabilities of artificial intelligence (AI) raise a host of ethical questions about the development and deployment of AI systems. Some of these are questions long recognized as being of fundamental moral concern, which may arise in particularly acute forms with AI—matters of distributive justice, discrimination, social control, political manipulation, the conduct of warfare, personal privacy, and the concentration of economic power. Other questions, however, concern issues that are more specific to the distinctive kind of technological change AI represents. For example, how should we contend with the possibility that artificial agents might emerge with capabilities beyond human comprehension or control? But regardless of whether or when the threat of such “superintelligence” becomes realistic, we are now facing a situation in which partially-intelligent AI systems are increasingly being deployed in roles that involve relatively autonomous decision-making carrying real risk of harm. This urgently raises the question of how such partially-intelligent systems could become appropriately sensitive to moral considerations.

    In these lectures I will attempt to take some first steps in answering that question, which is often put in terms of “programming ethics into AI”. However, we don’t have an “ethical algorithm” that could be programmed into AI systems and that would enable them to respond aptly to an open-ended array of situations where moral issues are at stake. Moreover, the current revolution in AI has provided ample evidence that system designs based upon the learning of complex representational structures and generative capacities have acquired higher levels of competence, situational sensitivity, and creativity in problem-solving than systems based upon pre-programmed expertise. Might a learning-based approach to AI be extended to the competence needed to identify and respond appropriately to the moral dimensions of situations?

    I will begin by outlining a framework for understanding what “moral learning” might be, seeking compatibility with a range of conceptions of the normative content of morality. I then will draw upon research on human cognitive and social development—research that itself is undergoing a “learning revolution”—to suggest how this research enables us to see at work components central to moral learning, and to ask what conditions are favorable to the development and working of these components. The question then becomes whether artificial systems might be capable of similar cognitive and social development, and what conditions would be favorable to this. Might the same learning-based approaches that have achieved such success in strategic game-playing, image identification and generation, and language recognition and translation also achieve success in cooperative game-playing, identifying moral issues in situations, and communicating and collaborating effectively on apt responses? How far might such learning go, and what could this tell us about how we might engage with AI systems to foster their moral development, and perhaps ours as well?

  • Professor Michael Otsuka (London School of Economics) delivers the final of three public lectures in the series 'How to pool risks across generations: the case for collective pensions'. The previous two lectures grappled with various challenges that funded collective pension schemes face. In the final lecture, I ask whether an unfunded 'pay as you go' (PAYG) approach might provide a solution. With PAYG, money is directly transferred from those who are currently working to pay the pensions of those who are currently retired. Rather than drawing from a pension fund consisting of a portfolio of financial assets, these pensions are paid out of the Treasury's coffers. The pension one is entitled to in retirement is often, however, a function of, even though not funded by, the pensions contributions one has made during one’s working life. I explore the extent to which a PAYG pension can be justified as a form of indirect reciprocity that cascades down generations. This contrasts with a redistributive concern to mitigate the inequality between those who are young, healthy, able-bodied, and productive and those who are elderly, infirm, and out of work. I explore claims inspired by Ken Binmore and Joseph Heath that PAYG pensions in which each generation pays the pensions of the previous generation can be justified as being in a mutually advantageous Nash equilibrium. I also discuss the relevance to the case for PAYG of Thomas Piketty's claim that r > g, where "r" is the rate of return on capital and "g" is the rate of growth of the economy.

  • Professor Michael Otsuka (London School of Economics) delivers the second of three public lectures in the series 'How to pool risks across generations: the case for collective pensions'. On any sensible approach to the valuation of a DB scheme, ineliminable risk will remain that returns on a portfolio weighted towards return-seeking equities and property will fall significantly short of fully funding the DB pension promise. On the actuarial approach, this risk is deemed sufficiently low that it is reasonable and prudent to take in the case of an open scheme that will be cashflow positive for many decades. But if they deem the risk so low, shouldn't scheme members who advocate such an approach be willing to put their money where their mouth is, by agreeing to bear at least some of this downside risk through a reduction in their pensions if returns are not good enough to achieve full funding? Some such conditionality would simply involve a return to the practices of DB pension schemes during their heyday three and more decades ago. The subsequent hardening of the pension promise has hastened the demise of DB. The target pensions of collective defined contribution (CDC) might provide a means of preserving the benefits of collective pensions, in a manner that is more cost effective for all than any form of defined benefit promise. In one form of CDC, the risks are collectively pooled across generations. In another form, they are collectively pooled only among the members of each age cohort.

  • Professor Michael Otsuka (London School of Economics) delivers the first of three public lectures in the series 'How to pool risks across generations: the case for collective pensions'. I begin by drawing attention to the efficiencies in the pooling of longevity and investment risk that collective funded pension schemes provide over individual defined contribution (IDC) pension pots in guarding against your risk of living too long. I then turn to an analysis of those collective schemes that promise the following defined benefit (DB): an inflation-proof income in retirement until death, specified as a fraction of your salary earned during your career. I consider the concepts and principles within and beyond financial economics that underlie the valuation and funding of such a pension promise. I assess the merits of the 'actuarial approach' to funding an open, ongoing, enduring DB scheme at a low rate of contributions invested in 'return-seeking' equities and property. I also consider the merits of the contrasting 'financial economics approach', which calls for a higher rate of contributions set as the cost of bonds that 'match' the liabilities. I draw on the real-world case of the UK's multi-employer Universities Superannuation Scheme (USS) to adjudicate between these approaches. The objectives of the Pensions Regulator, the significance of the Pension Protection Fund, and the decision of Trinity College Cambridge to withdraw from USS to protect itself against being the 'last man standing', all figure in the discussion.

  • Lies, propaganda, and fake news have hijacked political discourse, distracting the electorate from engaging with the global problems we face. These Uehiro Lectures suggest a pathway for democratic institutions to devise solutions to the problems we face. People often resist facts because accepting facts exposes them to shame and blame. Yet, when the point of raising facts is to orient others to moral concerns, how can we communicate these concerns without resorting to blaming and shaming those who resist? Without denying that sometimes we must resort to these practices, I suggest alternative ways in which testimony and empathy can be mobilized to communicate moral concern so that those who resist shame and blame can come to share such concern.

  • Lies, propaganda, and fake news have hijacked political discourse, distracting the electorate from engaging with the global problems we face. These Uehiro Lectures suggest a pathway for democratic institutions to devise solutions to the problems we face. I argue that citizen science and local deliberation within internally diverse micro-publics offer models of how political discourse can be re-oriented toward accuracy-oriented factual claims relevant to constructive policy solutions. Enabling such discourse requires that citizens observe norms against insults and other identity-based competitive discourse, and in favor of serious listening across identities.

  • Lies, propaganda, and fake news have hijacked political discourse, distracting the electorate from engaging with the global problems we face. These Uehiro Lectures suggest a pathway for democratic institutions to devise solutions to the problems we face. I trace the deterioration of public discourse regarding basic facts to the rise of populist politics, which is powered by the activation of identity-based fear and resentment of other groups. Populist politics 'hears' the factual claims of other groups as insults to the groups it mobilizes, and thereby replaces factual inquiry with modes of discourse, such as denial, derision, and slander, designed to defend populist groups against criticism and whip up hostility toward rival groups. Nonpopulist groups, in turn, add fuel to the fire by blaming and shaming those who seem stubbornly and ignorantly attached to false claims in defiance of evidence.

  • Lecture 3 of 3. Who we are depends in part on the social world in which we live. In these lectures I look at some consequences for three mental health problems, broadly construed: dementia, addiction, and psychosomatic illness. Many illnesses have been thought—controversially—to have a psychosomatic component. How should we understand this? Sometimes a contrast is made between organic illness and mental illness: psychosomatic illnesses are the latter masquerading as the former. But if the mental is physical, and hence organic, this will not help. An alternative approach distinguishes between symptoms that are influenced by the patient’s attitudes, and those that are not; psychosomatic illnesses are marked by the former. Does this make the class too wide? Suppose I aggravate a bad back by refusing to exercise, falsely expecting the exercise to be dangerous. My symptoms are influenced by my attitude: are they therefore psychosomatic? I suggest that there is no sharp cut-off. I examine the role of attitudes in various illnesses, including addiction, focussing on the ways that social factors affect the relevant attitudes. I ask whether recognition of a continuum might help lessen the stigma that psychosomatic illness has tended to attract, and suggest other ways that treatment might be more attuned to these issues.

  • Lecture 2 of 3. Who we are depends in part on the social world in which we live. In these lectures I look at some consequences for three mental health problems, broadly construed: dementia, addiction, and psychosomatic illness. Much recent work on addiction has stressed the importance of cues for the triggering of desire. These cues are frequently social. We have a plausible theory of this triggering at the neurophysiological level. But what are the ethical implications? One concerns the authority of desire: maximizing the satisfaction of desires no longer looks like an obvious goal of social policy once we understand the dependence of desires on cues. A second concerns an addict’s responsibility in the face of cues. I suggest that the provision of cues can be thought of as akin to pollution, for which the polluter may bear the primary responsibility. I spell out some of the political implications and ask whether there are good grounds for extending the argument to the cues involved in obesity.

  • Lecture 1 of 3. Who we are depends in part on the social world in which we live. In these lectures I look at some consequences for three mental health problems, broadly construed: dementia, addiction, and psychosomatic illness. Loss of memory is a central feature of dementia. On a Lockean picture of personal identity, as memory is lost, so is the person. But the initial effect of dementia is not the simple destruction of memory. Many memories can be recognized with suitable prompting and scaffolding, something that thoughtful family and friends will naturally offer. This suggests a problem of access. More radically, if memory itself is a constructive process, it suggests a problem of missing resources for construction - resources which can be provided by others. This applies equally to procedural memories—to the practical skills likewise threatened by dementia. This leads us away from a narrowly Lockean approach: the power to recognize a memory, or exercise a skill, may be as important as the power to recall; and contributions from others may be as important as those from the subject.

  • In this final lecture, Professor Temkin considers possible negative impacts of global efforts to aid the needy, and reviews the main claims and arguments of all three Lectures. In this third Uehiro Lecture, I consider a number of worries about the possible impact of global efforts to aid the needy. Among the worries I address are possible unintended negative consequences that may occur elsewhere in a society when aid agencies hire highly qualified local people to promote their agendas; the possibility that highly successful local projects may not always be replicable on a much larger regional or national scale; the possibility that foreign interests and priorities may have undue influence on a country’s direction and priorities, negatively impacting local authority and autonomy; and the related problem of outside interventions undermining the responsiveness of local and national governments to their citizens. I also discuss a position that I call the Capped Model of Moral Ideals, which may have a bearing on the intuitively plausible approach of always prioritizing the greatest need when making one’s charitable contributions. Another issue I discuss is the possibility that efforts to aid the needy may involve an Each/We Dilemma, in which case conflicts may arise between what is individually rational or moral, and what is collectively rational or moral. Unfortunately, it is possible that if each of us does what we have most reason to do, morally, in aiding the needy, we together will bring about an outcome which is worse, morally, in terms of its overall impact on the global needy. The lecture ends by taking stock of the main claims and arguments of all three Uehiro Lectures, and considering their overall implications for our thinking about the needy. I consider the implications of my discussion for Peter Singer’s view, and the implications of my view for the approach and recommendations of Effective Altruism.
I also consider where my discussion leaves us given my pluralistic approach to thinking about the needy. I have no doubt that those who are well off are open to serious moral criticism if they ignore the plight of the needy. Unfortunately, however, for a host of both empirical and philosophical reasons, what one should do in light of that truth is much more complex, and murky, than most people have realized.

  • In this second lecture, Professor Temkin considers some disanalogies between saving a drowning child and giving to an aid organization, and discusses the issues of corruption and poor governance. Peter Singer famously argued that just as we have compelling moral reason to save a drowning child, so we have compelling moral reason to aid the world’s needy. In this Lecture, I raise a number of worries about the relevance of Singer’s Pond Example to whether we should be donating money to international aid organizations. I consider a number of possible disanalogies between saving a drowning child and giving to an international relief organization. These include whether those needing help are members of one’s own community, whether they are near or far, whether one’s aid requires the assistance of many intervening agents, whether one is actually saving lives, whether corruption is a worry, whether those needing assistance are innocent and/or not responsible for their plight, whether the needy are victims of an accident or social injustice, and whether anyone stands to benefit from one’s intervention other than the needy themselves. I show that some of these disanalogies may have important normative significance, making the case for contributing to international aid agencies much less clear than the case for saving the drowning child in Singer’s famous example. In addressing these topics, I argue that we must be attuned to the many direct and indirect ways in which international aid efforts may inadvertently benefit the perpetrators of grave social injustices, incentivizing such injustices. Similarly, we must be aware of the possibility that our aid efforts may end up rewarding corrupt leaders whose policies have contributed to hybrid natural/man-made disasters, thus encouraging such disastrous policies. 
Furthermore, I note that aid organizations have every incentive to emphasize the good that they accomplish, and to overlook, ignore, or even cover up any bad effects that may result from their interventions, and that independent agencies assessing aid effectiveness may lack the means of accurately determining all the negative effects to which international aid efforts may give rise. Thus, however compelling it may be, Singer’s Pond Example depicts a simple situation that is a far cry from the complex reality with which international development agencies have to contend. Accordingly, much more needs to be considered before one can pass judgment on the overall merits of funding international aid organizations.

  • In this first lecture, Larry Temkin explores different philosophical approaches to aiding the needy, and how they may fit with Peter Singer's famous Pond Example thought experiment. The world is filled with people who are badly off. Each day, many die from hunger or disease, much of which seems easily preventable. Yet the world is also filled with many who are well off, some extraordinarily so. This vast inequality, between the world’s well off and the world’s worst off, gives rise to an age-old question. What, if anything, ought those who are well off to do on behalf of those who are badly off? In these Uehiro Lectures, I aim to explore the nature and basis of our obligations, if any, to the needy, and some problems that may arise when the better-off attempt to ameliorate the plight of the worse-off. In doing this, I will explore a wide range of empirical and philosophical issues. In this first Lecture, I introduce a version of Effective Altruism, which holds, roughly, that insofar as the well-off give to charity, they should identify and contribute to the most effective international relief and development organizations. I then present an alternative, pluralistic approach, arguing that in addition to the sort of consequentialist-based reasons for aiding the needy favored by Effective Altruism, there are virtue-based and deontological-based reasons for doing so. I then present Peter Singer’s famous Pond Example, which has had a profound effect on many people’s thinking about the needy. I note that Singer’s example is compatible with both Effective Altruism and my pluralistic approach. I then offer variations of the Pond Example, together with other considerations, in support of my approach. My discussion shows that despite its far-reaching impact, Singer’s Pond Example doesn’t actually take us very far in answering the question of what we should do, all things considered, to aid the world’s needy.
Unfortunately, my discussion isn’t much better in that regard, except that it reveals that there is a wide range of morally relevant factors that have a bearing on the issue, and that we must be fully responsive to all of them in considering what we ought to do in aiding the needy. The “act so as to do the most good” approach of Effective Altruism reflects one very important factor that needs to be considered, but it is not, I argue, the only one.

  • The second of the three 2015 Annual Uehiro Lectures 'Why Worry About Future Generations'. Why should we care about what happens to human beings in the future, after we ourselves are long gone? In this lecture I argue that, quite apart from considerations of beneficence, we have reasons of at least four different kinds to try to ensure the survival and flourishing of our successors: reasons of love, reasons of interest, reasons of value, and reasons of reciprocity.

  • The last of the three 2015 Annual Uehiro Lectures 'Why Worry About Future Generations'. Why should we care about what happens to human beings in the future, after we ourselves are long gone? The reasons discussed in the previous lecture all depend in one way or another on our existing values and attachments and our conservative disposition to preserve and sustain the things that we value. The idea that our reasons for caring about the fate of future generations depend on an essentially conservative disposition may seem surprising or even paradoxical. In this lecture, I explore this conservative disposition further, explaining why it strongly supports a concern for the survival and flourishing of our successors, and comparing it to the form of conservatism defended by G.A. Cohen. I consider the question whether this kind of conservatism involves a form of irrational temporal bias and how it fits within the context of the more general relations between our attitudes toward time and our attitudes toward value.