Episodes

  • This workshop takes stock of some of the insights we have gained about the ethics of AI and the concept of trust. We critically explore practical and theoretical issues relating to values and frameworks, engaging with carebots, evaluations of decision support systems, and norms in the private sector. We assess the objects of trust in a democratic setting and discuss how scholars can further carry insights from academia into other sectors. Workshop proceedings will appear in a special symposium issue of C4eJournal.net.

    Speakers:
    Judith Simon (University of Hamburg), Can and Should We Trust AI?
    Vivek Nallur (University College Dublin), Trusting a Carebot: Towards a Framework for Asking the Right Questions
    Justin B. Biddle (Georgia Institute of Technology), Organizational Perspectives on Trust and Values in AI
    Sina Fazelpour (Northeastern University), Where Are the Missing Humans? Evaluating AI Decision Support Systems in Context
    Esther Keymolen (Tilburg University), Trustworthy Tech Companies: Talking the Talk or Walking the Walk?
    Ori Freiman (University of Toronto), Making Sense of the Conceptual Nonsense “Trustworthy AI”: What’s Next?

  • Long before the film Black Panther captured the public’s imagination, the cultural critic Mark Dery had coined the term “Afrofuturism” to describe “speculative fiction that treats African-American themes and addresses African-American concerns in the context of twentieth-century technoculture.” Since then, the term has been applied to speculative creatives as diverse as the pop artist Janelle Monae, the science fiction writer Octavia Butler, and the visual artist Nick Cave. But only recently have thinkers turned to how Afrofuturism might guide, and shape, law. The participants in this workshop explore the many ways Afrofuturism can inform a range of legal issues, and even chart the way to a better future for us all.

    Introduction:
    Bennett Capers (Law, Fordham)

    Panel 1:
    Ngozi Okidegbe (Law, Cardozo), Of Afrofuturism, Of Algorithms
    Alex Zamalin (Political Science & African American Studies, Detroit Mercy), Afrofuturism as Reconstitution

    Panel 2:
    Rasheedah Phillips (PolicyLink), Race Against Time: Afrofuturism and Our Liberated Housing Futures
    Etienne C. Toussaint (Law, South Carolina), For Every Rat Killed

  • As the fabric of the city becomes increasingly fibre-optic, enthusiasm for the speed and ubiquity of digital infrastructure abounds. From Toronto to Abu Dhabi, new technologies promise the ability to observe, manage, and experience the city in so-called real time, freeing cities from the spatiotemporal restrictions of the past. In this project, I look at the way this appreciation for the real-time is influencing our understanding of the datafied urban subject. I argue that this dominant discourse locates digital infrastructure within a broader metaphysics of presence, in which instantaneous data promise an unmediated view of both the city and those within it. The result is a levelling of residents along an overarching, linear, and spatialized timeline that sanitizes the temporal and rhythmic diversity of urban spaces. This same levelling effect can be seen in contemporary regulatory frameworks, which focus on the rights or sovereignty of a largely atomized urban subject removed from its spatiotemporal context. A more equitable alternative must therefore consider the temporal diversity, relationality, and inequality implicit within the datafied city, an alternative I begin to ground in Jacques Derrida’s notion of the spectre. This work is conducted through an exploration of Sidewalk Labs’ pioneering use of the term “urban data” during its foray in Toronto, which highlights the potential of alternative, spectral data governance models even as it reflects the limitations of existing frameworks.

    Nathan Olmstead
    Urban Studies
    University of Toronto

  • Oftentimes, the development of algorithms is divorced from the environments where they will eventually be deployed. In high-stakes contexts, like child welfare services, policymakers and technologists must exercise a high degree of caution in the design and deployment of decision-making algorithms, or risk further marginalising already vulnerable communities. This talk will explain the status quo of child welfare algorithms, what we miss when we fail to include context in the development of algorithms, and how the addition of qualitative text data can help make better algorithms.

    Kamilah Ebrahim
    iSchool
    University of Toronto

    Erina Moon
    iSchool
    University of Toronto

  • Machine Learning (ML) and Artificial Intelligence (AI) are powering the applications we use, the decisions we make, and the decisions made about us. We have already seen numerous examples of what happens when these algorithms are designed without diversity in mind: facial recognition algorithms, recidivism algorithms, and resume-screening algorithms have all produced inequitable outcomes. As ML and AI expand into more areas of our lives, we must take action to promote diversity among those working in this field. A critical step in this work is understanding why some students who choose to study ML/AI later leave the field. In this talk, I will outline the findings from two iterations of survey-based studies that begin to build a model of intentional persistence in the field. I will highlight the findings that suggest drivers of the gender gap, review what we’ve learned about persistence through these studies, and share open areas for future work.

    Sharon Ferguson
    Industrial Engineering
    University of Toronto

  • Many research and industry organizations outsource data generation, annotation, and algorithmic verification—or data work—to workers worldwide through digital platforms. A subset of the gig economy, these platforms treat workers as independent users with no employment rights, pay them per task, and control them with automated algorithmic managers. This talk explores how the coloniality of data work is characterized by an extractivist method of generating data that privileges profit and the epistemic dominance of those in power. Social inequalities are reproduced through the data production process, and local worker communities mitigate these power imbalances by relying on family members, neighbours, and colleagues online. Furthermore, management in outsourced data production ensures that workers’ voices are suppressed in the data annotation process through algorithmic control and surveillance, resulting in datasets shaped exclusively by clients, with their worldviews encoded in algorithms through training.

    Julian Posada
    Faculty of Information
    University of Toronto

  • Teens have different attitudes toward AI. Some are excited by AI’s promises to change their future. Some are afraid of AI’s problems. Some are indifferent. There is a consensus among educators that AI is a “must-teach” topic for teens. But how? In this talk, we will share our experiences and lessons learned from the Imagine AI project, funded by the National Science Foundation and advised by the Center for Ethics (C4E). Unlike other efforts focusing on AI technologies, Imagine AI takes a unique approach by focusing on AI ethics. Since 2019, we have partnered with more than a dozen teachers to teach hundreds of students in different classrooms and schools about AI ethics. We tried a variety of pedagogies and tested a range of AI ethics topics to understand their relative effectiveness in educating and engaging students. We found promising opportunities, such as short stories, as well as tensions. Our short stories are original, center on young protagonists, and contextualize ethical dilemmas in scenarios relatable to teens. We will share which stories are more engaging than others, how teachers are using the stories in classrooms, and how students are responding to the stories.

    Moreover, we will discuss the tensions we identified. For students, there is a tension of balance: how can we teach AI ethics without inducing a chilling effect? For teachers, there is a tension of authority: who would be the most authoritative teacher of AI ethics: a social studies teacher well-versed in social issues, a science teacher skilled in modern technology, or an English teacher experienced in discussing dilemmas and critical thinking? Another tension is urgency: while teachers agree AI ethics is an urgent topic because of AI’s far-reaching influence on teens’ futures, they struggle to meet teens’ even more urgent and immediate needs, such as social-emotional issues worsened by the pandemic, interruptions to education, loss of housing, and even school shootings. Is now really a good time to talk about AI ethics? But if not now, when? We will discuss the implications of these tensions and potential solutions. We will conclude with a call to action for experts on AI and ethics to partner with educators to help our future generations “imagine AI.”

    Tom Yeh
    Computer Science
    University of Colorado

    Benjamin Walsh
    Education
    University of Colorado

  • Developed along existing asymmetries of power, AI and its applications further entrench, if not exacerbate, social, racialized, and gendered inequalities. As critical discourse grows, scholars make the case for deploying ethics and ethical frameworks to mitigate harms that disproportionately impact marginalized groups. However, there are foundational challenges to the actualization of harm reduction through a liberal ethics of AI. In this talk, I will highlight the foundational challenges posed to the goal of harm reduction by ethics frameworks and their reliance on social categories of difference.

    Mishall Ahmed
    Political Science
    York University

  • ‘The same rights that people have offline must also be protected online’ has in recent years become a dominant principle in international discourse about human rights in cyberspace. But does this notion of ‘normative equivalency’ between the ‘offline’ and the ‘online’ afford effective protection for human rights in the digital age?

    The presentation reviews the development of human rights in cyberspace as they were conceptualized and articulated in international fora and critically evaluates the normative equivalency paradigm adopted by international bodies for the online application of human rights. It then attempts to describe the contours of a new digital human rights framework, which goes beyond the normative equivalency paradigm, and presents a typology of three ‘generations’ or modalities in the evolution of digital human rights.

    In particular, we focus on the emergence of new digital human rights and present two prototype rights – the right to Internet access and the right not to be subject to automated decision-making – and discuss the normative justifications invoked for recognizing these new digital human rights. We propose that such a multilayered framework corresponds better than the normative equivalency paradigm to the unique features and challenges of upholding human rights in cyberspace.

    Dafna Dror-Shpoliansky
    Law
    Hebrew University

    Yuval Shany
    Law
    Hebrew University

  • Human rights are one of the major innovations of the 20th century. Their emergence after World War II and global uptake promised a new world of universalized humanity in which human dignity would be protected and individuals would have agency and flourish. The proliferation of digital data (i.e., datafication) and its intertwining with our lives, coupled with the growth of AI, signals a fundamental shift in the human experience. To date, human rights have not fully grappled with the implications of datafication. Yet they remain our best hope for ensuring human autonomy and dignity, if they can be rebooted to take into account the “stickiness” of data. The talk will discuss how international human rights are structured, introduce the notion of Data You, explain why Data You is here to stay, and consider how this affects notions of data rights.

    Wendy Wong
    Political Science
    University of Toronto

  • The emergence of artificial intelligence (AI) and, more specifically, of machine learning analytics fuelled by big data is altering some legal and criminal justice practices. Harnessing the abilities of AI creates new possibilities, but it also risks reproducing the status quo and further entrenching existing inequalities. The potential of these technologies has simultaneously enthused and alarmed scholars, advocates, and practitioners, many of whom have drawn attention to the ethical concerns associated with the widespread use of these technologies. In the face of sustained critiques, some companies have rebranded, positioning their AI technologies as more ethical, transparent, or accountable. However, even if a technology is defensibly ‘ethical,’ its combination with pre-existing institutional logics and practices reinforces patterns of inequality. In this paper, we focus on two examples, legal analytics and predictive policing, to explore how companies are mobilizing the language and logics of ethical algorithms to rebrand their technologies. We argue that this rebranding is a form of ethics washing, which obfuscates the appropriateness and limitations of these technologies in particular contexts.

  • Panelists: Kristen Thomasen, Suzie Dunn, & Kate Robertson

    This panel brings together the three co-authors of Citizen Lab and LEAF’s collaborative submission to the Toronto Police Services Board’s public consultation on its draft policy for AI use by the Toronto police. The submission made 33 specific recommendations to the TPSB, with a focus on substantive equality and human rights. The panelists will discuss some of those recommendations and the broader themes identified in the draft policy.

  • No one doubts that the future of the economic system is digital. Central banks worldwide worry that the rising popularity and adoption of cryptocurrencies, other new means of payment, and new financial instruments pose a risk to early fintech adopters and the economy at large. As an alternative, most central banks worldwide, led by the Bank for International Settlements, are considering the issuance of a CBDC (Central Bank Digital Currency) – the digital form of a country’s fiat money. A CBDC differs from existing cashless payment forms such as card payments and credit transfers: it represents a direct claim on a central bank rather than a financial obligation to an institution.

    The digital nature of the transactions, together with the algorithms, AIs, and vast amount of data that such a system produces, can lead to many advantages: the money supply, interest rates, and other features of the system are expected to be automatically aligned with monetary policy to achieve financial stability. In addition, tracking digital money routes reduces the ability to launder money and hide payments for illegal activities, and makes it harder to evade taxes (and easier to collect them accurately and automatically).

    As with any promising technology, this digital manifestation of money has a dystopian side, too. In this presentation, I focus on identifying the ethical concerns and considerations – for individuals and the democratic society. I will describe how data from such a system can lead to unjust discrimination, how it enables surveillance in its utmost sense, how social developments are at risk of being stalled, and how such technology can encourage self-censorship and cast a shadow over the freedom of expression and association.

    I’ll end with normative recommendations. System designers, developers, infrastructure builders, and regulators must involve civic organizations, public experts, and others to ensure the representation of diverse public interests. Inclusion and diversity are the first lines of defence against discrimination and biases in society, business, and technology.

  • In recent years, legal scholars and computer scientists have widely discussed how to achieve an adequate level of AI accountability and fairness. The first attempts focused on the right to an explanation of algorithms, but such an approach has often proven unfeasible and fallacious due to the lack of legal consensus on the existence of that right across jurisdictions, the lack of agreement on what counts as a satisfactory explanation, and the technical limits of causal explanations for deep learning models. More recently, several scholars have shifted their attention from the legibility of algorithms to the evaluation of the “impacts” of such autonomous systems on human beings, through “Algorithmic Impact Assessments” (AIAs).

    This paper, building on the AIA frameworks, advances a policy-making proposal for a test to “justify” (rather than merely explain) algorithms. In practical terms, it proposes a system of “unlawfulness by default” for AI systems: an ex-ante model in which AI developers bear the burden of proof to justify (on the basis of the outcome of their Algorithmic Impact Assessment) that their autonomous system is not discriminatory, not manipulative, not unfair, not inaccurate, not illegitimate in its legal bases and purposes, does not use an unnecessary amount of data, and so on.

    In the EU, the GDPR and the newly proposed AI Regulation already tend toward a sustainable environment of desirable AI systems, an ambition broader than merely “transparent” or “explainable” AI: it also requires “fair”, “lawful”, “accurate”, “purpose-specific”, data-minimalistic, and “accountable” AI.

    This might be possible through a practical “justification” process and statement through which the data controller proves, in practical terms, the legality of an algorithm, i.e., its respect for all data protection principles (in the GDPR: fairness, lawfulness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity, and accountability). This justificatory approach might also solve many existing problems in the AI explanation debate: e.g., the difficulty of “opening” black boxes, the transparency fallacy, and the legal difficulties of enforcing a right to individual explanations.

    Under a policy-making approach, this paper proposes a pre-approval model in which algorithm developers, before launching their systems onto the market, should perform a preliminary risk assessment of their technology followed by a self-certification. If the risk assessment shows that a system is high-risk, an approval request (to a strict regulatory authority, such as a data protection agency) should follow. In other words, we propose a presumption of unlawfulness for high-risk models, with AI developers bearing the burden of proof to justify why their algorithm is not illegitimate (and thus not unfair, not discriminatory, not inaccurate, etc.).

    The EU AI Regulation seems to go in this direction. It proposes a model of partial unlawfulness-by-default. However, it is still too lenient: the category of high-risk AI systems is too narrow (it excludes commercial manipulation leading to economic harms, emotion recognition, general vulnerability exploitation, AI in the healthcare field, etc.), and the sanction for non-conformity with the Regulation is monetary, not a prohibition.
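
    Purely as an illustration (this sketch does not come from the paper, and all names in it are hypothetical), the ex-ante, “unlawful by default” flow described above might look roughly like this in code:

    # Illustrative sketch only: presume high-risk systems unlawful until the
    # developer justifies them and a regulator approves; low-risk systems
    # proceed on self-certification. All names are hypothetical.
    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class ImpactAssessment:
        high_risk: bool
        justification: Optional[str]  # developer's justification of lawfulness, if any

    def may_deploy(assessment: ImpactAssessment,
                   regulator_approves: Callable[[ImpactAssessment], bool]) -> bool:
        if not assessment.high_risk:
            return True   # self-certification suffices for low-risk systems
        if assessment.justification is None:
            return False  # burden of proof unmet: unlawful by default
        return regulator_approves(assessment)  # regulator review for high-risk systems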

  • We are in the midst of ongoing debate about whether, in principle, the enforcement of legal rules—and corresponding decisional processes—can be automated. Often neglected in this conversation is the role of equity, which has historically worked as a particularized constraint on legal decision-making. Certain kinds of equitable adjustments may be susceptible to automation—or at least, just as susceptible as legal rules themselves. But other kinds of equitable adjustments will not be, no matter how powerful machines become, because they require non-formalizable modes of judgment. This should give us pause about all efforts toward legal automation, because it is not clear—or even conceptually determinate—which kinds of legal decisions will end up, in practice, implicating non-automatable forms of equity.

    Kiel Brennan-Marquez
    University of Connecticut
    Associate Professor of Law
    Faculty Director of the Center on Community Safety, Policing and Inequality

  • Artificially intelligent agents that provide care for human beings are increasingly becoming a reality around the globe. From disembodied therapists to robotic nurses, new technologies have been framed as a means of addressing intersecting labour shortages, demographic shifts, and economic shortfalls. However, as we race towards AI-focused solutions, we must scrutinize the challenges of automating care. This talk engages in a two-part reflection on these challenges. First, issues of building trust and rapport in such relationships will be examined through an extended case study of a chatbot intended to help individuals quit smoking. Second, the institutional rationale for favouring machine-focused solutions over human-focused ones will be questioned through the speaker’s concept of crisis automation. Throughout, new equitable cybernetic relationships between those providing and receiving care will be foregrounded.

  • Norbert Wiener, a foundational force in cybernetics and information theory, often used the allegory of the Golem to represent the ethical complexities inherent in machine learning. Recent advances in the field of reinforcement learning (RL) deal explicitly with problems laid out by Wiener’s earlier writings, including the importance of games as learning environments for the development of AI agents. This talk explores issues from contemporary machine learning that express Wiener’s prescient notion of developing a “significant game” between creator and machine.

  • The application of artificial intelligence (AI) to the law has enabled lawyers and judges to predict – with some accuracy – how future courts are likely to rule in new situations. Machine learning algorithms do this by synthesizing historical case law and applying that corpus of precedent to new factual scenarios. Early evidence suggests that these tools are enjoying steady adoption and will continue to proliferate in legal institutions.

    Though AI-enabled legal prediction has the potential to significantly augment human legal analyses, it also raises ethical questions that have received scant coverage in the literature. This talk focuses on one such ethical issue: the “calcification problem.” The basic question is as follows: If predictive algorithms rely chiefly on historical case law, and if lawyers and judges depend on these historically-informed predictions to make arguments and write judicial opinions, is there a risk that future law will merely reproduce the past? Put differently, will fewer and fewer cases depart from precedent, even when necessary to achieve legitimate and just outcomes? This is a particular concern for areas of law where societal values change at a rate faster than new precedents are produced. This talk describes the legal, political and ethical dimensions of the calcification problem and suggests interventions to mitigate the risk of calcification.

    Abdi Aidid
    Faculty of Law
    University of Toronto


    ► To stay informed about upcoming events, opportunities, and more at the Centre for Ethics, please sign up for our newsletter: https://utoronto.us12.list-manage.com/subscribe?u=0e5342661df8b176fc3b5a643&id=142528a343

  • As national and regional governments form expert commissions to regulate “automated decision-making,” a new corporate-sponsored field of research proposes to formalize the elusive ideal of “fairness” as a mathematical property of algorithms and especially of their outputs. Computer scientists, economists, lawyers, lobbyists, and policy reformers wish to hammer out, in advance or in place of regulation, algorithmic redefinitions of “fairness” and such legal categories as “discrimination,” “disparate impact,” and “equal opportunity.”

    But general aspirations to fair algorithms have a long history. This talk recounts some past attempts to answer questions of fairness through the use of algorithms. In particular, it focuses on “actuarial” practices of individualized risk classification in private insurance firms, consumer credit bureaus, and police departments since the late nineteenth century. The emerging debate on algorithmic fairness may be read as a response to the latest moral crisis of computationally managed racial capitalism.
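
    To make concrete what it means to treat “fairness” or “disparate impact” as a mathematical property of an algorithm’s outputs, here is a minimal, purely illustrative sketch (not drawn from the talk; the data and function names are hypothetical) computing group selection rates and the selection-rate ratio behind the “four-fifths rule”:

    # Illustrative sketch only: one common formalization of "disparate impact"
    # as a property of an algorithm's outputs (hypothetical data and names).
    from collections import defaultdict

    def selection_rates(outcomes):
        """outcomes: iterable of (group, decision) pairs, with decision in {0, 1}."""
        totals, positives = defaultdict(int), defaultdict(int)
        for group, decision in outcomes:
            totals[group] += 1
            positives[group] += decision
        return {g: positives[g] / totals[g] for g in totals}

    def disparate_impact_ratio(outcomes, protected_group, reference_group):
        """Ratios below 0.8 are often flagged under the 'four-fifths rule'."""
        rates = selection_rates(outcomes)
        return rates[protected_group] / rates[reference_group]

    decisions = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
                 ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
    print(disparate_impact_ratio(decisions, "B", "A"))  # 0.33, well below 0.8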

    Rodrigo Ochigame
    History, Anthropology, & Science, Technology, and Society
    MIT

  • Two centuries of dystopian thought have consistently imagined how technologies “out of control” can threaten humanity: with obsolescence at best, with violent systemic destruction at worst. Yet current advances in neural networked machine learning herald a new ethical question for this established history of critique. If a genuinely conscious form of artificial intelligence arises, it will be wired from its inception to be guided by certain incentives, one of which might eventually be its own self-preservation. How can the tradition of philosophical ethics approach this emerging form of intelligence? How might we anticipate the ethical crisis that emerges when machines we cannot turn off cross the existential threshold, becoming beings we should not turn off?

    Alex Hanna
    ML Fairness
    Google