Episodit

  • In this special BONUS episode of the podcast, Todd is joined by Rights Track producer Chris Garrington of Research Podcasts to discuss their recently published book The Rights Track: Sound Evidence on Human Rights and Modern Slavery.

    The book, published by Anthem Press, is launched today (September 6, 2022) at a special event hosted by the University of Nottingham's Rights Lab, funders of Series 3-5 of the podcast.

    Transcript

    Todd Landman 0:01

    Welcome to The Rights Track podcast, which gets the hard facts about the human rights challenges facing us today. I'm Todd Landman. In this special episode of the podcast, I'm delighted to be joined by Rights Track producer, Chris Garrington to discuss our new book, The Rights Track: Sound Evidence on Human Rights and Modern Slavery, which was published in July. Chris and I launched The Rights Track podcast together in 2015, and have just finished production of a seventh series. Chris is the director and owner of Research Podcasts Limited, which specialises in consultancy, training and podcast production for researchers and students. So, welcome to this side of the mic, Chris.

    Christine Garrington 0:38

    Thanks, Todd. I was gonna say it feels strange to be here. But of course, it doesn't feel strange at all, because I'm always here for recordings of Rights Track episodes, but it does feel strange, slightly strange, I'm not gonna lie, to be speaking into the mic and having a proper conversation with you in this way. But wonderful.

    Todd Landman 0:55

    You're now the guest, you are not sitting behind the scenes trying to make the guests sound fantastic. So Chris, I wanted to start with, when did podcasts first enter into your head?

    Christine Garrington 1:06

    Oh, that's a really good question. So you know, Todd, but our listeners won't know that my background was in journalism. So I came straight out of university and trained to be a journalist back in the late 80s, early 90s. And spent most of that time working in radio - in BBC local radio in Essex. And then when Radio 5 Live launched here in the UK in the mid 90s, I worked there. So I developed, if you like, my love of audio, my passion around the power of audio to tell stories, to report news, as well as obviously all of the technical and editorial skills required to do that well, whilst working as a radio journalist. But jump forward a decade: after leaving the BBC and doing a few different things and living abroad for a while, I came back to the UK and ended up, a little bit by chance if I can be honest, working in a research institute at the University of Essex.

    Todd Landman 2:03

    Yeah.

    Christine Garrington 2:03

    And it was actually there, where I was given free rein to try to help that institute promote its research better, to communicate and engage around its research with non-academic audiences, with wider audiences, that I came up with this idea of using my skills and my background in this new setting - the university and research setting - to launch a podcast. And it was indeed there that I launched my very first podcast, and that would have been in around, I think, 2010-2011.

    Todd Landman 2:34

    Wow. So 12 years ago. Now, I wanted to just hone in on one thing you said there: you've talked about people telling their stories. And I want to link it to my next question, which is, at what point did you want to work with academics interested in presenting their own podcasts? But I guess, are they any good at telling their stories? And did you really have to coach them to tell their stories? Because sometimes when people ask us questions, we give ridiculously complex answers. And people really want more straightforward answers to questions, maybe in a more binary fashion. So how do you get academics interested in presenting their own podcasts? And how do you actually get academics to sit and talk in a way that is meaningful, interesting and productive for a non-academic audience?

    Christine Garrington 3:14

    Yeah, I think I'd go back a little bit and say that when I produced my first podcast, that was me in the chair, that was me, Chris Garrington, journalist, interviewer, you know, trying to coax good answers out of researchers and working with them in a way where they could present their work accessibly - by asking questions, if you like, that were not necessarily about the complexities behind the research, in terms of the methods and the regressions and the variables and all the things that a lot of researchers want to talk about, particularly social scientists - and work with them to really think about how they could answer simple questions about the meaning of their research, or how it could benefit people in the real world, how it could be of help to policymakers and practitioners. So if you like, I was already working with them in that way in that environment. But moving on to your sort of question, I had this thing in the back of my mind, which was: wouldn't it be great, you know - I can ask, you know, sensible, intelligent questions, but I'm a bit of a jack of all trades, master of none here - wouldn't it be great if there were academics who would really want to - and it's not for everybody - who would, with the support and the background and the experience that I bring to the pot, present their own podcasts? And they would, you know, if you like, be using that medium to communicate and engage around their research in a way, I think, that could benefit them, but also really demonstrate the potential impact of their work. And yeah, hence the next steps into really wanting to work with academics who would want to produce and present their own podcast.

    Todd Landman 4:57

    That's brilliant. And I guess, you know, in the back of your mind, or maybe in the front of your mind throughout that process, you always have the audience in mind. Who's the audience going to be? What will they be interested in? And how do we produce something that will meet that interest and capture their attention for the length of a podcast?

    Christine Garrington 5:15

    Yeah, exactly. And, you know, you'll remember and will reflect back - and we have reflected back in our book, Todd - about the important conversations we had before we went anywhere near a microphone, right? I mean, we talked at length about who is it we want to engage with? Who is it we want to talk with? Who is this for? Who are we trying to reach? What are we trying to achieve? What's the mission, if you like, of our podcast? And those are things that these days, you know, I'm sharing with academics, whether that's in training situations, or whether that's in a situation where I might be producing them to present their own podcast - those are really important conversations to have before, you know, anyone goes near a microphone and starts interviewing or having conversations with people. And I think that word conversations - I don't know how many times you and I have used the word conversations - you know, that's very, very important in the podcasting arena, because I as a journalist can conduct an interview about human rights, but I can't have a conversation. And that's where Todd Landman comes in, because you can have a conversation around and about human rights in a way that I couldn't possibly. And so the team working is what works - the team working, the Todd model, as we like to talk about it at Research Podcasts these days - really, really works. And that's why I'm so proud of it and feel so passionate about it.

    Todd Landman 6:35

    Well, I guess I'd like to talk about the Todd and Chris model, because the Todd and Chris model yielded The Rights Track. So why The Rights Track and why me?

    Christine Garrington 6:43

    So it won't surprise you to know that really, when I was sort of thinking about podcasting, and thinking about who might like to work with me, I think our paths had crossed - not particularly sort of closely - at the University of Essex. And that's, of course, where we first met, but we'd had a couple of dealings around media work and stuff like that. And so when I was thinking, you know, I need to find somebody to work with on this idea, who can I talk with? You were, you know, right at the top of the list, of course. And I remember quite clearly sort of saying to you, can I come and have a coffee and chat to you about this idea of podcasts? And, you know, you were so open to the idea. And of course, we did it a bit softly, softly: for that first sort of six episodes of a podcast, it was me interviewing you. But you know, I could tell quite quickly that you grabbed it, you grasped it. And so I suppose I might throw that back to you, in a way, Todd. You know, at what point did you think, oh yeah, this is something for me, you know, I can work the Todd and Chris model, this could be something that could work really, really well - for me, for human rights research, for communication, for impact?

    Todd Landman 7:46

    Briefly, I had been looking for a different medium to disseminate human rights information to a wider audience. And I liked talking about human rights. I taught human rights for many, many years and have had many conversations around the world - I travelled to probably 35 or 40 countries by the point that you and I had started working together. And of course, I'd spent a lot of time in public fora, whether those were, you know, externally sponsored events in those countries. Some of the highlights for me were going out to Mongolia at the time when it embraced democracy - we were doing democracy assessment, and there the audience was fully international, as well as local media, academics, civil society organisations. I spent time in Latin America doing the same thing, particularly in Mexico, Brazil, Chile, Peru, and sharing my views of course across the United States, across continental Europe, and parts of Africa. So for me, I was used to talking about human rights, I was used to teaching human rights. But much of the format of those conversations and discussions remained didactic, and I didn't have a way of capturing the conversation. So for me, it was about the opportunity to capture a conversation. And I think that's when you pitched to me over that cup of coffee: here's a new format. And I didn't know anything about podcasts. It was a bit like when Twitter came out - I thought, why would I want to use Twitter? You know, what's a tweet? Where's 140 characters gonna get me? Where does a 20 minute podcast get me? But you made very convincing arguments about why podcasting for human rights would be a good thing to do. And of course, we spent a lot of time thinking about the title. And The Rights Track is a play on words, of course, because old recordings, we'd record on tracks, and we still record our podcasts on multiple devices, and then mix them down, you know, various tracks overlaying on top of one another.
    But for me, it was a real coming together, and a very good moment in my own intellectual and sort of educational formation, that this presented a great opportunity to bring this medium to the world of human rights.

    Christine Garrington 9:43

    And so what is it that you would say that you enjoy most about presenting the podcast?

    Todd Landman 9:48

    I think the surprises, actually, to be honest. I get anxious before every podcast. I don't have a script. I don't have pre-set questions. I know who my guest is going to be and the area of work that they do. I look for a challenging - what I would call hook - question to start off each podcast, which sometimes frightens my guests, and I realise they're probably more anxious and nervous than I am. But what's interesting is, after that initial hook question, the opening up of thought, and the opening up and sharing of both a track record of work and the incredible commitment to human dignity and human rights that the guests we've met have - all those sorts of things that get revealed are always surprising. And people will come out with all sorts of surprising things that you don't expect. And of course, you have to roll with it in a live recording setting, because we don't really like to over script and over edit our episodes. So for me, it was the natural flow of the conversation: I might take furious notes while listening to somebody and pick out key words, and then use those words to craft a new question to push the conversation forward. And I found that almost improvisational element of the podcast very rewarding indeed. And then there was the final challenge of how do you end the podcast? You know, we would get deep into conversation with people about everything from, you know, gross human rights atrocities being committed, to, you know, technology and this latest series on the digital world - areas that I hadn't talked to people about, and quite complex topic areas. How do you then wrap that up? What were the main themes? How do you string those themes together? And how do you reach an end point for a finale of a podcast, to leave the listener wanting more, but also feel satisfied that they've learned something by listening to that episode?

    Christine Garrington 11:31

    Yeah, I think there's a real skill in that. And it's something that I often talk about to people who want to present their own podcasts, whatever the subject matter is: that you somehow have this art of wrapping up a conversation in a way that really pulls together the main things that have emerged. And that's, you know, that's challenging, right? Because that means you've got to listen to every word your guest says, and you've got to store all of that in your head across a 20-25 minute conversation. But I think, you know, therein is a really important thing to take away from, you know, what makes a good podcast. But I wanted to ask you as well about - there's an informality, right, about the podcasting medium that just doesn't seem to exist in any other way that academics might get to communicate their research. You know, you're always presenting quite formally, right, whether that's at a conference, or you're giving a media interview, or you're talking to policymakers giving evidence at some sort of inquiry. But this allows a sort of informality that I just think is very, very special.

    Todd Landman 12:29

    Well, I think breaking down the formality of human rights dissemination was a key motivator for me, having been to countless conferences and formal events and public fora, but also reading the literature on human rights. I've been, you know, steeped in research monographs, peer reviewed journal articles, policy reports, NGO reports, and they have a distinct formality about them: there's a trotting out of the human rights that are at stake, the legal parameters of those human rights, what is codified and not codified, where the areas of debate are. And it's risky really in two ways. One, it's not easily accessible information - and we talk about that, you know, towards the end of the book - that the sort of established ways of disseminating human rights information actually limit their accessibility for a wide range of people, particularly those people you're trying to reach. And so for me, the podcast and its informal nature is, I can just say, oh yeah, but you said this, but what did you mean by that? Or, I'm sorry, could you give an example of how that principle works in practice? Or, what in your personal experience could you tell us that either led you to that conclusion or motivated you to work in this area? And I remember some of our guests saying, look, my dad worked for a particular federal agency in the United States that was dedicated to environmental protection, and that inspired me as a child to go to university, and then at university I got interested in human rights, and then I got interested in how people mobilise for human rights, so I started researching human rights NGOs - and you get the human element and motivation behind why people do the human rights work that they do, and equally, you know, the impact they think they're having. Oftentimes, when we look at academic work, they think, oh well, you know, it's a publication, it's out there, it's, you know, it's in the peer reviewed journal world.
It's in the research monograph world, it's an echo chamber, it's just academics reading their own stuff, reading each other, citing each other and making, you know, progressive change in the development of knowledge and expertise. But who's the wider audience that one could reach? And how do we make that information more accessible to people? And how do we get the human story behind the derivation and genesis of that information? And then why that information is important for us to ponder and to think about? And so I think the podcast was a perfect medium to be able to do that.

    Christine Garrington 14:43

    Yeah. And so you mentioned the book there. So what on earth made you, if you like, sort of come full circle and come back to the written word, you know, with the idea of the book about the podcast? You know, where did that come from? Because I remember us talking about it, but, you know, it must have sort of been mulling around in your head for a while before you broached the idea with me.

    Todd Landman 15:03

    So for me, we originally set out to do The Rights Track for one year - we had one year of funding - then we accessed additional funding and more funding. And, you know, the middle series were all about the modern slavery topic; we had really nice financial support from The Rights Lab at the University of Nottingham to do that. We had had Nuffield funding and ESRC funding. And as this body of podcast content developed, I thought, there are some recurring themes here that are of interest to me. And I think I originally pitched just writing a journal article about this, and I was conscious that we would be coming full circle from the written word and the spoken word back to the written word. And, you know, one thing led to another - we sort of parked the article for a while; we had tables and figures and things about the podcast over a couple of series. But I guess it was late, you know, deep into the fifth or sixth series, when I thought, actually, I think there's enough content here for a book. And so I approached you and said, let's put a book proposal together to a publisher and see what the response might be. And of course, Anthem Press very graciously said this looks like a winner. It went out for peer review, we got really good feedback about consolidating and condensing some of the content. And then, lo and behold, for a year I sat and wrote, and you wrote and read and edited and co-edited and fed back, you know, almost every Sunday for a year. And we ended up with 96,000 words of content based on the 58 podcasts we did and the 71 conversations we had, over 26 hours of recorded content - what a beautiful kind of, you know, body of content, which was inaccessible unless we did the podcast.
    And because we did the podcast, we had this accessible content that could then be crafted into, you know, the structure of a book that not only tells the story of what we learned during this time, but also what people were saying, and what they were committing themselves to, and what they think they had achieved in the work that they did. So for me, it was all just wrapped up really nicely together.

    Christine Garrington 15:06

    Yeah, and I suppose that brings a question into my head, which is what you feel the book adds to what we've done, you know, what's come out of it as you reflect on it. And, you know, when that book arrived in your hands, as it did mine, through the post a few weeks back, you know, what is it about the book that makes you think that was really worth doing? What struck you most?

    Todd Landman 17:16

    Well, for me, you know, if I were to tell people and point them in the direction of our Rights Track website, they would encounter 58 podcasts for download. And they can listen to all 58 on their journeys, commuting journeys, whether they're at work or, you know, listening on whatever device they have - that's one approach, but it might feel a bit scattered and sporadic for them. So I felt that our role as creators of this thing was to add value to the audio content we had, and to, in a sense, augment what we've learned from people with the extant knowledge of the state of human rights from an academic perspective. And the book tries to strike that balance between background and context: why is freedom of speech, freedom of belief, freedom of religion important? Here are the parameters, here are the ongoing debates, here's what our guests said about these issues, and now we're going to tell you why that matters. And sort of chapter by chapter, podcast by podcast, we put everything together and actually, I think, ended up with a whole that was greater than the sum of its parts, because we were able to add the academic commentary on top of the conversations and back again, to craft chapters that hopefully are readable, informative, thought provoking, and raise in high relief the many human rights challenges that are facing us today.

    Christine Garrington 18:35

    Yeah, 100%, I agree with that. And also, it was wonderful for me to have the opportunity to reflect, you know, especially writing the section about why podcasts, you know, why that particular moment in time, you know, how we developed the ideas. It was wonderful to have the opportunity to reflect back on all of that in the context of a growing interest in podcasting, as we know, you know, from the early 2000s to where we are today, with podcasts very much an established part of our, you know, audio habits, if you like, for many, many people, millions of people around the world - coming from a place, actually, in the 2000s, you know, when we were talking about it, when it wasn't quite so established. I think, you know, that's been a wonderful thing, certainly for me. And I wonder also, you know, now the book's out, who you think will benefit from it, be interested in it, what they'll get out of it?

    Todd Landman 19:22

    Well, I think we pitched the proposal to the publisher that this would not be a full on academic book that only academics and students would read. It would be a book that should appeal to the general reader, and that we have enough, for lack of a better word, academic credibility behind what we're saying. There's a lot of referencing to established peer reviewed academic research, combined with experts that we spoke to - both academics but also practitioners and activists - and weaving that narrative together is something that's not been done before: taking that very rich content of our conversations and weaving it into broader academic arguments that are balanced, evidence based, and reach a reasonable resolution and conclusion about matters affecting us here today. So for me, I think it will appeal to people who are worried about the state of the world, given developments since - well, as we say in the beginning of the book - since 2001. There was almost a massive pivot after the terrorist attacks of 9/11, and there has been a kind of rolling back, if you will, of commitment to human rights and a certain emergence of political leaders who have an anti-rights discourse or a populist discourse that challenges the legitimacy of human rights. And equally, that raises in high relief what I would call this idea of reductionism, where, you know, you're either with us or against us - this binary dividing of the world into us and them, black hats and white hats. You know, grouping very disparate groups of people together and calling them all the other, to be feared and to be deported and to be, you know, deeply suspicious of. And so for us to produce a book that says, well, actually, there are a lot of people out there that disagree with that view, and this is why - and actually, did you know? Here's the flow of refugees and why. Here's what the German government did. Here's what the UK Government did. Here's why it matters. Here's some of the discourse around that.
    Here are the legal things that are in place that can help these people; here are the legal gaps where people are falling through the cracks. You know, so we identify and problematize all these issues in a way that wasn't really immediately obvious if we had just gone for a straight book. It was listening to the voices of the experts, listening to people there on the ground trying to make positive change for the world, that really gave that human, grounded element to the content of the book. And I hope that that's really enjoyable for the reader. And if you think about it, readers are listening to Audible books while they exercise, they might be listening to podcasts, but equally, they might like to pick up a hard copy of the book, and really thumb through the pages and enjoy reading the content that we've put together over these many years.

    Christine Garrington 21:47

    Yeah, indeed. And I wonder then also, again coming sort of back to the podcast, if you like - though it's all been a journey - about our efforts to create what we've talked about as sound evidence about human rights with The Rights Track. I wonder if you believe, or if you think, that in some small way The Rights Track has made a difference, that it's had an impact in its own right.

    Todd Landman 22:11

    I think so. You know, we have a chapter about human rights evidence in the book, and telling the story of human rights evidence is a very difficult and complex one. And the chapter moves really from individual cases of individual people to aggregating metrics and indicators on large groups of people and looking for macro patterns. And my world of quantitative analysis of human rights is one that doesn't give what some people mistakenly believe is a precise answer to a question. In statistics, we always deal with probabilities and uncertainties. And I was really struck by, you know, what Patrick Ball said about his work on documenting gross violations of human rights. In the case of Guatemala, he showed that during 1981 to 82, in the conflict in Guatemala under the leadership of General Efrain Rios Montt, indigenous people were eight times more likely to be killed by military agents than people from other groups in society. And I remember asking him, does that prove genocide? He said, oh no, not at all. What it is, is a statistical statement that says the patterns we observe in the data cannot be explained by chance alone, right? So it hints at and implies an intentionality to the observed differences in treatment against people, but it doesn't point its finger and say, this is absolutely sound evidence that tells you who was wholly responsible for and intentionally killed these people. That's a legal judgement that requires other kinds of evidence - that requires forensic anthropology, documentary evidence, interviews with victims and survivors of the violence, and a triangulation of sources that then let you reach the legal judgement that, beyond a reasonable doubt, genocide was committed against these people.
    And I like the sort of small 'c' conservatism of that statement around evidence: that you have to be very careful working in the field of human rights not to overclaim or, in my language, draw inferences that are not supported by the data. So we have a very strong section in the book around human rights evidence that I think needs to be read by lots of people, because you hear lots of statistics being bandied about, certainly during the COVID pandemic, and certainly after, looking at the differential impact of the pandemic on different groups. And people might say, well, therefore that demonstrates racism. Well, hang on a minute, what does that mean, you know? And I was really struck by Dominique Day and her interview with us, talking about medical bias in the medical profession, and that, at the height of the pandemic in New York City, a directive came out in a hospital that said, we don't have the luxury for protocols and data analysis and committee meetings - just use your best instinct in treating people. So that grounded decision - does someone get a ventilator or not? Does someone get a mask or not? Does someone get treatment or not? In the heat of the moment, at the peak of the pandemic, people fall back on internal heuristics and decision making that may have an absolutely deleterious effect on certain groups of people in society. And I love that explanation, because it went away from this kind of raw intentionality of racism to a much more nuanced understanding of medical bias and the sorts of everyday decisions people make - that when you aggregate those decisions up, you actually see disproportionate treatment of certain groups of people in society. And that argument that Dominique Day put together is actually analogous to so many other situations I see in the world, where individual decisions incrementally build up, and when you aggregate those decisions, you see those disproportionalities of treatment.
You see that maltreatment, maladjustment, maldistribution of resources, etc, within society that produces many of the societal conflicts and divisions that we're dealing with and grappling with today.

    Christine Garrington 25:50

    Yeah, so many great conversations, Todd. I wonder if, for you, there's a highlight? I know I've got a couple around producing the podcast. But what about for you?

    Todd Landman 25:58

    Well, I was struck by our conversations around the refugee crisis. I think speaking to Mr. Vargas Llosa was really, really impressive - his handle on statistics and the flow of people, and the sheer number of people coming out of conflict-ridden societies and why they were coming to Europe, was a particularly telling conversation for me. Of course, I loved all the statistical conversations with everyone. I was really struck by the people fighting slavery on the ground in India, working with youth, etc., and very much struck by a conversation with Mahi Ramakrishnan from Malaysia and her work with the Rohingya - thinking about just sort of the abject maltreatment of that group, and some of the complexities of that group as they fled one country where they were considered stateless, entered another country where they didn't have refugee status, and then were trying to eke out a living at the height of a pandemic. I mean, that's a lot of stuff coming together all in one place, and to have that reasoned, passionate discussion with Mahi on that topic, you know, still stays with me to this day.

    Christine Garrington 27:00

    So many great conversations. I, too, have learned so very much about human rights along the way. It's wonderful to have technology and all of the opportunities that brings, but when the book popped through the letterbox and I could thumb through it and look back and reflect on, you know, all of that work and all of that time, it was a wonderful moment. And particularly, I've got to say, to turn to the back page, because there's nothing as powerful as somebody else saying how good something is, right, Todd? I mean, we can say, you know, we thought it was a brilliant project. But when I look and see Dame Sara Thornton, the UK Independent Anti-Slavery Commissioner, talk about the book and say, you know, how much it's done to bring modern slavery to wider audiences - modern slavery being her interest - and, you know, how powerfully it demonstrates the value of technology in making knowledge accessible, and how it provides a collation and analysis of the rich material from the series, provoking thought, challenging mindsets, and ultimately with the potential to transform lives - I've got to say, you know, without wanting to get too carried away, for me, that feels like a real career highlight, and a real personal highlight that will stay with me forever.

    Todd Landman 28:10

Well, I'm glad that The Rights Track, you know, presented that opportunity to you. But I have to say, Chris, I also learned a lot from you. And the most learning actually came from your challenges to me. So I might pitch an idea, and you'd say, Yeah, but who's the audience? Or, could you just, you know, rephrase that? Because that doesn't quite make sense. Or, you haven't really captured the human element of what we actually learned from that guest, could you rewrite that passage? You know, I think you were always good at pushing me, on staying true to the theme, staying true to the model, and making sure that we absolutely kept our thinking on the right track, if I might even say so. I learned a lot. I hope you learned a lot. And I think the end result of this book is a great archive, if you will. And the great thing about books is they exist in perpetuity now, so I'm very, very pleased that we did this project together, and I hope that our listeners, as well as our readers, take away as much value as we have in producing them.

    Christine Garrington 29:04

Thank you, Todd. Thanks for listening to this special episode of The Rights Track, which was presented by Todd Landman and produced by Chris Garrington of Research Podcasts. The Rights Track: Sound Evidence on Human Rights and Modern Slavery is published by Anthem Press and is available from all major bookstores. You can access all seven series of The Rights Track podcast via your podcasting app, or on our website at www.RightsTrack.org

  • In Episode 9 of Series 7, Todd is joined again by Ben Lucas, Director of 3DI at the University of Nottingham, funders of this series. Together they reflect on some of the key themes and ideas to emerge from Series 7 of The Rights Track about human rights in a digital world.

    Transcript

    Todd Landman 0:01

    Welcome to The Rights Track podcast, which gets the hard facts about the human rights challenges facing us today. In series seven, we've been discussing human rights in a digital world. I'm Todd Landman. And in the last episode of this fantastic series, I'm delighted to be joined for the second time by Ben Lucas, Managing Director of 3DI at the University of Nottingham, a hub for world class data science research and funders for this series of our podcast. Ben helped kick off series seven at the end of last year talking about some of the challenges and opportunities created in a data driven society and the implications for our human rights. Today, he's here to help us reflect on some of the key themes that have emerged from this series. So welcome, Ben, it's great to have you on this final episode of The Rights Track.

    Ben Lucas 0:46

    Great to be here. Thanks very much.

    Todd Landman 0:48

So last night, we were at a launch event for INFINITY, which is an inclusive financial technology hub being launched here at the University of Nottingham. We had a bucolic setting at the Trent Bridge cricket ground, which I'd say was quite historic. But some of the messages I heard coming out of that event last night really gave me hope for the promise of digital with respect particularly to helping people who are currently excluded from financial technologies, or finance more generally, and the ever-present, you know, sort of problem of people getting credit ratings and getting access to finance. I wondered if you could just reflect on what was shared last night around the positive story that could be told around using technology to give people access to hard-to-find finance?

    Ben Lucas 1:29

Yeah, absolutely. So I think the central issue with financial inaccessibility is really the fact that people get trapped in this really bad cycle, where perhaps they don't have savings, and then they lean more on credit options, for example, and then become more and more dependent, if you like, on credit options. Equally, there are also folks who are excluded from accessing credit completely, or at an affordable rate, in the first instance, which obviously very much changes the quality of life, let's say, that they're able to enjoy, the things they're able to purchase, and so on. So really, the mission of projects like INFINITY, which is focusing very much on this idea of inclusive financial technology, is trying to boost accessibility to everything from tools that help people save, to tools that help people spend, to breaking some of these negative cycles that cause people to end up in not-so-great financial situations. And yeah, it's really leveraging and learning from, you know, all the wonderful developments in things like analytics and new financial services products, especially those that are app based, that we use in the rest of the financial services world, but applying them for good, basically. So very much consistent with this data for good message that we've been speaking about in this series.

    Todd Landman 2:51

Right, that's really interesting. So it's a data driven approach to understanding the gaps and inequalities in a modern society that does have the data infrastructure and technological infrastructure to give people access, but really the data driven approach lowers the barriers to entry for those folks. And I was quite struck by that. There was a colleague there from Experian, which is a credit rating agency, talking about the millions of people who either don't have online bank accounts, don't have access to the right kinds of technologies, or don't have the kind of credit rating that gives them access to the lower priced financial products out there, which in sort of ordinary terms means they're paying a much higher interest rate to borrow money than people that do have a credit rating. So one solution was to use data analytics and a data driven approach to understand their position and to boost their credit rating in a way that would give them access to cheaper finance. Did I get that right?

    Ben Lucas 3:40

Yeah, that's exactly right. I mean, the central thing in financial services and lending is obviously managing risk exposure with any individual consumer, but then also across, you know, the entire consumer portfolio. And I think one of the big opportunities in the inclusive FinTech space, and probably what we're going to see going forward, is credit rating agencies and credit rating support products looking for other variables or indicators that can really paint a clearer picture of individual consumers, and perhaps even say, well, actually, there's not so much risk with this consumer, because there are other factors that the usual, you know, bog standard algorithm doesn't pick up on, and maybe we don't have that risk exposure, maybe we can offer them financial products or lending products at a better rate. You know, that colleague spoke also about Experian's Boost product, for example, and I won't go into an advertisement for that, but it's a really interesting example of how, by sort of extending the available data and what we do with it, it's possible to calibrate and tailor solutions that are a win-win: they reduce the risk for the credit provider, but give additional consumers more accessibility. And I think the other big piece, just to detail briefly within data driven financial research, is some of the work that colleagues in the INFINITY team have been doing around helping to understand, at an aggregate level and in a privacy-preserving way, where perhaps people are making not-so-great financial decisions. So being able to, you know, hopefully in the future help flag, in a privacy-protecting way, to consumers when they're not making great decisions, which can be everything from wasteful, over-the-top expenses to things like too much gambling or unhealthy eating, for example. So certainly a very, very exciting space.
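The idea Ben describes, of extending a thin credit file with alternative indicators, can be sketched in a few lines: a conventional score built only from traditional bureau variables, next to a "boosted" score that folds in extra signals such as on-time rent payments. All feature names and weights here are invented purely for illustration; they are not Experian's, or any bureau's, actual model.

```python
# Toy sketch of "boosting" a thin credit file with alternative data.
# Every feature name and weight below is a hypothetical illustration.

def base_score(record):
    """Conventional score: relies only on traditional bureau variables."""
    score = 500
    score += 100 * record.get("years_of_credit_history", 0) / 10
    score -= 150 * record.get("missed_payments", 0)
    return score

def boosted_score(record):
    """Extended score: adds alternative indicators (e.g. regular rent
    or utility payments) that a thin credit file would otherwise miss."""
    score = base_score(record)
    score += 40 * record.get("months_rent_paid_on_time", 0) / 12
    score += 20 * record.get("regular_utility_payments", 0)
    return score

# A "thin file" consumer: short history, but a reliable payment record.
thin_file = {"years_of_credit_history": 1,
             "missed_payments": 0,
             "months_rent_paid_on_time": 24,
             "regular_utility_payments": 1}

print(base_score(thin_file))     # the conventional view of the consumer
print(boosted_score(thin_file))  # same consumer, with extra variables
```

The point of the sketch is only that the same consumer looks less risky once variables outside the standard algorithm are allowed to count in their favour.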

    Todd Landman 5:33

No, it's really fascinating, and it resonates well with many of the themes we've heard in this series of The Rights Track. So I'm going to just think about putting these things into groupings, or clutches of perspectives, if I may. So you made reference to this idea of data for good, and of course, we had some guests on the podcast this series, including Sam Gilbert, who talked about the ability of digital transformation and data driven approaches to unearth previously unknown factors and public health benefits, and it could be social justice benefits and other benefits, from leveraging datasets that don't normally talk to each other in a data analytic way. Wendy Betts told us about preserving the chain of evidence using visual imagery that is date stamped, time stamped and location stamped, and then preserving the metadata that sits behind an image, for verification in the investigation of human rights abuse and human rights crimes. Amrit Dhir showed us how, in the United States, his organisation Recidiviz uses data from prisons to actually bring a greater sense of justice to prisoners, as well as parolees. And finally, Diane Coyle, the world famous economist, not only reflected on the many economic transformations that have happened with the digital disruption, but also made the case for universal access to online life, and being on the grid, almost as a basic human right, in the way that access to information, access to health care, and access to services need to be provided. And certainly during COVID-19, we learned that many people were excluded from those services precisely because they didn't have the right internet connection, or at least could not afford to have the right kind of internet connection. So I just wondered what your general reflections are on that general theme of data for good, and what can you tell us about what you think, listening to the guests that we've had during this series?

    Ben Lucas 7:21

Yeah, I mean, I really liked the way that Sam sort of sets the scene in his book, Good Data: An Optimist's Guide to Our Digital Future. I think that nobody, of course, likes to have their privacy compromised at an individual level. But the reality is, when we look at, you know, the things we can do when we have data at scale across large populations, there's a lot that can be achieved, whether that's in something like inclusive FinTech, whether that's in protecting human rights by combating modern slavery, or whether that's to do with health data in a system like the NHS. Yeah, I don't think anybody likes to have their privacy compromised, obviously, at that individual level. But if there's a sort of way to communicate that greater good message - I'm not trying to encourage people to willingly give away their data for free, quite the opposite - I think that's the sort of big debate, and for both commercial and academic data scientists, you know, that's really the arena in which we work. Because there are a lot of benefits to be had when we think about data at scale. Equally, we need to protect individuals and communities. I think it's been really great in this series to hear about things like eyeWitness and Recidiviz, and some of these platforms that I think are managing that really well and really getting that good out of the data. Yeah, I think that's been really nice. There's a lot we can say also on the subject of artificial intelligence in particular, which came up a few times - I think this is more of a frontier thing - and which I think is going to be, well, already is actually, the next big frontier in terms of talking about transparency and fairness, especially because we're applying these tools to these large datasets.

    Todd Landman 9:04

Right. And I also came across a very interesting project in another group here at the University of Nottingham. It's within the Nottingham University Business School, and it's a neo-demographics lab, or N/Lab, which works on, you know, big data science projects around harnessing unknown information from pre-existing datasets. And there was a partnership with OLIO, which is an app that allows people to trade food that they're not going to need. So surplus food sits in people's houses, other people need food, and this app allows people to share food across the app, and to actually make the best use, almost a circular economy, if you will, of sharing food. Now, quite apart from the pragmatics and the practicalities of sharing food between households, of course, the app collects data on who needs food and who has food, and that then allows the geo-mapping of food poverty within particular districts and jurisdictions within the United Kingdom. Can you say a bit more about that project, and does this fit within the category of data for good?

    Ben Lucas 10:03

Absolutely. I mean, that's an absolutely fantastic piece of work, you know. And obviously, the purpose of that platform and all that work is to look at combating food inaccessibility and food poverty on the one hand, and on the other, combating food waste. So really, yeah, absolutely a fantastic example as far as data for good, and also doing the right thing by people in society. I think it is also a great example of this idea that we can log data from sharing platforms - really, whatever platform - in an ethical way. You know, in the work that colleagues at N/Lab are doing, it's all privacy-preserved data. It's possible to get a, you know, useful enough geotagged picture of how the sharing is taking place, such that it can be understood at a network level, but it's not giving away exact locations, and it has no identifiers of who's linked to it. But even just with that sort of network exchange level data, it really tells a very interesting story about how this system works. And, you know, as you said, I mean, this is very much in the peer-to-peer sharing economy space, which is a relatively new idea. So it's also, from an academic point of view, very important and very useful to be doing research to understand these relatively new kinds of systems.
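The kind of privacy-preserving aggregation Ben describes can be sketched roughly as follows: individual exchange records, which carry user IDs and exact coordinates, are reduced to counts of flows between coarse grid cells, so a network-level picture survives while identifiers and precise locations are dropped. The data, grid size, and field names here are invented; this is a generic illustration of the technique, not N/Lab's actual pipeline.

```python
# Generic sketch: aggregate sharing-platform exchanges to coarse
# grid cells, discarding user IDs and exact coordinates.

from collections import Counter

def coarse_cell(lat, lon, size=0.05):
    """Snap an exact coordinate to a coarse grid cell (roughly 5 km)."""
    return (round(lat / size) * size, round(lon / size) * size)

def aggregate(exchanges):
    """Count flows between coarse cells; all identifiers are dropped."""
    flows = Counter()
    for ex in exchanges:
        src = coarse_cell(ex["giver_lat"], ex["giver_lon"])
        dst = coarse_cell(ex["receiver_lat"], ex["receiver_lon"])
        flows[(src, dst)] += 1
    return flows

# Invented records: two food-sharing exchanges in the same neighbourhood.
exchanges = [
    {"giver_id": "u1", "giver_lat": 51.584, "giver_lon": -0.102,
     "receiver_id": "u2", "receiver_lat": 51.591, "receiver_lon": -0.110},
    {"giver_id": "u3", "giver_lat": 51.586, "giver_lon": -0.104,
     "receiver_id": "u4", "receiver_lat": 51.590, "receiver_lon": -0.108},
]

for (src, dst), n in aggregate(exchanges).items():
    print(src, "->", dst, ":", n)  # cell-to-cell counts only, no IDs
```

The output is enough to heat-map where sharing (and, by implication, food need) concentrates, without anything in it that points back to an individual household.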

    Todd Landman 11:26

So essentially, the heat map that that project produced was, I believe, for Haringey Council in Greater London. And I guess, you know, knowing what I know about data, this could be scaled up for all jurisdictions in the United Kingdom and beyond. The heat map tells you areas of food poverty, but it also could inform government as to where to put resource, and where, dare I say, levelling up funding could be targeted to help those most in need.

    Ben Lucas 11:53

Yeah, absolutely. I mean, as I understand it, that work's, you know, been incredibly useful for the platform and how it's looking to grow and continue to be successful. But yeah, absolutely, that's really another key thing here: the value these platforms have for policymakers, for government, indeed.

    Todd Landman 12:08

Great. So we've had the data for good story. I now turn our attention to the data for bad story, because we had some guests who were very suspicious, sceptical and critical of this burst and proliferation of digital transformation and the production of data second by second, day by day, week by week, year by year, and two of our guests actually had a different perspective on this. So Susie Alegre has this fantastic new book out with Atlantic Books, called Freedom to Think. And what she was really concerned about was not only the historical, analogue ways in which people's freedom of thought had been compromised, but also the digital ways in which freedom of thought might be compromised by this digital revolution. And for her, the concern really is that there are unwitting or witting ways in which people's thought patterns might be manipulated through AI and machine learning. And we used popular examples of consumerism and consumer platforms, such as Amazon and other shopping platforms, where not only does one get bombarded by advertisements, but one actually gets suggestions for new things to buy based on patterns of spend in the past. And there is cross-referencing between platforms - I think Sam Gilbert also addressed this thing about micro-targeting and cross-referencing. So if I search for something on one platform, it shows up on another one when I'm, sort of, you know, least expecting it to do so. I bought some shoelaces the other day; they came to the house within a day, so I had that lovely customer experience. And yet, when I went on to a CNN website to look at the news headlines, the first ad that popped up was for shoelaces. So can you say a bit more about the unease that people have around these platforms, and the worry that our thoughts are being manipulated by this new technology?

    Ben Lucas 13:45

Yeah, I think this idea of freedom of thought, or, you know, the illusion of decision freedom, is a really important one when we're talking about the internet. And especially, you know, as was evidenced with the Cambridge Analytica scandal back a few years ago, this becomes especially dangerous when we're talking about political messaging. I think it's important that we, as users of the internet, approach the internet with a healthy degree of scepticism, being a bit, you know, cautiously analytical, and occasionally taking a step back and thinking about what the implications of our behaviour online, including simply consuming content and consuming information, really are. The reality is, most, if not all, of the online platforms that we use, be that social media, ecommerce, or whatever, are designed to achieve immersion. They're designed to keep you spending more time, and if you're spending time in the wrong kind of echo chambers, or if you're getting exposed to messages from bad actors - you hear these stories of people going down all sorts of terrible rabbit holes, and this is how conspiracy theories and so forth proliferate online. Yeah, but certainly, even just for the regular internet user, we all definitely need to be thinking about: where is information coming from? Is it from reliable sources? Is the intent good? And do we indeed have that decision making freedom - I think that's the really important thing - or is someone trying to play with us?

    Todd Landman 15:13

Well, it's a really interesting answer. And it links very nicely to our episode with Tom Nichols, because he was saying that there's this tendency towards narcissism. And, you know, certainly during COVID, people had more time inside; they had more time to dedicate to being online. But at the same time, the rabbit holes that you're worrying about were really raised to high relief, and so was that retreat into narcissism, the idea that if you're going to post something, you're only going to post something negative and critical, maybe sowing division by posting those critical comments. But you also, in your answer, talked about the power of particular individuals. And I guess I have to address the question of Twitter in two ways. So Tom made this observation about Twitter, that, you know, you now have 280 characters to vent your spleen online and criticise others, but also that it's a powerful platform to mobilise people. And I say this in two ways. The first is that the January 6 committee, investigating the events that led up to the insurrection at the US Capitol, was putting a lot of weight this week on just the number of followers that former President Trump had, and a single tweet in December where he said, you know, come to the Capitol on January 6, it will be wild. And then there was an array of witnesses paraded in front of the committee, from far right groups, from the Oath Keepers and other groups of that nature, who were saying, actually, we saw this as a call to arms. So there was a nascent organising taking place, but there's almost this call to arms issued by a single tweet to millions of followers that really was, you know, the spark that lit the fire, and I wonder if you might just reflect on that.

    Ben Lucas 16:50

Yeah, I think for anyone currently also trying to keep up with, slash decipher, the story in the news about Elon Musk putting in an offer to buy Twitter, which has now fallen through, I would use that lens to sort of explore this, because one of the goals that I think he was seeking to achieve in taking over Twitter was really opening up its potential for free speech further. But yeah, for anybody sort of observing, that's a really tricky one. Because sometimes the speech is, well... I mean, there should be free speech, but people should be saying, you know, hopefully nice things within that freedom, and not denying the rights of others, and not weaponizing free speech to stir up trouble. I think - and we touched on this in the first episode of the series as well - the really big question with social media is: who's the editor in chief? Is it everybody? Or is it nobody? And which is the better format?

    Todd Landman 17:42

Yeah, and we talked about that unmediated expression and unmediated speech, and both Martin Scheinin and Tom Nichols talked about how traditional media organisations have had that mediating function, and the editorial function, which is lost when you have an open platform in the way that Twitter has, even though they did in the end deplatform the former President. But I want to get back to that. I mean, you know, the task of the January 6 committee is not only to say, we think there's a causal link between this tweet and people doing things; they will also need to demonstrate the intentionality of the tweet in and of itself. And I think that's a major concern, because there's certainly ambiguity in the language: saying, you know, come to the Capitol, it's going to be wild, doesn't necessarily convert into a mass uprising with weapons and an insurrection. So there's a tall order of, I would say, legal proof, beyond reasonable doubt, that needs to be established, were one to go down that legal route. But if we look at Elon Musk, I mean, here's one person, exceptionally wealthy, who can buy an entire platform. And the concern that many people have is, can one individual have that much power, to acquire something that powerful? And we don't know if the deal's fallen through, because there are some legal wranglings going on at the moment about whether he could actually withdraw at this late stage in the purchase process. But be that as it may, I wonder if you might just reflect on this ability for a very wealthy single individual to take control of a platform as powerful as Twitter.

    Ben Lucas 19:10

So I think it's a really complicated one; it's really one of the most complicated questions within the social media space, you know, because these platforms are ultimately businesses. There's a founder, there's a CEO, there's a board, there's that leadership, and hopefully accountability and responsibility. It is really a tough one. You know, one wonders about a future where, in the same way you've got the OpenAI foundation, for example, or other truly open, peer-to-peer kinds of platforms - if we think about how the internet, or technology, is trying to decentralise things like finance in the future - one wonders if there's an alternative model that could solve some of these problems. I think the narrative, so to speak, that Elon Musk specifically has been putting forward was really just to open up Twitter even further, taking that sort of laissez faire approach and just, you know, letting free speech sort itself out. And again, free speech is, and can be, a good thing. But sadly, when people engineer these kinds of messages to avoid legal accountability while implying, you know, some sort of stirring up of trouble, when people engage in narcissistic messaging, when people engage in putting forward campaigns engineering very, very strong emotions like fear and anger, obviously that can get out of control very, very quickly. The reality is, I'm not qualified to come up with the solution. And, sadly, I don't know who is. Yeah.

    Todd Landman 20:36

Well, that's interesting, because we had some guests who were suggesting a solution. And if I listen to you speak about the Elon Musk agenda to open up in a laissez faire way, it's almost the invisible hand of the information market. You know, if we go back to economics, one tenet of economics, at least, has been that the invisible hand sort of guides markets, and the pricing and equilibrium that comes from supply and demand produces a regulatory outcome that is beneficial for the most people most of the time. It's a somewhat naive view, because there are always winners and losers in economic transactions. So counter to this idea of the invisible hand of the information market, we had quite an interesting set of thoughts from Martin Scheinin and from Susie Alegre on the need for regulation. And that really does take us back to the beginning of this series of The Rights Track, where you made the observation that tech is advancing more quickly than the regulatory frameworks are being promulgated, that there's this lag, if you will, between the regulatory environment and the technological environment. So I wonder, just for your final reflections: is it really, as both Martin Scheinin and Susie Alegre are saying, that if tech is neutral, we need to go back to ethics, morality, law and a human rights framework to give us the acceptable and reasonable boundary conditions within which all this activity needs to be thought about?

    Ben Lucas 21:56

Yeah, exactly. I mean, it really does come down to, you know, well constructed regulation, which is obviously complicated, especially when most major social media platforms have a global footprint, so it's then how to ensure consistency across the markets they operate in. I think a lot of the regulatory frameworks are kind of there for the offline world. And the main thing, yeah, that we were sort of getting at in the first episode of this series is really that technology moves so fast, and these platforms grew so quickly. You know, there are laws to stop people - no one can just go into the town square and start, you know, hurling obscenities in public - but for some reason, it happens millions and millions of times a day on social media platforms. So I think, yeah, regulation really is key here. But the other thing I would say is that the people who misuse the definition and excuse of free speech should really look up the definition of free speech again.

    Todd Landman 22:57

Well, it's this idea of doing no harm. You know, I think I mentioned this notion of a Hippocratic Oath, if you will, for the digital world: that you can engage, but do no harm. And what people conceive and perceive as harm, of course, is open to interpretation, but that's the general kind of impulse behind this. And, you know, this distinction between the offline world and the online world is also really, really important. So Tom Nichols invites us to maybe get off the grid occasionally, go back into our community, say hi to our neighbours, volunteer for things and experience humanity face to face in the offline world a bit more than we're experiencing it in the online world. And of course, the appeal to morality, ethics, law and the human rights framework is going back to, you know, basic philosophy, basic conceptions of rights, basic conceptions of law, to make sure that our offline world thoughts can be applied to our online world behaviours. So, you know, these are super deep insights. And as the world progresses, as technology progresses, as the interconnections between human beings progress in the ways that we've seen over the last several decades, through the medium of digital transformation and this ever expanding digital world, it does make us pause at this moment to actually reflect on human dignity, human value, integrity, and accountability and responsibility for the kinds of things that we do, both within the offline world and the online world. And you've given us much to think about here, Ben, certainly across the many episodes of this series. You kicked us off with this great, you know, offline versus online, regulation versus tech dichotomy that we all face. We've heard from so many people evangelising the virtues of the digital world, but also raising significant concerns about the harm that can come from that digital world if we allow it to run unchecked.
So for now, it's just my job to thank you, Ben, for coming back on this final episode and giving us a good wrap-up set of reflections on what you've heard across the series. And thank you ever so much for joining us today on this episode of The Rights Track.

    Ben Lucas 25:02

    Thanks so much.

    Christine Garrington 25:04

Thanks for listening to this episode of The Rights Track podcast, which was presented by Todd Landman and produced by Chris Garrington of Research Podcasts with funding from 3DI. You can find a full transcript of this episode on the website at www.rightstrack.org together with useful links to content mentioned in the discussion. Don't forget to subscribe wherever you listen to your podcasts to access future and earlier episodes.


  • In Episode 8 of Series 7 of The Rights Track, Todd is in conversation with Wendy Betts, Director of eyeWitness, an International Bar Association project launched in 2015 which collects verifiable video of human rights violations for use in investigations and trials. We're asking Wendy how the use of digital technology can help to hold accountable those who commit human rights crimes.

    Transcript

    Todd Landman 0:01

Welcome to The Rights Track podcast, which gets the hard facts about the human rights challenges facing us today. In series seven, we're discussing human rights in a digital world. I'm Todd Landman, and in this episode, I'm delighted to be joined by Wendy Betts. Wendy is director of eyeWitness, an International Bar Association project launched in 2015, which collects verifiable video of human rights violations for use in investigations and trials. So today we're asking Wendy, how does the use of digital technology help to hold accountable those who commit human rights crimes? So Wendy, it's absolutely brilliant to have you on this episode of The Rights Track. So welcome.

    Wendy Betts 0:38

    Thanks, Todd. It's great to be here.

    Todd Landman 0:40

You and I met in Bergen in Norway; we were at the Rafto Foundation awards for the Human Rights Data Analysis Group, which has featured in previous episodes of The Rights Track. And I see there is a kind of correlation, if you will, between the work of the Human Rights Data Analysis Group and the work that you do at eyeWitness; it's just that the data you're collecting is really video files and video footage. So tell us a little bit about the work that you're doing with eyeWitness.

    Wendy Betts 1:08

Absolutely. So at eyeWitness, we are helping human rights defenders in conflict zones and other places that are experiencing large scale human rights violations to collect photo and video information in a way that makes it easier to authenticate, so that footage can be used in investigations and trials. So we work with human rights defenders in three ways. First, we're providing a mobile camera app that we designed to help ensure that the footage can be easily authenticated. Second, we are helping to securely store that footage and maintain the chain of custody, so it can eventually be used in investigations and trials. And third, we take a working copy of that footage, which we catalogue and tag to make it easier for investigators to identify footage that's potentially of interest to their investigations, and incorporate that into those processes.
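The chain-of-custody idea Wendy describes rests on a standard integrity-check technique, which might be sketched like this: fingerprint the footage together with its capture metadata at the moment of recording, then re-compute the fingerprint later to demonstrate that nothing has changed in between. This is a generic illustration of hashing for evidence integrity, not eyeWitness's actual implementation, and the video bytes and metadata fields are invented.

```python
# Generic sketch of an integrity check for captured footage:
# hash the bytes plus capture metadata once at recording time,
# then re-hash later to show neither has been altered.

import hashlib
import json

def seal(footage_bytes, metadata):
    """Fingerprint the footage together with its capture metadata."""
    payload = footage_bytes + json.dumps(metadata, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def verify(footage_bytes, metadata, original_seal):
    """True only if neither footage nor metadata changed since capture."""
    return seal(footage_bytes, metadata) == original_seal

# Invented example footage and capture metadata.
video = b"\x00\x01raw video bytes..."
meta = {"time": "2022-07-01T12:00:00Z", "lat": 50.45, "lon": 30.52}

fingerprint = seal(video, meta)           # computed at the moment of capture
print(verify(video, meta, fingerprint))            # True: untouched
print(verify(video + b"edit", meta, fingerprint))  # False: tampered
```

In a real workflow the fingerprint would be transmitted to, and stored by, a trusted third party at capture time, so that a court can later confirm the working copy matches what the camera originally recorded.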

    Todd Landman 2:01

    Well, that's a great summary of the work that you do. I recall when I was a student at Georgetown University, I worked in the Lauinger Library. And my job was to produce photographs in the pre-digital age. So this was processing rolls of film in the old cans - you used to kind of shake them with the chemicals - and then using an enlarger to make photographs. And that was fine for special collections and photographing books. But one day, a Jesuit priest came into the library and handed me a roll of film and said, I need 10 copies of each of these pictures. And they were actually photographs from the crime scene where Jesuit priests had been murdered in El Salvador. And I'm curious: when we enlarged those pictures and submitted them back to the authorities that requested them, is that kind of evidence still considered verifiable evidence? And what is it that the digital element of all of this adds to the veracity and the verifiability of evidence collected on human rights crimes?

    Wendy Betts 2:58

    There's a long history of photo and video being used as evidence, and that photo and video in its hard copy form would need to be verified to go to court. So generally speaking, the court would want to speak with the photographer or, in the absence of the photographer, somebody that could help explain that that footage is indeed an accurate portrayal of that location at that time. And what digital technology has done is expand who can be the photographer collecting that potential evidence. So with the two trends of smartphones in everyone's pocket, plus the rise of social media platforms where people can share this information, you're suddenly seeing this massive proliferation of the amount of available information that could be used as evidence. But indeed, this also will need to be verified in much the same way. But the challenges to doing that are slightly different, and the technology that we can bring to bear to do that is slightly different.

    Todd Landman 3:52

    Yes, I understand those differences. And so there's a lot of debate today. If we take the war in Ukraine as a good example, when it first started, there was a flurry of activity on Twitter that said, don't believe everything you see on Twitter. So there of course will be manipulated images, manipulated video. I see manipulated video every day. Some of it you can tell straight away - it just looks awful, it looks like a video game. Somebody's saying, look, you know, Ukrainians are taking out Russian tanks, and actually you look at the tank tracks and you can see it just looks like a photoshopped, superimposed image of a tank running over some really bad terrain. And then there are the fully verifiable accounts that we are seeing coming out of that conflict. So how are things verified? How does one dismiss imagery in one instance and accept imagery in another? What's the expertise required to give that verifiable account?

    Wendy Betts 4:43

    I think when you're looking at verification, what you really want to know is whether that footage was taken where and when it was claimed, and whether that footage has been edited or, as you note in your examples, photoshopped to look like something else. And then, even if it was authentic and accurate to begin with, is it possible that it has been changed somewhere along the way? Has it been taken down off social media and changed and reposted? And there have been two trends that have developed to address how we can do this. One is the plethora of open source investigation techniques that have developed: how can you geolocate images using satellite footage and other types of technology? How can you chronolocate - how can you figure out when that footage was taken? Can you do frame by frame analyses to determine if that footage has been edited in any way? So that was one approach, and that has become increasingly professionalised and is really coming to the fore in Ukraine. And then the other approach is the one that eyeWitness has taken, where we developed a tool that can hardwire that information in at the point that footage was taken. Those are called controlled capture tools, because you're basically controlling the information and controlling that footage, keeping it in a controlled environment for its entire lifespan. So you're collecting information about where and when that footage was taken, you're ensuring that footage can't be edited, and you are maintaining that footage in that secure state through its lifespan.

    Todd Landman 6:04

    So the app itself has the technology built inside it, you've actually hardwired that programmable element to the app, and it can't be tampered with. So if I download this app as a user, and I'm travelling through the world, and I want to document something, it's easy to use on a mobile device, easy to proliferate and sort of disseminate if you will out to users. And it's easy to learn by those users. Because the technology itself has been created in a way that preserves the identity and the verifiability of the images that are captured.

    Wendy Betts 6:39

    That's exactly it. The eyeWitness app is designed to be really easy to pick up and start using, and on the surface the user interface is much like a standard mobile camera - you just have to open the app instead of your camera. But you're recording footage in the same way, you can enter the secure gallery where the footage is stored to see what you've taken, and you upload it to eyeWitness - this is how we maintain the chain of custody and secure that footage until it can be used. And then you have the option to share it with your social media networks, you can attach it to a WhatsApp message, you can do a variety of things with it. All of the verification aspect is intended to happen behind the scenes, kind of inside the technology. So the app is designed indeed to collect information about where and when that footage was taken from three different sources, none of which are the user themselves. It's also collecting information to ensure that that footage can't be edited. So we are calculating basically a digital fingerprint at the moment that information is captured, and that stays with that footage. So if any changes were ever to be made to it, you'd be able to spot that by running the algorithm for the fingerprint again. And then that footage is stored encrypted on the device, and it's transmitted encrypted to eyeWitness, so it can't be intercepted or manipulated either at rest on the phone or in transit on its way to us.
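
    The digital fingerprint Wendy describes can be illustrated with a cryptographic hash. This is a minimal sketch of the general technique, not the app's actual implementation (which is not published here); the choice of SHA-256 and the function names are assumptions for illustration:

```python
import hashlib

def fingerprint(footage: bytes) -> str:
    # Compute a digital fingerprint (cryptographic hash) of the raw footage.
    return hashlib.sha256(footage).hexdigest()

def is_unaltered(footage: bytes, original_fingerprint: str) -> bool:
    # Re-run the hash later; any change to the footage changes the digest.
    return fingerprint(footage) == original_fingerprint

# At capture time, the fingerprint is stored alongside the footage.
clip = b"raw video bytes"
digest = fingerprint(clip)

# Later, an investigator can confirm the footage has not been altered.
print(is_unaltered(clip, digest))            # True for untouched footage
print(is_unaltered(clip + b"edit", digest))  # False once anything changes
```

    Because even a one-byte change produces a completely different digest, re-running the algorithm is enough to detect tampering anywhere along the chain of custody.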

    Todd Landman 8:00

    So you have a secure server where all these raw files are held. Is that right?

    Wendy Betts 8:05

    Indeed. So we've been fortunate to partner, on a pro bono relationship, with LexisNexis Legal &amp; Professional, and so they host our server in the secure hosting environment that they have for litigation services, for a variety of confidential evidence that's used in cases around the world. So they host our server, which allows us to scale up quickly to meet the need. And Ukraine is a perfect example: we've received more footage from Ukraine since the invasion began than we have globally in the last two years. So that ability to scale up quickly is very important, and, more importantly, it is stored securely. They have state of the art security around that in a way that we couldn't necessarily put around a server if we were hosting it ourselves.

    Todd Landman 8:51

    That's amazing. Can you tell us a little bit about the complexity of a Ukraine investigation? Let's take the case of Bucha. We know in Bucha, that there were atrocities committed of some kind, clearly there has to be an evidentiary threshold reached, there has to be a profile of perpetrators and victims, there has to be that whole disaggregation of very complex human rights events of the kinds that you and I discussed with the team from Human Rights Data Analysis Group, but what are the steps that eyeWitness takes? What's the role that you take in the preparation of, let's say, an investigation into something like the Bucha incidents that we saw?

    Wendy Betts 9:30

    So I think if we back up to your comment earlier about just the sheer amount of footage that we've been seeing on social media, including from places like Bucha, I think there's a sense that there is plenty of evidence out there and we've got everything we need. And what everyone needs to do is take a step back and realise how complex, as you said, these cases are. So you need information about what actually happened on the ground, what happened to these victims, and that takes the form of witness statements, it can take the form of physical evidence, it can take the form of photo and video. But we also need to know the context in which it's happening. If you want to elevate something to be a war crime, instead of a murder, you need to understand the conflict dynamic and what's happening. And then if you want to hold people at higher levels of authority responsible, and not just the people on the ground who pulled the trigger, you need to make those linkages. And that, again, is documentary evidence, it's witness evidence. So all of these pieces of this massive evidentiary puzzle have to come together. At eyeWitness, we see ourselves as one of these pieces: we are a photo and video piece of evidence that can tell part of the story but has to work together with these other aspects. So we don't do full investigations ourselves and put all these pieces together. What we do is equip either civil society investigators, ordinary citizens, journalists, or others on the ground who have access to these scenes and are collecting photo and video with a tool to do it in a way that they can feed that information into investigations - because it can be so easily verified - so they can contribute to this puzzle, in order to help hold the perpetrators responsible.

    Todd Landman 11:03

    I think this whole portrayal of the contribution that you're making is really important. In our interview with the director of the Human Rights Data Analysis Group, Patrick Ball, the sort of data guru, as it were, in these areas, he said, you know, statistics are not a silver bullet. So the work that they do would provide the statistical analysis that showed that certain things were happening that could not be explained by chance alone. But it was only ever one part of a very complex story, alongside documentary evidence, alongside testimonies, alongside forensic anthropology, alongside many other things. And then ultimately, a determination of, let's say, genocide was a legal judgement that was either supported or not supported by the type of statistical evidence that was provided alongside other pieces of evidence. Now you're making a very similar case that for whatever body is going to be prosecuting crimes, whether in Bucha or the broader conflict in Ukraine, eyeWitness is only ever going to be one part of that much bigger story. Is that right?

    Wendy Betts 12:02

    Exactly, exactly. I think all of these different strands of investigation have to work together - the people collecting witness statements, the people doing open source investigation of footage and other information that was posted early on, the people who have access to official documents. All of these pieces have to fit together because, as you said, in addition to showing just the baseline conduct happening on the ground, you need to show these patterns and magnitudes. And you can only do that by amassing large amounts of information that can show some of those patterns and run those types of statistical analysis that Patrick was talking about. So it all does fit together and complements each other.

    Todd Landman 12:42

    Yeah. And you know, the conflict in Ukraine is by no means over. And you know, I read a report, I think it was yesterday, that said there are up to 30,000 war crimes that need to be investigated. Now, each crime itself requires extensive documentation, and then you multiply that by the number of crimes. And of course, there may be future crimes committed that will need to be documented as well. So the scale of just this conflict in Ukraine - you said you've received more images from Ukraine than you have in the last two years from other areas of the world, and we may get to talking about those other areas of the world. But to me, the scale of what's happening in Ukraine, and the time that's required to fully prosecute many of these crimes, means that we're really going to be in this for the long haul.

    Wendy Betts 13:25

    Justice, unfortunately, in these types of cases is definitely a long term process, and the arc of justice is quite long. And that's what we hope is part of the value added of eyeWitness and why we provide that secure storage aspect, because the photos and videos taken now may well not be involved in an investigation or a trial for years and years to come. And so we can safeguard that footage in a way that even at that time, we can hand it over and it could stand up to the scrutiny. But indeed, I think we're looking at a long term prospect for justice.

    Todd Landman 13:58

    Yes. And outside the Ukraine context, what are some other examples of where eyeWitness has been collecting this video footage from other parts of the world?

    Wendy Betts 14:06

    So eyeWitness launched publicly in 2015, and we really do work globally. And we respond to the inquiries and the needs of human rights defenders in various parts of the world. Now, some places we don't advertise, especially where the security situation is quite serious for some of the human rights defenders using the eyeWitness app. But in other places, we have been able to be a bit more public. So we have been working actually in Ukraine since 2017, and we put out a report about shelling of civilian dwellings to the United Nations Special Rapporteur on the right to adequate housing. So that's one area where we've been active even before the current events. We've also recently submitted a report to the UN Special Rapporteur on extrajudicial killings related to violence occurring in the middle belt area of Nigeria between farmers and herders. We've also been active in the Palestine context with partners there using the eyeWitness app. So we've been quite broadly represented around the globe. And we view accountability broadly as well, and that's why I'm mentioning non-judicial approaches to accountability. Any effort that can get at this conduct, investigate it and help to hold the perpetrators responsible is what we're interested in empowering human rights defenders to do.

    Todd Landman 15:25

    Okay. And do you provide training alongside because it's one thing just to download an app and start using it, but you might make sort of fundamental errors in using the technology from the start? So do you provide a training manual or workshops or online training for users as they download the app and then say, well, actually, this is the best way to film things? Or do you just sort of allow the technology to run in the hands of the users?

    Wendy Betts 15:49

    Our preferred approach is to work in long term partnerships with human rights defenders that want to use the app. We very much see the app as a tool, and to use it effectively, you do need to put more skill building and strengthening around that tool. So we do work hand in hand with human rights defenders who plan to use the app, on not only how to use the app, but how to incorporate photo and video into demonstrating whatever types of violations they are looking into. We can provide training on how you actually film when you're at the crime scene. We work with a lot of human rights defenders whose primary efforts have been advocacy oriented, and those are very different photos than photos you want to take for evidence, and so we work to help them make that shift as well. And then we give them ongoing feedback once their footage starts coming in. We can provide tech support - if they're out in the field, and we know they're going on a documentation mission, we can be ready to answer any questions if they have any. So we really want to work with them hand in hand to not just use eyeWitness but use it effectively.

    Todd Landman 16:54

    I understand and does the technology work in the absence of a mobile signal in the absence of a WiFi connection? Can you collect videos on a phone, outside of network, and then when it gets back into the network, you're able to upload the images and videos that have been taken to a secure server?

    Wendy Betts 17:11

    Our goal in designing eyeWitness was to make sure that it can work in the types of environments where these human rights defenders are active, especially when you look at conflict zones where electricity may be disrupted, internet may be disrupted, cell service may be disrupted. So the app is designed to be able to collect not only the photos and videos, but all of the metadata that's needed to help verify where and when they were taken, while offline. So you don't need to have access to the internet, nor do you need to have a cell subscription or any other kind of data service - it will collect all of that. It's designed to store that information securely in a gallery separate from the gallery on your phone, so it's hidden in a secure gallery. The idea being that these human rights defenders may have to make their way back to their headquarters, or back to someplace with internet, before they're able to upload it to us and then delete it off their phones. So we wanted it to remain hidden in transit during that timeframe. So it is definitely aimed at helping individuals in contexts with high security risks and infrastructure challenges to be able to use the app.
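
    The offline-first flow Wendy describes - encrypt at the moment of capture, hold the footage locally, and upload only once connectivity returns - can be sketched roughly as follows. This is an illustrative sketch only, not the app's real code; the `encrypt` placeholder and the class name are invented for the example, and a real implementation would use an authenticated cipher such as AES-GCM:

```python
from dataclasses import dataclass, field

def encrypt(data: bytes, key: bytes) -> bytes:
    # Placeholder XOR cipher for illustration only; a real app would use
    # an authenticated cipher such as AES-GCM.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

@dataclass
class CaptureQueue:
    key: bytes
    pending: list = field(default_factory=list)

    def capture(self, footage: bytes) -> None:
        # Encrypt immediately, so nothing sits on the device in plaintext.
        self.pending.append(encrypt(footage, self.key))

    def flush(self, online: bool) -> int:
        # Upload (and then delete locally) only when connectivity returns.
        if not online:
            return 0
        sent = len(self.pending)
        # ...transmit each encrypted item to the secure server here...
        self.pending.clear()
        return sent

queue = CaptureQueue(key=b"secret")
queue.capture(b"clip taken in the field")
print(queue.flush(online=False))  # 0 - footage stays encrypted on the device
print(queue.flush(online=True))   # 1 - uploaded once back in coverage
```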

    Todd Landman 18:17

    You've definitely given that a lot of thought, I guess another question that flows from that is what's the minimum viable technical requirement on a phone? Obviously, it needs to be a smartphone with a camera and a video. But how far back in time can you go in terms of the age of a device because of the availability of resources, etc in some of these conflict zones? What sort of phone is the basic unit you require to use the app?

    Wendy Betts 18:39

    That is a really good question, because it's such an important issue in terms of access and availability of these tools to the vulnerable segments of society that need them most. The first thing I should say is that it's designed for Android, and we don't currently have an iOS version. Part of that is because the demographics of the places where we're working are primarily Android users, so it's designed for that operating system. And we've designed it to go back to Android 6.0, which I think is roughly the operating system on phones back to 2015. So it does stretch back a fair way. We made a decision not to go back any further, and that's because Android changed how it handles security from the 6.0 version onward, and we could harden the security of the information - both to protect the user and the integrity of the information - from that version onward in a way that was more difficult in previous versions. So that's why it goes back to 6.0.

    Todd Landman 19:32

    And are there any plans to make this available in iOS? Or are there sort of limitations in terms of partnering with Apple to make that happen?

    Wendy Betts 19:40

    We regularly revisit the question, and we're actually currently in the process of again looking at whether we could replicate all the functionality that we currently offer - the security, the anonymity, those types of things - in an iOS version, and then looking at the cost compared to the potential user base. Those are the calculations we make. So we're looking at that right now again, actually.

    Todd Landman 20:00

    But for the user, this is free. It's an app that you download for free and then use. Is that right?

    Wendy Betts 20:05

    Exactly. It's free. It's freely available. As I said, we would like to work in partnerships, but that's not necessary. Any individual can go to the Google Play Store now and download it and start using it. We do have written instruction guides on our website, we have a short video on how to use it, and some other resources that are available.

    Todd Landman 20:25

    Great. And then I guess my final set of questions really is about how this evidence connects to, let's say, different photographic evidence. You made passing reference to the use of satellite imagery, which has been a very powerful tool. I think the company Planet takes a picture of the entire surface of the Earth every 24 hours with its sort of flocks of satellites, and they have the system where, if one satellite goes down, they can easily replace it with another one within the flock. And they have a tremendous number of images that are very high resolution, and I should say increasing resolution. But that's one version of what you can see from space, as it were. And what you're saying is, in the hands of users and defenders, you have almost a citizen science ground truthing that can take place as well. Are there any efforts to coordinate between your organisation and some of these providers of satellite imagery, if asked to do so? You mentioned the destruction of houses - the Special Rapporteur on adequate housing, for example. So you could see satellite images before and after a village is destroyed. But equally, you could triangulate that with your users on the ground saying, here's a house being destroyed, I'm hiding in a bush filming this right now. Is that sort of partnership and, you know, sort of holistic approach being developed in your organisation?

    Wendy Betts 21:38

    So we have certainly used satellite footage in some of our analyses, and that Ukraine report about shelling is a key example. In that case we didn't establish a partnership; we used what was publicly available that we could access, to help go back and look at the dates and locations of the photos we have and then go back and look at satellite footage. And we used that primarily to determine when the attack actually took place. So we have photos dated as to when they were taken, but that doesn't necessarily give you the date of when the attack was. So we used satellite footage a lot to help determine: okay, this building looked intact on this date, and it certainly looks more like the photo that we have on this other date. And in that way, we were able to determine at what point the attack probably took place. We've also worked with another organisation that was doing an investigation of environmental damage in a different location. In that case, they were able to get the latitude and longitude of the event that they were looking at using the app, and then they were able to get historic and current satellite footage for that location to be able to trace the trends of what was happening there. So they were looking at some environmental damage, and you can help see the change in the environment based on what you're seeing in the satellite photos. That being said, there's certainly the ability to work with satellite providers to help target. So I think if you're setting out to do an investigation, and you know you're going to be in certain places at a certain time and you need some of those satellites pointed at those locations, I think those types of partnerships are indeed possible. We haven't engaged with any of those at the moment because, again, we tend to be led by our human rights defender users and what they want to investigate. But I think there are organisations that are engaging in those types of partnerships.
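
    The before-and-after satellite reasoning Wendy describes amounts to bracketing the attack between the last image in which the building looks intact and the first in which it matches the damage on the ground. A minimal sketch of that logic, with entirely hypothetical dates:

```python
from datetime import date

def attack_window(observations):
    # observations: list of (date, building_intact) pairs from satellite passes.
    # The attack must fall between the last "intact" image and the
    # first "damaged" one.
    intact = [d for d, ok in observations if ok]
    damaged = [d for d, ok in observations if not ok]
    return max(intact), min(damaged)

# Hypothetical satellite passes over one building.
passes = [
    (date(2022, 3, 1), True),    # looks intact
    (date(2022, 3, 9), True),    # still intact
    (date(2022, 3, 14), False),  # now matches the damage in the ground photo
    (date(2022, 3, 20), False),
]
start, end = attack_window(passes)
print(start, end)  # 2022-03-09 2022-03-14
```

    More frequent satellite passes narrow the window; the ground footage then anchors the damage to a verified place and time within it.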

    Todd Landman 23:19

    That's great. That's very clear in your explanation. And then I suppose a follow up question would be: you've been in operation now since 2015, you've had seven years of footage coming into the secure servers, and you've supplied images to cases. Can you tell us a story of success? Have there been successful prosecutions, from a legal perspective, where you think that eyeWitness has made a definitive contribution to the outcome of those cases?

    Wendy Betts 23:44

    Absolutely. So as I mentioned, we launched the app in 2015, and we're looking at atrocity crimes. So going back to your earlier point about the long arc of justice in these crimes, we have to kind of bear in mind at what point they might actually go to trial. So we've provided a significant amount of information to investigations at different stages of the process, and not all of those have gone to trial yet. But indeed, we did collaborate on a case that has gone to trial and resulted in a conviction. And this was a project that we did in partnership with WITNESS, which is a group based in Brooklyn, and with TRIAL International, which is an organisation based in Switzerland that does strategic litigation. And all three of us then partnered with a human rights defender group on the ground in the Democratic Republic of Congo. In that case, they were investigating a massacre that took place in 2012 in two different villages in eastern DRC. And local human rights defenders were able to use the app, based on training that they received from WITNESS on filming a crime scene, to help put together a case that TRIAL International was helping to build. They were able to use the app to go back and collect photos that helped to actually authenticate footage that was taken contemporaneously with the massacre, but that hadn't been stored in a way that protected the chain of custody. So they were able to go back and take footage of some of the same scarring injuries on the victims to demonstrate that the ones taken at the time were accurate, and were able to take photos of the mass grave, which could be used to help determine the number of bodies based on its dimensions and how that matched up with the reports of the number of people who had been killed, and with the photos that had been taken at the time of the burials.
So all of this footage was entered into evidence by the prosecutor in the case, and was accepted by the court and was noted in the judgement about the power of the footage. And indeed, the two militia leaders were convicted of crimes against humanity.

    Todd Landman 25:51

    Right. So that's a real success story. I had the pleasure of visiting WITNESS in Brooklyn back in 2011, and I recall - it's funny - when you enter their offices, they sort of have a timeline of tech sitting in their front room, you know, cameras from ages ago up to the latest stuff. And they're very, very good at training people how to represent human rights in a slightly different way than you do it. But working together obviously has produced a great benefit. Now, it's that timeline of tech that interests me. And my final question is that, you know, technology continues to advance at an exponential rate. What do you see for the future in this space? What would you like to do that you can't do yet, but you think will be possible in a few years' time, with respect to the technology that you've been working with and developing?

    Wendy Betts 26:32

    That's a great question. There are so many exciting things that can happen with technology. I mean, there's already - it's not even in the future - work looking at virtual reality and using that for juries, to kind of put them in the place of the crime scene. And that's all based on taking a number of photos and videos that can then be put through an algorithm to be transformed into virtual reality. There's the idea of being able to take 3D photo and video that you might be able to broadcast into the courtroom. I think the interesting component of that, though, is: can the courts keep up? I think the courts now are trying to determine how best to handle the digital evidence that's coming out of this flood of footage over the last 10 years. So I'm not sure we're ready to start talking about how we handle 3D images that are captured on a mobile phone.

    Todd Landman 27:20

    Yeah, absolutely. And, you know, like DNA suddenly emerged as a new thing that, you know, transformed the legal profession in terms of solid evidence about whether somebody was actually present at a crime scene, and you could relitigate cases from many years ago. You've put your finger on that challenge between the advance of technology and the ability of legal entities and courts to keep up - there have to be determinations around what is an acceptable piece of evidence. And that's a very interesting challenge for the future. But you've given us so much to think about here. And I think there is this fear of technology - a fear of manipulation of images. There's also the fear of someone cracking the encrypted storage of these images. But you have given us assurances and confidence in the technology that you've developed, the way that you've partnered with organisations to help you store this information, and then, of course, this chain of custody - the chain of evidence which is unbroken - and the ways in which these images really do contribute to, as you call it, the long arc of justice. So it's a very interesting conclusion to reach, at least at this stage, in listening to you and talking about how this form of technology, which is in the palm of our hands, gives us the power in the palm of our hands to defend human rights in such interesting ways. And, in my view, it shows us that the digital transformation and technological advance we're seeing in the world can make a positive contribution to positive social change. So Wendy Betts, it just leaves me to thank you very, very much for sharing your thoughts with us today on The Rights Track.

    Wendy Betts 28:45

    Thanks so much for having me. It was a great conversation.

    Christine Garrington 28:49

    Thanks for listening to this episode of The Rights Track podcast, which was presented by Todd Landman and produced by Chris Garrington of Research Podcasts with funding from 3di. You can find a full transcript of this episode on the website at www.RightsTrack.org together with useful links to content mentioned in the discussion. Don't forget to subscribe wherever you listen to your podcasts to access future and earlier episodes.

    Additional Links:

    eyeWitness app

  • In Episode 7 of Series 7 of The Rights Track, Todd is joined by Tom Nichols, Professor Emeritus of National Security Affairs at the U.S. Naval War College and Contributing Writer at The Atlantic. Tom specialises in international security affairs including U.S. - Russia relations, nuclear strategy, and NATO issues. His recent book – Our Own Worst Enemy: The Assault from within on Modern Democracy is an account of the spread of illiberal and anti-democratic sentiment throughout our culture.

    Transcript

    Todd Landman 00:00

    Welcome to The Rights Track podcast, which gets the hard facts about the human rights challenges facing us today. In series seven, we're discussing human rights in a digital world. I'm Todd Landman, and in this episode, I'm delighted to be joined by Tom Nichols. Tom is Professor Emeritus of National Security Affairs at the US Naval War College and contributing writer at The Atlantic. He specialises in international security affairs, including US-Russia relations, nuclear strategy, and NATO issues. He recently authored a book - Our Own Worst Enemy: The Assault From Within on Modern Democracy. It's an engaging account of the spread of illiberal and anti-democratic sentiment throughout our culture. So today, we're asking him who's responsible for this, and what we should do about it. So Tom, it's fantastic to have you on this episode of The Rights Track. So welcome.

    Tom Nichols 00:51

    Thank you. Thanks for having me.

    Todd Landman 00:53

    So I have a rather unusual question to enter into this conversation with you and it involves Indian food, because in your book, you talk about the idea that you're not a big fan of Indian food. But tell me a little bit of the story. What happened when you just expressed this view that you know what, I don't like Indian food?

    Tom Nichols 01:10

Well, I didn't just express that I didn't like Indian food, I added this kind of snarky comment, because it was on Twitter, of course, and someone had said, post your worst food takes here. And of course, people said things like, well, I hate mayonnaise, and doughnuts are bad, and so on. But I said, Indian food is terrible, and we pretend that it isn't. And, of course, I meant my colleagues who would always drag me to Indian restaurants, and then spend the afternoon sweating and gulping water, you know, sweat running in their eyes. And I would always turn to them and say, you can't possibly be enjoying this. Because I don't happen to like very spicy food. And this firestorm broke out. I mean, within two days, you know, I was this, you know, genocidal racist maniac. You know, I was in all the Indian papers. I was in The Washington Post - Russian television mentioned me. I mean, it was insane. All because I'm a middle aged New Englander who just doesn't happen to like Indian food and is very snarky about it. The coda to this whole story is that finally the former US attorney in New York, Preet Bharara, when the pandemic finally lifted, he took me out and said, I challenge you to come to dinner with me. And he took me to an Indian restaurant. And I said I would sit there and I would just eat Indian food, while people were making donations that were going to be used for a COVID ward in India. And this challenge ended up raising about $135,000 for COVID relief in India.

    Todd Landman 02:42

That is fantastic. Now, I'm going to pick this apart a little bit, because what's interesting is that what looked like an incidental and, as you admit, a bit of a snarky comment about your food preferences - what you really communicated there is the rapidity and the spread of information geographically, how it gets picked up. It's a bit unusual how one tweet can be picked up and really run, and other tweets just sort of die on the vine, as it were. So this captures this idea that you have in the book around the viral nature of information, regardless of its veracity - how it can spread around the world, how the originator of that information might be vilified by an anonymous group of people out there, and then how stories get picked up. So is that your summary of what happened there - that it was just this kind of, you know, ridiculously rapid thing about what was actually just a personal preference?

    Tom Nichols 03:33

Yeah, absolutely. And there's two things to note about it. One is the nature of hyper connectivity, where, you know, I mean, when I started my career, 35 years ago, in the late 1980s, if you were mentioned in a newspaper, you know, people clipped that and sent it to you in an envelope in the thing that we used to call the US mail, with a stamp on it, and they'd say, wow, you know, I saw that you were mentioned in a newspaper. Now, you can be mentioned in every newspaper in the world in 24 hours. You know, on the one hand, I suppose there's a good side to that, which is that we all have the opportunity to be more informed. But the second part of it that makes that so worrisome is that the internet rewards negativity. And so instead of people taking that in kind of the light hearted or snippy spirit that I intended it, what rewards engagement is to assume that anyone you encounter in the virtual world has the worst intentions, and that it's your job to kind of, you know, reveal that to the world. I mean, I had people within about three days literally sending emails to my workplace saying, I hope you die.

    Todd Landman 04:48

    Just because you didn't like Indian food.

    Tom Nichols 04:51

And because I had said it in this very dismissive way. And because also there were a lot of people deciding that this was an opportunity to show their own, you know, elevated consciousness about a part of the world, about India. As I said many times after that incident, if I had said, you know, French food is overpriced, gluey junk, and we pretend it isn't, people would have shrugged, because there's no psychic income from defending expensive French cuisine. But this was, you know, this narcissism. And this is what I was getting at - there are some times or some incidents that really speak to this problem that I talk about in the book, of the rise of narcissism, where people say, 'Uh huh!', you know - instead of reading this article or tweet or letter or whatever it is, this is an opportunity for me to say something about myself, and to say it loudly, and to say it, you know, by being very hostile to someone else. And I think that's kind of made the world crazy. When I wrote a piece initially about this tweet, I said we've become planet Seinfeld. And famously, the creator said, it's a show about nothing. We are now a global culture that is constantly manufacturing things out of nothing, because that's a way that we generate satisfaction and actualization for ourselves. And it's very worrisome, because you can't sustain democracy on that.

    Todd Landman 06:20

Right. We'll get to the effect on democracy in a minute. I mean, I share your pain - not in the palate, mind you. But you know, I write books like you do. I write articles for The Conversation, for the Guardian, for other outlets, and they get a modicum of interest and support. And then one time on BBC Breakfast, I was asked to comment on the intelligence reports about Russian interference in the US elections, and I happened to be out of sequence in the studio. I get on the couch, and they're running a story about a Marine who rescued people from Mount Everest. And they turn to me, and I think, wait a minute, they have the wrong guest. So I say, 'you have the wrong guest'. That got more hits, more attention - the analytics on my website went completely haywire. So that focus on either the negative or the humorous can actually, you know, go out of control more than the erudite, focused work that I try to do in the day job. But I'm going to get back to this point about the undermining of democracy. And I want to start with a compelling argument you make in the book, and to me it references some really interesting political science literature, most famously by Ronald Inglehart, who back in 1977 published a book called The Silent Revolution with Princeton press. And then he followed that up with a book called Culture Shift. And his main thesis was that at times of plenty, in advanced industrial democracies, there's a development of what he calls post material values. So if people are not concerned about a roof over their head, food in their mouth, a job every day and a paycheck, they turn their attention to other things, like human rights, like nuclear power, like climate change, like women's rights, and other issue areas that transcend the traditional class issues that, you know, Marxists would want to talk about, or those interested in the economy. 
Now, you're making a really interesting argument in the book, because you're basically saying that in those countries where we've had economic plenty, material progress, technological advance - and now we throw in an ability and a platform for people to share the thoughts they have, second by second - this has actually created this phenomenon. You say people are too connected and too isolated at the same time. Tell us about that insight from the book.

    Tom Nichols 08:33

Right. It's great that you do a touch back to Ron Inglehart, because there was so much that I wanted to think about with this book, and the idea that somehow, once you stop this kind of struggle for your daily bread, you can actually think about other things. You know, people listening now might say, well, that's obvious - but that wasn't really obvious at the time. I mean, you know, people even through the Depression - we went from the Depression into World War Two - we started thinking about things like making the world safe for democracy and all that. But then I think we went even further, from some kind of post materialist thinking to postmodern thinking, where everything became mediated through our experience of it, where we just decided that the world was just one big TV show. It's kind of like The Truman Show or, you know, a kind of virtual reality exercise where we were constantly connected to each other, and snooping in and out of each other's houses all day. You know, when people hear me say connected, I don't just mean by Twitter or Facebook. I mean things like - and I talked about this in the book - I mean things like Zillow. I just gave a talk the other day where I was talking to about 100 people. And I said, come on, people, admit it. You've snooped on your neighbours and looked inside their houses by going to Zillow, and these hands sheepishly went up. You know, we spend a lot of time being very connected to and very interested in the lives of our neighbours, but not actually interacting with them in any positive way. We don't talk, we don't do things with them. And I thought maybe there's one of the other political science works here we're going to name check. 
Here's one that I put in the book, which was Bowling Alone by Robert Putnam, you know, where we don't join bowling leagues - we go bowling, and then we post pictures of it. You know, we don't actually interact with that middle stratum of people who are somewhere between close friends or family, and strangers. There are so many people - I learned this when I decided, years ago in my 40s, to take up golf to try and, you know, get some physical activity. And suddenly I realised I knew a tonne of people in my community. I didn't know them well, but I knew them enough to be able to have a conversation with them. I mean, I didn't join a country club, it was a public course. And, you know, having a beer at the bar afterwards, I got to know a lot of people. We don't do stuff like that any more. And so we are both connected and isolated, in a way that just rewards negativity. It rewards using other people and their views and their lives as raw material for us to express our own grievances and sense of entitlement and, you know, gripes - and basically, again, to make it about us rather than about other people.

    Todd Landman 11:18

Yeah. And you know, the reference to Bowling Alone is brilliant, because the thesis of that book, of course, is that because people are bowling alone, because they're not going to the PTA, they're engaged in chequebook activism - and now, I think, PayPal activism. It erodes social capital, it erodes the fabric of society, it erodes that connectivity, those chance encounters, whether it's a golf club, a bridge club, a local social club, or just going down to the local bar and getting a drink. People are now experiencing the world quite literally through a screen. And certainly during COVID, that was thrown into very high relief - people were isolated. And I wonder if there's going to be a post COVID effect. But what you're describing is a sort of post industrial or post material, postmodern resentment that focuses not only on the negativity, but also - I'm going to throw in another term here - this idea of relative deprivation. If you spend all your day looking at how everyone else is living their life - and we know that's a fiction, we know that what we see on Facebook and Instagram and any other platform is an idealised, artificial version of somebody as they go about enjoying their lives - that creates resentment as well, and the sense of relative deprivation. Why does that person have many more fun things to share on social media than me, including a really nice slap up Indian meal, I might add? And that resentment that develops creates that ennui, that sort of, you know, desperate sense of negativity. And what I'm curious about then is, how does this connect to a problem for democracy, and by extension a problem for human rights?

    Tom Nichols 12:53

Well, when you think that everyone's living better than you, you develop that constant sense of entitlement. And we know, by the way - psychologists have actually measured this - that spending a lot of time on Facebook will actually depress you. Because, as you say, what do people post on Facebook? Here's my daughter's wedding. Here's my son's graduation. You know, here we are at Disney. Nobody posts their, you know, first day out of rehab pictures, their divorce decrees, their court appearances. No one puts that stuff up. So you look at it all and say, wow, my life sucks. And the conclusion you come to is that somehow this is a failure of government, because government's supposed to fix all these things for you. And therefore, it's a failure of democracy. And I want to anticipate one criticism I know is always out there about this. You know, there are people who will say, but these are legitimate gripes, because of things like income inequality, for example - because of the very rich and the very poor. You know, this is why it takes so long to write a book like this: the data just doesn't bear this out. There are two important things to understand. The first is that the anti democratic attitudes are centred heavily in the middle class. Back in the 50s, you know, the term lumpen bourgeoisie started to pick out a kind of middle class that is bored and restless and hates democracy, because they think they're not getting everything out of it that they should. The example I use in the book is an old friend of mine from school - now passed away, unfortunately - who literally was complaining to me about how bad things were while he was sitting on his boat. You know, a working class guy with a high school diploma and nothing else, sitting on his boat, talking about how the world had done him dirt. 
That's very much the problem - it's not the poor and the dispossessed, and minorities and marginalised people, who are giving up on democracy. It is middle class white people in Italy and Britain and the United States and Poland and Turkey and so on. The other problem with the income inequality argument is that most of the anger and most of the dissent in the country is not focused on, you know, poor people versus Jeff Bezos; it's the middle class griping at each other about subtle gradations among them. There's even a thing that social psychologists call the HGTV effect, where people spend a lot of time watching these home and garden TV shows. And they literally then decide they have to improve their house, because they find it intolerable that people they see on television who are like them somehow have, you know, granite countertops and hardwood floors and recessed lighting, and they look around and they say, how come? These people are just like me, and I don't have that. That leads to this anger that says democracy is a rigged game that's always arrayed against me, and somehow I'm getting screwed in all this. And so the right answer to this is to burn it all down.

    Todd Landman 15:45

Right. So that's the crucial point. And I know the work of Robert Pape, out of Chicago, has been looking at those people who were most involved with the insurrection on January 6 at the US Capitol. And he actually says, look, a large proportion of these people were actually white collar professionals. You know, there were people who were estate agents from Texas, hiring private aeroplanes to fly to Washington, DC to protest the rigged system, as it were. They got caught up in something they maybe didn't realise they were getting caught up in. And then when they were arrested, they said, oh my God, they arrested me. Yeah - you broke the law.

    Tom Nichols 16:20

I think this is a really important point. And I think Bob's work on this - he and his team basically just sat down and trawled through all of the arrest records and cross indexed and did the deep dive on each of these people. These were not unemployed factory workers living in opioid decimated wastelands. They just weren't. That's a comforting thought - and I say comforting, because people think that if that were the case, it's something you can fix with better social policy. But they weren't. They just were not those people. What they were, were people who were, again, bored, narcissistic, grandiose. One of them - the person you're talking about, the real estate agent from Texas - basically said something to the effect of, I'm just too white and blonde and pretty to go to jail, or something, you know. And she turned her jail term, which was only, I think, 45 days - she turned it into a stunt. Which, you know, for a lot of us - I think it's always bothered me that these people got these kind of piddly 30 and 45 day jail sentences. You know, six months in a federal prison might have sobered her up a little bit.

    Todd Landman 17:27

And I think, more controversially, one of them said, you're treating us like black people. Now that, of course, has a racist dimension to it as an observation. But if we get back to the topic of the digital, then - the technology that you talk about, being too connected - the platforms, technology like WhatsApp and Parler and some of the other things that were available, of course did allow for collective action, one might even say connective action: these groups were able to communicate with each other to plan and coordinate. I don't need to tell you this - you're a national security affairs professor at the US Naval War College, you know how groups organise - but the sort of organising infrastructure, if you will, of the digital world allowed this to happen. There was chatter, there were security agencies that absolutely knew there was chatter, and yet there was an absence of response, at least in a timely fashion, to prevent this from happening. And so we see, for example, you know, pro democracy movements organised in the same way, anti democracy movements organised in the same way. And the recurring theme on our podcast this series has been this idea that, you know, technology is neutral - it's whatever people do with it that you have to be worried about. So what can you say about that?

    Tom Nichols 18:35

I hope people understand I am not a technophobe. I'm 61, so I came of age when the internet did, and I loved the internet. I have a huge social media account. And, you know, I was the geek tweaking computers and doing all that stuff in the 90s, and even into my dotage. But I agree that the problem is what you do with the technology. The other technology that really made a lot of this possible, that I think we need to give a shout out to, is mobile phones, which allowed people to kind of track each other and stay in touch with each other during this moment. But of course, in a lovely kind of, you know, karmic irony here, it also allowed the government to be able to pinpoint exactly who was where, by checking that data from cellphone towers and locators and all that other stuff that put a lot of these people in jail. But again, the problem is the social normality underlying it. When it comes to the connectivity, when it comes to things like email and chat rooms and social media - I kind of stole this from a writer named Yevgeny Simkin, who said, 'Every town had an end of the world guy, right? Every town had a guy with a sandwich board saying the end of the world is coming.' What the internet did was make every one of those guys able to reach out to every other one of those guys in every one of 100,000 towns, and to believe, hey, we're a movement - instead of saying, I happen to be the one guy who's just kind of a bit off and, you know, paranoid about the end of the world. No, we're a movement, we're a social force. And you see this with a lot of other things, sometimes with really tragic effects. The New York Times reported on a group of people who believed that the government is watching them, which is a problem that people with emotional issues have had long before there was an internet. I mean, I grew up with an uncle who had that exact same problem in the 1950s and 60s. 
But they have now actually formed a kind of social movement, by reaching out - ironically - through social media to say, see, it's not us that has the problem; it's a real thing, because enough of us believe it. That is how extremism grows through this. There is no countervailing social pressure. When you're the one guy who is, you know, an abject Hitler-admiring racist, it matters that everyone around you says, that's a terrible thing to believe. You're wrong. If you can go online and find 100,000 other people who believe that same thing - which you always will be able to do, because it's a big world - then suddenly you say, well, maybe I'm not wrong. Maybe I'm part of a movement, maybe everybody else is wrong, because look at all my new friends. That's the real danger here. It takes people out of their social environment, removes them from normal kinds of interactions about what might be right or wrong, or good or evil, and lets them go find the community of people who will agree with them about anything. And that, I think, is the behaviour we've really seen growing. I'll just add one more point, which is that those of us who write and have any kind of public persona used to get the occasional crank letter here and there - angry crank letters, people reaching out in the most hostile and violent way possible. Because again, this used to be something that was socially unacceptable; it required a certain modicum of effort - you actually had to write something down, put a stamp on an envelope, whatever. It's now just commonplace. It's just part of the cost of doing business. If you step at all into the public eye, it's just a normal part of being in the public view now. And again, I think, because people encourage each other to do it.

    Todd Landman 22:16

Yeah. A couple of keystrokes from a troll, and suddenly you've gone viral in a negative way. So Tom, I want to push you a bit on this then. We have the socio economic question, the middle class point, the post material resentment point, the Bowling Alone point - all these things which come together in rather tragic and scary ways that undermine democracy and potentially compromise many different sets of human rights. But I wonder, in the remaining time we have together, if you might say what are a couple of practical things we can do, as a solution, to actually curb the worst forms of this behaviour and to regain faith in democracy and human rights, in ways that mitigate the developments that you set out in the book?

    Tom Nichols 22:54

I'm sorry to say that I got to the end of the book, and I wasn't that optimistic. But I did have a couple of things. One is that we need to concentrate on small scale projects. But all of this - everything I'm about to say - requires human beings engaging in an act of will and self reflection, to step away from their screens, to turn off the TV for a moment, and to take a walk and say, what kind of person am I really, and what kind of person do I really want to be? There are a lot of things that need to be done in your community. The problem is that, as a very narcissistic, grandiose culture, even people who mean well, you know, will say things like, well, we have to change the structure of the US Senate, or we have to do away with the Electoral - well, I have bad news for you: you're not going to do away with the Electoral College. As heroic and as satisfying as you may think that is, you're not going to change the US Constitution tomorrow. That's not going to happen. What you can do is reach out and work with other people in your community to register voters, to volunteer at a polling site, to phone bank, to help get the potholes filled, to go to a city council meeting, to go to a school committee meeting. People don't do any of that, because it's boring. And I know this because my mother was a city alderman. I come from a working class background - uneducated, you know, high school dropout parents - but my mom one year got very angry, I tell the story in the book, about a drug market operating down the street from our house. So she campaigned on it. She went and got elected to City Council and fixed it. You know, you can do things like that. 
I was at a meeting recently where I talked to three or four local elected officials, who all told me that they had been in their jobs forever because they had no opposition - literally, they were getting reelected every year or two years as Selectmen or Assemblymen or assessors or treasurers or whatever it was in their town, because nobody wanted the job.

    Todd Landman 24:49

    No one else was running against them.

    Tom Nichols 24:52

Yeah, and because again, even well meaning people say, well, that's beneath me, that's too boring - I'm going to reform the United Nations one day and, you know, solve world hunger. And that's, unfortunately, what we do. So that's one thing. I think the other is that if you are involved with a political party - at least here in the United States, and I think this probably would hold true for Britain and other places as well - parties need to mean something. You know, in the United States, we had two prominent political figures, one of whom hijacked his party, the Republican Party, under Donald Trump, and one of whom nearly hijacked the Democratic Party away from their nominating process - Bernie Sanders, who never joined the Democratic Party; he wasn't even a member of the party. And I think that parties used to have some kind of coherent, ideological content to them. And now they're just tribal flags of convenience. And I think people ought to think about that. I'm a former Republican, I worked for a Democrat, I'm now an independent - I'm not joining a party at this point in my life. But if you are younger, and you feel strongly about parties, then, you know, they should mean something, and they should stand for something. But all of these mean stepping away from the constant censoring stream of the internet, putting your own ego a little bit on hold, being kinder and thinking better of other people, and working on projects at a scale where you can start building up - Todd, you mentioned social capital - and start rebuilding that bank of social capital, those little interactions that give a society the resilience to hold on through bad times.

    Todd Landman 26:28

That's brilliant. And those examples are great - you really have highlighted this tension between, you know, hyper narcissism and community. And you know, I was struck, listening to you, that what you're really saying goes right back to Alexis de Tocqueville, and his assessment that the strength of American democracy was in the natural inclination of Americans to join things and to help one another. And I think that in this current period, we've lost sight of that. And what you're saying is: step outside yourself, step outside your home, get off the grid, help somebody, invest in your community. And, you know, don't take yourself so seriously. Actually, the technology will always be there. As we've been discussing on this series of the podcast, technology advances - if it does follow Moore's law, it doubles every year, and it's likely to continue to do so. But really, there's a human story here: that you need to step outside yourself, step outside your home, and engage with others, and rebuild that social capital and social fabric, in order to hold on to the institutions that have so well governed our societies - not just American society, but also democracies around the world. And we are seeing democratic backsliding at the moment, and, you know, your views around joining, helping, reaching out and stepping away from the narcissistic self that is somehow isolated within this electronic bubble - that is the first step. So thank you so much for appearing on this episode of the Rights Track with us. It was absolutely brilliant listening to you and engaging with you. And for now, all I can say is: thanks very much, and have a great day.

    Christine Garrington 27:59

    Thanks for listening to this episode of The Rights Track podcast, which was presented by Todd Landman and produced by Chris Garrington of Research Podcasts with funding from 3di. You can find a full transcript of this episode on the website at www.RightsTrack.org together with useful links to content mentioned in the discussion. Don't forget to subscribe wherever you listen to your podcasts to access future and earlier episodes.

    Tom Nichols 27:59

    Thanks for having me.

  • In Episode 6 of Series 7 of The Rights Track, we're joined by Susie Alegre, an international human rights lawyer and associate at Doughty Street Chambers specialising in digital rights. Susie's work focuses in particular on the impact of technology and AI on the rights to freedom of thought and opinion. Her recently published book - Freedom to Think: The Long Struggle to Liberate Our Minds – explores how the powerful have always sought to influence how we think and what we buy. And today we are asking her how do we liberate our minds in a modern digital world?

    Transcript

    Todd Landman 0:01

Welcome to the Rights Track podcast, which gets the hard facts about the human rights challenges facing us today. In series seven, we're discussing human rights in a digital world. I'm Todd Landman, and in the sixth episode of the series, I'm delighted to be joined by Susie Alegre. Susie is an international human rights lawyer and associate at Doughty Street Chambers, specialising in digital rights - in particular, the impact of technology and artificial intelligence on the rights to freedom of thought and opinion. Her recently published book - Freedom to Think: The Long Struggle to Liberate our Minds - explores how the powerful have always sought to influence how we think and what we buy. And today we're asking her, how do we liberate our minds in a modern digital world? So Susie, it's great to have you on this episode of the Rights Track. Welcome.

    Susie Alegre 0:47

    Thank you so much for having me. I'm very excited to be here.

    Todd Landman 0:49

So I love the book - Freedom to Think - I've read it cover to cover. In fact, I read it probably in two days, because it's such a compelling read. And I guess my first question for you is: why is the freedom to think, broadly understood - belief, expression, speech, religion, thought - why is all of that so critical to us as human beings?

    Susie Alegre 1:10

I think the way that I've looked at it in the book is really dividing those elements up a little bit. So what I focused on in the book is freedom of thought and opinion, and what goes on inside our heads, as opposed to the more traditional discussions that we have around freedom of speech. And one of the reasons for that is that, while freedom of speech has consequences and responsibilities, and freedom of speech can be limited, that freedom in our inner worlds - to think whatever we like, to practice our thoughts and opinions, and to decide whether or not there's something we should share - is what allows us to really develop and be human. And the right to freedom of thought and opinion, along with belief and conscience insofar as we practice them inside our heads, is something that's protected absolutely in international human rights law, which I think reflects its importance. And when you consider other absolute rights in human rights law, like the prohibition on torture or the prohibition on slavery, the right to freedom of thought inside your head, alongside those other rights, really gets to the heart of human dignity, and what it means for us to be human.

    Todd Landman 2:24

    Yes, and so in protecting those rights, we are giving people agency, because I was really captured by one thing you just said there about how we choose what we want to share. So a lot of us can have a million thoughts a second, but we don't share all of them - although in the current era, it seems that people are sharing pretty much everything that they're thinking. But we'll get to that in a minute. I'm just curious about this idea of agency: that, you know, you choose what to share, you also choose what not to share. And that element of choice is fundamental to being human.

    Susie Alegre 2:53

    Absolutely. And a key element of the right to freedom of thought and freedom of opinion is what's called freedom in the forum internum - that's inside, you know, in our inner lives. It's not what we then choose to do or say in the outer world. And having that inner space is really important for us to be able to develop who we are. You know, I'm sure all of us have had thoughts that we wouldn't particularly like to be recorded. And I don't know if you've seen the recent drama Upload, which...

    Todd Landman 3:28

    I have not.

    Susie Alegre 3:29

    Well, it's worth a look, because I was watching one of the episodes where people were effectively unable to shut off their thoughts - their thoughts were being live streamed, if you like. And I mean, you can only imagine the horror of that - you know, that was a comedy. A similar story played out in a short story by Philip K. Dick, The Hood Maker, which was a situation where you had people who were able to read other people's thoughts, and the only way that you could protect yourself from this mind reading was to wear a hood. And so protecting your thoughts from mind reading was really seen as an act of rebellion and effectively made unlawful. And that, I think, shows just how important this space is. It is, if you like, the absolute core of privacy. So privacy becomes like a gateway right to that central core of who we are and how we decide who we're going to be.

    Todd Landman 4:27

    I like this idea of a gateway right - that's really cool. Now, in the book, the first part is quite a deep dive into history. I mean, you go right back to Socrates, you work your way through Galileo, you work your way through people who challenged the status quo through freedom of thought, whether it was scientific practice, or religious belief, or any kind of thought. But what are some of the high points of this history and, shall we say, the analogue attempts to control people's thoughts?

    Susie Alegre 4:53

    Yeah, as you say, I looked right back, and Socrates is, if you like, a classic example of a martyr for freedom of thought. One of the interesting things as well about Socrates is that we don't have anything written down by Socrates, because Socrates was himself very suspicious of the written word and what that did for humans' ability to debate. But what he did do was absolutely question the status quo. And he delighted in creating arguments that would undermine Greek democracy at the time. But one of the reasons why we all know the name of Socrates, and remember Socrates, is because Socrates was effectively judged by his peers and forced to take his own life by hemlock, because of his scurrilous ideas and his attempts to twist the minds of young Athenians and to question the gods. So while Socrates might be seen as an example of a champion of freedom of thought and freedom of speech, it was very clear that at that time in history you didn't really have freedom of speech, because it ultimately landed up with a death sentence. Some of the other areas I looked at were people like Galileo, and questioning whether the sun and the universe travelled around the Earth or the other way around - and that really landed him in house arrest. So really, again, questioning the status quo of the church. And certainly religions through the centuries have been one of the prime movers in curtailing freedom of thought and freedom of religion, if you like.

    Todd Landman 6:32

    Yeah, in my world, the Galileo story is a kind of clash between observational data and belief.

    Susie Alegre 6:38

    Yeah, absolutely, absolutely. But again, it sounds like one of those arguments of, you know, well, you can have your own opinion, and every opinion is open to question - but in that century, you'd end up under house arrest when you challenged the beliefs of the status quo and of the powers that be.

    Todd Landman 6:56

    Yes, we see that being played out today in the scepticism around science - whether one takes an extreme view about, for example, being a flat earther, or if there's doubt about scientific discovery, scientific development, the way in which countries responded to the COVID crisis, the hesitancy around vaccines, mask mandates, that kind of general scepticism around science. Sure, there's freedom of thought, belief and opinion. But then there's also tested, peer-reviewed scientific evidence for the best thing we think we can possibly do in times of great uncertainty.

    Susie Alegre 7:31

    Absolutely. And that area is a prime area where you see the difference between freedom of thought and opinion and freedom of speech and expression. So where you have COVID conspiracy theories, if you like, spreading through social media - really provably false information that can harm people - you know, there is then a legitimate reason to restrict that expression and the spread of that expression, to protect public health. It doesn't mean that people can't still think those things. But there really have to be limitations on how those expressions are spread when they are absolutely damaging to public health or to other people's rights.

    Todd Landman 8:18

    Yes, exactly. And I don't think you covered this in the book, but I just want to push you a little bit. You mentioned that Socrates's words weren't written down. But with the invention of the printing press, how did that change freedom of expression, thought, belief historically? What's the role of that technological advance in your understanding of the history of this idea?

    Susie Alegre 8:39

    Well, the printing press just really accelerated the way that information could be shared - it effectively accelerated the impact of expression, if you like. And interestingly, actually, I was asked recently to compare regulation of the printing press, and of printing around that time, and how long it took to get serious regulation, as compared to trying to regulate the internet today. And I said, rather flippantly, well, people were arrested and books were burned. That was how regulation worked initially, in response to the massive impact of the printing press. And while I was being flippant, when I thought about it afterwards - well, actually, that is how they tried to regulate the printing press. And one of the reasons I looked back at the past of freedom of thought, at the ways we didn't really have freedom of thought historically, is that, to me, it showed what a sea change having human rights law has been for us as human beings. So, you know, people may complain about cancel culture, but certainly in the UK cancel culture very rarely involves actually being put in prison. Certainly it doesn't involve being told to drink hemlock - or, certainly, not being obliged to drink hemlock. Human rights have really put the brakes on the ability of the powers that be to control us. But they've also put on them an obligation to protect us from each other.

    Todd Landman 10:13

    And there's a certain duality then, because if I think about what you just said - the powers that be - let's translate that into the rise of the modern state, as it were. And quite regularly through the book you draw on Orwell's 1984, you draw on Arendt's Origins of Totalitarianism, you draw on Huxley's Brave New World. So why did you draw on those sources? It seems to me you're alluding to the power of the state, the power of control, all those sorts of aspects. And yet, in order for human rights to work, we still need the power of the state. So there's a two-sides-of-the-coin problem that we face in this quest for regulation.

    Susie Alegre 10:52

    Absolutely. And drawing on those sources - in particular Orwell and Huxley - I mean, perhaps because I'm a bit of a masochist, I spent the start of lockdown reading 1984, and just marvelling at how prescient it was and how accurately it portrayed the developments of technology in our lives. The speakwrite machine, the way that Winston Smith is employed to rewrite history - creating disinformation in real time, if you like. Having not read it since 1984, what was a real surprise to me was just how accurately prescient it was. And similarly, reading Brave New World, and the consumerism and the use of distraction as a means of social control - rather than the oppressive jackboot that you see in 1984 - and seeing the ways that potentially commercial enterprises and a light touch can be used to have an equally corrosive and problematic effect on our societies. The reflections of the images of Huxley and Orwell in particular were so stark that I felt I had to use them, because it seemed that rather than taking those as a warning from the 20th century, we've taken them as a template for the development of technology and consumerism in our lives.

    Todd Landman 12:23

    So I suppose that really allows me now to segue nicely into your concerns over the digital world and how this digital world relates to human rights. And I guess my entry point is this famous line you have in the book where you say, you know, I told my daughter, she can't have Alexa. And she asked me why. And I said, you can't have an Alexa because it steals your dreams, and sells them to other people. Talk me through that. Talk me through your fears and worries around Alexa and what that means for the broader digital problem that we face.

    Susie Alegre 12:52

    Yeah, Alexa is certainly a case in point. And as I'm sure anyone else with children has experienced, your child comes home and their friends have got whatever technology it is - in this case Alexa - and I know several families where the kids do have Alexa in their bedroom. So you will always get these arguments: well, so-and-so has it, so it must be great. For me, the idea of Alexa - the idea of actively choosing to bring a listening device into your home that is constantly listening to what is going on in your home and sharing that with you have no idea who, using that information in ways you have no real idea how it's going to land up - is something so astonishing. You know, having spent years working on human rights and counterterrorism, and most recently working in oversight of interception of communications, and seeing how allergic people are, quite rightly, to state intrusions - to the idea that the state might be bugging your home - to then actually pay money and let a private actor come in and listen to everything that's going on in your home for profit just seems to me really astonishing. And yet somehow it's become so normalised that, as I said, I know lots of people who do have Alexa and are delighted to have it. Plenty of people in the lockdowns were suddenly sending around videos from their Ring cameras outside their doors. But this idea of constant monitoring of our lives for someone else's profit seems to me like a really fundamental shift, and something that we should all be really concerned about.

    Todd Landman 14:51

    Now, in addition to the Alexa example, you're also very concerned about, shall we say, the unregulated unleashing of - and I'll use the generic term - algorithms in the digital world. So why are these algorithms problematic, from your perspective? What do they do? How do they affect people? Or is it the way they're affecting people without people even knowing - and is it that ignorance of the effect that concerns you? Or is it just the development of algorithms in the first place that concerns you?

    Susie Alegre 15:20

    No, I mean, algorithms are digital tools, if you like. So it's not the algorithm itself. There are two things really - well, there are many, but let's start with two. One is the ability to understand why an algorithm is operating in the way it's operating. An algorithm is effectively told to take information and translate that information into a conclusion or into an action. But understanding exactly what information is taken, how that information is being weighted, and then how a decision, if you like, is being taken, and what impact that decision will have, is often not very clear. And so where an algorithm based on huge amounts of data, for example, is being used to decide whether or not you might be fraudulently claiming benefits, that raises really serious concerns, because the outcome of not getting benefits, or the outcome of being flagged as a fraud risk, has a really, really seriously detrimental impact on an individual life.
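    Susie's description of an opaque benefits algorithm - weighted inputs collapsed into a single flag - can be sketched in a few lines. Everything here (the feature names, weights and threshold) is hypothetical, invented for illustration; the point is that only the final boolean is visible to the person affected, never the weighting behind it.

```python
# A hypothetical fraud-risk score: inputs are weighted and collapsed
# into a single flag, so the person affected never sees *why*.
def risk_score(features, weights):
    """Weighted sum of feature values; the weighting is the opaque part."""
    return sum(weights[name] * value for name, value in features.items())

def flag_as_fraud_risk(features, weights, threshold=0.5):
    # Only this boolean outcome is visible to the claimant.
    return risk_score(features, weights) >= threshold

# Hypothetical claimant: each input is innocuous on its own.
claimant = {"recent_address_changes": 2, "foreign_income": 1, "late_filing": 0}
weights = {"recent_address_changes": 0.15, "foreign_income": 0.3, "late_filing": 0.4}

print(flag_as_fraud_risk(claimant, weights))  # → True
```

    The claimant sees only the `True`; without access to `weights`, neither they nor an ombudsman can reconstruct why the decision was reached.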

    Todd Landman 16:29

    Yes. And you also give the example of credit ratings. So if, typically, somebody wants to get a mortgage in the UK, the mortgage company will say, well, we're gonna run a credit check on you. And they might go to one of the big data providers that gives you a score. And that score is a function of how many credit cards you have, any loans you might have had, any late payments you might have made on a loan or a mortgage in the past. And in the absence of a particular number, the company may reserve the right to say you can't have a mortgage. And I think you give the personal example of your own struggles setting up a bank account after having lived abroad.

    Susie Alegre 17:03

    Yeah.

    Todd Landman 17:04

    Talk us through some of that.

    Susie Alegre 17:05

    Yeah, absolutely. So as you say, I talk a bit in the book about returning from Uganda, where, ironically, I'd been working as a diplomat for the European Union on anti-corruption. And I came back to the UK to work as an ombudsman in the Financial Ombudsman Service. But when I applied for a bank account, I was suddenly told that I couldn't have the bank account - because the computer said no, effectively. The computer had clearly decided that, because I was coming from Uganda, or whatever other information had been weighed up against me, I was too much of a risk to take. The fact that I had been fully vetted as an ombudsman, and that the money that would be going through that bank account was going to be salary from the Financial Ombudsman Service, was not enough to outweigh whatever it was the algorithm had decided against me. Eventually, I was able to open an account a few months later. But one of the interesting things, then, working as an ombudsman, was that I did come across cases where people had had their credit score downgraded because the computer said so, and where the business was unable to explain why that had happened. I mean, from an ombudsman perspective, I was in a position to decide what's fair and reasonable in all the circumstances of a case. In my view, it's very difficult to say that a decision is fair and reasonable if you don't know how that decision has been reached. But those kinds of decisions are being made about all of us, all the time, every day, in different contexts. And it's deeply concerning that we're not often able to know exactly why a decision has been taken. And in many cases, we may find it quite difficult even to challenge those decisions or know who to complain to.

    Todd Landman 17:14

    Yeah and this gets back to core legal principles of fairness, of justice, of transparency of process and accountability of decision making. And yet all of that is being compromised by, let's say, an algorithm, or as you say, in the book, the computer says no.

    Susie Alegre 18:49

    Completely. And I think one of the key things to bear in mind is that even the drafters of the right to freedom of thought and opinion in the International Covenant on Civil and Political Rights discussed the fact that inferences about what you're thinking, or what your opinions are, can be a violation of the right even if they're incorrect. So when you find an algorithm making inferences about how risky a person you are, whether or not the algorithm is right, it may still be violating your right to keep your thoughts and opinions to yourself. You know, you should only be judged on what you do and what you say, not on what somebody infers about what's going on in your inner life.

    Todd Landman 19:50

    Not on what you might be thinking.

    Susie Alegre 19:52

    Exactly. Absolutely. Absolutely.

    Todd Landman 19:54

    Right now, we've had a couple of guests on previous episodes that I would put broadly speaking in the camp of the 'data for good' camp. And when I read your book, I feel like I'm gonna broadly put you in the camp of 'data for bad'. And that might be an unfair judgement. But is there data for good here? I mean, because, you know, you cite the sort of surveillance capitalism literature, you have some, you know, endorsements from authors in that tradition. But if I were to push you, is there a data for good story that could be told nevertheless?

    Susie Alegre 20:23

    I think there might be in public data. So for example, in the US - and I don't know if they are included in your guests - there's Data for Black Lives. And they've done really interesting work from public data, you know, flagging where there are issues of racial and systemic injustice. So that kind of work, I think, is very important. And there is a distinction between public data and private data, although how you draw that distinction is a really complicated question. But in terms of our personal data, one of the things that I think is important in looking at how to address these issues is setting the lines for the things that you can never do. And what I hope is that if you set down some barriers - some very, very clear lines of what can never, ever be done with data - then you will find that technology, particularly technology related to data, and that includes the use of AI in interpreting and working with data, will develop in a different direction. Because at the moment, the money is in extracting as much personal information as you can out of every single one of us and selling it.

    Todd Landman 21:40

    And the degree of the extraction of that information is both witting and unwitting. So you also make the point in the book that if somebody signs up for a Facebook account, they just hit agree to the terms and conditions. But actually, the time it takes to read the terms and conditions could be two or three days to get through to the fine print. And so people are just saying yes, because they want this particular account, without actually knowing the degree to which they're sharing their personal information. Is that correct?

    Susie Alegre 22:06

    Absolutely. And the other problem with the terms and conditions is that if you don't like them, what exactly are you going to do about it? Particularly if you're looking at terms and conditions to be able to access banking or access the National Health Service - if you don't like the terms and conditions, how exactly are you going to push back? But that point that you've made as well about the consent button - there's also an issue around what are called dark patterns. So the way that technology is designed, and that our online experience is designed, nudges us in certain directions. So if you're asked to agree to the terms and conditions, the easiest thing is to hit the big green button that says I consent. Again, we see it with cookies: often you've got a simple option where you hit I consent, or there's a complicated option where you can manage your cookie settings and go through a couple of different layers in order to decide how much you want to be tracked online. And so that is clearly pushing you, in a time-poor life, to hit the easiest option and just consent.
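    The asymmetry of the consent "dark pattern" Susie describes can be made concrete by simply counting the steps each path requires. The flows below are hypothetical, not taken from any real site.

```python
# Sketch of the asymmetry in a dark-pattern consent dialog:
# accepting everything is one step, while refusing tracking costs
# several. The step lists are illustrative, not from a real site.
CONSENT_FLOWS = {
    "accept_all": ["click 'I consent'"],
    "refuse_tracking": [
        "click 'Manage settings'",
        "open 'Advertising partners' panel",
        "untick each vendor",
        "click 'Confirm choices'",
    ],
}

def clicks_required(choice):
    return len(CONSENT_FLOWS[choice])

print(clicks_required("accept_all"))       # → 1
print(clicks_required("refuse_tracking"))  # → 4
```

    The design choice is the imbalance itself: when refusing costs four times as many steps as consenting, "consent" stops being a neutral measure of preference.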

    Todd Landman 23:16

    I feel that, you know - I read through Flipboard, which is a way of aggregating news sources from around the world by topic. And I sort of follow politics and law and international events, music and various other things. But with every news story I open up, because of GDPR I get a pop-up screen that says accept cookies or manage cookies. And I always say accept, because I want to read the story. But what I'm actually doing is telling the world I've read this story - is that right?

    Susie Alegre 23:43

    Yeah, absolutely. The cookies question is one where, actually - why should we be tracked in all of our activities, all of our interests? And as you say, you know, telling the world that you've read this article is partly telling the world what you're interested in and what you're thinking about, not just that you've read this article in an abstract sense - it's telling the world about your interests. One of the things that is also disturbing, that people often don't realise, is that it's not just what you read - it's even things that you may hover over and not click on that are equally being tracked. And it's not just on the page where you're reading the article. It's about being tracked all around your online activity, being tracked with your phone, being tracked where you are, not just what you're looking at on the phone. The information that's being taken is so granular that I think very few of us realise it, and even if you do realise it, as individuals we can't really stop it.
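    A minimal sketch of how granular behavioural events - including hovers that never become clicks - can be aggregated into an interest profile without any explicit disclosure. The event types and weights here are hypothetical.

```python
# How granular events logged by a tracker can build an interest
# profile. Event types and weights are hypothetical.
from collections import Counter

EVENT_WEIGHTS = {"read": 3, "click": 2, "hover": 1}  # hovers count too

def interest_profile(events):
    """events: list of (event_type, topic) pairs logged by a tracker."""
    profile = Counter()
    for event_type, topic in events:
        profile[topic] += EVENT_WEIGHTS.get(event_type, 0)
    return profile

events = [
    ("read", "politics"), ("hover", "gambling"),
    ("hover", "gambling"), ("click", "music"),
]
print(interest_profile(events).most_common(1))  # → [('politics', 3)]
```

    Note that "gambling" enters the profile purely from hovers - the user never clicked, yet an inference about their inner life has been recorded.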

    Todd Landman 24:52

    And I think for that reason I take a little bit of comfort, because I wasn't targeted by Cambridge Analytica. I probably played some of the games on Facebook - you know, the personality test stuff - but I never got ads, as far as I was concerned, that were being, you know, foisted upon me by the Cambridge Analytica approach. I use that as, let's say, a metaphor. But I know that there was microtargeting based on certain profiles, because there was an attempt to leverage voters who had never voted before, or voters who were predisposed to vote for certain things. But again, it's that unwitting sort of profile that you build by the things that you hover over, or the things that you like, or the things that you at least read and accept that button on cookies. And of course, we now know that that microtargeting actually might have had a, you know, significant impact on the way in which people viewed particular public policy issues.

    Susie Alegre 25:41

    Completely. And I mean, I don't know whether I was or was not targeted by Cambridge Analytica or similar around that time, around 2016/2017. I don't know if you've come across Who Targets Me, which is a plugin that you can put onto your browser to find out, particularly around election times, who is targeting you. And I have to say that when I very briefly joined a political party for a couple of months, I cancelled my membership after a couple of months because I discovered, through this Who Targets Me plugin, that they were targeting me and people in my household. Even though, theoretically, as a member I was already going to vote for them, that information was being used to pollute my online environment, as far as I'm concerned - which was a bit of an own goal, I imagine, for them.

    Todd Landman 26:32

    So that really does bring us to the question of what is to be done. So, you know, I was waiting in the book for, sort of, what's the regulatory answer - and you do give some good practical suggestions on a way forward. Because there is this challenge: we need services, you know - we do need mortgages, we need access to health care, we need public information, we need all the benefits that come from the digital world. But at the same time, we need to protect ourselves against the harms that the digital world can bring to us. So what are the sort of three or four major things that need to happen to mitigate against the worst forms of what you're worried about in the book?

    Susie Alegre 27:10

    Well, one of the difficulties in the book was coming up with those things, if you like - what are the key things that we need to stop - particularly in an atmosphere where we are seeing regulation happening, rapidly trying to play catch up. We've just seen the Digital Services Act being agreed in the European Union, we have the Online Safety Bill on the table in the UK, and in Chile we've seen legislation around neurorights being introduced in the last year. So it's a very fast-paced environment. So I tried to come up with suggestions that go to the heart of it, while recognising the complexity and also recognising that it's in a huge state of flux. I wanted to really highlight the things that I think are at the core of how we've got here, and the core, very obvious things that we should not be doing. The first one of those is surveillance advertising. And that is advertising that is based on granular information, like we've been talking about, about our inner lives - including how we're feeling, potentially, at any single moment - in order to decide what images, what messages we should be delivered. And whether those are political messages, whether that is commercial messaging, whether it's just trying to drag us into gambling when we're having a bad moment online - all of those kinds of things are part of this surveillance advertising ecosystem. And while surveillance advertising isn't the whole problem, I think that surveillance advertising is the oil that is driving this machine forward. If you don't have surveillance advertising, there isn't so much money in gathering all of this information about us - because that information is valuable because it can sell us stuff, whether it's selling us a political candidate or whether it's selling us a particular pair of socks tomorrow. And so surveillance advertising, I think, is the key. And I think banning surveillance advertising would be the single most effective way to start change.
    Another thing that I think could make a real sea change in the way tech develops is recommender algorithms. The things that are being recommended to us, the way that we receive our information - whether that's on Netflix or on news services - potentially very personalised recommendations are a way of distorting how we think and how we see the world, based on information about our emotional states, our psychological vulnerabilities, a whole raft of things. That, I think, is a real vehicle for social control. And so you may want occasionally, or even always, to have somebody suggest what you should watch - when you're feeling tired, you don't want to make a decision yourself and you're happy to just be given whatever it is. But recommender algorithms and that kind of personalisation of information feeds should never, ever be the default. At the moment, for most of us, that is the situation: when we open up our laptops, when we open up social media, when we look at our phones, we're being given a curated, personalised experience without necessarily realising it. So addressing that, and making sure that personalisation is not the automatic choice, would make a really big difference.
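    Susie's suggestion - that a neutral, chronological feed should be the default and a personalised ranking an explicit opt-in - can be sketched as follows. The field names and engagement scores are hypothetical.

```python
# Sketch of "personalisation should never be the default": a neutral
# chronological feed unless the user explicitly opts in. Field names
# and engagement scores are illustrative.
def chronological_feed(items):
    return sorted(items, key=lambda i: i["published"], reverse=True)

def personalised_feed(items, predicted_engagement):
    # Ranks by what the platform predicts will hold your attention.
    return sorted(items, key=lambda i: predicted_engagement[i["id"]], reverse=True)

def build_feed(items, predicted_engagement, personalise=False):
    # Personalisation is an explicit choice, never the default.
    if personalise:
        return personalised_feed(items, predicted_engagement)
    return chronological_feed(items)

items = [
    {"id": "a", "published": 1},
    {"id": "b", "published": 2},
    {"id": "c", "published": 3},
]
engagement = {"a": 0.9, "b": 0.1, "c": 0.5}
print([i["id"] for i in build_feed(items, engagement)])                    # → ['c', 'b', 'a']
print([i["id"] for i in build_feed(items, engagement, personalise=True)])  # → ['a', 'c', 'b']
```

    The two orderings show the point: the same items, but the engagement-ranked feed quietly promotes whatever the model predicts you will dwell on.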

    Todd Landman 30:53

    It's just an amazing set of insights. You've taken us from Socrates to socks here today. And it's been an incredible journey listening to you and so much to think about and so many unresolved issues. And when I listen to you, and I read your book, you know, I feel like I should get off the grid immediately, and put my hood on because I don't want anyone reading my mind and I don't want anyone selling me socks. But for now, Susie, it was just great to have you on this episode of the Rights Track and thanks ever so much.

    Susie Alegre 31:20

    My pleasure. Thank-you so much for having me.

    Christine Garrington 31:23

    Thanks for listening to this episode of The Rights Track, which was presented by Todd Landman and produced by Chris Garrington of Research Podcasts with funding from 3DI. You can find detailed show notes on the website at www.RightsTrack.org. And don't forget to subscribe wherever you listen to your podcasts to access future and earlier episodes.

  • In Episode 5 of Series 7 of The Rights Track, Todd is in conversation with Amrit Dhir, Director of Partnerships at Recidiviz – a team of technologists committed to getting decision makers the data they need to drive better criminal justice outcomes.

    Transcript

    Todd Landman 0:00

    Welcome to the Rights Track podcast, which gets the hard facts about the human rights challenges facing us today. In series seven, we're discussing human rights in a digital world. I'm Todd Landman. In this episode, I'm delighted to be joined by Amrit Dhir. Amrit is the Director of Partnerships at Recidiviz, a team of technologists committed to getting decision makers the data they need to drive better criminal justice outcomes. He has previously spent over a decade at the intersection of technology and new business development, working, for example, at Sidewalk Labs, Google for Startups and Verily. Today, we'll be exploring the practical uses of technology and data in the criminal justice system. So Amrit, it's great to have you on this episode of the Rights Track. Welcome from California.

    Amrit Dhir 0:44

    Thank you so much, I'm really glad to be here.

    Todd Landman 0:46

    It's great for you to join us. And I want to start with a simple question. We had a guest - Sam Gilbert - on our last episode, where we made this distinction between, sort of, data for good and data for bad. And there's a very large sort of argument out there about surveillance capitalism, the misuses of data, you know, behavioural microtargeting and all these sorts of issues. And yet I see that where you're working, at Recidiviz, there's a kind of data for good argument here, around using technology and data to help criminal justice systems and the healthcare sector. So just briefly, could you tell us about this data for good and data for bad distinction?

    Amrit Dhir 1:19

    Yeah, well, as with most things, I think it's difficult to pigeonhole anything into one of those camps - everything, it seems, can be used for good or bad. And so data itself is not one or the other; I think it's about the use. I think that's what Sam was getting at with you as well. With Recidiviz, you know, what we've understood is that data has been collected over a long period of time, especially in the context of the United States and our unfortunate kind of race to mass incarceration from basically the 1970s until about the mid-2010s. We've collected a lot of data along the way, and we're not actually using or understanding that data. And so what we do at Recidiviz is we bring that data together to make it something that can be better understood and better utilised, to help reduce prison populations and to help drive better outcomes. So we're focused on taking data that's been, again, collected over quite a long period of time - and consistently collected - and making it more understandable.

    Todd Landman 2:17

    So this sounds like big, messy, disparate, fragmented data, is that correct?

    Amrit Dhir 2:22

    Most of those things, most of the time. It's definitely fragmented most of the time. It's not always necessarily what we'd call big, because, you know, coming from Google, I think of big in terms of, you know, search-query-type volume. So in corrections it's not necessarily that big, but it is certainly messy, and it is certainly fragmented.

    Todd Landman 2:42

You know, we had a guest on Rights Track some while back, David Fathi from the American Civil Liberties Union. He explained to us the structure of the American prison system - not the justice system itself, but the prison system - with, you know, 50 state prison systems, plus a federal prison system, and a mix of public and private prisons. So it's a mixed picture in terms of jurisdiction, the use of incarceration and, of course, the conditions of incarceration. So what's the sort of data that's being collected that you find useful at Recidiviz?

    Amrit Dhir 3:13

Yeah, I'll actually add a piece to that as well. You're exactly right to say, you know, every one of the 50 states has a different system, and the federal system is itself separate. But then there are also county jails, and those systems are running completely separately from even the states that they're in. So it is messy. And the data also extends, by the way - so we're talking about what we consider the back half of the system. Once someone has already gone to prison, we think of that as the back half, whereas there's a front half of the system as well, which is the courts, your prosecutor and defence attorneys, and up to policing. And so all of those different segments have their different datasets as well. At Recidiviz we're starting at the back half, largely because we think there's a lot more impact to be had there, at least for now. And the data extends to many things. So it can be, first of all, admissions data. When someone comes into a facility, what sentence did that person come in with? Where is that person going to be in the facility - as in, where's that bed? And then, as often happens, there are transfers between prisons and within prisons. That's another set of data. There are programmes that the person may be participating in. Some of these are built with the spirit of rehabilitation and reintegration into society. Those are important, and knowing how they work, and when they work, and if they work is important. And then when someone gets out of prison, that's not the end either. We have a whole infrastructure of supervision. And broadly, those are grouped into two categories - parole and probation. And someone may be back out in their community and still under a degree of supervision that's more than what someone who has not been in prison goes through. They have to check in with their parole officer. They have certain requirements, they have certain restrictions. All of those are data points as well. How are you checking in with your parole officer? Did you have to take a drug test? Did you ask for permission to leave the state? All of those things. And as you can imagine, even just from the list I've given you, which is just a very small percentage of it, all of those are sitting in different data silos and are interacted with by different people within the system, and it gets pretty tricky.
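For readers who want a concrete picture of the silo-joining described here, a minimal sketch follows. Everything in it - person IDs, field names, facility names - is invented for illustration and is not Recidiviz's actual schema.

```python
# Purely illustrative: three separate data "silos" keyed by a shared person
# ID, merged into one unified view. All IDs and field names are made up.

admissions = {  # facility admissions silo
    "P001": {"sentence_years": 4, "facility": "Facility A"},
    "P002": {"sentence_years": 2, "facility": "Facility B"},
}
supervision = {  # parole/probation silo, held by a separate agency
    "P001": {"status": "parole", "last_check_in": "2022-03-01"},
}
programmes = {  # rehabilitation programme silo
    "P001": ["job_training"],
}

def unified_record(person_id):
    """Merge whatever each silo knows about one person into a single view."""
    record = {"person_id": person_id}
    record.update(admissions.get(person_id, {}))
    record.update(supervision.get(person_id, {}))
    record["programmes"] = programmes.get(person_id, [])
    return record
```

In practice each silo belongs to a different agency system with its own identifiers, so most of the real work is entity resolution and cleaning rather than the merge itself.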

    Todd Landman 5:21

    And you collect data on the sort of sentencing? So you know an analysis of that plus demographic makeup of the prison population, time served? And also, the use of the death penalty and or deaths in custody - is that data that you can collect?

    Amrit Dhir 5:37

Yes, so we can do all that. And I'm glad you pointed out racial and demographic data, because that's a big part of what we do and what we highlight - because you may not be surprised to hear that in the US there are pretty severe disparities when it comes to race and ethnicity. And these are things that departments of corrections - those are the executive agencies within each state; we usually call them departments of corrections, although they'll have different names in different states - they have this data, and they want to make better sense of it. Their stakeholders want to understand it better. So generally, these agencies report to the governor, but they're also accountable to the legislature. So there's a degree of sharing that data, or better unpacking that data, that's important. Then we also have - and I would broadly categorise here; we say these kinds of things a lot, where there are broad categorisations and then there are also much more detailed ones - but broadly, you can think of this as public data, and then departments of corrections data. So the public data is what's available anyway - what we can go out there and find without any data sharing agreement with any agency, as these are government agencies where this data is required to be public. And so you'll find researchers and universities and different organisations accessing this data and publishing it or analysing it; we do that also. But we also get data sharing agreements directly with departments of corrections, and help them unpack that as well. So there's a kind of complementary interaction there between the two datasets.

    Todd Landman 7:09

    I understand. And how do you actually reduce incarceration through data analysis? I'm perplexed by that statement you made quite early on when you were talking to us.

    Amrit Dhir 7:18

There's a couple of things here, and I'll categorise them into three broad categories. There are leadership tools, line staff tools, and then public tools. So let me start with public tools, because I think that's more related to what we just talked about in the previous question. The public tools are ones that are available to you and me. And so there are two that you can look on our website and find right now. One is a public dashboard that we call Spotlight. As of the date of this recording, there are two that have been published - one for North Dakota and one for Pennsylvania. I encourage everyone to go check those out. If you just Google, you know, our name Recidiviz and Pennsylvania, you'll see it come up as the first result. And there you can see all the data in an accessible way. So the 'viz' in Recidiviz stands for data visualisation. We worked with the Pennsylvania Department of Corrections to better represent the data that they have, so that the public can see it. And you can see the breakdown - by ethnicity, by district, by sex, by other filters - and really get in there in some detail and see what's happened, also over time. So that's one - that's the public dashboard. That's largely to raise awareness. And it's something that, when you talk to departments of corrections, you learn that they have lots of FOIA requests, which are Freedom of Information Act requests - so requests from media, from researchers, from the public, but also from the legislature. And so that's one thing that we do that just broadens the conversation. Another are what we call policy memos. If you go to our website, or if you just type in Recidiviz.org/policy, these are one-page memos that we have our data scientists put together that assess the impact of a particular administrative or legislative policy proposal.
So imagine that you were looking at Pennsylvania, for example, wanting to make a change to geriatric parole, or wanting to end the criminalization of marijuana. We can then - and we have - gone in there and analysed the data that's publicly available. And sometimes we also access data in collaboration with the DOC. And we can tell you both the impact on the number of basic liberty person years that are returned - how many people will get out of prison earlier or not go to prison at all - as well as how much money the state in these cases will save. And so that's a great way to inform policymakers: to say, hey, this is actually a good policy or a bad policy, because it's going to get people out of prison and it's going to also save you money.

    Todd Landman 9:57

Yeah, the concept - it's like a variable called liberty person years that you use. And then of course, it's almost like an interrupted time series model, where if you get new legislation, you can look at liberty person years before the legislation and after, to judge the degree to which that legislation may or may not have made a difference, right?

    Amrit Dhir 10:16

Exactly right. And I encourage folks to go check some of those memos out - there are probably like 50 on there now. And they're very easy to understand, very easy to access. They're all one page. They're all very beautifully visualised. Because you can take these very, as you said, messy and fractured datasets, and actually come to some pretty simple insights - and I would say simple and actionable. And so that's what we do there. And that was a long description of public data, but I can go into the other two, if you're ready for it.
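For readers, the before/after comparison in that exchange can be reduced to a toy sketch. All numbers below are invented, and the real memos use far more careful methods than a simple comparison of means.

```python
# Toy interrupted time-series comparison: average monthly prison admissions
# before vs. after a hypothetical policy change (all numbers invented).

monthly_admissions = [100, 104, 98, 102,   # four months before the policy
                      81, 79, 83, 77]      # four months after
policy_month = 4                           # index where the policy takes effect
avg_sentence_years = 2.5                   # hypothetical average sentence length

before = monthly_admissions[:policy_month]
after = monthly_admissions[policy_month:]
avoided_per_month = sum(before) / len(before) - sum(after) / len(after)

# Each avoided admission "returns" a would-be sentence as liberty,
# which is roughly the intuition behind liberty person years.
liberty_person_years = avoided_per_month * len(after) * avg_sentence_years
```

With these made-up figures, roughly 21 admissions a month are avoided, which over four months and a 2.5-year average sentence amounts to about 210 liberty person years.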

    Todd Landman 10:43

    Yes, please.

    Amrit Dhir 10:44

Okay. So working backwards, we'll go to line staff tools - line staff meaning people who are working within corrections or on supervision. And let me take the example of supervision first, because one thing that's interesting, and that I actually learned only while at Recidiviz, is that half of prison admissions in the US every year come from supervision - meaning people who are getting their parole or probation revoked and are going back to prison. That's half of the admissions we get every year. And that's a huge number.

    Todd Landman 11:15

    Wow.

    Amrit Dhir 11:15

And so this - you can think of this as the back end of the back end; it's the very last piece. And so when we at Recidiviz were kind of assessing where we should start, that seemed like the right place to do so, because the impact was just so great. Now, put yourself in the shoes of a parole officer. These folks have pretty difficult jobs, in that they often have, you know, up to 100 and sometimes more - we've seen up to 120 - people that they are, I'll use the verb, 'serving' as a parole officer. So the idea is you've got people that have been returned to the community - they've been in prison, they now are trying to get jobs, they're trying to get job training, they're trying to reintegrate into their communities - and the parole officer is there to help them do that, and keep track of how they're doing. Now, that's one thing to do if you've got 20 people you want to keep track of and help and connect to the right resources, but if you've got 100, and you're supposed to meet with them every month, it becomes impractical. And that ends up meaning sometimes that parole officers aren't doing as good a job as they'd like to do - because it's just too hard, just too much to manage.

    Todd Landman 12:22

    You need a structured database approach.

    Amrit Dhir 12:24

Exactly. So that's where data can be very useful, because we can automate a lot of what a parole officer needs to do. And rather than having to check - you know, we've heard up to 12 different datasets - to figure out: what programmes do the people I'm serving have available to them? When do I know if I need to do a home visit? Where do I find a list of employers that I can send them to? Where are housing options for them? All of these are in different places, but we at Recidiviz bring them all together and give them an easy-to-use tool, so that we can actually serve them even, you know, on their smartphones, in an app - to show them, hey, did you know that this person is actually eligible to be released from parole if they just upload a pay stub? And hey, do you want to just take a photo of a pay stub with your phone, and we can do it for you? I mean, how much easier that is than having to go through all 100, figure out who's eligible based on your own recall or some other antiquated system, and kind of struggle to try to help people. We can help you do that. And that's a big thing that we've done.

    Todd Landman 13:22

    I mean it's almost like an E-portfolio approach that there's this way to archive parolees meeting certain milestones and conditions. And it makes the management of those cases so much more straightforward. Whilst there's also a record of that management that makes it easier for the parole officer to serve the people that they are serving.

    Amrit Dhir 13:42

Exactly. You've got it exactly right. And by the way, there's, you know, a degree of nudging that can be done in this as well - if you're familiar with, like, Cass Sunstein and others' behavioural psychology. Instead of saying, hey, this person needs a drug test, and having that be the first thing that you prioritise, you can say, hey, this person needs help finding a job - and here are some resources, here are some employers in the area that we know employ people who are formerly incarcerated. It's a great way to not only automate and make the life of the parole officer easier and better, but also to kind of encourage the better behaviours within those communities.

    Todd Landman 14:16

    Now that makes sense. So what's the third channel then?

    Amrit Dhir 14:18

Ahhh, the third one is leadership tools. And this is for the directors and their deputies, the most senior people in a department of corrections. And actually, what we're seeing now is that a lot of the people who are coming in today and sitting in these roles are reformers. They believe that the size of our criminal justice system in the United States is just too large, and they are motivated to improve outcomes. And they're focusing on things like recidivism, which is a term for people coming back to prison after being released - and that's a number you want to have low, naturally. But what happens today - historically too - is that these recidivism reports will come out maybe every three years. So if you're a director, by the time they come out, they're almost three years old. So you're almost on a six-year timeline, and you want to know, hey, I instituted this new reform, this new programme, I want to know if it's been successful - you won't know until a couple of years out whether it worked. And so what we do instead is give you real-time data. We can tell you what's happening on your team and in your agency on a real-time basis, and also project out, based on what we're seeing, with some meaningful population projections as well. So that's helpful.

    Todd Landman 14:34

    That's fascinating. And let me ask you just another technical question. So when people are released from prison, is it typical for them to also have a sort of GPS tag on their leg for a certain period of time? And does that form any of the data that you look at?

    Amrit Dhir 15:52

So, it depends. It's a very good question, and it's one of the more controversial topics today in this space. Especially in the reform movement, there's a concern that we may be heading from mass incarceration to mass supervision - that people will be monitored and supervised within their communities. And I think that is a very meaningful concern that we need to be careful of, because we don't want that to happen. But to broadly answer your question about the state of this today: it depends on where you are - it depends on the county, it depends on the state, it depends on all those things - in terms of whether you are wearing a device that electronically monitors you. You know, we don't track that ourselves; that's not something that we do or want to do. Our approach is to help people get off of supervision and get into programmes and other kinds of initiatives that help them on their way.

    Todd Landman 16:43

Excellent. So this discussion really opens up into, you know, the bad side of the question - I guess, you know, we just have to go into this with our eyes open. I suspect that you're triangulating a lot of data; you're providing that in real time on dashboards; a lot of it's in the public domain. What are the risks around this? What are the pitfalls? What's the risk of re-identification? What's the risk of, you know, lapsing into kind of credit scoring philosophies? And, as you said about the tags, there's worry about that kind of, you know, e-surveillance and e-carceration. Equally, someone could reverse engineer some of your data and actually profile people. So, what's the downside of this approach?

    Amrit Dhir 17:21

Yeah, that was a great list. So there's certainly a concern of bias entering any analysis of a dataset, and we are very careful about that. So one thing to note is that everything that we do is open source. So it's open to the technology community to take a look at what's kind of under the hood. And that's important, because we do want to make sure that we are not only participating in and contributing to the broader ecosystem - in this case, the tech and criminal justice ecosystem - but that we're also held accountable to it. So that's the first thing that we do. We also are very mindful and transparent about our data ethics policies, and how we handle those kinds of questions and sometimes ambiguities. So if you look at, for example, the Spotlight dashboard that I mentioned, which you'd find for Pennsylvania and North Dakota, you will see in the methodology that we explain what happens when there's a question. So for example, if someone puts down three different ethnicities, how do we manage that in a data visualisation that just shows them as one? Our approach there is transparency and engagement.

    Todd Landman 19:31

    Have you done any links with the ACLU on this? Because they're quite interested in prison conditions. They're interested in incarceration, sentencing, etc. Do you do any kind of briefing with the ACLU?

    Amrit Dhir 20:16

Yeah, so two things actually on that - I will take them in reverse order. So first of all, we do work with the ACLU. If you look at our website, on the policy page - which, again, has those one-page memos - the ACLU has requested a number of those. And there are naturally different chapters of the ACLU in different states, in different parts of the country, and we work with different stakeholders within the ACLU as well on those. The other piece, though, goes back to what you said about three strikes. There's another piece of that I think people may not be as familiar with - I certainly wasn't - which is this issue of technical revocations. So if you're on supervision, like I said, half of prison admissions every year are from revocation of your supervision, meaning you're going back to prison from parole or probation. But half of those - so a quarter of all admissions every year - are from technical revocations. And those are when someone breaks a rule that is not a law for the rest of us. Right? So it's not that they stole something, it's not that they got caught breaking a law - it's that they broke a rule of their parole. And sometimes these are ones that you and I would feel horrified to learn of. So, you know, we've got examples of people, for example, going to an open mic night where there was alcohol present, and that person wasn't allowed to be around alcohol. Being in the wrong county. Being out past curfew. All of these things - and, you know, there are anecdotes all over the place - of the kinds of things that send people back to prison that we as a society would not tolerate. And those are also some of what we're reducing.

    Todd Landman 21:49

That's amazing - that distinction to draw between, you know, breaking a rule and breaking the actual law. I guess the rules follow from the law, but I get your point in terms of, you know, how would somebody know if they crossed the county line, particularly if they're in an area they don't know well. So this has been a fascinating exploration of the ways in which you have triangulated datasets, made them more visible, put them into real time. And I have to reflect on what you said - I mean, I grew up in Harrisburg, Pennsylvania, so I'm going to immediately read all your Pennsylvania data. I actually grew up near a prison in Camp Hill, Pennsylvania, you know, so it'd be interesting to see how things have moved on from the time that I lived there many moons ago - I won't tell you how long ago that was. But this is a really good conversation for us to have around some of the ways in which different types of data can be leveraged for good, and also some of the challenges of that, or the misuse of that information - as well as the sorts of things that you don't collect, you know, the fact that you don't collect data on these tags, and that that varies, of course. The variation you see in terms of the population that you're collecting data on comes from the fragmentation of the US prison system and the sort of federal system that the US is structured in - but also data that no one really brought together in one place before. And I think that when we hear this data for good argument, we hear a lot of people saying, we're actually bringing together datasets that haven't been brought together before, in order to derive insights from those data and do something that is for good and brings about positive social change as a result. So I just think this tour that you've given us today is absolutely fantastic. And on behalf of The Rights Track, thanks so much for being on this episode with us today.

    Amrit Dhir 23:25

Oh, thank you for having me. It's been fun. Thank you.

    Christine Garrington 23:29

Thanks for listening to this episode of The Rights Track, which was presented by Todd Landman and produced by Chris Garrington of Research Podcasts with funding from 3DI. You can find detailed show notes on the website at www.RightsTrack.org. And don't forget to subscribe wherever you listen to your podcasts to access future and earlier episodes.

  • In Episode 4 of Series 7 of The Rights Track, Todd is in conversation with Sam Gilbert, an entrepreneur and affiliated researcher at the Bennett Institute for Public Policy at the University of Cambridge. Sam works on the intersection of politics and technology. His recent book – Good Data: An Optimist’s Guide to Our Future – explores the different ways data helps us, suggesting that “the data revolution could be the best thing that ever happened to us”.

    Transcript

    Todd Landman 0:01

    Welcome to The Rights Track podcast which gets the hard facts about the human rights challenges facing us today. In Series 7, we're discussing human rights in a digital world. I'm Todd Landman, in the fourth episode of this series, I'm delighted to be joined by Sam Gilbert. Sam is an entrepreneur and affiliated researcher at the Bennett Institute for Public Policy at the University of Cambridge, working on the intersection of politics and technology. His recent book, Good Data: An Optimist's Guide to Our Future explores the different ways data helps us suggesting the data revolution could be the best thing that ever happened to us. And today, we're asking him, what makes data good? So Sam, welcome to this episode of The Rights Track.

    Sam Gilbert 0:41

    Todd thanks so much for having me on.

    Todd Landman 0:44

    So I want to start really with the book around Good Data. And I'm going to start I suppose, with the negative perception first, and then you can make the argument for a more optimistic assessment. And this is this opening set of passages you have in the book around surveillance capitalism. Could you explain to us what surveillance capitalism is and what it means?

    Sam Gilbert 1:01

Sure. So surveillance capitalism is a concept that's been popularised by the Harvard Business School professor, Shoshana Zuboff. And essentially, it's a critique of the power that big tech companies like Google and Facebook have. And what it says is that that power is based on data about us that they accumulate as we live our lives online and, by doing that, produce data, which they collect, and analyse, and then sell to advertisers. And for proponents of surveillance capitalism theory, there's something sort of fundamentally illegitimate about that - in terms of the way that it, as they would see it, appropriates data from individuals for private gain on the part of tech companies. I think they would also say that it infringes individuals' rights in a more fundamental way, by subjecting them to surveillance. So that, I would say, is surveillance capitalism in a nutshell.

    Todd Landman 2:07

    Okay. So to give you a concrete example, if I'm searching for a flannel shirt from Cotton Trader, on Google, the next day, I open up my Facebook and I start to see ads for Cotton Trader, on my Facebook feed, or if I go on to CNN, suddenly I see an ad for another product that I might have been searching for on Google. Is that the sort of thing that he's talking about in this concept?

    Sam Gilbert 2:29

Yes, that's certainly one dimension to it. So that example that you just gave is an example of something that's called behavioural retargeting. So this is when data about things you've searched for, or places you've visited on the internet, are used to remind you about products or services that you've browsed. So I guess this is probably the most straightforward type of what surveillance capitalism theorists would call surveillance advertising.

    Todd Landman 2:57

    Yeah, I understand that, Sam, but you know when I'm internally in Amazon searching for things. And they say you bought this other people who bought this might like this, have you thought about, you know, getting this as well. But this is actually between platforms. This is, you know, might do a Google search one day. And then on Facebook or another platform, I see that same product being suggested to me. So how did, how did the data cross platforms? Are they selling data to each other? Is that how that works?

    Sam Gilbert 3:22

    So there's a variety of different technical mechanisms. So without wanting to get too much into the jargon of the ad tech world, there are all kinds of platforms, which put together data from different sources. And then in a programmatic or automated way, allow advertisers the opportunity to bid in an auction for the right to target people who the data suggests are interested in particular products. So it's quite a kind of complex ecosystem. I think maybe one of the things that gets lost a little bit in the discussion is some of the differences between the ways in which big tech companies like Facebook and Google and Amazon use data inside their own platforms, and the ways in which data flows out from those platforms and into the wider digital ecosystem. I guess maybe just to add one more thing about that. I think, probably many people would have a hard time thinking of something as straightforward as being retargeted with a product that they've already browsed for, they wouldn't necessarily see that as surveillance, or see that as being particularly problematic. I think what gets a bit more controversial, is where this enormous volume of data can have machine learning algorithms applied to it, in order to make predictions about products or services that people might be interested in as consumers that they themselves haven't even really considered. I think that's where critics of what they would call surveillance capitalism have a bigger problem with what's going on.

    Todd Landman 4:58

    No I understand that's, that's a great great explanation. Thank you. And I guess just to round out this set of questions, really then it sounds to me like there's a tendency for accumulated value and expenditure here, that is really creating monopolies and cartels. To what degree is the language of monopoly and cartel being used? Because these are, you know, we rattle off the main platforms we use, but we use those because they have become so very big. And, you know, being a new platform, how does a new platform cut into that ecosystem? Because it feels like it's dominated by some really big players.

    Sam Gilbert 5:32

Yes. So I think this is a very important and quite complicated area. So it is certainly the case that a lot of Silicon Valley tech companies have deliberately pursued a strategy of trying to gain a monopoly. In fact, it might even be said that that's sort of inherent to the venture capital driven start-up business model - to try and dominate a particular market space. But I suppose the sense in which some of these companies - let's take Facebook as an example - are monopolies is really not so related to the way in which they monetize data or to their business model. So Facebook might reasonably be said to be a monopolist of encrypted messaging, because literally billions of people use Facebook's platform to communicate with each other. But it isn't really a monopolist of advertising space, because there are so many other alternatives available to advertisers who want to promote their products. I guess another dimension to this is the fact that although there are unquestionably concentrations of power with the big tech companies, they also provide somewhat of a useful service to the wider market, in that they allow smaller businesses to acquire customers much more effectively. So that actually militates against monopoly, because now, in the current digital-advertising-powered world, not every business has to be so big and so rich in terms of capital that it can afford to do things like TV advertising. The platforms that Facebook and Google provide are also really helpful to small businesses that want to grow and compete with bigger players.

    Todd Landman 7:15

    Yeah, now I hear you shifting into the positive turn here. So I'm going to push you on this. So what is good data? And why are you an optimist about the good data elements to the work you've been doing?

    Sam Gilbert 7:27

Well, for me, when I talk about good data, what I'm really talking about is the positive public and social potential of data. And that really comes from my own professional experience. Because although at the moment I spend most of my time researching and writing about these issues of data and digital technology, actually my background is in the commercial sector. So I spent 18 years working in product and strategy and marketing roles, particularly in financial services - at the data company Experian, and also in a venture-backed FinTech business called Bought By Many. And I learnt a lot about the ways in which data can be used to make businesses successful. And I learned a lot of techniques that, in general, at the moment, are only really put to use to achieve quite banal goals - so, for example, to sell people more trainers, or to encourage them to buy more insurance products. And so one of the things that I'm really interested in is how some of those techniques and technologies can move across from the commercial sector into the public sector and the third sector, and be put to work in ways that are more socially beneficial. So maybe just to give one example: a type of data that I think contains huge potential for public good is search data. So this is the dataset that is produced by all of us using Google and Bing and other search engines on a daily basis. Now, ordinarily, when this data is used, it is to do banal things like target shoes more effectively. But there is also this emerging discipline called infodemiology, where academic researchers use search data in response to public health challenges. So one great example of that at the moment has been work by Bill Lampos at University College London and his team, where they've built a predictive model around COVID symptoms using search data. And that model actually predicts new outbreaks 17 days faster than conventional modes of epidemiological surveillance.
So that's just one example of the sort of good I believe data can bring.
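The early-warning idea can be illustrated for readers with a toy lead-lag check. The numbers below are synthetic and this is emphatically not the UCL team's actual model: the intuition is simply that if symptom searches rise some days before confirmed cases do, the lag with the strongest correlation tells you how much warning the search signal gives.

```python
# Toy "infodemiology" sketch: find the lag (in days) at which symptom-search
# volume best correlates with later confirmed cases. All data are synthetic.

searches = [5, 9, 20, 44, 80, 95, 90, 70]   # daily symptom-search volume
cases    = [1, 1,  4, 10, 22, 45, 78, 94]   # daily confirmed cases

def correlation(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def best_lead(searches, cases, max_lag=4):
    """Lag at which the search series best predicts the later case series."""
    scores = {lag: correlation(searches[:len(searches) - lag], cases[lag:])
              for lag in range(1, max_lag + 1)}
    return max(scores, key=scores.get)
```

With these synthetic series the search signal leads cases by two days; real models of this kind use many query terms, regularised regression, and autoregressive case data rather than a single correlation.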

    Todd Landman 9:50

So it's like a really interesting example of an early warning system - and it could work not only for public health emergencies, but other emerging emergencies, whether they be conflict, or natural disasters, or any topic that people are searching for. Is that correct?

    Sam Gilbert 10:05

    Yes, that's right. I mean, it's not just in the public health field that researchers have used this. You've just put me in mind, actually, Todd, of a really interesting paper written by some scholars in Japan, who were looking at citizens' decision-making in response to natural disaster warnings - so floods and earthquakes, and the migration patterns that follow; that, I guess, would be the way of summarising it. Those are things that can also be detected using search data.

    Todd Landman 10:31

    Well, that's absolutely fascinating. So let's go back to public health then. I was just reading a new book out called Pandemocracy in Europe: Power, Parliaments and People in Times of COVID-19, edited by Matthias Kettemann and Konrad Lachmayer. And there's a really fascinating chapter in this book that transcends the nation state, if you will. It talks about platforms and pandemics. And one section of the chapter starts to analyse Facebook, Twitter, YouTube and Telegram on the degree to which they were able to control and/or filter information versus disinformation or misinformation. And just the scale of some of this stuff is quite fascinating. So you know, Facebook has 2.7 billion daily users - it's probably a bigger number now - and 22.3% of their investigated Facebook posts contained misinformation about COVID-19. And they found that the scale of misinformation was so large that they had to move to AI solutions, with some human supervision of those AI solutions. But what's your take on the role of these big companies - we've been talking about Facebook, Twitter, YouTube, Telegram - and their ability to control the narrative and at least provide safe sources of information, let's say in times of COVID, though there may be other issues of public interest where they have a role to play?

    Sam Gilbert 11:57

    Yes, I think this is such an important question. It's very interesting that you use the phrase 'control the narrative', because of course that is something that big tech companies have traditionally been extremely reluctant to do. And one of the things I explore a bit in my book is the extent to which this can really be traced back to some unexamined normative assumptions on the part of tech company executives, where they think that American norms of free speech, and the free speech protections of the First Amendment, are sort of universal laws that are applicable everywhere, rather than things which are culturally and historically contingent. And for that reason, they have been extremely reluctant to do any controlling of the narrative, and have tended to champion free speech over the alternative course of action that they might take, which is to be much more proactive in combating harms, including but not limited to misinformation. I think this probably also speaks to another problem that I'm very interested in in the book, which is what we are concerned about when we say we're concerned about big tech companies' power. Because I think ordinarily the discussion about big tech companies' power tends to focus on their concentrations of market power, or, in the case of surveillance capitalism theory, it concentrates on the theoretical power that algorithms have over individuals and their decision making. And what gets lost a bit in that is the extent to which tech companies, by providing these platforms and these technologies, actually empower other people to do things that weren't possible before. So in some work I've been doing with Amanda Greene, who's a philosopher at University College London, we've been thinking about that concept of empowering power, as we call it. And as far as we're concerned, that's actually a much more morally concerning aspect of the power of big tech companies than their market position.

    Todd Landman 14:11

    Yeah. So I like it that you cite the First Amendment of the American Constitution, but interestingly, the international framework for the protection and promotion of human rights also, you know, has very strong articles around protection of free speech, free assembly and free association, which of course the tech companies will be interested in looking at and reviewing. But what it raises, I believe, is really a question around the public regulation of private actors. Because these are private actors; they're not subjected to international human rights law in the way that states are. And yet they're having an impact on mass publics. They're having an impact on politics. They're having an impact on debate. So perhaps I misspoke by saying control the narrative. What I'm really interested in is that we seem to have lost mediation. We have unmediated access to information. And it seems to me that it's incumbent upon these organisations to provide some kind of mediation of content, because not all things are true just because they're said. So it gets back to that question: where's the boundary for them? When will they step in and say this is actually causing harm? Is there some sort of big tech Hippocratic oath about doing no harm that needs to be developed, so that there is at least some kind of attempt to draw a boundary around what is shared and what is not shared?

    Sam Gilbert 15:34

    Yes, so the idea of a Hippocratic oath for tech workers is definitely out there; the writer who has explored it more than I have is James Williams in his book Stand Out Of Our Light. I think that that is certainly something that would help. I also think that it is beneficial that at the moment we're having more discussion about data ethics and the ethics of artificial intelligence, and that that is permeating some of the tech companies. So I think more ethical reflection on the part of tech executives and tech workers is to be welcomed. I don't think that's sufficient, and I do think that it's important that we have stronger regulation of the tech sector. And I suppose from my perspective, the thing that needs to be regulated, much more than anything to do with how data is collected or how data is used in advertising, is what is sometimes referred to as online safety, or other times as online harms - that is, anything that gives rise to individuals being at risk of being harmed as they live their lives online. There's actually legislation coming through in the UK at the moment called the Online Safety Bill, which is far from perfect legislation, but in my opinion it's directionally right, because it is more concerned with preventing harm, and giving tech companies a responsibility for playing their part in that, than it is concerned with trying to regulate data or advertising.

    Todd Landman 17:13

    Yeah, so it's really the results of activity that it's trying to address, rather than the data that drives the activity, if I could put it that way. So if we think about this do-no-harm element - the mediating function that's required at least to get trusted information available to users - I wonder if we could pivot a little bit to the current crisis in Ukraine. Because I've noticed on social media platforms a number of sites have popped up saying we're a trusted source for reporting on the current conflict, and they get a sort of kite mark or a tick for that. I've also seen users saying, don't believe everything you see being tweeted out from Ukraine. So where does this take us, not only with COVID, but with something as real-time, active and horrific as conflict in a country - whether we talk about Ukraine or other conflicts - and the sharing of information on social media platforms?

    Sam Gilbert 18:08

    Yes, well, this is a very difficult question, and unfortunately I don't have the answer for you today. I guess what I would point to is something you touched on there, Todd, which is the idea of mediation. We have been through this period with social media where the organisations, the institutions that we traditionally relied on to tell us what was true and what was false, and to sort fact from fiction, have been disintermediated. Or, in some cases, they have found themselves trying to compete in this very different information environment, which is much more dynamic, in a way that actually ends up undermining the journalistic quality that we would otherwise expect from them. So this is not a very satisfactory answer, because I don't know what can be done about it, except that it is a very serious problem. I suppose just to make one final point that I've been reminded of, reading stories on this topic in relation to the Ukraine crisis, is the duality of this power that tech companies and technology have given to ordinary users in the era of social media over the last 15 years or so. If we were to rewind the clock to 2010 or 2011, the role of Twitter and Facebook and other technology platforms in enabling protest and resistance against repressive regimes was being celebrated. If we then roll forwards a few years and look at a terrible case like the ethnic cleansing of the Rohingya people in Myanmar, we are at the complete opposite end of the spectrum, where the empowerment of users with technology has had disastrous consequences. And I guess if we then roll forward again to the Ukraine crisis, it's still not really clear whether the technology is having a beneficial or detrimental effect. So this is really just to say, once again, when we think about the power of tech companies, these are the questions I think we need to be grappling with, rather than questions to do with data.

    Todd Landman 20:31

    Sure. There was a great book years ago called The Logic of Connective Action, and it was really looking at the way in which these emerging platforms - because the book was published some years ago - were lowering collective action costs, whether for protest movements or, you know, anti-authoritarian movements, etc. We did a piece of work years ago with someone from the German Development Institute on the role of Facebook in opposition to the Ben Ali regime in Tunisia. Facebook allowed people to make a judgement as to whether they should go to a protest or not, based on the number of people who said they were going, and so it lowered the cost of participation, or at least the calculated cost of participating in those things. But as you say, we're now seeing this technology being used on a daily basis. I watch drone footage every day of tanks being blown up, of buildings being destroyed. And part of my mind thinks, is this real, what I'm watching? And then part of my mind thinks about, what's the impact of this? Does this have an impact on the morale of the people involved in the conflict? Does it change the narrative, if you will, about the progress, or lack of progress, in the conflict? And then, of course, there's the multiple reporting of whether there are going to be peace talks, humanitarian corridors and all this other stuff. So it does raise very serious questions about authenticity, veracity, and the ways in which technology could verify what we're seeing. And of course, you have time-date stamps, metadata and other things that tell you that something was definitely geolocated. So are these companies doing that kind of work? Are they going in and digging into the metadata? I noticed that Maxar Technologies, for example, is being used for its satellite data extensively, looking at the build-up of forces and the movement of troops and that sort of thing. 
But again, that's a private company making things available in the public sphere for people to reach judgements on and media companies to use. It's an incredible ecosystem of information, and it seems a bit like a Wild West to me, in terms of what we believe, what we don't believe, and the uses that can be made of this imagery and commentary.

    Sam Gilbert 22:32

    Yes, so there is, as in all things, this super-proliferation of data, and what is still missing is the intermediation layer to both make sense of that and also tell stories around it that have some kind of journalistic integrity. I mean, what you put me in mind of there, Todd, was the open source intelligence community, and some of the work that organisations, including human rights organisations, do to leverage these different data sources to validate and investigate human rights abuses taking place in different parts of the world. To me, this seems like very important work, but also work that is rather underfunded. I might make the same comment about fact-checking organisations, which seem to do very important work in the context of disinformation, but don't seem to be resourced in the way that perhaps they should be. Maybe just one final comment on this topic would relate to the media and social media literacy of individuals. And I wonder whether that is something that is maybe going to help us in trying to get out of this impasse. Because I think over time people are becoming more aware that information they see on the internet may not be reliable. And while I think there's still a tendency for people to get caught up in the moment, and retweet or otherwise amplify these types of messages, I think that some of the small changes the technology companies have made to encourage people to be more mindful when they're engaging with and amplifying content might just help build on top of that increase in media literacy, and take us to a slightly better place in the future.

    Todd Landman 24:26

    Yeah, I mean, the whole thing around media literacy is really important. And I also want to make a small plea for data literacy - just understanding and appreciating what data and statistics can tell us, without having to be, you know, an absolute epidemiologist, statistician or quantitative analyst. But I wanted to hark back to your idea around human rights investigations. We will have a future episode with a group that does just that, and it's about maintaining the chain of evidence, corroborating evidence, and using digital evidence in ways that help human rights investigations. And, you know, if and when this conflict in Ukraine finishes, there will be some sort of human rights investigatory process. We're not sure which body is going to do that yet, because there have been calls for, you know, a Nuremberg-style trial, there have been calls for the ICC to be involved, and there are many other stakeholders involved. But that digital evidence is going to be very much part of the record. But I wonder just to - yeah, go ahead, Sam.

    Sam Gilbert 25:26

    Sorry, I'm just going to add one thing on that, which I touched on a little bit in my book. I think there's a real risk, actually, that open-source intelligence investigations become collateral damage in the tech companies' pivot towards privacy. So what some investigators are finding is that material they rely on to be able to do their investigations is being unilaterally removed by tech companies - either because it's YouTube and they don't want to be accused of promoting terrorist content, or because it's Google or Facebook and they don't want to be accused of infringing individuals' privacy. So while this is not straightforward, I just think it's worth bearing in mind that sometimes pushing very hard for values like data privacy can have these unintended consequences in terms of open source intelligence.

    Todd Landman 26:24

    Yes, it's an age-old chestnut about the unintended consequences of purposive social action - I think it was Robert Merton who said that at one point. But I guess, in closing, I have a final question for you, because you are an optimist. You're a data optimist, and you've written a book called Good Data. So what is there to be optimistic about for the future?

    Sam Gilbert 26:42

    Well, I suppose I should say something about what type of optimist I am first. So to do that, I'll probably reach for Paul Romer's distinction between blind optimism and conditional optimism. Blind optimism is the optimism of a child hoping that her parents are going to build her a tree house. Conditional optimism is the optimism of a child who thinks, well, if I can get the tools, and if I can get a few friends together, and if we can find the right tree, I think we can build a really incredible tree house together. So I'm very much in the second camp, the camp of conditional optimism. And I guess the basis for that probably goes to some of the things we've touched on already, where I just see enormous amounts of untapped potential in using data in ways that are socially useful. So perhaps just to bring in one more example of that: Opportunity Insights, the group at Harvard run by Raj Chetty, has had some incredibly useful insights into social mobility and economic inequality in America by using de-identified tax record data to understand, over a long period of time, the differences in people's incomes. And I really think that that type of work is just the tip of the iceberg when it comes to this enormous proliferation of data that is out there. So I think if the data can be made available to researchers, and also to private organisations, in a way that, as far as possible, mitigates the risks that do exist to people's privacy, there's no knowing quite how many scientific breakthroughs or advances in terms of human and social understanding we might be able to get to.

    Todd Landman 28:52

    Amazing. And I guess, to your conditional optimism, I would add my own category, which is cautious optimism - that's what I am. But talking to you today really does provide deep insight, helping us to understand the many different and complex issues here. And that last point you made about, you know, de-identified data used for good purposes - shining a light on things that are characterising our society, with a view to being able to do something about them. You see things that you wouldn't see before, and that's one of the virtues of good data analysis: you end up revealing macro patterns and inconsistencies and inequalities and other things that can then feed into the policymaking process to try to make the world a better place - and human rights are no exception to that agenda. So for now, Sam, I just want to thank you so much for coming on to this episode and sharing all these incredible insights and the work that you've done. So thank you.

    Chris Garrington 29:49

    Thanks for listening to this episode of The Rights Track, which was presented by Todd Landman and produced by Chris Garrington of Research Podcasts with funding from 3DI. You can find a detailed transcript on the website at www.RightsTrack.org. And don't forget to subscribe wherever you listen to your podcasts to access future and earlier episodes.

    Further reading and resources:

    Sam Gilbert (2021) Good Data: An Optimist's Guide to Our Digital Future.

    Bill Lampos' COVID infodemiology: Lampos, V., Majumder, M.S., Yom-Tov, E. et al. (2021) "Tracking COVID-19 using online search".

    Infodemiology Japan/natural disasters paper: [1906.07770] Predicting Evacuation Decisions using Representations of Individuals' Pre-Disaster Web Search Behavior (arxiv.org).

    On "empowering power": Greene, Amanda and Gilbert, Samuel J. (2021) "More Data, More Power? Towards a Theory of Digital Legitimacy".

    On the Hippocratic oath for tech workers: James Williams (2018) Stand Out of Our Light: Freedom and Resistance in the Attention Economy.

    Matthias C. Kettemann and Konrad Lachmayer (eds.) (2022) Pandemocracy in Europe: Power, Parliaments and People in Times of COVID-19.

    W. Lance Bennett and Alexandra Segerberg (2013) The Logic of Connective Action: Digital Media and the Personalization of Contentious Politics.
  • In Episode 3 of Series 7 of The Rights Track, Professor Diane Coyle, Bennett Professor of Public Policy at the University of Cambridge and co-director of the Bennett Institute, joins Todd to discuss the dizzying digital changes of the last 25 years, how they have disrupted the economy and impacted our lives.

    Transcript

    Todd Landman 0:01

    Welcome to The Rights Track podcast, which gets the hard facts about the human rights challenges facing us today. In Series 7, we're discussing human rights in a digital world. I'm Todd Landman, and in our third episode of the series, I'm delighted to be joined by Professor Diane Coyle. Diane is Bennett Professor of Public Policy at the University of Cambridge and co-directs the Bennett Institute, where she leads research under the themes of progress and productivity. Her most recent book, Cogs and Monsters, explores the problems and opportunities for economics today, in light of the dizzying changes in digital technology, big data, machine learning, and artificial intelligence. And today we're asking her: why is it that digital is so very disruptive? So welcome, Diane, it's wonderful to have you here on this episode of The Rights Track.

    Diane Coyle 0:49

    It's a pleasure, I'm flattered to be invited.

    Todd Landman 0:52

    Well, it's great. And you know, I was reading Cogs and Monsters over the holidays and enjoyed very much your dissection of, you know, the state of the discipline of economics and where it's going, some of its challenges, etc. But I was really taken by the section on digital technology and digital transformation. And you reference your 1997 book, The Weightless World - and of course, that was 25 years ago, the time between the publication of The Weightless World and Cogs and Monsters. And you know, factoring in Moore's Law of technological change, a lot has happened over these 25 years. So I wonder if I could just start by asking you: what are the sort of broad-brush, absolutely huge changes in this area? And what has been their impact on economics?

    Diane Coyle 1:34

    Well, where to start? As you say, it's 25 years since I first got interested in digital technology, and I was always sure it was going to be transformative. But for a lot of economists, that was not obvious for quite a while. And I remember talking to one very senior figure in the UK profession who said, well, this digital stuff, it's going to reduce transactions costs a little bit, but we know how to handle transactions costs in our models, so what's so special about this? And I suppose there have been inflection points where small changes, or what might seem to be small changes, bring about very large consequences. One of those was the switch from dial-up internet to broadband, and simply the loss of friction in the sort of *dial-up joining sound* when the modem did the handshake, for those who are old enough to remember. It made a big difference in the kind of services and opportunities that people thought they were able to put online, and in expanding the audience for them. And then the other was 2007 and the smartphone: Steve Jobs at that iconic Apple press conference, holding up the first iPhone, which converged with the arrival of 3G, so that data transmission became cheaper and more possible at volume and speed, and also with the kind of market design ideas in economics that enabled the creation of apps, and in particular matching apps and digital platforms. And if you look at what's happened since 2007, both in terms of individual behaviour and economic transactions - the fact that we spend a whole day a week, a whole 24 hours a week (I think it's 28 now), online, and the new kinds of business models and the way that markets have restructured - it has been absolutely extraordinary. And I think in many areas, we're only just beginning to think through what the consequences are, and what the implications are for politics and policy and regulatory choices.

    Todd Landman 3:38

    Thank you for that. And you know, that rapid expansion, just in terms of volume, scale and speed, has fundamentally transformed our lives. I remember Steve Jobs' announcement, and I thought, what am I ever going to do with that? Why do I need a phone that takes a picture? And equally, when the iPad came out, I thought, I'm not sure how I'm going to use that; now, of course, I can't live without one. It changes our workflow, it changes our productivity; people who are amenable to multitasking find that these devices do help us, and of course we're able to share information at rapid speed. As we know, through the pandemic we've been able to communicate and stay on track, in some ways, in engaging with the sorts of things that we do. And so I wanted to focus a little bit on those that haven't really experienced this incredible transformation. I was recently at an event where a representative from one of the local housing associations said, well, we have about, you know, 10,000 houses in our portfolio; if we add up all the housing associations in our portfolio, plus other providers, that might be 100,000 houses in this region, most of whom do not have access to these digital transformations. So what could you say about the sort of left out, the left behind, or the famous phrase, the digital divide? How do we address some of those issues, both economically but also maybe in policy terms?

    Diane Coyle 4:52

    There are different levels of the digital divide, and one is just the sheer network infrastructure. And the economics of these networks is such that population density really makes a difference to their financial viability. So to get universal service at high speed, there has to be public subsidy for it. In this country, we've got a government that has, since Mrs. Thatcher's time, been focused on trying all the market solutions possible first, and then, grudgingly, having some public intervention. And I think there should have been public intervention long ago, much more focused on minimum universal service. Ofcom does set standards, and I think the standards that they have set are now outdated by the technology. So that needs revisiting, and then the investment's got to happen. And we've had, you know, a more or less monopoly, with Openreach having the core of the network, and that problem hasn't really been fixed. So there's a set of problems about network infrastructure, who's going to pay for it, and universal service and utility. And then there's access to devices and the payment plans. And for that - you know, obviously smartphones are expensive; we've got plans where you can get the handset subsidised if you sign up to a reasonably expensive data plan, but lots of people can't do that. And this is a universal problem in all countries, because they're all pretty unequal. And so the people who are best off have the best access. During the pandemic that's been diabolically bad, in particular for schoolchildren who've been learning online. And if you've got a limited plan, limited data, and you've only got a phone, not a tablet or a computer, you're not going to learn, and that learning deficit is going to scar those individuals for the rest of their working careers. So that has been a problem, and I'm not sure I've got an easy fix for this, except that this is a necessity of modern life. 
And if people need subsidising to get necessities - if we subsidise their energy, for example - then we should be subsidising their connectivity as well. And then there's this sort of whole digital literacy bit, which is a whole other kettle of fish: how do we teach people to be properly sophisticated consumers of whatever it is, whether it's social media misinformation or price comparison websites, and how to interpret the information that they're getting from those?

    Todd Landman 7:18

    Listening to you, you know, it feels like you're making the case for digital connectivity as almost a public good, like access to health care, education, social welfare - the social safety net, if you will. Is it your view that this really is, you know, akin to the provision of education and health and welfare?

    Diane Coyle 7:43

    I think it is, because it's about conveying information, really. And this is the fundamental characteristic of information and how it drives economic growth, particularly in what we call the knowledge economy. All of this is useful because it gives people information to do things that make their lives easier or better in some way that matters to them. A trivial example might be: you've got an app on your phone that helps you navigate around the city, so that you don't waste time because your bus isn't running. So that's one kind of valuable information, and the time saving that goes with it. But you know, that's the fundamental point of it. Accessing public services online is almost essential now; it's about leading your daily life, making it more convenient, making it more enjoyable; in business, it's about using the information that you can get to deliver better services to your customers. So it's all about information. And that is the key characteristic of information - it is a public good, it's non-rival.

    Todd Landman 8:38

    Ah, it is a non-rival public good, and it's very interesting that that crosses over with a lot of discourse in the human rights field around rights to information, rights to be informed, etc., but also state obligations to progressively realise the fulfilment of social, economic and cultural rights. So there's a really interesting communication, or conversation, if you will, that could take place between economists and human rights people around the provision of non-rival public goods. But the other thing that I was struck by in what you said was this idea about digital literacy - about not knowing, in a way, how good all this can be for you, but also what some of the pitfalls are. How is one a good consumer of digital information? But also, what's the unwitting phenomenon of people sharing tremendous amounts of information about themselves in the absence of that digital literacy? And I know you've done some work on, you know, how much your data is worth. So how do we calculate what people's data is worth in the marketplace?

    Diane Coyle 9:36

    Aha, how much time have you got? Actually, my colleague here in the economics department, Wei Xiong has done some work looking at Chinese data on how concerned users of one of the huge apps are about privacy. And the finding there that is really interesting. You know, there's this privacy paradox. People say they care and then they act as if they don't, and they found that people care more the more sophisticated a user they are. So people who don't go online very much or don't think about it very much don't care about their privacy, but the more people use it, and learn about the pitfalls, I suppose the more they care about, about the privacy questions. But this is this is a really interesting area. And it's an ongoing area of research for me. And, you know it operates at different levels. So one is just what's it worth to the economy? People think data is an asset, because it helps businesses tailor their services better, develop better products, serve their customers better, make more money, which is a good thing in a capitalist economy. And there's a growing gap between the most productive companies and all the rest. So the top 5% In most OECD countries are pulling further away in terms of productivity and also profitability. More and more research is suggesting that's because they are using digital tools better, they using predictive analytics, they are building their own software to use the data, growing databases. So all of those more digital firms are becoming more productive and sort of winning the competitive race, the competitive rivalry that takes place in market economies. So we would like more firms to do that, to grow the economy and grow jobs and make better products and services. But then there's also the individual point that you alluded to. And being an economist, I think about this in terms of externalities. 
    And there's the negative externality that you pointed to: that your behaviour online, or the data that people accumulate about you online, can reveal things about you that you don't want to be known. Or you can do the same about other people - you can reveal things about people who are like you, or people who are connected with you, that they don't want known. And there are also positive externalities that come from joining up data, because a lot of the value, a lot of the information value, depends on putting data in context. And even something that seems very personal - like, do I have a temperature right now - obviously has positive information value for the people around me. And so to make use of this, to give people, you know, better quality lives, better information, we need to think about how we get data shared in good ways that create value for people and don't invade their privacy. So this debate, I think, is in a pretty terrible state. And I'd be interested to know what you think about this; I think part of the problem is that it's always thought about in terms of individual rights, and actually data captures relationships and context.

    Todd Landman 12:38

    Yes, and you know, a lot of the human rights discourse is around the right of the individual. But of course, there are group rights and collective rights that are equally as important. So one can look at minority rights, for example, and other collective rights. So there is that tension in human rights discourse and in human rights law between the absolute fundamental rights of the individual vis-à-vis the state, then vis-à-vis non-state actors, including businesses, but also not-for-profit organisations. And then collective rights - does a group of people have a right to maintain a certain set of practices, or certain linguistic tendencies, or textbooks in a mother tongue language? Which is a whole other podcast, I'm sure. So yeah, I think you put your finger on a very interesting tension between these things. And I guess I want to pivot to this idea of capitalism without capital. So you mentioned the idea about productivity, growing the economy, jobs, which is good for capitalism, as you say. But a lot of people have observed that actually, you know, companies like an Uber, or any other kind of online car provider, or Airbnb - these are property companies without property, these are taxi companies without taxis. So they're actually wiping out a lot of the overheads of having to run a big fleet of cars. And yet the markup on that is very high. I mean, I went to one of these data centres in London, where they command all of the data needed to run a successful taxi company. And they got 26,000 bookings a day, I think, at the time, and they were optimising to the point that even if one of their drivers was on the way home, they made sure that there was a fare in the car on the way home, because that meant that that car was earning money on the way home. So this phenomenon of capitalism without capital - I mean, it's a bit of a misnomer, because it still requires infrastructure. It still requires devices and cars, but it shifts, you know, who owns what, who does what, and where the margin sits. So, what can you say about this changing nature of capitalism in the face of this new phenomenon of digital technologies?

    Diane Coyle 14:39

    It's a big question. I think the relationship between the material and the immaterial is really interesting. And the scale of the physical investment needed in data centres, or the energy use, is often overlooked, although people are starting to talk about that more. And as you say, some platform companies operate by pushing the need to invest - whether in cars, physical capital, but also their own human capital - out to individuals. And what that means is that we're getting under-investment, including in human capital: if you're a gig worker, your incentive to invest in your own training, when you're bearing all of the risk of fluctuations in the business, is diminished. So that's quite interesting, too. And then we've got this construct of intellectual property, or non-material property, hugely valuable; the stock market value put on companies that hold a lot of data, or have a certain kind of brand or reputation, is absolutely immense. And yet it doesn't act like normal, old-fashioned physical kinds of capital. It's got very different depreciation characteristics: it can lose its value overnight, if there's a hit to reputation, or if a secret gets out and gets shared. And I think the construct of property - intellectual property, intangible property - as an individual right to own the property, or a corporate right to own the property, is just highly problematic. And I would much rather we start to think in terms of rights to access - who has rights to access what? And, you know, particularly going back to data, who can know what about somebody? Because part of the privacy issue is that whether it's big tech firms or governments, they're in a position to start joining up all kinds of data about people. And that's the problem. You don't mind your doctor knowing very intimate details about you and having that data.
    You don't mind your bank manager knowing what your bank balance is, but you wouldn't want the government to join up all of those different bits of information about you and get that synoptic view. The Stasi, the East Germans, had this term, gläserne Menschen, which meant transparent people. And that, I think, is a real problem. So I came across this concept, which you probably know more about, the idea of privacy in public, that comes from other parts of the social science literature. It operates offline; it doesn't operate online. So can we start to think about those sorts of access rights, or permissions, rather than absolute property rights? Does that make sense?

    Todd Landman 17:21

    Yeah, that makes sense. And you know, I was thinking about one of the extreme examples of the intangibles, which is this non-fungible token trading regime. So people are creating digital assets, if you will, that are then traded, and you know, a digital asset by a famous artist can sell on the market for millions of pounds. And it again gets back to some of the fundamental questions you ask in your book Cogs and Monsters about faith in the economy. You know, we think about coins and currency - why do I accept the fact that you hand me a £10 note, and I say that's a £10 note which is worth something, when actually it's just a piece of paper? So a lot of the economy is based on that transactional faith that has built up over centuries of people trading. And now, of course, during the pandemic, cash and coins weren't used as much; we're going over to electronic payment. Apple Pay has lifted its cap on, you know, pounds per transaction. You know, there's a whole new world of financial transaction that feels even more ephemeral than economics has felt in the past. And what can you say about sort of where we are going with all of this? What's the new non-fungible that suddenly is going to have value in the market?

    Diane Coyle 18:27

    I don't really know. I mean, for NFTs, I can't help but believe that there's a bubble element to that. And, you know, the art market is a pretty rigged market, if I can put it that way. So I think there are people in the market who are trying to create artificial value, if you like, around NFTs. But I don't know the answer to your question, and it sometimes seems that value has become so untethered that surely it's unreal. And yet at the same time, there are people who haven't got enough cash to go and buy food - they're going to food banks - and how has that come about? Yet equally, there are intangible things that are really valuable. Trust is an intangible, and we wouldn't have an economy without it. Cultural or heritage assets, which I'm thinking about at the moment - you know, it's not that we assign value to the stones in Stonehenge in some normal economic sense, but there is an additional cultural value to that, and how should we start to think about that? And, you know, more and more of the economy is intangible. So we have to get our heads around this.

    Todd Landman 19:27

    More and more, the economy's intangible. I'm gonna have to quote you on that. That's wonderful. I think the next thing I'm really interested in exploring with you is the role of the state, and the way I want to enter this, really, is that you've already hinted at the idea that there's clearly, you know, a role for the state in the provision of non-rival public goods; there is also a role for the state in the regulatory environment. And, you know, of course, I was very sort of worried about your observation that the state can combine banking information with health information and know something about you in a connected way - that re-identification, but also that very private revelation about someone's individual circumstances. So what's the balance between the state helping, the state regulating and the state staying away? Because that's a big concern in human rights. We often say the state has, you know, an obligation not to interfere in our rights, it has an obligation to protect us from violations of rights by third parties, and it has an obligation to fulfil its rights commitments up to available economic capability and, you know, sort of state institutional capability. But boy, there's a tough balance here between how much we want the state to be involved and how much we really say, just stay away. What's your take on that?

    Diane Coyle 20:38

    It's particularly difficult, isn't it, when trust in government has declined, and when democracies seem to be becoming rather fragile? So you worry much more about these trade-offs with an authoritarian state whose politicians you don't trust very much. I think these issues have become more acute than they might have been 25 years ago, I suppose. And at the same time, we need the state more than ever, because of the characteristics of the way the economy is changing. We've had this period since Thatcher and Reagan when the pendulum in public discourse about economic policy has swung very firmly towards markets first, the state fills in the gaps and corrects the market failures. And yet we're in a period of technical innovation when we need standards. Just going back to data, we need somebody who will set the standards for interoperability and metadata, so that we can enforce competition in digital markets, or technical standards for the next generation of mobile telephony. So we need the standard setting. And because of the non-rivalry, and because of the returns to scale, I think we're all much more interconnected economically than used to be the case. Those phenomena have always existed - there have always been, you know, big economies of scale in autos and aerospace - but they are now so pervasive across the economy that almost everything we do is going to affect other people. I think it's becoming a much more collective economy than it used to be. Or just think about the way that the productive companies are combining all of our data to use predictive analytics to do better things for us. So my strong sense is that it's a more collective economy, because it's intangible, because it's got these elements of non-rivalry and scale. And so we're going to have to have a rethink of what kind of policy discourse we have around that, and it's not markets first, government then fixes a few problems.

    Todd Landman 22:40

    Yeah, and that idea of the collective economy really moves away from, you know - the discipline of economics has often been characterised as residing in methodological individualism. As long as you understand the individual rationality of people, you just aggregate that rationality, and then you get market forces, and you get supply and demand curves, you get equilibrium prices and quantities, etc. But you're actually making a slightly different argument here: that it's the interconnectivity of human behaviour, the interrelationships of one person's choices and their consequences - or, as you say, the predictive analytics, in a way, saying, well, we expect you to like these sets of products, and therefore you will go buy them; or we expect crime in this region, and therefore we put more resources there. That's a different enterprise. That's a much more holistic enterprise of looking at, as you say, data in context, and it changes our way of thinking about modelling the economy, but also about remodelling our relationship with the state.

    Diane Coyle 23:34

    I think you're right. You know, we're in a world then of disequilibrium, of non-linear dynamics, where things can tip one way or another very quickly, where decisions by state agencies will shape outcomes. To give a simple example in my kind of territory: if you've got digital markets that tip, so that there's generally one dominant company because of the underlying economic characteristics, then any decision that a competition authority takes about a merger or a dominant position is going to shape which company dominates the market. You know, if the merger goes ahead, it's one, and if it doesn't go ahead, it may be another one. So they become market shapers. And I think this is why there's more interest now in self-fulfilling outcomes and narratives, which has started to take off a little bit in economics, more in some other disciplines. Because the narrative affects the outcome; it aligns people's ideas and incentives and points them all in the same direction. So I often think about the Victorians, and I think they had this kind of narrative of greatness, legacy and long-term prosperity, and so they built these huge town halls that you see in cities around the country. Joseph Bazalgette gave us 150 years’ worth of capacity in the London sewers. So they had something going on in their heads that was not the economics that we've had from 1979 up until just recently; they weren't doing cost-benefit analysis or thinking about equilibrium supply and demand curves.

    Todd Landman 25:07

    Yeah, it's a much bigger vision, isn't it? And you know, there's an observation now that data is the new oil - it's the oil of the future. And I wonder, in closing, whether you could just say a few remarks about: a) do you think it is the oil of the future? And what's that flow of oil going to look like? Is it just more and more data and more and more confusion? Or is there going to be some sort of consolidation, rationalisation and deeper understanding of the limits of the data enterprise and the digital enterprise? Or is it just too hard to say at this stage?

    Diane Coyle 25:36

    Economists don't like that analogy at all, because oil is a rival good and data is a non-rival good. So we, in a very anoraky way, say no, no, that's a very imperfect analogy. And I mean, of course, the point is that it's going to be ubiquitous and essential. And people still talk about the digital economy, but before long that will be like talking about the electricity economy - it'll just all be digital and data. But I think there's so much that we don't know, and so much of what will happen will be shaped by decisions taken in the near term, with, you know, consequences for governance. Really, we've talked a lot about the economics of it, but all of this has implications for governance and democracy and rights, which is where you come in.

    Todd Landman 26:18

    Yes, absolutely, and that's what we're exploring in this series of The Rights Track. So this has been a fascinating discussion. As ever, I really enjoy your insights and the precision of your use of language - and correcting me about the rival nature of data, but that's an important correction and one that I absolutely accept. But you've also raised so many questions for us to think about in terms of governance, democracy, rights, individual rights versus collective rights. And this idea of the non-rival public good - our listeners will absolutely want to chew over that one for a long time. So for now, can I just thank you so much for joining us on this episode of The Rights Track.

    Chris Garrington 26:55

    Thanks for listening to this episode of The Rights Track, which was presented by Todd Landman and produced by Chris Garrington of Research Podcasts with funding from 3DI. You can find detailed show notes on the website at www.rightstrack.org and don't forget to subscribe wherever you listen to your podcasts to access future and earlier episodes.

  • In Episode 2 of Series 7 of The Rights Track, Martin Scheinin, British Academy Global Professor at the University of Oxford and a member of the Scientific Committee of the EU Fundamental Rights Agency joins Todd to discuss whether the grammar of human rights law can cope with multiple challenges of the digital realm.

    Transcript

    00:00 Todd Landman

    Welcome to The Rights Track podcast, which gets the hard facts about the human rights challenges facing us today. In series seven, we're discussing human rights in the digital world. I'm Todd Landman, and in our second episode of the series, I'm delighted to be joined by Martin Scheinin. Martin Scheinin is the British Academy Global Professor at the University of Oxford, and a member of the scientific committee of the EU's Fundamental Rights Agency. He is currently exploring whether the grammar of human rights law can cope with multiple challenges of the digital realm. So Martin, welcome to this episode of The Rights Track; it's great to have you here. Well, you know, in our first episode of the series, we had a discussion with Ben Lucas, who's a data scientist. And the one thing that he said to me that has really stuck in my mind is that the regulatory framework has not been able to keep pace with technological change. And I wanted to use that just as an opening framing: when we consider the international human rights framework, the international human rights regime, as a regulatory framework of sorts, set against this rapid expanse of technological change in the digital space, this gap between regulation and technology is something that's pretty enduring. But I wonder what your early thoughts are about how human rights address this question of technological change?

    1:14 Martin Scheinin

    Well, I think that human rights law is very much needed now. There may be a widely held perception that human rights law would be unable to cope, for instance, because so much is done by private actors that traditionally are not seen as bound by human rights law, or because the digital realm knows no borders between countries, and therefore it escapes the jurisdiction of any single state, often relied upon as a necessary precondition for the application of human rights law. I do believe that human rights law can cope. And I can see, with some satisfaction, how both the United Nations and Council of Europe human rights bodies and mechanisms have understood the importance of the challenge and are trying to address it. That doesn't mean that they would already have found a way, but at least there is a good faith effort, I would say.

    02:13 Todd Landman

    And you know, human rights is often criticised as being state-centric, where the primary duty bearer is the state, and therefore private actors are not part of that frame. But what has changed since, you know, this early perception of a state-centric human rights framework, in your mind, that might address some of the gaps that you've already raised with us?

    2:31 Martin Scheinin

    Well, I'm currently running a four-year research project as British Academy Global Professor at the Bonavero Institute of Human Rights at the University of Oxford. And I framed the project as dealing with the grammar of human rights law and how it can address the challenges of the digital realm. And this framing signals a need to go back to the foundational concepts and structures and to see how new phenomena, new challenges, can be spoken about in the language of human rights law. And just to take one example, one of my first journal articles in this project, still in the process of being finalised, is about new EU-level and national laws about taking down internet material that is seen as terrorist online content. It's a whole wave of legislation in European and other countries. And there is an EU regulation which is directly applicable EU law in all EU member states. It's a striking example of the challenges. We are speaking of something intangible - ultimately, information, images, video, text in digitalized form - which is produced by one private actor in one country, published by someone else, another private actor, in another country, and perhaps using the medium of a server located in a third country and operated by a third private party. Under this EU regulation, which, as I said, is valid law, a national authority in a fourth country, in any EU member state, can ask the authority of the third country where the server is located to give an order to take down the material. And that national authority has one hour to act, which usually means an order for the private server operator to take down the material. And then that private actor has one hour again to implement the request or the order. What we see here is a whole chain of private actors.

    04:41 Todd Landman

    Yes, it's an incredible reach. And also, what happens if they don't comply within the hour? That's an extraordinarily short time period.

    4:49 Martin Scheinin

    Well, there are, of course, sanctions and enforcement mechanisms, penalties, etc. But we see here a whole chain of private actors in the production and publishing of this information. And the challenges to human rights law are, firstly, the issue of jurisdiction: the private actors may be located in different countries. And the order comes from at least two different states - the one where the server is located, and then the initiator of the actual request. And neither one necessarily has any jurisdiction in relation to the person who actually is behind the message, who uploaded the so-called terrorist online content, and is subject to a measure that constrains freedom of expression. That relates to jurisdiction. And then we have the question of mechanisms of redress: there's a vague clause saying there must be access to judicial remedies, but in what country? In what language? By whom? That is the question. So we risk the situation of putting people in a legal limbo. And here we need human rights law to navigate through this mess and to provide some kind of guidance as to what is permissible and where to draw the limits, both as to the substantive issue of what is terrorist online content, and also as to the procedures - what kinds of remedies will be required?

    06:15 Todd Landman

    Yeah, and you know, I'm going to pick up on this freedom of expression, and maybe add freedom of speech alongside it, with the, you know, rather famous cases of former President Trump and now Representative Marjorie Taylor Greene having been banned from certain social media platforms. One was about misinformation with respect to COVID-19; the other was about misinformation more generally, with a view to mobilising supporters. But what's your take on this ability for private organisations like a Facebook or a Twitter to ban people for life for something that they've posted on their own platforms?

    6:52 Martin Scheinin

    Yeah, the traditional view, of course, is that a medium, a newspaper, has no obligation to publish every opinion; they exercise their freedom of expression by choosing what message they want to carry. And as such, that belongs to freedom of expression. But then, when we have the emergence of, let's say, Facebook or Twitter as something that can be called public goods, or common goods, we have to ask the question whether access itself is a freedom of expression issue, and how the mediation of content can be done so that the freedom of expression of the users is reflected. I see a certain asymmetry, in the sense that those holding a public office, if they have a Twitter account, shouldn't be allowed to block other voices who may be critical of them, so that critics couldn't respond to their messages. But can Twitter then block them, by banning them from using the service? I think we are in quite a challenging situation here. I do believe that some kind of extension of human rights obligations to private actors is necessary. It may happen through their own regimes of redress, as Facebook is trying to build. And I'm optimistic - not about the model itself, but about the possibility of perfecting the model - so that we can have genuine human rights mechanisms also within private actors. Ultimately, there could be something like a World Human Rights Court with jurisdiction over those private actors who have accepted it as an appeal court in respect of their own internal mechanisms.

    08:51 Todd Landman

    That's fascinating, Martin. You know, way back in 1998, I was on my way to Venice to teach on the European master's degree in human rights and democratisation. And I think I was in the air when the British authorities arrested Augusto Pinochet of Chile and put him under house arrest, which I believe lasted about 18 months, while the British Parliament debated the legality of his arrest and his detention. And there was an appeal made, and often this case is cited as one in which the principle of universal jurisdiction applied, and it really advanced the argument for universal jurisdiction. I wonder to what degree what you're exploring and talking about here today is the application of the principle of universal jurisdiction to digital technologies.

    9:36 Martin Scheinin

    I think there's a need for a distinction, in the sense that the Pinochet case was about enforcement jurisdiction: the powers of the state to do something over an individual who is primarily subject to another country's laws. Whereas here we hold a state to account for something that happened outside its borders, because of the causal link to human rights harm elsewhere. And states have been very careful in not accepting extraterritorial jurisdiction in respect of human rights violations that materialise elsewhere, when they were not there themselves. The European Court of Human Rights has been struggling; we know the bombing of Belgrade, the Bankovic case, where the European Court of Human Rights threw it out because it was outside the legal space of the Council of Europe. Subsequently, it has taken the view that if you take possession of a person through arrest, then you are there with human rights obligations - which is, of course, a bit paradoxical: that dropping bombs is not jurisdiction, but handcuffing is. We are trying to impose upon states a broader notion of jurisdiction, which is simply based on their causal links with what happens in the digital realm - for instance, in curtailing freedom of expression by actors outside their own territory. It is necessary that we do this, because the internet knows no borders, and there are causal links which create the human rights harm we are addressing. And as we see in the EU terrorist online content regulation, there are multiple countries involved: one country asks for the takedown, another country implements it, the server can be located in a third country, and the actor himself or herself in a fourth country. There's a whole chain of action, but somebody must be held accountable. And that requires the extension of the notion of jurisdiction.

    11:44 Todd Landman

    Okay, that that distinction between the two makes, makes perfect sense to me. And you know, the complexity and complication of that is, is very salient. I wonder beyond expression and freedom of speech, etc. What other human rights are at stake in this particular agenda?

    11:58 Martin Scheinin

    Well, I don't think people realise how broadly their human rights are actually at issue when dealing with new developments in the digital realm. When we say expression, of course, what easily follows is freedom of assembly and association; their exercise has largely shifted to happen online, especially in the times of the pandemic. But we can also say that elections, democracy and public accountability have become phenomena that take place online. And this issue of democracy is especially important because of the vulnerability of electoral systems to malicious operators in cyberspace. So democracy is facilitated by moving online, but also subject to new kinds of risk. Our intimate sphere happens, to a large extent, online, even if the most important manifestations, of course, are still interpersonal. That brings up a whole range of privacy issues. Data protection is, of course, the human right which is most often referred to, simply because of the passing of lots of sensitive personal data, but the mother right, the right to privacy, is equally important. Here we go to issues such as surveillance. And if I may now mention another article I'm working on within my British Academy Global Professor project: I've been looking into the privacy-related developments during the pandemic. And of course, there are very important and very different developments over these 22 months. We have totalitarian control in countries like China, which erodes totally the privacy of the individual, and utilises and exploits health information for social control. It is true that digitalized control tools are in a sense rational, because humans are vectors of the virus. The epidemic is not simply a question of a virus that keeps replicating; it is human society which transforms the virus into an epidemic. In democratic countries, we see innovations such as digital contact tracing apps and COVID passports.
    Both are potentially privacy-intrusive, but here we see a certain kind of paradox: in order to function, they must be human rights compatible, or at least must have human rights compatible features, because they will only work if they are widely accepted. So here the issue of legitimacy comes to the defence of human rights. Technological solutions that would simply be the most effective will not work, because they will not be widespread enough, whereas where privacy by design is built into the solutions, they will have much better success. We get into new paradoxes, however, because, for instance, when the contours of the epidemic change with new variants, like the Omicron variant we are speaking on today, the scope of, for instance, a COVID passport can be changed rapidly, overnight. So previously, having a COVID passport did not reveal your actual health information. It only told you that this person is, at this moment, carrying a valid COVID passport. But it didn't tell whether they were vaccinated, whether they had had COVID, or whether they were tested in the last 24 hours or 72 hours. Now, when the requirements are made more narrow, the COVID passport suddenly starts to reveal health information. It was sold under a different label, but now it is transforming into something, let's say, worse for human rights, in the sense that it breaks the promise of not revealing health information.

    16:09 Todd Landman

    Yeah, and it really does hit the question of liberty versus public health, and involves this question of proportionate response, right? And so the human rights framework often talks about proportionality, as well as reasonableness, as well as a certain, you know, time-bound duration. So it's possible to rescind particular rights commitments for a particular period of time, if that rescinding or taking away of rights is proportionate to the threat that one faces. And of course, there are massive debates here in the UK about this; there's a very strong lobby that's advocating against the passports, another lobby that's advocating for them, and it is down almost to the individual user to give consent to those passports and move about planet Earth. But those who do not give their consent and want to move around planet Earth without demonstrating whatever status they have may in themselves be putting others at risk. But the probability of that risk is different, you know, because I could have all the passports I like and still be a contagion, and somebody could have none of the passports and not be a contagion. So there are these huge tensions throughout this whole debate.

    17:16 Martin Scheinin

    You mentioned proportionality, and I think there's an important issue that I want to address, in the sense that many a human rights law scholar is happy with proportionality. Ultimately, human rights would be a question of balancing between the competing public interest and the resulting intrusion into an individual's human rights. But I belong to the, let's say, more fundamentalist school of scholars who say there are also bright lines: there's something called the core, or the inviolable essence, of every human right. So proportionality just does not justify every intrusion. And that's an important task also in the context of COVID, that we must first define the ultimate limit up to which proportionality is taken into account. Applications of this approach include the two Max Schrems cases by the European Court of Justice, the highest EU court, where they did not conduct a proportionality assessment because they said this is mass surveillance, which is prohibited as a bright line. I endorse that approach: human rights are not only about balancing of competing values, they are also about protecting the inviolability of certain foundational principles, and they belong to what I call the grammar.

    18:39 Todd Landman

    I see, so this word grammar then becomes very important for you. And I suppose it almost invites you to deconstruct the grammar, and then reconstruct the grammar. So what can you tell us about the grammar of human rights? I'm very interested in this concept.

    18:54 Martin Scheinin

    Well, my British Academy project lists ten antinomies or challenges, which are related to human rights in the digital realm, but at the same time go back to these foundational principles, concepts and structures of human rights law, and what I mentioned about the inviolability of the essence versus proportionality is one. There's the question of the private versus the public actor as agent, and also as the duty bearer. There's the question of territorial versus extraterritorial action by states. And there's also the distinction between derogation and limitation. Limitations apply in normal times, and they must be proven proportionate, whereas derogations are exceptions in times of crisis. And I think COVID has provided us an opportunity to look once again into the question: are there different limits, a different definition of the inviolable core, for instance, when a country is in a state of emergency? These are just examples.

    20:04 Todd Landman

    Yeah, they're great examples. We interviewed an anti-terror expert, Tom Parker, in the last series, and he made reference to a very similar set of things to those you just said there. And, you know, this notion of limits is really important. But he's also worried that there's a kind of state bureaucracy, a state apparatus, that has been developed for this particular public emergency, and he's worried that that will become permanent, that it won't fade away, it won't be brought back down again after a period of duration, and that we are, in a sense, living with a new kind of surveillance that will not go away. What do you say to that?

    20:40 Martin Scheinin

    I have worked on surveillance in an earlier EU-funded research programme called SURVEILLE, which developed a multidimensional and multidisciplinary methodology for assessing the utility of surveillance technologies versus their human rights harm. And we could show that the most intrusive methods of surveillance often were only marginally effective in actually producing the legitimate aim, or benefit towards the legitimate aim. It was semi-empirical, largely based on hypothetical or modelled situations. But nevertheless, we had multidisciplinary teams working on it and could show that this technological hype around surveillance is unfounded, that traditional methods of policing, footwork and human intelligence, deliver much better proportionality when assessing the human rights harm in relation to the actual benefit obtained toward national security. There are many reasons why surveillance technology and other digital innovations tend to perpetuate themselves, and we can speak of the surveillance-industrial complex. And I'm also sure that there are issues of mission creep and function creep, and many of the changes we see in the realm of treatment of sensitive health data will remain after COVID-19 is over. So something is lost, or at least there's a risk that something is lost, every time a special situation justifies resorting to exceptional measures.

    22:31 Todd Landman

    And just in closing, I want to ask you a final question, which is you spend your time as Global Professor, you engage with academics at Oxford and the rest of the world in this area, and you come up with a new grammar for human rights - what next? What's the goal here? Is it to then advocate to the United Nations system, the European system to change laws, regulations and practices? Do you think you could have that kind of leverage to make the changes required to address the very deep issues that you've raised with us today?

    22:59 Martin Scheinin

    Well, I did mention the SURVEILLE project where I was involved. That gives a good example of what an eternal optimist who is a serious academic can achieve. So we developed this methodology for the multidisciplinary assessment of surveillance technologies, and we delivered our reports, and on 29th of October 2015, the European Parliament adopted a resolution where they commended the methodology developed in the SURVEILLE project and recommended it for use. Two weeks later came the Bataclan attack, one of the most dreadful terrorist attacks in Europe, and everything was forgotten. Nothing came out of it. And that's the pendulum, especially in issues of terrorism: there are all kinds of good efforts to develop constraints and safeguards and make proposals about human rights compatibility, but when panic strikes, it goes down the drain. I am an eternal optimist, and I think that human rights law has to engage, has to evolve, and that it will be able to deliver outcomes that both make a meaningful difference as to the facts on the ground, and at the same time are able to correspond to the intuitions of ordinary people with common sense. There is a certain legitimacy requirement that what we deliver must be backed by the people as acceptable. And I think we can cope with that. But we cannot cope with irrational panic. That's the big problem in this work.

    24:37 Todd Landman

    Amazing. Yeah, I share your optimism, I'm afraid. And you know, for all the incremental gains, you do face setbacks from these external threats, panics as you call them, and the perception of the disruption that's coming, but at the same time holding true to human rights and the philosophies that sit behind human rights. And then there's also this thing about legitimacy. I think, you know, if we go back to Max Weber and his legal-rational sources of authority, legitimacy comes from that acceptance, that people think this is reasonable, proportional and something we can live with. But as you say, if there's overreach, mission creep, panic and other elements of state action, and non-state action I might add, then the acceptability and legitimacy comes into question. So it's just been unbelievable talking to you and hearing your insights, and the direction that you've taken our conversation today. So much to think about. You're in the middle of the project; we look forward to the results that you get at the end of the project, and really seeing that output and those conversations that will come from what you discover. But for now, I just want to thank you for appearing on this episode of The Rights Track.

    25:53 Chris Garrington

    Thanks for listening to this episode of The Rights Track, which was presented by Todd Landman and produced by Chris Garrington of Research Podcasts with funding from 3DI. You can find detailed show notes on the website at www.RightsTrack.org. And don't forget to subscribe wherever you listen to your podcasts to access future and earlier episodes.

  • In Episode 1 of Series 7 of The Rights Track, Todd is in conversation with Ben Lucas, Managing Director of the University of Nottingham's Data-Driven Discovery Initiative (3DI).

    Together they discuss the threat to human rights posed by aspects of a digital world and the opportunities it can create for positive change.

    Transcript

    Todd Landman 0:00

    Welcome to The Rights Track podcast, which gets the hard facts about the human rights challenges facing us today. In series seven, we're discussing human rights in a digital world. I'm Todd Landman, and in our first episode of the series, I'm delighted to be joined by Ben Lucas. Ben is Managing Director of 3DI at the University of Nottingham, a hub for world-class data science research, and a funder of this series of The Rights Track. To kick off the series, we're talking about some of the challenges and opportunities created in a data-driven society, and particularly what all that means for our human rights. So welcome to this episode of The Rights Track.

    Ben Lucas 0:37

    Thank you so much for having me.

    Todd Landman 0:38

    It's great to have you here, Ben. And I guess I want to start with just a kind of broad, open question. We've been living with the internet for a number of years now. When I first came to the United Kingdom, we barely had the internet, and suddenly the web exploded, and it is a wonderful thing. It's transformed our lives in so many different ways. But it's also created major challenges for human rights law and practice around the world. So my first question really is: what are the key concerns?

    Ben Lucas 1:04

    I think that the internet is perhaps not bad in and of itself, and in that regard, it's very similar to any other new and emerging technology. If we look at something like the automobile, there are obviously dangers that having cars on roads introduced into society, but there's also a lot of good as far as a boost in quality of life and economic productivity and so forth. I think the central challenge, and one that's perhaps getting exponentially more challenging, is the fact that, often more now than ever, digital technologies are moving a lot faster than the regulatory environment can keep up with, and also, very importantly, faster than humankind's ability to fully understand the potential consequences of misuse, or what happens when things go wrong.

    Todd Landman 1:50

    So in some ways it is interesting; you could look at Moore's Law, for example - technology increases exponentially - and this point you're making about the inability of the regulatory environment to keep up with that. I think that's a crucial insight you've given us, because human rights, in a way, is a regulatory environment. We have international standards; we have domestic standards.

    Ben Lucas 2:08

    Correct.

    Todd Landman 2:09

    We have de jure protection of rights, de facto enjoyment of rights, but oftentimes there's a great tension or gap between those two things. And when new issues emerge, we either need a new standard, or we need a new interpretation of those standards to be able to apply to that new thing. So we're going to call the internet a new thing for now. And actually, this dual use of technology is also interesting to me. When barbed wire was invented, it was a great thing, because you could suddenly close off bits of land and keep animals in one place. And it's wonderful for agriculture, but it's also a way to control property. And as we know, the enclosure laws in this country led to quite a lot of political conflict. But if we get back to the questions then about, you know, positive and negative aspects of the internet, what else can you share with us?

    Ben Lucas 2:50

    There are examples, such as work that colleagues in the Rights Lab are doing, on the use of the internet, and in particular social media, for exploitation. So, child exploitation, for example. There are also terrible examples of migrant exploitation: people who join groups thinking it's going to be a community to help them get a job in another place, and that turns out to be quite dodgy. So there are examples that are just blatantly, you know, bad, and terrible things that happen on the internet. But then there are other examples that are, I think, much more complicated, especially around the transmission of information and the new emergent keywords we're seeing around misinformation and disinformation. The power that user-generated content can have to help mobilise activists and protests for good, for example, to get information out when journalists can't get in. Then the flip side of that is the potential exploitation by nefarious actors who are obviously spreading information that potentially damages democracies and otherwise stable and important institutions around the world. The other thing I would cite here would be work by our colleague Austin Choi-Fitzpatrick, with his book The Good Drone. That's a really interesting contrast here. So, a book about the use of UAVs, where on the one hand, if we think about a UAV that's armed.

    Todd Landman 4:12

    That's an Unmanned Aerial Vehicle for our listeners.

    Ben Lucas 4:14

    Yeah, Unmanned Aerial Vehicle. And if we think about one of those drones that's armed, and also potentially autonomous moving forward, that's potentially, you know, very, very scary. On the other hand, the same basic sort of technology platform could provide cheap and accessible technology to help mobilise social movements, to help journalists, for example. And so I think in any debate around the good and bad of technology, there are some really interesting and very complicated contrasts involved.

    Todd Landman 4:43

    And you know, you see drones being used for beautiful visual displays over you know, presidential inaugurations, for example.

    Ben Lucas 4:48

    Exactly.

    Todd Landman 4:49

    You see this big, colourful display, but that same swarm technology of UAVs can actually be used for combat, for warfare, etc. And we know from the work on human rights, modern slavery and human trafficking that, you know, taking pictures of the Earth using swarms of satellites is very good, but that can also be used for ill as well, and I think that challenge of the dual use of technology will always be with us. I wonder now if we could just turn to another set of questions, which is the difference between life online and life offline. Do we think that human rights rules are different for online and offline life, or roughly the same?

    Ben Lucas 5:25

    A lot of people argue that online is a mirror of offline, although there are those potentially really negative amplification effects involved in the bad stuff that happens in the real world, so to speak, when you move it online, because you can take something that's very small and suddenly make it very big. I think there's a degree of it really just being a mirror, and potentially an amplifier, for the offline. Again, I think the central problem when we talk about human rights and the general protection of users of the internet is really this fact that the technology is just moving so fast. Regulation - how it's developed, initiated, interpreted going forward - the tech just moves so much faster. And then I think what we're seeing now is really kind of a shock that internet users get after the fact, but it's maybe a sort of Newton's third law effect. You know, tech moved so fast, was so aggressive and so free - there was sort of a wild west in how we, you know, captured and used data - and now we're just experiencing the backlash that you would expect. One other complicated dimension here is that we really need regulation to protect users of the internet, but of course that's then balanced against examples we see around the world of the way the internet's regulated being used to oppress and suppress populations. There's a really important balance that we need to achieve there. We need to protect everybody online. We need to preserve freedom of access to information, freedom of speech. We don't want people to get hurt online, but we also don't want to do that in an oppressive way. Maybe one thing that's really different as far as human rights online and offline will emerge in the future around artificial intelligence.
    The big question, I think, that researchers in artificial intelligence are dealing with - be they folks who are working on the algorithmics, or be they the colleagues in law who are working on the ethics and the legal side of it - the really big question is around transparency and tractability: what's actually happening in this magic algorithmic box? Can we make sure that people can have appropriate checks and balances on what this new class of machines is doing?

    Todd Landman 7:32

    Well, it's interesting, because there is this observation about people who use AI and design those algorithms: that the AI solution and the algorithm that's been designed reflects many of the biases of the coder in the first place.

    Ben Lucas 7:44

    Exactly.

    Todd Landman 7:45

    And who are these coders? Well, they come from a particular social demographic and therefore you're replicating their positionality through AI, yet AI is presented as this neutral machine that simply calculates things and gives you the best deals on whatever platform you might be shopping.

    Ben Lucas 7:58

    Precisely. And a lot of these - you know, if we think about machine learning in general, where we're training an algorithm, essentially a type of machine, to do something - it involves a training data set. Where is that coming from? Who's putting it together? Exactly what biases are present in that? And now - and this is probably one of the most pronounced differences when we think about human rights offline and online - I think a really big issue going forward is going to be that of AI discrimination, basically. And we're seeing that in everything from financial services - you know, a machine is making a decision about does somebody get a loan, does somebody get a good credit score - to applications in facial recognition technology. Who are they trying to find? What are they trying to do with that tech? And this AI discrimination issue is going to be one of the key things about that online/offline contrast.
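    [Editor's note] The point about biased training data can be sketched in a few lines of Python. This is a toy illustration, not any real lending system: the group labels, figures and the rate-based "model" are invented for the sketch. A model fitted to historically biased decisions simply reproduces the bias it was trained on.

    ```python
    # Hypothetical loan-approval history: (group, approved) pairs in which
    # group "B" was approved far less often than group "A".
    train = [("A", 1)] * 90 + [("A", 0)] * 10 + [("B", 1)] * 30 + [("B", 0)] * 70

    def fit_rate_model(data):
        """A naive 'model' that just learns the approval rate per group."""
        by_group = {}
        for group, label in data:
            by_group.setdefault(group, []).append(label)
        return {g: sum(labels) / len(labels) for g, labels in by_group.items()}

    model = fit_rate_model(train)
    # The learned rates mirror the historical bias exactly:
    # model["A"] is 0.9, model["B"] is 0.3 - the machine has 'learned' to
    # discriminate, because discrimination was present in its training set.
    ```

    The same logic applies however sophisticated the learner: if the training data encodes a bias, the fitted model will tend to carry it forward.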

    Todd Landman 8:50

    Yeah, you know, there's one principle running right through all of our human rights law discourse, about non-discrimination, right - that there should not be discrimination by type of person.

    Ben Lucas 8:59

    Correct.

    Todd Landman 9:00

    And yet we know in practice there's discrimination already. And in a way, AI can only amplify or maybe accelerate some of that discrimination. So it's a good cautionary tale about, you know, the, shall we say, naive embrace of AI as a solution to our problems. I wonder if I might just move forward a little bit to the cross-border nature of the internet. One of the promises of the internet is that nation state boundaries disappear, that people can share information across space and time. We've just lived through a pandemic, but we're able to talk to each other in meetings all around the world without having to get in any form of transport. But what sort of things should we be thinking about in terms of the cross-border nature of the internet?

    Ben Lucas 9:38

    I think I would encourage all listeners today to go back to Alain de Botton's book, The News: A User's Manual, and also some of the talks he gave around that period, I think around 2014. We can have a totally new interpretation of some of those very relevant ideas where we are now in the present, and I'm talking about what some people are calling the threat of the post-truth era. We've seen a completely unprecedented explosion in the information that we have access to, and in the ability to suddenly take somebody's very small idea, good or bad, and project it to a massive audience. But with that come, you know, the vulnerabilities around misinformation and disinformation campaigns, and the threat that that leads to - potentially threatening democracies, threatening, you know, various populations around the world. And another important branch of work that we're doing is studying campaigns and user-generated content, and actually studying what's being said, at scale, within these large audiences. We've done quite some work, Todd and I, with the Rights Lab, for example, looking at analysing campaigns on Twitter. And this really comes down to trying to get into, exactly as you would study any other marketing campaign, looking at how do you cut through clutter? How do you achieve salience? But then also through to more practical, functional matters of campaigns, such as, you know, driving awareness in a given region, policy influence, donations - but we're just doing that at a much larger scale, which is facilitated, obviously, by the fact that we have access to social media data.

    Todd Landman 11:16

    It's unmediated supply of information that connects the person who generates the content to the person who consumes it.

    Ben Lucas 11:23

    Yeah.

    Todd Landman 11:24

    Earlier you were talking about the media, you were talking about academia and others - you know, there's always some sort of accountability or peer review element to that before something goes into the public domain. Whereas here you're talking about a massive democratisation of technology, a massive democratisation of content generation, but actually a collapse in the mediated form of that, so that anybody can say anything, and if it gains traction, and if it's repeated enough, then enough people believe it's actually true. And of course we've seen that during the pandemic, but we see it across many other elements of politics, society, economy and culture. And yet, you know, there we are in this emerging post-truth era, not really sure what to do about that. We see the proliferation of media organisations, the collapse of some more traditional media organisations - broadsheet newspapers and others have had to change the way they do things and catch up. But that peer review element, that kind of sense check on the content that's being developed, is gone in a way.

    Ben Lucas 12:18

    Yep, and it's potentially very scary, because there's no editor-in-chief for, you know, someone's social media posts. On top of that, they probably have, or could potentially have, a far greater reach than a traditional media outlet. And I think the other thing is, I mean, we were kind of forewarned on many of these issues. The NATO Review published quite some interesting work on disinformation and propaganda in the context of hybrid warfare, starting, I think, or ramping up, in 2016, which is, you know, also a very fascinating read. And then the flip side again of this connectivity that we have now - the good side, you know - is when user-generated content is used in a good way. And again, that's examples like, you know, examples we've seen around the world with the mobilisation of protests for good causes or fighting for democracy, grassroots activism, and in particular that ability to get information out when journalists can't get in.

    Todd Landman 13:15

    You know, it's interesting - we did a study years ago, colleagues and I, on the mobilisation against the Ben Ali regime in Tunisia, and we were particularly interested in the role of social media and the Facebook platform for doing that. And it turned out that (a) there was a diaspora living outside the country interested in the developments within the country, but (b) within the country, those who were more socially active on these platforms were more likely to turn up to an event, precisely because they could work out how many other people were going to go. So it solves that collective action problem: you know, my personal risk and cost associated with protesting is suddenly reduced, because I know 100 other people are going to go. And, you know, we did a systematic study of the motivations and mobilisation of those folks trying to oust the Ben Ali regime, but it gets to the heart of what you're saying, that this user-generated content can have a tech for good or a social good element to it.
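    [Editor's note] The collective action point can be sketched with a toy threshold model in the spirit of Granovetter's classic account (a minimal sketch, not the actual method of the study discussed; the threshold values are invented): each person joins a protest once enough others already have, so visibility of expected turnout - exactly what social media provides - can tip a whole population in or out.

    ```python
    def cascade(thresholds):
        """Iterate to a fixed point: a person joins once the number
        already participating meets their personal threshold."""
        joined = 0
        while True:
            new = sum(1 for t in thresholds if t <= joined)
            if new == joined:
                return joined
            joined = new

    # Thresholds 0..99: each joiner emboldens the next, and everyone turns out.
    full_turnout = cascade(list(range(100)))

    # Replace the one person willing to go first, and nobody shows up at all,
    # even though 99 of the 100 thresholds are identical to the first case.
    no_turnout = cascade([1] + list(range(1, 100)))
    ```

    On this reading, a platform that lets people see how many others will attend effectively lowers everyone's threshold, which is why it can solve the collective action problem described above.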

    Ben Lucas 14:08

    Exactly. And I think another important note here, that's maybe some sort of upside, is that, you know, there are a lot of academics in a lot of different fields working on understanding this massive proliferation of connectivity as well. A kind of, I guess, strange silver lining to many of the new problems that this technology may or may not have caused is that it's also given rise to the emergence of new fields - so we're talking about infodemiology now, we've got some amazing studies happening on the subjects of echo chambers and confirmation bias and these types of themes. And I think it's really given rise to some really interesting science and research, and I have some confidence that, even if we don't have those editors-in-chief on social media, we certainly have some, you know, wonderful scientists coming at this scenario from a lot of different angles, which I think also helps to moderate and bring some of the downsides to the public's attention.

    Todd Landman 15:04

    Yeah, and let me jump to research now, because I'm really interested in the type of research that people are doing in 3DI here at the university. Can you just tell us a little bit about some of the projects, and how they're utilising this new infodemiology, as you call it, or this new grasp and harnessing of these technologies?

    Ben Lucas 15:23

    Yeah, so 3DI, as the Data-Driven Discovery Initiative, we're basically interested in all things applied data science. We have, I think, quite a broad and really wonderful portfolio of activity that we represent here at the University of Nottingham, in our Faculty of Social Sciences. This is everything from economics, to law, to business, to geography, and everything in between. We take a very broad, exploratory approach to the kinds of questions that we're interested in solving, I would say. But we do tend to focus a lot on what we call local competitive advantage. So we're very interested in the region in which we operate - Nottinghamshire - and in sectors and industry clusters where they have questions that can be answered via data science.

    Todd Landman 16:08

    What sort of questions? What sort of things are they interested in?

    Ben Lucas 16:11

    This is everything from the development of new financial services, to really driving world-class new practice in digital marketing, to developing and advancing professions like law, where there is a very big appetite to bring new tech and data-driven solutions into that space, but a need to achieve those new fusions and synergies. So that side is obviously very, you know, commercially focused, but, very importantly, a big part of our portfolio is SDG focused - Sustainable Development Goal focused - and we've got, I think, some really fascinating examples in that space. My colleagues in our N-Lab, which is a new demographic laboratory based in the business school, are working on food poverty, for example. And they're doing this in what I think is a really exciting way. They've teamed up with a food-sharing app. So, this is very much driven by the start-up world; it's very much a marketplace offering. The platform is set up to combat, hopefully, both hunger and food waste. So we're talking SDG 2, and we're talking SDG 12, sustainable production and consumption. And they've then been able to expand this work, not just to understanding the platform and how it works, not just helping the platform work and function better, but taking that data from the private sector and applying it to questions in the public sector. So they are doing a lot of wonderful work.

    Todd Landman 17:37

    So, people have a bit of surplus food, and they go on to the app and they say I've got an extra six eggs, and someone else goes on the app and says I need six eggs and then there's some sort of exchange, almost like an eBay for food.

    Ben Lucas 17:47

    Exactly.

    Todd Landman 17:48

    But as you say, people who are hungry get access to food for much less than going to the shop and buying it and.

    Ben Lucas 17:55

    Or free.

    Todd Landman 17:56

    And people with the extra six eggs don't chuck them out at the end of the week. They've actually given them to somebody right?

    Ben Lucas 18:01

    Exactly.

    Todd Landman 18:02

    And then from that you generate really interesting data that can be geo-located and filled into maps, because then you can work out where the areas of deprivation are - where people have, say, a higher probability of seeking less expensive food.

    Ben Lucas 18:15

    Precisely, yeah. And I think that's also a good segue into, you know, one of the other flagship projects we have in 3DI, which is tracktheeconomy.ac.uk, where we've been looking at, again, taking data from the private sector, but also government data, and looking at how economic deprivation might have been exacerbated or not, or how it changed - in particular focused on COVID and the sort of shocks that brought about, but with the intention of taking that forward. And the biggest revelations that we've had working on that project have been really around the need for better geographical granularity. A lot of our national statistics, or, you know, the marketing research assessments that are made by companies, are based on bigger geographical chunks. Actually, if we can get more granular and get into some of that heterogeneity that might exist at smaller geographical levels, that's really, really important. It really changes a lot of policy formulation scenarios and questions that policy makers have.
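    [Editor's note] Why granularity matters can be shown with a toy example (the ward names and income figures below are invented for the sketch, not drawn from the project's data): averaging over a larger statistical area can hide a pocket of deprivation that is obvious at ward level.

    ```python
    # Hypothetical household incomes for two wards inside one larger area.
    wards = {
        "Ward A": [21000, 22000, 23000],
        "Ward B": [9000, 10000, 11000],   # a pocket of deprivation
    }

    def mean(values):
        return sum(values) / len(values)

    # Aggregate view: one number for the whole area, which looks comfortable.
    area_mean = mean([x for incomes in wards.values() for x in incomes])

    # Granular view: per-ward means reveal that Ward B is far worse off.
    ward_means = {name: mean(incomes) for name, incomes in wards.items()}
    ```

    The area-level mean comes out at 16,000, while the ward-level means are 22,000 and 10,000: exactly the heterogeneity that disappears when statistics are reported only in "bigger geographical chunks".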

    Todd Landman 19:19

    One of the big problems when you aggregate stuff is that you lose that specificity, precisely in the areas that are in most need. So I wonder, in this research that you and your colleagues have been doing, you know, what's the end game? What are we working towards here? And how is that going to help us from a human rights perspective?

    Ben Lucas 19:41

    I think speaking from a personal perspective, when I was a student when I was first taught economics, I was taught in a way that really highlighted that this is you know, economics was was just something that everyone as a citizen should know even if you don't want to become an economist or an econometrician, you need to know it as a citizen. The same now very much applies when we talk about technologies that might not be familiar to all folks like AI data science. I think there's a lot to be said, as far as what I would say is a big sort of mission for 3DI is to really boost the accessibility of technical skills to really benefit people in terms of prosperity, but also just in terms of understanding as citizens what's actually going on. You know, if machines are going to be making decisions for us in the future, that we have a right to understand how those decisions are made. Also, if we think about other challenges, in the sort of AI and automation space around, you know, potentially people losing jobs because it's become automated. I think we have a right to know how and why that is. I think another big sort of an extension of that point is really in learning and getting technical skills out there to people for you know, potentially benefiting prosperity and the labour market. We really need to keep that very tightly paired with critical thinking skills. You know, we're very good as academics, thinking about things and breaking them down and analysing them especially you know, we as social scientists, you know, coding is probably going to be language of the future to borrow your quote Todd, but who's going to use that coding and what for? So I think we need to keep people in a good mindset and be using this this this technology and this power for good. 
    And the last point, something that's been done very well on this podcast in the past, is getting people, both researchers and, definitely, citizens, to think about the inextricably intertwined nature of the Sustainable Development Goals. For us at 3DI, we're looking for those problems at scale, where we have measurements at scale, where we can do data science and crack big challenges. But whether you're doing much more focused work or work with the SDGs at scale, it's all really interconnected. An obvious example: what is climate change going to do in terms of potentially displacing populations, and the horrible flow-on effects that's going to have? So I think that's our mission, I would say, moving forward.

    Todd Landman 22:07

    That's fantastic. You've covered a lot of ground, Ben; it's been a fascinating discussion: the dual use of technology and this age-old question of the good and the bad of any new technological advance; the mobilisational potential and the problems of the post-truth era; the proliferation of multiple sources of information in the absence of that mediated or peer-reviewed element; and this amazing gap between the speed of technology and the slowness of our regulatory frameworks, all of which have running right through them major challenges for the human rights community. So we're really excited about this series, because we're going to be talking to a lot of people around precisely the issues you set out for us, and many more. In the coming months we've got Martin Scheinin, who is a great human rights expert, former UN Special Rapporteur, and now a British Academy Global Professor at the Bonavero Institute at the University of Oxford, working on precisely these challenges for human rights law and this new digital world. That's going to be followed by a podcast with Diane Coyle, who's the Bennett Professor of Public Policy at the University of Cambridge. It's interesting, because she wrote a book in 1997 called The Weightless World, about the emerging digital transformation of the economy, and has now written a new book called Cogs and Monsters, a great take on the modern study of economics and the role of digital transformation. But for now, I just want to thank you, Ben, for joining us. It's exciting to hear about the work of 3DI, and we appreciate 3DI's support for this series of The Rights Track. We look forward to the guests, and I think by the end of the series we'd like to have you back on for some reflections about what we've learned over this series of The Rights Track.

    Ben Lucas 23:50

    Happy to. Thank you for having me.

    Christine Garrington 23:53

    Thanks for listening to this episode of The Rights Track, which was presented by Todd Landman and produced by Chris Garrington of Research Podcasts with funding from 3DI. You can find detailed show notes on the website at www.RightsTrack.org. And don't forget to subscribe wherever you listen to your podcasts to access future and earlier episodes.

  • In Episode 8 of Series 6 Todd is in conversation with Arlene Tickner and David Owen about the impact of Covid-19 on democracy and migration. Arlene is a Professor of International Relations in the School of Political Science, Government and International Relations at the Universidad del Rosario in Bogotá, Colombia. David is a Professor of Social and Political Philosophy at Southampton University.

    0.00-12.40

    Todd starts by asking David about the relationship between democracy and human rights. David says that human rights and democracy are mutually entwined: rights secure our basic standing, interests and membership in a democracy, whilst being part of a democracy is meant to ensure those rights are available to us.

    Todd expands on David’s explanation and asks him how Covid-19 has compromised the ideals of democracy and the protection of human rights. David points to three things that have called everyday ideas of democracy into question:

    How, within states, different people (e.g. permanent/temporary residents, asylum seekers and refugees) are treated unequally
    The depth of global inequalities around health (e.g. Africa has just 3,000 intensive care beds on the whole continent)
    How, between and within states, we are radically interdependent (poverty and lack of education in other parts of the world are threats to us all)

    Todd asks Arlene about how she sees things from her base and perspective in Colombia. She outlines the political backdrop across Latin America where she says people are increasingly questioning democracy as the best form of government because of its failings. The pandemic has underscored different forms of inequality and is crucial in understanding growing forms of social protest in the region. She points to two specific issues that underscore the shortcomings of the democracies in this part of the world:

    Latin America is the worst affected region, accounting for 35% of all deaths from Covid-19 despite representing only 8% of the global population (Colombia is top of the global list for deaths)
    The vaccination programme is extremely slow (e.g. only just beginning in Paraguay)

    Todd comments that there is something of a myopia towards this part of the world and asks Arlene to talk specifically about recent protests in Colombia itself.

    Arlene says the country has undergone a number of protests since the peace accords were signed a few years ago, which was to be expected. But she adds that the more recent protests were related to tax reforms proposed in the middle of the pandemic, which caused considerable discontent among the middle classes. Protests were also linked to ineffective implementation of the peace accords, discontent around access to education for young people, frustration over the pandemic, and deteriorated health infrastructure and pensions. Excessive police force used against protestors has worsened the situation, and invitations for dialogue have been empty offers.

    12.40-18.00

    Todd mentions recent protests in the UK (Black Lives Matter, violence against women, anti-lockdown, European Football Championship final violence and racism) and asks David for his take. He says there is a question of how to balance public health security with the right to protest (a fundamental human right). A more worrying issue for democracy in the UK, he says, is a lesson learned from Trump's America around using culture as a way of focusing and intensifying social division (something he believes Boris Johnson and Priti Patel have engaged in in a bid to silence/counter the traditional left).

    He adds that culture is becoming something of a key battleground for the kind of democracy people want: a relatively thin one with a strong executive, as in Turkey, India or Russia, or a more egalitarian form of democracy with genuine opportunities to self-govern and participate.

    Todd picks up on David’s mention of ‘wokeism’ and points out that it is something that still isn’t well understood in the UK. He goes on to ask David about the lifting of restrictions in the UK despite rising cases of Covid. David refers to the England football team as a representation of the conflicted visions around what Britain/England should look like. One is a diverse and multicultural ‘bringing people together’ vision - the other is focused on division, generating division and ruling through division. Todd agrees.

    18.00-24.43

    The discussion moves to migrants and migration. Todd asks Arlene about the situation in Venezuela which has been highly unstable since the 1990s and where many people have decided to leave the country and flee to Colombia.

    Arlene says there is both a political and a humanitarian crisis in Venezuela (exacerbated unintentionally, she says, by the US), which has led to some 2 million Venezuelans fleeing to Colombia. Latin American countries more widely have been unable to agree on a strategy to deal with this, but the Colombian President has afforded temporary protection status to all those migrants who arrived before January 2021. This has created a huge strain on Colombia’s fiscal capabilities. Arlene believes this to be part of the President’s ambitions to force the Venezuelan President Nicolás Maduro out of power.

    Todd also asks about widespread social protests in Cuba and Arlene says these protests were a surprise to many, but that essentially were a response to growing discontent with the handling of the pandemic and certain human rights. She adds sanctions put in place by Donald Trump when he was US president have hit the Cuban economy hard.

    24.43-end

    Todd asks David and Arlene to reflect on what the future of democracy holds. David says that in Europe the massive visibility of the inequalities discussed may be a spur for a re-engagement with social democracy and taking inequality seriously. He mentions Portugal as leading the way in temporarily giving some migrants the same rights to healthcare as its citizens. The ways in which states have handled the pandemic will have implications for how politics in those states develops as the pandemic declines.

    Arlene says there are few success stories from the region; rather, the pandemic has simply placed in sharp relief how those democracies are failing. That said, she does think Uruguay and Chile provide some sources of hope. She says events around the pandemic have raised questions for her around ‘who is the human’ in human rights, and so she feels both pessimistic and hopeful about the future of democracy.

  • In Episode 7 of Series 6 Todd is in conversation with Tom Parker, a prominent counterterrorism practitioner who has consulted for the EU, the UN, Amnesty International and MI5 on post-conflict justice, security sector reform, and counter-terrorism. He is author of a new book, Avoiding the Terrorist Trap: Why Respect for Human Rights is the Key to Defeating Terrorism and in this episode he and Todd are reflecting on the complex interplay between counter-terrorism and human rights in the context of the Covid-19 pandemic.

    NB By using the link above and the following codes, you can enjoy a substantial discount on Tom's book

    55% discount on the Hardback version using the code: P995PARKERHC
    30% discount on the eBook version using the code: P995PARKEREB

    0.00-03.43

    Todd asks Tom about a view expressed in his book that there is no set profile of a terrorist. Tom says there have been many attempts to profile the type of individual who becomes a terrorist, but that this does not work. Terrorists come from all backgrounds and walks of life; they are male and female, young and old. He mentions Mohamed Merah, who shot and killed seven people in Toulouse, and Merah's brother, who, although from the same background and influences, went on to marry a Jewish woman and become involved in inter-faith dialogue. Tom says there are a host of pressures from different sources that push people towards terrorism, and that certain behaviours, such as the marginalisation or abuse of people, can influence whether terrorism emerges in a particular society.

    03.43-15.20

    Todd asks Tom why he believes a human rights framework is so key to tackling terrorism. Tom explains that while researching his book he looked closely at materials in which terrorists over 150 years and across continents shared information about their ‘cause’ or activities. He outlines six core principles that emerge:

    Asymmetrical warfare
    Attrition
    Propaganda by deed
    Revolutionary prototype
    Provoking a reaction from the existing Government
    Undermining the legitimacy of the existing Government

    In the latter two, Tom believes a human rights framework is particularly key, as it stops Governments falling into a trap of over-reaction. He mentions the activities of the Baader-Meinhof group from the late 60s through to the 1980s.

    Todd asks Tom to say more about the idea that open societies are more vulnerable to terrorism and feel more pressure to create restrictive measures to prevent it. Tom says terrorism tends to happen in democracies rather than authoritarian societies. Terrorists are using violence often to open a political dialogue. Human rights law does not prevent states from taking action to protect themselves. Rather, Tom says, it is quite permissive with a range of options within a framework and he sees no reason to step outside that framework. He talks briefly about his own experiences in the 1990s as a security officer in the UK working within this framework. He sees no tension between effective counter-terrorism and human rights observance.

    Todd presses Tom on the claim from some quarters that the perceived existential threat of terrorism leads states to curb freedoms and violate human rights. Tom references Mao Tse-tung’s analogy of the war of the flea and explains that it is the reaction to a perceived threat that is the actual threat. He talks about Al Qaeda and how, in his view, it never posed a real existential threat to the United States compared with other threats, including COVID-19. He goes on to say that despite this, many of the laws passed as a result of 9/11 are still in force today. He says he is in favour of the system used in the UK during the Troubles in Northern Ireland, when all anti-terrorism legislation was temporary and designed to restore the status quo, and was therefore reviewed, renewed where necessary, and updated or changed regularly. This has been lost since 9/11 in the US, UK and Europe, he adds. He also notes that this has happened in the context of new, highly intrusive technical advancements that did not exist 20 years ago and may be hard to dismantle.

    15.20-23.30

    The conversation moves to COVID-19, whether it can be perceived as an existential threat, and whether responses to it can be perceived as curbing human rights. Tom talks about ‘privilege’ and how the threat seems larger in the West compared with Nigeria, where he is currently based and where there are other as serious, if not more serious, public health threats such as malaria. International human rights law anticipates the curbing of public freedoms to protect public health, so he says there isn’t a threat per se to human rights as long as the curbs are lawful, proportionate and so on. Todd presses Tom on public concerns around the measures used to tackle COVID-19 and how long they will be left in place. Tom references the Iron Law of Bureaucracy, a concept in political science holding that an established democracy and its supporting state institutions have a tendency to enlarge and enhance themselves. He says we don’t think enough about the length of time we may have to live with measures after a threat has passed. He mentions the shoe bomber Richard Reid and how we still take our shoes off at airports because one person tried unsuccessfully to smuggle a bomb onto a plane in his shoe; he points out that when something fails, terrorists tend to move on to different things. Likewise, the so-called Ring of Steel around London, established in response to the threat posed by the IRA, took many years to gradually dwindle, because these things are hard to change back once they are in place. Tom talks briefly about the development of new technologies such as number plate and facial recognition and smart cities, and the potential implications of free public space shrinking and of these technologies being exploited for nefarious purposes.

    23.30-end

    Todd wonders if our attention will return to terrorism post-COVID and if there are any learnings from the experience to help in tackling terrorism. Tom says public focus may have moved away from terrorism, but it hasn’t gone away, especially right-wing and Islamist extremism. He agrees that the pandemic has had a ‘slightly depressing’ effect on terrorism and that the threats are likely to re-emerge as significant as they were pre-pandemic. Todd brings Tom back to the central premise of his book, that a human rights based approach to tackling terrorism is key. Tom agrees that counter-terrorism and public health are hard, and that there will always be contention and disagreement. A human rights approach helps resist the goals that terrorist organisations are seeking to achieve. It is a more measured and careful way of tackling the problem.

    Further links and references

    A reminder that by using the link above and the following codes, you can enjoy a substantial discount on Tom's book:

    55% discount on the Hardback version using the code: P995PARKERHC
    30% discount on the eBook version using the code: P995PARKEREB

    Other links

    Martha Crenshaw (political scientist and prominent terrorism researcher)
  • In Episode 6 of Series 6, Todd is joined by Professor Aoife Nolan, to discuss the impact of the Covid-19 pandemic on the human rights of children. Aoife is Professor of International Human Rights Law and Co-Director of the Human Rights Law Centre in the School of Law at the University of Nottingham. She is also Vice-President of the Council of Europe European Committee of Social Rights and has worked with a range of civil rights organisations.

    0.00-04.10

    Todd begins by asking Aoife to outline the impact of the pandemic on the human rights of children. She points to the wide-ranging global impact of the pandemic and associated lockdowns in terms of the health and survival of children, and identifies a range of issues including education, food access, mental health, increased levels of child abuse, the impact of poor housing, loss of social contact and increased risk of online harm. All of these directly affect children’s rights.

    Aoife explains that the pandemic has had a hugely unequal impact on children from different backgrounds and living in different situations. She adds that this has entrenched existing inequalities. Unaddressed, she concludes, this will have an impact on the future life-course of some children.

    04.10-08.18

    Todd moves on to focus on the actions of governments during the pandemic and the extent to which they were compatible with the rights of children.

    Aoife points to the 1989 UN Convention on the Rights of the Child, which recognises that restrictions on human rights may be necessary in times of crisis, but which also sets limits on the exercise of those powers.

    Todd wonders whether in the light of criticism from anti-lockdown groups, governments have responded to the crisis in an appropriate way. Aoife makes the following points:

    There have been a wide range of measures in different states
    In the UK there have been positive measures, but also shortcomings in terms of food and support for families
    Some governments have used the crisis to push long-standing agendas not consistent with child rights, for example in relaxing obligations to children in care

    08.18-11.36

    Aoife gives an example of how the pandemic has been used to weaken various statutes related to the protection of children in social care. She explains how changes have been made in relation to the duty of Local Authorities towards education, health and social care, and notes that these changes have been reversed as a result of pressure on the government. She says there are concerns that Covid-19 was being used as cover for mass de-regulation of social care.

    She mentions that the UK’s Department for Education was found to have acted unlawfully in scrapping a range of rights for children in care. A child rights impact assessment carried out by the department, which signed off the measure, showed a lack of understanding of child rights. She points out that this move was later reversed.

    11.36-13.45

    Todd moves the discussion to the USA, which has not ratified the UN Convention on the Rights of the Child. He points to differences in approach between the Trump administration and the Biden administration and asks Aoife to comment on progress towards getting children back into school.

    Aoife points out that approaches to education are very much state driven, and, although not an expert on matters relating to US education, notes that:

    Schools cannot re-open without adequate planning, safety provision, and funding
    Even though the USA is not a party to the UN Convention, individual state constitutions include provisions for the protection of children’s rights

    13.45-16.55

    Aoife reviews the situation in South Africa around school closures and re-opening, and says the net effect has been to amplify inequalities within the country:

    The effect of closures was to move education online, but large numbers of children did not have access to the internet
    There were issues around re-opening in terms of infrastructure shortcomings and lack of support for school re-opening

    As a result, re-opening took place in schools that were not Covid-safe, with implications for health, provision of school meals, and education.

    They move on to discuss the terrible situation with Covid-19 cases and deaths in India and what Aoife thinks about the impact on children’s rights. She suggests that, beyond concerns related to Covid infections, the health crisis and associated lockdowns have interfered with the normal processes of vaccinations and health interventions, as well as in education.

    16.55-18.12

    Asked about the response of the Council of Europe, of whose European Committee of Social Rights she is Vice-President, Aoife reports that the Council has identified worrying trends in respect of:

    School closures
    Lack of access to healthcare services
    Mental health issues
    Loss of social contact

    18.12-20.23

    Todd asks about the work of activists and advocacy groups in mitigating the impacts of the pandemic. Aoife says she has been impressed by the large amount of energetic work and advocacy by both regional and international groups, and notes:

    The strength of the ongoing discourse on children’s rights globally
    The UN policy brief The Impact of COVID-19 on Children, which is evidence of the traction of children’s rights
    Children’s rights currently occupy a higher profile than those of other affected groups

    20.23 – 23.10

    Todd asks about priorities for the post-Covid era.

    In Aoife’s view there must be meaningful steps to get children’s rights to the centre of the recovery effort and policy planning
    She warns of the potential austerity cuts that may follow in the post-Covid phase and predicts that they will be catastrophic for children’s rights
    There is a need to acknowledge and deal with the structural inequalities in society, which are exacerbated by the pandemic, and which impinge directly on children’s rights

    23.10-end

    Todd asks Aoife to reflect on the importance of the voices of children themselves. She believes children have been excluded from the decision-making process. Their voices and views have been ignored by governments, and this is contrary to human rights law. There is an urgent need for this situation to be redressed.

    However, the issue of children’s rights is part of a wider concern for human rights, she concludes. There is a need for “inter-generational solidarity.” This requires children’s rights groups to work alongside disabled groups, older people, women’s groups and others to bring about change.

    Further Reading

    Protecting the most vulnerable children from the impact of coronavirus: An agenda for action, UNICEF

    Policy Brief: The Impact of COVID-19 on Children United Nations 2020

    COVID-19 Statement United Nations Committee on the Rights of the Child, April 2020

    Statement on COVID-19 and Social Rights European Committee of Social Rights, April 2021

    A Child Rights Crisis A. Nolan, LRB Blog, May 2020

    Should Schools Reopen? The Human Rights Risk - An Advisory Note for the Independent SAGE, A. Nolan, May 2020

    Of Limitations and Retrogression: Assessing COVID-19’s Impact on Children’s ESC Rights A. Nolan & J. Bueno de Mesquita, May 2020

    Covid-19 Protocol, R (Article 39) v Secretary of State for Education [2020] EWCA Civ 1577, 24 Sept 2020

    Equal Education & Others v Minister of Basic Education & Others 2020 ZAGPPHC 306 (17 July 2020)

  • In Episode 5 of Series 6, Todd is talking to Mahi Ramakrishnan. Mahi is a refugee rights activist and runs a non-profit organisation, Beyond Borders Malaysia, which works to promote and protect the rights of refugees and stateless persons in Malaysia.

    00.00 – 02.55

    Todd begins by inviting Mahi to talk about refugee issues in South-East Asia. She explains that there are approximately 500,000 refugees in Malaysia and that:

    around half are from Myanmar
    the Rohingya make up the largest refugee group
    none of the refugee groups have any legal status in Malaysia, with no rights to work, education or health care, and are reliant on UNHCR for support
    Malaysia has not ratified the 1951 Refugee Convention

    02.55 – 05.25

    Todd asks Mahi to say more about the situation facing Rohingya. She says she visited Myanmar in 2017 and describes her shock at the lack of racial unity in the country. She explains that:

    prior to the 1960s the Rohingya were well integrated
    the situation changed with the installation of the military government in the 1960s
    there followed mass migrations of Rohingya from Myanmar to Malaysia in the 1970s

    (Note: The Rohingya were declared stateless by the ruling Military Junta in 1982)

    Mahi says that there are currently 3 to 4 generations of Rohingya in Malaysia and points to 3 specific issues for them:

    They have forgotten their culture
    Lack of access to education means that they occupy the lowest social classification in Malaysia
    Their community is characterised by a deep-seated patriarchy

    05.25 – 09.50

    Todd asks Mahi to expand on the issue of patriarchy and refers to her documentary film, Bou (Bride) which is about the trafficking of young girls into Malaysia to be child brides.

    Mahi points out that while the buying of child brides is not exclusive to the Rohingya it is a central part of their patriarchal culture. She reports on the purchase of Rohingya child brides by men, via traffickers and suggests that parents are complicit partly because marriage offers a semblance of security to the girls given their lack of legal status (in Myanmar). The girls are in a precarious position, abandoned when they become pregnant and/or subjected to domestic violence and abuse.

    Patriarchy is evidenced in the following ways:

    young Rohingya girls are preferred by the men over Malaysian girls because they will be more obedient
    girls are not allowed to attend school
    parents control children
    husbands control wives

    However, she notes that women are beginning to organise and stand up for themselves and their rights, despite negative reactions from men.

    09.50 – 17.15

    Todd moves on to ask about the impact that Covid-19 has had on the refugee community in Malaysia. Mahi refers to the continuous influx of migrants and refugees, which has led to a xenophobic reaction within Malaysia, initially directed at the Rohingya but now more widespread, directed towards all refugees and migrant workers. She refers to existing socio-economic tensions along ethnic lines within the country and the focus of that discontent on the refugee community, and points to the lack of a comprehensive health care plan to protect all groups against the virus, especially the refugee/migrant community. She says that lockdowns and movement controls have made it very difficult for refugees and undocumented workers to travel for work.

    When asked about infection rates, Mahi reports that the majority of COVID infections are within the immigrant communities largely as a result of high density living conditions and the impossibility of social distancing at home and at work. She also notes high levels of infections in detention centres.

    Todd and Mahi agree that this feeds into a narrative that migrants are “bad” and need to be sent home.

    However, Mahi argues that the problem lies with labour agents and corruption, which lead to the exploitation of migrant workers, who lose their documentation and are forced to live and work in high-density, unregulated environments.

    17.15 – 20.57

    Todd’s next question concerns the work of UNHCR, the World Health Organization and the International Labour Organization, and whether Mahi sees any evidence of them working together for the benefit of refugees. She assumes that they have ongoing conversations but points to the need for them to work more closely with grassroots organisations and community leaders.

    She goes on to outline the work of Beyond Borders Malaysia.

    The principal aim is to give refugees a voice, using art and performance as a vehicle, and she references the Refugee Festival in July 2021, which is used as an advocacy tool. The organisation is involved in discussions with lawmakers regarding basic rights to health care, education and work. It undertakes projects like the Livelihood Initiative, which involves women cooking food for sale and sharing in the profits.

    20.57 – 26.45

    Todd asks how the Festival has been impacted by the pandemic. Mahi notes a number of difficulties:

    the lack of freedom/requirement for permits to hold events at any time
    the backlash against migrants, which frightened some off from participating

    Mahi explains that in 2020 the Festival went online, and while that presented opportunities to reach a wider audience and involve more people from elsewhere including the Kurdish-Iranian journalist Behrouz Boochani, many refugees were afraid to take part. To mark this fact, Mahi had a fixed camera on an empty chair during a panel discussion. Mahi has passed the directorship of this year’s festival to a refugee artist and hopes restrictions will be lifted and enable it to take place in a physical space.

    26.45 - end

    Finally, Todd asks Mahi about signs of hope for the future.

    In her view, the current Malaysian government is very difficult to work with. However, she says she will try to use existing legislation to allow refugees to work. She will continue to try to persuade the existing government even though the conversations are difficult.

    Further links

    Human rights: reason to be joyful - Rights Track episode with Professor William Paul Simmons about marginalised groups
    Refugees: why hard times need hard facts - Rights Track episode with Gonzalo Vargas Llosa, UNHCR
  • In Episode 4 of Series 6, Todd is in conversation with Alison Brysk, Professor of Global Governance at the University of California. Alison’s recent work has focused on the global impact of Covid-19 on human rights. In this episode, she reflects on the disproportionate impacts of the virus and explains why she believes that human rights are an integral part of the pathway out of the pandemic.

    00.00-03.58

    Todd begins by asking Alison to reflect on the idea of Covid-19 as a threat to democracy and human rights.

    Alison starts by talking about a citizenship gap, that is, people “out of place” physically, socially or in terms of status, for example:

    Refugees
    Migrants
    Internally displaced people

    She argues that Covid-19 has intensified that threat, particularly for vulnerable groups who have become subject to increased levels of mobility tracking and surveillance. She refers to examples from Brazil, India and the treatment of Native Americans in the USA.

    03.58-07.16

    Todd moves on to discuss concerns around the way governments may be using the Covid-19 pandemic as an excuse to restrict migration, human rights and curb civil liberties. Alison says the first step is to focus on the interdependence of human rights. She points out that vulnerable people are being made scapegoats during the pandemic deflecting attention away from the real issues. She points to a selective approach to some civil rights over others, referencing threats to property and economic activity as receiving the most push back in California, for example.

    07.16-11.00

The discussion turns to the debate surrounding privacy rights - for example, the ongoing debate in the UK around the requirement of vaccination passports for travel and how that might affect identity rights. Given that this will create individual digital footprints, Todd asks how concerned we should be.

In Alison’s view, that depends on the functioning of the health care system. In well-established systems, such as those in Europe and the global North, it could be a problem, but there are also well-established mechanisms for monitoring privacy.

In most of the world, the situation is different. Access to this kind of health care does not exist. Health disparities, and therefore the lack of, for example, a Covid vaccination passport, could create problems for:

- Those seeking employment
- Economic migrants
- Refugees seeking asylum

Some countries stand out as Covid-19 champions, for example New Zealand and Taiwan, where there have been increases in state power but where there are mechanisms for control.

    11.00-15.12

    Todd asks about the notion of patriarchy and how it intersects with the pandemic. Alison identifies three areas:

    1. Production

- Two-thirds of front-line workers are women, and they have been disproportionately exposed to Covid-19.

- Female domestic workers comprise a large percentage of migrant labour and have been left vulnerable to the virus.

    2. Public space. Governments have used concerns over social distancing and the spread of the virus to restrict peaceful assembly.

3. Reproduction. Many governments have taken advantage of the pandemic to limit access to reproductive health, for example contraception and abortion. The USA and Poland are cited.

    15.12-20.12

Todd points to a marked increase in reports of domestic abuse against women during the pandemic. Alison refers to work carried out by UN Women, and the data they have collected, and WomanStats, a project she works on. She finds:

- An increase of around 30% in reports of domestic violence globally
- The more severe the lockdown, the higher the level of abuse
- The impacts relate not only to being physically locked in with the abuser but also to being unable to access support

    Examples are given from France and Spain where new ways have been developed for women to communicate and seek support where they are unable to make use of established support mechanisms.

    20.12-end

    The interview closes with Alison reflecting on the impact of the pandemic on her home state, California.

- Case rates are stabilising, with most areas going down through the tiers
- 25% of adults have had access to at least one dose of the vaccine
- Some issues relating to the vaccination programme have been addressed:
  - Bottlenecks in the supply chain of the vaccine
  - Issues of prioritisation in terms of who was vaccinated
  - Issues of distribution
- There is evidence of pandemic fatigue, especially on college campuses where compliance is low
- Elderly and middle-class communities have shown most compliance
- There has been resistance to vaccination in three areas:
  - A small number of neo-liberal conservatives
  - New Age groups advocating alternative medicine
  - Members of the Hispanic population, which makes up 40% of the population and is in the most vulnerable occupations, although influential individuals within the community have been working to encourage vaccine uptake

    Further reading

    The Future of Human Rights Alison Brysk, Polity Press, 2018

    Why feminism is good for your health Alison Brysk and Miguel Fuentes Carreno, New Security Beat, 2020

    A Pandemic of Gender Violence in the COVID Era Audio discussion, The Wilson Center

    When “Shelter-in-Place” Isn’t Shelter That’s Safe: A Rapid Analysis of Domestic Violence Case Differences during the COVID-19 Pandemic and Stay-at-Home Orders. M. Mclay 2021

    Essential and Expendable: Gendered Labor in the Coronavirus Crisis Megan Neely, 2020

    The Color and Gender of COVID: Essential Workers Not Disposable People Catherine Powell, 2020

    UNFPA. Impact of the COVID-19 Pandemic on Family Planning and Ending Gender-Based Violence, Female Genital Mutilation and Child Marriage. Interim Technical Note: 2020

Addressing the Impacts of the COVID-19 Pandemic on Women Migrant Workers Guidance note, UN Women, 2020

    COVID-19: Emerging Gender Data and Why It Matters, UN Women Data Hub, 2020

  • In Episode 3 of Series 6, Todd is joined by David Fathi, Director of the American Civil Liberties Union National Prison Project to discuss the impact of Covid19 on prisons and prisoners in the USA.

    00.00 – 04.40

    David provides an overview of the prison system in the USA. The country has:

- the largest prison population in the world, at over 2 million people
- the highest per capita rate of prisoners, at between 5 and 10 times the rate for countries like Canada, England and Wales, and even authoritarian countries like China

Incarceration in the United States is highly decentralized across 51 different prison systems. Every state has its own prison system, separate from and running alongside the federal prison system, and within that, private, for-profit prisons account for around 10 percent of the national prison population.

There are concerns relating to privately run prisons, which have led to the Biden administration removing private companies from operating federal prisons. Concerns raised include:

- lack of oversight
- poor quality rehabilitation services and programming
- low levels of safety and security

    04.40 – 06.07

    The conversation moves on to discuss rehabilitation. David notes that rehabilitation has a very low profile in the U.S. prison system. The extensive use of solitary confinement works contrary to rehabilitation.

    06.07 – 09.33

    David says the drivers of the prison population date back to the days of slavery, structural racism and the Jim Crow laws. He points to the post-Civil War period in the US when there was a deliberate policy of incarcerating black people. He adds that its legacy exists today in the fact that a black male is 6 times more likely to be incarcerated than a white man.

The penal system and culture is described by David as punitive rather than restorative:

- average sentences are longer than in comparable democracies
- early termination of sentences is less likely
- many more prisoners serve life sentences (1 in 11 of all prisoners)
- few efforts are made to rehabilitate and release

    09.33 – 12.00

    The US is also amongst the worst countries in terms of its use of solitary confinement. There are significant numbers of prisoners on death row who are kept in permanent solitary confinement often for over 10 years. It is estimated that over 100,000 prisoners are held in solitary confinement on a daily basis, a number which has increased during the COVID pandemic.

    12.02 – 18.30

Todd moves on to ask about the early release from prison of Michael Cohen, President Trump’s personal lawyer, as an example of prominent individuals gaining release by citing medical vulnerability to Covid19. David agrees that affluent/prominent people are treated differently by the system, but also contends not enough prisoners have been released as a result of Covid19. This does not make sense, he says, because prisons are hotspots of Covid19 infection due to:

- large numbers of inmates
- high population density
- poor ventilation
- poor sanitation
- an ageing population that is therefore more vulnerable to Covid19

Although data show one in five inmates have tested positive, and anecdotally ethnic minorities have been disproportionately impacted, there are no data on whether or how BAME prisoners have been adversely affected because those data are not recorded. David says it’s hard to see this data omission as anything other than intentional.

    18.30 – 21.30

The situation is similar in other detention centres, immigration centres, jails, etc., but the problems of control are enhanced by the rapid turnover of people through those facilities. Todd asks how successful the ACLU has been in its efforts to get prisoners released because of Covid. David says they have had:

- significant success in getting people released from detention centres due to medical vulnerability to Covid19
- very little success at getting vulnerable inmates released from prisons
- some success in terms of mitigation of infection risk in prisons

    30.00-end

    Todd asks about the prospects for a reduction in the size of the prison population. David says the problem is the decentralised nature of the penal system, which works against the ability to bring about reform. This has a parallel in the drive to get all of the population vaccinated against the Covid19 virus, which is also being hindered by the same federated structure. This, he adds, begs the question of where prisoners fit in the priority system for vaccination.

    Further Links from ACLU

Prisoner rights

Statement on ending of private companies running federal prisons

Banking on Bondage: private prisons and mass incarceration

Campaign to end solitary confinement

Report on solitary confinement on death row

Report on 500% increase in solitary confinement under Covid19
  • In Episode 2 of Series 6, Attorney Dominique Day, founder and Executive Director of the Daylight Collective, which seeks to fill the space between the status quo and substantive justice with creativity, diverse voices, and multi-sector approaches and understandings, talks to Todd about how COVID is negatively and unequally impacting the lives and human rights of Black Americans of African descent.

    00.00 – 02.20

    Todd begins by asking Dominique to comment on the dis-proportionate impact of the Covid19 pandemic on people of African descent. She points to significant racial disparities in terms of:

- Who becomes infected
- Who has access to health care
- Differences in outcomes in terms of severe illness and death

    This is seen as an outcome of policies, which exemplify systemic racism at a global and local level.

    02.20 – 05.30

    Todd asks Dominique which factors she sees as playing a key role in the impact of the Covid19 pandemic.

- Whilst racism is not intentional, she sees it as being ingrained into the presumptions and actions of individual decision makers
- In emergency departments this translates into medical bias when doctors are working under stress
- As evidence she points to research which suggests that medical bias disadvantages people of African descent (and which she discusses in a related webinar)
- Although the data are widely known, her concern is that the issue of systemic racism is embedded in decision making even at the level of the individual clinician

    05.30 – 12.40

    Todd summarises and points out that the reality is that people of African descent in the USA have a markedly higher mortality rate, which is linked to a long history of systemic racism.

- Dominique points to “social conditioning” in deciding which lives matter. By way of example she points to the decision to withhold the distribution of the Pfizer vaccine on the African continent and argues that this is a decision made along the lines of race
- In terms of the impact of the pandemic, there are parallels within the fields of education, the economy and health, where individuals make decisions on the basis of a bias which reflects systemic racism within society
- She references an email circulated within NYU hospital in New York where the onus to make rapid life and death decisions was placed on doctors working in the emergency department, without supervision and review. Given the intense stress doctors were under, those decisions were more likely to be influenced by bias (unwitting or not)
- Health care providers showed no willingness to discuss the research data predicting the disproportionate impact on black and brown communities and identifying systemic racial bias; individual doctors were prevented from commenting publicly
- Warnings of racial bias were ignored and continue to be ignored

    12.40 – 20.50

Todd moves on to examine differences in outcomes for black and white communities in relation to encounters with the police and references the killing of Michael Brown in Ferguson, a suburb of St Louis in the US, in 2014.

- The Ferguson killing follows a common pattern of outcomes for the black community
- A parallel is suggested with respect to the security preparations made for the Black Lives Matter protests in 2020 in Washington
- Comparisons have been made between this protest and the insurrection staged by pro-Trump militants. Todd argues that any move to suggest the two events were similar creates a false equivalence

    Dominique points out that:

- In terms of policing, there was a higher level of perceived threat and a heavier response during the Black Lives Matter protests than for the recent march on the Capitol in Washington
- Dominique argues that the former was a racialised response conditioned by acceptance of white supremacy and a long history (in the USA) rooted in slavery and exploitation. She references the origins of racial policing in the USA as being to protect property from the actions of slaves.
- She identifies a “legacy mindset”, a baseline of white supremacy, where white people expect to be treated differently (better) than black people, a mindset which is a major barrier to progressing racial justice and equality.

    20.50 – 23.45

    The conversation returns to the pandemic and vaccination programs in the USA. Dominique has a number of concerns.

- Distribution is a major issue
- More thought needs to be given to prioritising who gets the vaccine first
- The role of essential workers, drivers and home helpers, who have been disproportionately infected, needs to be acknowledged when prioritising vaccination programs
- There is also a need to talk about racial equity in the delivery of vaccination programs

    23.45 – 26.00

    Todd asks why significant numbers of African Americans are resistant to taking the vaccine.

- In Dominique’s view there is a distrust in black communities which in part dates back to the infamous Tuskegee experiment, where black people were exploited in the name of medical science
- In order to increase the uptake of the vaccine in black communities, there must first be an understanding that there is a legitimate scepticism based on historical fact

    26.00 - end

    Todd ends by asking Dominique what she is hoping to see from the new government administration in terms of the issues she has discussed in this episode. In terms of the response to the Covid19 pandemic, she would like to see a critical re-evaluation of responses to the pandemic, and in particular the role of systemic racism and its impact on African American communities.

    Useful links

Racial Bias in the time of Covid19, the Time is Now A webinar hosted by Dominique Day

Millions of black people affected by racial bias in healthcare algorithms Heidi Ledford in Nature, October 2019

NYU Langone tells doctors, “Think more critically” about who gets ventilators Shalini Ramachandran and Joe Palazzolo in Wall Street Journal, 31/03/2020

40 years of Human Experimentation in America: The Tuskegee Study Ada McVean, McGill University, January 2019

    Additional references

    Assessing differential impacts of COVID-19 on black communities; Gregorio Millet et al, Annals of Epidemiology, July 2020

Implicit Bias in ED overcrowding, is there a connection? Loner and Rotolli in EM Resident, October 2018

    The effect of race and sex on recommendations for cardiac catheterization Schulman Berlin et al, New England Journal of Medicine February 1999

    Implicit racial/ethnic bias among health care professionals and its influence on health-care outcomes: a systematic review Chapman et al, Journal of Public Medicine December 201

  • In Episode 1 of Series 6, Todd is talking with Dr. Nina Ansary, an award-winning Iranian-American author, historian, and women's rights advocate. Nina is the UN Women Global Champion for Innovation, a Visiting Fellow at the London School of Economics Centre for Women, Peace & Security, and author of Anonymous Is a Woman: A Global Chronicle of Gender Inequality. They discuss the impact of the COVID19 pandemic on women’s rights and on the citizens of Iran.

    00.00 - 05.06

    Todd begins by asking Nina for her reflections on the impact of the Covid 19 pandemic on Iran. She comments that:

- Covid 19 has served to exacerbate existing economic problems, and far from supporting the population, the regime has continued its crackdown on advocates for freedom and closer ties with the West
- The health service is under severe strain, not helped by the impact of sanctions, which has resulted in shortages of medical equipment and medicines
- Overall, Iranians now feel more isolated than ever
- While there are numerous organisations engaged in lobbying on human rights issues, the international community could do more
- The impact of Covid 19 has pushed human rights issues into the background

    05.06 – 06.55

    Todd moves on to ask Nina for her take on the existing nuclear power deal and US sanctions.

She argues that while the sanctions are not the cause of Iran’s economic difficulties, they have accelerated the impact of economic mismanagement and corruption, which has fallen on the people and not the regime or its leaders.

    06.55 – 11.05

The discussion moves on to the impact of Covid 19 on women’s rights. Prior to the pandemic, Nina says:

- The advancement of women’s rights was moving at a ‘glacial’ pace
- Discrimination was present in a wide range of economic and political activity
- Stereotyping of women was commonplace

The effect of the pandemic has been to exacerbate inequalities, expose vulnerabilities, encourage discriminatory practices, and set back the advancement of women’s rights, in particular for those who are most vulnerable and marginalised. Nina notes that women have been losing employment at a disproportionate rate as a result of Covid 19. She concludes by referencing the Beijing World Conference on Women 1995 and the lack of progress made since then.

    11.05 – 15.05

When asked about the impact of the pandemic on women in the USA, Nina refers to existing reforms which have been too narrow and the need to “move beyond the reforms of the past” to create a more equitable future. Todd then asks whether Nina foresees a move to resurrect the Equal Rights Amendment (ERA) in the USA.

    In reply she points out that women in the USA are not united around this topic and that even within the ERA movement there was/is a tendency to fragment into different groups which is a limiting factor and an obstacle to reform.

    15.05 – 19.40

Todd moves on to discuss Nina’s work at the UN, where she was appointed as a Global Champion for Innovation in 2019. Her focus is to drive transformational change by:

- Creating more opportunities for women and girls, especially in technology and entrepreneurship
- Raising awareness of barriers to progress
- Highlighting women who have made significant contributions in those fields but have been overlooked, downplayed or ignored. Nina refers to Dr. Jessica Wade, who has been challenging these stereotypes by posting the names of women who have made significant contributions in the field of science
- Working towards equality in participation, representation and opportunity in those fields
- Challenging discrimination and stereotyping which serve to hold women back. Here she references the infamous post by Google engineer James Damore, whose internal memo suggested that women were biologically less capable of working in the fields of science and technology

    19.40 – 21.25

Todd wonders whether it is time for a feminising of the curriculum, in line with the decolonising the curriculum movement. Nina refers to gender mainstreaming as a major tool in moving away from entrenched stereotypes and unequal trajectories of development.

    21.25 – end

    Todd brings the discussion full circle by asking for Nina’s thoughts on the current situation in Iran and to comment on the motivations of the state in its crackdown on women activists. Nina describes a regime that feels threatened by powerful women and enacts discriminatory policies in law as a means of enacting coercive control over women.

- She cites the example of the incarceration of prominent lawyer Nasrin Sotoudeh for representing women’s rights activists. In this way women are being denied access to legal defence by the state
- Far from addressing what she sees to be the legitimate concerns of the Iranian people, the regime is instead expending large sums on religious endowments and the funding of foreign terrorist organisations

    Additional references

    Impact of COVID on Iran

    https://www.worldometers.info/coronavirus/country/iran/

    https://www.brookings.edu/opinions/iran-the-double-jeopardy-of-sanctions-and-covid-19/

  • In this second of two special episodes of The Rights Track, Todd reflects on what has been learned about modern slavery from our podcast and its contribution towards UN Sustainable Development Goal 8.7 to end global modern slavery by 2030. This episode features interviews from Series 3-5 of The Rights Track, which together form a library of 26 fascinating episodes and some 13 hours of insightful conversations with researchers from the University of Nottingham's Rights Lab, and a stellar line-up of people working on the ground to combat slavery from NGOs, campaigners and activists, authors, historians, economists, businesses and policymakers.

    Episodes featured

    Blueprint for Freedom: ending modern slavery by 2030 Zoe Trodd, Rights Lab

Slavery-free cities: why community is key Alison Gardner, Rights Lab

    Life after slavery: what does freedom really look like? Juliana Semione, Rights Lab

    Forced marriage and women's rights: what connects SDGs 5 and 8.7? Helen McCabe Rights Lab and Karen Sherman, Author

    Voices of slavery: listen and learn Minh Dang and Andrea Nicholson, Rights Lab

    The useable past: what lessons do we learn from history in the fight to end slavery? David Blight and John Stauffer

    Face to face: researching the perpetrators of modern slavery Austin Choi-Fitzpatrick, Rights Lab

    How is the UN working to end modern slavery? James Cockayne, Rights Lab and Lichtenstein Initiative for Finance against Slavery and Trafficking

    Strengthening laws and ending modern slavery: what connects SDGs 16 and 8.7? Katarina Schwarz, Rights Lab

    Fast fashion and football: a question of ethics Baroness Young of Hornsey, All Party Parliamentary Group on Ethics and Sustainability in Fashion, and the All-Party Parliamentary Group on Sport, Modern Slavery and Human Rights

    Unchained supply: eradicating slavery from the supply chain Alex Trautrims, Rights Lab

    The business of modern slavery: what connects SDG 8.7 with its overarching SDG8? John Gathergood, University of Nottingham and Genevieve LeBaron, University of Sheffield

    Walking the supply chain to uphold human rights: what connects SDGs 12 and 8.7 Elaine Mitchel-Hill, Marshalls plc

    Bonded labour: Listening to the voices of the poor and marginalised Anusha Chandrasekharan and Pradeep Narayanan, Praxis

    Fighting slavery on the ground: what does it look like? Dan Vexler, Freedom Fund

    Creating stronger places for child rights: what connects SDGs 8.7 and 11? Ravi Prakash, Freedom Fund consultant

Health and slavery: what connects SDG 3 and SDG 8.7? Luis Leão, Federal University of Mato Grosso, Brazil

    Global partnerships to end modern slavery: what connects SDGs 8.7 and 17? Jasmine O'Connor, Anti Slavery International

    The Congo, cobalt and cash: what connects SDGs 9 and 8.7? Siddharth Kara, Carr Center for Human Rights Policy, Harvard University

  • In this first of two special episodes of The Rights Track, Todd reflects on what has been learned about the advancement of human rights from our podcast since it was launched in 2015.

    Episodes featured

    How is the church leading the fight to end modern slavery? Rt Rev Alastair Redfern

    Crunching numbers: modern slavery and statistics Sir Bernard Silverman

    Eye in the sky: rooting out slavery from space Doreen Boyd

    Hating the haters: tackling radical right groups in the United States Heidi Beirich

    Picture this: using photography to make a case for environmental rights Garth Lenz

Refugees: why hard times need hard facts Gonzalo Vargas Llosa

    In the minority: the right to identity, culture and heritage Clare Thomas

    Evidence for change: the work of Human Rights Watch Iain Levine

    Advancing human rights the Amnesty way Meghna Abraham

    Islam and the West: questions of human rights Akbar Ahmed

    Pursuing justice: what role for research evidence? Dixon Osburn

    Women and Trump: a question of rights? Monica Casper

    Gay rights - how far have we come? Richard Beaven

    Does America need a Truth Commission? Karen Salt and Christopher Phelps

Human rights: reasons to be joyful William Paul Simmons

    Making human rights our business Shareen Hertel

    How can statistics advance human rights? Patrick Ball

    A matter of opinion: What do we really think about human rights? James Ron

    Beyond GDP: a measure of economic and social rights Sakiko Fukuda-Parr

    Modern day slavery: counting and accounting Kevin Bales

    How do we count victims of torture? Will Moore

    Do NGOs matter? Amanda Murdie

    Are we better at human rights than we used to be? Chris Fariss