Episodes
-
In this episode, I break down the current landscape of Lethal Autonomous Weapons Systems (LAWS) and how this era of exponentially improving AI converges with violent motives. What will the future of war look like in a world that relies on autonomous systems to make life-or-death decisions? And who will we hold accountable when innocent people are killed by these autonomous systems?
I begin by talking about one of the many billion-dollar ties Big Tech companies have with the US military to develop AI-enabled weapons, profiting off tech that specializes in high-precision killing. These same companies are leading the AI revolution with a priority on profits over safety. I then introduce 'Stop Killer Robots', a coalition that advocates for keeping human control over the use of force. Sign their petition here.
There is a real-world test case for LAWS happening right now in Palestine: the Israel Defense Forces' use of AI systems to choose bombing targets, paired with high-precision weapons. I introduce the Future of Life Institute and Toby Walsh, a leading AI expert, who launched an Open Letter that now has over 5,000 signatures from AI experts. The letter calls for immediate action to ban the development of autonomous weapons. Finally, I briefly talk about UCAVs (unmanned combat aerial vehicles) and where autonomous drone weapons are currently heading.
This is not part of the reboot episodes; I just thought this was a really important aspect of future AI risks to brief people on. Share this with people you love and stay hopeful! Change can and will happen. <3
-
The first AI Safety Summit has taken place in the UK. It was held at Bletchley Park, the site of Allied code-breaking during WW2. The summit took place over November 1-2 and consisted of roundtables in which representatives of governments, civil society, and Big Tech came together to share their views on different AI topics. This is a move in the right direction, and it makes me hopeful that collaboration on an international scale is possible.
-
On Monday, the 30th of October, US President Biden signed an Executive Order on Artificial Intelligence. It's the first attempt at legislating the safe use of AI and will set the tone for the upcoming EU AI Act. I read and summarize the order, explaining the overall goal of each of the eight 'principles' it lists. It's really important to stay updated and educated on the development of AI legislation because of its potential to change society.
-
In this episode, I explore key takeaways from the book 'System Error: Where Big Tech Went Wrong and How We Can Reboot' by Rob Reich, Mehran Sahami, and Jeremy M. Weinstein. I specifically look at Chapter 6, 'Can humans flourish in a world of smart machines?'. I compare human intelligence to AI, discuss the control social media algorithms have over people, and consider how automation could cause mass unemployment. I also reference a recent episode of 'Your Undivided Attention' with Joy Buolamwini and the Center for Humane Technology co-founders. I hope you enjoy!
-
A Social Change course I'm taking inspired me to look at AI through a systems lens. In this episode, I share my findings from viewing the development of AI as a system with key parts that directly influence each other. In a sketched-up mind map I made, I highlight key areas: Big Tech, government (the USA and China), society, and the environment. I talk about how these are all interconnected and how power is distributed between them. This has helped me fully understand the current situation in AI development, and I hope it gets you all thinking about possibilities for change. Enjoy!
-
This episode is broken into three parts:
First, I was inspired by the Bloomberg Businessweek article 'AI Is Thirsty' by Clara Hernanz Lizarraga and Olivia Solon, a look into the data centers Big Tech companies run and how each one uses hundreds of millions of litres of water per year. I critically read through the key highlights of the article to understand the issue.
Second, a Guardian article, 'Tech Leaders Agree on AI Regulation but Divided on How' by Johana Bhuiyan. Just last week, the biggest names in tech (Elon Musk, Sam Altman, Bill Gates, Mark Zuckerberg...) met with the American government, behind closed doors, for the first of many meetings on regulating AI. I reiterate the importance of including the working class, not just billionaires, in the decision-making.
Finally, I wrap up by reading through some of my own notes, adding thoughts on trusting Big Tech to do the right thing in AI regulation, the need to focus on becoming far more sustainable, and creating a future we want that also considers rehabilitating the ecosystem.
Thanks for listening; you'll hear from me again soon...
(sorry for my bad name pronunciations)
-
After watching Eliezer Yudkowsky's TED Talk 'Will Superintelligent AI End the World?', I decided to begin by explaining the key problems currently faced in the development of AI. I talk about OpenAI's pledge to solve the alignment problem within four years through its Superalignment research project, led by Ilya Sutskever and Jan Leike. I highlight key points from Ross Andersen's interview/article 'Does Sam Altman Know What He's Creating?'. I hope this episode creates a heightened interest in, and concern for, the future of AI in the world. Thank you for listening!
-
THE ERA OF AI, aka T.E.A. In this pilot episode, I discuss a talk that inspired my interest in AI: 'The AI Dilemma', made by the same people who created The Social Dilemma on Netflix. This episode is me getting a feel for the medium of podcasting. I will have a proper mic next episode!!