Episodes

  • In this episode of Always Listening we go beyond our discussion on how AI assistants work to instead understand how the current digital landscape is shaping us as a society.

    Our lives are becoming increasingly transparent because of the information collected about us, but the way that this information is used remains opaque. In this interview with SAIS researcher Dr Mark Cote we discuss AI assistants from a humanities perspective, from big data inferring what you want before you want it, to the business case for why personalisation is what users want. We discuss the tension between the ethics of inferences and their market value.

    Listen to learn about:

• Where data is coming from and how inferences are made.
    • How data accumulation is affecting every area of our lives.
    • How the accumulation of data over time creates distinct new market opportunities.
    • The disproportionate impact of inferences made about groups of people.
    • How far data protection laws go in protecting us from inferences drawn from big data.

    We would love to hear your thoughts and comments on this podcast episode!

    Tweet us @SecureAI_SAIS

    Connect with us on LinkedIn SAIS project

    Visit us https://secure-ai-assistants.github.io

The relationships we build with our voice AI assistants have been observed with great interest by the scientific community. These little machines have names, we talk to them like humans, and we trust them to do tasks for us. At the same time, there is an inherent limit to what they can do for us, and we may decide not to trust them in certain situations. There are times, for example, when they have shared false information.

    In this episode we discuss how accidental misinformation, and purposeful disinformation can be shared via an AI assistant. We look at the psychology behind the relationships we build with our assistants and find out about the SAIS exhibition that examines this through soundscape and installation.

    Subjects in this episode:

• How false information can be spread via an AI assistant.
    • The difference between misinformation and disinformation.
    • The psychology behind our relationships with voice AI assistants.
    • The future of AI assistants according to industry experts, including Microsoft and Securys.
    • The one-of-a-kind exhibition about our relationship with AI, by the SAIS project and Cellule Studio.

AI: Who’s looking after me? is a free exhibition and events series at Science Gallery London which explores artificial intelligence and its impact on our lives. The SAIS team have worked with Cellule Studio to create an evolving soundscape inviting us to question our relationship with AI assistants, how and where we use our voices, and the value we place on them. The exhibition opens in June 2023. You can find out more on the SAIS website.

    If you would like to find out more about SAIS visit us on https://secure-ai-assistants.github.io/

    Tweet us @SecureAI_SAIS

    Find us on LinkedIn SAIS project

    Contact the show producers, Helix Data Innovation, on [email protected]


  • In today's connected world it is common to be asked for your data, and users may consent without any understanding of how their information is being used. Although this is often standard practice, knowledge about how your data is being used should be made available through privacy policies, as it is a legal requirement. However, all too often privacy policies are overlooked, inadequate or simply not made available by the service provider. How often do data privacy inconsistencies occur in voice AI Assistants? This is the ‘black box’ challenge that the SAIS research team have put their minds to - and the result was the Skillvet project.

    In this episode we take a fresh look at privacy in the AI Assistant ecosystem. We discuss how recent privacy rules like GDPR are affecting businesses and users, and consider what more can be done to protect users. And we will find out about the key challenges for users in understanding and controlling how their data is used.

    As well as this research, in this episode we discuss:

• What data does the AI assistant hold about you?
    • How can you control the use of your data by skills and the AI assistant ecosystem?
    • How transparent are skills about how they use data, and how does the Skillvet tool help us assess this?

    Find out more about Skillvet in our blog about Improving transparency in AI Assistants on our website.

    We would love to hear your thoughts and comments on this podcast episode!

    Tweet us @SecureAI_SAIS

    Connect with us on LinkedIn SAIS project

    Contact the show producers, Helix Data Innovation, on [email protected]

  • In the first of our four-part podcast series from the SAIS research project, we explore how the voice AI Assistant system works, with some surprising insights along the way. From first saying the ‘wake-up word’ through to the action that is taken in the device: there is a lot of complexity in the underlying technology ecosystem to make that happen. Listen to find out how your voice commands are interpreted, where exactly the decision for action is made and who makes it.

    We also answer that crucial question: Is my AI Assistant always listening?

    In this episode we explain:

• What happens from the ‘wake-up word’ and command through to action in voice AI assistants, using Amazon Alexa as an example.
    • What a ‘skill’ is, the difference between native and third-party skills, and why this matters.
    • Where decisions are made and where the data is sent: many believe it all happens on the device, but this is not the case.
    • Is my AI assistant always listening and recording my conversations?

    We would love to hear your thoughts and comments on this podcast episode!

    Tweet us @SecureAI_SAIS

    Connect with us on LinkedIn SAIS project

    Contact the show producers, Helix Data Innovation, on [email protected]