Episodes

  • In the sixth episode of Actually Interesting, The Spinoff’s monthly podcast exploring the effect Artificial Intelligence has on our lives, Russell Brown looks at the draft algorithm charter, the government's commitment to transparent and accountable use of AI.


    In the Star Trek Voyager episode 'Critical Care', the Doctor – well, actually, the mobile emitter that produces him as a hologram – is stolen and sold to an alien hospital ship. There, he discovers that the complex computer algorithm that determines treatment, the Allocator, dishes out lifesaving care according to each patient's Treatment Coefficient – which measures not need, but an individual's value to society. To lower-value patients, the computer just says no, without explanation. They die.


    Mandy Henk was home sick herself recently when she watched the episode. And she swiftly recognised that this was a dystopian sci-fi story about a real thing used in the government sector here on Earth, in New Zealand: an operational algorithm.

    We do not dispense medical treatment on the basis of individuals' deemed social value. But we do use algorithms to make a bunch of other decisions: provisioning school bus routes, predicting which young school-leavers are in danger of falling through the cracks, triaging visa applications.


    Henk, the CEO of Tohatoha, the organisation formerly known as Creative Commons NZ, is one of a number of people looking closely at the draft algorithm charter published recently by Statistics NZ. It's the government's most concrete commitment yet to transparent and accountable use of algorithms, AI and other sophisticated data techniques. It's timely.


    "I think it's probably past time," says Henk. "Given the amount of algorithms currently used throughout government, we're probably overdue for a commitment on the part of government to use them in ways that ensure equity and fairness."

    "We have passed the point where we need to have this conversation," agrees data scientist Harkanwal Singh. "It's urgently needed. We need a robust conversation and real action."


    Both Henk and Singh welcome the draft charter as a useful statement of principles – and both believe it needs to be clarified and strengthened. For instance, it commits public entities to "upon request, offer technical information about algorithms and the data they use" – which implies there needs to be someone doing the requesting. But who, and how?

    "That is not clear at the outset," says Singh. "It would be better if the language made it clear. Also, why 'upon request'? Being open by default is much better and creates a culture of accountability. We do not want a repeat of the OIA experience."



    See acast.com/privacy for privacy and opt-out information.

  • In the fifth episode of Actually Interesting, The Spinoff’s monthly podcast exploring the effect Artificial Intelligence has on our lives, Russell Brown discovers that maybe AI has better musical taste than humans. 

    My music streaming service works a whole lot better than it used to.

    There's a reason for that, and it's the decisive tilt that Apple made a few months ago towards algorithmic playlists on Apple Music. Having branded itself on the virtues of human curation – it was only a year ago that Apple CEO Tim Cook lamented the dehumanising effect of Spotify's data-driven approach to curation – Apple seems to be acknowledging that maybe Spotify has it right.

    New Music recommendations have improved markedly under this personal robot curation – like I'm getting what an algorithm thinks I'd like, rather than what someone thinks I should like. But the playlist that's really working for me is Favourites Mix, a rolling weekly selection of things that I have loved (and sometimes forgotten) at some point in the past decade or more. Apple knows what I have loved because for all that time I've been telling it, by uploading Genius data from iTunes to the mothership. Apple had a lot of data to push my buttons with – it just finally got around to using it. It's great.

    It's an example of the power of the aspect of AI we see most in popular culture – the recommendation algorithm. Actually Interesting spoke to Juan Swartz of the Christchurch-based tech company 4th, and Andy Low, general manager of DRM New Zealand, the country's largest digital distributor of music, about the power of the algorithm.




  • In the fifth episode of Actually Interesting, The Spinoff’s monthly podcast exploring the effect Artificial Intelligence has on our lives, Russell Brown speaks to Ben Reid, executive director of the AI Forum, about the role of government in embracing and regulating AI.

    Here's a thing you probably didn't know: if you've sought ACC cover in the past couple of years, your claim was very likely processed by artificial intelligence.

    More precisely, your likelihood of cover was assessed by an algorithm trained on an anonymised dataset of 12 million claims made between 2010 and 2016. The AI system embodies a predictive model that decides whether or not your claim clearly falls within the criteria of the Accident Compensation Act 2001. Most claims do, and are thus swiftly and efficiently approved. Uncertain cases or potential refusals are then handled by human staff. It's designed so the computer literally cannot say "no".
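    That approve-or-defer design can be sketched as a simple decision rule. This is a hypothetical illustration, not ACC's actual system – the function name and the 0.95 confidence threshold are invented for the example – but it shows the key property: the automated path can approve a claim or hand it to a person, and has no "decline" outcome at all.

    ```python
    # Hypothetical sketch of an approve-or-defer triage rule: the automated
    # path can approve or refer to a human, but can never decline a claim.

    def triage_claim(model_score: float, threshold: float = 0.95) -> str:
        """Return 'approve' when the model is confident the claim clearly
        meets the statutory criteria; otherwise defer to human staff.
        By design there is no automated 'decline' outcome."""
        if model_score >= threshold:
            return "approve"
        return "refer_to_human"

    # Clear-cut claims are approved automatically; uncertain cases or
    # potential refusals go to a person.
    print(triage_claim(0.99))  # approve
    print(triage_claim(0.40))  # refer_to_human
    ```

    The design choice matters: because the only automated action is a positive one, the worst the model can do is slow a claim down, not reject it.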

    ACC's system is one of a number of case studies cited in Towards Our Intelligent Future: An AI Roadmap for New Zealand, the new report from the AI Forum New Zealand. The report is a substantial work – 180 pages dedicated to explanations of the key AI technologies, a look at the international AI landscape (which is dominated by two research and investment superpowers, the US and China) – and a polite, repeated request for a national AI strategy.

    The forum made essentially the same call in a report last year and, in a landscape of task forces and working groups, we still haven't seen that strategy. Is government listening? Ben Reid, the AI Forum's executive director, is both diplomatic and optimistic.

    "They're beginning to. Just recently a group of us from the AI Forum presented to the select committee on economic development, science and innovation and that was the first time I think that AI's been considered by Parliament. I think it's a really positive sign."



  • In the fourth episode of Actually Interesting, The Spinoff’s monthly podcast exploring the effect AI has on our lives, Russell Brown speaks to Ana Arriola, general manager and partner at Microsoft AI and Research, about ethics and transparency in tech.


    Ana Arriola's talk at the recent Future of the Future seminar – about intersectionality, surveillance capitalism and the risks of AI – might not have been the stuff of your Dad's (or Steve Ballmer's) Microsoft, but it was actually a great reflection of the way Microsoft thinks now. In recent years, the company has devoted attention and resources to contemplating both the power of its technologies and ways to ensure they help rather than harm.


    Most notably, there's FATE – for Fairness, Accountability, Transparency and Ethics – a Microsoft Research group set up to "study the complex social implications" of AI and related technologies and match those against the lessons of history. This year, FATE published a thoughtful paper on designing AI so it works for people with disabilities, and another on fairness in machine learning systems, which observes bluntly that the problem starts in the way the datasets on which ML systems are trained are curated. The same paper points out that AI design teams often don't know their systems are biased until they're publicly deployed and, to quote one software engineer, "someone raises hell online."


    There's also the company's advisory board AI Ethics and Effects in Engineering and Research (Aether), which last year published a set of six principles for any work on facial recognition – and whose advice has apparently already led Microsoft to turn down significant AI product sales over ethics concerns. The company also publishes a general set of ethical principles for AI.


    And Arriola – whose full job title is General Manager & Partner, AI + Research & Search – has established another group within the company, called ETCH (Ethics, Transparency, Culture and Humanity). It's evident that Microsoft takes this stuff seriously – and that it's about more than simply aiming for diversity in recruitment.


    "So much more," Arriola told me the day before her talk at the seminar. "Diversity and inclusion just means making sure that there's safety and security within any given organisation, but it's really about global intersectionality."



  • In the third episode of Actually Interesting, The Spinoff’s monthly podcast exploring the effect AI has on our lives, Te Aroha Grace explains the Iwi Algorithm.


    At this year's AI conference in Auckland the way technological developments would affect law, business, politics, and our everyday lives was at the forefront of the conversation. But then the conference moved into an entirely new space – the space of te ao Māori.


    That conversation was led by Te Aroha Grace, the innovation officer at Ngāti Whātua Ōrākei, and explored the way Artificial Intelligence can unite New Zealand’s diverse cultures to grow the mana of brand Aotearoa. As part of that work Grace has developed the Iwi Algorithm, a concept designed to embed New Zealand’s unique cultural values at the heart of AI’s decision making. 


     In his own words: “The Iwi Algorithm is the re-understanding of our ancient relationship with emotional, natural and social capital. It prioritises the spiritual, cognitive and physical equilibrium needed for humans to feel connected, inspired and actionable.” 


    “Its premise and paradigm is founded in the ancient human and pre-human modalities of ceremony and ritual, where science, religion and industry are one and therefore indistinguishable. The algorithm enlists the preservation, providence and understanding of the miracle of language, a vibrational force understood in its outright ability to pierce the veil of conscience through the sonic forces created through a voice box. 


    “The Iwi Algorithm magnifies the essential and existential dimensions of a visible and invisible world, whose core is a genius framework of timeless and eternal values left behind by the invisible giants of our past whose shoulders we are privileged to tenant today.”


    According to Grace, this gives us an opportunity to learn from our past and use that to inform the technology that will define our future. 



  • In the second episode of Actually Interesting, The Spinoff’s new monthly podcast exploring the effect AI has on our lives, Russell Brown explores how aware machines really are with Aware Group's head data scientist, Kane O'Donnell.



  • In the first episode of The Spinoff’s new podcast, Actually Interesting, Russell Brown explores the world of A.I. and the way it’s already affecting our lives.


    When you hear the words "Artificial Intelligence" your mind might turn to science fiction – a vast army of robots with aspirations to rule over us – but we already experience AI in our lives every day. When Netflix recommends something we might like, Facebook recognises us in a picture or Spotify builds us a playlist, that's AI at work.


    In the first episode of Actually Interesting, brought to you by Microsoft, I talk to AUT's Mahsa Mohaghegh and Microsoft big data and AI expert Chimene Bonhomme about what AI, algorithms and machine learning actually are – and what they imply for our future.


    We also talk about how the way AI works is a product of the assumptions – conscious or unconscious – of the people who design the rules. (That's what algorithms are: sets of rules.) And the problems that can create when humans are teaching machines about the world.

