Episodes

  • In this episode of This Week in AI, brought to you by the AI Australia podcast, Kobi and Natalie dive into a wide-ranging discussion covering recent political events and technology developments, including:

    The implications of the French and UK election outcomes, and their relation to technology.

    AI's role in the U.S. presidential elections.

    Tech sector developments like Meta's Llama models, and the economic and environmental impact of AI advancements.

    The intersection of AI with job markets, wealth distribution, and universal basic income studies, while also considering the cultural and ethical impacts of technological integration in corporate settings.

    The recent global technology outage caused by CrowdStrike on Microsoft Windows.

    Links:

    The Grimy Residue of the AI Bubble

    Toby Murray: What really happened at CrowdStrike...

    Business Insider: CrowdStrike CEO Has Twice Been at the Centre of Global Tech Failure

    Lawfare: The CrowdStrike Outage and Market-Driven Brittleness

  • This episode of AI Australia features a discussion with Danielle Haj-Moussa, a deep-tech investor at Main Sequence Ventures, and Roo Harris, a partner at Scale Investors. The conversation spans different investment types, the importance of inclusive and ethical AI, and the differences between deep tech and general tech.

    Danielle shares insights into Main Sequence's focus on decarbonisation, industrial productivity, and the commercialisation of groundbreaking research.

    Roo discusses Scale Investors' mission to fund female-led startups across diverse sectors, emphasising the need to support underrepresented voices in innovation.

    Both speakers explore the evolving landscape of AI, the impact of democratising AI through open-source models, and the critical role of diverse and ethical investment in shaping the future.

    We hope you enjoy this thought-provoking conversation as much as we enjoyed having it - we enjoyed it so much we have decided to make this a two-parter and delve even deeper into this topic in our next episode.

    Links:

    Danielle Haj-Moussa (Main Sequence Ventures)

    Roo Harris

    Scale Investors

  • We're back after a bit of a hiatus, and a rebrand - the AI Australia podcast is now brought to you by Mantel Group (formerly a house of brands including Eliiza).

    This was recorded a couple of weeks ago, which is a lifetime in this age of AI, but hopefully you still enjoy the conversation as much as Nat & Kobi enjoyed having it. In this episode of 'This Week in AI', we discuss the continued rapid developments in AI, such as:

    The lack of transparency and accountability within organisations like OpenAI, which has led to significant internal conflicts and questionable practices such as unauthorised voice usage and staff contract clawbacks.

    The broader implications of AI for personal identity, regulation, and the importance of maintaining ethical standards in the industry.

    The global impact of political developments and election outcomes influenced by AI, as well as the real-world consequences of algorithmic manipulation in social media.

    The necessity for greater transparency, robust governance frameworks, and regulatory action to ensure AI technologies are developed and utilised responsibly.

    Look out for our upcoming episode(s?) on how AI is shaping the Venture Capital landscape today.

  • We have rebranded! The AI Australia podcast is now brought to you by Mantel Group (Eliiza's parent company).

    Please join us this week as we discuss AI in education with Professor Andrew Maynard from Arizona State University.

    Andrew's fascinating takes from the front line of educating in the age of AI offer a vision of the future and interrogate some of the common misconceptions about AI in education.

  • Welcome back to the AI Australia podcast for 2024, with your hosts Natalie Rouse and Kobi Leins.

    We are joined by two experts in the field of cybersecurity - "The Voice of Cyber" multimedia journalist Karissa Breen and internationally recognised cyber law expert Emma (EJ) Wise.

    We start off learning about Aboriginal Birthing Trees, then talk about the ethics of self-driving cars, international regulation & frameworks, education, the use of autonomous technology in the military, cyber law, the use of AI in manipulating upcoming elections, information warfare, the naming of the Russian hacker responsible for the Medibank hack, and the opportunity for AI to drive innovation on both sides of the fence - for organisations protecting themselves and for hackers. We make it clear we do not condone cyber criminality!

    Unfortunately we did lose the last half of EJ's audio, which is a technical travesty because she did have many fascinating things to say - we will get her back to share more of her great thoughts and experiences at a later date.

    Links:

    Karissa Breen

    Emma (EJ) Wise

    Aboriginal Birthing Tree

    Missy Cummings on Tesla Autopilot

    Automating the Banality and Radicality of Evil

    Moral Machine

  • In our last episode for 2023, Kobi & Nat discuss the following recent developments around the world and how they relate to Australia:

    The EU AI Act has moved another step closer to reality, albeit with some conditions for regulating foundation models that not everyone is happy with. Kobi has been reading Nancy Leveson's new book, which looks at large historical accidents and unpacks what they look like from a human perspective. We wonder how the Act will plug into the new international standards coming out, and whether that might give the Act some more granular teeth.

    Microsoft has been quietly talking to industrial organisations in the US, leading us to wonder how the Hollywood Writers' strike will impact other union negotiations around the world. The attribution of job losses to AI or automation is proving difficult, and will make future union negotiations even more interesting.

    Kobi talks about the history and evolution of boards over time, which leads us to circle back to the OpenAI board purge and reflect on recent information that has come out about why exactly Sam Altman was fired in the first place.

  • This week join Kobi and Natalie as they discuss some seemingly unrelated global trends, and ponder how AI may or may not impact or amplify them in the future.

    We raise questions about:

    There are around 40 elections happening around the world in 2024; have we already seen advancements in AI play a role in driving a shift to the right, and what might we expect to see in the next year?

    Local media is becoming increasingly scarce - what does that mean for our local collective understandings & conversations, and for global polarisation?

    The lack of diversity (gender and otherwise) in technology media (and the technology industry in general) presents a major hurdle for the language models of the future, as it leads to a lack of variety in the voices on record today, and hence in the training data of tomorrow.

    Links:

    Where are all the godmothers of AI? - The Guardian

    A 'Trump moment' in the Netherlands shows that Europe still has a populist problem - CNN

    Brace for elections: 40 countries are voting in 2024 - Bloomberg

    The Guardian view on local journalism's decline: bad news for democracy

  • This week in AI has been a week like no other - join Natalie & Kobi to mull over the back and forth at OpenAI over the last week as we delve into what happened and what it all might mean. We are saddened about the departure of all the women from the OpenAI board, but wonder if the previous academic-focussed board had found themselves out of their depth as the value of the company grew. We are more than a little curious about the eventual movie that will no doubt be made about this highly dramatic saga!

    We also talk about the importance of governance around not just data but AI, including processes & accountability rather than just technical aspects.

    No links this week because there is not enough space on the internet to link to all of the articles covering the happenings of this last week.

  • Welcome to another episode from the AI Australia podcast, with your hosts Natalie Rouse and Kobi Leins.

    Continuing the conversation from our last interview on the importance of Indigenous data rights, this time we cross the ditch to Aotearoa New Zealand, discussing Maori data sovereignty and artificial intelligence with three special guests: Megan Tapsell, Associate Professor Maui Hudson, and Dr Karaitiana Taiuru.

    They discuss their roles and responsibilities in advocating for Indigenous data rights, the challenges and opportunities they face in this area, and the importance of data sovereignty for Maori communities.

    Towards the end, they call upon corporations to involve and fairly pay Indigenous people when working with data that impacts their communities.

    Apologies for some audio quality issues at times - we experienced some technical difficulties in recording, but are very pleased to be able to bring you most of what we felt was an important & valuable conversation.

    Links:

    Megan Tapsell on Linkedin

    Maui Hudson on Linkedin

    Karaitiana Taiuru on Linkedin

  • We're back! After a brief hiatus, Natalie and Kobi talk about some of the recent developments around the legal world with union agreements and lawsuits due to set some initial precedents in different areas.

    They talk about the Hollywood Writers' strike and the agreement that was reached, including its provisions on the use of AI, as well as some of the legal action currently in progress around IP ownership, copyright and the updated Privacy Act recommendations. They also touch on the right to be forgotten, and the difficulty of applying it within vast training data sets.

    Natalie attended the Aotearoa NZ AI Summit, and covers some of the guardrails & principles that were discussed.

    Kobi talks about the global guardrails and principles that will come out of the UN. She also shares a story about Studio Ghibli director Hayao Miyazaki's reaction to thoughtless use of AI in his company.

    Links:

    Anna Johnston summarises the Privacy Act proposals on Salinger Privacy

    Hayao Miyazaki on the use of AI

  • It’s no secret that AI is enhancing efficiency across nearly every industry in the world today. Unsurprisingly, that includes dentistry and pathology detection. Khoa Le is the founder and CEO of Eyes of AI, which leverages the power of technology to detect, analyze, and diagnose dental X-Ray images. In this episode, he shares his story, from building his skillset to establishing the idea for Eyes of AI and serving his client base today. We touch on preparation and tracking results, discuss the tumultuous waters of navigating ethics and privacy, and explore the benefits of bringing such a high level of accuracy to differential diagnosis. Khoa gets candid about some of the biggest challenges he has faced while building his company and offers his perspective on the growing influence of AI in Australia and beyond. In closing, he shares his thoughts on the human element of the medical industry and why it will always be necessary. Tune in today to hear all this and more!

    Key Points From This Episode:

    Khoa Le's career trajectory which led to the founding of Eyes of AI.

    How Eyes of AI leverages the power of technology for detection, analysis, and diagnosis of dental X-Ray images.

    What Eyes of AI's client base largely consists of.

    Forming the idea for the product.

    The company's flagship: 3D images.

    Preparation and results.

    Navigating ethics and privacy by using de-identified data.

    Benefits of the 95% accuracy.

    What the output provides and why it cannot be considered a final diagnosis.

    The biggest challenges Khoa has faced in getting Eyes of AI off the ground.

    His perspective on how the AI industry is growing in Australia.

    Why a personal connection will always be relevant to the medical field.

    Links Mentioned in Today’s Episode:

    Khoa Le on LinkedIn

    Eyes of AI on Instagram

    Eyes of AI

    Eyes of AI Journal

    Natalie Rouse on LinkedIn

    Dr Kobi Leins

    Dr Kobi Leins on LinkedIn

    Dr Kobi Leins on Twitter

    Eliiza

  • This week in AI, Natalie and Kobi talk about the global progress on the regulation conversation, and the potential for a global governing body for AI.

    They talk about the EU AI Act, which has passed another major milestone, and hypothesise about the impact of commercial interests on its effectiveness. There was also a Senate hearing in the US that Sam Altman (OpenAI) and Gary Marcus (NYU) attended, where they proposed a collaborative international body bringing together academics and the public rather than just corporate/tech interests.

    President Biden has also announced a task force investigating the impact of social media on young people. Kobi wonders if some of these things might be a distraction from some of the most important issues, like climate change and the impact of these models and equipment on the environment.

    Meta have declared that generative AI is already dead as they go all in on the next generation of AI that is better able to take logic and reasoning into account; however, Meredith Whittaker (Signal) has said that while the "godfathers of AI" talk about x-risk (existential risk), they are not talking about a-harms (actual harms), which conveniently means they don't have to take immediate mitigating actions.

    Links:

    EU AI Act Explained

    Yann LeCun of Meta declares generative AI already obsolete

    Meredith Whittaker on AI Panic in Slate

  • It is long overdue that we, as organisations and as people in organisations, start to question our own thinking, and that we as a global society move toward a systemic paradigm shift when it comes to Indigenous research and all that it encompasses. Today, we are joined by a very special guest, Distinguished Professor Maggie Walter, coming to us from Nipaluna, Tasmania. Maggie is a Palawa woman who descends from the Pairrebenne people of north-eastern Tasmania and is also a member of the Tasmanian Briggs Family. She is a Distinguished Professor Emerita of Sociology at the University of Tasmania and is still heavily involved in the Indigenous data sovereignty space and anything Indigenous data related. Join our very timely conversation as Maggie takes us on a deep dive into Indigenous data sovereignty 101, the two sides of data collection, and the problem with AI and using existing data sets. We talk about the different challenges with funding and how there is a massive requirement for a paradigm shift in Indigenous research. This is a loaded episode with great points of conversation, including a global look at relationships with Indigenous peoples, why presuming research equals change is dangerous, and why Maggie is after ontological disturbance rather than money and time. You don’t want to miss this episode, so start listening, and join the conversation.

    Key Points From This Episode:

    An introduction to Maggie Walter, a Palawa woman and descendant of the Pairrebennes.

    Maggie runs us through Indigenous data sovereignty 101.

    The statistical indigene and what colonisation has done to us.

    She explains the two sides of data collection.

    The problem with AI and using existing data sets: we always end up as the problem.

    Thoughts on levers and tools aimed at shifting and solving the Indigenous data problem.

    The starting point, humans, and why AI is scary as it relates to Indigenous data (collection).

    She shares the challenges faced with Indigenous data collection.

    The challenges with funding and the required paradigm shift for Indigenous research.

    Why Indigenous research projects can’t be concentrated in health and should diversify.

    Her encouragement to challenge and flip mindsets with relationships to first peoples.

    We take a global look at other countries and their relationships with Indigenous peoples.

    The danger of presuming research equals change.

    Maggie divulges why she doesn’t do advisory committees anymore.

    The need for ethics codes.

    Why systemic change can only be done in increments.

    A discussion on who owns Indigenous data and the benefits, or lack thereof, of AI.

    Maggie tells us why she’s after ontological disturbance rather than money and time.

    Links Mentioned in Today’s Episode:

    Maggie Walter on LinkedIn

    UN Declaration on the Rights of Indigenous Peoples

    Closing the Gap Memorandum

    Professor Sally Engle Merry

    AIATSIS Code of Ethics 2020

    Natalie Rouse on LinkedIn

    Dr Kobi Leins

    Dr Kobi Leins on LinkedIn

    Dr Kobi Leins on Twitter

    Eliiza

  • Eliiza is teaming up with Jewelrock and Women in AI APAC to run a limited interview series in support of the Women in AI Awards 2023. The series will showcase conversations with a number of thought leaders and subject matter experts across industry and academia in Australia and New Zealand on topics like career, the future of AI technologies and the responsible application of AI.

    Our esteemed guests are all representing sponsors of the Women in AI Awards, which will be held on 16th June in Sydney for the APAC region.

    Our next guest is Dr Ava Bargi, Data Science Tech Lead at the NSW Department of Customer Service.

    Bio:

    Ava is a lead data scientist (PhD) with a focus on technical machine learning (ML) as well as people and community leadership in ML and data science.

    She is a seasoned data scientist and product lead, passionate about creative problem solving, impactful stakeholder engagement and social interactions, and focused on building ML solutions either individually or as a technical leader. Ava has pioneered the application of machine learning and data science to enhance NSW Planning processes, tackling the challenges of modelling partial and unstructured data with imbalance and quality issues using ML techniques.

    Ava is an impactful tech and people leader, spearheading the coaching of a team of brilliant analysts to use advanced tools and build powerful NLP solutions. She is contributing to NSW government AI maturity by running "AiCoP", an AI community of practice across government agencies and industries, which continues to grow its membership and attract insightful presentations.

    Links:

    Dr Ava Bargi on Linkedin

  • Eliiza is teaming up with Jewelrock and Women in AI APAC to run a limited interview series in support of the Women in AI Awards 2023. The series will showcase conversations with a number of thought leaders and subject matter experts across industry and academia in Australia and New Zealand on topics like career, the future of AI technologies and the responsible application of AI.

    Our esteemed guests are all representing sponsors of the Women in AI Awards, which will be held on 16th June in Sydney for the APAC region.

    Our next guest is Dr Muneera Bano, from CSIRO’s Data61 within Diversity and Inclusion in AI.

    Bio:

    A passionate advocate for women in STEM, Muneera was announced as the ‘Most Influential Asian-Australian Under 40’ in 2019. A ‘Superstar of STEM 2019’ and a member of the ‘Equity, Diversity and Inclusion’ committee of Science and Technology Australia, Muneera was recognised in 2021 on the Government of Pakistan Foreign Minister’s Honour List as an under-40 Pakistani-Australian leader in ‘Science and Innovation’ for her leadership and advocacy in diversity and inclusion in STEM in Australia.

    Muneera graduated from the University of Technology Sydney in 2015 with a Ph.D. in Software Engineering. She specialises in socio-technical domains of software engineering, focusing on human-centred technologies. She was the recipient of Schlumberger’s ‘Faculty For The Future Award’ for Women in STEM (2014 and 2015). Muneera was named as a finalist for ‘Google Australia’s Anita Borg Award for Women in Computer Science’, Asia-Pacific 2015, and won the ‘Distinguished Research Paper Award’ at the International Requirements Engineering Conference 2018. Muneera was offered a Fellowship for ‘Cultural Diversity and Leadership’ at Sydney University’s Business School in November 2019. She was selected for ‘Pathways to Politics’ by Melbourne University’s School of Government to complete the leadership program in 2021.

    Links:

    Dr Muneera Bano on Linkedin

    Dr Muneera Bano at CSIRO

  • Eliiza is teaming up with Jewelrock and Women in AI APAC to run a limited interview series in support of the Women in AI Awards 2023. The series will showcase conversations with a number of thought leaders and subject matter experts across industry and academia in Australia and New Zealand on topics like career, the future of AI technologies and the responsible application of AI.

    Our esteemed guests are all representing sponsors of the Women in AI Awards, which will be held on 16th June in Sydney for the APAC region.

    Our next guest is Dr Tabinda Sarwar, Lecturer/Early Career Development Fellow (ECDF) at RMIT.

    Bio:

    Tabinda Sarwar completed her PhD in January 2020 and is currently working as an Early Career Development Fellow/Lecturer at RMIT University.

    Tabinda’s areas of expertise include the application of machine learning and deep learning algorithms for data mining and analytics. She has worked on multiple digital health projects involving heterogeneous data sources (including imaging and free text data) for almost 7 years. This includes creating data-driven solutions for health deterioration detection and prediction for aged care homes in Australia, for which she received the RMIT Award for Research Impact. She has also worked in computational neuroscience, where she analysed the anatomical and functional behaviour of the human brain.

    Tabinda is interested in the application of machine and deep learning in biomedicine and digital health for knowledge and information extraction to create high-quality solutions.

    Links:

    Dr Tabinda Sarwar on Linkedin

    Dr Tabinda Sarwar at RMIT

  • Eliiza is teaming up with Jewelrock and Women in AI APAC to run a limited interview series in support of the Women in AI Awards 2023. The series will showcase conversations with a number of thought leaders and subject matter experts across industry and academia in Australia and New Zealand on topics like career, the future of AI technologies and the responsible application of AI.

    Our esteemed guests are all representing sponsors of the Women in AI Awards, which will be held on 16th June in Sydney for the APAC region.

    Our next guest is Amanda Princi, Head of Data Enablement at Transurban.

    Bio:

    As the Head of Data Enablement, Amanda is responsible for driving data culture at Transurban. Amanda leads a team that enables people to easily manage data, confidently use data and maximise data’s value in a safe, ethical way.

    Amanda is passionate about a workplace free from data jargon, where teams have positive data experiences and can embed data thinking into daily activities at any level of the organisation.

    Prior to Transurban, Amanda began her career as a banker at National Australia Bank (NAB). Over 10 years, Amanda transformed her banking career into a data career by leveraging data to help millions of customers and achieve significant impact. Amanda’s determination to learn all things data and evangelise data use led to her holding multiple senior data leadership positions specialising in data analytics and insight, data management and governance.

    Links:

    Amanda Princi on LinkedIn

  • Eliiza is teaming up with Jewelrock and Women in AI APAC to run a limited interview series in support of the Women in AI Awards 2023. The series will showcase conversations with a number of thought leaders and subject matter experts across industry and academia in Australia and New Zealand on topics like career, the future of AI technologies and the responsible application of AI.

    Our esteemed guests are all representing sponsors of the Women in AI Awards, which will be held on 16th June in Sydney for the APAC region.

    Our next guest is Kelly Brough, Managing Director, ANZ Applied Intelligence at Accenture ANZ.

    Bio:

    Kelly leads the Applied Intelligence Network for Accenture ANZ.

    Responding to businesses increasingly turning to AI to enable their growth, productivity, and creativity, Kelly leads a talented team of strategists, scientists, and value innovators to support our clients in designing and executing data-led transformations. She is passionate about the opportunities for society being unleashed by AI technologies, and committed to focussing energy on the responsible application of AI to build trust and protect people and organisations while delivering new value outcomes.

    Kelly has over 25 years’ experience across both industry and consulting, building digital and data businesses in the Retail, Media, and Technology sectors. Prior to joining Accenture, Kelly held executive roles including Chief Digital Officer at Sensis (Telstra), Global Digital Director at Lonely Planet, CEO of Allegran Online Dating (Daily Mail), and Director at AOL Europe.

    Kelly has an MBA from INSEAD, an MS in Environmental Engineering from the University of Virginia, and completed her undergraduate degree in Engineering Science at Harvard University.

    Links:

    Kelly Brough on Linkedin

  • Eliiza is teaming up with Jewelrock and Women in AI APAC to run a limited interview series in support of the Women in AI Awards 2023. The series will showcase conversations with a number of thought leaders and subject matter experts across industry and academia in Australia and New Zealand on topics like career, the future of AI technologies and the responsible application of AI.

    Our esteemed guests are all representing sponsors of the Women in AI Awards, which will be held on 16th June in Sydney for the APAC region.

    Our next guest is Kendall Jenner, Research Assistant at STELaRLab, Lockheed Martin's first research laboratory outside of the US.

    Bio:

    As a Research Assistant, Kendall performs research into Artificial Intelligence and Machine Learning, investigating the latest developments in the field and how they can be applied to defence applications.

    Before entering the workforce, Kendall studied a Bachelor of Science in Theoretical and Experimental Physics at the University of Adelaide, followed by a Master of Philosophy in machine learning for gravitational wave astrophysics (still ongoing) with OzGrav, the Australian Research CoE for Gravitational Wave Discovery. Since starting with STELaRLab and Lockheed Martin Australia in the Graduate Program early last year, Kendall has worked on a variety of projects. She has investigated using machine learning for prognostics and health management of aircraft, link prediction in knowledge graphs, creating simulated data for a variety of other projects in the lab, and processing of overhead imagery. She enjoys the variety of tasks that she and her team partake in, and finds that the work she does at STELaRLab is the perfect balance between interesting and exciting state-of-the-art research and the applications and practicality of implementing new solutions.

    Kendall's first introduction to machine learning was during a research internship with the University of Adelaide High Energy Astrophysics group, where the group was tasked with using a random forest classifier to classify blazars, which are a type of active galaxy. This project piqued her interest in AI, and so she pursued a career in it, leading to some extra-curricular activities with Adept at the University of Adelaide, work experience and internships, choice of topic for her postgraduate research degree, and now to her current role with STELaRLab.

    Outside of work and university, Kendall is a professional footballer (soccer) for West Torrens Birkalla SC. She enjoys gardening, playing boardgames with friends, and generally doing outdoor activities.

    Links:

    Kendall Jenner on LinkedIn

  • This week in AI, Natalie and Kobi tackle some hefty potential societal impacts.

    They talk about everything from the strange proposition of a young woman "cloning" herself via ChatGPT to outsource her "girlfriend experience", to celebrated American science fiction writer Ted Chiang writing about whether AI is set to become something of a consultant or third party for problem solving, to ex-DeepMind co-founder Mustafa Suleyman calling on governments to find solutions for people who lose their jobs to AI, and the links between all of these activities.

    Links:

    https://fortune.com/2023/05/09/snapchat-influencer-launches-carynai-virtual-girlfriend-bot-openai-gpt4/

    https://www.newyorker.com/science/annals-of-artificial-intelligence/will-ai-become-the-new-mckinsey

    https://www.abc.net.au/news/2023-04-28/stage-three-tax-cuts-to-scale/102268304

    https://www.theguardian.com/books/2019/may/29/fully-automated-luxury-communism-aaron-bastani-review

    https://www.theatlantic.com/ideas/archive/2019/06/give-us-fully-automated-luxury-communism/592099/