Dan Jermyn joins us today to share his experiences as the head of the biggest AI and Data Science practice in Australia. He explores the criticality of thought diversity in AI development, the need to break the status quo regularly, and the importance of regulations to customer safety. He also shares his insights about building effective data science teams as well as navigating politics to create AI that, first and foremost, serves a purpose with defined ethics and explainability even within complex models.
And, of course, Dan will tell us which sci-fi universe he envisions will most closely resemble our own future. Tune in to find out all of this and more about the remarkable projects and progress Dan has contributed to AI and machine learning in Australia.
Dan Jermyn is Chief Decision Scientist at the Commonwealth Bank of Australia, which reported 7.5 million digitally active customers as of 10/02/2021.
Dan is an experienced leader in both technology and data science, with an established record of building award-winning, global teams in digital, big data and customer decisioning. Dan joined the Commonwealth Bank of Australia in 2017, where he has responsibility for delivering great customer experiences and innovative new solutions through data science.
Dan has also worked as a strategy consultant and then Head of Analytics for an agency in the UK as well as co-founding a successful digital technology startup. Prior to his current role, he held various positions at the Royal Bank of Scotland including Head of Digital Analytics and Head of Big Data & Innovation.
0:30 Has the foundation of banking changed over the last five or so decades? While banks still look out for the financial wellbeing of the communities they serve, the way we interface with our banks has changed—our bankers rarely know our financial dreams and wellbeing—an issue data scientists seek to resolve.
3:00 What will be the value proposition of machine learning to banking in the year 2050? How have futuristic technologies like robotics already changed the banking experience? Dan shares how robotics in banking has allowed frontline staff to derive more meaning from their work as well as how we can expect this trend will grow.
5:40 What is the state of the ecosystem for fintechs? Are small disruptors still able to break into the field alongside big players? Do different banking cultures have a competitive advantage? Does diversity of thought play a role?
7:30 On the topic of banking regulation—is it friend or foe to the Big Four? Dan discusses customer safety and banking’s contributions in supporting the critical infrastructure needed to make Australia a leading digital economy.
9:15 Dan describes the organization's philosophy of developing machine learning and AI for banking purposes. Is there a roadmap for the best route for using AI and ML for service and customer solutions? Dan discusses with us a moderated approach that keeps purpose at the forefront of AI development—that end products must have a use case.
12:15 What are the most valuable lessons Dan has learned about building and scaling a data science team? He shares with us the criticality of diversity that extends beyond culture and gender into diverse experience, including people who were once first-line banking employees.
15:00 How do you navigate complex environments with deep issues such as data governance if you’re trying to advance your data science career? Is there an imbalance in the data science talent market? Dan discusses the benefit of hiring people fresh out of college who will break the status quo.
23:00 Dan discusses explainable AI in the context of innovative banking and the purpose and benefit of having a global ethical AI toolkit.
27:00 How does Dan see academia and industry working together going forward? Dan explains the importance of independent verification as well as symbiotic education—how members of industry can open conversations about current real-world issues as well as the ability of experts in the field to teach industry leaders in support of new and usable solutions. Dan also fills us in about coming together in groups to see new contexts for technology.
30:00 What is the corporate decision process behind Benefits Finder? Dan explains the program which shows banking customers benefits they may be entitled to from various governmental programs that they may not have been aware of. How did he and his team create a program like this and why?
Jeannie Paterson and Tim Miller join us from CAIDE to discuss AI, ethics, accountability and explainability. What is changing in these spaces, especially in light of Covid-19, and what should researchers and governments turn attention to in order to ensure helpful, beneficial AI that is used ethically?
More About Our Guests:
Jeannie Paterson & Tim Miller are the co-directors of the Centre for AI and Digital Ethics, a new collaborative, interdisciplinary research, teaching and policy centre at the University of Melbourne involving the faculties of Computing and Information Systems, Law, Arts and Science. Jeannie specialises in the areas of contracts, consumer rights and consumer credit law, as well as the role of new technologies in these fields. Jeannie’s research covers three interrelated themes: support for vulnerable and disadvantaged consumers; the ethics and regulation of new technologies in consumer markets; and regulatory design for protecting consumer rights and promoting fair, safe and accountable AI.
Jeannie is co-leader of the Digital Ethics research stream at the Melbourne Social Equity Institute, an interdisciplinary research institute focused on social equity and community-led research. Tim is an associate professor in the School of Computing and Information Systems at the University of Melbourne. His primary area of expertise is artificial intelligence, with particular emphasis on: human-AI interaction and collaboration; explainable artificial intelligence (XAI); decision making in complex, multi-agent environments; and reasoning about action and knowledge.
In this episode we investigate:
Tim discusses why the centre publishes so frequently and the importance of public outreach, especially in times of crisis such as Covid-19.
How did the centre come about? How did Jeannie and Tim become involved with it? Is the centre independent? Is it a collaborative effort between the several similar centres across Australia, such as the 3Ai centre in Canberra? Tim discusses how the centres work together as well as how they differ in vision and approach.
Jeannie expresses the importance of an interdisciplinary approach and their dedication to it.
Which subjects will they launch in semester two? Jeannie tells us about subjects such as AI Ethics & Law, which examines how law responds to ethical dilemmas.
What expertise does Jeannie have regarding law and the impacts of technology on consumers?
What are counterfactuals? Jeannie answers, “What would you ask the machine when you’d receive a particular mortgage recommendation?” How do counterfactuals help scrutinize the basis of a decision to see if it is valid? How does this remove systematic bias and prejudice?
What are the new trends in explainable AI? Tim also delves into counterfactuals as well as cognitive psychology and cognitive science. How do you generate counterfactuals that are realistic? What do those look like? Tim expresses that the human factor in explainability is becoming increasingly important.
Tim discusses the impacts of Covid-19 on conferences and networking in this space now that everything is virtual. How does it make things less connective and enticing?
Jeannie answers, “What advice do you offer your family and friends regarding the Covid Safe app?” She delves into privacy and security as compared to the benefits and effectiveness of the app. Has the Covid Safe app set a precedent for privacy in Australia?
Tim discusses how a contact tracing app was used by law enforcement to understand who was at a Black Lives Matter event, and why this means there should be legislation surrounding such apps and how their data can be used.
What scale of deterrent will it take to make a difference to Australian businesses? Is money the answer?
Tim discusses why we must educate consumers about data collection and privacy, in part by exemplifying what can be discerned from the data they share. How can consumers be proactive?
Jeannie discusses how consumers are sometimes misled about what data is collected about them.
Jeannie and Tim share their views on harmful tech, the ability to question automated decisions, and the need for accountability to ensure that ability.
Today on AI Australia, we have the opportunity to talk to Professor Jon Whittle of Monash University about the impacts - both good and bad - data science is having on the world around us. As co-author of a series of published and soon-to-be-published papers in the fields of software development, ethics, and values, Jon is well placed to talk to us today about the heightened risks and opportunities that the development of data-science based systems brings to our world.
About Professor Jon Whittle:
Professor Jon Whittle is the Dean of the Faculty of Information Technology at Monash University.
Jon is a world-renowned expert in software engineering and human-computer interaction (HCI), with a particular interest in IT for social good. In software engineering, his work has focused on model-driven development (MDD) and, in particular, studies of industrial adoption of MDD.
In HCI, he is interested in ways to develop software systems that embed social values. Jon's work is highly interdisciplinary. As an example, he previously led a large multidisciplinary project with ten academic disciplines looking at how innovative digital technologies can support social change. In 2019 Monash launched the Data Futures Institute, which we will find out more about today.
Before joining Monash, Jon was a Technical Area Lead at NASA Ames Research Center, where he led research on software for space applications. He was also Head of the School of Computing and Communications at Lancaster University, where he led eight multi-institution, multi-disciplinary research projects. These projects focused on the role of IT in society and included digital health projects, sustainability projects and projects in digital civics.

2:30 How is Monash busting the old model of siloed disciplinary schools and turning toward a passion for multidisciplinary studies? How does the intersection of fields lead to progress, especially in software engineering and AI? 5:00 How do you go about raising awareness of the social, ethical, and psychological aspects of interdisciplinary work with a deeply technical audience who are focused on their tools and on being the best they can be in their particular arena? What role do universities play in creating engineers who will consider ethics and values in their software products? 8:00 What are values? Jon gives us a long and a short answer that can include everything from social responsibility to hedonism to inclusion. What role do social scientists play in how we understand values? Jon discusses Schwartz’s Ten Universal Values, corporate values, and the difference between ethics and values—as well as the ability for them to contradict one another. How do values differ by culture, age group, and other demographic factors? 14:00 Who gets dominance in a software application—who chooses the values that underpin the software? How do we take the implicit aspect of values in software and turn it into an explicit process? Jon discusses the maturity scale of companies and their corporate values and whether or not this impacts design decisions. 19:00 Jon discusses the impact of corporate values on software development and real-world cases of ethical issues that have arisen due to software—everything from parole re-offender predictions to priority one-day shipping to self-harm on Instagram. 23:00 Are values at the root of algorithmic bias?
Different groups experience products and services differently, especially if the data and ideas going into them are heavily biased. 80% of AI professors are male—how does this influence the way systems are designed? Are our programs today working to increase diversity in AI and software development and, even when those fields are diverse, what difference will it make? Jon proposes having someone on the team whose job is to ask questions about values during the design process. 29:00 Will we ever get to an empirical state where values are so measurable that we could be alerted, automatically, of a breach? Jon discusses the complexities of this process, but shares progress that is already occurring on this front. Even without perfect accuracy, we can see if things are getting better or worse. 32:00 Is there a government role in policing the ethical use of software, data, and AI? Jon shares his thoughts on a multi-faceted approach to regulation. 35:00 How is Monash helping students prepare for a values-first data environment? Jon discusses the “Bachelor of Applied Data Science” multidisciplinary degree and the combination of “data plus x” in education and the workforce. 38:00 Jon discusses examples where values went right and why we need values built into software upfront, the way we do with security. Jon also answers the question, “Do robots have values?” with a surprising standpoint.
In today’s episode, Lizzie O’Shea discusses the great power of data and AI — and how we can use them to empower people rather than oppress them. She’ll discuss which technologies should be off-limits, compares data policies around the world, and proposes a code of ethics for engineers building these influential technologies. Lizzie probes who holds the power of AI and data and who should be responsible for ethics in this realm — corporations or the government — and who is better equipped to do so. Lizzie raises important questions about privacy concerns in our digital lives and even poses the question — do machines already rule the world?
About Lizzie O’Shea:
Lizzie is a lawyer, writer, and broadcaster. Her commentary is featured regularly on national television programs and radio, where she talks about law, digital technology, corporate responsibility, and human rights. In print, her writing has appeared in the New York Times, Guardian, and Sydney Morning Herald, among others.
Lizzie is a founder and board member of Digital Rights Watch, which advocates for human rights online. She also sits on the board of the National Justice Project, Blueprint for Free Speech and the Alliance for Gambling Reform. At the National Justice Project, Lizzie worked with lawyers, journalists and activists to establish a Copwatch program, for which she was a recipient of the Davis Projects for Peace Prize. In June 2019, she was named a Human Rights Hero by Access Now.
As a lawyer, Lizzie has spent many years working in public interest litigation, on cases brought on behalf of refugees and activists, among others. She was proud to represent the Fertility Control Clinic in their battle to stop harassment of their staff and patients, as well as the Traditional Owners of Muckaty Station in their successful attempt to stop a nuclear waste dump being built on their land.
Lizzie’s book, Future Histories, looks at radical social movements and theories from history and applies them to debates we have about digital technology today. It has been shortlisted for the Premier’s Literary Award.
In this episode we cover the following topics:
4:00 How does the modern day compare to decades past as it pertains to rights—is technology a force for good? How can we take back the power of technology to benefit humanity? 8:00 How can we manage AI and digital technology in a more intentional way? How are automated processes already determining the course of many people’s lives? Lizzie explains how the future when machines take over is, in many ways, already here. Should technology be regulated in order to help solve problems, and what problems have already occurred? 16:00 Lizzie discusses the state of regulation across the world, including GDPR and New York’s data fiduciary law. Should we move beyond contractual ideas of privacy? 18:00 Lizzie explains her stance on facial recognition. Should facial recognition be limited in the same way as chemical warfare—a line that is not to be crossed? How can facial recognition technology be oppressive, and what can you do to protect yourself? 22:00 Is the social credit system in China far-fetched in the West? Lizzie discusses the modern surveillance state. 26:00 How does technology mirror power structures in the analog world? Lizzie discusses predictive policing technology and the biases that exist within it. 31:00 Should we create a code of ethics for engineers developing these technologies? What practical things could an engineer do if a project’s implications make them uncomfortable? 38:00 Lizzie discusses the influence of large companies, social media, and why some issues they face are better suited to politics than corporations. 46:00 We converge to talk about the politics behind data and AI, the need to educate our regulators, and the importance of speaking with the younger generations who will one day create the rules surrounding the tech that rules the world.
Kriti Sharma, featured in Forbes 30 Under 30 in Technology, is an artificial intelligence technologist, business executive, and humanitarian. She has been a part of Google India’s Women in Engineering, a United Nations Young Leader, and a Fellow of the Royal Society of the Arts. In 2018, Kriti launched rAInbow for domestic abuse victims in South Africa. She is the founder of AI For Good UK as well as an advisor to the United Nations Technology Innovation Lab. Previously, Kriti was VP of Artificial Intelligence at Sage.
Today, we are lucky to have her with us to discuss the future of AI and to answer an important question—what if disadvantaged groups don’t have a say in the AI tech we create? How can the world’s governments join forces from a regulatory perspective in order to create AI that benefits humanity as a whole? What types of social change and humanitarian impacts intersect with AI—how do we do bigger, better things that incorporate the human element? And how does GDPR play a role?
On this episode we discuss:
2:00 Kriti discusses her background, including her introduction to machine learning and robotics with a robot she created when she was 15. She also discusses the purpose of the Centre for Data Ethics and Innovation and the types of problems the group focuses on. What do these problems mean from a regulatory point of view and in everyday life? 6:00 How do we get governments around the world to engage in more collaboration? Rather than each racing to become a “leader” in AI, why don’t we use the combined power of joining forces? How does being in London alter her view of AI around the world, and why did she choose to work there? 9:00 Is employment a driver for concern in the markets that Kriti engages with? With AI at the height of its hype cycle, how does that impact AI in business moving forward? Is the alarmist narrative surrounding job automation valid? Kriti notes that women are expected to lose twice as many jobs to automation as men, and discusses the dark side of new job creation. 14:00 Kriti discusses the importance of diversity in AI and how it can bring focus to actual usefulness as well as potential misuse cases. How has Kriti encountered challenges and mistakes on this front? 17:00 Where is the intersection between AI and climate change, and how does this relate to the youth classes Kriti has taught? What social issues do the young people Kriti teaches care about? 21:00 Kriti discusses rAInbow for women in South Africa, where 1 in 3 women face domestic abuse—a figure that is the same for Australia. Why are women not reporting their abuse? What other women’s issues and diversity issues intersect with AI? 28:00 What are Kriti’s thoughts on GDPR? Is it working?
On this episode of AI Australia, we’re changing things up a little. With James overseas, Nigel and his friend Sarah Turner from REA Group hosted multiple panel conversations with some of Australia’s best and brightest in the field of AI, computer science, mathematics, and regulation to discuss the launch of OVIC’s new book: Closer to the Machine.
OVIC is the Office of the Victorian Information Commissioner, the primary regulator and source of independent advice to the community and the Victorian government about how the public sector collects, uses and shares information.
In this episode, we’re lucky to be joined by:
Sarah Turner (co-host) - General Counsel, REA Group
Adam Spencer - comedian, media personality and former radio presenter
Rachel Dixon - Privacy and Data Protection Deputy Commissioner, OVIC
Professor Toby Walsh - University of New South Wales and CSIRO’s Data61
Professor Richard Nock - Australian National University and CSIRO’s Data61
Associate Professor Ben Rubinstein - University of Melbourne
Katie Miller - Independent Broad-based Anti-corruption Commission
Professor Margaret Jackson - RMIT
In this episode we discuss:
The degree to which we all take for granted how big a part AI plays in our lives
The rate of improvement in algorithms in their narrow fields of “expertise”. We discuss how quickly chess-playing AI went from basic play to beating the chess grandmaster Garry Kasparov
OVIC’s motivation for publishing the book on data privacy and protection
Grappling with the implications of how AI systems can be misused or easily breached from a security standpoint. Do we continue to push the boundaries in the face of privacy risks and concerns? Do we pull back?
The challenge of discrimination. Eventually, machines will need to make decisions that “discriminate” against people in one way or another - but there is such a thing as good discrimination and bad discrimination. Who gets to make those definitions?
What role should government play in the regulation (or non-regulation) of AI?
Accountability. What happens when AI “goes rogue”? Where does the buck stop?
How the conversation has evolved over the years and become more “realistic” in a sense
OVIC: Closer to the Machine book
Dr Karin Verspoor works at the intersection of Science and Technology, applying computation to analysis and interpretation of biological and clinical data, particularly unstructured text data.
Karin is a Professor in the School of Computing and Information Systems at the University of Melbourne, as well as the Deputy Director of the University's Health and Biomedical Informatics Centre.
She was previously a Principal Researcher at NICTA's Victoria Research Lab and served as the Scientific Director for Health and Life Sciences. Karin headed a research team at NICTA in Biomedical Informatics.
Karin moved to Melbourne in December 2011 from the University of Colorado School of Medicine, where she was a Research Assistant Professor in the Center for Computational Pharmacology and Faculty on the Computational Bioscience Program. She also spent five years at Los Alamos National Laboratory, nearly five years in start-ups during the US Tech Bubble, and a year as a Research Fellow at Macquarie University. She received her undergraduate degree in Computer Science from Rice University (Houston, TX) and her MSc and PhD degrees in Cognitive Science and Natural Language from the University of Edinburgh (UK).
Topics covered include:
Karin’s take on the future of healthcare and the role AI will play, given her unique vantage point on the topic. We also cover the key building blocks that make this future possible
Whether there are new risks associated with a more technologically advanced healthcare future
We’ve seen public outcry over the My Health Record program, with people opting out and question marks over doctors’ willingness to upload data to the system. We discuss what this means for the future of healthcare
Whether tech companies like Apple and Fitbit will become significant players in the health space
Karin’s take on what AI will do to the job market in healthcare if it continues to advance at its current rate
Is more regulation around data - e.g. GDPR or similar - a must-have for Australia if we want to embrace a more data-driven approach to healthcare?
Where Australia ranks in terms of governments and companies leading the charge on transforming healthcare
What Karin’s colleagues in medicine think about the rapid pace of technology innovation
On today’s episode of AI Australia, we have the pleasure of speaking with Suelette Dreyfus, renowned researcher and journalist on digital privacy and whistleblowing. She has a personal connection to these topics and even co-authored a book with Julian Assange. Suelette will explore with us the implications of AI for privacy, the depth of context that can be extracted from collected data thanks to this technology, and tech’s impacts on personal security. She will also cover how technology is changing the world of whistleblowers, what protections citizens have against predatory data collection, and much more. Join us on today’s episode!
More about Suelette Dreyfus:
Suelette Dreyfus is a journalist, technology researcher, and writer. Her field of research includes information systems, digital security and privacy, the impact of technology on whistleblowing, health informatics, and e-education.
Her work examines digital whistleblowing as a method of freedom of expression and the right to dissent from corruption.
Suelette is also a researcher and lecturer in the School of Computing and Information Systems at the University of Melbourne. She is the principal researcher on an international project testing the impacts of digital technologies on whistleblowing.
Additionally, Suelette is the author of the ‘97 cult classic Underground: Tales of Hacking, Madness, and Obsession on the Electronic Frontier, which covers the exploits of a group of hackers in the late ‘80s and early ‘90s—including Julian Assange, who is also the book’s co-author.

6:00 Suelette describes the tradition of privacy in various parts of the world. 7:00 Suelette discusses the upswing of encryption and whether or not law enforcement has always had access to this information, citing the “right to whisper.” 12:00 Can the government read our minds? How much does our Google search history reveal? Can you simply use another search engine? What parts of our privacy and ourselves should we be willing to sell? 20:00 Do policies such as GDPR offer protections on things like genetic data? What happens if your data is sold, but it’s wrong? Can you correct it? What legal measures are coming into being surrounding these topics? Are there laws surrounding facial recognition technology? 30:00 What sorts of legislation may come about in Australia in regards to data privacy? How can data be used in nefarious ways, such as targeting potential gambling customers based on genetic predisposition to addiction? 34:00 Where is the line between reasonable persuasion and manipulation? How does this relate to politics? Social media? How do news filter bubbles normalize ideas? Is fact-checking an issue here? 39:00 Suelette discusses whistleblowing and relevant cases. What systems and laws are in place to assist whistleblowers in calling attention to wrongdoings? How are those options changing? Is whistleblowing about fame? 48:00 Suelette discusses Australian legislation that allows for backdoor access to data. What was wrong with this recent bill? 55:00 How are companies choosing to censor people? 63:00 When did Suelette decide to pursue privacy in her career? How did Julian Assange impact her path? What type of environment and situation is Assange dealing with?
What rights are at stake?
Amazon: Underground: Tales of hacking, madness, and obsession on the electronic frontier
Today we’re talking with Data Scientist Katherine Bailey and Professor of Philosophy Oisin Deery about bias in machine learning systems. Katherine and Oisin share their perspectives on why the philosophy and ethics of AI are such a hot topic, the scandals engulfing the tech industry and how philosophy could be applied to chart a new course, and how philosophy can help us frame solutions to bias in machine learning systems.
Katherine Bailey is the Natural Language Processing lead within Accenture Australia’s AI and Automation Engineering group. Originally from Dublin, Ireland, her background is in software engineering and data science, with over a decade in the technology industry, primarily in Canada, the US and Australia. Prior to joining Accenture she was Principal Data Scientist at Acquia, a Boston-based Software-as-a-Service company, where she led the company’s Machine Learning initiatives. Katherine speaks and writes regularly on the topic of A.I. and is committed to dispelling the myths and removing the confusion around it, teasing apart the real from the imaginary implications of these technologies, both practical and ethical.
Oisin is Assistant Professor in the Department of Philosophy at Monash University, in Melbourne. His research interests lie at the intersection of philosophy of mind and action, metaphysics, and ethics. Oisin also works on ethical issues related to artificial intelligence. Oisin completed his Ph.D. in philosophy in 2013 at the University of British Columbia. Katherine and Oisin have a particular interest in bias in Machine Learning systems and have presented at conferences and meetups around the world on this topic, including a presentation at Google in 2018.
In this episode we discuss:
STEAM versus STEM, and how the Arts complement STEM
Exploring the feasibility of Artificial General Intelligence
How philosophy can help us think about AI and ethics
How we define intelligence
The limitations of the Turing test
The long history of machines fooling people
Consciousness and sentience in relation to AI
Is AI ushering in a golden age of philosophy?
The tendency in tech to assume a question is being asked for the first time
Tim Miller’s work on explainable AI and the importance of drawing on the social sciences
How can tech companies best apply philosophy?
The recent scandal surrounding Google’s external AI ethics advisory council
What is ethics washing?
How tech is impacting democracy and public debate
Different types of bias in word embeddings and how they can be addressed
Resources mentioned:
Life 3.0: Being Human in the Age of Artificial Intelligence - https://www.amazon.com/Life-3-0-Being-Artificial-Intelligence/dp/1101946598
Facebook’s Role in Brexit—and the Threat to Democracy, with Carole Cadwalladr - https://www.ted.com/talks/carole_cadwalladr_facebook_s_role_in_brexit_and_the_threat_to_democracy?language=en
Google’s AI Principles - https://ai.google/principles
Today we’re talking with Aire co-founder and mum of the Rita AI, Sarah Bell. Sarah discusses with us the evolution of Rita, humanisation of machines, our sci-fi future, and the ethics of big data and bias-free AI.
More about Sarah:
As a co-founder of the PropTech startup Aire, Sarah Bell is a sought-after speaker and expert on the strategic application of artificial intelligence and automation in real estate. She helps real estate businesses harness the opportunity within their data and transition to a blended human-digital workforce using Aire's Digital Employee, Rita.
With a professional and academic background in research and analysis, Sarah joined the front lines of real estate as an agency owner and practitioner for a decade, giving her tactical insights, subject matter expertise and just enough street cred to create solutions that are intelligent by design and CX-obsessed.
Sarah is an author, analyst, researcher, project designer, speaker and nerdtrepreneur with research interests and some pieces of paper in strategic planning, change management, planning professional development and the strategic application of artificial intelligence with London's Middlesex University, Leeds Trinity University (UK) and, recently, MIT's Sloan Management and CSAIL Executive Schools. Sarah is also Mum to three human children and a clever little robot named Rita. She has a strong preference for dogs over cats if pushed on the subject.

How does Sarah see her role? Does she see herself as an entrepreneur? How did she end up in real estate?
How did the AI Rita come about? How did they avoid creating a solution to a problem that didn’t exist? How did their background help with this?
What does Rita actually do? What does Rita stand for?
How has Rita become humanised among the team? Is this humanisation deliberate, or does it occur naturally?
Tech jargon isn’t relevant to customers. How does the team help customers understand what Rita does? How does this let customers connect to her?
Does Sarah have any practical advice for establishing an AI startup? What role does intuition play?
Three important questions: How can we connect humans to computers in ways that make them more intelligent? How do we connect humans to each other in ways that are more intelligent? How can we connect people to information in ways that are more intelligent?
Is it straightforward to hook into the right skills to take a concept or idea to an actual product? What challenges has Sarah faced on this front?
Do some customers see Rita as a potential risk to their jobs? How can humans and machines work collectively with their best skills?
Why is Rita built the way she is rather than as a chatbot? How does Sarah feel about chatbots as an interface? Will Rita one day talk directly to customers?
What is Sarah’s vision of the future in relation to sci-fi—Star Wars, Star Trek, Jetsons, Terminator, Altered Carbon, etc.? Which sci-fi future are we most closely going to end up in?
Today, we’re chatting with artist and scientist J.J. Hastings. Her research explores self-experimentation, genome editing, machine learning and the future intersections between tech and the human body. She has long-standing roots as a biohacker—having co-founded two community labs, London Biohackspace and Melbourne’s BioQuisitive—and now has the first garage lab start-up in Australia to be approved by the OGTR to work with genetically modified organisms.
Her work has been exhibited at venues across Europe, India, Asia, North America, and Australia. J.J.’s career in scientific research spans over 15 years. She is an alumna of New York University, Harvard University, the University of Oxford, and Central Saint Martins with advanced degrees in Biology, Bioinformatics, and Fine Art. Her research fuses and folds together the fields of machine learning, bioengineering, space exploration, new media art, and ethics.
In our episode, we'll probe J.J.'s mind about the ethics of editing the genes of children, using gene drive to eliminate disease, and the possible risks posed by biohackers. She'll also cover how close she believes we are to artificial general intelligence (AGI), the impact of quantum computing on biotechnology, the possibilities of our sci-fi-esque future, and her own lab, BioQuisitive.

2:00 How did JJ come to be who she is? How did she get involved in so many interesting projects? Is she an expert in all of these topics? And how did she end up in biohacking?
5:00 Has JJ chased being a provocateur? Or is it the product of pure curiosity?
7:00 JJ discusses the book Dark Emu and seed modification. How might this relate to our future settlement of Mars?
11:00 How are machine learning, CRISPR technology, and biohacking all interrelated?
14:30 JJ discusses theoretical biology, quantum computing, and whether or not biohackers pose a risk.
17:00 Is it out of control to utilise biohacking to cull species or eradicate disease? Will gene drive be widely used? How might this impact human and ecological health?
21:00 What is the history behind CRISPR? How is it still unfolding? JJ discusses the act of editing the genome of children and states her position on the scientist who experimented with exactly this. Is genetic engineering justified?
25:00 What are the ethics behind genetic modification? How far is the law behind the technology? How should we concern ourselves with our genetic privacy?
32:00 How does JJ feel about sci-fi? Which sci-fi movie most likely depicts our future? Gattaca, Altered Carbon, Star Trek, The Island?
38:00 How close are we to Artificial General Intelligence (AGI)? How does the accessibility of CRISPR and AI tech change our research? How much do we really understand, and how much of our thoughts around these processes are hubris?
Dark Emu - Book
JJ Hastings' Homepage
The Island - Movie
Change Agent by Daniel Suarez - Book
Today we’re speaking with Chris Hausler, Senior Data Science Manager at Zendesk, a global customer support SaaS company.
Chris believes that the future of AI and humanity will be a bright one, but humans must adapt and seek new opportunities as old opportunities evaporate. Hausler is working to help clients resolve customer service issues before they crop up by giving users access to the information they need at their fingertips.
Hausler speaks about artificial intelligence growth, development, and trends over the last few years and how Zendesk has helped set the bar for their development on today's episode of AI Australia.

02:00 How did Zendesk integrate AI into their already strong ticketing system? How is it changing customer-facing features?
02:40 How are Chris and his teams able to find repetitive trends in support conversations, streamline the information, and add UX elements for self-service?
05:40 What was the first machine learning project Chris worked on with Zendesk?
09:20 Why does Zendesk keep all data science in Melbourne as opposed to near their home office in San Francisco?
14:30 Do you find that talent is more readily available in Melbourne? Is it less competitive than San Francisco? What challenges do you face while recruiting in Melbourne?
20:10 What skills and traits do you look for when hiring smart people straight out of university?
22:50 When approaching problems, what critical thinking skills come into play? How can these be applied to a machine learning platform?
28:00 What technologies have been responsible for machine learning growth over the last four years?
33:10 Will humans take action to “hack” self-driving cars? What sorts of action might they take?
38:10 Where do you stand on dystopian versus optimistic points of view?
40:10 What upcoming breakthroughs are on the verge of seriously changing the world?
47:00 Can machine learning be used to teach products to change their approach based on the customer experience?
48:00 What outcomes other than job loss are possible through machine learning applications in healthcare?
49:40 What roles on a team are needed to start hiring for a machine learning business path?
50:10 What skills are sought after for early hires in data science?
50:50 How important is domain knowledge in data science?
55:00 Are encrypted backdoors a serious threat to information security?
Should legislators be allowed to make these mandatory?
Today, we’re chatting with Dr Niels Wouters, Head of Research and Emerging Practice for Science Gallery Melbourne and Research Fellow in the Interaction Design Lab at the University of Melbourne.
Dr Wouters’ research focuses on social good and the human element of technology, especially as it pertains to AI and Human-Computer Interaction. A renowned speaker across national and international media, Wouters regularly speaks about the impacts of new technology on urban life. He has been featured on The Sydney Morning Herald, ABC, BBC, The Washington Times, World Economic Forum, Dazed Digital and CNN.
Dr Wouters is the creator of Biometric Mirror, Stories of Exile, Encounters, and Street Talk. His work will be featured in a permanent Science Gallery exhibit in Melbourne.
During today’s discussions, Dr Wouters will explain the purpose behind such fascinating projects as Biometric Mirror and the implications of trusting an AI trained on subjective data. He will also lead us through the journey of Street Talk and the very human, life-changing connections made via technology placed on the outside of homes in Belgium. How can technology bring us together as a community? How might unreliable AI be used against us in the future?
Today, we will explore these questions and more with one of society’s most creative researchers on Human-Machine Interaction, Dr Niels Wouters.

- Street Talk: What happens when you equip family homes with technologies that engage the outside world? Explore the results of note printers for passersby, LED ambient noise detectors, and even a headphone that lets others hear the conversations within the home.
- How do we treat data privacy differently when looking at digital data vs. linking people in analogue ways?
- How did Dr Niels Wouters go from computer science to finance to architecture to find a home here linking science, art, ethics, and the human element?
- How is Science Gallery bringing together artists and scientists to stir interest in STEM among young adults? What’s happening with Science Gallery in Melbourne?
- Biometric Mirror: How do people react to an AI that determines their gender, age, ethnicity, weirdness, aggression, and emotional instability? What if that data were used against them for job selection or insurance rates?
- How simple is it to build an AI from an unreliable dataset? And how open are people to believing an assessment is correct simply because it was made by a machine?
- What is AI’s impact on human rights?
- What concerns does a university’s ethics review committee have surrounding a project like Biometric Mirror?
- What are the ethics behind showing AI-generated attractiveness assessments to young adults? Are young people more or less likely to accept a machine-generated assessment as fact?
- Which sci-fi future are we in the midst of creating? The Jetsons? Blade Runner? Black Mirror? Or something else entirely?
- Eliza’s “granddaughter” shows up as a special guest.
- What’s coming in the future? Will there be a Biometric Mirror 3.0? Beyond facial recognition, what would happen if Wouters hacked an Alexa to do unexpected things? How will Biometric Mirror be used as an ethics probe? What’s next for Science Gallery?
What two or three things could listeners and legislators do now to jumpstart AI ethics in Australia?
Joining us today is Zane Lackey, Co-Founder and Chief Security Officer at Signal Sciences based in New York.
Zane serves on multiple public and private advisory boards and is an investor in emerging cybersecurity companies. He is incredibly well versed in the various trends and advancements in cybersecurity and defending against attacks.
He is the author of Building a Modern Security Program, a how-to in building and scaling effective security teams.
Prior to co-founding Signal Sciences, Zane led a security team at the forefront of the DevOps/Cloud shift as Chief Security Officer of Etsy.
We also have a guest co-host today: Craig Templeton. Craig is the Chief Information Security Officer at REA Group and has spent 20 years working in cybersecurity. Safe to say, this episode is stacked with insight into the state of the security industry and what it means for both citizens and businesses.
Here’s what’s discussed in today’s episode:

- Why it’s been a rough year in the tech industry
- How the concept of defence in depth has been turned into expense in depth
- Why security teams are drowning in too much data and need to focus on what is important
- The threat to cybersecurity as a result of automation
- Privacy risks: how do we eliminate or reduce personal information data?
- The implications of data corruption
- Why people may lose trust in machines and avoid using them
- Why the simplest methods in designing defence systems can often be overlooked
- Should citizens be able to opt out of automated decisions, and would they even know?
- Why legislation needs to catch up with technology
- How can we better adopt and embrace technology such as DevOps, Cloud, AI, and machine learning?
Today, we’re chatting with AI expert, activist, and author Toby Walsh. Toby is a professor of artificial intelligence at the University of New South Wales and Data61 with an educational background from the University of Cambridge and University of Edinburgh in mathematics, theoretical physics, and artificial intelligence. He has also been Editor-in-Chief of the Journal of Artificial Intelligence Research and has chaired multiple AI conferences.
As an activist, Toby helped release an open letter, which gained over 20,000 signatures, calling for a ban on offensive autonomous weapons. Toby is also the author of multiple books on artificial intelligence, the most recent of which is “2062: The World That AI Made”, which we will discuss today.
Dr Walsh shares his thoughts on the future of AI, when and how it could surpass humans, and how little we know about how it could impact our jobs. Toby also discusses with us other ways AI can impact society, for better or for worse. How can our data privacy impact AI, and how can microtargeting using this data change the course of history? What ethics should we stand by? Who is responsible for decisions made by autonomous machines? And could government regulation actually help, rather than hinder, innovation?

- Insight into Toby's most recent book, “2062: The World That AI Made”, touted as the book to read, bar none, on AI and society.
- Will AI have consciousness? Is consciousness a biological construct?
- AI's impact on jobs. Will more jobs be displaced than created? Could AI lead to a second Renaissance when people focus on what's truly important?
- Which fictional future will AI lead us most closely to? Blade Runner, 2001: A Space Odyssey, Altered Carbon, The Jetsons, etc.?
- How did Toby become involved in AI? How did he become an activist?
- Which legal and ethical issues do we need to look at surrounding AI? AI is being used to target suspicious people for criminal surveillance. Bosses are now sometimes algorithms, as is the case with Uber.
- Should we regulate data monopolies? What is happening to our data privacy, and how is it used to manipulate us? Cambridge Analytica and Facebook have been accused of manipulating elections through microtargeting using collected personal data.
- What is the regulatory landscape in Australia? How has it been impacted by GDPR?
- How has data been used illegally to discriminate based on race, gender, and other similar factors?
- What is the potential impact of government regulation on automation in travel? How could the safety of transportation increase through regulated data sharing in automated cars and planes?
- What are the advantages of machines over humans? How will global learning change humanity?
- Do we want humans to be manipulated to such an extreme degree? Can humans be hacked? Should we regulate weapons of mass persuasion?
- How did AlphaGo create a ”Sputnik Moment” for AI? What impacts has it had? Is robotic soccer Australia's “Sputnik Moment” in AI?
For full show notes and resources head to: eliiza.com/podcast/episode-3
On this episode of AI Australia, we’re excited to be speaking with Kendra Vant. Kendra is currently the principal data scientist at SEEK, and has had an extensive and diverse career in AI/ML and data science (among other things) across insurance, banking, telecommunications, government, gaming, the airline industry, and the job board market.
With SEEK being one of the leading Australian companies in the field of AI, Kendra really is someone to pay attention to as the AI landscape unfolds. We’re honoured to have had the opportunity to speak with her about the past, present, and future of AI for the Australian technology community.
We discuss a wide range of topics, including:

- What is involved in Kendra’s role as principal data scientist at SEEK
- Some of the results Kendra and her team have seen by applying AI to their job search functions
- How Kendra got into the world of data science and software engineering
- The role of ethics in AI, and how it plays into the work Kendra is doing with her team at SEEK
- Some of the issues with bias in the hiring process, and where Kendra sees the main opportunities for removing bias using AI and ML
- The importance of keeping humans in the loop when it comes to AI initiatives. This helps humans keep machine bias at bay, and vice versa
- How SEEK goes about finding tried and true machine learning algorithms, implementing them, and scaling them. Rather than being the research and development ground for new algorithms, they are more focused on making tested algorithms scale better
- Kendra’s rule of thumb for data scientists and engineers working together: how many engineers per data scientist, what kind of engineers, etc.
- How Kendra views the state of data science and machine learning adoption and usage. There’s a lot of hype, and Kendra helps us cut through a lot of that in explaining what’s really going on in the Australian business community
- Kendra’s thoughts and concerns on the National Health Record
- The best communities and conferences to be involved with as a data scientist or AI/ML enthusiast
Today we’re speaking with Tim Miller, associate professor in the School of Computing and Information Systems at The University of Melbourne. Tim’s primary area of expertise is in artificial intelligence, with a particular emphasis on the concept of explainable Artificial Intelligence (XAI).
Tim describes his background as being at the intersection of artificial intelligence, interaction design, and cognitive science/psychology, so we knew this was going to be a fascinating conversation!
Tim does not disappoint, and we cover a wide range of topics, including:

- What explainable AI is, and why it is such a hot topic today
- How to define intelligence, and the role philosophy has to play in AI
- What can go wrong with a lack of visibility into AI, and why explainability is so important
- The current issue of trust when it comes to AI, and why the worst-case scenarios tend to get most of the media attention
- The problem of bias in AI and how we can remove it. In particular, we discuss the topic of “predictive policing” and some of the issues it presents
- Tim’s thoughts (as a leading expert on transparency in AI) on the My Health Record program
- GDPR and what it means for explainability and the AI community in general
- The concept of using AI to inspect, detect, and solve problems with other forms of AI
- Where Tim believes we are as a community in terms of AI advancement, and what needs to happen in order to take further leaps in the development of AI
- Tim’s movements and appearances at upcoming AI events
We’re so excited to announce our first guest: Ellen Broad.
Ellen is a renowned expert in the field of data ethics and AI. She has a wealth of knowledge and experience, having worked around the world advising governments and corporates on data sharing, open data, strategy and licensing.
Ellen’s writing has appeared in the New Scientist, the Guardian and a range of civil service and tech publications. She has spoken about AI and data on ABC Radio National's 'Big Ideas' and 'Future Tense' programmes, and at SXSW in the US.
Ellen’s recently published first book, Made by Humans: The AI Condition, analyses our role and responsibilities in automation and how machine learning is affecting our decisions. Roaming from Australia to the UK and the US, data expert Ellen Broad talks to world leaders in AI about what we need to do next.
During our conversation with Ellen, we explore themes such as why understanding AI is really about understanding human thought and behaviours, how to improve people’s recognition of AI and how it works, how to approach AI ethics as a business leader, plus so much more:

- How Ellen’s career journey into AI and data ethics has unfolded
- Literature that does a good job of examining ethics and has had a profound influence on Ellen
- The purpose behind Ellen’s book Made by Humans: The AI Condition
- How the concept of automation bias and taking for granted the output of a machine affects us and the decisions that we make
- How to improve people’s understanding of how machines work and what the points of failure might be
- Why it’s important to be wary of quality assurance mechanisms in AI right now
- Communicating ethics to a board of directors
- The emerging risks of new job roles in ethics
- The place of and need for external standards and systems of validation in AI, and the need for markers of quality/standardised expectations
- Guiding principles for a business leader who is evaluating AI solutions (and what to avoid)
For full show notes and resources head to eliiza.com.au/episode-1.