Episodes
-
In the second episode of a two-part miniseries on risk management, risk mitigation and risk assessment in AI learning tools, Professor Rose Luckin is away in Australia, speaking internationally, so Rowland Wells takes the reins to chat with Dr Rajeshwari Iyer of sAInaptic to hear her perspective on risk as a developer and CEO.
View our Risk Assessments here: https://www.educateventures.com/risk-assessments
In the studio:
Rowland Wells, Creative Producer, EVR
Rajeshwari Iyer, CEO and Cofounder, sAInaptic

Talking points and questions include:
- Who are these for? What's the profile of the person we want to engage with these risk assessments? They're concise, easy to read, with no technical jargon. But it's still an analysis, for people with a research/evidence mindset.
- Many people ignore it: we know that even learning tool developers who put research on their tools ON THEIR WEBSITES do not actually have it read by the public. So how do we get this in front of people? Do we lead the conversation with budget concerns? Safeguarding concerns? Value for money?
- What's the end goal of this? Are you trying to raise the sophistication of the conversation around evidence and risk? Many developers who you critique might just think you're trying to make a name for yourself by pulling apart their tools. Surely the market will sort itself out?
- What's the process involved in making judgements about a risk assessment? If we're trying to demonstrate to the buyers of these tools, the digital leads in schools and colleges, what to look for, what's the first step? Can this be done quickly? Many who might benefit from AI tools might not have the time to exhaustively hunt out all the little details of a learning tool and interpret them themselves.
- Schools aren't testbeds for intellectual property or tech interventions. Why is it practitioners' responsibility to make these kinds of evaluations, even with the aid of these kinds of assessments? Why is the tech and AI sector not capable of regulating its own practices?
- You've all worked with schools and learning and training institutions using AI tools. Although this episode is about using the tools wisely, effectively and safely, please tell us how you've seen teaching and learning enhanced by the safe and impactful use of AI.
-
In today’s episode, we have the first part of a two-part miniseries on risk management, risk mitigation and risk assessment in AI learning tools. Professor Rose Luckin is away in Australia, speaking internationally, so Rowland Wells takes the reins to chat with Educate Ventures Research team members about their experience managing risk as teachers and developers. What does a risk assessment look like, and whose responsibility is it to take its insights on board? Rose joins our discussion group towards the end of the episode, and in the second instalment of the conversation, Rowland sits down with Dr Rajeshwari Iyer of sAInaptic to hear her perspective on risk and testing features of a tool as a developer and CEO herself.
View our Risk Assessments here: https://www.educateventures.com/risk-assessments
In the studio:
Rowland Wells, Creative Producer, EVR
Dave Turnbull, Deputy Head of Educator AI Training, EVR
Ibrahim Bashir, Technical Projects Manager, EVR
Rose Luckin, CEO & Founder, EVR

Talking points and questions include:
- Who are these for? What's the profile of the person we want to engage with these risk assessments? They're concise, easy to read, with no technical jargon. But it's still an analysis, for people with a research/evidence mindset.
- Many people ignore it: we know that even learning tool developers who put research on their tools ON THEIR WEBSITES do not actually have it read by the public. So how do we get this in front of people? Do we lead the conversation with budget concerns? Safeguarding concerns? Value for money?
- What's the end goal of this? Are you trying to raise the sophistication of the conversation around evidence and risk? Many developers who you critique might just think you're trying to make a name for yourself by pulling apart their tools. Surely the market will sort itself out?
- What's the process involved in making judgements about a risk assessment? If we're trying to demonstrate to the buyers of these tools, the digital leads in schools and colleges, what to look for, what's the first step? Can this be done quickly? Many who might benefit from AI tools might not have the time to exhaustively hunt out all the little details of a learning tool and interpret them themselves.
- Schools aren't testbeds for intellectual property or tech interventions. Why is it practitioners' responsibility to make these kinds of evaluations, even with the aid of these kinds of assessments? Why is the tech and AI sector not capable of regulating its own practices?
- You've all worked with schools and learning and training institutions using AI tools. Although this episode is about using the tools wisely, effectively and safely, please tell us how you've seen teaching and learning enhanced by the safe and impactful use of AI.
-
-
In today's rapidly evolving educational landscape, Artificial Intelligence is emerging as a transformative force, offering both opportunities and challenges. As AI technologies continue to advance, it's crucial to examine their impact on student expectations, learning experiences, and institutional strategies. One pressing question is: what do students truly want from AI in education? Are they reflecting on the value of their assessments and assignments when AI tools can potentially complete them? This raises the deeper question of what we mean by student success in higher education and the purpose of knowledge in an AI-driven economy. Professor Rose Luckin is joined by three wonderful guests in the studio to discuss what tools we need to support students and how we explore the potential and the limitations of AI for education.
Guests:
Michael Larsen, CEO & Managing Director, Studiosity
Sally Wheeler, Professor, Vice-Chancellor, Birkbeck, University of London
Ant Bagshaw, Executive Director, Australian Technology Network of Universities

Talking points and questions include:
- Student expectations and perspectives on using AI for assessments/assignments, and the role of knowledge in an AI economy
- The potential of AI to enhance learning through features like instant feedback, error correction, personalized support and learning analytics
- How AI could facilitate peer support systems and student community, and the research on the value of this
- The lack of robust digital/AI strategies at many institutions as a barrier to effective AI adoption
- The evidence base for AI in education: challenges with research being highly specific/contextual, and debating the value of in-house research vs general studies
- Whether evidence on efficacy truly drives institutions' buying decisions for AI tools, or whether other factors and institutional challenges are stronger influences
- How challenges facing the education sector can inhibit capacity for innovative deployments like AI
- The growing need for proven, supportive AI tools for students despite institutional constraints
-
Coming to the fifth and final episode of our miniseries on AI for education, host Professor Rose Luckin is joined by Timo Hannay, Founder of SchoolDash, and Lord David Puttnam, Independent Producer, Chair of Atticus Education, and former member of the UK parliament's House of Lords. This episode and our series have been generously sponsored by Nord Anglia Education.
Today we’re going to look ahead to the near and far future of AI in education, and ask what might be on the horizon that we can’t even predict, and what we can do as humans to future-proof ourselves against disruptions and innovations that have, like the Covid pandemic and ChatGPT's meteoric rise, rocked our education systems and demanded we do things differently.
Guests:
Lord David Puttnam, Independent Producer, Chair, Atticus Education
Timo Hannay, Founder, SchoolDash

Talking points and questions include:
- Slow Reaction to AI: despite generative AI's decade-long presence and EdTech's rise, the education sector's response to tools like ChatGPT has been surprisingly delayed. Why?
- Learning from Our AI Response: can our current reaction to generative AI serve as a case study for adapting to future tech shifts? It's a test of our educational system's resilience.
- AI's Double-Edged Sword: with ChatGPT's rapid rise, are EdTech companies risking harm by using AI without fully understanding it? Think of Facebook's data misuse in the Rohingya massacre.
- Equipping Teachers for AI: who can educators trust for AI knowledge? We need frameworks to guide them, as AI literacy is now as crucial as internet literacy.
- Digital Natives ≠ AI-Ready: today's youth grew up online, but does that prepare them for sophisticated, accessible AI? Not necessarily.
-
Continuing our miniseries on AI in education with the fourth episode, centred on AI's potential for equity of learning, host Professor Rose Luckin is joined by Richard Culatta of ISTE, Professor Sugata Mitra, and Emily Murphy of Nord Anglia Education. This episode and our series are generously sponsored by Nord Anglia Education.
In our fourth instalment of this valuable series, we look at AI’s potential to address various challenges and bridge the educational gaps that exist among different groups of students around the world. AI can analyse vast amounts of data, provide early interventions, and enhance accessibility, and as long as the deployment of the technology is appropriate to the unique context of the school, the learners, the location, and the access to devices, AI can transform education for those who need the most support.
Guests:
Professor Sugata Mitra, Author/Professor of Educational Technology, Newcastle University
Emily Murphy, Senior PD Lead, DNA Metacognition Project, Nord Anglia Education
Richard Culatta, CEO, ISTE

Talking points and questions include:
- What do we mean by equity of learning, and how can we understand context?
- Is there a danger that AI will simply be used to reinforce or replace existing conventional methods of assessing learning, despite its great potential?
- What needs to fall into place for AI to be the promise for education we know it could be? What needs to happen for AI to be the magic bullet for equity of learning from a teacher and headteacher perspective? If the technology is there, and it has the potential it has, how can teachers build on that?
- How have different practices and innovations in the classroom been adopted and rejected? Is AI going to succeed where other initiatives and technologies have either failed to be adopted, or plateaued and fallen by the wayside? How is AI different?
- How do we talk about getting school infrastructure in place to use AI? How do we convince educationalists, budget holders and local governance that AI and other emerging technologies are worth their investment?
- There is some understandable fear about revolutionary technology disrupting existing practice in the classroom, but are we underestimating our students and teachers?
-
Continuing our miniseries on AI in education with the third episode centred around a global perspective on AI, host Professor Rose Luckin is joined by Andreas Schleicher of the OECD, Dr Elise Ecoff of Nord Anglia Education, and Dan Worth of Tes. This episode and our series are generously sponsored by Nord Anglia Education.
In our third instalment of this valuable series, we head out beyond the UK and the English-speaking world to get a global perspective on AI, and ask how educators and developers around the world build and engage with AI, and what users, teachers and learners want from the technology that might tell people back home a thing or two. We examine how international use of AI might change the way we engage with AI, and we also ask why they might be doing things differently.
Guests:
Dr Andreas Schleicher, Director for the Directorate of Education & Skills, OECD
Dr Elise Ecoff, Chief Education Officer, Nord Anglia Education
Dan Worth, Senior Editor, Tes

Talking points and questions include:
- What are other countries' tech and education ecosystems doing to develop and implement AI?
- International considerations of ethics and regulation
- Is the first world imposing its way of looking at technology and innovation on the third world? What assumptions are we making, and are we mindful of the context?
- Is the first world restricting innovation through specific regulation, changing what technology is being built and how, and who it might benefit?
- Skills and competencies development can be driven by the needs of business: what priorities for AI education exhibited by international models could the UK adopt or consider?
-
What's in this episode?
Continuing our new 5-episode miniseries on AI in education with the second episode on AI's relationship to neuroscience and metacognition, host Professor Rose Luckin is joined by Dr Steve Fleming, Professor of Cognitive Neuroscience at UCL, UK, and Jessica Schultz, Academic & Curriculum Director at the San Roberto International School in Monterrey, Mexico. This episode and our series are generously sponsored by Nord Anglia Education.
Metacognition, neuroscience and AI aren’t just buzzwords but areas of intense research and innovation that will help learners in ways that until now have been unavailable to the vast majority of people. The technologies and approaches unlocked by study in these domains, however, must not be siloed or made inaccessible to public understanding. Real work must be done to bring these areas together, and we are tremendously excited that this podcast will present a great opportunity to showcase what inroads have been made, where, why, and how.
Guests:
Dr Steve Fleming, Professor of Cognitive Neuroscience, UCL
Jessica Schultz, Academic & Curriculum Director, San Roberto International School

Talking points and questions include:
Neuroscience and AI are well-respected fields with a massive amount of research underpinning their investigation and practices, but they are also two very shiny buzzwords that the public likely only understands in the abstract (and the words may even be misapplied to things that aren't based in neuroscience or AI). Can you tell our listeners what they are, how they intersect with one another, and what benefits their crossover can provide in the realms of skills and knowledge?
Can we use one field, AI, or Neuroscience, to talk about the other, to better 'sell' the idea of the other field of study, and in this way, drastically raise the bar of what is possible to detect, uncover and assess, in education, using these domains?
In practical terms, how do we use AI and neuroscience to measure what might be considered 'unmeasurable' in learning? What data is required, what expertise in the team, or in a partner organisation, can be leveraged, who can be responsible for doing this in an educational or training institution? What data or competencies or human resource do they need access to?
Sponsorship

Thank you so much to this series' sponsor: Nord Anglia Education, the world’s leading premium international schools organisation. They make every moment of your child’s education count. Their strong academic foundations combine world-class teaching and curricula with cutting-edge technology and facilities, to create learning experiences like no other. Inside and outside of the classroom, Nord Anglia Education inspires their students to achieve more than they ever thought possible.
"Along with great academic results, a Nord Anglia education means having the confidence, resilience and creativity to succeed at whatever you choose to do or be in life." - Dr Elise Ecoff, Chief Education Officer, Nord Anglia Education
-
What's in this episode?
Delighted to launch this new 5-episode miniseries on AI in education, sponsored by Nord Anglia Education, host Professor Rose Luckin kicks things off for the Edtech Podcast by examining how we keep education as the centre of gravity for AI.
AI has exploded in the public consciousness with innovative large language models writing our correspondence and helping with our essays, and sophisticated images, music, impersonations and video generated on-demand from prompts. Whilst big companies proclaim what this technology can achieve and how it will affect work, life, play and learning, the consumer and user on the ground and in our schools likely has little idea how it works or why, and it seems like a lot of loud voices are telling us only half the story. What's the truth behind AI's power? How do we know it works, and what are we using to measure its successes or failures? What are our young people getting out of the interaction with this sophisticated, scaled technology, and who can we trust to inject some integrity into the discourse? We're thrilled to have three guests in the Zoom studio with Rose this week:
Dr Paul LeBlanc, President, Southern New Hampshire University
Dr Kate Erricker, Assistant Director of Curriculum, Nord Anglia Education
Julie Henry, Freelance Education Correspondent

Talking points and questions include:
- We often ask of technology in the classroom, 'does it work?' But when it comes to AI, preparing people to work, live and play with it will take more than whether or not it does what the developers want it to. We need to start educating those same people in HOW it works, because that will not only protect us as consumers out in the world, as owners of our own data, but help build a more responsible and 'intelligent' society that is learning all of the time, and better able to support those who need it most. So if we want that 'intelligence infrastructure', how do we build it?
- What examples of AI in education have we got so far, what areas have been penetrated, and has anything radically changed for the better? Can assessment, grading, wellbeing, personalisation and tutoring be improved with AI enhancements, and is there the structural will for this to happen in schools?
- The 'white noise' surrounding AI discourse: we know the conversation is being dominated by larger-than-life personalities and championed by global companies who have their own technologies and interests that they're trying to glamourise and market. What pushback, which reputable sources of information, layman's explanations, experts and opinions should we be listening to to get the real skinny on AI, especially for education?

Sponsorship

Thank you so much to this series' sponsor: Nord Anglia Education, the world's leading premium international schools organisation. They make every moment of your child's education count. Their strong academic foundations combine world-class teaching and curricula with cutting-edge technology and facilities, to create learning experiences like no other. Inside and outside of the classroom, Nord Anglia Education inspires their students to achieve more than they ever thought possible.
"Along with great academic results, a Nord Anglia education means having the confidence, resilience and creativity to succeed at whatever you choose to do or be in life." - Dr Elise Ecoff, Chief Education Officer, Nord Anglia Education
-
Digital Transformation! Digital Strategy! Professional Education! What do they mean, and how do we implement them in a school? In today's episode we’re very lucky to have on three wonderful guests who operate at the intersection of educational practice and the leveraging of technology for a better learning experience. They are:
James Symons, CEO, LocknCharge
Katie Novak, Education Strategist, Smart Technologies
Associate Professor Jane Hunter, School of International Studies and Education, University of Technology, Sydney

Each of these guests has a long history of working within the education space, from engineering and installing the hardware and catering to the evolving demands of schools, to leveraging the technology as a communal bridge between parents, teachers and students, and finally to researching and understanding the added value such technologies provide for teachers and learners and how they might successfully incorporate their use into daily practice.
Talking points and questions include:
- The evolving demands of the classroom: what futureproofing and future planning exists in each of your spaces to accommodate new trends and developments? For those catering to the hardware, does the school or college determine what you make, or are they, and the ways their teachers and learners perform, conditioned by you? What space is there for reciprocity between the EdTech maker and the EdTech user?
- Teacher professional education: how important is this? Surely a learning tool lives and dies by the amount of training and 'after-sales support' provided to practitioners? What is the extent of the refusal by a teacher or department to adopt the technology, and how is this overcome? Is it just waving statistics about time-saving and cost-saving in their faces, or is there a form of trust that must be engendered?
- Digital strategy: this means different things to different stakeholders. What are the commonalities that should be agreed upon for a successful rollout of technology? Obviously contextual factors are key to each school, but what are the non-negotiables? And with regard to developments like generative AI and other future trends we can't even predict yet, what kinds of guardrails need to be in place with teachers, leaders, and the developers of the tech to ensure ongoing supportive relationships with stakeholders? What foundations should be in place to support digital transformation, no matter the bumps in the road ahead?

Material discussed in today's episode includes:
LocknCharge classroom solutions to facilitate mobile device workflow and management
Improving Access & Outcomes Through Student & Teaching Voice: EdTech Assessment in the Cherry Hill Public Schools
Australia and the Digital Education Revolution
Jane Hunter: High Possibility STEM Classrooms, and Technology Integration and High Possibility Classrooms
Smart Technology: EdTech Assessment Tool
Uruguay and Ceibal Project
-
SCIENCE! Under discussion today are the ways in which students who were switched off the sciences at school manage to retain their curiosity about the subjects and can even reengage with it later in life. Professor Rose Luckin is very lucky to have in the online studio this week Dr Andrew Morris, Honorary Associate Professor at UCL, former president of the Education Section of the British Science Association, and author, whose book, Bugs, Drugs, and Three-Pin Plugs: Everyday Science, Simply Explained, is now available wherever books are sold.
Dr Morris has an interest in serving learners and the public through scientific and evidence-based outreach. The discussion in the studio centred around science, technology, research and practice in education.
Talking points and questions:
- The ways in which people who were switched off the sciences at school retain their curiosity and can reengage with science at a later point in life
- Examples of topics and ways of approaching science that have been revealed by Dr Morris’ science discussion groups
- Research-informed educational practice, and research-informed educational policy
- Ways in which research can be transformed and mediated for use

Material discussed in today's episode includes:
Smartphones in schools? Only when they clearly support learning: the 2023 Global Education Monitoring Report has just released a call for technology to be used in class only when it supports learning outcomes, and this includes the use of smartphones.
The Skinny on AI for Education, EVR's newest publication featuring insights, trends and developments in the world of AI Ed
-
Rose hosts Daisy Christodoulou, Director of Education at No More Marking in the EdTech Podcast Zoom studio this week, discussing AI regulation, evidence and effectiveness, and student outcomes in AI assessment, and what we think the future of AI-powered education might look like, and why!
In late March of this year, Professor Rose Luckin and Daisy Christodoulou spoke at the UK parliament’s Governance of Artificial Intelligence oral evidence session for education, and the discussion that took place was passionate and exciting. A link to the video of the session is below in the Show Notes if you’d like to watch it yourself. A lot of ground was covered, but not as much as they wished!
The interest in AI and its governance is very intense at the moment. The UK government had published a white paper setting out its proposed approach to the governance of AI, and the indication from the paper was that rather than give responsibility for AI governance to a single new AI regulator, it intended to empower existing regulators, several of which already exist in the education sector. Other points raised during the session included the idea of teaching a degree of scepticism in the public’s understanding of AI, meaning that the public should not believe everything that something like ChatGPT, a large language model, returns when queried. Concerns about the speed of AI development were raised, and there were questions on safeguarding, ethics, transparency, explainability, access to the technology, autonomy, adaptivity and more.
In today’s episode, we’d like to revisit those thoughts on AI regulation, evidence and effectiveness, student outcomes in AI assessment, and what we think the future of AI-powered education might look like and why…
Talking points and questions include:
- Quality of evidence for improved student outcomes using AI
- The value of assessment: how, when, why, and in what form
- More discussion around the future of education with AI’s inclusion, and what we can do now

Material discussed in today’s episode includes:
Science and Technology Committee Oral evidence: Governance of artificial intelligence – PDF transcript, with a link to the video on the first page
Daisy Christodoulou Books
Mindspark Research Impact
The Skinny on AI for Education, EVR’s newest publication featuring insights, trends and developments in the world of AI Ed
-
Rose plays host to Nina Huntemann, Chief Academic Officer of Chegg, and Lord Jim Knight, in the EdTech Podcast Zoom studio this week, attempting to understand how best to cut through the white noise surrounding AI's hype, misinformation, exaggeration and marketing, and determining just how positive for education AI can be if done responsibly.
In our previous episodes on AI, Rose has been in conversation with universities from the US and the UK, examining what the role is for emerging technologies in higher education and what capacity exists to implement AI effectively. The podcast also saw a contribution from Karine George discussing whether or not the release and widespread use of ChatGPT has actually done education a favour. Has its proliferation sparked debate about human cognition and limited understandings of AI, or initiated conversations in schools around digital transformation and strategy?
In this episode, we’d like to extend these same thoughts on AI to pedagogic effectiveness in education and academia, and how emerging technologies like AI can be incorporated into plans for companies’ commercial services.
Talking points in today's episode include:
- The development of ethical AI in commercial enterprises, and how they ensure their responsible technologies are developed
- Tensions between the wealth of AI tools available and regulation of the market and educational use of such technologies
- Assessing AI tools' effectiveness
- Cutting through the huge amount of hype, headlines and sensationalism at the heart of the communications and marketing around AI

Material discussed in today's episode includes:
Yes, AI could profoundly disrupt education, but maybe that's not a bad thing, an article in the Guardian UK newspaper by Professor Rose Luckin
Chegg's Centre for Digital Learning
The Skinny on AI for Education, EVR's newest publication featuring insights, trends and developments in the world of AI Ed
-
Karine and Rose meet this week to discuss Ofsted ratings, how AI can transform teachers' day-to-day tasks, and interview friend and colleague Dr Fiona Aubrey Smith on the recent publication of her book: From EdTech to PedTech: Changing the Way We Think About Digital Technology. Aimed at teachers and leaders looking to create greater impact on teaching and learning through the use of digital technology in schools, From EdTech to PedTech translates research on the effective integration of digital technology in education into relevant, accessible, and practical guidance for teachers and school leaders. This much-needed handbook bridges the gap between knowing ‘what works’ and knowing how to make it work for you and your learners.
Ofsted's rating can be transformative and catastrophic. Given Karine's experience as a headteacher, what does she think of its one-word proclamations? Also under discussion is the DfE's call for submission of evidence regarding the opportunities and risks of AI in education, and their recently published report on generative AI, available to view below.
Material discussed in this episode includes:
From EdTech to PedTech: Changing the Way We Think About Digital Technology
UK Department for Education: Generative AI in Education: Departmental Statement
Institute for Ethical AI in Education: Final Report
OECD: Empowering Young Children in the Digital Age
Machine Learning & Human Intelligence

To get the latest insights, trends and developments on AI for Education, subscribe to EVR's new fortnightly publication: The Skinny on AI for Education
-
The fifth and final episode in the Evidence-Based EdTech miniseries produced by Professor Rose Luckin's EDUCATE Ventures Research, exploring education, research, AI and EdTech, and hosted on The Edtech Podcast
The Evidence-Based EdTech miniseries connects, combines, and highlights leading expertise and opinion from the worlds of EdTech, AI, Research, and Education, helping teachers, learners, and technology developers get to grips with ethical learning tools led by the evidence.
In our previous episode, Rose was in conversation with representatives from Make (Good) Trouble, Feminist Internet, and Soundwaves Foundation, an organisation pursuing technology to assist deaf or hearing-impaired students in the classroom. We asked a number of questions centred around what inclusive technology looks like to each of the guests in the room, given that each brought and worked with a unique perspective, and what their thoughts were on user agency and why it is so vital that EdTech developers be mindful of it in the creation of their products. Our last question was on what we should demand of technology so that it caters to people from diverse backgrounds: is it data, context, or access that allows tech to help them?
In this episode, we’d like to extend these same thoughts on DEI and ethics outward, beyond the borders of the UK.
We'll be asking:
- Are international education ecosystems implementing diversity, equity and inclusion any differently from the UK? What could be learned from them that EdTech developers and educationalists can adopt and use in the UK?
- From an international perspective, is the technology developed in the first world but exported to the third sensitive to the context of its use, or too prescriptive? And as an additional point, has the third world reshaped its attitudes towards diversity and ethics in technology in line with what it believes the first world will find desirable or employable?
- There’s rumour of national and international standards for good evidence in EdTech coming out of some countries, with presumably varying emphasis placed on adherence to these standards by different governments and regulatory bodies. What is our guest's opinion on how robust regulation needs to be where EdTech evidence is concerned, and how strictly does she think such standards should be enforced when developing and using EdTech?

Our guest this week is Jane Mann, Managing Director of the Cambridge Partnership for Education.
With over two decades of experience in the education sector, Jane is now focused on working with ministries of education, government agencies, NGOs, donor agencies and educational organisations to advocate for, design and implement effective programmes of education transformation. The Cambridge Partnership for Education works across the globe on curriculum and assessment design and development, the creation of teaching and learning resources, professional development, stakeholder engagement, and English language learning and skills.
Thank you to Cambridge Partnership for Education for sponsoring this episode, and for supporting the Evidence-Based EdTech series on the EdTech Podcast.
-
Karine and Rose meet this week to discuss how EdTech entrepreneurs and developers can evidence the impact of their products and services, with special guests Rajeshwari Iyer and Kavitha Ravindran of sAInaptic, the AI-powered EdTech app delivering interactive, instant, and personalised learning experiences for the UK's GCSE sciences.
Also in the news are reports of 'learning poverty' as both UK and international publications warn of 'cracks in the foundations' of education: a quarter of a million children are entering secondary education without basic skills in maths and English. Why is this happening, and with regard to maths, what technology exists to help solve the problem? And how do we know whether or not this technology does what it claims?
To take part in the EDUCATE Programme, visit https://www.educateventures.com
-
Hello everyone and welcome to The Edtech Podcast and this final episode in collaboration with EdSurge.
This is the last episode in a three-part series to explore the nuances of adult lifelong learners and what sparks their return to University.
A shout out to WorkTripp and Lumina Foundation for supporting this episode, EdSurge for the amazing journalism, and great to have the learner voice front and centre in this mini-series. As always, do let us know what you think. Here we go….
-
Welcome to the fourth episode in a series produced by Professor Rose Luckin’s EDUCATE Ventures Research, exploring ‘Evidence-Based EdTech’, and hosted on The Edtech Podcast.
For this episode we will examine topics such as how we use existing technology to assist with DEI and ethics, and what we know of technology that does not include this perspective. We ask why that might be, and we look at the art of data capture and data irresponsibility: what are we capturing that we shouldn't, who is being affected by our biases, and is this a step in the development of technological interventions that organisations can afford to skip? How do we mitigate systemic bias and scaled harm? What are examples of inclusive technology that accommodate the learning styles, online behaviours, device access, and dis/abilities of learners? Can we place more pressure on leadership in schools and institutions to incorporate inclusive technologies? What do we know of user agency, and how does it affect the design and transparency of an EdTech solution?
-
Karine and Rose meet this week to discuss Internet Safety with Edurio's Ernest Jenavs, and Natterhub's Caroline Allams. The group will explore Edurio's Autumn 2022 report on Pupil Safeguarding, the reaction to Ofsted Chief Inspector Amanda Spielman's 'surprise' over mobile phone use in-class, and discuss good technology role-modelling for young people.
-
Bett is a gigantic trade show, with over 30,000 people coming to East London's ExCeL Centre every year and 600 resource and solution providers exhibiting in its massive halls. Amongst the new products, innovations, conversations and meetings, however, is the public, with that overriding question: what can I find here? This week, we invite a teacher, an educational technology researcher, and a founder and CEO to answer why they return to the show year after year, and what questions they ask of the technology on display and of the predictions made in the heart of the Bett arenas.
-
The next Bett is being billed as the best Bett ever. It’s always an important date on the education calendar, but what will make this one different? Hear what Bett is doing differently, why it’s important, and what they'll be doing to measure whether or not it works. Here’s a hint: it’s all about the data.
-