Episodes
In this episode, I caught up with Nicolas Gonthier to learn about the FLAIR land cover mapping challenge.
In this challenge, 20cm resolution aerial imagery was used to create high-quality annotations. This data was paired with a time series of medium-resolution Sentinel 2 images to create a rich, multidimensional dataset. Participants in the challenge were able to surpass the baseline solution by 10 points in the target metric, representing a significant step forward in land cover classification capabilities.
The dataset is now being expanded to cover a larger area and incorporate additional imaging modalities, which have been shown to improve performance on this task. Nicolas also provided important context about the objectives of the organisation running this challenge, such as the need to balance model performance with processing costs.
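To make the multimodal setup concrete, here is a minimal, illustrative sketch (not the official FLAIR baseline) of a two-branch PyTorch model that fuses a high-resolution aerial patch with a Sentinel 2 time series for segmentation. The channel counts, class count and patch sizes below are assumptions for illustration only.

```python
# Minimal sketch (not the official FLAIR baseline): a two-branch network that
# fuses a high-resolution aerial patch with a Sentinel-2 time series for
# land cover segmentation. Shapes and channel counts are illustrative only.
import torch
import torch.nn as nn

class TwoBranchSegmenter(nn.Module):
    def __init__(self, n_classes=13, aerial_ch=5, s2_ch=10):
        super().__init__()
        # Spatial branch: encode the 20 cm aerial patch.
        self.aerial_enc = nn.Sequential(
            nn.Conv2d(aerial_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Temporal branch: encode each Sentinel-2 date, then pool over time.
        self.s2_enc = nn.Sequential(
            nn.Conv2d(s2_ch, 64, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Conv2d(64 + 64, n_classes, 1)

    def forward(self, aerial, s2_series):
        # aerial: (B, aerial_ch, H, W); s2_series: (B, T, s2_ch, h, w)
        a = self.aerial_enc(aerial)
        b, t, c, h, w = s2_series.shape
        s = self.s2_enc(s2_series.flatten(0, 1)).view(b, t, -1, h, w).mean(dim=1)
        # Upsample the coarse Sentinel-2 features to the aerial resolution.
        s = nn.functional.interpolate(s, size=a.shape[-2:], mode="bilinear",
                                      align_corners=False)
        return self.head(torch.cat([a, s], dim=1))

logits = TwoBranchSegmenter()(torch.randn(2, 5, 512, 512),
                              torch.randn(2, 12, 10, 40, 40))
print(logits.shape)  # (2, 13, 512, 512)
```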
* 🖥️ FLAIR website
* 🖥️ Page on the objectives of FLAIR
* 📖 The NeurIPS paper about FLAIR
* 🤗 IGN on HuggingFace
* 🖥️ IGN datahub
* 👤 Nicolas on LinkedIn
* 📺 Video of this conversation on YouTube
Bio: Nicolas Gonthier is an R&D project manager in the innovation team at IGN, the French National Institute of Geographical and Forest Information. He received an MSc in data science from ISAE Supaero in 2017 and a Ph.D. in computer vision from Université Paris Saclay - Télécom Paris in 2021. His work focuses on deep learning for Earth observation (land cover segmentation, change detection, etc.) and computer vision for geospatial data. He participates in various research and innovation projects.
In this episode, I caught up with Marc Rußwurm to learn about Meta-learning with Meteor. Our conversation starts with a discussion of meta-learning and the training of Meteor, and how this approach differs from the typical approaches taken to train foundation models. We cover the advantages and challenges of this technique, discuss the fine-tuning of Meteor with minimal examples (as few as five) for tasks like deforestation monitoring and change detection, and consider what the future could hold for this approach. Meteor showcases the significant potential of few-shot learning for processing remote sensing imagery and proves it is possible to tackle tasks even when very few training examples are available.
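As a rough illustration of the few-shot idea discussed here (and not the actual METEOR implementation), the sketch below adapts a pretrained model to a new task using only five labelled examples and a handful of gradient steps; the backbone, task and tensor shapes are placeholders.

```python
# A minimal, generic sketch of the few-shot adaptation idea (not the actual
# METEOR code): a (meta-)pretrained model is cloned and fine-tuned on a tiny
# support set for a new task.
import copy
import torch
import torch.nn as nn

def adapt_few_shot(model, support_x, support_y, steps=20, lr=1e-2):
    """Clone a pretrained model and fine-tune it on a handful of examples."""
    task_model = copy.deepcopy(model)
    opt = torch.optim.SGD(task_model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(task_model(support_x).squeeze(-1), support_y)
        loss.backward()
        opt.step()
    return task_model

# Toy example: 5 labelled Sentinel-2 patches (13 bands, 32x32) for a new
# binary task such as deforestation vs. no change.
backbone = nn.Sequential(nn.Conv2d(13, 16, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1))
support_x = torch.randn(5, 13, 32, 32)
support_y = torch.tensor([1., 0., 1., 0., 1.])
task_model = adapt_few_shot(backbone, support_x, support_y)
```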
* 👤 Marc on LinkedIn
* 📖 Meteor Nature paper
* 💻 Meteor code on Github
* 📺 Video of this conversation on YouTube
Bio: Marc Rußwurm is Assistant Professor of Machine Learning and Remote Sensing at Wageningen University. His background is in Geodesy and Geoinformation, and he obtained a Ph.D. in Remote Sensing Technology at TU Munich. During his Ph.D., he visited the European Space Agency and the University of Oxford as a participant in the Frontier Development Lab in 2018, as well as the Obelix Laboratory in Vannes and the Lobell Lab at Stanford. As a postdoctoral researcher, he joined the Environmental Computational Science and Earth Observation Laboratory at EPFL, Switzerland. His research interests are in developing modern machine learning methods for real-world remote sensing problems, such as classifying vegetation from satellite time series and detecting marine debris in the oceans. He is interested in domain shift and transfer learning problems that arise naturally from geographic data.
In this episode, I caught up with Nils Lehmann to learn about Uncertainty Quantification for Neural Networks. The conversation begins with a discussion on Bayesian neural networks and their ability to quantify the uncertainty of their predictions. Unlike regular deterministic neural networks, Bayesian neural networks offer a more principled method for providing predictions with a measure of confidence.
Nils then introduces the Lightning UQ Box project on GitHub, a PyTorch Lightning-based tool that enables experimentation with a variety of Uncertainty Quantification (UQ) techniques for neural networks. Model interpretability is a crucial topic, essential for providing transparency to end users of machine learning models. The video of this conversation is also available on YouTube here
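To illustrate the basic idea, here is a minimal sketch of one widely used UQ technique, Monte Carlo dropout, written in plain PyTorch rather than through the Lightning UQ Box package itself; the model and data are placeholders.

```python
# A minimal sketch of Monte Carlo dropout for uncertainty quantification:
# dropout is kept active at inference time and the spread of repeated forward
# passes is used as an uncertainty estimate.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.2),
                      nn.Linear(64, 1))

def predict_with_uncertainty(model, x, n_samples=50):
    model.train()  # keep dropout stochastic at inference time
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)  # predictive mean and spread

x = torch.randn(4, 8)
mean, std = predict_with_uncertainty(model, x)
print(mean.shape, std.shape)  # torch.Size([4, 1]) torch.Size([4, 1])
```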
* Nils’s website
* Lightning UQ box on Github
* Further reading: A survey of uncertainty in deep neural networks
Bio: Nils Lehmann is a PhD student at the Technical University of Munich (TUM), supervised by Jonathan Bamber and Xiaoxiang Zhu, working on uncertainty quantification for sea-level rise. More broadly, his interests lie in Bayesian deep learning, uncertainty quantification and generative modelling for Earth observation data. He is also passionate about open-source software contributions and is a maintainer of the TorchGeo package.
In this episode I caught up with Samuel Bancroft to learn about segmenting field boundaries using Segment Anything, aka SAM. SAM is a foundation model for vision released by Meta, which is capable of zero-shot segmentation. However, there are many open questions about how to make use of SAM with remote sensing imagery.
In this conversation, Samuel describes how he used SAM to perform segmentation of field boundaries using Sentinel 2 imagery over the UK. His best results were obtained not by fine-tuning SAM, but by carefully pre-processing a time series of images into HSV colour space and using SAM without any modifications. This is a surprising result, and this kind of approach significantly reduces the amount of work necessary to develop useful remote sensing applications utilising SAM. You can view the recording of this conversation on YouTube here
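The sketch below gives a hedged impression of this kind of pipeline (it is not Samuel's exact code): temporal statistics from a Sentinel 2 stack are mapped into HSV, converted to an RGB composite, and passed to SAM's automatic mask generator without any fine-tuning. It assumes the segment-anything package and a downloaded SAM checkpoint, and the choice of NDVI statistics for the three channels is purely illustrative.

```python
import numpy as np
from matplotlib.colors import hsv_to_rgb
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

def temporal_hsv_composite(ndvi_stack):
    """ndvi_stack: (T, H, W) array of per-date NDVI values."""
    def rescale(a):
        return (a - a.min()) / (a.max() - a.min() + 1e-8)
    hsv = np.stack([rescale(ndvi_stack.mean(axis=0)),   # hue  <- temporal mean
                    rescale(ndvi_stack.std(axis=0)),    # sat  <- temporal variability
                    rescale(ndvi_stack.max(axis=0))],   # val  <- peak greenness
                   axis=-1)
    return (hsv_to_rgb(hsv) * 255).astype(np.uint8)     # (H, W, 3) uint8 RGB

image = temporal_hsv_composite(np.random.rand(12, 256, 256))
sam = sam_model_registry["vit_h"](checkpoint="path/to/sam_vit_h.pth")
masks = SamAutomaticMaskGenerator(sam).generate(image)  # candidate field masks
```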
- Samuel on LinkedIn
- https://github.com/Spiruel/UKFields
Bio: Sam Bancroft is a final year PhD student at the University of Leeds. He is assessing future food production using satellite data and machine learning. This involves exploring new self- and semi-supervised deep learning approaches that help in producing more reliable and scalable crop type maps for major crops worldwide. He is a keen supporter of democratising access to models and datasets in Earth observation and machine learning.
In this episode I caught up with Yotam Azriel to learn about interpretable deep learning. Deep learning models are often criticised for being "black box" due to their complex architectures and large number of parameters. Model interpretability is crucial as it enables stakeholders to make informed decisions based on insights into how predictions were made. I think this is an important topic and I learned a lot about the sophisticated techniques and engineering required to develop a platform for model interpretability. You can also view the video of this recording on YouTube.
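As a small, concrete example of attribution-based interpretability (using the open-source Captum library rather than Tensorleap's platform), the sketch below computes integrated gradients to show which input pixels drove a particular prediction; the model and input are placeholders.

```python
# Integrated gradients attribute a prediction back to the input pixels.
import torch
import torchvision.models as models
from captum.attr import IntegratedGradients

model = models.resnet18(weights=None).eval()
x = torch.randn(1, 3, 224, 224, requires_grad=True)   # placeholder image

ig = IntegratedGradients(model)
# Attribute the score of class 0 back to the input pixels.
attributions = ig.attribute(x, target=0, n_steps=32)
print(attributions.shape)  # same shape as the input: (1, 3, 224, 224)
```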
* tensorleap.ai
* Yotam on Linkedin
Bio: Yotam is an expert in machine and deep learning, with ten years of experience in these fields. He has been involved in massive military and government development projects, as well as with startups. Yotam developed and led AI projects from research to production and he also acts as a professional consultant to companies developing AI. His expertise includes image and video recognition, NLP, algo-trading, and signal analysis. Yotam is an autodidact with strong leadership qualities and great communication skills.
In this episode I caught up with Daniele Rege Cambrin, to learn about Earthquake detection with Sentinel-1 (SAR) images. Daniele has a key role in organising a new competition on this task, SMAC: Seismic Monitoring and Analysis Challenge. The topics covered include the logistics of organising this competition, and the lessons Daniele learned from organising a previous one. You can also view the recording of this discussion on YouTube.
- Daniele on LinkedIn
- Competition website
Bio: Daniele Rege Cambrin is currently pursuing his Ph.D. and his research interests lie in deep learning. He is particularly interested in finding efficient and scalable solutions in areas such as remote sensing, computer vision, and natural language processing. Additionally, he has a keen interest in game development, and worked on two machine-learning competitions related to change detection.
In this episode Robin catches up with Inon Sharony to learn about the fascinating world of machine learning with SAR imagery. The unique attributes of SAR imagery, such as its intensity, phase, and polarisation, provide rich information for deep learning models to learn features from. The discussion covers the innovative applications ASTERRA is developing, and the nuances of machine learning with SAR imagery. The video of this episode is available on YouTube
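As a generic illustration of working with dual-polarisation SAR intensity (not ASTERRA's pipeline), the sketch below converts linear backscatter to decibels and stacks VV, VH and their ratio as CNN input channels; the synthetic arrays stand in for real Sentinel-1 data.

```python
import numpy as np

def sar_to_channels(vv, vh, eps=1e-6):
    """vv, vh: (H, W) linear-power backscatter arrays, e.g. from Sentinel-1."""
    vv_db = 10 * np.log10(vv + eps)
    vh_db = 10 * np.log10(vh + eps)
    ratio = vv_db - vh_db            # polarisation ratio in dB
    return np.stack([vv_db, vh_db, ratio], axis=0)   # (3, H, W), tensor-ready

x = sar_to_channels(np.random.gamma(2.0, 0.05, (256, 256)),
                    np.random.gamma(2.0, 0.02, (256, 256)))
print(x.shape)  # (3, 256, 256)
```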
* https://asterra.io/
* https://www.linkedin.com/in/inonsharony/
Bio: Inon Sharony is the Head of AI at ASTERRA, where he is responsible for pushing boundaries in the field of deep learning for Earth observation. Sharony brings a decade of experience leading the development of cutting-edge AI technology that meets real-world business and product needs. His previous roles include Algorithm Group Manager at Rail Vision Ltd and R&D Group Lead & Head of Automotive Intelligence at L4B Software. He received his PhD in Chemical Physics from Tel Aviv University and combines his extensive academic background in physics with his hands-on experience in machine learning to develop strategic AI solutions for ASTERRA.
In this episode, Robin catches up with Alistair Francis and Mikolaj Czerkawski to learn about Major TOM, a significant new public dataset of Sentinel 2 imagery. Noteworthy for its immense size of 45 TB, Major TOM also introduces a set of standards for dataset filtering and integration with other datasets. Their aim in releasing this dataset is to foster a community-centred ecosystem of datasets, open to bias evaluation and adaptable to new domains and sensors. The potential of Major TOM to spur innovation in our field is truly exciting. Note you can also view the video of this recording on YouTube here. The video also includes a demonstration of accessing the dataset and a walkthrough of the associated Jupyter notebooks.
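A hedged sketch of dipping into Major TOM without downloading the full 45 TB is shown below, using the Hugging Face datasets library in streaming mode; the repository id and column names are assumptions, so check the dataset card for the exact schema.

```python
# Stream a few samples from a Major TOM subset instead of downloading it all.
# The repository id below is an assumption; see the dataset card on HuggingFace.
from datasets import load_dataset

ds = load_dataset("Major-TOM/Core-S2L2A", split="train", streaming=True)
for i, sample in enumerate(ds):
    print(sample.keys())   # e.g. per-band imagery plus grid-cell metadata
    if i == 2:
        break
```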
* Dataset on HuggingFace
* Paper
Alistair Francis is a Research Fellow at the European Space Agency’s Φ-lab in Frascati, Italy. Having studied for his PhD at the Mullard Space Science Laboratory, UCL, his research is focused on image analysis problems in remote sensing, using a variety of supervised, self-supervised and unsupervised approaches to tackle problems such as cloud masking, crater detection and land use mapping. Through this work, he has been involved in the creation of several public datasets for both Earth Observation and planetary science.
Mikolaj Czerkawski is a Research Fellow at the European Space Agency’s Φ-lab in Frascati, Italy. He received the B.Eng. degree in electronic and electrical engineering in 2019 from the University of Strathclyde in Glasgow, United Kingdom, and the Ph.D. degree in 2023 at the same university, specialising in applications of computer vision to Earth observation data. His research interests include image synthesis, generative models, and use cases involving restoration tasks of satellite imagery. Furthermore, he is a keen supporter and contributor to open-access and open-source models and datasets in the domain of AI and Earth observation.
In this video Robin catches up with Konstantin Klemmer to discuss SatCLIP, a new global, general-purpose location encoder trained on Sentinel 2 imagery. The conversation covered the training of encoders such as CLIP, and discussed the implications for downstream applications. Note you can also view the video of this recording on YouTube here
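To give a flavour of the CLIP-style training discussed (this is not the actual SatCLIP code), the sketch below pulls together image embeddings and location embeddings of the same places with a symmetric contrastive loss; the encoders and data are toy placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

image_encoder = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                              nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                              nn.Linear(16, 64))
location_encoder = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 64))

images = torch.randn(8, 3, 64, 64)        # Sentinel-2 patches (toy shapes)
lonlat = torch.rand(8, 2) * 2 - 1         # normalised lon/lat for the same patches

img_emb = F.normalize(image_encoder(images), dim=-1)
loc_emb = F.normalize(location_encoder(lonlat), dim=-1)

logits = img_emb @ loc_emb.T / 0.07       # similarity matrix with temperature
labels = torch.arange(len(images))        # matching pairs sit on the diagonal
loss = (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels)) / 2
loss.backward()
```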
* Konstantin on LinkedIn
* SatCLIP
Bio: Konstantin is a postdoctoral researcher at Microsoft Research New England. His research interests lie broadly within geospatial machine learning and bridging adjacent domains like remote sensing or spatial statistics. Konstantin has a PhD from the University of Warwick and NYU, a Master's from Imperial College London and an undergraduate degree from the University of Freiburg, Germany.
In this episode Robin catches up with James Gallagher to learn about the latest AI innovations reshaping image annotation. The conversation covered significant new models such as Segment Anything, GroundingDINO and RemoteCLIP, and discussed how these models can be linked together to enable new annotation capabilities. Note you can also view the video of this recording on YouTube here
* James on LinkedIn
* Autodistill on Github
* Roboflow
Bio: James is a technical marketer at Roboflow, and has written over 100 guides on computer vision, covering areas from CLIP to dataset distillation and model evaluation. He also maintains several open source software packages at Roboflow, including Autodistill, a framework for auto-labelling images. In his free time, James has a unique hobby; he maintains a website that catalogues pianos available for public use in airports around the globe at airportpianos.org
In this episode, Robin catches up with Yosef Akhtman to discuss super resolution with satellite imagery. Super resolution is a technique for transforming an image with 10 m pixels into an image with 1 m pixels. While this method has some sceptics, its potential to improve analytics on the imagery is undeniable. Note you can also view the video of this recording on YouTube here
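For readers new to the topic, here is a minimal sketch of learned single-image super-resolution (not the method discussed in the episode): a tiny network with a sub-pixel PixelShuffle head upscales a patch by 4x, which can be compared against a plain bicubic baseline; the band count and sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySR(nn.Module):
    def __init__(self, channels=4, scale=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, channels * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),   # rearranges channels into a 4x larger grid
        )

    def forward(self, x):
        return self.body(x)

lowres = torch.randn(1, 4, 64, 64)            # e.g. 10 m Sentinel-2 bands
learned = TinySR()(lowres)                    # (1, 4, 256, 256)
baseline = F.interpolate(lowres, scale_factor=4, mode="bicubic",
                         align_corners=False)
print(learned.shape, baseline.shape)
```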
* Yosef on LinkedIn
* Medium article: Sentinel-2 Deep Resolution 3.0
* More resources on super-resolution
Bio: Yosef Akhtman is an independent researcher with in-depth expertise in remote sensing, Earth observation, sensor fusion, hyperspectral imaging and deep learning. He is the founder of Gamma Earth, a company focused on environmental intelligence solutions including satellite imaging data enhancement, atmospheric calibration and cloud removal, as well as of MineFree and Gamaya, a Swiss startup in the field of smart farming. Before establishing Gamaya, Yosef managed international applied research projects in the UK and Switzerland, spanning remote sensing, mobile robotics and environmental monitoring.
A large fraction of acquired satellite images contain 2D projections of Earth. However, for many downstream applications, 3D understanding is beneficial or necessary. In recent years, deep learning has enabled a number of solutions for learning 3D representations from 2D satellite images.
This episode delivers an overview of some of the prominent works in this area. Mikolaj hosts three guests: Dawa Derksen, Roger Marí, and Yujiao Shi, with a summary of each guest's contributions on the topic as well as a panel discussion. Note you can also view the video of this recording on YouTube here
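For context, the sketch below is a heavily simplified version of the core NeRF idea that the works in this episode build on (it is not Shadow-NeRF, Sat-NeRF or EO-NeRF): an MLP maps a 3D point to a colour and a density, and colours are composited along each camera ray.

```python
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                 nn.Linear(hidden, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 4))   # (r, g, b, sigma)

    def forward(self, pts):
        out = self.mlp(pts)
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])

def render_ray(model, origin, direction, n_samples=64, near=0.0, far=1.0):
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction          # sample points along the ray
    rgb, sigma = model(pts)
    delta = (far - near) / n_samples
    alpha = 1 - torch.exp(-sigma * delta)          # opacity of each segment
    transmittance = torch.cumprod(
        torch.cat([torch.ones(1), 1 - alpha + 1e-10])[:-1], dim=0)
    weights = alpha * transmittance
    return (weights[:, None] * rgb).sum(dim=0)     # composited pixel colour

pixel = render_ray(TinyNeRF(), torch.tensor([0., 0., 1.]),
                   torch.tensor([0., 0., -1.]))
print(pixel)  # predicted RGB for one ray
```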
Dawa Derksen - Origins of Shadow-NeRF
Dawa pursued a post-doctoral research fellowship at the European Space Agency from 2020 to 2022, and is currently working at the Centre National d’Etudes Spatiales (the French Space Agency), where he is involved in the field of 3D implicit representation learning applied to remote sensing.
* 🖥️ Shadow-NeRF
Roger Marí - EO-NeRF
Roger is a post-doc researcher from Barcelona specialised in 3D vision tasks. He is currently working at the Centre Borelli, ENS Paris-Saclay, in France, where his research topic is the application of neural rendering methods to satellite image collections. He is the author of Sat-NeRF and EO-NeRF, some of the first models in the literature to provide quantitatively convincing results in terms of surface reconstruction.
* 🖥️ https://rogermm14.github.io/
* 🖥️ EO-NeRF
Yujiao Shi - Connecting Satellite Image with StreetView
Yujiao is a research fellow at the Australian National University. She obtained her PhD degree at the same institute. Her research interests include satellite image-based localisation, cross-view synthesis, 3D vision-related tasks, and self-supervised learning.
* 🖥️ https://shiyujiao.github.io/
* 📖 Geometry-Guided Street-View Panorama Synthesis from Satellite Imagery
Host & Production: Mikolaj Czerkawski
https://mikonvergence.github.io
In this episode Robin catches up with Jake Wilkins to learn about Deep learning in Google Earth Engine. Jake has been building commercial Earth Engine applications for the past three years and in this conversation he describes the pros and cons of several approaches to using deep learning models with Earth Engine. Note you can also view the video of this recording on YouTube here
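One approach in this space (not necessarily the one Jake recommends) is to export analysis-ready imagery from Earth Engine and run the deep learning model outside it; the hedged sketch below exports a Sentinel 2 composite to Drive with the Earth Engine Python API, with the region, dates and band selection chosen purely for illustration.

```python
import ee

ee.Initialize()

region = ee.Geometry.Rectangle([-0.2, 51.4, 0.0, 51.6])   # example AOI (London)
composite = (ee.ImageCollection("COPERNICUS/S2_SR_HARMONIZED")
             .filterBounds(region)
             .filterDate("2023-06-01", "2023-09-01")
             .filter(ee.Filter.lt("CLOUDY_PIXEL_PERCENTAGE", 20))
             .median()
             .select(["B2", "B3", "B4", "B8"]))

task = ee.batch.Export.image.toDrive(
    image=composite,
    description="s2_composite_for_dl",
    region=region,
    scale=10,
    maxPixels=1e9,
)
task.start()   # the GeoTIFF can then be tiled into patches for model inference
```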
* Jake on LinkedIn
* https://earthengine.google.com/
Bio: Jake is a Software Engineer and Data Scientist based in London, UK. He has been building commercial Google Earth Engine applications for the past three years. His significant contributions include the no-code platform, Earth Blox, and the climate monitoring platform STRATA for UNEP (United Nations Environmental Programme). Alongside this, Jake has consistently developed his skills in machine learning, and a notable accomplishment in this field is winning the Earth-i hackathon last year. Jake has a deep passion for addressing the climate crisis and is committed to making Earth Observation more accessible to combat it.
In this episode Robin catches up with Roberto Del Prete to learn about PyRawS. PyRawS is a powerful open-source Python package that provides a comprehensive set of tools for working with Sentinel 2 raw imagery, including band coregistration, geo-referencing, data visualisation, and image processing. What is particularly exciting is that this software could be deployed onto future satellites, enabling on-board processing using Python. Note you can also view the video of this recording on YouTube here
* https://github.com/ESA-PhiLab/PyRawS
* https://www.linkedin.com/in/roberto-del-prete-8175a7147/
Bio: Roberto Del Prete is a PhD candidate focused on expanding the uptake of Deep Learning for enhancing the applications of onboard edge computing. His aim is to improve decision-making in time-critical scenarios by reducing the time lag required to process and deliver useful information to the ground. He is also working on developing autonomous spacecraft navigation systems using onboard instruments like cameras. Through his research he wants to contribute to the advancement of AI technology and its real-world applications, pushing the boundaries of what is possible to accomplish onboard.
In this episode Robin catches up with Nathan Kundtz to learn about the creation and use of synthetic image data in training machine learning models. Nathan has a PhD in physics, and over 40 peer-reviewed papers and 15 patents to his name. As a serial entrepreneur, he has successfully founded multiple companies and raised over $250 million in venture capital funding. Note you can also view the video of this recording on YouTube here
* Nathan LinkedIn
* rendered.ai
* DIRSIG
In this episode I caught up with Derek Ding, co-founder of the company developing Orbuculum, to learn more about this innovative new platform. What makes Derek's story even more intriguing is that he doesn't have a traditional background in remote sensing. However, fuelled by ambition and a desire to introduce new technologies, he is determined to transform the landscape of the Earth observation data market. My conversation with Derek was thought-provoking, and offered valuable insights into the innovative possibilities within our field. I hope you enjoy this episode. Please note the video is also available on YouTube
* 🖥️ Orbuculum website
* 📺 Demo video of Orbuculum platform
* 🗣️ Orbuculum Discord
* 💻 Orbuculum Github
* 🐦 Orbuculum Twitter
In this episode, Robin catches up with Ryan Avery to learn about the machine learning workflow at Development Seed. The making of this episode was inspired by a three-part blog series Ryan has authored on the ML tooling stack used at Development Seed. Please note the video is also available on YouTube
- https://developmentseed.org/blog/2023-04-13-ml-tooling-3
- https://www.linkedin.com/in/ryan-avery-75b165a8/
Bio: Ryan is an expert in developing machine learning-powered services for processing satellite and camera trap imagery, and he is deeply passionate about leveraging machine learning to enhance environmental outcomes and improve livelihoods. In addition to his work at Development Seed, Ryan has made significant contributions to open-source software. These include a comprehensive two-day geospatial Python curriculum, an image segmentation model service, and a TorchServe deployment of MegaDetector for wildlife monitoring.
In this video, Robin catches up with Michael Bewley to hear about the use of AI at Nearmap. Nearmap captures very high resolution aerial imagery and Michael and his team have trained a single segmentation model to identify 78 different target layers in the imagery. These layers can then be displayed on a map or accessed via an API. Please note the video is also available on YouTube
* Michael on LinkedIn
* Nearmap
* Nearmap AI docs
Bio: Michael is the Vice President of AI and Computer Vision at Nearmap. He's worked as a data scientist in a range of areas including medical devices, underwater robotics and banking. For the last six years, he's been building machine-learning-based products on top of Nearmap's technology stack of Australian-designed aerial imaging cameras and one of the biggest aerial capture, photogrammetry and 3D reconstruction programs in the world.
Join Robin in a career chat with Martha Morrisey, a senior machine learning engineer at Pachama, a company leveraging remote sensing data and machine learning to fight climate change by monitoring carbon capture and storage projects in forests. In this episode, Martha shares her career journey and provides further insight into the role of a machine learning engineer.
* Martha on LinkedIn
* Pachama website
* Video on YouTube
Bio: Martha is a senior Machine Learning Engineer at Pachama. Prior to Pachama, Martha worked at Development Seed and Maxar. Martha has an undergraduate degree in Geography from UC Berkeley, and a master's degree in Geography from the University of Colorado, Boulder. Outside of work, Martha loves spending time outside cycling, running, and attempting to take her cat on walks!
In this episode, Robin catches up with Gilberto Camara to talk about SITS. SITS is an open-source R package for land use and land cover classification of big Earth observation data using satellite image time series. Gilberto is a Senior Researcher in GIScience, Geoinformatics, Spatial Data Science and Land Use Change at Brazil’s National Institute for Space Research.
* https://github.com/e-sensing/sits
* https://gilbertocamara.org/
* Video on YouTube
Bio: Prof. Dr. Gilberto Câmara is a Brazilian researcher in Geoinformatics, GIScience, Spatial Analysis, and Land Use Modelling, who works at Brazil's National Institute for Space Research (INPE). He is internationally recognised for promoting free access to geospatial data and for setting up efficient satellite monitoring of the Brazilian Amazon rainforest. After retiring from INPE in June 2016, following 35 years of work, he continues to conduct R&D activities at INPE as a Senior Research Fellow.
Logo animation and thumbnail credits: Mikolaj Czerkawski @mikonvergence