Episodes
-
Join us for the final episode of Season 2 on J&J Talk AI, where we're exploring the cutting-edge realm of multimodal vision models and their wide-ranging applications.
Multimodal vision models might sound like something out of science fiction, but they're very much a reality. Essentially, they bring together various data modalities, such as images, text, and audio, and fuse them into a cohesive model.
How does it work, you ask? Well, it's all about creating a shared space in which these modalities can communicate. Take, for example, the CLIP model, a pioneer in this field. It uses separate text and image encoders to map information from both domains into a common vector space, allowing for meaningful comparisons.
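To make that shared space concrete, here's a minimal PyTorch sketch. The encoders are stand-in linear layers, not CLIP's actual text and image towers, and every dimension is an illustrative assumption:

```python
# Minimal sketch of a CLIP-style shared embedding space (illustrative
# dimensions; real CLIP uses a Transformer text tower and a ViT or
# ResNet image tower instead of these single linear layers).
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 512  # size of the common vector space

text_encoder = nn.Linear(300, EMBED_DIM)    # stand-in text encoder
image_encoder = nn.Linear(2048, EMBED_DIM)  # stand-in image encoder

text_feats = torch.randn(4, 300)    # a batch of 4 text inputs
image_feats = torch.randn(4, 2048)  # a batch of 4 images

# Project both modalities into the shared space and L2-normalize,
# so a dot product becomes cosine similarity.
t = F.normalize(text_encoder(text_feats), dim=-1)
i = F.normalize(image_encoder(image_feats), dim=-1)

similarity = i @ t.T  # entry [a, b] compares image a with text b
print(similarity.shape)  # torch.Size([4, 4])
```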
So, why is this important? Multimodal models open doors to a wide array of applications, such as image search, content generation, and even assisting visually impaired individuals. You can also think of them as powerful tools for tasks like visual question answering, where they can analyze images and provide detailed answers.
But it doesn't stop there. These models have real-world applications, like simplifying complex tasks through interactive interfaces or bridging communication gaps by translating sign language into audio and vice versa.
And let's not forget zero-shot learning, where models tackle tasks they've never seen before, relying on their training to solve new challenges.
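Here's what zero-shot classification looks like in practice with a pretrained CLIP model via the Hugging Face transformers library; the image path and candidate labels are made up for illustration:

```python
# Zero-shot image classification with a pretrained CLIP model from the
# Hugging Face Hub (requires transformers, torch, and pillow).
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("photo.jpg")  # placeholder path to any local image
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns
# them into probabilities over labels the model never trained on.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(labels, probs[0].tolist())))
```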
While we're wrapping up Season 2, we're excited to return in a few months, so stay tuned!
-
We're continuing our discussion on generative adversarial networks, or "GANs" for short. GANs are like a magical dance between two networks – a generator and a discriminator – creating images from random noise.
Johannes explains that while GANs may seem like magic, they're a powerful tool for generating realistic images from scratch.
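For the curious, here's a toy PyTorch sketch of that dance: a generator, a discriminator, and one adversarial update for each. All shapes and hyperparameters are illustrative, not a recipe for good images:

```python
# Toy GAN in PyTorch: the generator maps noise to fake samples, the
# discriminator scores real vs. fake, and each learns from the other.
import torch
import torch.nn as nn

NOISE_DIM, IMG_DIM = 64, 784  # e.g. flattened 28x28 images

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, IMG_DIM), nn.Tanh(),
)
discriminator = nn.Sequential(
    nn.Linear(IMG_DIM, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),  # raw score; the loss applies the sigmoid
)

loss_fn = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real = torch.rand(32, IMG_DIM)  # stand-in for a batch of real images

# Discriminator step: score real images as 1, generated ones as 0.
fake = generator(torch.randn(32, NOISE_DIM)).detach()
d_loss = (loss_fn(discriminator(real), torch.ones(32, 1))
          + loss_fn(discriminator(fake), torch.zeros(32, 1)))
opt_d.zero_grad()
d_loss.backward()
opt_d.step()

# Generator step: try to fool the discriminator into scoring fakes as 1.
fake = generator(torch.randn(32, NOISE_DIM))
g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
opt_g.zero_grad()
g_loss.backward()
opt_g.step()
```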
We discuss how GANs can be used for generating realistic faces for applications like avatars in web development. Johannes also shares insights into comparing images and content using GANs, which can lead to exciting possibilities like style transfer and image manipulation.
But the magic doesn't stop there! We explore the world of neural radiance fields (NeRFs) and how they enable us to generate new perspectives from single images. Imagine being able to see what an object looks like from different angles with just one photo – it's like a superpower for augmented reality and more.
As Johannes points out, NeRFs are fantastic for compressing information and synthesizing depth and geometry. We discuss potential applications in fields like AR, 3D modeling, and even 3D printing.
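To hint at what's under the hood, here's a stripped-down sketch of NeRF's core network: a small MLP mapping an encoded 3D position to color and density. Real NeRF also conditions color on the viewing direction and renders images by integrating along rays; everything here is simplified:

```python
# Stripped-down NeRF core in PyTorch: an MLP maps an encoded 3D point
# to an RGB color and a volume density.
import torch
import torch.nn as nn

def positional_encoding(x, n_freqs=10):
    """Sin/cos features at growing frequencies, the trick that lets
    the MLP represent fine detail instead of only smooth blobs."""
    feats = [x]
    for k in range(n_freqs):
        feats += [torch.sin(2.0 ** k * x), torch.cos(2.0 ** k * x)]
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    def __init__(self, n_freqs=10):
        super().__init__()
        self.n_freqs = n_freqs
        in_dim = 3 + 2 * 3 * n_freqs  # raw (x, y, z) plus encodings
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 4),  # (R, G, B, density)
        )

    def forward(self, xyz):
        out = self.mlp(positional_encoding(xyz, self.n_freqs))
        rgb = torch.sigmoid(out[..., :3])  # colors in [0, 1]
        sigma = torch.relu(out[..., 3:])   # non-negative density
        return rgb, sigma

points = torch.rand(1024, 3)  # sample points along camera rays
rgb, sigma = TinyNeRF()(points)
```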
-
Generative AI is all about creating something new from existing data, and today, we're focusing on one exciting aspect: image-to-image translation. It's not about generating text but transforming images into something entirely different.
Think about those fun Snapchat filters that turn your face into a goofy dog – that's image-to-image translation in action. But it goes beyond that. We're talking about deep learning solutions that can take an image and make it look like a Van Gogh painting or upscale a small picture to 4K resolution.
Imagine being able to enhance your images without sending huge files over the internet. It's all about making the most of the information you have and creating something visually appealing.
And with the power of neural networks, we're tackling these challenges more effectively than ever before.
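As a flavor of how learned upscalers are built, here's a sketch in the spirit of ESPCN-style super-resolution: convolutions work at low resolution, then a sub-pixel shuffle folds extra channels into a larger image. The network is untrained and all sizes are illustrative:

```python
# ESPCN-style super-resolution sketch in PyTorch: extract features at
# low resolution, then fold extra channels into a larger image.
import torch
import torch.nn as nn

SCALE = 4  # upscale factor

model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=5, padding=2), nn.ReLU(),
    nn.Conv2d(64, 32, kernel_size=3, padding=1), nn.ReLU(),
    # Predict SCALE**2 sub-pixel values per output RGB channel...
    nn.Conv2d(32, 3 * SCALE ** 2, kernel_size=3, padding=1),
    # ...then rearrange them into a SCALE-times larger image.
    nn.PixelShuffle(SCALE),
)

low_res = torch.rand(1, 3, 270, 480)  # one small input frame
high_res = model(low_res)
print(high_res.shape)  # torch.Size([1, 3, 1080, 1920])
```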
-
What exactly is geospatial analysis, you ask? Well, in the realm of computer vision, it's all about examining images from a unique perspective – not just your typical cat and dog snapshots, but images taken by satellites from above the Earth's surface.
From fine-tuning weather forecasts by identifying cloud patterns and air pressure to predicting forest fires and efficiently delivering aid during natural disasters, the possibilities are boundless.
And let's not forget how geospatial analysis plays a crucial role in various fields, including the military and even property value estimation for insurance purposes.
But that's not all – we'll also touch on intriguing topics like reconstructing 3D structures from images.
So, whether you're a cat person or a dog lover, this episode promises to be a captivating journey through the endless landscapes of geospatial analysis.
-
In Season 2, we are back to explore practical applications of computer vision and their unique challenges. From identifying people through facial analysis software to assisting doctors in diagnosing patients from medical images, computer vision is making a big impact across various domains. Join us as we unravel the mysteries of object detection, classification, and segmentation, and discover how these techniques power self-driving cars like Tesla's Autopilot.
-
In the fifth episode, we talk about Vision Transformer architectures: an approach originally developed for language translation and only recently applied to computer vision. Because its attention mechanism takes the whole context of an input into account, it is better suited to certain tasks; see the self-attention sketch after the links below.
Terms mentioned:
Vision Transformer: https://theaisummer.com/vision-transformer/
Attention and Self Attention: https://vaclavkosar.com/ml/transformers-self-attention-mechanism-simplified
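For a hands-on feel, here's a minimal scaled dot-product self-attention function in PyTorch. Weights and dimensions are toy values; a real Vision Transformer adds patch embedding, multiple heads, and learned projections:

```python
# Minimal scaled dot-product self-attention in PyTorch: every token
# (for a ViT, every image patch) is weighed against every other one.
import torch
import torch.nn.functional as F

def self_attention(x, w_q, w_k, w_v):
    """x has shape (batch, tokens, dim)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    d = q.size(-1)
    # Softmax over similarity scores yields the context weights.
    weights = F.softmax(q @ k.transpose(-2, -1) / d ** 0.5, dim=-1)
    return weights @ v  # each output token is a context-weighted mix

dim = 64
patches = torch.rand(1, 196, dim)  # e.g. a 14x14 grid of patch embeddings
w_q, w_k, w_v = (torch.rand(dim, dim) for _ in range(3))
out = self_attention(patches, w_q, w_k, w_v)  # same shape as the input
```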
Our sponsor is askui: Automate UI workflows on every platform!
-
In the fourth episode, we talk about the prevalent deep learning architectures and their building blocks. We cover how they work mathematically and how they are stacked together to achieve specific tasks; a worked example follows the links below.
Terms mentioned:
Matrix Multiplication: https://en.wikipedia.org/wiki/Matrix_multiplication_algorithm
Activation Function: https://en.wikipedia.org/wiki/Activation_function
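As a pocket-sized illustration, here's one dense layer written out in NumPy. The shapes are arbitrary, but the matrix-multiplication-plus-activation pattern is exactly the building block that gets stacked:

```python
# One dense layer from scratch in NumPy: a matrix multiplication plus
# a bias, passed through a nonlinear activation function.
import numpy as np

def relu(x):
    return np.maximum(0.0, x)  # a common activation function

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))   # batch of 8 inputs, 16 features each
W = rng.standard_normal((16, 32))  # learned weights (random here)
b = np.zeros(32)                   # learned bias

hidden = relu(x @ W + b)  # the whole layer: matmul, bias, activation
print(hidden.shape)       # (8, 32); stack more layers to go deeper
```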
Our sponsor is askui: Automate UI workflows on every platform!
-
In the third episode, we dive deeper into deep learning and how convolutional neural networks tackle computer vision tasks; a hand-rolled convolution example follows the links below.
Terms mentioned:
Deep Learning: https://en.wikipedia.org/wiki/Deep_learning
Convolutional Neural Networks: https://en.wikipedia.org/wiki/Convolutional_neural_network
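To see the core operation without framework magic, here's a hand-rolled 2D convolution in NumPy. The edge-detection kernel is a classic hand-crafted example; a CNN learns such filters from data instead:

```python
# A 2D convolution by hand in NumPy: slide a small kernel over the
# image and take weighted sums at every position.
import numpy as np

def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(28, 28)  # stand-in grayscale image
sobel_x = np.array([[-1.0, 0.0, 1.0],
                    [-2.0, 0.0, 2.0],
                    [-1.0, 0.0, 1.0]])  # hand-crafted edge detector
edges = conv2d(image, sobel_x)
print(edges.shape)  # (26, 26)
```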
Our sponsor is askui: Automate UI workflows on every platform!
-
In the second episode, we talk about the origins of computer vision, dating back to a summer school in 1966. We also discuss classical approaches that are still used today because they excel at certain tasks, such as eye tracking; see the sketch after the links below.
Terms mentioned:
Computer Vision: https://en.wikipedia.org/wiki/Computer_vision
Kernel Methods: https://en.wikipedia.org/wiki/Kernel_method
Feature Maps: https://machinelearningmastery.com/how-to-visualize-filters-and-feature-maps-in-convolutional-neural-networks/
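In that classical spirit, here's a sketch of a non-learned eye-tracking step using OpenCV's Hough circle transform. The file name and all parameters are placeholders that would need tuning for real footage:

```python
# Classical, non-learned pupil detection with OpenCV's Hough circle
# transform; "eye.png" and all parameters are placeholders.
import cv2

gray = cv2.imread("eye.png", cv2.IMREAD_GRAYSCALE)
gray = cv2.medianBlur(gray, 5)  # smooth noise before edge detection

circles = cv2.HoughCircles(
    gray, cv2.HOUGH_GRADIENT,
    dp=1, minDist=50,        # accumulator resolution, min circle spacing
    param1=100, param2=30,   # Canny edge and accumulator thresholds
    minRadius=5, maxRadius=60,
)
if circles is not None:
    x, y, r = circles[0][0]  # strongest candidate: center and radius
    print(f"pupil at ({x:.0f}, {y:.0f}), radius {r:.0f}px")
```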
Our sponsor is askui: Automate UI workflows on every platform!
-
In the first episode, we discuss the difference between artificial intelligence, machine learning, and deep learning, and explore the concept of artificial general intelligence. Tune in to learn more about the fascinating world of AI and machine learning!
Referenced material:
Video: A.I. and Stochastic Parrots | FACTUALLY with Emily Bender and Timnit Gebru
Paper: 'On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?'
ELIZA Program: https://en.wikipedia.org/wiki/ELIZA
Terms mentioned:
Deep Learning: https://en.wikipedia.org/wiki/Deep_learning
Backpropagation: https://en.wikipedia.org/wiki/Backpropagation
Artificial General Intelligence (AGI): https://en.wikipedia.org/wiki/Artificial_general_intelligence
Technological Singularity: https://en.wikipedia.org/wiki/Technological_singularity
Our sponsor is askui: Automate UI workflows on every platform!