Episodes
-
We all have thoughts of the future. Some of us will only think of it in passing, but others will spend months or even years contemplating the endless possibilities.
Kazuo Ishiguro’s vision for the future, beautifully presented in his latest book, ‘Klara and the Sun,’ shows an excellent level of thought and research. The British novelist presents an emotionally nuanced concept of what it means to be human or non-human.
In this episode of Short and Sweet AI, I discuss Ishiguro’s latest book and its depiction of robots and artificial intelligence. I also delve into what immortality could look like for humans – will it be robots in our future or something different?
In this episode, find out:
What Ishiguro got right and wrong about the future of robots and AI
How Ishiguro depicts robots and the future of work
The debate about immortality – robots vs. the cloud
The ethical considerations of human-like robots
Important Links & Mentions:
Neuralink Update
The Nobel Prize: Kazuo Ishiguro
Resources:
The Atlantic: The Radiant Inner Life of a Robot
Wired: The Future of Work: ‘Remembrance,’ by Lexi Pandell
CNN International: Kazuo Ishiguro asks what it is to be human
Waterstones: Kazuo Ishiguro on Klara and the Sun
Episode Transcript:
Hello to you who are curious about AI, I’m Dr. Peper.
We all have thoughts about the future; some of us think about it in passing, and some spend months and years thinking about it. Kazuo Ishiguro’s vision, beautifully presented in his latest book, Klara and the Sun, shows much thought and research. This British novelist presents emotionally nuanced concepts about what it means to be human and not human. I’m not an artificial intelligence expert nor a Nobel Prize-winning author like Ishiguro. But I am someone who’s fascinated by artificial intelligence and wants people to understand what AI means for our future. From that perspective, I’ve identified three things Ishiguro got right, and two things I think he got wrong, in his new book Klara and the Sun.
First, his depiction of Klara, an artificial friend, or robot, meshes with my understanding of what robots will be like in the future. They will have the ability to understand and integrate information and to read and understand human emotions, at times surpassing the humans around them. With exposure to more human situations and more human observations, robots will increase and refine their emotional abilities. They’ll have true feelings, not simulated ones.
The second thing Ishiguro gets right is the future of work. Machines will substitute for humans as they take on more and more of the work. Humans will be displaced and, just as in the novel, people will struggle to redefine their role in society and find new meaning.
And the third thing Ishiguro accurately writes about is the inequality between children whose parents choose and can afford gene editing before birth, described in the novel as the lifted kids, and the non-lifted kids, whose parents can’t afford it or choose against it. I think this will be a real possibility in the near future. There will also be major inequalities in wealth,...
-
What is Liquid AI, and could it prove more effective than other types of AI?
New research into neural nets and algorithms has revealed what some call “Liquid AI,” a more fluid and adaptable version of artificial intelligence.
In my previous episode, I discussed the basics of AI and the limitations that hold it back. It looks like Liquid AI could provide the very solutions that the AI community has been searching for.
In this episode of Short and Sweet AI, I explore the new research behind Liquid AI, how it works, and what it does better than other types of AI.
In this episode find out:
The limitations of traditional neural networks in AI
How researchers created Liquid AI
How Liquid AI differs from other types
How Liquid AI solves the limitations of computing power with smaller neural nets
Why Liquid AI is more transparent and easier to analyze
Important Links & Mentions
A Simple Explanation of AI
AlphaFold & The Protein Folding Problem
What is DALL·E?
Resources:
SingularityHub: New ‘Liquid’ AI Learns Continuously from Its Experience of the World
Analytics Insight: Why is a ‘Liquid’ Neural Network from MIT a Revolutionary Innovation?
TechCrunch: MIT researchers develop a new ‘liquid’ neural network that’s better at adapting to new info
Episode Transcript:
Hello to you who are curious about AI, I’m Dr. Peper. Machine learning algorithms are getting an overhaul from a very unlikely source. It’s a fascinating story.
Neural Nets have Traditional Limitations
Neural nets are the powerhouse of machine learning. They can translate whole books within seconds with Google Translate, change written text into images with DALL·E, and discover the 3D structure of a protein in hours with AlphaFold. But researchers have struggled with neural networks because of their limitations.
Neural nets cannot do anything other than what they’re trained for. They’re programmed with parameters set to give the most accurate results. But that makes them brittle, which means they can break when given new information they weren’t trained on. Today the deep learning neural nets used in autonomous driving have millions of parameters. And the newest neural nets are so complex, with hundreds of layers and billions of parameters, that they require very powerful supercomputers to run the algorithms.
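To get a feel for why parameter counts balloon, here is a rough sketch, in Python with made-up layer sizes, of how the weights of a fully connected network are counted:

```python
# Rough parameter count for a fully connected network.
# The layer sizes below are hypothetical, chosen only to show how
# counts explode as networks get deeper and wider.
def dense_param_count(layer_sizes):
    """Weights plus biases between consecutive fully connected layers."""
    total = 0
    for fan_in, fan_out in zip(layer_sizes, layer_sizes[1:]):
        total += fan_in * fan_out + fan_out  # weight matrix + bias vector
    return total

small_net = [100, 50, 10]   # a toy classifier
wide_net = [1000] * 10      # ten layers of 1,000 units each

print(dense_param_count(small_net))  # 5560
print(dense_param_count(wide_net))   # already over 9 million
```

Scale the toy numbers up to hundreds of layers and the billions of parameters mentioned above follow quickly.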
A Neuroplastic Neural Net based on a Nematode
Now researchers from MIT and the Institute of Science and Technology Austria have created a new, adaptive neural network they’re describing as “liquid” AI. The algorithm’s based on the nervous system of a simple worm, C. elegans. And elegant it truly is. This worm has only 302 neurons, yet it’s very responsive, with a variety of behaviors. The teams were able to mathematically model the worm’s neurons and build them into a neural network. I’ve explained neural networks in my previous episode called A Simple Explanation of...
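The "liquid" networks model each neuron as a small differential equation whose behavior shifts with the input. Here is a heavily simplified, single-neuron sketch of that idea; all constants are hypothetical illustration values, not taken from the researchers' paper:

```python
import math

# Heavily simplified sketch of a "liquid"-style neuron: its state follows
# a differential equation driven by the input, integrated step by step.
# All constants (tau, a, w, dt) are hypothetical illustration values.
def liquid_neuron(inputs, tau=1.0, a=1.0, w=1.0, dt=0.01):
    x = 0.0
    trace = []
    for i in inputs:
        gate = math.tanh(w * i)           # input-dependent nonlinearity
        dx = -x / tau + gate * (a - x)    # state decays, but is pulled toward 'a' when driven
        x += dt * dx                      # Euler integration step
        trace.append(x)
    return trace

# Feed a constant input; the state settles to an equilibrium between 0 and 1.
trace = liquid_neuron([1.0] * 1000)
print(round(trace[-1], 3))
```

Because the state is computed by integrating an equation rather than by a fixed lookup, the neuron keeps adapting as its input stream changes, which is the intuition behind calling the network "liquid."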
-
What is AI really, and how does it work?
If you are interested in AI, you’ll undoubtedly know that many of the concepts are a bit overwhelming. There is plenty of terminology to understand, such as machine learning, deep learning, neural networks, algorithms, and much more.
With the world of AI continually evolving, it’s good to go over some of the basic concepts to better understand how it’s changing.
In this episode of Short and Sweet AI, I address some of the questions that I get asked a lot: what is AI? How does AI work? I also delve into some of the limitations of AI and their possible solutions.
In this episode find out:
How AI works
What machine learning and neural networks are
How deep learning works
The limitations of AI
How AI neuroplasticity could solve the limitations of AI
Important Links & Mentions:
AlphaFold & The Protein Folding Problem
What is Machine Learning?
Resources:
SAS: Neural Networks: What they are & why they matter
ExplainThatStuff: Neural networks
Quanta Magazine: Artificial Neural Nets Finally Yield Clues to How Brains Learn
Episode Transcript:
Hello to you who are curious about AI. I’m Dr. Peper.
If you’re listening to this, you probably think AI’s interesting and important, like I do. But sometimes I find the concepts are a little overwhelming. I want to go over something I get asked a lot. People ask me, what is AI really, and how does it work? Actually, there are new things going on with how AI works. So, it’s good to go over some of the basic concepts in order to understand the way AI is changing.
How does AI work?
Artificial intelligence happens with computers. They’re programmed using algorithms. Algorithms are step-by-step instructions telling the computer what to do to solve a problem, just like a recipe has specific steps you follow in sequence to bake a cake or cook something. Computer scientists write algorithms using a programming language the computer understands. These computer languages have strange names like Python or C++.
The computers also perform math calculations, or computations, to analyze the information and give an answer. This is known as computational analysis. Basically, the programming language and math calculations are computer software. Using this software, the algorithms come up with an answer from data sets fed into the computer.
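As a tiny illustration, here is an algorithm written in Python: step-by-step instructions the computer follows, much like a recipe, to average a set of exam scores:

```python
# A minimal example of an algorithm: step-by-step instructions the
# computer follows to solve a problem, here averaging exam scores.
def average(scores):
    total = 0
    for s in scores:            # step 1: add up every score
        total += s
    return total / len(scores)  # step 2: divide by how many there are

print(average([70, 80, 90]))  # 80.0
```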
Machine Learning is a type of AI
The major AI being used today is called machine learning. Machine learning is carried out by artificial neural networks, or nets for short. Neural nets underpin the most advanced artificial intelligence being used today. They’re called neural networks because they’re based in part on the way neurons in the brain function. In the brain the neuron receives inputs or information, processes the information, and then gives a result or output.
Artificial intelligence uses digital models of brain neurons. These are artificial neurons, based on the computer binary code of ones and zeros. The digital neurons process information and then pass it along to other higher layers of processing. Higher, meaning the results become more specific, just like in the brain.
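A minimal sketch of one such artificial neuron in Python; the weights and inputs are arbitrary illustration values, not taken from any real network:

```python
import math

# One artificial neuron: weigh the inputs, sum them, squash the result.
# Weights, bias, and inputs here are arbitrary illustration values.
def neuron(inputs, weights, bias):
    total = bias + sum(i * w for i, w in zip(inputs, weights))
    return 1 / (1 + math.exp(-total))  # sigmoid: output between 0 and 1

# A "layer" is just several neurons reading the same inputs; their
# outputs become the inputs to the next, higher layer of processing.
layer_out = [neuron([0.5, 0.2], w, 0.0)
             for w in ([1.0, -1.0], [0.5, 0.5])]
print(layer_out)
```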
Deep Learning is a type of...
-
Microscopic robots might sound like the plot of a futuristic novel, but they are very real.
In fact, nanotechnology has been a point of great interest for scientists for decades. In the past few years, research and experimentation have seen the science of nanotechnology develop in new and fascinating ways.
In this episode of Short and Sweet AI, I delve into the topic of microscopic robots. The possibilities and capabilities of nanobots are something to keep a watchful eye on as research into nanotechnology starts to pick up speed.
In this episode, find out:
What microscopic robots are
How new research into nanotechnology has improved nanobot design
Why nanobots use similar technology to computer chips
The possibilities of nanobots for healthcare
How nanotechnology could connect humans to technology and the Cloud
Important Links & Mentions
Super Sad True Love Story by Gary Shteyngart
The Singularity is Near
March of the Microscopic Robots
The Future of Work: ‘Remembrance,’ by Lexi Pandell
Resources:
Singularity Hub: An Army of Microscopic Robots Is Ready to Patrol Your Body
Interesting Engineering: Nanobots Will Be Flowing Through Your Body by 2030
Episode Transcript:
Today I’m talking about microscopic robots.
In the book Super Sad True Love Story by Gary Shteyngart, set in the future, wealthy people pay for life extension treatments. These are called “dechronification” methods and include infusions of “smart blood” which contain swarms of microscopic robots. These tiny robots are about 100 nanometers long and rejuvenate cells and remodel major organs throughout the body via the bloodstream. In this way the wealthy live for over a century.
That book was my first introduction to the idea of microscopic robots, also known as nanobots, more than a decade ago. Nanotechnology is more than a subplot in a futuristic novel. It’s an emerging field of designing and building robots which are only nanometers long. A nanometer is 1,000 times smaller than a micrometer. Atoms and molecules are measured in nanometers. For example, a red blood cell is about 7,000 nanometers across, while a DNA molecule is about two and a half nanometers wide.
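Putting those scales into simple arithmetic, using the figures from the paragraph above:

```python
# The scales from the text, all expressed in nanometers.
MICROMETER_NM = 1000       # 1 micrometer = 1,000 nanometers
red_blood_cell_nm = 7000   # width of a red blood cell
dna_width_nm = 2.5         # width of a DNA molecule

# A red blood cell spans several micrometers...
print(red_blood_cell_nm / MICROMETER_NM)  # 7.0 micrometers
# ...and is thousands of times wider than a DNA molecule.
print(red_blood_cell_nm / dna_width_nm)   # 2800.0
```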
The father of nanotechnology is considered to be Richard Feynman who won the Nobel prize in physics. He gave a talk in 1959 called “There’s Plenty of Room at the Bottom.” The bottom he’s referring to is size, specifically the size of atoms. He discussed a theoretical process for manipulating atoms and molecules which has become the core field of nanoscience.
The microscopic robots are about the size of a cell and are based on the same basic technology as computer chips. But creating an exoskeleton for robotic arms and getting these tiny robots to move in a controllable manner has been a big hurdle. Then, in the last few years, Marc Miskin, a professor of electrical and systems engineering, and his colleagues used a fresh, new design concept.
They drew on 50 years of microelectronics and circuit-board technology to create limbs for the robots, and used a power source in the form of tiny solar panels on each robot’s back. By shining lasers on the solar panels, they can control the robot’s...
-
Is a world without work a reality we need to prepare for?
In my last episode, I discussed whether the fear of machines taking over jobs was truly misplaced anxiety, as experts say. Experts believe that there’s no cause for alarm, but not everyone agrees.
Some believe that a future where human workers become obsolete is a real possibility we need to prepare for.
In this episode of Short and Sweet AI, I delve into the theory that our future will be a world without work. I discuss Daniel Susskind’s fascinating book, ‘A World Without Work,’ which explores the topic of technological unemployment in great detail.
In this episode, find out:
What Daniel Susskind believes about the future of work
How machines can replicate even cognitive skills
Theories on how society could adapt to a world without work
How we could live a meaningful life without work
Important Links & Mentions
A World Without Work
The Future of Work: Misplaced Anxiety?
How to Train Your Emotion AI
Resources
Oxford Martin School: "A world without work: technology, automation and how we should respond" with Daniel Susskind
TED: 3 myths about the future of work (and why they're not true) | Daniel Susskind
The New York Times: Soon a Robot Will Be Writing This Headline
Episode Transcript:
Hello to you who are curious about AI. I’m Dr. Peper and today I’m talking about a world without work.
In my last episode, I talked about the future of work. Economists, futurists, and AI thinkers generally agree that technological unemployment is not a real threat. Our anxiety about machines taking our jobs is misplaced. There have been three centuries of technological advances and each time, technology has created more jobs than it destroyed. So, no need for alarm.
But Daniel Susskind, an Oxford economist and advisor to the British government, thinks this time, with artificial intelligence, the threat really is very real. He wants us to start discussing the future of work because, as he sees it, the future of work is A World Without Work, which is the title of his recent book. He explains why what’s been called a slow-motion crisis of losing jobs to machines and automation needs to be discussed now: it really isn’t slow-motion anymore.
Despite the increased productivity and GDP that artificial intelligence brings, Susskind presents evidence that technological unemployment is coming. As he says, we don’t need to solve the mysteries of how the brain and mind operate to build machines that can outperform human beings.
Machines have been taking over jobs requiring manual abilities for decades. It’s happening now. Although the American manufacturing economy has grown over the past few decades, it hasn’t created more work. Manufacturing produces 70 percent more output than it did in 1986 but requires 30 percent fewer workers to produce it.
More importantly, machines are...
-
Are you anxious that a machine will one day replace your job? It’s a common enough fear, especially with the rate technology is advancing.
If you have watched any of my previous episodes, you will know that technology is accelerating exponentially! We will see the equivalent of 20,000 years of technology in just one century.
Naturally, people worry about what this means for the future of work. Will human workers become obsolete one day?
In this episode of Short and Sweet AI, I explore “technological unemployment” in more detail and whether it’s something we should be concerned about.
In this episode find out:
Why some experts think the anxiety over technological unemployment is misplaced
Why economists and AI experts are optimistic about AI’s impact on jobs
How AI could contribute to job creation and loss
The surprising impact technology has on certain job roles
Important Links & Mentions:
What will the future of jobs be like?
VICE Special Report: The Future of Work
Resources:
The Takeaway: What Happens Next: The Future of Work
Council on Foreign Relations: Discussion of HBO VICE Special Report: The Future of Work
Daniel Susskind’s book: A World Without Work
Episode Transcript:
Hello to you who are curious about AI. I’m Dr. Peper and today I’m talking about the future of work.
For centuries there’ve been predictions that machines would put people out of work for good and give rise to technological unemployment. If you’ve been listening to my episodes you know that technology today is accelerating exponentially. We are living at a time when many different types of technology are all merging and accelerating together. This is creating enormous advances which some have said will lead to the equivalent of 20,000 years of technology in this one century. And experts are asking what does that mean for the future of work?
Historians, economists, and futurists describe the anxiety about new machines replacing workers as a history of misplaced anxiety. Three hundred years of radical technological change have passed and there is still enough work for people to do. The experts say, yes, technology leads to the loss of jobs, but ultimately more new jobs are created in the process. Automation and the use of machines increases productivity which leads to creation of new jobs and increased GDP.
A well-known example is the rise of ATMs in the 1990s, which led to many bank tellers losing their jobs. But at the same time, ATMs enabled banks to increase their productivity and profits, which led to more branches being opened and more bank tellers being hired. The bank tellers now spent their time carrying out more value-added, non-routine tasks.
In the early industrial revolution, when mechanical looms were introduced, many highly skilled weavers lost their jobs, but even more jobs were created for less-skilled workers who operated the machines.
People who study economics and AI are optimistic. They think machines can readily perform routine tasks in a job but would struggle with non-routine tasks. Humans will still be needed for...
-
What is the protein folding problem that has left researchers stuck for nearly 50 years?
Knowing the 3D shape of proteins is so important for our understanding of various diseases and vaccine development. However, these shapes are fantastically complex and difficult to predict. Researchers have spent years trying to determine the 3D structure of proteins.
Thanks to AI systems like AlphaFold, it’s now much easier and faster to predict protein shapes. AlphaFold is currently leading the way in protein folding research and has been described as a “revolution in biology.”
In this episode of Short and Sweet AI, I explore the protein folding problem in more detail and how AlphaFold is accelerating our understanding of protein structures.
In this episode, find out:
Why protein folding is so important
Why it’s so difficult to predict protein structures
How Google’s DeepMind created AlphaFold
How successful AlphaFold has been in predicting protein structures
Important Links and Mentions:
AlphaFold: The making of a scientific breakthrough
Protein folding explained
Walloped by AlphaGo
What is AlphaZero?
AlphaFold: Using AI for scientific discovery
Resources:
Nature.com - ‘It will change everything’: DeepMind’s AI makes gigantic leap in solving protein structures
SciTech Daily - Major Scientific Advance: DeepMind AI AlphaFold Solves 50-Year-Old Grand Challenge of Protein Structure Prediction
Episode Transcript:
Hello to you who are curious about AI. I’m Dr. Peper and today I’m talking about AlphaFold.
One of biology’s most difficult challenges, one that researchers have been stuck on for nearly 50 years, is how to determine a protein’s 3D shape from its amino-acid sequence. It’s known as “the protein folding problem.”
When I first came across the subject, I thought, ok, that’s a biology problem and maybe AI will solve it but there’s no big story here. I was wrong.
Some biologists spend months, years, or even decades performing experiments to determine the precise shape of a protein. Sometimes they never succeed. But they persist because having the ability to know how a protein folds up can accelerate our ability to understand diseases, develop new medicines and vaccines, and crack one of the greatest challenges in biology.
Why is protein folding so important? Protein structures contain as much information as DNA, if not more. Their 3D shapes are fantastically complex. Proteins are made up of strings of amino acids, called the building blocks of life. In order to function, the strings twist and fold into precise, delicate shapes that turn or wrap around each other. These strings can even merge into bigger, megaplex structures.
Only then can these proteins function in the way necessary to build and sustain life. A protein’s shape defines what the protein can
-
One of the founding principles of OpenAI, the company behind technology such as GPT-3 and DALL·E, is that AI should be available to all, not just the few.
Co-founded by Elon Musk and five others, OpenAI was partly created to counter the argument that AI could damage society.
OpenAI was originally founded as a non-profit AI research lab. In just six short years, the company has paved the way for some of the biggest breakthroughs in AI. Recent controversy arose when OpenAI announced that a separate section of its company would become for-profit.
In this episode of Short and Sweet AI, I discuss OpenAI’s mission to develop human-level AI that benefits all, not just a few. I also discuss the controversy around OpenAI’s decision to become for-profit.
In this episode, find out:
OpenAI’s mission
How human-level AI or AGI differs from Narrow AI
How far we are from using AGI in everyday life
The recent controversy around OpenAI’s decision to switch to a for-profit model
Important Links and Mentions:
What is GPT-3?
OpenAI’s mission statement
Resources:
Elon Musk on Artificial Intelligence
Technology Review: The messy, secretive reality behind OpenAI’s bid to save the world
Wired: To Compete With Google, OpenAI Seeks Investors---and Profits
Wired: OpenAI Wants to Make Ultrapowerful AI. But Not in a Bad Way
Episode Transcript:
Hello to you who are curious about AI. I’m Dr. Peper and today I’m talking about a truly innovative company called OpenAI.
So what do we know about OpenAI, the company unleashing all these mind-blowing AI tools such as GPT-3 and DALL·E?
OpenAI was founded as a non-profit AI research lab just six short years ago by Elon Musk and five others, who pledged a billion dollars. Musk has openly warned that AI poses the greatest existential threat to humanity. He was motivated in part to create OpenAI by concerns that human-level AI could damage society if built or used incorrectly.
Human-level AI is known as AGI, or Artificial General Intelligence. The AI we have today is called Narrow AI: it’s good at doing one thing, the thing it was designed for. Artificial General Intelligence, by contrast, is created to learn how to do whatever it needs to do, so it can take on any task.
To be a bit more specific, General AI would be able to learn, plan, reason, communicate in natural language, and integrate all of these skills to apply to any task, just as humans do. It would be human-level AI. It’s the holy grail of the leading AI research groups around the world such as Google’s DeepMind or Elon’s OpenAI: to create artificial general intelligence.
Because AI is accelerating at exponential speed, it’s hard to predict when human-level AI might come within reach. Musk wants computer scientists to build AI in a way that is safe and beneficial to humanity. He acknowledges that in trying to advance friendly AI, we may create the very thing we are concerned about. Yet he thinks the best
-
Is DALL·E the latest breakthrough in artificial intelligence?
It seems there’s no end to the fascinating innovations coming out in the world of AI. DALL·E, the most recent tool developed by OpenAI, was announced just months after unveiling its groundbreaking GPT-3 technology.
DALL·E is another exciting breakthrough that demonstrates the ability to turn words into images. As a natural extension of GPT-3, DALL·E takes pieces of text and generates images rather than words in response.
In this episode of Short and Sweet AI, I discuss DALL·E in more detail, how it differs from GPT-3, and how it was developed.
In this episode, find out:
What DALL·E is
How DALL·E can generate images from words
What unintended yet useful behaviors DALL·E can produce
The human-like creativity of DALL·E
Important Links and Mentions:
DALL·E: Creating Images from Text
This avocado armchair could be the future of AI
Resources:
The Next Web: Here’s how OpenAI’s magical DALL-E image generator works
Venture Beat: OpenAI debuts DALL-E for generating images from text
CNBC: Why everyone is talking about an image generator released by an Elon Musk-backed A.I. lab
Episode Transcript:
Hello to you who are curious about AI. I’m Dr. Peper and today I’m talking about DALL·E.
In a previous episode, I highlighted a new type of AI tool called GPT-3. GPT-3 is a machine learning language model trained on a trillion words that generates poetry, stories, even computer code. Within months of announcing GPT-3, OpenAI released DALL·E. DALL·E is not just another breathtaking breakthrough in AI technology. It represents the ability, by a machine, to manipulate visual concepts through language.
The name DALL·E is a combination of the surrealist artist Salvador Dalí and the animated robot WALL-E. What it does is simple but also revolutionary. It’s a natural extension of GPT-3: the AI system is a 12-billion-parameter version of GPT-3, trained on a dataset of text-image pairs.
DALL·E takes text prompts and responds not with words but images. If you give the system the text prompt, “an armchair in the shape of an avocado” it generates an image to match it. It’s a text-to-image technology that’s very powerful. It gives you the ability to create an image of what you want to see with language because DALL·E isn’t recognizing images, it draws them. And by the way, I would buy one of those avocado chairs if they existed.
You can visit OpenAI’s website and play with images generated by this astounding technology: a radish in a tutu walking a dog, a robot giraffe, a spaghetti knight. The images are from the real world or are things that don’t exist, like a cube of clouds.
How does It Work?
Text-to-image algorithms aren’t new but have been limited to things such as birds and flowers or other unsophisticated images. DALL·E is significantly different from others that have come before because it uses the GPT-3 neural network to train on text plus images.
DALL·E uses the language and understanding provided by GPT-3 and its own underlying structure to create an image prompted by a text. Each time it generates a large set...
-
Some have called it the most important and useful advance in AI in years. Others call it crazy accurate AI.
GPT-3 is a new tool from the AI research lab OpenAI. This tool was designed to generate natural language by analyzing thousands of books, Wikipedia entries, social media posts, blogs, and anything in between on the internet. It’s the largest artificial neural network ever created.
In this episode of Short and Sweet AI, I talk in more detail about how GPT-3 works and what it’s used for.
In this episode, find out:
What GPT-3 is
How GPT-3 can generate sentences independently
What supervised vs. unsupervised learning is
How GPT-3 shocked developers by creating computer code
Where GPT-3 falls short
Important Links and Mentions:
Meet GPT-3. It Has Learned to Code (and Blog and Argue)
GPT-3 Creative Fiction
Did a Person Write This Headline, or a Machine?
Resources:
Disruption Theory - GPT-3 Demo: New AI Algorithm Changes How We Interact with Technology
Forbes - What Is GPT-3 And Why Is It Revolutionizing Artificial Intelligence?
Episode Transcript:
Today I’m talking about a breathtaking breakthrough in AI which you need to know about.
Some have called it the most important and useful advance in AI in years. Others call it crazy accurate AI. It’s called GPT-3. GPT-3 stands for Generative Pre-trained Transformer 3, meaning it’s the third version to be released. One developer said, “Playing with GPT-3 feels like seeing the future.”
Another Mind-Blowing Tool from OpenAI
GPT-3 is a new AI tool from an artificial intelligence research lab called OpenAI. This neural network has learned to generate natural language by analyzing thousands of digital books, Wikipedia in its entirety, and a trillion words found on social media, blogs, news articles, anything and everything on the internet. A trillion words. Essentially, it’s the largest artificial neural network ever created. And with language models, size really does matter.
It’s a Language Predictor
GPT-3 can answer questions, write essays, summarize long texts, translate languages, and take memos; basically, it can create anything that has a language structure. How does it do this? Well, it’s a language predictor. If you give it one piece of language, the algorithms are designed to transform it and predict what the most useful piece of language should be to follow it.
Machine learning neural networks study words and their meanings and how they differ depending on other words used in the text. The machine analyzes words to understand language. Then it generates sentences by taking words and sentences apart and rebuilding them itself.
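GPT-3 itself is a vast neural network, but the core idea of predicting the next piece of language can be sketched with a toy model that just counts which word tends to follow which. The sample text below is invented for illustration:

```python
from collections import Counter, defaultdict

# A toy version of "language prediction": count which word follows
# which in some sample text, then predict the most frequent follower.
# GPT-3 does something far richer, but the training objective --
# predict what comes next -- is the same idea in miniature.
text = "the cat sat on the mat the cat ate the fish"
words = text.split()

followers = defaultdict(Counter)
for prev, nxt in zip(words, words[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the sample text."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" -- it follows "the" most often here
```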
Supervised vs Unsupervised machine learning
GPT-3 is a form of machine learning called unsupervised learning. It’s unsupervised because the training data is not labelled as a right or wrong response. It’s free from the limits imposed by using labelled data. This means unsupervised learning can detect all kinds of unknown patterns. The machine works on its own to discover...
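A small illustration of the unsupervised idea: the data below carries no labels at all, yet a tiny clustering routine finds the two groups on its own. This is a simplified one-dimensional k-means with invented readings, not anything from GPT-3 itself:

```python
# Unsupervised learning in miniature: no labels, just raw numbers.
# A tiny one-dimensional k-means groups the data into two clusters
# on its own. The readings are invented for illustration.
def two_means(data, iters=20):
    c1, c2 = min(data), max(data)  # crude initial cluster centers
    for _ in range(iters):
        g1 = [x for x in data if abs(x - c1) <= abs(x - c2)]
        g2 = [x for x in data if abs(x - c1) > abs(x - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted([c1, c2])

# Two obvious groups -- but the algorithm is never told that.
readings = [1.0, 1.2, 0.8, 9.9, 10.1, 10.3]
print(two_means(readings))  # centers near 1.0 and 10.1
```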
-
The ethics surrounding AI are complicated yet fascinating to discuss. One issue that sits front and center is AI bias, but what is it?
AI is based on algorithms, fed by data and experiences. The problem arises when that data is incorrect, biased, or based on stereotypes. Unfortunately, this means that machines, just like humans, are guided by potentially biased information.
This means that your daily threat from AI is not from the machines themselves but from their bias. In this episode of Short and Sweet AI, I talk about this further and discuss a very serious problem: artificial intelligence bias.
In this episode, find out:
What AI bias is
The effects of AI bias
The three different types of bias and how they affect AI
How AI contributes to selection bias
Important Links & Mentions:
Amazon scraps secret AI recruiting tool that showed bias against women
Google Hired Timnit Gebru to be an outspoken critic of unethical AI
Biased Algorithms Learn from Biased Data: 3 Kinds Biases Found In AI Datasets
Biased Programmers? Or Biased Data? A Field Experiment in Operationalizing AI Ethics
Resources:
Venture Beat – Study finds diversity in data science teams is key in reducing algorithmic bias
The New York Times - We Teach A.I. Systems Everything, Including Our Biases
Episode Transcript:
Today I’m talking about a very serious problem: artificial intelligence bias.
AI Ethics
The ethics of AI are complicated. Every time I go to review this area, I’m dazed by all the issues. There are groups in the AI community who wrestle with robot ethics, the threat to human dignity, transparency ethics, self-driving car liability, AI accountability, the ethics of weaponizing AI, machine ethics, and even the existential risk from superintelligence. But of all these hidden terrors, one is front and center. Artificial intelligence bias. What is it?
Machines Built with Bias
AI is based on algorithms in the form of computer software. Algorithms power computers to make decisions through something called machine learning. Machine learning algorithms are all around us: they supply the Netflix suggestions we receive, surface the posts at the top of our social media feeds, and drive the results of our Google searches. Algorithms are fed data. If you want to teach a machine to recognize a cat, you feed the algorithm thousands of cat images until it can recognize a cat better than you can.
The problem is that machine learning algorithms are used to make decisions in our daily lives that can have extreme consequences. A computer program may help police decide where to send resources, or determine who's approved for a mortgage, who's accepted to a university, or who gets the job.
More and more experts in the field are sounding the alarm. Machines, just...
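The transcript is cut short here, but the alarm it describes can be made concrete with a deliberately tiny, hypothetical sketch (mine, not the episode's): a one-nearest-neighbour "decision maker" trained on skewed historical hiring records faithfully reproduces the bias baked into them.

```python
# A toy one-nearest-neighbour "decision maker". Every name and number here is
# invented for illustration; the point is that a model can only echo the
# patterns, including the biases, present in its training data.

def dist2(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def nearest_neighbour(train, query):
    """Return the label of the training example closest to the query."""
    features, label = min(train, key=lambda ex: dist2(ex[0], query))
    return label

# Skewed historical hiring records: (years_experience, group) -> outcome.
# Group 1 candidates were rejected regardless of experience.
history = [
    ((5, 0), "hired"), ((6, 0), "hired"), ((2, 0), "rejected"),
    ((5, 1), "rejected"), ((6, 1), "rejected"), ((7, 1), "rejected"),
]

# Two candidates with identical experience, differing only in group:
print(nearest_neighbour(history, (6, 0)))  # hired
print(nearest_neighbour(history, (6, 1)))  # rejected
```

The model is not malicious; it simply generalizes from what it was shown, which is exactly how historical bias becomes automated bias.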
-
How fast can you develop a vaccine? Never has this challenge been put to the test quite so intensely as in 2020.
In fact, Jason Moore, who heads Bioinformatics at UPenn, thinks that if the virus had hit 20 years ago, the world might have been doomed. It’s only thanks to modern technology that we now have a safe vaccine. He said, “I think we have a fighting chance today because of AI and machine learning.”
So, how did AI help to make the Covid-19 vaccine a reality? The short answer is a combination of computational analysis and a system called AlphaFold. I talk more about how researchers developed the vaccine so fast in this episode of Short and Sweet AI.
In this episode find out:
How AI was used to learn more about Covid-19 through data analysis
How AI helped researchers develop the vaccine so quickly
Where we would be without AI and machine learning
Important Links & Mentions
Deep Mind, Gaming, + the Nobel Prize
AlphaFold: Using AI for Scientific Discovery
AlphaFold: the making of a scientific breakthrough
Resources:
IEEE Spectrum - What AI Can–and Can’t–Do in the Race for a Coronavirus Vaccine
Wired.com - AI Can Help Scientists Find a Covid-19 Vaccine
Washington Post - Artificial Intelligence and Covid-19: Can the Machines Save Us?
Episode Transcript:
Friends tease me because I’m so fascinated with artificial intelligence that I will claim AI is the reason we have a safe Covid-19 vaccine so quickly. And they’re right, it is one of the reasons. In fact, Jason Moore, who heads Bioinformatics at UPenn, thinks that if this virus had hit 20 years ago, the world might have been doomed. He said, “I think we have a fighting chance today because of AI and machine learning.”
How did AI help to make the Covid-19 vaccine a reality? The short answer is through computational analysis and AlphaFold.
But first, a little background on vaccines. A vaccine provokes the body into producing defensive white blood cells and antibodies by imitating the infection. In order to imitate an infection, you need to find a target on the virus. Once you find the target you need to understand its 3D shape to make the vaccine against it. But it’s really hard to figure out all the possible shapes before you find the one, unique 3D shape of the target, unless…unless of course you use AI.
In the case of the Covid-19 vaccine, a machine learning neural network called AlphaFold, from Google’s DeepMind, saved the day. AlphaFold predicted the 3D shape of the virus’s spike protein based on its genetic sequence. And it did this really fast, as early as March 2020, three months after the pandemic started. Without AI, it would have taken months and months to come up with the best possible target protein, and it might have been wrong. With AI, researchers were able to race ahead and ultimately develop the mRNA vaccine.
-
Technology breakthroughs are disrupting every industry at a rapid rate, massively transforming them exponentially faster than ever before in history.
What do you call exponentially fast disruption and massive transformation in worldwide industries?
It’s called the 4th Industrial Revolution, which I talk about in more detail in this episode of Short and Sweet AI.
In this episode find out:
What the 4th Industrial Revolution is
A brief overview of the previous industrial revolutions
Whether the 4th Industrial Revolution should be considered a part of the Third Industrial Revolution
Pros and cons of the new Industry 4.0
Why inequality may become the greatest threat of the 4th IR
Important Links & Mentions
What Is Edge AI or Edge Computing?
5G: Fifth Generation Wireless, What Is It?
What is IOT and Why Does it Matter?
XR: What is Extended Reality?
Resources:
CNBC - Everything you need to know about the Fourth Industrial Revolution
Salesforce - What Is the Fourth Industrial Revolution?
World Economic Forum - The Fourth Industrial Revolution: what it means, how to respond
What is the Fourth Industrial Revolution?
What is the Fourth Industrial Revolution? | CNBC Explains
The Fourth Industrial Revolution by Klaus Schwab
Episode Transcript:
Welcome to those who are curious about AI. From Short and Sweet AI, I’m Dr. Peper.
Right here, right now, technology breakthroughs are disrupting and massively transforming every industry, exponentially faster than ever before in history. What do you call exponentially fast disruption and massive transformation in worldwide industries? It’s called the 4th industrial revolution.
The 4th industrial revolution is also known as 4IR or Industry 4.0. But what does it mean? Klaus Schwab, founder of the World Economic Forum, coined the term and wrote a book of the same title. He details how we are now living through a 4th industrial revolution characterized by the fusion of AI, robotics, 3D printing, IoT, quantum computing, blockchain, autonomous vehicles, 5G, synthetic biology, virtual reality, and countless other...
-
Is it time we regained control of our data and found new and better ways to protect it?
You and I know that the social media platforms and internet sites we visit collect data on us. In many ways, they monetize our data and use it as a product that can be purchased.
In this episode of Short and Sweet AI, I talk about personal data as private property and whether there is a way for us to choose who gets to use our data.
In this episode find out:
The true value of data
Whether we should get paid for our data
Who Professor Song is
How Professor Song and her company “Oasis Labs” are working on a system that could potentially help users protect their data and even get paid for it
How you could potentially make your data your private property
Professor Song’s vision for the future and why she believes that we should get revenue by sharing our data
Important Links & Mentions
Oasis Labs
Are Machine Learning and Deep Learning the Same as AI?
Resources:
Oasis Labs' Dawn Song on a Safer Way to Protect Your Data
Building a World Where Data Privacy Exists Online
Get Paid for Your Data, Reap the Data Dividend
Giving Users Control of their Genomic Data
Oasis Labs' Dawn Song in Conversation with Tom Simonite
deeplearning.ai's Heroes of Deep Learning: Dawn Song
Computer Scientists Work To Fix Easily Fooled AI
Episode Transcript:
From Short and Sweet AI, I’m Dr. Peper, and today I want to talk with you about personal data as private property.
You and I know that social media platforms and internet sites we visit are collecting data on us. We know they’re selling our data to advertisers. I mean, that’s their business model. They provide a platform for us to connect with each other and we give them our personal data as payment. Data is valuable. Data is the new oil. It brings in billions of dollars of income for Google, Facebook, Instagram, Amazon, and countless other companies. When we’re online and we click on a pop-up that says “accept”, we’re essentially giving away our personal information to that company. And do we really have a choice? You either have to accept the terms or you’re not allowed to use that site.
Well, what if we could be paid for our data? What if we could determine who gets data about what sites we visit, what apps we use on our phones, what physical locations we go to, and what conversations we have? What if, in short, we could be paid for all the information companies gather on us daily? And what if we had a...
-
In this exciting episode of Short and Sweet AI, I talk about the recent update that Elon Musk gave on his company Neuralink – including how and why his team implanted a coin-sized computer chip in a pig’s brain to create a brain-to-machine interface.
In this episode find out:
What Neuralink is
How the Neuralink chip device works
How Neuralink works when implanted in a pig's brain
What the future holds for Neuralink and how it may be able to help cure serious health conditions
Important Links & Mentions:
Cyborgs Among Us
Resources:
Neuralink Is Impressive Tech, Wrapped In Musk Hype
Elon Musk’s brain-computer interface company Neuralink has money and buzz, but hurdles too
How Neuralink Works
Neuralink Update (2020) - Highlights in 7 minutes
-
What does it take to be the godfather of AI? And, how does someone come to obtain such a legendary title?
In this episode of Short and Sweet AI, I talk about Geoffrey Hinton, a cognitive psychologist, computer scientist, and the man Google hired to make AI a reality. In many ways, we have Geoffrey Hinton to thank for modern AI: it is largely thanks to him that deep learning became mainstream in the field of artificial intelligence.
So, how did Geoffrey Hinton rise to become the godfather of AI? Listen to this episode to find out!
In this episode find out:
How Geoffrey Hinton became the godfather of AI
Why Geoffrey Hinton believes machines need to think the way humans do
Understanding how deep neural networks replicate how the brain processes information
How deep learning became mainstream after 30 years in the wilderness
How deep learning became AI's "lunatic core"
Important Links & Mentions
Are Machine Learning and Deep Learning the same as AI?
ImageNet
Resources:
Geoffrey Hinton: The Foundations of Deep Learning
Geoffrey Hinton: “Probably machines will get smarter than people in almost everything”
Meet the Man Google Hired to Make AI a Reality
Turing Award Won by 3 Pioneers in Artificial Intelligence
-
Moore's Law is coming to an end, and many people don't know how to feel about it. In fairness, the end of Moore's Law is not something that crept up on us out of nowhere. Industry experts predicted it years ago: they observed the law's gradual decline and forecast a grim future that has since proved accurate.
But the question remains… why is Moore's Law ending? And why should you care?
I'm kicking off the start of the year with a Short and Sweet AI podcast episode that focuses on endings. That is, the end of Moore's Law and why it matters. As always, I focus on AI in simple terms so that whether you're new to AI or a seasoned pro, you can follow along fully immersed!
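As background for the episode: Gordon Moore's observation, later revised to a doubling of transistor counts roughly every two years, compounds dramatically. A few illustrative lines (starting from the Intel 4004's roughly 2,300 transistors in 1971; the projection formula is mine) show why the law mattered:

```python
# Project transistor counts under Moore's Law: a doubling roughly every 2 years.
# Starting point: Intel's 4004 (1971), with about 2,300 transistors.
START_YEAR, START_COUNT = 1971, 2300

def projected_transistors(year):
    """Transistor count predicted by Moore's Law for a given year."""
    doublings = (year - START_YEAR) / 2
    return START_COUNT * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2011):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors")
```

The 2011 projection lands near 2.4 billion transistors, in the range of real high-end chips of that era; it is the slowing of this doubling cadence in the years since that the episode calls the end of Moore's Law.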
In this episode find out:
What Moore’s Law is, who created it, and why it is so important
How Google's big "OMG" moment led to the end of Moore’s Law
What a Tensor Processing Unit (TPU) is
Why the TPU is the “Helen of Troy” of AI
What could replace Moore’s Law in the future
Important Links & Mentions
Intel
5G: Fifth Generation Wireless. What is it?
What is Edge AI or Edge Computing?
What is Quantum Computing?
Resources:
Eye on AI: The Podcast
What is Moore's Law? WIRED explains the theory that defined the tech industry
How Moore’s Law Works
Microprocessor Transistor Counts 1971-2011 & Moore's Law
-
The world's most powerful supercomputer would take 10,000 years to solve a math problem a quantum computer solved in minutes. Welcome to quantum computing.
The post What is Quantum Computing? part 2 appeared first on Dr Peper MD.
-
Quantum computing is an extraordinary technology, based on quantum physics, that uses quantum bits, or qubits, to solve problems in a seemingly magical way.
The post What is Quantum Computing? appeared first on Dr Peper MD.
-
I've interrupted my podcasts to care for patients during the COVID surge in my area. Many death certificates, many flags at half-mast.
The post A Physician during COVID appeared first on Dr Peper MD.
-