Episodes
-
There’s No Fire Alarm for Artificial General Intelligence by Eliezer Yudkowsky http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0065-2018-03-18.mp3
-
We discuss Intelligence Explosion Microeconomics by Eliezer Yudkowsky http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0064-2018-03-11.mp3
-
Ted gave a live talk a few weeks ago.
-
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0062-2018-03-04.mp3
-
Some believe civilization will collapse before the existential AI risk has a chance to play out. Are they right?
-
Timeline For Artificial Intelligence Risks
Peter’s Superintelligence Year predictions (5% chance, 50%, 95%): 2032/2044/2059
You can get in touch with Peter at HumanCusp.com and [email protected]
For reference (not discussed in this episode): Crisis of Control: How Artificial SuperIntelligences May Destroy Or Save the Human Race by Peter J. Scott
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0060-2018-01-21.mp3
-
There are understandable reasons why accomplished leaders in AI disregard AI risks. We discuss what they might be.
Wikipedia’s list of cognitive biases
AlphaZero
Virtual Reality
Recorded January 7, 2018; originally posted to Concerning.AI
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0058-2018-01-07.mp3
-
If the Universe Is Teeming With Aliens, Where is Everybody? http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0057-2017-11-12.mp3
-
Ted had a fascinating conversation with Sean Lane, founder and CEO of Crosschx.
-
We often talk about how no one really knows when the singularity might happen (if it does), when human-level AI will exist (if ever), when we might see superintelligence, etc. Back in January, we made up a three-number system for talking about our own predictions and asked our community on Facebook to play along […]
-
Great voice memos from listeners led to interesting conversations.
-
We continue our miniseries about paths to AGI.
Sam Harris’s podcast about the nature of consciousness
Robot or Not podcast
See also:
0050: Paths to AGI #3: Personal Assistants
0047: Paths to AGI #2: Robots
0046: Paths to AGI #1: Tools
http://traffic.libsyn.com/friendlyai/ConcerningAI-episode-0052-2017-10-08.mp3
-
Read After On by Rob Reid before you listen, or because you listen.
-
This is our second episode thinking about possible paths to superintelligence, focusing on one kind of narrow AI each show. This episode is about embodiment and robots. It's possible we never really agreed on what we were talking about and will need to come back to robots.
Future ideas for this series include:
personal assistants (Siri, Alexa, etc.)
non-player characters
search engines (or maybe those just fall under tools)
social networks or other big-data systems working on a completely different time/size scale from humans
collective intelligence
simulation
whole-brain emulation
augmentation (computer / brain interface)
self-driving cars
See also:
0046: Paths to AGI #1: Tools
Robots learning to pick stuff up
Roomba mapping
https://youtu.be/iZhEcRrMA-M
https://youtu.be/97hOaXJ5nGU
https://youtu.be/tynDYRtRrag
https://youtu.be/FUbpCuBLvWM