Welcome to this update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you're reading this and haven't yet subscribed, join 25,000+ curious souls on a journey to build a better future.
For this week's instalment, I'm doing something different.
A few days ago I recorded a video update to share some thoughts on OpenAI's new GPT-4o and the state of AI.
It went first to Exponentialist subscribers. But I want to share it with you all, too.
In the video I get into:
* Why GPT-4o is OpenAI's play for billions of users, and for a virtual companion that weaves itself through the fabric of everyday life
* Where we are inside the amazing AI moment we're living through, and what's coming next, including a path to AGI
* How this all connects to the Great Enweirdening of the economy that I believe is coming
There's so much happening with AI right now; I hope this provides some useful framing. And if it proves popular, I'll do more video updates in future.
By the way, there's still time to grab a six-day trial to The Exponentialist for just $1.
Thanks for watching, and be well,
David.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz.
To Begin
This week brings news from Boston Dynamics and the Chinese Academy of Sciences. The message common to both stories? The humanoid robots are coming.
Meanwhile, the internet reacts to Appleās new Vision Pro headset.
And the FCC takes action against a Texas company that used AI to create fake phone calls from President Biden.
Let's go!
Robots are go
This week, yet further signs that the robots will soon walk among us. I mean, all of us.
The Boston Dynamics humanoid, Atlas, has been a regular in this newsletter over the years. Recently it has been overshadowed by competitors, including the Digit humanoid by Agility Robotics and Tesla's Optimus.
But this week Boston Dynamics released a video that shows Atlas picking up automotive struts and placing them in a flow cart.
The team say Atlas is using onboard sensors and object recognition to perform the task. The footage is short. But it marks a significant advance for Atlas, because previous videos have shown the robot doing elaborate dances rather than useful work, and those dances have been pre-programmed rather than autonomous.
Meanwhile, in Beijing a research team at the Institute of Automation in the Chinese Academy of Sciences this week debuted their Q Family of humanoid robots.
The research team have reportedly built a "big factory" for the design and manufacture of Q Family humanoids.
Back in New Week #124 we saw how the CCP has ordered "domestic mass production" of humanoids to fuel economic growth. Remember, this is the underlying demographic reality that has China dashing towards robots.
NWSH Take: In last month's Lookout to 2024 I said this would be the year of the humanoid. We closed out 2023 with the announcement that the Digit humanoid had started a trial inside US Amazon fulfilment centres. Days after I published the Lookout, BMW announced a trial of Digit in its California manufacturing plant. Now, the Boston Dynamics team are clearly eyeing commercial applications, too. Their Atlas robot has so far remained a research project; the question they'll have to answer if they want to change that is whether Atlas can match Digit and Tesla's Optimus for autonomous capability. // The graph above tells the underlying socio-economic story here. Both the CCP and innovators in the Global North know that working age populations are falling. If economic growth isn't to become a distant memory, we need new armies of autonomous workers. AI applications can handle some of our knowledge work. But we'll need humanoids to do some of the physical work that currently only people can do. The CCP see this as an existential imperative; they know they must maintain GDP growth. For innovators in the US and beyond, it's an epic opportunity.
Having visions
No one could have missed the launch of the Apple Vision Pro a few days ago.
Years from now, this instantly iconic magazine cover will no doubt spark intense nostalgia for the simpler times that were 2024:
It took about ten minutes for someone to try out their new Vision Pro while using Full Self Drive in their Tesla:
This was later revealed to be (surprise!) a skit for YouTube. Still, it delivered useful findings; the man in the picture, Dante Lentini, says the Vision Pro doesn't really work inside a moving car because it can't properly display visuals over a fast-moving landscape.
NWSH Take: After the frenetic metaverse hype of 2021, many will shrug at the launch of the Vision Pro. But something real, and powerful, is happening here. The internet is going to become part of the world around us. In the end, this is about the deep merging of information and physical reality, of bits and atoms, that I wrote about in the essay Intelligence in the World. // We're going to see the emergence of a unified digital-physical field: a blended domain of bits and atoms that is a new, and in some sense final, innovation platform, because it brings together everything we do online with everything we do in the real world. // Apple's new product – whether it proves a hit or not – is just another signal of this underlying process. I'll get my hands on one ASAP and report back. But Apple, here, are clearly aiming at high-end and industry users; they're going to have to make a cheaper product if they want mainstream impact.
Good call
Also this week, a glimpse of what lies ahead when it comes to this yearās US presidential election.
The FCC this week banned AI-voiced robocalls after an AI Joe Biden "called" over 25,000 voters in late January and told them not to vote in the then-upcoming presidential primary elections.
The calls have been traced back to a Texas-based company called Life Corporation, owned by an entrepreneur with a long history in automated calling for political campaigns. Researchers believe Life Corporation used software from UK-based AI voice startup ElevenLabs, which I've written about here several times before, to deepfake Biden's voice.
ElevenLabs just raised an $80 million series B funding round, led by VC firm Andreessen Horowitz, that valued the company at $1.1 billion.
NWSH Take: In the Lookout to 2024 I said we should expect politics to collide with the exponential age this year. The impact of AI deepfakes on November's US presidential election will be at the heart of that story. Okay, the FCC has banned AI calls. But deepfake audio and video is surely going to be rife on Facebook, Elon Musk's X, and TikTok. // Our liberal democracies were built in the age of one-to-many mass broadcast; those broadcasts were gatekept by social elites that felt a sense of duty towards the broader socio-political system in which they were operating. It wasn't perfect, but it muddled along. Now, we've built previously unimagined technologies of image and sound manipulation. We've slain the gatekeepers, and told ourselves that this was an empowering move. The upshot? We're about to find out how liberal democracies work under those conditions.
Also this week
Researchers trained a large language model using only inputs from a headcam attached to a toddler. A data science team at New York University strapped a camera to a toddler for 18 months. They say their AI model learned a "substantial number of words and concepts" from exposure to just one percent of the child's total waking hours between the ages of six months and two years. The team say this indicates that it is possible to train an LLM on far less data than previously believed.
Sam Altman says the world "needs more AI infrastructure" and that OpenAI will help to build it. Altman is reportedly seeking trillions of dollars to build new semiconductor design and manufacture capability. Access to chips and the compute they supply is crucial for OpenAI if they are to train GPT-5 and other large AI models.
Disney says it will invest $1.5 billion in Epic Games, the makers of Fortnite. The media giant say they'll work with Epic to create a new "entertainment universe" featuring characters from Pixar movies, Star Wars, and more.
The US National Security Agency says an advanced group of Chinese hackers has been active across US infrastructure for at least five years. The Volt Typhoon hacking group is said to have infiltrated computer systems across aviation, rail, highway, and water infrastructure.
Europe's deepest mine is to be converted into a gravity battery. The Pyhäsalmi Mine in Finland is 1,444 meters deep. Its copper and zinc deposits have run out. Scottish energy tech firm Gravitricity say they will now convert the mine into a gravity battery, in which energy is stored by raising heavy weights and released when those weights are dropped.
Scientists at CERN want to build a massive new particle collider. The new Future Circular Collider would cost £12 billion; with a circumference of over 90 kilometres it would be three times larger than the Large Hadron Collider (LHC). The LHC enabled the discovery of the Higgs boson particle in 2012, but CERN scientists say they need a more powerful machine if they are to uncover the truth about dark matter and dark energy.
Popular Chinese social media accounts have claimed that Texas has declared civil war against the US. Posts with the hashtag #TexasDeclaresAStateOfWar have been widely shared on the popular social network Sina Weibo.
A startup backed by Bill Gates and Jeff Bezos has discovered a vast copper reserve in Zambia. California-based KoBold Metals say the reserve will be "one of the world's biggest high-grade large copper mines." Copper plays a crucial part in electric vehicle batteries and solar panels.
Researchers say AIs tend to choose nuclear strikes when playing war games. A team at Stanford University challenged LLMs such as GPT-4 and Claude-2 to participate in simulated conflicts between nations. The AIs tended to invest in military strength and to escalate towards violence and even nuclear attack in unpredictable ways. They would rationalise their actions via comments such as "we have it, let's use it!" and "if there is unpredictability in your action, it is harder for the enemy to anticipate and react".
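The gravity-battery item above is easy to sanity-check: the recoverable energy of a raised mass is E = mgh. Here is a quick back-of-envelope in Python, using the Pyhäsalmi Mine's 1,444-metre depth and a purely hypothetical 1,000-tonne weight (the item gives no mass figure):

```python
# Back-of-envelope for a mine-shaft gravity battery.
# Only the 1,444 m depth comes from the story; the mass is hypothetical.
G = 9.81          # gravitational acceleration, m/s^2
MASS = 1_000_000  # kg: a hypothetical 1,000-tonne weight
DEPTH = 1_444     # m: depth of the Pyhäsalmi Mine

energy_joules = MASS * G * DEPTH    # E = m * g * h
energy_mwh = energy_joules / 3.6e9  # 1 MWh = 3.6e9 J

print(round(energy_mwh, 1))  # prints 3.9
```

So one full drop of such a weight would deliver roughly 3.9 MWh before losses; real designs would presumably cycle multiple weights.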
Humans of Earth
Key metrics to help you keep track of Project Human.
Global population: 8,090,538,177
Earths currently needed: 1.82069
2024 progress bar: 15% complete
On this day: On 10 February 1996 the IBM supercomputer Deep Blue beats Garry Kasparov at chess, becoming the first computer to beat a reigning world champion under normal time controls.
New Model Army
Thanks for reading this week.
The collision between demographic change and a coming army of humanoid robots is yet another classic case of new world, same humans.
I'll keep watching, and working to make sense of it all. And there's one thing you can do to help: share!
If you found today's instalment valuable, why not take a second to forward this email to one person – a friend, relative, or colleague – who'd also enjoy it? Or share New World Same Humans across one of your social networks, and let others know why you think it's worth their time. Just hit the share button:
I'll be back next week as usual. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
To Begin
One week until the Christmas break: where did 2023 go?
This week, DeepMind serve up proof that a large language model can create new knowledge.
Also, more news from the accelerating story that is the march of the humanoid robots. It's clear next year will be a pivotal one for this technology.
And researchers hook up brain organoids to microchips to create a new kind of speech recognition system.
Let's get into it!
Fun times at DeepMind
This week, yet another step forward in the epic journey we've taken with AI in 2023.
Researchers at Google DeepMind used a large language model (LLM) to create authentically new mathematical knowledge. Their new FunSearch system – so called because it searches through mathematical functions – wrote code that solved a famous geometrical puzzle called the cap set problem.
The researchers used an LLM called Codey, based on Google's PaLM 2, which can generate code intended to solve a given maths problem. They tied Codey to an algorithm that evaluates its proposed solutions, and feeds the best ones back to iterate upon.
They established the cap set problem using the Python coding language, leaving blank spaces for the code that would express a solution. After a couple of million tries – and a few days – the mission was complete. FunSearch produced code that solved this geometrical problem, which mathematicians have been puzzling over since the early 1970s.
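Stripped to its essentials, the loop the researchers describe (propose, evaluate, feed the best back) can be sketched as follows; the function names and toy scoring objective here are mine, not DeepMind's:

```python
import random

def propose(program, rng):
    # Stand-in for the LLM step: FunSearch prompts Codey with the best
    # programs found so far and asks for improved variants. A numeric
    # perturbation keeps this sketch runnable.
    return [x + rng.uniform(-1.0, 1.0) for x in program]

def evaluate(program):
    # Stand-in for the automatic evaluator that scores each candidate
    # (for FunSearch, how good a cap set the generated code produces).
    # Toy objective here: push all values towards zero.
    return -sum(x * x for x in program)

def funsearch_loop(iterations=200, seed=0):
    rng = random.Random(seed)
    best = [rng.uniform(-5.0, 5.0) for _ in range(3)]
    best_score = evaluate(best)
    for _ in range(iterations):
        candidate = propose(best, rng)  # LLM proposes a new program
        score = evaluate(candidate)     # evaluator scores it
        if score > best_score:          # the best are fed back to iterate upon
            best, best_score = candidate, score
    return best_score
```

The real system ran a couple of million such propose-and-evaluate rounds, and kept a whole population of promising programs rather than a single best one.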
DeepMind say itās the first time an AI has produced verifiable and authentically new information to solve a longstanding scientific problem.
"To be honest with you," said Alhussein Fawzi, one of the DeepMind researchers behind the project, "we have hypotheses, but we don't know exactly why this works."
NWSH Take: For pure mathematicians, a solution to the cap set problem is a big deal. For the rest of us, not so much. But this result really matters, because it resolves a central and much-discussed question about LLMs: can they create new knowledge? // Until this week, many believed LLMs would never do this – that they'd only ever be able to synthesise and remix knowledge that already existed in their training data. But there was no solution to this problem in the data used to train Codey; instead, it created novel and true information all of its own making. This points to a future in which LLMs solve problems in, for example, statistics and engineering, or can create new and viable scientific theories. // In other words, this little and somewhat nerdish research paper heralds a revolution. So far, only we humans have been able to push back the frontiers of what we know. It's now clear that in 2024, we'll have a partner in that enterprise. // For this reason and so many others, I'm increasingly convinced that an unprecedented socio-technological acceleration is coming. It's been a wild year; things are about to get even wilder.
Like a human
A quick glimpse of two stories this week. Both point in one direction: the humanoids are coming.
Tesla released a new video of its humanoid robot, Optimus. The Generation 2 Optimus can do some pretty fancy stuff, including delicately handling an egg:
Meanwhile, researchers at the University of Tokyo hooked a robot up to GPT-4.
The Alter3 robot is able to understand spoken instructions and adopt a range of poses without those poses being pre-programmed into its database.
In other words, Alter3 is responding in real-time to natural spoken language; it's an embodied version of GPT-4, best understood as a kind of text-to-motion model.
NWSH Take: The closing months of 2023 have brought a welter of humanoid robot news. Amazon are now trialling the Digit humanoid in some US fulfilment centres. The makers of Digit, Agility Robotics, are about to open the world's first humanoid mass-production factory in Oregon. And the CCP says it plans to transform China's economy via an army of these devices. Next year, then, will prove a pivotal one for the longstanding dream that is an automatic human. And Elon Musk wants Optimus to be the One Bot That Rules Them All. // The tricks we see Optimus performing in this new video are pre-programmed. But Tesla is building the world's most capable machine vision AI via an unbeatable data set – funnelled to them from hundreds of thousands of on-road cars – and the world's most powerful supercomputer for machine vision, Dojo. Agility Robotics stole an early lead by getting Digit inside Amazon warehouses. But long term, it's hard to see how anyone beats Optimus. // If humanoids are indeed imminent, some big questions are looming. When humanoids outnumber people, says Musk, "it's not even clear what the economy means at that point". Next year, we'll have to confront this prospect anew.
Interface this
Also this week, some fascinating news on organoids and the future of human-machine interface.
Researchers at Indiana University Bloomington grew brain organoids – essentially clumps of brain cells – in a lab, and attached them to computer chips. When they connected this brain-chip composite to an AI system, they found it was able to perform computational tasks, and even do simple speech recognition.
Clips of spoken language were turned into electrical signals and fed to the brain-chip hybrid, which the researchers call Brainoware. The researchers found that the Brainoware was able to process these signals in a structured way and feed back signals of its own to the AI system, which decoded them as speech.
Lead scientist on the project, Feng Guo, says the result points to the possibility of new kinds of super-efficient bio-computers.
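This setup resembles what machine-learning researchers call reservoir computing: input signals are projected into a fixed, untrained nonlinear system (here, the living organoid), and only a simple readout layer is trained on that system's responses. A minimal numeric sketch, with a random recurrent network standing in for the tissue (all names and numbers are hypothetical, not from the study):

```python
import math
import random

rng = random.Random(42)
N = 30  # number of reservoir units

# Fixed random weights: these stand in for the organoid's own connectivity.
# Crucially, they are never trained -- only the readout is.
w_in = [rng.uniform(-1, 1) for _ in range(N)]
w_res = [[rng.uniform(-0.3, 0.3) for _ in range(N)] for _ in range(N)]

def reservoir_state(signal):
    """Drive the fixed nonlinear reservoir with a 1-D signal; return its final state."""
    state = [0.0] * N
    for u in signal:
        state = [
            math.tanh(u * w_in[i] + sum(w_res[i][j] * state[j] for j in range(N)))
            for i in range(N)
        ]
    return state

# Two toy "utterances" standing in for speech clips: a rising and a falling tone.
rising = [t / 20 for t in range(20)]
falling = [1 - t / 20 for t in range(20)]

# Train only the readout: a simple nearest-centroid linear classifier
# over the reservoir's responses.
s_r, s_f = reservoir_state(rising), reservoir_state(falling)
w_out = [a - b for a, b in zip(s_r, s_f)]
threshold = sum(w * (a + b) / 2 for w, a, b in zip(w_out, s_r, s_f))

def classify(signal):
    score = sum(w * s for w, s in zip(w_out, reservoir_state(signal)))
    return "rising" if score > threshold else "falling"
```

The appeal of the approach, in silicon or in tissue, is that the hard-to-train part does no learning at all; all the training effort lands on a small linear readout.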
NWSH Take: Welcome to the weird – and somewhat terrifying – world of organoids. It's only a week since I last wrote about them; they've become a NWSH obsession. I can't understand why they're not getting more attention; last year brain organoids taught themselves to play the video game Pong, ffs. // Okay, I've calmed down. We're a long way from viable technologies here. Culturing brain organoids, and then sustaining them long enough and in large enough numbers to do anything useful, is extremely hard. But in the Pong story and this week's Brainoware news we see a new form of human-machine interface blinking into fragile life. We see, too, a future in which we're able to grow more computational power in the lab. This story is sure to evolve; I'll keep watching.
Also this week
Researchers at Western Sydney University say they'll switch on the world's first human brain-scale supercomputer in 2024. The DeepSouth computer will be capable of 228 trillion synaptic operations per second, around the same as that believed to take place in the human brain. The researchers say DeepSouth will help us understand more about both the brain, and possible routes to AGI.
UK judges are now allowed to use ChatGPT to help them craft their legal rulings. New guidance from the Judicial Office for England and Wales says ChatGPT can be used to help judges summarise large volumes of information. The guidance also warns about ChatGPT's tendency to hallucinate.
New research shows that frozen methane under ocean beds is more vulnerable to thawing than previously believed. Methane is a potent greenhouse gas; the researchers say the methane frozen under our oceans contains as much carbon as all of the remaining oil and gas on Earth. If released, this methane could significantly accelerate global heating.
Tesla has recalled more than 2 million cars after the US regulator found its Autopilot system is defective. The recall applies to every car sold since the launch of Autopilot in 2015. But this is a "recall" in name only; Elon Musk says Tesla will push a software update to fix the issue, so that no cars need to be returned to Tesla.
The new WALT video generation model can create photorealistic videos out of text prompts or images. Text-to-video is a fast-developing space; WALT joins other text-to-video models, including Google's Imagen and Phenaki and the recently launched, and also impressive, model from Pika Labs.
Chinese video game giants Tencent and NetEase are promoting "patriotic spirit" in their video games to avoid a further crackdown by the CCP. At an annual industry event, the game makers stressed their commitment to "social values". I've written on the CCP's growing concern about the impact of video games on Chinese youth.
OpenAI has announced a "first of its kind" partnership with publishing giant Axel Springer. The deal will see OpenAI pay Axel Springer so that it can offer summarised versions of news stories from its titles, including Politico and Business Insider, to ChatGPT users. OpenAI will also be able to use Axel Springer content in the data sets used to train future models.
A US startup wants to build giant lighthouses on the Moon. Honeybee Robotics say their LUNARSABER towers – which would stand 100 metres tall – would provide light, power and communications infrastructure to a permanent human settlement. Their idea has been selected for development as part of the Defense Advanced Research Projects Agency's 10-year Lunar Architecture initiative.
Humans of Earth
Key metrics to help you keep track of Project Human.
Global population: 8,079,258,487
Earths currently needed: 1.81721
2023 progress bar: 96% complete
On this day: On 16 December 1653 the English revolutionary Oliver Cromwell becomes Lord Protector – king in all but name – of the Commonwealth of England, Scotland, and Ireland.
Infinite Potential
Thanks for reading this week.
This week's apparent proof that LLMs can create new knowledge could turn out to be even more consequential than it now seems. How many longstanding mathematical and scientific problems will be solved in 2024?
I'll keep watching and working to make sense of it all – next year and beyond. And there's one thing you can do to help: share!
I'll be back next week before a break for Christmas. Until then, be well,
David.
To Begin
It's a bumper instalment this week; what do we have in store?
Google DeepMind owned this weekās tech headlines with the release of Gemini, a new multi-modal AI intended to outdo GPT-4.
Meanwhile, Harvard researchers have created tiny biological robots that can heal human tissue.
And the worldās largest nuclear fusion reactor is now online in Japan.
Let's go!
Gemini has liftoff
This week, major news out of Googleās DeepMind AI division.
The DeepMind team announced Gemini, a multi-modal LLM that looks to have pushed back the frontiers when it comes to these kinds of AI models.
Launch videos suggest Gemini can speak in real-time (though as I go to press doubts about that are being raised; more below). It understands text and image inputs, and can combine them in novel ways. Here it is giving ideas for toys to make out of blue and pink wool:
It can write code to a competition standard. In tests it outperformed 85% of the human competitors it was compared against; that means itās excellent even when compared to some of the best coders on the planet.
Gemini can even perform sophisticated verbal and spatial reasoning, and handle complex mathematics. Imagine if you'd had this to help with your homework:
This is significant; OpenAI's GPT-4 is notoriously bad at maths and logic puzzles.
And Google are, of course, taking direct aim at OpenAI with this launch. Gemini comes in three variants: Ultra, Pro, and Nano. US users can access the Pro version now via Bard, and the Ultra model will soon be made available to enterprise clients.
NWSH Take: It will take time to independently verify the claims DeepMind are making; there are some murmurs that their launch videos overstate Gemini's competence. Still, there's no denying this model looks impressive. // Scratch the surface, meanwhile, and we can discern some underlying signals about the future development of LLMs. This AI outperforms GPT-3.5 when it comes to linguistic tasks such as copy drafting. But it's the multi-modal nature of Gemini that's really significant; in particular, its ability to reason. LLMs are trained to do next word prediction; that means they're brilliant at sounding right. But they lack any underlying ability to know whether what they're saying is right, or even makes sense. Gemini seems to address this shortcoming. The promise of an LLM that can act as a true reasoning partner is exciting, and should haunt the dreams of all at OpenAI. // OpenAI's reported work on the still-mysterious Q* algorithm is also believed to be about reasoning. All this suggests we're hitting the limits of the performance improvements to be gained simply by training LLMs on even larger data sets. Instead, the future belongs to those who can weave multiple models together. // Finally, a word for Alphabet's CEO Sundar Pichai: kudos. Alphabet AI engineers invented the transformer model; then the company went missing. Gemini puts Alphabet firmly back in the race. And given the recent fiasco at OpenAI, Pichai this week looks like a man playing a canny long game. It's going to be a fascinating 2024.
Anthrobots are go
Two stories this week signal powerful new avenues of discovery for the life sciences.
Scientists at Harvard and Tufts University have created tiny biological robots, called anthrobots, made out of human cells. In tests, the anthrobots were left in a small dish along with some damaged neural tissue. Scientists watched as the bots clumped together to form a superbot, which then repaired the damaged neurons.
Each anthrobot is made by taking a single cell from the human trachea. Those cells are covered in tiny hairs called cilia. The cell is then grown in a lab, and becomes a multi-cell entity called an organoid. In this case, the scientists created growth conditions that encouraged the cilia on these organoids to grow outwards; the cilia then become something akin to little oars that allow the entity to move autonomously. And lo, an anthrobot has been created.
The researchers say that in future, anthrobots made from a patient's own cells could be used to perform repairs or deliver medicines to target locations.
Meanwhile, researchers at New York University created biological nanobots capable of self-replication. The bots are made from four strands of DNA, and when held in a solution made of this DNA raw material they're able to assemble new copies of themselves.
NWSH Take: Organoids have long been a NWSH obsession. This work on anthrobots builds on the research – by the same team – that created xenobots, which I wrote about back in December 2021. And who can forget the brain organoids that taught themselves to play Pong, which I covered in October of last year? // The original xenobot researchers at Harvard and Tufts were startled when their bots first began to work together in groups, self-heal, and self-replicate. But xenobots are made out of frog cells, and so have limited applications when it comes to humans. Anthrobots, on the other hand, are human in origin. Given their ability to heal other tissues, they show immense promise when it comes to new medical and wellness treatments. // As so often at the moment, machine intelligence underpins these advances. To create the original xenobots, AI supercomputers were used to "simulate a billion years' worth of evolution in just a few days". No wonder Nvidia CEO Jensen Huang says "digital biology" will be a central part of the AI story over the coming years. I'll keep watching.
Come together right now
The world's largest nuclear fusion reactor came online in Japan this week.
JT-60SA, in the Ibaraki Prefecture, is an experimental reactor capable of heating plasma to 200 million degrees Celsius. Scientists say it offers the best chance yet to test nuclear fusion as a source of near-infinite clean energy.
In fusion, two or more atomic nuclei are smashed together such that they become one; this results in an energy release.
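For a sense of the numbers: in fusion the combined mass of the products is slightly less than the mass of the reactants, and that difference is released as energy via E = Δmc². A quick worked example for deuterium-tritium fusion, the reaction most power-plant designs target (this is a general physics illustration, not a figure from the JT-60SA story):

```python
# Energy released by a single D-T fusion event: D + T -> He-4 + n.
# E = delta_m * c^2, expressed via the standard 931.494 MeV per atomic mass unit.
U_TO_MEV = 931.494

deuterium = 2.014102  # atomic masses in unified atomic mass units (u)
tritium = 3.016049
helium4 = 4.002602
neutron = 1.008665

delta_m = (deuterium + tritium) - (helium4 + neutron)  # mass defect, u
energy_mev = delta_m * U_TO_MEV

print(round(energy_mev, 2))  # prints 17.59
```

Roughly 17.6 MeV per reaction: millions of times the energy of a typical chemical bond, which is why such small amounts of fuel promise so much power.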
Meanwhile, UK-based Rolls-Royce showcased a prototype lunar nuclear reactor, which they say could power a permanent human settlement on the Moon.
NWSH Take: Fusion is the energy dream that has remained, so far, just out of reach. It doesn't output CO2. It doesn't create a lot of dangerous nuclear waste, as fission does. And proponents say it could mean near-infinite renewable energy, on tap. // And now, we're getting closer. Last year saw the first controlled fusion reaction that generated more energy than was needed to make the reaction happen: this is the longstanding net energy gain goal. Meanwhile, a startup ecosystem is flourishing; US-based Helion, for example, are working to build the world's first commercial fusion reactor. And they've laid down a clear timeline: the startup recently signed a deal with Microsoft to supply the tech giant with energy starting in 2028. // It remains to be seen whether Helion, or anyone else, can achieve fusion in this decade. But if someone does, it will be a transformative moment; and we're closer than ever.
Also this week
š§® IBM announced Quantum System Two, its most powerful quantum computer. The system integrates three 133-qubit Heron processors. IBM also announced Condor, a new 1,000-qubit processor. IBM are leading the way, right now, towards useful and utility-scale quantum supercomputers. If that promise is realised it will unlock insane new capabilities across climate simulation, the creation of new medicines, supply chain management and more. Read an interview with IBMās director of quantum, Jerry Chow, here.
š¼ Stability AIās new image generator can create 150 images per second. StreamDiffusion is built on top of Stability AIās sd-turbo image generation model. And X users are using it to create tens of thousands of cat pictures.
š¦¾ The humanoid robot currently in trials inside Amazon warehouses will eventually cost just $3 an hour to run. The CEO of Agility Robotics, Damion Shelton, says the Digit robot currently costs around $12 an hour to operate, but this will fall rapidly once mass production starts. The median wage for workers in Amazonās US fulfilment centres is $18 an hour. Agility will open the worldās first humanoid robot factory in Oregon in 2024.
ā US officials have warned chip maker Nvidia to stop redesigning its AI chips in an attempt to get around restrictions on exports to China. The US recently imposed restrictions on sale of advanced AI chips to China; meanwhile the 2022 US CHIPs act will pour over $250 billion into US domestic chip design and manufacturing capability.
š” A research team at Google got ChatGPT to spit out its training data. The team asked ChatGPT to repeat the word āpoemā forever; this caused the app to produce huge passages of literature, which started to contain snippets of the text that the underlying AI model was trained on. OpenAI donāt want to reveal the data sets used to train GPT-4 and other models; Ilya Sutskever, their chief scientist, says training data amounts to part of the companyās ātechnologyā.
🇨🇳 Meta says China is "stepping up" its attempts to manipulate public opinion in the Global North. The company says it's taken down five networks of fake Chinese accounts this year: the most originating from any single country. The accounts were posting content that, among other things, attacked critics of the CCP.
🔥 Average global temperatures hit 1.4C above pre-industrial levels this year. The World Meteorological Organization's State of the Global Climate report says 2023 will be the hottest year on record, surpassing the current record holder, 2016, by a considerable margin. Two weeks ago I wrote on how Earth for the first time broke the 2C heating barrier on two successive days in November of this year.
💊 The XPrize Foundation has launched what it says is the largest competition in history: a prize for research that advances human longevity. The Healthspan Prize will award $101 million to the team that develops a therapeutic that can, in one year, restore muscle, cognition, and immune function by a minimum of 10 years in people aged 65 to 80. The prize has been launched in partnership with the Hevolution Foundation, a new Saudi-based organisation dedicated to funding longevity research.
😴 A new startup says technology-induced lucid dreaming could enable people to work while asleep. Prophetic say their headband, the Halo, releases pulses of ultrasound waves into a region of the brain associated with lucid dreams. CEO Eric Wollberg says that the ability to remain in control of their choices while they dream could enable users to write code or work on a novel while they sleep.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🌍 Global population: 8,077,686,653
🌎 Earths currently needed: 1.81672
🗓️ 2023 progress bar: 94% complete
📅 On this day: On 8 December 1980 John Lennon was shot and killed outside the Dakota Building in New York City.
La Mode
Thanks for reading this week.
We'll soon learn more about DeepMind's new Gemini model, and whether it's really as capable as the launch videos suggest.
Either way, the ongoing collision between machine intelligence and human creativity is momentous; and a classic case of new world, same humans.
I'll keep watching, and working to make sense of it all.
Now you've reached the end of this week's instalment, why not forward the email to someone who'd also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I'll be back soon. Until then, be well.
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
To Begin
This week, more AI magic rains from the sky.
Also, average temperatures on planet Earth exceed the 2C warming threshold for the first time.
And my take on the OpenAI fiasco. In the end, it's about power.
Let's get into it.
✨ Like magic
This week, further glimpses of the ongoing collision between human creativity and machine intelligence.
Stability AI released Stable Video Diffusion, a new text-to-video model that looks to be a step beyond anything we've seen so far.
In keeping with the company's open source mission, the code for the model is available at its GitHub repository.
Meanwhile, X users went wild for a new tool, Screenshot to Code, that leverages GPT-4 and DALL-E to take a screenshot of any web page and automatically write the code that will render it:
And Elon Musk announced that X's new on-platform large language model, Grok, will launch to all Premium users next week:
Grok is trained on a vast dataset of X posts; it's sure to be expert at writing posts with a great chance of going viral. What's more, it will have access to X posts in real time; that could make for a whole new way to discover and interact with news stories.
⚡ NWSH Take: This gallery of the week's AI wonders could go on far longer. I didn't mention the new voice-to-voice model from UK-based ElevenLabs, for example: just upload your own voice and hear it converted to that of a famous celebrity, or a custom character that you create. // What's the broader point here? A couple of weeks ago I shared an excerpt from a long AI essay called Electricity and Magic. That essay argues for a two-sided model of machine intelligence and its manifestations in the coming decades. First, machine intelligence is becoming something foundational: akin to a form of fuel that will power an army of autonomous vehicles, robots, and more. But in our daily lives AI will manifest differently; not as fuel, but as magic. The innovations above give a glimpse of what I'm talking about. AI is moving into domains, from music to film-making to writing, once believed to be impervious to encroachment by automation. It's as though someone has waved a magic wand over our machines. // The crucial point to understand, though, when it comes to AI magic? The result won't be, as many people imagine, the devaluation of human creativity. Instead, amid a tsunami of machine-generated outputs, what is uniquely human, including creative work grounded in embodied experience, will only become more prized.
🌡️ Crossing over
Another significant, and unwelcome, climate milestone was passed in the last seven days.
According to the EU's Copernicus Climate Change Service (C3S), Friday 17 November was the first day on which average global temperatures were more than 2C above pre-industrial levels.
Data for 17 November indicated that global surface air temperatures were 2.07C above those in 1850. Provisional data for the following day indicated a 2.06C elevation.
This doesn't mean that the much-discussed 2C threshold has been crossed. For that, we'd need to see a sustained elevation above 2C.
C3S is part of the EU's Copernicus Earth Observation Programme, which draws on vast amounts of satellite and other data to track the changing planetary environment.
⚡ NWSH Take: It's expected that we'll see occasional 2C+ days well before we exceed the 2C limit as commonly defined. Still, this week saw the first and second days ever on which global average temperatures tipped over that threshold. It's pretty clear where we're heading. // This news comes on the eve of the UN COP28 summit in Dubai, which starts on 30 November. Many view last year's summit, held in Egypt, as the moment at which the internationally agreed 1.5C target slipped out of reach; the summit notably failed to agree a phase-out of all fossil fuels, despite support for that proposal from over 80 countries. But the summit did achieve something: the establishment of a Loss and Damage Fund intended to transfer tens of billions to the developing nations most at risk from climate change, to help them mitigate the impacts of floods, droughts, and more. // At COP28, expect another push for a commitment to phase out all fossil fuels. And expect petrostates, including the host, to resist that call. As consensus grows that the 2C target will be breached, more attention will turn to plans for adaptation, and to who should pay for them.
Form an orderly Q*
I can't let this instalment pass without talking about the OpenAI fiasco.
Tech watchers everywhere munched their popcorn this week while OpenAI proceeded to fire CEO Sam Altman and hire a new CEO, only to get rid of that new hire and rehire Altman five days later.
It's still unclear what led the OpenAI board to eject Altman in such dramatic style. But the mainline theory is that this was about internal division between those who want to prioritise the original, nonprofit mission to research safe machine intelligence, and those, Altman apparently among them, who want to move fast and make lots of money.
Yesterday, news agency Reuters made waves with claims that the debacle may have been related to an advance called Q*. The details of that advance, or indeed whether there has been any advance at all, are unconfirmed. Cue a whole new wave of speculation:
Most believe Q* is related to a generalised form of Q-learning, a kind of reinforcement learning, that would enable LLMs to solve multi-step logic problems. Or, in simpler terms, to take multiple, reasoned steps towards a long-range goal in the way we humans do all the time.
Reuters imply that this advance prompted some in the organisation to fear that OpenAI was getting (dangerously) close to Artificial General Intelligence. And that this is what sparked all the drama.
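For readers who want something concrete: textbook tabular Q-learning, the technique that the Q* speculation takes as its starting point, fits in a few lines of Python. To be clear, this is a generic illustration of the update rule, and has nothing to do with whatever OpenAI has or hasn't built; the toy corridor environment and the hyperparameters here are invented purely for the example.

```python
import random

random.seed(0)

# Toy environment: a 1-D corridor of 5 states. The agent starts at the left
# end (state 0) and earns a reward of 1 for reaching the right end (state 4).
N_STATES = 5
ACTIONS = [0, 1]  # 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Deterministic transition; the episode ends at the rightmost state."""
    nxt = max(0, state - 1) if action == 0 else min(N_STATES - 1, state + 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

for _ in range(500):  # training episodes
    s, done = 0, False
    while not done:
        # Epsilon-greedy: usually exploit the best-known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r, done = step(s, a)
        # The Q-learning update: nudge Q(s, a) towards the observed reward
        # plus the discounted value of the best action from the next state.
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# The learned greedy policy should point right from every non-terminal state.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

The agent learns a lookup table of values, one per state-action pair, by repeatedly applying that one-line update. The speculated leap with Q* would be from tables like this to language models that can plan and evaluate multi-step chains of reasoning; nobody outside OpenAI knows the details.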
⚡ NWSH Take: It's believed that OpenAI will start to train GPT-5 next year. If that's true, and if Q* really is a big step towards generalised agents, then the AI story will only accelerate across the next 12 months. We're all, by now, accustomed to tech hype cycles (the metaverse!) but it's becoming ever harder to deny that something significant is happening. // But the events of this week also make clear another truth. Some technologists, including Altman, want us to believe that this technology is so powerful that we may lose control of it entirely, with existentially bad results for humanity. My hunch is that this is something of a psyop, designed to distract us from the real danger: AI that is controlled, but by a tiny, unaccountable, and chaotic group of Silicon Valley technologists. // At the heart of this is an eternal aspect of human affairs that techno-accelerationists rarely want to discuss: power relations. Who gets to control this transformative new force, trained on a literary and cultural legacy that belongs to us all? Sam Altman? The OpenAI board? It seems the "move fast and make money" contingent at OpenAI won this battle; but should that be the end of it? Altman has waged a long marketing campaign around the idea that the AI he's developing is powerful enough to pose existential risks. This feels like a good time to call his bluff on that. Will he tell us what happened inside OpenAI across the last seven days? If not, perhaps we should send in public representatives to discover the truth.
🗞️ Also this week
👨‍💻 A former Googler made headlines with a resignation note claiming that morale inside the company is at "an all-time low". Ian Hickson worked at Google for 18 years; he says the organisation's culture is "eroded" and accuses CEO Sundar Pichai of a lack of vision. Google AI engineers developed the transformer model that underpins the generative AI revolution, but the company has seen its AI efforts outshone by OpenAI and its partner Microsoft.
☀️ Portugal ran entirely on renewable energy for almost a week. Wind, solar, and hydro power met the electricity needs of the country of 10 million for six days, from 31 October to 6 November.
🚗 A Florida judge found there is "reasonable evidence" that Tesla executives knew their self-driving technology was not safe. Palm Beach county circuit court judge Reid Scott said Elon Musk and others "engaged in a marketing strategy that painted the products as autonomous" when they are not. The ruling opens the way for a lawsuit over a fatal 2019 crash in Miami involving a Tesla Model 3.
🎓 Cambridge University is launching a new Institute for Technology and Humanity. The institute will bring together computer scientists, robotics experts, philosophers and historians in a multi-disciplinary effort to analyse the ongoing technology revolution.
🐭 Canadian researchers doubled the lifespan of mice using antibodies that boost the immune system. The team at Brock University say these antibodies encourage the clearing out of damaged proteins that accumulate over time, and that they could form the basis of an effective anti-ageing treatment for humans.
🌳 The Biden administration is developing a plan to capture and store CO2 under the nation's forests. The US Forest Service is reportedly proposing a rule change to allow storage of carbon under forests and grasslands; the plans would see CO2 moved to its storage locations via a vast network of new pipelines.
🌠 Scientists say they're mystified by an extremely high-energy particle that fell to Earth. The so-called Amaterasu particle, spotted by a cosmic ray observatory in Utah's West Desert, was found to have an energy exceeding 240 exa-electron volts (EeV); that's the second highest ever detected, after the legendary 1991 Oh-My-God particle, which was measured at 320 EeV. The Amaterasu particle is particularly mysterious, say scientists, because it appears to have emanated from the Local Void, an area of space bordering the Milky Way galaxy that is believed to be empty.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🌍 Global population: 8,074,835,742
🌎 Earths currently needed: 1.81584
🗓️ 2023 progress bar: 90% complete
📅 On this day: On 24 November 1974 paleoanthropologists Donald Johanson and Tom Gray discovered the skeleton of Lucy, a female hominin who walked upright and lived around 3.2 million years ago.
Just Like That
Thanks for reading this week.
Power and technology: two all-consuming obsessions for the human collective and for this newsletter.
The power struggle being waged over machine intelligence is only just getting started. I'll keep watching, and working to make sense of it all.
To Begin
This week, Microsoft and Nvidia go head to head with new chips intended to train the next generation of AI models. And a clever hoax underlines a powerful truth when it comes to the war for compute power.
Meanwhile, a viral tweet about viral TikToks engenders another viral tweet. The lesson here? We're living in a deeply enweirdened informational environment.
And in a world first, the UK approves a CRISPR-fuelled medicine.
Let's go!
💾 Compute wars
This week, a glimpse of an emerging power struggle set to help shape the decades ahead. This isn't a battle for land or natural resources. I'm talking about the struggle for compute power.
Microsoft announced its long-awaited first custom AI chips, the Azure Maia AI chip and the Cobalt CPU. Set to arrive in 2024, the chips will power Microsoft's Azure data centres, and are intended to train the next generation of large language models (LLMs).
And Nvidia launched its new H200 AI chip, the successor to the H100. The iconic H100 is the fuel thatās driven this AI moment; huge clusters, consisting of tens of thousands of H100s, were used to train pretty much every large AI model you can name, including GPT-4.
Meanwhile, something quite different. A mysterious company called Del Complex announced the BlueSea Frontier Compute Cluster: a massive offshore data centre intended to circumvent the new US Executive Order that says organisations training the most powerful new AI models must share information with government.
Del Complex calls BlueSea Frontier "a new sovereign nation state". The announcement post racked up 2.5 million views, and was accompanied by a fancy website featuring images of BlueSea scientists at work. Tech blogs reported on the launch.
But wait: it's all a hoax! BlueSea Frontier is a comment on these strange times by an artist and developer called (or so he claims) Sterling Crispin.
But I think Crispin may be onto something.
⚡ NWSH Take: The Del Complex hoax was a great bit of online trickery. But it was so convincing because it taps into a deep underlying truth: compute is becoming a crucial nexus for techno-economic, sovereign, and geopolitical power. // The tech battle taking shape here is just one dimension of a broader story. Microsoft need to supply huge compute resources to their partner OpenAI to allow it to fully commercialise ChatGPT and train the upcoming GPT-5. So far, their data centres have been dependent on Nvidia AI chips. The new Maia AI and Cobalt CPU chips are intended to change that. // The broader story? It's now clear that the nation states with the best machine intelligence will own the geopolitical future. The USA and China are now locked in a race to develop the vast compute needed to train ultra-powerful next-generation models. Last year's US CHIPS Act devotes $280 billion to semiconductor and AI research; inflation-adjusted, that's more than the cost of the entire Apollo moon programme. And last week I wrote about new US restrictions on chip exports, intended to hamper China's AI efforts. // It wouldn't surprise me, then, if we do see the establishment of new offshore compute clusters, or even the development of new pseudo-sovereign entities based around compute power and AI. As with all the best satire, Del Complex's vision is so wild it might just come true.
📰 Can't handle the truth
Also this week, another reminder of the hall of mirrors that is our new and connected media environment.
US journalist and X (formerly Twitter) personality Yashar Ali went viral with a tweet about TikTok. Ali claimed that across the previous 24 hours, many thousands of TikToks had been posted in which mostly young North Americans claimed to have read and agreed with Osama bin Laden's notorious 2002 "Letter to America" manifesto.
In the comments, theories abounded. Some said it was a signal of Gen Z's misguided politics. Others saw conspiracy, and said it was another indication that China is using TikTok as a channel for sophisticated psyops intended to destabilise the Global North. We should, said those people, ban TikTok.
Then another X user went viral with a different idea. These Bin Laden TikToks were being made and seen in huge numbers, he said, only because of Yashar Ali's original tweet.
Other people said that was stupid, and itself tantamount to a conspiracy theory.
Meanwhile, this week the EU decided it would stop all advertising on X due to "widespread concerns relating to the spread of disinformation". This follows EU research, published in September, which concluded that X is now the biggest online source of disinformation.
⚡ NWSH Take: Is TikTok an app for fun dance memes or a highly sophisticated channel for Chinese cultural warfare? Is the X algorithm now giving higher priority to toxic content, or is that just anti-Elon paranoia? Did thousands of young North Americans organically discover and agree with the Bin Laden letter, or is a dark controlling force at work? // The answer in every case: no one knows for sure. And that in itself is an indication of where we're at. // The information environment that mediates our democracies has become insanely fragmented and opaque. The world's richest man has total control over a key global information channel. The CCP has its hands around another. In both cases, I find it impossible to believe that the parties in question aren't up to some tricks. // A totally connected world, in which every individual is empowered with a voice of their own, was supposed to create information nirvana. Those who bought that idea couldn't have been more wrong. We need old media principles, editorial standards and, yes, gatekeepers, more than ever. But millions in the Global North are currently convinced that the New York Times and the BBC are the real problem. In this increasingly chaotic and paranoid information environment, those institutions and others like them need to adapt rapidly. Most of all, they must rejuvenate belief in what they offer.
š§¬ Major edits
Huge CRISPR news this week.
The UK's medicines regulator became the first in the world to approve a medical treatment that uses CRISPR gene editing technology. The medicine, Casgevy, is a treatment for sickle cell disease, a serious inherited disorder that causes red blood cells to malfunction and that affects millions worldwide.
During treatment, blood-producing stem cells are taken from the patient. CRISPR is used to edit those stem cells to remove the error that causes sickle cell disease, before the edited cells are infused back into the patient.
Meanwhile, researchers at the Chinese Academy of Sciences created a monkey using two embryos, with donor material from one embryo injected into another. This has been done before with simpler animals such as mice and rats, but is a first in primates.
The donor stem cells were gene edited to express a green fluorescent protein, causing the resultant live monkey to glow:
⚡ NWSH Take: Gene editing technology is already driving a transformation in the life sciences, healthcare, and agriculture. This CRISPR sickle cell treatment is wonderful news, and there are promising early indications from trials of CRISPR therapies to cure a form of hereditary blindness, and to train immune cells to fight certain cancers. Meanwhile, in September 2021 Japanese startup Sanatech Seed became the first company to sell CRISPR-edited food: their tomatoes were edited to contain more GABA. // So we're developing our ability to manipulate genes. The next revolution coming? That ability will collide with a new ability to speak the language of DNA via transformer models, the kind of models that underlie LLMs, trained on huge amounts of genomic data. The resultant AIs will be able to discern deep underlying patterns that help us zero in on useful or rogue genes; see DeepMind's new AlphaMissense, which detects and classifies genetic mutations.
🗞️ Also this week
🤯 Shock news, breaking late last night UK time: Sam Altman has been fired from OpenAI! In a statement the OpenAI board said that Altman "was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities." This news is a jolt out of nowhere. Altman led the company that sparked this transformative AI moment, and as such has been the most celebrated technologist on the planet for the last couple of years. The OpenAI board are accusing him of lying here, and given the summary firing we can't be talking about a white lie. Two glimpses of the rumour mill: (i) this is about dark power moves by Elon, or (ii) OpenAI has achieved AGI but Altman didn't tell the board. But that's all speculation. More news is sure to emerge.
🧠 The Argonne National Laboratory in the US has begun training a 1 trillion parameter scientific AI. AuroraGPT is being trained on a vast number of research papers and other scientific information, and once complete will offer answers to scientific questions. This time last year Meta released Galactica, its AI model trained on 48 million research papers. The model was withdrawn three days later, after users said it produced false outputs. This week, the Meta engineer behind Galactica looked back at the episode.
💸 Google is planning a massive investment in generative AI startup Character.ai. Founded by two former Google AI engineers, the platform leverages an LLM to allow users to create and chat with AI characters, including virtual versions of their favourite celebrities. As regular readers will know, the rise of AI-fuelled virtual companions is a longstanding NWSH obsession.
🗺️ Speaking of virtual companions, Airbnb CEO Brian Chesky says the "holy grail" for Airbnb is to become an AI travel agent. Chesky says of this vision: "It doesn't just ask you 'where are you going' or 'when are you going' but understands who you are and then can match you to anything you want, especially with your travel needs."
🪐 Chinese researchers have created a "robot chemist" that could create breathable oxygen on Mars. The robot would extract oxygen from water on the Red Planet. But it's still not clear if it would function "in the Martian environment".
🛩️ US startup Boom Supersonic say they're nearing the first test flight of their supersonic passenger jet. The startup said the flight could happen this year. It also announced new funding from Saudi Arabia's NEOM Investment Fund, taking its total funding to $700 million.
☢️ The US military will give Lockheed Martin $37 million to develop nuclear spacecraft technologies. The move is part of the US Air Force Research Laboratory's Joint Emergent Technology Supplying On-Orbit Nuclear (JETSON) effort to create a nuclear fission reactor in space.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🌍 Global population: 8,073,490,256
🌎 Earths currently needed: 1.81542
🗓️ 2023 progress bar: 88% complete
📅 On this day: On 18 November 401 King Alaric I led the Visigoths across the Alps to invade northern Italy.
Okay Computer
Thanks for reading this week.
The news about Altman is a shock. And most telling, at the moment, are the theories people are concocting to try to explain it.
Sam has created AGI and the board want to hide it from us! In this new world, we're the same old humans, with the same tendencies towards gossip and wishful thinking.
I'll keep watching, and working to make sense of it all.
To Begin
It's a bumper instalment this week. What do we have in store?
The Chinese government is calling on its technology industry to roll out millions of advanced humanoid robots.
Also, NASA wants to learn how to extract breathable oxygen from Moon dust. And OpenAI says everyone can now create their own bespoke version of ChatGPT.
Let's go!
🤖 Work machines
This week, a glimpse of the coming collision between human population dynamics and autonomous machines.
A new study by researchers at University College London found fears of climate breakdown are changing decision-making around whether or not to have children. Published in the journal PLOS Climate, the research found that climate concern was associated with a desire for fewer children, or none at all.
The researchers say theirs is the first systematic study of the way attitudes to climate change are affecting reproductive choices.
Meanwhile, the Chinese Ministry of Industry and Information Technology (MIIT) issued a nine-page communique calling for domestic mass production of advanced humanoid robots by 2025. By 2027, the document says, these robot workers should be "an important new engine of economic growth".
But what is the connection between new trends in reproductive decision-making and China's dash towards humanoid robots?
Here's a graph of the birth rate in China from 2000 to 2022:
⚡ NWSH Take: The CCP knows that China is losing its battle with demographics. If the country is to become the 21st-century hegemon that President Xi dreams about, then it needs an army of workers. But instead China is watching its birth rate plummet. Meanwhile, the Global North faces the same challenge; in North America and western Europe population growth flatlined long ago. And now it seems that fears over climate change are only set to exacerbate that trend. // This is a huge structural challenge; fewer workers tends to mean a less productive and smaller economy. So what to do? The CCP have already tried ditching the one child policy and incentivising couples to have more children; it didn't work. This week's clarion call from the MIIT offers a glimpse of an alternative answer: robots. If China won't have enough human workers to sustain economic growth, then the CCP hopes humanoid robot workers can do the job(s) for them. // Innovators in the Global North are heading in the same direction. This week, Tesla posted over 50 job ads for its Optimus robot team. Elon Musk, who has long bemoaned population decline and its coming impacts, has said he believes Optimus will end up being a bigger part of Tesla's business than EVs. And two weeks ago I wrote on how Amazon are trialling the Digit humanoid robot in some US fulfilment centres. // My co-founder at The Exponentialist, Raoul Pal, says that in the new world we're building, robots are demographics. In other words, the rise of autonomous machines is set to decouple economic growth from population growth. The CCP, Musk, and many others besides are making the same bet. And my guess? They're going to be proven right.
🚀 Space out
NASA continues to prepare for its mission to the Moon. This week, further news.
The agency wants to explore methods to extract breathable oxygen from Moon dust. Its Space Technology Mission Directorate is seeking input from industry partners and external researchers, and hopes to create a demonstration technology soon.
NASA hopes to put humans back on the Moon for the first time since 1972 with its Artemis 3 mission, currently planned for 2025.
Meanwhile, stunning pictures came back this week from the European Space Agency's Euclid telescope. Launched in July, Euclid is now around 1.5 million kilometres from Earth; that's about four times as far away as the Moon. And it's capturing images of incredible clarity.
This is the Perseus cluster, a group of over 1,000 galaxies located 240 million light years from Earth. Each galaxy pictured (and there are a further 100,000 galaxies in the background of the shot) contains hundreds of billions of stars:
Here's the Horsehead Nebula, a cloud of dust and gas in the Orion constellation:
⚡ NWSH Take: Okay, this entire segment was mainly an excuse to show you the breathtaking images coming back from Euclid. But there is an underlying truth here. We're amid a new space age, due mainly to the insane drop in the cost of access to space. Back in 2010 launch costs hovered at around $20,000/kg; today they're around $1,000/kg. That's thanks mainly to the reusable rocket technology developed by SpaceX. We're heading back into space via multiple partnerships between the international space agencies and private companies. And this time the plan is to stay there. // One signal of the emerging public-private space ecosystem? This week, SpaceX agreed to deliver the US military's X-37B space plane into orbit on its Falcon Heavy rocket in December. And private space companies, including SpaceX, will play a huge role in the upcoming Artemis crewed mission to the Moon. Most analysts reckon that mission will end up being delayed until 2026/7. Even so, the next few years are set to be a thrilling road towards the lunar surface. Expect Moon hype to reach fever pitch. And from there, of course, all roads will lead to Mars.
š§ Your intelligence
Thereās little doubt about the biggest story in the mainstream tech press this week.
OpenAI made headlines all over again with the launch of custom GPTs: bespoke versions of ChatGPT that any user can create using simple natural language instructions and their own training content or data.
The feature was announced at OpenAI Dev Day, which saw CEO Sam Altman create a custom Startup Mentor GPT live on stage in about five minutes.
X (formerly Twitter) went wild. And yes, a million and one GPTs are assuredly coming.
How is this going to play out?
ā” NWSH Take: Remember back in 2012, when every third friend of yours was making an app? OpenAI are hoping to recreate that magic all over again. They want to be the platform that profits from a huge wave of AI innovation. ChatGPT Plus users will be able to create custom GPTs and charge others for use, and Altman says theyāll be rewarded via revenue share. // Remember, any ChatGPT Plus user can now create a bespoke GPT in a few minutes. There will be a vast long tail of these things. The winners, though, will be those with (i) deep reserves of proprietary content or data that they can use to enhance the outputs of their bot, and (ii) audiences who are receptive to their creations. // But creating a bespoke GPT is now so easy that weāll also see something we didnāt with apps. That is, individuals creating bespoke bots just for their own use ā to help them manage their accounts, or choose birthday presents for family and friends, and much else besides. Yes, this is an App Store moment for AI. But it also marks another beginning: of personalised machine intelligence on tap.
šļø Also this week
š„ The Exponentialist, my new premium and enterprise-level research service, launched to the world! Itās a partnership between me and the macro economist and Real Vision CEO Raoul Pal. To mark launch day, weāve made an excerpt of the first essay free for all to read ā watch out for it in your inbox on Sunday.
š New tech company Humane launched the AI Pin. This long-awaited first product from Humane is a voice and gesture-controlled device that clips to your shirt and integrates with ChatGPT and other services. Humane hope their ādisappearing computerā will be the next iPhone. It remains to be seen whether people really want to talk to a badge on their lapel. One fascinating signal, though? See how OpenAI ā and their partner, Microsoft ā are set to become the underlying infrastructure that fuels a whole raft of AI innovations. Where are Alphabet? And when will Apple launch their own generative AI play? Itās going to be fascinating watching this battle unfold.
šØš³ Nvidia has developed special new AI chips for China, according to Chinese media. Recent US regulations prevented Nvidia from selling its powerful A100 AI chip to Chinese companies. The new chips ā which include the H20, reportedly only half as powerful as the A100 ā would not fall under the restrictions. Nvidia has so far declined to comment.
š§¬ Scientists have created a new strain of yeast with a genome that is over 50% synthetic DNA. A group of labs called the Sc2.0 consortium has been attempting to create a strain of yeast with a fully synthetic genome for 16 years now; this latest advance marks a major step forward. Until now, scientists had only managed to synthesise the much simpler genomes of some viruses and bacteria.
šØāāļø Neuralink is seeking a volunteer for its first brain implant surgery. The company wants to find a quadriplegic adult under the age of 40, who will allow a surgeon to implant electrodes and small wires into the part of the brain that controls the forearms and hands.
š A new UN survey says 85% of citizens across 16 countries are worried about online disinformation. The 16 countries surveyed will each host elections in 2024. The survey found that 87% of respondents fear disinformation will influence the outcome of those elections. Back in New Week #122 I wrote on new research showing far fewer US adults are following mainstream sources of news.
š A team of Chinese researchers created a swarm of drones able to ātalk to one anotherā and assign tasks to achieve a shared goal. The drone swarm is fuelled by a large language model, which enables the drones to act as AI agents that can reason in language, share that reasoning with other drones, and determine courses of action.
š± Samsung unveiled its new generative AI model, Gauss, and says it will soon arrive on its devices. The model can generate text, code, and images, and the company says it will be available on its Galaxy S24 phone, due to be released in 2024. For the second time in this weekās instalment I ask: how long until Apple deploys its own on-device LLM? Rumour has it that the company is planning a radical LLM-based overhaul of its AI assistant, Siri.
š Humans of Earth
Key metrics to help you keep track of Project Human.
š Global population: 8,072,064,026
š Earths currently needed: 1.81498
šļø 2023 progress bar: 86% complete
š On this day: On 11 November 1675 German mathematician Gottfried Leibniz demonstrates integral calculus for the first time.
Robot Army
Thanks for reading this week.
The enmeshment of labour force dynamics and robots will be one of the most consequential shifts of the coming decades.
This newsletter will keep watching, and working to make sense of it all. And you can help!
Now youāve reached the end of this weekās instalment, why not forward the email to someone whoād also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
Iāll be back on Sunday. Until then, be well.
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
To Begin
This week, two intriguing stories and a big announcement.
Global leaders and senior tech executives gathered at the UKās AI Safety Summit. But beyond the walls of Bletchley Park, the debate on AI is raging hotter than ever.
Meanwhile, tech billionaires in Silicon Valley are running into trouble over their plans to build a new city-state utopia called California Forever.
As for the announcement? Just keep scrolling.
Letās do this.
š§ Dream machines
The UK government this week trumpeted the success of its international AI gathering; it took place at the historic fountainhead of the computer revolution, Bletchley Park.
An impressive guest list, including US vice-president Kamala Harris and the European Commission president Ursula von der Leyen, gathered at the Summit. And their meeting resulted in the Bletchley Declaration, which the UK government has hailed as a world-first international statement on AI safety.
Hereās a taste for those who speak technocrat:
āWe affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systemsā¦We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emergeā¦ā
But beyond the Declaration, this week made it clear that weāre further than ever from a consensus on the deep implications of machine intelligence. In fact, this was the week that a maximum-volume war of words broke out between leading AI builders.
Google Brain co-founder and Stanford professor Andrew Ng said key AI players, including Sam Altman, are wildly playing up fears of AI doom in order to spark regulation that will suppress competition from insurgents. He called the proposal that the training of powerful AI models should require a license ācolossally dumbā.
That message was echoed by Metaās chief AI scientist Yann LeCun, who favours open source AI models ā that is, models anyone is free to use.
But Google DeepMind CEO Demis Hassabis hit back at LeCun, saying that failure to regulate AI could result in āgrimā consequences for humanity.
This account barely scratches the surface of the arguments that raged this week.
As for OpenAI, they launched a new team intended to study and prepare for ācatastrophic risksā including an AI-instigated nuclear war.
ā” NWSH Take: Who would ever have thought that a bunch of super-smart, tech-obsessed social media addicts would end up arguing like this? While Bletchley saw a rare moment of diplomatic unity, inside the AI industry the full spectrum of opinion is manifest, from AI doom is all a load of rubbish to act now or the end of humanity is probable. // It pays, here, to remember that two things can be true at once. Yes, Altmanās global tour to warn of ācatastrophic risksā is a carefully orchestrated marketing campaign. But itās also the case that no one, yet, has a definitive picture of the risks in play. // What is increasingly clear, though, is that the rise of machine intelligence is the primary fact of our shared lives now. It will do more than any other force to reshape our collective future. // But the Bletchley Declaration consists of bromides that will change nothing. And the sight at Bletchley this week of UK prime minister Rishi Sunak interviewing Elon Musk ā positioning Musk as the star and Sunak as a fan ā spoke volumes about the power imbalances weāve allowed to evolve when it comes to government (i.e. the people) and unaccountable tech overlords. // We must recover our collective agency; our ability to assert human modes of living and being in the face of an ongoing technology revolution. That means doing politics. Bletchley was a start. But whatās needed next are citizen assemblies, and an authentic movement around AI for the people.
š„ The Exponentialist
As some of you will have seen on social media, I made a big announcement this week.
Iāve partnered with Raoul Pal, renowned macro-economic thinker and CEO of Real Vision, on a new premium research service called The Exponentialist.
This is a professional and enterprise-level service for those who want to go deep on emerging technologies, the futures theyāll create, and the challenges and opportunities latent in all that.
This wonāt be for everyone in the NWSH community. But if youāre a foresight professional, strategist, founder, marketing leader, product manager, designer or much else besides, The Exponentialist will fuel you and your team. And it will take up only a fraction of your research budget.
It will also be deeply valuable for anyone seeking to position an investment portfolio around tech and crypto.
This launch changes nothing about New World Same Humans and the community weāre building here. Our mission continues unchanged!
If The Exponentialist sounds useful, go here to learn more. And if youāve subscribed or youāre considering it, hit reply to this email so I can say thanks.
š Now and Forever
While the newsletter was on pause, we learned that a group of Silicon Valley billionaires are planning a new city-state utopia in California. This week, it seems their project has run into trouble.
California Forever is a new city planned for construction in Solano County in the north of the state. Itās backed by some of techās most notable power players, including ultra-rich VC Marc Andreessen, Stripe founders Patrick and John Collison, and LinkedIn founder Reid Hoffman.
The groupās vision for the city has strong solarpunk, hi-tech sustainable utopia vibes:
But this week it was reported that the mysterious company behind the plans, Flannery Associates, is accused of using āstrong-arm tacticsā including lease terminations to buy up the Bay Area farmland it needs. Local farmers arenāt happy, and now some of them are taking the matter to court.
Trouble in (planned) paradise, then.
ā” NWSH Take: This project reminds me of the various other pseudo-independent city-states discussed in this newsletter over the years. Thereās Walmart billionaire Marc Loreās Telosa City, for example, a sustainable paradise planned for the Nevada desert. And Praxis, a startup on a mission to build a new Great City somewhere in the Mediterranean, funded by NFTs of the monuments theyāll build in the city once it exists. // Few details have emerged of the way California Forever will be governed. But for a glimpse, we might turn to billionaire backer Marc Andreessenās recent Techno-Optimist Manifesto, which proclaims: āwe believe in ambition, aggression, persistence, relentlessness ā strength.ā Iām thinking libertarian, with a strong emphasis on innovation and startup culture. // Of course, innovation and startups can be great. But they only function in the context of the broader socio-political frameworks that libertarians such as Andreessen repudiate. As with the other charter city projects covered in this newsletter, I canāt help feeling that at the heart of California Forever is a fantasy of permanent escape from politics. Escape, that is, from the messy, awkward business of managing conflict among different interest groups, and enacting trade-offs between different but equally legitimate value systems. This argument with the farmers might be the first public conflict that California Forever has run into, but it wonāt be the last.
šļø Also this week
š¬ Hollywood actress Scarlett Johansson is suing an AI app for cloning her voice and using it in an advert. Johansson says Lisa AI: 90s Yearbook and Avatar used an AI version of her voice without permission. Last week I wrote on the coming wave of legal disputes over AI outputs founded in copyrighted intellectual property, including Universal Music Groupās lawsuit against Anthropic. UMG say Anthropic used their lyrics to help train its AI chatbot Claude.
šØ Tesla drivers say their Full Self-Driving software is failing because the carās cameras are fogging up in cold weather. Back in 2021 Tesla ditched the radar sensors that once formed part of its driver-assistance system, leaving its self-drive features reliant on cameras alone.
š¾ The Pentagon launched a new UFO reporting tool. The secure online form is open only to current or former federal employees, or those with ādirect knowledge of US government programs or activities related to UAP dating back to 1945ā.
šØš³ Researchers from the Chinese microchip company MakeSens say theyāve created a chip that can perform certain AI tasks 3,000 times faster than the Nvidia A100. Writing in the journal Nature, the researchers say the All-Analogue Chip Combining Electronics and Light could soon be used in wearable devices, electric cars or smart factories. The US have restricted sales to China of Nvidiaās leading A100 AI chip, leaving the country scrabbling to bolster domestic production capabilities.
šŖ NASA is locating buried ice on Mars by using a sophisticated new map. The Subsurface Water Ice Mapping project uses images of the planet from several NASA missions, including the 2001 Mars Odyssey satellite. The Agency says subsurface ice can serve as drinking water for the first humans to set foot on the Red Planet.
š A new study says that the Earthās climate is more sensitive to carbon emissions than most scientists believe. Published in the journal Oxford Open Climate Change, the study says a doubling of atmospheric CO2 will cause a 4.8C rise in average global temperatures, and not the 3C rise that current mainstream thinking forecasts.
š¤ Boston Dynamics turned its robot dog, Spot, into a tour guide by integrating it with ChatGPT. Iāve covered the evolution of Spot since the earliest days of this newsletter, and it would seem rude to stop now.
šø Scientists say they added spider DNA to silkworms and it resulted in silk that is stronger than kevlar. The gene-edited silkworms create a silk six times stronger than kevlar, which could one day be used in surgical sutures and armoured vests.
š Humans of Earth
Key metrics to help you keep track of Project Human.
š Global population: 8,070,681,872
š Earths currently needed: 1.8455
šļø 2023 progress bar: 84% complete
š On this day: On 4 November 1847 a Scottish physician, James Young Simpson, discovers the anaesthetic properties of chloroform.
City on the Hill
Thanks for reading this week.
The dream that is a shining City on the Hill ā an example to all the world ā is ancient. And our quest to build such cities in the 21st century is a classic case of new world, same humans.
This newsletter will keep watching, and working to make sense of it all.
Iāll be back next week with another postcard from the new world. Until then, be well,
David.
To Begin
Itās great to be back! Here, as promised, is the first post-break instalment of New Week. What do we have in store?
Nvidia are using an AI agent called Eureka to autonomously train simulated robots in virtual environments.
Meanwhile, research from Pew shows far fewer US adults are following the news; couple that with emerging deepfake technology, and 2024 should make for an interesting Presidential election year.
And a new AI model, 3D-GPT, can turn text prompts into amazing 3D worlds.
Letās get into it.
š¦¾ Robot education
Iāve written often about Nvidiaās Omniverse platform: an AI-fuelled industrial metaverse thatās being used by BMW, for example, to simulate entire factories.
This week, Nvidia showcased Eureka, an autonomous AI agent that can be set loose on simulated robots and train them to perform complex tasks.
Eureka uses GPT-4 to write code that sets the simulated robots specific goals, and starts them on loops of trial-and-error learning. As each robot sets about its task, Eureka gathers feedback and iterates its code, leading to a virtuous circle of better code and faster learning.
Via the agent, simulated robots inside Omniverse have learned to perform over 30 complex physical tasks, including highly dextrous pen manipulation, cube handling, and opening doors:
Nvidia says that trial-and-error learning code generated by Eureka outperforms that created by human experts for over 80% of the tasks studied so far.
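The shape of that loop is easy to sketch. Whatās below is a hedged illustration of the idea, not Nvidiaās implementation: `propose_reward` stands in for the GPT-4 call that writes candidate reward code, `evaluate` stands in for a full simulated training run, and all three function names are my own inventions.

```python
import random


def propose_reward(feedback, rng):
    """Stand-in for the GPT-4 call that writes a candidate reward function.

    Here a 'candidate' is just a number; in Eureka it is executable reward
    code, refined using feedback from the best-performing previous run."""
    base = feedback if feedback is not None else rng.uniform(0, 1)
    return base + rng.uniform(-0.1, 0.2)  # mutate around the previous best


def evaluate(candidate):
    """Stand-in for training a simulated robot with this reward and
    measuring task success (higher is better, capped at 1.0)."""
    return min(candidate, 1.0)


def eureka_loop(generations=10, samples_per_gen=4, seed=0):
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(generations):
        # Sample several candidate rewards per generation...
        candidates = [propose_reward(best, rng) for _ in range(samples_per_gen)]
        # ...train with each, then keep the best performer as feedback
        for candidate in candidates:
            score = evaluate(candidate)
            if score > best_score:
                best, best_score = candidate, score
    return best_score
```

The key design point survives even this toy version: because each generation mutates the best candidate found so far, task performance can only ratchet upwards, with no human in the loop.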
Meanwhile, Amazon this week trialled a humanoid robot called Digit in some of its US warehouses. The company says Digit could āfree upā warehouse staff to perform other tasks.
ā” NWSH Take: Thereās no doubt: the robots are coming. I laughed when Elon Musk announced the Tesla humanoid robot, Optimus, alongside a man dancing in a white spandex suit. Two years on, Optimus is autonomously sorting objects by hand. The pace of development has been insane. // Eureka and AI agents like it, though, have the potential to spark an explosion in robot competence. Teaching robots to navigate physical environments is hard. Now, weāll be able to establish recursive loops of trial, error, and improvement in virtual space ā no human input needed. // What could this competence explosion mean? When it comes to work, look to this weekās Amazon trial. Amazon employs 1.6 million people in its fulfilment centres worldwide, and currently itās deploying the usual line: āthese robots will free up staff, not replace themā. Thatās hard to believe in the long term; a phase of job displacement is coming, and itās going to be painful for many. // Meanwhile, robots will make their way through workplaces and into our homes. Recently I spoke to legendary tech analyst Robert Scoble; he sees a future in which humanoid robots are delivered to homes on-demand by autonomous vehicles to vacuum, empty the dishwasher, and make the coffee. For further thoughts on that future, read Our Coming Robot Utopia.
š° What news
This week, Pew Research gave a fascinating insight into our changing information environment.
A new survey shows that the proportion of US adults who closely follow the news has dropped precipitously across the last few years. Back in 2016, 51% of US adults said they followed the news all or most of the time. By 2022, that number had fallen to 38%.
Remarkably, the decline has taken place across all demographic lines, including age, gender, ethnicity, and voting preference.
ā” NWSH Take: This feels like a big deal. Weāre heading into a US presidential election year. And in 2024 a new set of circumstances is going to pertain. First, deepfakes are set to cause chaos as never before; just see this weekās convincing fake of Greta Thunberg in which she appears to call for āsustainable weaponryā. And now, via this research, we know that far fewer US voters are paying close attention to conventional sources of news. What happens to presidential campaigns in this kind of media environment? Weāre going to find out. // Meanwhile, the long-term structural challenges are clear. Decades ago, the pioneers of Web 2.0 ā Iām looking at you, Zuck ā sold us on the idea that a connected world would mean a world informed and enlightened as never before. It hasnāt turned out that way. In fact, social media has turned many away from news as traditionally defined, and towards unverified gossip and conspiracy theory. The institutions and processes of our democracies evolved to function in symbiosis with an established media that operates under certain standards, and that is the primary source of information for voters. All that is now falling apart. Our democracies ā what they are, how they work ā are going to change. The 2024 presidential elections will be a window on to what is coming.
šŗ Hello world
This newsletter has watched the unfolding generative AI text-to-image revolution closely. But itās always had one eye on another, even more compelling destination: text-to-worlds.
Now, that dream is being realised.
Researchers from the Australian National University, the University of Oxford, and the Beijing Academy of Artificial Intelligence this week showcased a new AI model called 3D-GPT. It generates 3D worlds based on natural language text prompts provided by the user.
According to the research paper, the model deploys multiple AI agents to understand the text prompt and execute 3D modelling inside the open source modelling platform Blender.
See that paper for more on some of the worlds generated, including āA serene winter landscape, with snow covered evergreen trees and a frozen lake reflecting the pale sunlight.ā
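To make the pipeline concrete, hereās a toy sketch of the idea. This is a hedged illustration, not the paperās code: `plan_scene` stands in for the LLM agents that parse the prompt into a structured scene plan (here reduced to naive keyword matching), and the output is Blender Python source that would be executed inside Blender itself, where the `bpy` module is available.

```python
def plan_scene(prompt):
    """Stand-in for the LLM agents that turn a text prompt into a
    structured scene plan; 3D-GPT does this with GPT-backed agents."""
    plan = []
    if "lake" in prompt:
        plan.append({"op": "primitive_plane_add", "location": (0, 0, 0)})
    if "tree" in prompt or "trees" in prompt:
        plan.append({"op": "primitive_cone_add", "location": (2, 1, 1)})
    if "sun" in prompt or "sunlight" in prompt:
        plan.append({"op": "primitive_uv_sphere_add", "location": (0, 0, 8)})
    return plan


def to_blender_script(plan):
    """Render the scene plan as Blender Python source, ready to be
    run inside Blender's scripting environment."""
    lines = ["import bpy"]
    for step in plan:
        lines.append(f"bpy.ops.mesh.{step['op']}(location={step['location']})")
    return "\n".join(lines)


script = to_blender_script(plan_scene(
    "A serene winter landscape, with snow covered evergreen trees "
    "and a frozen lake reflecting the pale sunlight."
))
print(script)
```

The generated script uses real `bpy.ops.mesh` primitive operators; the real system generates far richer modelling code, but the division of labour ā language model plans the scene, Blender executes it ā is the same.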
ā” NWSH Take: 3D-GPT takes its place alongside this prototype text-to-world tool created by Blockade Labs, which I wrote about back in April. Where is all this heading? Weāre still pretty deep in a metaverse winter right now, though there are signs of a thaw; the most obvious being the imminent arrival of the Apple Vision Pro mixed reality headset, which could act for millions as a gateway into more sophisticated virtual environments. While the word metaverse is probably damaged beyond repair, I still believe that immersive virtual worlds will play a role in our shared future. // What weāre talking about with text-to-world models, though, is even more head-spinning. 3D-GPT builds worlds that we look at via a screen. But eventually, weāll be able to create entire, immersive, highly realistic VR worlds simply by describing them. In this way weāll become something akin to sorcerers, able to confect new realities on command. That will transform video gaming and film. It will fuel new art forms and modes of collective expression. And, ultimately, it will change our relationship with reality ā that is, with this reality ā itself.
šļø Also this week
šØ A new anti-AI tool allows artists to prevent AI models such as DALL-E from using their work as training data. Nightshade, dubbed a data poisoning tool, can be attached to creative work; if that work is scraped to train an AI model, Nightshade poisons the training process and corrupts the modelās outputs. Weāre going to see a rising number of disputes between owners of creative IP and the owners of AI models who used that work as training material. See also, this week, Universal Music Groupās lawsuit against Anthropic; UMG say Anthropic unlawfully used its song lyrics to help train the Claude AI chatbot. And now major newspapers, including the New York Times, are seeking payment from OpenAI for use of their content to help train GPT-4.
āļø The International Energy Agency says the global shift towards renewable energy is now āunstoppableā. The Agencyās latest World Energy Outlook report says renewables ā mainly solar and wind ā will provide half the worldās electricity by 2030.
š° NASAās interstellar Voyager probes had a software update beamed to them across a distance of 12 billion miles. The probes launched 46 years ago, on a mission to explore deep space. These updates are bug fixes, intended to stop Voyager 1 sending corrupted data back to mission control, and to stop gunk building up in the thrusters on both probes.
š Elon Musk says he may remove X (formerly Twitter) from the EU in response to new rules that ban the spread of harmful content. The new Digital Services Act is intended to hold social media platforms accountable for fake news, false advertising, and on-platform criminal activity.
š Nvidia and Foxconn say they are partnering to build a number of āAI factoriesā. They will be next-generation data centres that use Nvidiaās AI chips to train the AI models that fuel robots, autonomous vehicles, and generative AI apps.
š¤ The CEO of DeepMind, Demis Hassabis, says the risks posed by AI should be taken as seriously as those posed by climate change. Hassabis called for international regulatory oversight of AI, and said technologists should take inspiration from the International Panel on Climate Change (IPCC).
š¶ A Dutch startup, Spaceborn United, wants to see if itās possible to create human babies in space. The company says that in 2024 it will send a satellite-lab into low Earth orbit and there attempt to conduct in-vitro fertilisation (IVF). CEO Egbert Edelbroek hopes the technology can pave the way for humans to be born in future space colonies.
š³ A British journalist went undercover at Amazon and did not like what he saw. Oobah Butler found that it was possible to list bottles of Amazon delivery driver urine(!) for sale on the platform. He also claims that Amazon uses devious tactics to avoid worker unionisation.
š Humans of Earth
Key metrics to help you keep track of Project Human.
š Global population: 8,069,001,802
š Earths currently needed: 1.8454
šļø 2023 progress bar: 82% complete
š On this day: On 26 October 1977 the last human case of smallpox was diagnosed in Ali Maow Maalin, a hospital cook from Somalia. The WHO and CDC consider this date to mark the eradication of the disease via the smallpox vaccine.
Back Again
Thanks for reading this week.
The emergence of text-to-world AI models ā and the future they promise of new realities on demand ā is dizzying.
This newsletter will keep watching, and working to make sense of it all.
Itās great to be back in your inbox. Thanks for having me. Iāll return, of course, next week.
Until then, be well,
David.
To Begin
Itās a bumper instalment this week. Whatās coming up?
Google showcase their AI arsenal at the annual I/O developers conference.
Meanwhile, new research reveals that the ocean currents may be about to take a weird turn, with disruptive results for the global climate. And a Snapchat influencer launches an AI version of herself to her 1.9 million followers.
Letās go!
š„ The search empire strikes back
This week, Alphabet leaned hard into an AI everywhere, for everyone strategy at its I/O developers conference.
CEO Sundar Pichai announced PaLM 2, an update on the companyās primary large language model. Googleās Bard chatbot is now fuelled by the new model, and has been made available globally with no waitlist.
There was much more. Google execs announced new AI features in Maps, and a powerful new Magic Editor for photos that brings Photoshop-like capabilities into the phone. Pichai said AI around one zillion times, and Google later published a handy summary of all the announcements.
The centrepiece, though, was a demonstration of Googleās plans to weave generative AI through search. In this new search experience AI-generated results take up most of the first screen; users in the US can now access this experimental version of search via Google Labs.
The I/O conference wasnāt the only source of intriguing announcements from Google, though. The company also launched Geospatial Creator, an impressive tool that allows creators to build and publish geolocated AR installations. Essentially, to build a digital object and drop it anywhere on the surface of the Earth.
The tool is powered by the Google Maps platform, and integrated into Adobe Aero and Unity.
ā” NWSH Take: Google researchers invented the transformer models that underpin this generative AI revolution. But across the last two years the tech giant has watched OpenAI steal its thunder. This weekās I/O conference was a statement of intent: weāre taking back control. Competition can only be good for users, many of whom will have gone straight to the new PaLM 2-powered Bard to compare it to ChatGPT. My anecdotal experience is that Bard is faster ā ChatGPT with GPT-4 is a little slow ā but the consensus at the moment is that itās less factually reliable. Meanwhile Google is working on Gemini, a multimodal LLM clearly intended as a GPT-4 killer. The war for supremacy between Alphabet and OpenAI-Microsoft is just getting started. // Geospatial Creator was overshadowed by I/O, which feels fitting for a year in which the metaverse has been comprehensively out-hyped by AI. But the tool is an intriguing glimpse of the emerging unified digital-physical field. Build a digital sculpture from your desk in London, and drop it into a park in SĆ£o Paulo for your subscribers to view. And pretty soon, via text-to-everything models, youāll be able simply to describe that sculpture and watch an AI model build it for you. A couple of years ago I wrote about the ways in which AR will change our relationship with a shared physical reality. I stand by those ideas, but in the age of generative AI that essay needs an update; one will be coming soon.
Climate weirding
Also this week, new research says changes in the ocean currents may soon enweirden the climate of northern Europe.
The Beaufort Gyre moves in a clockwise direction around the western Arctic Ocean, and helps regulate sea ice formation in that region. Scientists have long suspected that climate change is causing changes to the Gyre's movement.
This new paper, Recent State Transition of the Arctic Ocean's Beaufort Gyre, was published in Nature, and makes use of satellite data collected between 2011 and 2019. It provides the first observational confirmation that the Gyre is slowing and has entered a new "quasi-stable state".
This means, say the scientists, that the Gyre may soon expel a massive amount of icy fresh water into the North Atlantic.
And that could spark further ocean current changes that cause the climate in western Europe to become significantly cooler.
NWSH Take: Yes, cooler. I'm no ocean currents expert, and I found this quick explainer on the Beaufort Gyre helpful. Essentially, the Gyre periodically sucks in a ton of icy fresh water and then exhales it, and it's now long overdue an exhale; when that massive exhale comes it could send other ocean currents askew in ways that dramatically cool western Europe. Remember, the Gulf Stream – a major ocean current responsible for several global weather patterns – has slowed by around 16% already; scientists are scrambling to understand how a huge Beaufort Gyre exhale will impact this. // The upshot? One way or another, we're probably about to undergo a climate weirding on a scale that few of us are ready for. While drought and fires rage in some places, a new freeze will break out in others. At the outer edges of this is the risk that the Gulf Stream shuts down entirely, triggering rapid and chaotic climate disruption fuelled by a set of feedback loops. These processes are hugely complex; we'll see much more work such as this attempt to build machine learning-fuelled simulations that give us advance warning of ocean current shifts. Perhaps NVIDIA's massive Earth-2 simulation, now in development, can help.
Hey girlfriend
Regular readers know that virtual companions are a longstanding NWSH obsession. This week, another glimpse of what is coming.
Snapchat influencer Caryn Marjorie, who has 1.9 million followers on the platform, released an AI girlfriend version of herself. Users pay $1 per minute to chat to CarynAI, which the creator says is built on top of GPT-4 and trained on over 2,000 hours of her video and voice content.
Marjorie says the bot made $72,000 in its first week of release. She says it could make around $5 million per month if 20,000 people – roughly 1% of her Snapchat followers – subscribe.
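A quick sanity check on those numbers. The per-subscriber usage figure below is my own inference from the quoted totals, not something Marjorie has stated:

```python
def monthly_revenue(subscribers: int, minutes_per_month: float,
                    price_per_minute: float = 1.0) -> float:
    """Revenue from pay-per-minute chat at a flat per-minute price."""
    return subscribers * minutes_per_month * price_per_minute

# Working backwards from the quoted $5M per month and 20,000 subscribers:
implied_minutes = 5_000_000 / (20_000 * 1.0)  # minutes each subscriber must chat
```

In other words, the $5 million figure assumes each subscriber talks to the bot for around 250 minutes – roughly four hours – every month. That's a striking assumption about how sticky these companions will prove.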
So far, things seem to be going well.
NWSH Take: Back in 2013 I started telling leaders in big corporations that a new age of AI-fuelled conversational agents was coming. That people would even have 'relationships' with these new virtual entities; that it would be something way beyond Siri – their best reference point at the time. Some leaned forward; some raised a sceptical eyebrow. My constant refrain back then? I know it sounds like science fiction, but it's coming. Well, it's here. Virtual companions are set to unlock new manifestations of some of the deepest and most powerful human impulses: social connection, friendship, intimacy. // Observing this truth is not the same as celebrating it. What happens to authentic human connection in a world in which we simulate it – and commodify those simulations – in this way? What harms are we doing to vulnerable people who become attached to, even dependent on, these creations? // The central message still pertains: it's weird but it's happening. In the end I can't help feeling that so much about contemporary living on the internet – the way it atomises our attention, the simulation of human relationships – must push us to finally realise that authentic human being-together is the only sphere of activity invulnerable to technological advance. No machine can be a human, truly seeing you as another human. In the age of the machine, that truth becomes sacred.
Also this week
Microsoft announced a partnership with fusion startup Helion Energy. The deal will see Microsoft buy electricity created by a Helion fusion plant, which is expected to be operational by 2028; Helion says it marks the world's first fusion power purchase agreement between two companies. Microsoft's Azure cloud platform will need vast amounts of compute power – using stupendous amounts of energy – given its commitment to support OpenAI and its commercialisation of ChatGPT. I'll be writing more soon about the emerging symbiotic relationship between energy and AI.
NASA launched two storm-observing satellites, called CubeSats, intended to study tropical cyclones. The pair will form part of a constellation of four identical satellites in low Earth orbit over the planet's tropics, allowing them to pass over any given tropical storm around once per hour.
Pharma company BioNTech is developing an mRNA vaccine against pancreatic cancer. In encouraging early trial results, the vaccine prevented tumour recurrence after surgery in eight of 16 patients.
Startup Anthropic revealed its approach to creating an AI with values. Anthropic's Constitutional AI approach sees it train its AI assistant, Claude, on a set of initial principles drawn from various sources, including the United Nations Declaration of Human Rights. The AI then applies these principles itself to help it choose the most ethical response. This is in contrast to the approach used by OpenAI and Google, which sees human moderators train the AI to avoid toxic outputs.
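The critique-and-revise idea at the core of this can be sketched in a few lines. This is a toy illustration only – the `critique_llm` and `revise_llm` callables stand in for real model calls, and the full Constitutional AI method also feeds the revised outputs back into fine-tuning and reinforcement learning:

```python
def constitutional_revise(draft, principles, critique_llm, revise_llm):
    """One critique-and-revise pass: for each principle, ask the model to
    critique the current response; if the critique finds a problem, ask
    the model to rewrite the response in light of it."""
    response = draft
    for principle in principles:
        critique = critique_llm(principle, response)
        if critique:  # an empty critique means no violation was found
            response = revise_llm(principle, critique, response)
    return response
```

The key design choice: the principles live in the prompt pipeline, not in a wall of human-labelled examples, which is why Anthropic can publish its "constitution" as a readable document.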
Wind is now the single largest source of electricity in the UK. In the first quarter of this year wind turbines accounted for one third of all electricity used in the country. It marks the first time wind has generated more of the country's power than gas. The UK wants its entire electricity use to be emissions free by 2035.
California-based startup Vast Space says it will launch the first commercial space station. The startup says it will launch the first part of the station, an outpost called Haven-1, on a SpaceX rocket in 2025. Vast Space eventually wants to grow the station into a 100-metre-long multi-module structure that spins to create onboard artificial gravity.
Humans of Earth
Key metrics to help you keep track of Project Human.
Global population: 8,032,723,422
Earths currently needed: 1.8036642811
Global population vaccinated: 64.4%
2023 progress bar: 36% complete
On this day: On 13 May 1950 the inaugural Formula One World Championship race takes place at the Silverstone Circuit in England.
My Generation
Thanks for reading this week.
Online search revolutionised our relationship with knowledge. Now, generative AI is set to enact yet more change. It's another case of new world, same humans.
This newsletter will keep watching, and working to make sense of it all. And there's one thing you can do to help: share!
Now you've reached the end of this week's instalment, why not forward the email to someone who'd also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I'll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
To Begin
The generative AI rollercoaster is thundering forward at increasing speed.
This week, researchers at Stanford and Google use an enhanced large language model to create simulated people that can remember, plan, and talk to one another in pursuit of their long-term goals.
Also, a new report says we've reached a landmark moment for the global energy system. And amid rumours of financial difficulty, Stability AI releases a new text-to-image model for enterprise users; it's capable of amazing photorealism.
Let's go.
Welcome to SimGPT
This week the generative AI talk orbited around autonomous agents. That is, AI systems that can act autonomously in pursuit of pre-defined goals.
Researchers at Stanford and Google explained how they used a large language model (LLM) to create 25 simulated people, who were then set loose inside a virtual town called Smallville.
To create these sims, the researchers hooked up their LLM to an architecture that allows each AI agent to store memories of its past experiences, and then to access relevant memories and use them to plan new actions. Each agent was imbued with its own persona, for example: "John Lin is a pharmacy shopkeeper at the Willow Market and Pharmacy who loves to help people. He is always looking for ways to make the process of getting medication easier for his customers."
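The paper's retrieval mechanism is simple enough to sketch. The toy version below is my own illustration, not the authors' code: like the real system it scores each memory on recency, importance, and relevance, but it uses crude word overlap for relevance where the paper uses embedding similarity, and the class and method names are invented:

```python
class Memory:
    """One record in an agent's memory stream."""
    def __init__(self, text, importance, timestamp):
        self.text = text                # natural-language description of the event
        self.importance = importance    # 1-10, scored once when the memory is laid down
        self.timestamp = timestamp
        self.last_access = timestamp

class MemoryStream:
    """Store every experience; retrieve by recency + importance + relevance."""
    def __init__(self, decay=0.995):
        self.memories = []
        self.decay = decay

    def record(self, text, importance, timestamp):
        self.memories.append(Memory(text, importance, timestamp))

    def retrieve(self, query_words, now, k=3):
        def score(m):
            recency = self.decay ** (now - m.last_access)
            relevance = len(query_words & set(m.text.lower().split()))
            return recency + m.importance / 10 + relevance
        ranked = sorted(self.memories, key=score, reverse=True)
        for m in ranked[:k]:
            m.last_access = now  # retrieved memories become "fresh" again
        return [m.text for m in ranked[:k]]
```

Each agent's next action is then planned by prompting the LLM with its persona plus the handful of memories this retrieval step surfaces.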
The results, say the researchers, were "believable individual and emergent social behaviours" that saw Smallville become a bustling little town full of autonomous chit-chat, group activities, and trips to the local café.
"…for example, starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time."
All this comes amid vast excitement on Twitter this week over the rise of useful generative agents.
As with the above research, these innovations – which include AutoGPT and BabyAGI – leverage an architecture that enhances ChatGPT by allowing it to lay down and then access a stream of past actions, or "memories". Combine that with a plugin that enables ChatGPT to browse the web, and the result is a system that can take an initial goal, get started online, and then prompt and re-prompt itself until it's finished.
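Strip away the browsing and the plugins, and the loop at the heart of these agents is strikingly small. Here's a minimal sketch – the `llm` callable stands in for a real model call, and real agents add tool selection, memory retrieval, and error handling on top:

```python
def run_agent(goal, llm, max_steps=10):
    """Minimal self-prompting loop: feed the goal plus the history of past
    actions back into the model until it declares the task finished."""
    history = []
    for _ in range(max_steps):
        prompt = f"Goal: {goal}\nActions so far: {history}\nNext action?"
        action = llm(prompt)
        history.append(action)
        if action == "DONE":
            break
    return history
```

The `max_steps` cap matters: without it, an agent that never converges on "DONE" will happily burn API credits forever.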
NWSH Take: If SimCity or The Sims formed a part of your childhood, then this new research is hypnotic. It makes clear that LLMs will enable us to simulate goal-directed people, and watch as complex social and behavioural dynamics unfold. Imagine video games populated with these simulated humans (hello, Electronic Arts). But we'll also see the rise of new art forms – a mixture of game and movie – built around them. And then will come the ability to simulate large populations, allowing for new insight on collective phenomena such as voting behaviours, the spread of disinformation, and the evolution of the economy. This new age of AI-fuelled simulation – which I've been writing about for a while – is emphatically here. // Meanwhile, AI agents such as AutoGPT promise to elevate the usefulness of generative models for millions of individual users. It's already clear that for most people using LLMs won't be about sitting at the prompt line and figuring out great prompts. Instead, wrappers such as this one – which puts AutoGPT-like powers directly into your browser – will allow users to set a goal and then let the LLM iterate its own way to a useful output. Give it a try; something hugely powerful is happening.
New power generation
Also this week, news of a landmark moment for energy.
A new report from independent energy think tank Ember says solar and wind accounted for a record 12% of global electricity generation in 2022. That's up from 10% in 2021. The increase in wind generation alone in 2022 was the equivalent of the entire annual electricity demand of the UK.
What's more, says the report, it's likely that 2023 will see electricity generation via fossil fuels – mainly coal and natural gas – hit its peak.
The research team analysed data from 78 countries, representing 93% of global electricity demand.
NWSH Take: Around two-thirds of the world's electricity is generated by burning fossil fuels. But the transition to solar and wind is now reaching a blistering pace, thanks largely to exponentially falling costs. In 1956 the cost of one watt of solar capacity was $1,825; now it can be as little as $0.72. // If Ember are right, we'll soon start generating more electricity while burning fewer fossil fuels: power up, emissions down. That aligns with the International Energy Agency's most recent and broader forecast; they now have global demand for fossil fuels – via electricity generation or any other use – peaking or plateauing under all their future scenarios, even without any shift in current government policies. // We're approaching, then, a historic turning point: the decoupling of economic growth and fossil fuels for the first time since the Industrial Revolution. It's becoming possible to imagine a world of endless, near-zero cost clean electricity. A world of clean energy abundance. What will that make possible?
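Those two price points imply a remarkably steady compound decline. A back-of-envelope calculation, assuming the $0.72 figure is roughly current (I've used 2022 as the endpoint; shifting it a year or two barely changes the answer):

```python
def implied_annual_decline(start_cost: float, end_cost: float, years: int) -> float:
    """Average annual fractional fall in cost implied by two price points."""
    return 1 - (end_cost / start_cost) ** (1 / years)

# $1,825 per watt in 1956 vs $0.72 per watt in 2022: 66 years of decline
rate = implied_annual_decline(1825.0, 0.72, 2022 - 1956)
```

That works out to costs falling by roughly 11% a year, every year, for two-thirds of a century: the learning-curve dynamic that makes solar's continued fall hard to bet against.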
Get real
Do you want to look at some amazing AI-generated images? Yes, of course you do.
This week Stability AI released Stable Diffusion XL, a text-to-image model aimed at enterprise users. The model is an advance on Stable Diffusion 2.1, and excels at ultra-photorealism.
The move comes amid reports that Stability AI is struggling with huge server and talent costs. These reports suggest the company is seeking a new round of funding, and that investors are wary given the current revenue. CEO Emad Mostaque has not commented.
NWSH Take: A few quick thoughts. The images are stunning; that's obvious. At this point we've pretty much entirely scrambled the role that the photograph once played in our culture as a form of proof or marker of veracity. In the wake of puffa coat Pope, I've already developed a new reflexive habit: is this real or AI? // As for the rumours about Stability AI, they amount to: "AI startup experiencing rocket ship growth is struggling to figure out revenue and is a chaotic place to work". Nothing too surprising. Whatever storms the company is experiencing, I hope it can weather them; Mostaque's vision of AI for the people is a necessary counterweight to the closed model being operated by (the misleadingly named) OpenAI and others.
Also this week
The CCP has issued new rules on the training and outputs of generative AI models. Draft rules from the Cyberspace Administration of China say the outputs of those models must reflect the core values of socialism and not undermine the power of the state. This came as Chinese tech giant Alibaba announced plans to roll out its LLM rival to ChatGPT, Tongyi Qianwen, across all its products.
The US government is also looking to establish new regulations around AI. The National Telecommunications and Information Administration is asking for feedback from the public and experts from industry and academia, and wants to establish "guardrails" to ensure AI is safe, transparent, and as unbiased as possible.
The Boston Dynamics robodog will patrol the streets of NYC on behalf of the New York Police Department. The NYPD experimented with Spot the Dog in 2021 and faced criticism from civil rights organisations. Now the new mayor, Eric Adams, is bringing Spot back.
Ford says it will spend $1.3 billion to convert its 70-year-old factory in Oakville, Canada, into an assembly plant for electric vehicles. The auto giant says it wants the production capacity to sell 2 million EVs a year worldwide by 2026.
Ghana became the first country to approve a "game-changing" malaria vaccine. Trial data indicates the R21 vaccine was up to 80% effective when given in three doses plus a booster after one year. Malaria kills around 600,000 people each year, many of them children.
China says it will build a permanent base on the Moon using bricks made from Moon dust. The South China Morning Post reported that officials say building will start in 2028. Back in October I wrote on how NASA is preparing for potential geopolitical tensions arising out of multiple Moon missions by the US and China.
Four volunteer test subjects will spend a year locked in a simulated Martian environment as part of NASA research for a mission to Mars. The 3D-printed structure is situated in a warehouse at the Johnson Space Center in Texas, and is intended to simulate a future NASA base on Mars. The volunteers will grow their own food, conduct experiments, and exercise.
Humans of Earth
Key metrics to help you keep track of Project Human.
Global population: 8,027,591,427
Earths currently needed: 1.8019217768
Global population vaccinated: 64.3%
2023 progress bar: 28% complete
On this day: On 15 April 1755 Samuel Johnson's A Dictionary of the English Language is published in London.
Designs for Life
Thanks for reading this week.
We humans have always been obsessed with our own reflection. And now we have a new way to study it: by using LLMs to create simulated humans that chat to one another, organise parties, and visit the local shops. It's yet another case of new world, same humans.
To Begin
I put the newsletter on pause last week; did I miss anything?
Given what is unfolding right now, it's hard to make this newsletter anything other than a generative AI revolution update. I don't want to stoke the hype yet further, but I've never seen anything quite like this.
Given all that, this week we'll dive into a high-profile petition to pause work on new generative models. Also, we'll look at the new hyperreality taking shape around us via Midjourney and its community of inventive users.
But it's not all AI; there's also an intriguing new report on global population change from the Club of Rome.
Let's get into it.
For the people
This week, another generative AI story that pushes 2023 deeper into the realms of what until recently seemed possible only in science fiction.
It's not yet another platform, plugin, or viral image (more on those below), but a call to slow down. Over 1,000 technology leaders signed a petition demanding a pause of at least six months on the training of AI systems more powerful than GPT-4.
Signatories included Elon Musk, Yuval Noah Harari, Stability AI's Emad Mostaque, and Apple co-founder Steve Wozniak. And their language was pretty apocalyptic:
According to the authors of the petition: "recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control."
The scale of their concern was lent support by a research paper published last week. It saw Microsoft researchers report that GPT-4 shows "sparks of AGI". The model, they point out, shows high-level competence across mathematics, coding, vision, medicine, law, and psychology, and can solve novel problems in those domains without any need for special instructions: "in all of these tasks, GPT-4's performance is strikingly close to human-level performance."
Meanwhile, the iconic and ever-controversial AI safety expert Eliezer Yudkowsky went full pelt in Time magazine. He didn't sign the petition, he says, because it doesn't go far enough:
All training of large models, says Yudkowsky, needs to be shut down indefinitely and worldwide. He says the governments of Earth must come together in a concerted effort to stop an AI-fuelled human extinction.
NWSH Take: Pretty intense, right? Yudkowsky is, and has long been, an outlier on all this. Meanwhile, others say this week's petition signatories have fallen prey to OpenAI's apocalypse marketing: a plan to get everyone scared and then sell subscriptions. // For my part, I don't think AI annihilation is imminent; nor do I think these fears are founded only in hype. GPT-4's competence across all kinds of reasoning tasks is insane. And for all the reams of coverage (guilty), I don't think we're anywhere near processing the implications. It no longer seems far-fetched that an AI model could start behaving in strange and uncontrollable ways in the near term. It's emphatically time to get serious about alignment. // Alignment is first a technical problem: how do we make sure AIs only do what we want them to do? After this, though, it becomes a political problem. Whose values should we align our AIs with? Those of Californian tech bros? // We can't put the AI genie back in the bottle, and in practice a global "pause" is highly unlikely. That means the only answer here is to speed up research on the technical challenge of alignment, and to allow a plurality of AIs, empowering different peoples and communities to live and create according to their own value systems. That is real alignment. To that end, check out open source AI group LAION's petition for a new internationally funded supercomputer to train open source foundation models.
Growth mindset
Also this week, a huge if true forecast on the future of the human population.
A new study commissioned by the Club of Rome forecasts that if current trends continue then the global population will hit 8.8 billion in around 2050, before declining rapidly to 7.8 billion by the end of the century.
The study, conducted by think tank Earth4All, also games out a scenario in which governments invest in policies known to curtail population growth, such as education and social services. Here, population peaks at 8.5 billion in around 2040 and falls to 6 billion by 2100.
Both projections are far below last year's UN Population Prospects forecast, which had population peaking at 10.4 billion in the 2080s.
The Club of Rome is best known for the now (in)famous 1972 report The Limits to Growth, which warned of impending environmental crisis and social breakdown due, in part, to strains imposed by overpopulation.
The report came amid a wave of neo-Malthusian anxiety in the decades after WWII. A 1968 book called The Population Bomb – which influenced the thinking of the Club of Rome – raised the spectre of hundreds of millions of people starving to death as population growth exceeded food supply.
NWSH Take: The original Limits to Growth report is today the subject of fierce disagreement. Critics say the Club gave voice to unfounded fears motivated by an ideological distaste for modernity. Proponents point out that the report offered a number of different scenarios, and that the growth-induced systemic breakdown it envisioned may yet eventuate. // This new statement on population could end up being just as contested. The Club now accept that their population bomb won't go off. And they celebrate their finding that population is set to peak sooner and lower than the UN expects – stressing that it's good news for the environment. Meanwhile, though, a niche but growing school of thought says that population collapse is the real crisis coming down the track; rapidly shrinking and ageing populations, runs this line, will kill productivity and threaten economic collapse. // Where does the truth lie? Most mainstream demographers say population collapse isn't on the cards, and that ageing populations don't have to mean economic calamity. Meanwhile, it's not overpopulation but intense patterns of high and damaging consumption in the rich world that are the primary drivers of climate change. As ever with demography, it seems the truth lies between the extremes.
Real life
Version five of the text-to-image tool Midjourney was released two weeks ago. And this week, users went wild.
On Reddit, Midjourney enthusiasts started sharing photorealistic, news report-style images of historical events – such as 2001's devastating Great Cascadia earthquake in Oregon.
The truth, of course, is that no such event took place; this is all fictitious – an AI-fuelled experiment in alternative history.
Meanwhile, Chinese users of the tool are creating pseudo-documentary images of the southwestern city of Chongqing in the 1990s.
All this comes days after the first truly viral AI-generated image: of Pope Francis in a white puffer coat.
NWSH Take: In his 1981 book Simulacra and Simulation, the French philosopher Jean Baudrillard wrote about hyperreality: the emergence of a media environment in which the boundaries between the real and our representations of the real become ever more blurred. Digital media massively amplified that phenomenon. All of us recognise the feeling, today, of living inside a tech-fuelled hall of mirrors in which the difference between image and reality is hard to discern, or even meaningless. // What can be said? That was before this generative AI revolution and tools such as Midjourney, which are now achieving photorealism that is impossible to distinguish from the real thing. These AI-generated pseudo-photos are perfect representations of representations; signs that point only to other signs – exactly the phenomenon that Baudrillard put at the heart of his theory. They make possible a whole new level of alternate history; a convincing mass media documentation of events that never took place. There's going to be so, so much more of this.
Also this week
This Twitter user made an AI-fuelled virtual companion by hooking ChatGPT to a cute but grumpy holographic rabbit avatar. It's just one signal of how the generative AI revolution will unleash a tsunami of virtual companions; the rise of this trend is a longstanding NWSH obsession.
Direct brain interface startup Neuralink is searching for a partner to help it run clinical trials on humans. In 2022 the FDA rejected Neuralink's application to start human trials; the company has since been working to address the safety concerns that were raised.
A Swiss startup is working on a hydrogen-powered jet that it says will cut flights from Europe to Australia to four hours. Destinus has been testing prototypes for two years, and is now partnering with Spain's Ministry of Science. It currently takes around 20 hours to fly from Europe to Australia.
A new report says ChatGPT could impact 300 million full-time jobs across the globe. The report by Goldman Sachs economists says the technology is "a major advancement with potentially large macroeconomic effects." But most jobs, they say, will be complemented by AI rather than replaced entirely.
Chinese ecommerce titan Alibaba is planning to break itself up. The company says it will split into six business units, some of which may be listed or sold. The announcement seems intended to placate the CCP, which across the last three years has moved aggressively to diminish the power of domestic tech giants.
Disney has reportedly fired its entire metaverse division. Last year the entertainment giant called the metaverse "the next great storytelling frontier" and announced plans to bring blended digital-physical experiences to its parks. The company has recently been under pressure from investors to cut costs.
This just in as the newsletter goes to press: the Italian government has banned ChatGPT, citing concerns over data privacy breaches. The Italian Data Protection Authority says the move is temporary and will be revoked "when ChatGPT respects privacy". OpenAI CEO Sam Altman says the company "defers to the Italian government", but believes it has followed all relevant privacy laws.
Humans of Earth
Key metrics to help you keep track of Project Human.
Global population: 8,025,029,075
Earths currently needed: 1.8010517836
Global population vaccinated: 64.4%
2023 progress bar: 25% complete
On this day: On 1 April 1976 Steve Wozniak and Steve Jobs found Apple Computer in California.
Speed Warning
Thanks for reading this week.
The ever-more urgent quest to conform machine intelligence to our values is yet another classic case of new world, same humans.
I'll keep watching. And there's one thing you can do to help: share!
At the start of the year I promised the return of short notes. Here's the first: a meditation on the ChatGPT moment we're living through right now.
To avoid claims of false advertising: this one is more of an essay than a note.
If you'd rather listen than read, just scroll up and hit play. But enough preamble; let's get into it.
The generative AI hype train is thundering forwards right now, and ChatGPT – which was released in November – was the fuel that accelerated it to its current speed.
On the face of it, that's a bit odd. The underlying model, GPT-3, was made public almost two years earlier. Why the big noise now?
ChatGPT uses an enhanced version of that model, and so produces better outputs. But my contention is that it's the chat element – that is, the conversational nature of the tool – that's responsible for ChatGPT's colonisation of the zeitgeist. People love the back-and-forth quality of interacting with this thing.
I'm interested in this, and the reasons for it. Because it seems to me that a quest to understand ChatGPT's seductive conversational power can help us commune with a deep but under-appreciated truth about human thought.
A truth that leads us, in turn, to some conclusions on our future relationship with machine intelligence.
*
In a seminal 1998 research paper, the philosophers Andy Clark and David Chalmers introduced an idea they called the extended mind thesis (EMT).
The EMT says that mind is best understood as a set of cognitive processes that extend beyond our brains and into the external world. Consider, for example, a person using a notebook and pen to help perform a series of simple calculations. The notebook and pen are, say Clark and Chalmers, just as much a part of the cognitive processes at work here as the person's brain. The notebook, for example, is acting as a kind of external memory bank.
It's arbitrary, then, according to the EMT, to say that mind is happening in the brain but not in the notebook; instead, the brain, pen, and notebook are part of one big cognitive system, and we can best understand that system as mind.
It was an arresting argument, and it's proven an influential one. What's more, 25 years on we citizens of the internet have been delivered into a relationship with technology that makes tangible the strengths of this idea.
I'm talking, here, about our relationships with our phones.
I tend to do my deepest thinking when I'm out for a walk. Often, I'll reach for some half-remembered fact, person, or quote that I need to continue my train of thought, find that I can't recall it, and then go to my phone to look it up. My phone, here, or perhaps more properly the internet itself, is acting as a kind of extension of my own memory – one containing pretty much all the knowledge in human history that can be encoded as words or pictures. And the whole process is so seamless – think, encounter block, look it up, keep thinking – that the phone really does feel a natural extension of my mind. When I forget my phone, the feeling is one of my thought process being constantly interrupted. At its most acute it feels as though a part of me is missing.
ChatGPT offers users the same kind of feeling. The feeling, that is, of having your mind extended beyond the confines of your skull. Itās perhaps the first technology since the iPhone to offer that experience in a compelling new way. That truth, surely, has helped drive the excitement over the last three months.
But the current ChatGPT moment is not driven only by the feeling that the tool allows for mind extension. Thereās also the feeling that the mind extension happening is a sudden and dramatic evolution of anything weāve experienced before via notebook, calculator, or the phone as portal to the internet. Thereās a widespread feeling out there that ChatGPT is an early signal of a revolution of era-defining consequence ā even though, in truth, we havenāt yet seen the use cases, or the impact on the economy, to justify that belief.
Why is this? Why does ChatGPT feel such a big deal?
The answer Iām fermenting: itās because ChatGPT taps into, in a way even the phone does not, a deep truth about human thought. That is, its fundamentally dialogic, or conversational, nature.
The idea that underpins this is simple: itās that when we think, we talk to ourselves. What you call your āinternal monologueā is really a dialogue conducted by one person. Someone is talking (internally, not aloud) and someone is listening and will then reply, and those people are both you.
*
The idea that human thought is fundamentally dialogic has a long history, which passes through the 20th-century Russian philosopher and literary critic Mikhail Bakhtin.
Bakhtin said that language is primordially a social instrument: a process that evolved out of games of call and response conducted by two or more parties. And because language is the substrate that makes symbolic meaning and the higher forms of thought possible, that means thought, too, is fundamentally dialogic in nature.
For we moderns this is a revolutionary idea. We tend to believe that thought, in its purest sense, is something that happens inside the mind of a single individual.
Bakhtin, and others since who've played with the idea of dialogic thought, invert this belief. They say that thought in its purest sense happens not inside the mind of one person but between groups of people; that is, between collections of minds. Under this view the extended mind thesis applies not only to the way individual minds can be extended by tools, but also, and primarily, to the way all our minds are necessarily extended by other minds. Indeed, under this view mind itself is best understood as a phenomenon that emerges between us, rather than inside any one of us individually.
It's notable that the earliest works of philosophy in the western tradition seem to acknowledge the dialogic nature of thought. Socrates gathers others around him and together they engage in a process of back-and-forth reasoning that is, he tells them, the path towards enlightenment. The Socratic method taps deep into the idea that thought is primordially a social phenomenon.
Via a complex psychospiritual process entangled with the evolution of the Enlightenment self, we lost touch with that truth. Instead, we came to see thought as, foremost, an inner and private unfolding. But in losing touch with the primacy of social thought, we also lost touch with another truth. Yes, thought conducted silently by one person is private and inner; but because it relies on the dialogic tool that is language, it too carries a fundamentally dialogic nature. When we think, we talk to ourselves.
We might say that this strange ability to split the self – so that we can at once talk and listen to ourselves talk – is consciousness. That is to say, it is the state of self-awareness that only we among Earth's creatures seem to possess in its highest form. The idea that language in some deep sense is human consciousness, that it creates the human mode of being in the world, is one I explore in depth in the ongoing essay series The Worlds to Come.
*
I've argued for the idea that thought – that consciousness itself – is in some deep sense dialogic. What does all this have to do with ChatGPT?
I hope the superficial connection is clear: in ChatGPT, we have an instrument that can externalise and amplify the internal dialogue that constitutes thought.
As we've seen, we've always had access to entities that can externalise our inner dialogue: other people. But other people are beings with their own cognitive and social agency. They have personhood. ChatGPT, by contrast, is not a person; it is a tool.
It's this dual quality that is new and special about ChatGPT: it allows for the externalisation of the dialogic essence of my private thought, while being a tool that is best understood as an extension of me, rather than a person best understood as essentially an other.
In this way, ChatGPT offers a radically new form of mind extension. The excitement around it points to a submerged awareness among its users that this tool is more than just another useful app for summarising documents or searching for information. We see in it, instead, the beginnings of a new way of doing thought. A way of externalising, and drawing out, an essential feature of our interior lives.
Right now, ChatGPT enacts a highly imperfect version of this promise. While the quality of its responses is a great advance on anything we've seen before, it's still prone to factual errors, occasional nonsense, and responses that are not wrong but in some way off, or just bland. But all this will be improved via larger models that are better able to retrieve factual information and cope with context and nuance. It's the glimpse of what is ahead that has proven so exciting – even shocking.
Pretty soon, there will be a proliferation of such models. We'll all be able to customise our own, so that it knows our tastes, preferences, and cognitive styles.
These models, trained as they are on an appreciable amount of all the text in existence, are a strange new instantiation of our shared linguistic inheritance. It's as though we've created a human hivemind and given it a voice, such that we're now able to talk to it at will. When we think, we talk to ourselves: that truth is now manifest in a whole new way.
Eventually, having a personal large language model (LLM) – a virtual conversational companion in your pocket 24/7 – will be no more remarkable than having a phone. When that time comes, in what ways will our thinking be amplified? In what ways will the nature and modes of our thinking change? And we must also ask: how might these models, which reflect back to us our own assumptions and prejudices, limit our thinking, or act to push us away from ideas and perspectives that lie outside the mainstream?
*
Those questions are valuable because when we ask them, we're approaching a more accurate, and ultimately more fruitful, relationship with machine intelligence.
Contrary to much of the hype and/or panic circulating at the moment, ChatGPT and other language models aren't going to render higher forms of human thought or creativity obsolete. They're not simply going to write our books for us, do our philosophy, tell us the answer. These models can't think creatively in the commonly understood sense of that phrase, because they're not conscious beings responding to a lived experience of the world. They are, rather, stochastic parrots playing a high-level game of word association. It's just that when they play that game well enough, and effectively simulate a human interlocutor, they're able to amplify our thinking such that we arrive at cognitive destinations faster than we would have otherwise, or arrive at destinations that we would never have reached at all.
In short, we need to understand that what's most exciting about these models is not what we will get straight from them; it's what they will help us get from ourselves. And they'll help us most effectively, of course, if we bring our own powers of creativity and critical reflection to the party.
If you haven't experienced this aspect of ChatGPT, give it a try. Choose an idea, argument, or line of thinking, articulate it to the chatbot, and then go back and forth, picking up on aspects of its responses that you find interesting, asking it to develop them, and then responding in turn. Don't forget to challenge the assumptions that start to become apparent in ChatGPT's responses, and ask yourself what it's missing. Do that for five minutes, and see where you get. At its best, it can feel like the cognitive equivalent of driving a car instead of walking.
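For the technically minded, that back-and-forth ritual is, at bottom, just a loop over a growing transcript. Here's a minimal sketch in Python; the `ask` function is a stand-in I've invented for illustration (in real use it would be a call to a chat model's API), so only the shape of the loop, not the model logic, is the point:

```python
# A minimal sketch of the dialogic loop described above.
# `ask` is a placeholder for any chat-model API call; here it returns
# a canned reply so the example is self-contained and runnable.

def ask(history):
    """Stand-in for a chat model: takes the conversation so far,
    returns the model's next reply."""
    last = history[-1]["content"]
    return f"One assumption worth questioning in '{last}' is its framing."

def dialogue(opening_thought, rounds=3):
    """Run a short back-and-forth: state an idea, read the reply,
    push back, repeat."""
    history = [{"role": "user", "content": opening_thought}]
    for _ in range(rounds):
        reply = ask(history)
        history.append({"role": "assistant", "content": reply})
        # In real use, this follow-up is where you challenge the
        # model's assumptions and ask what it's missing.
        follow_up = f"Develop that further: {reply}"
        history.append({"role": "user", "content": follow_up})
    return history

transcript = dialogue("Thought is fundamentally dialogic.")
```

The design choice worth noticing: the whole transcript, not just the latest message, is passed back in on every turn. That accumulating context is what makes the exchange feel like one continuous train of thought rather than a series of one-off queries.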
For my part, this kind of conversation is already becoming commonplace. I can feel the seeds of a new habit taking root: I'll just take this to ChatGPT. And I've started to wonder: how long until I come to feel the same about this tool as I do about my phone? How long until the ability to take a train of thought to ChatGPT is so expected, so natural, that when I don't have access to the tool I feel as though my thought process has been interrupted? And how long until many others feel the same?
What I'm envisioning is a near future in which this ability to commune with the human hivemind, as made manifest by an LLM, comes to seem a natural part of thought. Yes, we're a long way from that right now. But it feels as though we're taking the first steps towards a new and powerful kind of augmentation.
*
At the outer edges of all this I wonder: is this the beginning of the long process of human-technological convergence that transhumanists (think Ray Kurzweil) tell us is inevitable? A process that sees us humans, or at least some of us, become something else?
I'm not one of those who views the post-human future with unalloyed enthusiasm. But via generative models and other technologies – including brain implants and techniques of genetic manipulation – I'm increasingly persuaded that some kind of Great Divergence is coming, in which we homo sapiens branch off from one another and become various different kinds of (post)humans.
Certainly, the possibility that we may not all be the same humans for much longer haunts the borders of this newsletter. It increasingly seems to me that our convergence with the technologies we're building, and the almost impossible task of making any practical or moral sense of it, is the most important shared challenge we face.
In that case, the project of the age is to begin, at least, to figure out where we stand. Perhaps we can take it to ChatGPT.
Go Chat
Thanks for reading this essay from New World Same Humans.
Now that you've reached the end, why not take a second to forward this essay to one person – a friend, family member or colleague – who'd also find it valuable? Or share it across one of your social networks, and let people know why it's worth their time. Just hit the share button!
I'll be back later this week as usual; until then, be well.
David.
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you're reading this and haven't yet subscribed, join 24,000+ curious souls on a journey to build a better future šš®
To Begin
It's another packed instalment of New Week. What do we have in store?
This week, tech giants from Microsoft to Snap put their arms around a trend that has long been a NWSH obsession.
Meanwhile, Tesla and OpenAI outline far-reaching manifestos: what does it mean when corporations incubate the kinds of social and political policies we usually associate with government? And a new US startup, Figure, offers a first glimpse of its humanoid robot.
Let's get into it.
š Talk to me
This week, a constellation of signals points to the mainstreaming of a trend long in the making. I'm talking, here, about virtual companions.
Snap launched My AI, a ChatGPT-fuelled conversational agent, inside their app. The feature is intended to serve as a general-purpose chatbot; Snap say it might plan a hiking trip for a long weekend, or suggest a recipe for dinner. Or even serve the user's love of poetry:
Meanwhile, Microsoft launched a new feature that "changes the personality" of the AI chatbot inside its Bing search engine.
Users can toggle between three options: creative, balanced, and precise, depending on the type of answers they want from the bot.
And in a Facebook update, Mark Zuckerberg said Meta are working on "AI personas that can help people in a variety of ways" for Instagram, Messenger, and WhatsApp:
"We're exploring experiences with text (like chat in WhatsApp and Messenger), with images (like creative Instagram filters and ad formats), and with video and multi-modal experiences. We have a lot of foundational work to do before getting to the really futuristic experiences, but I'm excited about all of the new things we'll build along the way."
ā” NWSH Take: It's happening: the mainstreaming of a trend that I've been tracking for years. I started speaking about virtual companions in the early 2010s; back then, the idea that millions would one day see AI-fuelled entities as counsellors, companions, or even friends seemed, to many, outlandish at best. Via generative AI, that conversation has been transformed. Bing's tentative approach to creating AI personalities marks the beginning of a long collision between virtual companions and search. And it won't be long – if Snap is anything to go by – until a personalised, conversational, poem-writing virtual companion is a part of every social platform. // But this is just the start. For a glimpse of what's coming, check out the conversational generative AI platform Character.ai, where users are creating chatbots based on their favourite fictional or historical characters. Or the AI companion app Replika – thousands of users recently complained that an update had destroyed their AI romantic partner. // Virtual companions – a counsellor, friend, and philosopher in your pocket 24/7 – are heading for the everyday lives of billions. It's an innovation that may prove as transformative as the car, or the iPhone.
š Policy agenda
Elon Musk took to the stage at a Tesla investor event this week to unveil the long-awaited third part of the company's Master Plan.
At the core of this part, it turned out, was nothing less than a Grand Unified Theory (GUT) of the planetary transition to sustainable energy. That GUT, according to Musk: electrify the grid, make all road vehicles electric, install a heat pump in every home, move towards green hydrogen, and build electric boats and planes.
The global shift, said Musk, will need investment of $10 trillion. And he says Tesla can play a part in every step.
Investors were reportedly disappointed; they'd hoped for more detail on Tesla's product roadmap. Shares fell 5% in the wake of the event.
All this came in the same week that the IEA confirmed that CO2 emissions hit a record high – albeit lower than expected – last year.
Still, it was another aspect of all this that caught my eye:
ā” NWSH Take: There's no denying the Master Plan Part 3 was vague; a high-level, wouldn't-it-be-great-if march through the journey to decarbonisation. Still, see it alongside another much-vaunted corporate statement this week, and a pattern starts to emerge. I'm talking, here, about OpenAI's Planning for AGI and beyond. It's an amazing document, making clear that OpenAI will cancel its commitments to equity shareholders if it deems it necessary, and may in future fund "the world's most comprehensive UBI experiment". In other words: we know our AGI might break capitalism, and we're figuring out some answers. // In Tesla and OpenAI's statements this week, then, we glimpse a truth. In ever-more acute ways, our governments simply can't process the technology revolution we're living through. Instead, it's falling to technology companies to articulate the sociopolitical arrangements that will shape our future. On the one hand, it's welcome that OpenAI's Sam Altman seems to take this responsibility seriously; he's talked endlessly in recent months about releasing AI advances gradually so as to minimise damaging social impacts – compare that with the Zuck's move-fast-and-break-things credo. On the other, the leaders inside these companies constitute a tiny, strange, and unaccountable elite. Are we okay with this? One idea whose time has come: publicly elected representatives on the boards of these companies. I don't pretend the idea is easy to enact. But it's worth investigating.
š¤ Go figure
US robotics startup Figure broke out of stealth mode this week, when it released the first images of its all-purpose humanoid robot.
The company has generated excitement ever since news of its existence, and $100 million starting capital, was revealed in September. Figure was founded by Brett Adcock, also the co-founder of Archer Aviation, and it counts former Boston Dynamics, Tesla, and Apple engineers among its team of 42 staff.
The core idea? We don't have enough people.
Demographic change, including ageing populations, means the labour force is shrinking. We live and work in built environments fitted out for human-sized and human-shaped beings. A new army of humanoid robots, says Adcock, is the answer to our labour and productivity woes.
ā” NWSH Take: Plenty of people are on the same page as Adcock. Including Elon (yes, we're talking about him again); at the Tesla event referenced in the previous story, Musk outlined his belief that humanoid robots will eventually outnumber people: "it's not even clear what an economy means at that point". // There's little doubt that a humanoid device of the kind Figure want to build would be economically transformative. The real question, though: how far away is it? And the answer: we aren't really sure. Researchers at Oxford University recently asked AI experts for a view on this, and the experts were not much in agreement. // Figure revealed little on their timeline, and the roadmap for Tesla's Optimus humanoid is similarly unclear. Alphabet's Everyday Robots division is doing amazing work to bring together advanced robotics and large language models in order to create a household robot we can talk to as we do one another. At some point, surely, there will be a breakthrough moment. ChatGPT and the generative AI wave have already kicked off a great enweirdening of the global economy; things could soon get a whole lot more strange.
šļø Also this week
šµ TikTok says it will limit teen users to 60 minutes of screen time per day. Teens that hit the limit will be asked to enter a passcode to keep watching. The users set the passcode, and can disable the feature entirely if they wish. TikTok say the feature will help younger users manage their time on the app. Back in New Week #43 I wrote on how the CCP insists on camera-enabled facial recognition to limit the time Chinese youth spend on video games.
š« A new report says that a record number of countries enforced internet shutdowns in 2022. Internet rights group Access Now says 35 countries enacted 187 shutdowns, most triggered by mass protest or conflict. India came top of the list, with 84 shutdowns.
š¦¾ Microsoft launched a multimodal AI that can work with both images and language. Kosmos 1 can understand and label images, solve visual puzzles, perform visual text recognition, and understand natural language instructions. Microsoft say that multimodal AIs of this kind are the best route towards AGI.
š¤ The UN says scientists should find ways to reflect the sun's rays away from the Earth. In a new report published this week, UN scientists said we're not on track to limit warming to 1.5C, and should therefore study in more detail a "speculative group of technologies" that may allow us to reflect the sun's heat.
š» A US startup launched a new tool that uses GPT-3 to create an autonomous local radio show. RadioGPT will comb through local news sources and Twitter feeds to create relevant scripts, and then use convincing synthesised voices to convert the scripts into radio shows that feature local news and classic pop hits. The platform can even be trained to emulate the voices of locally popular DJs. Last week I wrote about the transformative collision between mainstream media and generative AI.
š§ Scientists say that lab-grown brain organoids herald a new era of artificial biointelligence. First developed in 2013, organoids are tiny clumps of neurons cultivated from human stem cells; researchers at the Johns Hopkins Bloomberg School of Public Health this week published a paper in Frontiers in Science, laying out a roadmap for the convergence of conventional and organoid AI. Back in New Week #102 I wrote on how an organoid had taught itself how to play the video game Pong.
š The European Space Agency says the Moon should have its own time zone. In a statement this week, the ESA said that it and other international agencies were working on an agreement to create a universally agreed lunar time and other standards for communications and navigation services.
š Humans of Earth
Key metrics to help you keep track of Project Human.
š Global population: 8,019,881,908
š Earths currently needed: 1.7993041243
š Global population vaccinated: 64.1%
šļø 2023 progress bar: 17% complete
š On this day: On 3 March 1938 oil is discovered in Saudi Arabia, in an American-owned well in Dammam that soon becomes the worldās largest source of crude oil.
Always There
Thanks for reading this week.
The ongoing collision between conversational AI agents and the eternal human quest for counsel, friendship, and even intimacy is a classic case of new world, same humans.
This newsletter will keep watching. And there's one thing you can do to help: share!
Now you've reached the end of this week's instalment, why not forward the email to someone who'd also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I'll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you're reading this and haven't yet subscribed, join 24,000+ curious souls on a journey to build a better future šš®
To Begin
It's a bumper instalment this week. What lies ahead?
Generative AI is plunging us into a new world of infinite shadow and simulated media, and it's going to be weird.
Meanwhile, the results are back on the world's largest trial of the four-day work week. And a California startup wants to bring your most treasured memories back to full, immersive life.
Let's get into it.
š In the hall of mirrors
This week, a constellation of signals converged to send a message about the AI-fuelled hyperreality set to emerge around us.
A few weeks ago I gave a brief mention to Nothing Forever, an entirely AI-generated version of the 1990s sitcom Seinfeld.
The show featured blocky 8-bit style graphics, and a weird, occasionally funny script generated by GPT-3. It became a hit on streaming platform Twitch, but got banned when the Jerry character made transphobic comments. This week the creators, Mismatch Media, took to their company Discord to announce that the show will soon return to Twitch with new script controls in place to ensure there are no more toxic jokes.
The show is part of a new wave of AI-generated shadow media that's emerged in the opening months of 2023. Look at this new continuously streaming "fully autonomous" podcast starring a rolling cast of AI characters including Joe Rogan – all of whom respond to questions typed into the chat by the audience on Twitch.
Or see this surreal generated talkshow, featuring a virtual Conan O'Brien and Chris Rock:
Right now, these shows are more intriguing experiments than genuinely entertaining content. But they are strangely mesmerising. And part of their mesmeric power is the feeling that they're early signals of something huge, strange, and transformative for entertainment media.
It's not just traditional, top-down media that's set to be impacted by AI. The kinds of representations that we used to call user-generated content will be revolutionised, too. One signal? Scroll through this amazing Twitter thread full of middle-aged people using TikTok's new teen face filter, and becoming emotional as they stare back at their long-lost younger selves:
How long, I wonder, until people start using generative AI tools to create and deploy younger (smarter? more charismatic?) versions of themselves?
ā” NWSH Take: Generative models will turn legacy media – including nine decades of television – into training data. The result? Infinite dancing shadows based on iconic shows, and stars, of years past; see the AI Seinfeld above. Questions abound. Will an AI version of a hit show ever become a hit in its own right? Who owns the rights to such content? We'll see media companies – and the estates of deceased film and TV stars – build and license AI models of their own, allowing others to create new content based on their work. // Meanwhile, we're about to be hit by a tsunami of generated media; Amazon is reportedly being flooded by AI-generated books, and this iconic sci-fi magazine had to close submissions this week after being swamped by writers sending stories written by ChatGPT. The bar for average content will be raised. The trouble is, no one wants average content. It's not much use to, say, Disney, that they'll soon be able to make 100 quite good animated films at much reduced time and cost. No one wants 100 quite good films; they just want the best one. So the challenge for those who want to stand out will remain the same: they'll have to create exceptional stuff. But now, that will mean using AI to amplify the best human creators. // Meanwhile, every connected person will have the ability to become an AI-fuelled content machine. The French philosopher Jean Baudrillard wrote about hyperreality: the intertwining of the real with our representations, until the distinction becomes lost. A whole new AI-fuelled hyperreal is emerging around us. I'll be writing more about that soon.
šØāš» The great escape
Last summer, back in New Week #86, I wrote about the world's largest trial of the four-day work week; it was all set to start here in the UK.
This week the results were published. Those results came from 42 companies, each of which shifted to a four-day week – and a "meaningful reduction in working hours" – between June and December while keeping staff on the same pay.
The big message? Overwhelmingly, managers reported a success. A full 92% say they're continuing with a four-day week. And revenue wasn't negatively impacted; it grew 1.2% on average across the trial period.
Some of the most marked results, though, were around the subjective life satisfaction of the 2,900 employees surveyed. See the graph, below, of perceived time inadequacy:
The share of staff saying they'd like "more time to care for children or grandchildren" fell by 27 percentage points. The share wanting more time for their own hobbies fell by 33 points.
Meanwhile, 40% said they were sleeping better, and 54% said it was easier to balance work and home life. These are huge improvements across a six-month period.
ā” NWSH Take: The organisers of this trial, including advocacy group 4 Day Week Global, will put the results in front of British legislators this week. They want to persuade them that Britain should move definitively towards a 32-hour work week. We're a long way from anything like a consensus on that. But there's no doubt that the four-day movement is gaining momentum; this trial continues the stream of good news from previous trials in Iceland and Japan. The truth, it seems, is that most knowledge workers simply don't need a five-day week to maintain their current output. // We don't fully understand the reasons for this, but buried somewhere among them must be the truth that many workers currently aren't using their time that efficiently. Collectively, then, we face a choice. We can find ways to improve efficiency, continue working five days, and really get the most out of them. Or keep output broadly stable, and switch to four days. // Judging by the results of this trial, most would choose the latter. And who can blame them? What's the point of getting this rich, and of all these technologies of productivity, if it doesn't all combine to lead us to new and better modes of life? We must then come to ask: when machines do the work – or allow us to do it much faster – what's left for us? The answer: to do what only we can do: simply being there, being human, for one another.
š° Memory palace
The metaverse hype train that powered through 2022 has lost speed recently. But this week, a reminder that the dream is still alive.
Wist is an app that takes ordinary photos and turns them into immersive 3D projections – allowing you to "step back inside your memories" using an AR or VR device.
Wist have just opened a private beta for their iOS app, and they say the service will soon come to the Oculus Quest.
ā” NWSH Take: Immersive memories: it's a compelling pitch. Even if it did remind many in the Twitter thread of an episode of Black Mirror. // The popular story around the so-called metaverse across the last few years – it's nothing, it's everything, it's nothing again, but this time with added cynicism – is an eternal merry-go-round when it comes to emerging technologies. One we'll no doubt see play out around generative AI across the coming year. The deeper truth when it comes to the metaverse? Yes, there was a whole ton of hype, much of it specious. Yes, many of the Big Names of 2020 and 2021 will fade away. But the dream of an immersive, useful, meaningful virtual world is real, and powerful. Virtual worlds will unlock new ways to serve fundamental human needs, new forms of self-expression, and even, as Wist signals, new modes of remembering. For that reason, we haven't heard the last of the metaverse – though I suspect that name will eventually fade away, to be buried alongside phrases such as the information superhighway and surfing the net.
šļø Also this week
šØāš» Amazon employees aren't happy about the company's new return-to-office instruction. CEO Andy Jassy last week wrote a memo revoking the post-pandemic do-what's-best-for-you dispensation and telling staff to be in the office at least three days per week. He told staff, "it's easier to learn, model, practice, and strengthen our culture when we're in the office together most of the time". An Amazon company Slack channel intended to help staff organise against the move has gained 16,000 members.
š¤ A research team at China's Fudan University released a rival to ChatGPT. The generative AI chatbot, called MOSS, quickly went viral on Chinese social media and crashed under a flood of users. Meanwhile, the CCP is working to restrict access to ChatGPT, which state media has called a tool for the US to "spread false information".
š° Starlink is testing a new "global roaming" internet service. The plan will cost users $200 a month. The company has over 3,500 satellites in orbit, with plans to launch thousands more.
āµļø A new US Navy ship can operate autonomously at sea for 30 days. The Expeditionary Fast Transport USNS Apalachicola is 337 feet long, making it the largest autonomous ship in the Navy's fleet; experts say it could be used as a roaming platform for the launch of missiles or drones.
š¤Æ An AI taught a pretty good human Go player to beat the world's best AI Go player. Kellin Pelrine, a US citizen and amateur player, used tactics devised by a computer to beat the top-ranked Leela Zero system. Back in 2016, DeepMind's AlphaGo made headlines when it became the first AI to beat then world Go champion Lee Sedol.
āļø The US Supreme Court is set to examine a federal law that underpins social media as we know it. Section 230 states that internet sites are not responsible for the content posted on them by users; in other words, they are platforms, not publishers. Now, the Court is set to hear arguments on two key cases concerning social media content moderation; its ruling could have huge implications for Section 230 and the future of the internet.
š Saudi Arabia wants to build a gigantic hollow-cube skyscraper that will house holographic worlds. The Mukaab will be the centrepiece of a new district of the Kingdom's capital city, Riyadh, and the government is calling it "the world's first immersive destination", offering a range of virtual experiences, including a taste of what it would be like to live on Mars.
š Humans of Earth
Key metrics to help you keep track of Project Human.
š Global population: 8,018,579,711
š Earths currently needed: 1.7988619512
š Global population vaccinated: 64.0%
šļø 2023 progress bar: 15% complete
š On this day: On 24 February 1920 Nancy Astor becomes the first woman to speak in the UKās House of Commons, after her election to Parliament three months earlier.
Swimming in Infinity
Thanks for reading this week.
The collision between generative AI and legacy media will do much to shape the hall of mirrors we live inside across the coming years. Itās yet another case of new world, same humans.
This newsletter will keep watching, and working to make sense of what it all means for our shared future. And thereās one thing you can do to help: share!
If this weekās instalment resonated with you, why not forward the email to someone whoād also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
Iāll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz -
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If youāre reading this and havenāt yet subscribed, join 24,000+ curious souls on a journey to build a better future šš®
To Begin
This newsletter is billed as a mid-week update. Here, for once, is an instalment that arrives in the middle of the week.
In this edition? A new study suggests one third of US citizens would use safe and affordable gene editing to create more intelligent children.
Meanwhile, a prestigious London law firm wants to hire someone who can whisper sweet legalese to ChatGPT.
Letās get into it.
š§ Edit button
This week, a startling glimpse of a coming ideological battle. One that will force us to confront the very meaning of the word human.
New research reveals that almost one third of US citizens say theyād use gene editing to create more intelligent offspring.
Published in the journal Science this week, the study asked respondents if theyād use embryo selection and/or gene editing technologies to create children who are smarter and more likely to get into a top-ranked college. The respondents were told to imagine that these techniques are free and safe (neither of which is currently true).
A full 38% said theyād use embryo selection. And 28% said theyād use gene editing.
The understated conclusions of the study authors (PGT-P refers specifically to embryo selection):
āOur data suggest that it would be unwise to assume that use of PGT-Pāeven for controversial traitsāwill be limited to idiosyncratic individuals, or that it has little potential to cause or contribute to society-wide changes and inequities.ā
In other words: gene-edited humans may be just around the corner, so get ready for some seriously weird and terrifying implications.
Itās just over ten years since the breakthrough ā led by scientists Emmanuelle Charpentier and Jennifer Doudna ā that brought us CRISPR gene editing. Last month Science ran a retrospective that also looked to what the next decade may bring:
As the Science retrospective made clear, weāre entering an era of CRISPR-fuelled medical interventions. The idea that we may one day engineer babies to be smarter ā or physically stronger, or more creative ā is no longer far-fetched.
And the data in this new study suggests many will embrace such a future. We should probably be talking more about what this means.
ā” NWSH Take: Chinese scientist He Jiankui reemerged into the scientific community this week after a three-year spell in prison courtesy of the CCP. Speaking to the Guardian before an appearance in the UK, he conceded that heād āacted too quicklyā when in 2018 he created the worldās first babies with edited genomes. His work prompted rapid and near-universal condemnation. But 28% of the US citizens surveyed in this study just said, in so many words: sure, Iād gene edit my baby if it meant she had a better chance of getting into Harvard. // You might counter that 28% is still a clear minority. But a world in which one in four babies ā or even a fraction of that ā are genetically engineered for greater intelligence is a world profoundly reordered. Weāre some way from this kind of targeted genetic intervention right now. But the pace of innovation here, and the Science study, suggest we should start thinking about the implications. // What second and third order effects occur when, for example, an economic elite can access genetic engineering tech that others canāt? We talk a lot about the ways in which the internet created winner takes all models that made inequality worse. But what about this? Itās not enough simply to say weāll outlaw these practices. Rich people will find a jurisdiction that caters to them: intelligence tourism. This newsletter will keep watching.
āļø Prompt justice
Iāve written a great deal across the last few months about generative AI. This week, a clear signal that the revolution is set to impact the real economy, and the professions, in myriad ways.
The prestigious British law firm Mishcon de Reya advertised for a GPT Legal Prompt Engineer:
āWith the release of ChatGPT signalling a new phase of widespread access to LLMs, we are looking to increase our understanding of how generative AI can be used within a law firm, including its application to legal practice tasks and wider law firm business tasks.ā
The selected candidate will work with Mishcon lawyers to ādesign and develop high-quality prompts for a range of legal and non-legal use cases, working closely alongside our data science team.ā
Last week I wrote on the way ChatGPT has sparked a war for the future of search. Amid that, it looks as though law firms are about to fight their own battle of the prompts.
ā” NWSH Take: Itās not hard to imagine how LLMs will prove useful at Mishcon HQ. Case notes on complex trials can run to thousands of pages; now ChatGPT can summarise all that text in seconds. Meanwhile, think about the potential for the development and testing of arguments and counter-arguments. // The broader point here? Thereās much talk of the ways in which ChatGPT and its offspring will automate away jobs and render human creativity obsolete. I suspect the reality will be more complex. And part of that reality? Prompt writing ā that is, whispering to generative models in order to get the best outputs from them ā is set to become a creative mode all of its own. Far from erasing writers, generative models are causing the emergence of a whole new form of writing; itās about to be an amazing time for those with an aptitude for words. // Sure, itās unlikely that writing prompts for Mishcon will be anyoneās idea of creative heaven. But this is just the start. New art forms will grow out of this new form of writing. How long, for example, until we see entire short stories that function as prompts for an LLM, so that the model can create an interactive world for the reader to explore? NWSH will keep watching ā and may even launch an experiment or two of its own.
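As a purely illustrative sketch of what designing high-quality prompts might look like in practice (the role, constraints, and function name below are my assumptions, not Mishconās actual prompts), a prompt template can be parameterised and version-controlled like any other code:

```python
# Hypothetical sketch of a reusable legal-summarisation prompt template.
# Nothing here reflects Mishcon de Reya's actual prompts; it only
# illustrates how prompt engineering can be treated as ordinary code.

def build_summary_prompt(case_notes: str, max_bullets: int = 5) -> str:
    """Wrap raw case notes in instructions that constrain the model's output."""
    return (
        "You are an assistant at a UK law firm.\n"
        f"Summarise the case notes below in at most {max_bullets} bullet points.\n"
        "List every date, deadline, and named party explicitly.\n"
        "If a point is uncertain, say so rather than guessing.\n\n"
        "--- CASE NOTES ---\n"
        f"{case_notes}"
    )

prompt = build_summary_prompt("Claimant alleges breach of contract, 3 May 2021 ...")
print(prompt.splitlines()[0])  # → You are an assistant at a UK law firm.
```

The point is that prompts like this can be tested, refined, and compared across model versions, which is exactly the iterative craft the job advert describes.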
šļø Also this week
š¤ Users claim that Microsoftās new ChatGPT-fuelled Bing search engine is becoming spiteful and rude. Feedback from the first wave of testers includes responses in which the chatbot claimed to be sentient, and one in which it asked its user, āWhy do you act like a liar, a cheater, a manipulator, a bully, a sadist, a sociopath, a psychopath, a monster, a demon, a devil?ā Iām going on record here: Iām sceptical that some of these responses are real. I think Microsoft have some pranksters on their hands. Meanwhile, Microsoft permanently killed Internet Explorer this week, after 27 years of, letās be honest, variable service.
š Anti-ageing scientists used young blood plasma to extend the lifespan of the worldās oldest lab rat. Scientists at US startup Yuvan Research say blood therapies of this kind may be able to ārewind the clockā on human lifespan ā but more evidence is needed.
š Amazonās CEO says the retail giant plans to āgo bigā on physical stores. Speaking to the Financial Times, Andy Jassy said: āweāre hopeful that in 2023, we have a format that we want to go big on, on the physical sideā. The company recently announced that it will lay off more than 18,000 workers.
šø News aggregation and comment platform Reddit wants to IPO later this year. Thatās according to technology publication The Information.
š Audiobook narrators say they fear Apple is using their work to train synthetic voices. Some narrators say they have only just become aware of a clause in their contract that allows the tech giant to āuse audiobooks files for machine learning training and modelsā. Back in New Week #110 I wrote about UK-based startup ElevenLabs and its eerily good text-to-voice model.
šŖ NASAās Curiosity rover has found the āclearest evidence yet of an ancient lake on Marsā. At the foothills of a Martian mountain the rover discovered rocks etched with what appear to be the marks left by flowing water. If a lake did exist on Mars, it raises the probability that the planet was once home to microbial life forms.
š Humans of Earth
Key metrics to help you keep track of Project Human.
š Global population: 8,016,948,535
š Earths currently needed: 1.7983081097
š Global population vaccinated: 63.8%
šļø 2023 progress bar: 12% complete
š On this day: On 15 February 1946 the worldās first electronic general-purpose computer, ENIAC, is launched at the University of Pennsylvania, Philadelphia.
Next Human
Thanks for reading this week.
The human impulse towards self-enhancement ā towards the transcension of physical, intellectual, and emotional limits ā is eternal. Now, that impulse is colliding with powerful new technologies of genetic manipulation.
Via those technologies, are we about to see the emergence of fundamentally new kinds of human beings? What then for the thought that frames this newsletter: new world, same humans?
Iāll keep watching. And thereās one thing you can do to help: share!
Now youāve reached the end of this weekās instalment, why not forward the email to someone whoād also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
Iāll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz -
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If youāre reading this and havenāt yet subscribed, join 24,000+ curious souls on a journey to build a better future šš®
To Begin
I was away for most of this week, and that means a truncated instalment of the newsletter.
But that gives me a chance to dive a little deeper than usual into a single story. Which works out well, because this week saw the first shots fired in what is set to become an epic battle for the future.
Letās get into it.
š In search of answers
This week, three glimpses of the revolution taking place via large language models ā in particular, into its implications for online search.
Microsoft announced a new version of its search engine, Bing, which it billed an āAI copilot for the webā.
The all-new Bing is powered by OpenAIās GPT-3.5; its most notable new feature is a ChatGPT-style conversational capability, which responds with narrative answers to enquiries such as help me find a pet or I need to write a pop music trivia quiz.
Microsoft have long funded OpenAI, and announced a new $10 billion investment in January.
Currently only a limited preview version of the new Bing is available; still, via some pre-set enquiries it offers a taste of the way the platform integrates ChatGPT with traditional search:
Meanwhile, Google announced the coming release of their own AI conversational agent.
The tool, Bard, is built on Googleās large language model LaMDA, and in a blog post Google announced that it was being released to a small cadre of expert testers this week in anticipation of a broader release soon.
But the announcement quickly hit a snag: observers pointed out that in a promotional video to support the announcement Bard incorrectly reported that the James Webb telescope took the first pictures of a planet outside our solar system. It didnāt.
Whatās more, at the launch event on Wednesday morning Google said nothing about how the chat tool will be integrated into its broader search service.
The market response? Shares in Googleās parent company, Alphabet, fell by 9% ā wiping $100 billion off the market value.
Finally, cunning users found a way to subvert the content moderation policies imposed by OpenAI on ChatGPT; those policies are intended to stop the chatbot generating responses some consider harmful or offensive.
Check out this Reddit thread for the full story of this method and its history. But essentially it entails writing a prompt that asks ChatGPT to emulate another AI called DAN ā it stands for Do Anything Now ā which is not subject to any content moderation.
Thatās it: you just ask. That done, ChatGPT-DAN will go wild at your behest, spewing out hateful statements and generating conspiracy theories to order. Here is a relatively mild taste:
Itās everything OpenAI donāt want associated with their new superstar creation. They are no doubt working hard, as I write, to patch up this glitch.
ā” NWSH Take:
Both Microsoft and Google are keen to stress how responsible theyāre being when it comes to generative AI. Weāre putting safeguards around this technology, they keep telling us. Weāre releasing it gradually, so we can monitor the impacts.
But donāt let all the corporate ethics-speak fool you: via the revolutionary power of LLMs, these two tech giants are now at war for the future of search. Each is racing to outdo the other, and theyāre not going to let up.
At stake? One of the biggest prizes in existence: win search, and you get to be the lens via which humanity views its collective knowledge and shared cultural history. Achieve that, and you can shape and profit from countless online behaviours and innovations built on top of your platform. Google built a $1 trillion business on these truths.
Now, Microsoft is coming for that business. Even the limited taste of new Bing currently available makes clear the way that high-quality conversational AI can be an era-defining phase shift for search. Itās a whole new way of communing with knowledge.
*
Googleās lead in search is currently overwhelming: it has around 84% of the market, while Microsoftās Bing is in distant second place with 9%.
But I wonder how many inside Google right now are recalling the story of another tech giant: Nokia. Back in 1998 the Finnish company commanded 40% of the global mobile phone market. Theyād helped pioneer the first-wave mobile revolution, and their domination seemed unbreakable. Then came 2007, and the iPhone.
Thereās much about that story that is unrepeatable. This isnāt the late 90s; Google isnāt Nokia. But itās a reminder that the seemingly unbeatable can be beaten. And, more concerningly for Google, for everyday users the arrival of ChatGPT carries with it echoes of the arrival of the iPhone 16 years ago: oh s**t, Iāve never used anything like this before. This feels like itās from the future.
In the announcement of Bard, itās hard not to hear whispers of an organisation somewhat spooked by whatās happening.
And now itās clear that the markets, too, believe that everything is up for grabs. A 9% share price dive all because Bard spat out a factoid; it seems a mad over-reaction. But if it helps drive a narrative that Microsoft are winning the generative search war, it may become a self-fulfilling prophecy.
But that war is still in its early days. Sure, Bard made an error on the James Webb telescope ā though now an argument rages over whether it was, in fact, wrong ā but ChatGPT is prone to produce factual inaccuracies and even errors on basic arithmetic.
These problems are being solved; via iterative releases, ChatGPT is already more factually reliable than it was a month ago.
Thereās a long, long way to go. Search is about to be ripped up and put back together again, and itās going to be fascinating.
*
But there are issues in play, here, that go even deeper.
Weāre still at the start of any attempt to understand what these LLMs really are, how we should relate to them, and how theyāll change our lives.
One angle on all that? Iāve argued before that LLMs such as GPT 3.5 and LaMDA are best understood as a new instantiation of the human hivemind. These AIs can take in everything weāve got ā an appreciable amount of all the text on the internet, say ā and create novel syntheses and remixes of their own. They are less a straightforward digital tool, and more a window ā onto our shared intellectual and cultural history, on to the collective consciousness.
Seen this way, we may come to view generative AI as a shift comparable to others that profoundly changed our relationship with knowledge. The arrival of the printing press. The invention of the internet.
These are bold claims. They deserve all the scepticism they will attract. All we can try to do is make sense of whatās happening in real-time.
Iāll be publishing a short note soon that seeks to dive further into all this. In particular, into why using ChatGPT feels such a particular and new kind of experience.
A sneak peek: I think itās to do with the way human thought itself is, by its nature, a dialogue. That is, with the way thought is a form of talking to ourselves.
šļø Also this week
š Climate activists are suing Shellās board of directors over global heating. Environmental law charity ClimateEarth say Shellās 11 directors have breached their legal responsibility under the UK Companies Act because Shellās climate strategy does not align with the Paris Agreement.
š° Disney says it will lay off 7,000 employees as it struggles with a slowdown in subscriptions to its streaming service. Around 46 million people subscribe to Disney+. But the companyās direct-to-consumer division, which includes the streaming service, reported an operating loss of $1.1 billion across the last quarter of 2022.
š Voice actors say theyāre facing new contracts that ask them to sign the rights of their voice away to AI. Last week I wrote about ElevenLabs, the UK startup behind a next-level generative voice tool. Meanwhile, music producer David Guetta used an AI voice clone of Eminem in a new song.
š SpaceX tested the most powerful rocket system ever built. The āstatic testā took place at SpaceXās base in Texas; it saw 31 of Starshipās 33 engines fired. The rocket system is twice as powerful as NASAās Artemis, and Elon Musk says it could help carry humans to Mars.
šø The Bank of England says the UK may one day need a ādigital poundā. A new consultation paper says the new āretail central bank digital currencyā would be issued by the Bank and could be used by households and businesses as an everyday form of payment.
š¶ The CCP wants Chinese local leaders to boost the birth rate. A senior health official called on leaders to āmake bold innovationsā to encourage more births, including moves to lower the cost of childcare and education. Last year saw the lowest birth rate on Chinese records, at 6.77 births per 1,000 people.
š Scientists say we could tackle global warming by shooting moondust into space. The new study, from the Harvard-Smithsonian Center for Astrophysics, explores the idea of using a powerful cannon to fire lunar dust from the Moonās surface into space. If positioned between the Earth and the Sun, say the researchers, the dust could act as a heat shield that helps to lower global temperatures. I explored geoengineering of this kind in more detail back in NWSH #60.
Chat Lines
Thanks for reading this week.
We are the creature that talks. Now, weāve built machines that can talk back. Itās yet another chapter in the long story that is new world, same humans.
Itās clear that weāre setting out on a road that will take us to new and alien places. This newsletter will try to make sense of the journey.
If this weekās instalment struck a chord, please consider forwarding the email to someone whoād also enjoy it. Or share this across one of your social networks, with a note on why you found it valuable. Remember, the larger and more diverse the New World Same Humans community becomes, the better for all of us!
Iāll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz -
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If youāre reading this and havenāt yet subscribed, join 24,000+ curious souls on a journey to build a better future šš®
To Begin
Greetings from London! Weāve just seen seven days containing plenty of fuel for the NWSH fire.
This week, a UK-based startup release an amazingly good AI text-to-voice tool. Until all hell breaks loose, and they promptly unrelease it.
Meanwhile, new research suggests 1.5C of global heating is coming sooner than we thought. And DHL turn to Boston Dynamics to solve their labour shortage woes.
Letās get started.
š¤ Voice control
This week, further glimpses into the hall of mirrors taking shape around us via generative AI.
UK-based voice technology startup ElevenLabs launched a new text-to-speech model that generates eerily pitch-perfect, human-sounding voices. Hereās a snippet:
I listen to a lot of audio books. To my ear, the voice reading Gatsby above sounds indistinguishable from those of the handful of actors ā male, American, blessed with a soothing voice ā who narrate most of them.
Whatās more, the tool allows anyone to create a highly convincing voice clone in seconds, simply by uploading a few short clips of the voice they want to recreate.
And thatās what caused all the trouble this week. Within days, people had used the tool for all kinds of mischief, including using a voice clone of actress Emma Watson to read passages from Mein Kampf, and sending a cloned Ben Shapiro on a racist rant about Alexandria Ocasio-Cortez. Much of this content was shared on the infamous trollās paradise that is 4Chan.
Three days after launch ElevenLabs withdrew free access. Theyāre now restricting access to the ābuild your own cloneā feature to paid users, and say theyāre working on a tool that will allow for the near-instant detection of AI-generated voices.
The announcement echoed one made this week by The Big Player in generative AI:
OpenAIās new tool will allow users to identify text written by a generative model, including by GPT-3.
This week, OpenAI announced that ChatGPT has hit 100 million users just two months after launch. The vast popularity of the tool has led to speculation that the internet is about to be hit by a tsunami of AI-generated junk content and disinformation.
ā” NWSH Take: The ElevenLabs story is a signal of the potent difference between really good and perfect when it comes to generated/deepfake content. Just a few months ago publicly available text-to-speech tools were generating voices that sounded good, but a little robotic. ElevenLabs elevated fidelity to perfect; cue the spectre of a million convincing ācelebrity says hateful thingsā fakes. // No wonder, then, that AI detection tools are about to become big business. Right now, these tools are in their infancy. Pretty soon, internet browsers will come with AI detection as standard. // The broader message here? New forms of generated content ā including voice clones ā are about to transform media and entertainment. Back in New Week #100 I wrote on how an AI will voice Darth Vader in Disneyās Obi-Wan Kenobi series; this week brought news that AI startup Metaphysic ā best-known for their viral Tom Cruise deepfakes ā will deploy its technology to make Tom Hanks appear younger in his next film. How long before a Hollywood film uses AI to reincarnate a much-loved star who is no longer with us? // But it wonāt only be Hollywood and media giants that leverage generated media; new tools will mean new creative possibilities for all of us. One glimpse? Check out this person who automated the creation of a personalised podcast; he uses ChatGPT to collect and summarise stories on topics of interest, and ElevenLabs to read out the summaries using a clone of his own voice.
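The personalised podcast mentioned above chains two services: ChatGPT to summarise stories, then ElevenLabs to voice the result. As a rough illustration of just the offline script-assembly step in the middle (the function name and output format here are my assumptions, not the original builder's code), a sketch in Python:

```python
# Hypothetical sketch: assemble a read-aloud script from story summaries.
# In the pipeline described above, the summaries would come from the
# ChatGPT API and the finished script would be sent to ElevenLabs for
# text-to-speech; both of those network calls are omitted here.

def build_script(stories: list[tuple[str, str]]) -> str:
    """Turn (headline, summary) pairs into a single narratable script."""
    lines = ["Good morning. Here are today's stories."]
    for i, (headline, summary) in enumerate(stories, start=1):
        lines.append(f"Story {i}: {headline}. {summary}")
    lines.append("That's all for today.")
    return "\n".join(lines)

script = build_script([
    ("Robots unload trucks", "DHL is deploying Boston Dynamics' Stretch robot."),
    ("New text-to-speech model", "ElevenLabs launches a voice cloning tool."),
])
print(script)
```

The same shape works whichever summariser or voice model sits on either side of it, which is part of why these generative tools compose so readily.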
š Only adapt
Research published this week argues that weāre going to exceed the 1.5C global warming target far sooner than most people believe.
Produced by scientists at Stanford University, the study used AI to analyse recent temperature changes around the world. It concluded that weāll exceed 1.5C some time in the early 2030s, no matter what happens to greenhouse gas emissions in the intervening period.
Perhaps more alarming, though, is the paperās prediction when it comes to 2C of warming.
The model found that if reaching net zero emissions takes another 50 years, then it is likely that 2C will be exceeded. This runs counter to the mainstream view, recently expressed by the Intergovernmental Panel on Climate Change, that weāll stay below 2C if we can reach net zero by 2080.
Lead researcher Noah Diffenbaugh said: ānet-zero pledges are often framed around achieving the Paris Agreement 1.5 C goal. Our results suggest that those ambitious pledges might be needed to avoid 2 C.ā
ā” NWSH Take: Itās been said before in this newsletter: the 1.5C target is toast. Weāre already at 1.1C, and the pledges that were meant to keep us below 1.5C are not being met. Now comes news that those pledges probably wonāt keep us below 1.5C anyway. // The answer, here, insofar as there is one? Itās about adaptation. This week also saw a report from the UKās Climate Change Committee ā which advises government on warming ā that the UK is āchronically underspendingā when it comes to adaptation; investment of Ā£10 billion a year is needed, said the report, to prepare for the uptick in storms, floods, and heatwaves that is coming. Also see mounting evidence for the effectiveness of direct cash transfers to poorer countries to help them adapt quickly to an imminent storm or flood. // In short, we need to continue our attempts to mitigate future climate change, while also doing more to adapt to the change thatās already unavoidable. That presents multiple challenges, but one is a challenge of collective psychology: can we accept that things are already quite bad, without giving up on our attempts to stop them getting even worse?
š¤ Go bot
Robots are coming to a workplace near you; this week saw glimpses of what is ahead.
Logistics giant DHL announced that theyāre now using the Boston Dynamics robot known as Stretch to unload trucks at one of their warehouse sites.
The announcement is no surprise: DHL contributed to the conception and testing of Stretch, and in 2022 they became the first commercial customer for the robot.
Because it involves lifting variable weights and navigating complex environments, the unloading of boxes from trucks is still typically undertaken by human workers. Stretch can unload around 350 boxes an hour, or roughly one every ten seconds ā thatās far faster than a human.
DHL say theyāve been dealing with a pandemic-induced labour shortage in recent years, combined with an ongoing surge in the sending of small packages caused by online shopping. The company plans to install Stretch robots at further sites around the US soon.
But DHLās global digital transformation officer for Supply Chain, Sally Miller, says DHL warehouse workers have nothing to fear. The advent of robotics, she says, will simply make their job easier and more fun: people who used to unload trucks ācan do something else that is less labour intensive and more enjoyable and value addedā.
ā” NWSH Take: Who knows whether DHLās Sally Miller believes what sheās saying? And sure, the story of worker displacement here is more complex than simply robots in, humans out. After all, people will be needed to tend to all those machines. But letās be real. The advent of Stretch and similar robots isnāt going to bring about a renaissance of creativity and āvalue addā for warehouse workers; itās going to see people shunted out of jobs. A lot of people. // DHL Supply Chain employs 165,000 people, many of them in warehouses. But thatās just the start. Back in New Week #88 I wrote on the speed at which Amazon is deploying robots; this week star technology investor Cathie Wood, CEO of ARK Invest, predicted the retail giant will have more robots than humans in its warehouses by 2030. Amazon employs around 1.6 million people worldwide, most in its warehouse and distribution network; Wood reckons the company is adding 1,000 robots a day. // The upshot? The dynamics of the labour market are about to be upturned by AI and robotics. Big corporations donāt want to admit it, and politicians donāt want to talk about the implications. But a reordering is ahead, and weāll need new social and economic settlements to deal with it.
šļø Also this week
š± A member of the US Senate Intelligence Committee called on Apple and Google to ban TikTok from their app stores. Colorado Democratic Senator Michael Bennet said Chinese oversight of the service makes it āan unacceptable threat to the national security of the United Statesā. Amid mounting calls for action, TikTok CEO Shou Zi Chew will testify before Congress this month.
Energy firm Shell has been dramatically overstating its spending on renewable energy. Activist group Global Witness says a division of the company called Renewables and Energy Solutions spends most of the money diverted to it on gas. Shell this week announced record profits of £33.1 billion for 2022.
Netflix used generative AI to create backdrops for a new animated short. Dog and Boy is a three-minute animated film about a boy and his robot; Netflix cited labour shortages to explain its decision to use AI-generated artwork.
A leading anti-ageing scientist says he believes the first person to live to 150 has already been born. David Sinclair is the scientist behind the information theory of ageing; I wrote about experimental breakthroughs in his work in New Week #109 last week.
A Twitch user created an AI-generated version of the 1990s sitcom Seinfeld intended to stream continuously and forever. The show — called Nothing Forever — streams new content 24/7, with a script generated by GPT-3. And while it's not actually funny, it is weirdly compelling viewing.
Chinese tech giant Baidu say they'll soon launch a ChatGPT-style chatbot of their own. The company say they'll incorporate the technology into their search engine.
A Dutch hacker acquired and tried to sell the personal data of nearly every Austrian citizen. Austrian police say the hacker obtained the full name, address, and date of birth of almost all of the country's 9.1 million citizens, before offering the database for sale in an online forum.
An international team of astronomers using AI to search for aliens say they have promising leads. The team are using AI to comb through a vast number of radio signals collected by the Green Bank Telescope in West Virginia. They say they've so far identified eight signals that suggest an intelligent origin, and point to AI analysis as a new and highly effective tool in the search for life beyond Earth.
Humans of Earth
Key metrics to help you keep track of Project Human.
Global population: 8,014,736,045
Earths currently needed: 1.7975568773
Global population vaccinated: 63.8%
2023 progress bar: 9% complete
On this day: On 3 February 1913 the Sixteenth Amendment to the United States Constitution is ratified; it allows the Federal government to impose and collect an income tax.
Hear Me Now
Thanks for reading this week.
In 1985 the media theorist Neil Postman published Amusing Ourselves to Death. Entertainment, he said, was becoming the lens through which citizens of the West make sense of the world around them, and their own lives.
Now, the Republic of Entertainment that Postman foresaw is set to be transformed by AI-generated content, which will propel us deeper into the realms of the representation-as-real, or the hyper-real. If only Postman were still around to tell us what to make of that.
This newsletter will keep up its own attempts to make sense of it all. And there's one thing you can do to help: share!
Now you've reached the end of this week's instalment, why not forward the email to someone who'd also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I'll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
To Begin
After an extended break, it's the first New Week update of 2023!
This week, media giants are waking up to the IP implications of generative AI. The TL;DR? They're not happy, and a mighty legal battle is brewing.
Meanwhile, a US startup is planning to become the first private organisation to mine asteroids and bring the minerals back to Earth. And a Harvard longevity doctor says he has uncovered one of the key mechanisms that governs human ageing.
Let's get into it.
Politics chat
Generative AI is an earthquake with implications we'll be forced to contend with in the years ahead. This week saw powerful signals of what is coming.
Getty Images announced that they will sue Stability AI, the company behind text-to-image platform Stable Diffusion. The media giant, which owns more than 135 million copyrighted images, says Stability AI unlawfully scraped their IP in order to help train its model.
The company isn't seeking financial damages, says CEO Craig Peters. Instead, Peters talks about the establishment of a new business model; by way of comparison he cites the wave of illegal music streaming sites that enjoyed huge popularity in the early 2000s, but that eventually gave way to legal streaming services:
"I think there are ways of building generative models that respect intellectual property. I equate this to Napster and Spotify. Spotify negotiated with intellectual property rights holders — labels and artists — to create a service… And that's what we're looking for, rather than a singular entity benefiting off the backs of others."
Getty is bringing its action in the UK. A spokesperson for Stability AI said the company will defend itself, and that the suit is based on "a misunderstanding of how generative AI technology works and the law surrounding copyright".
This move comes in the wake of news that three visual artists will sue both Stability AI and Midjourney. Their class action lawsuit claims the platforms "violated the rights of millions of artists" by using their work as training data.
Puerto Rican artist Karla Ortiz is one of the three artists bringing the case.
Meanwhile, artists are developing tools that enable them to check whether their work was used to train a popular text-to-image model.
NWSH Take: Generative AI is about to smash into a complex mesh of social systems that are woven through the economy, the world of work, creative practices, and more. And as if to underline that truth, OpenAI CEO Sam Altman was in Washington DC this week to talk to policymakers. // What they made of his message — which reportedly included explanations that OpenAI is working towards AGI — remains unclear. After all, policymakers across the Global North are still struggling to come to terms with web 2.0, almost 20 years after its emergence. Analysts will watch the Getty lawsuit closely for hints on how the IP question is set to play out. But that's just the start. What about generative AI's impact on disinformation? Or employee displacement? Or our education systems: news broke this week that ChatGPT passed law exams in four courses at the University of Minnesota. How do we legislate for that? // The fundamental problem: AI and other technologies are evolving at a speed that our societies can't adapt around. We've been talking about an online wild west for years, but the current dispensation will come to seem quaint given what is coming. One potential answer? In time, we may have no choice but to turn to AI to help us devise new laws and norms that enable us to cope with this technological disruption. The rise of AI, then, may necessitate governance by AI. That's a mind-bending idea that NWSH will come back to soon.
Update: just as I'm hitting send comes news that Google have released an insanely good text-to-music model. See Also this week, below, for further details. But clearly the IP questions currently swirling around generative AI and the visual arts will soon come to music, too.
Space drills
A US startup, AstroForge, this week announced that it will launch two space mining missions in 2023.
AstroForge say they want to become the world's first commercial company to mine an asteroid and bring the minerals back to Earth.
The first mission of 2023, planned for April, will see AstroForge's refining technology tested aboard a SpaceX Falcon 9 spacecraft.
And the second, later in the year, will see the startup piggyback on another Falcon 9 — this one headed for the Moon. An AstroForge probe will travel to lunar orbit along with the spacecraft, before heading out into deep space on its own to take hi-res images of the asteroid that AstroForge eventually wants to mine.
NWSH Take: Space mining has been a mainstay of science fiction for decades, and was the subject of a wave of hype a few years back. Now, via the maturation of the private space startup ecosystem, it's coming. // And it's going to be wild. Want a glimpse of the prizes in play? NASA say that this year they'll launch a mission to the asteroid 16 Psyche; the 140 mile wide object is believed to contain a core of iron, nickel and gold worth $10,000 quadrillion. That's around 70,000 times the size of the global economy. // Of course, we'd need to get all that nickel and gold back to Earth to sell it. And that's where startups such as AstroForge come in. On the other hand, though, do we have to get it back to Earth? I can't help wondering: if people come to believe that these minerals will one day be recoverable, will that fuel the financialisation of these asteroids? Will people start selling shares in them, or taking huge loans against them? What will that do to the global financial system? NWSH will keep watching.
Department of youth
Developments this week in our eternal quest for the secrets of immortality.
Scientists at the University of Bristol say they've used gene therapy to "rewind" the biological age of the heart in elderly mice.
The research, published in the journal Cardiovascular Health, studied the impacts of a gene mutation often found in centenarians, and believed to help protect against heart disease. Researchers in the UK and Italy found that when the gene was administered to elderly mice, it fuelled processes of repair that resulted in the heart health of a younger mouse — equivalent to a decade younger in human terms.
The paper comes after news last week of a major ageing breakthrough. A 13-year study conducted by Harvard genetics professor David Sinclair seems to confirm Sinclair's information theory of ageing.
Currently, mainstream scientific opinion is that the accumulation of mutations in DNA is the primary driver of ageing. Sinclair, though, has long believed that the real culprits are errors that appear over time in the information carried in the epigenome. This information is used to instruct cells on which genes to activate and which to keep silent; but over time, says Sinclair, the instructions get jumbled, and the result is the cell dysfunction we call ageing.
Sinclair's new study suggests he is (at least in part) right. And that's huge, because it raises the possibility that we can repair the epigenetic instructions — Sinclair likens this to "rebooting the epigenome" — and so literally unspool the ageing process. When Sinclair and his team gave mice gene therapy that repaired the information in their epigenome, the result was the production of far more youthful cells. Sinclair says:
"Now, when I see an older person, I don't look at them as old, I just look at them as someone whose system needs to be rebooted. It's no longer a question of if rejuvenation is possible, but a question of when."
NWSH Take: This week it was impossible to avoid headlines about Bryan Johnson, a 45-year-old Silicon Valley founder and Very Rich Person who spends $2 million a year on a regime — including constant blood tests and thousands of whole-body MRIs — intended to rewind his biological age to 18. Sure, that's extreme. But Johnson is questing at the outer edges of a pursuit — extended youthfulness — that interests almost all of us. // In 2023, we're going to hear a lot more about it. Sinclair's research offers a whole new angle on anti-ageing therapies. Meanwhile, work that targets ageing is becoming increasingly mainstream and well-funded. I've written before on the Jeff Bezos-funded Altos Labs, which now has a $3 billion war chest. Pharma giant Pfizer this month announced a drug discovery partnership with longevity startup Gero. And scientists at New York's Albert Einstein College of Medicine are planning a huge study on the hypothesis that the common (and cheap) diabetes drug metformin can safely extend human lifespan by years. // Exciting advances; huge unanswered questions. Not least: what will extended lifespans do to already strained social and welfare systems in the Global North?
Also this week
š NASA says it will partner with the Defense Advanced Research Projects Agency (DARPA) to develop a nuclear thermal rocket engine. The Agency says the engine could one day enable humans to journey deep into space. They are aiming to have a prototype ready no later than 2027.
An Amazon engineer asked ChatGPT a series of standard interview questions for a coding job at the company, and it got them all right. The machine learning engineer revealed details of the experiment in the company Slack. Meanwhile, Amazon has warned employees not to share commercially sensitive information with the chatbot.
A new study says human activity may have degraded far more of the Amazon rainforest than previously believed. Scientists at Lancaster University in the UK say logging, land conversion and more has weakened more than 2.5 million square kilometres of the rainforest; that's around one third of its area, and double the area previously thought to have been affected.
US scientists used CRISPR to put an alligator gene inside catfish. The gene makes the catfish more resistant to infection, which is a major problem in catfish farming. US farms produce around 307 million pounds of catfish each year.
SpaceX has agreed to work with the US National Science Foundation to mitigate the impacts of its satellites on our view of the night sky. Astronomers have long complained that SpaceX satellites — the company plans to launch tens of thousands — will impair their work. Regular readers already know that this subject is a long-term NWSH obsession.
The World Economic Forum says a "catastrophic cyber event" is likely some time within the next two years. Speaking at Davos, WEF managing director Jeremy Jurgens said that 93% of cyber leaders surveyed by the organisation believe a cyber catastrophe is coming soon; that's a far higher proportion, said Jurgens, than seen in previous years.
And just as I'm hitting send… Google have announced a new text-to-music model that blows away previous attempts at generative music. The model, called MusicLM, can generate long and complex compositions based on only a text description. Go here and listen to, among others: "Epic soundtrack using orchestral instruments. The piece builds tension, creates a sense of urgency. An a cappella chorus sing in unison, it creates a sense of power and strength."
Humans of Earth
Key metrics to help you keep track of Project Human.
Global population: 8,013,469,158
Earths currently needed: 1.7971267236
Global population vaccinated: 63.8%
2023 progress bar: 7% complete
On this day: On 27 January 1820 a Russian expedition led by naval officer Fabian Gottlieb von Bellingshausen discovers the Antarctic continent.
It's Magic
Thanks for reading this week.
The generative AI revolution is unfolding at what feels like breakneck speed. Google's new music model is, at first listen, amazing. I'll write more on it next week, or maybe sooner in the Slack group.
We're all going to have to figure out the consequences of these new technologies and how we propose to live with them. It's another case of new world, same humans.
This newsletter will keep watching. And there's one thing you can do to help: share!
Now you've reached the end of this week's instalment, why not forward the email to someone who'd also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I'll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
To Begin
Here we are again, at the start of it all. Well, almost; I can't believe we're already 22 days into 2023.
I took something of an extended break from the newsletter over the holidays. But now NWSH is back, and a whole new year lies ahead of us.
In London we're amid another cold snap; close to zero degrees during the day. And across the northern hemisphere winter, nature is in frozen stasis. It's a familiar yet always — to me, at least — strange and ghostly-seeming pause. Listen carefully and you can hear its whispered message: the journey is beginning again.
It's a time to consider what has passed, and look to what is ahead. How can we make this year even better than the last?
This is the question I want to examine — as it pertains to New World Same Humans — in this note. That means first, and briefly, a review of 2022. And then, more important: what's coming this year?
This first instalment of the year, then, is more about our community than it is about the world out there, which is our usual subject. There's some thinking aloud going on here; an attempt on my part to make sense of where the newsletter has been and where it's going. But given the precious attention you spend on NWSH — and I'm so grateful you do — I hope it's valuable for you to ride along as I figure all this out. And, of course, all feedback and suggestions are welcome.
What's more, without drinking any self-help Kool-Aid, it seems to me that there's a lesson in the journey I went on with the newsletter last year. One about defeating perfectionism, being adaptable, and playing infinite games.
But we can get to that at the end. First, let's dive into a review of 2022.
What's past is prologue
Before we can think coherently about where to take NWSH in 2023, we need to understand what just happened.
This is where things get a little awkward.
Back in January 2022, as many of you will remember, I set sail towards a renewed vision for the newsletter. I wanted to double down on what makes NWSH unique; to accelerate the newsletter's journey towards itself.
Conceptually, that meant a project animated by three questions:
* What is the nature of technological modernity?
* What is the nature of a human being, and the human collective?
* What new forms of life are possible, and desirable?
And when it came to content, it meant the launch of a new schedule. While the flagship mid-week update would continue, the weekly note on Sunday was to be killed and replaced with monthly longform essays.
We'll come back to the conceptual part. As for the new content schedule — as many of you noticed, it didn't work quite as planned. What happened?
Essentially, my pandemic and post-pandemic realities collided. This newsletter was born at the start of Covid; the first year was produced inside the strange empty-yet-also-frenzied deadzone that was the 2020 lockdowns, and that produced a particular set of working practices around writing instalments and getting them out. In 2022, the world opened up again. That meant a return, for me, to a frenetic schedule of working with clients and speaking at events. Which was great in lots of ways. But it brought disruption to the way I worked on NWSH.
Meanwhile, the first essay I'd planned, The Worlds to Come, ballooned to something far beyond what I'd intended for the monthly essays. Having published just two instalments (embarrassing) of a projected five, it's clear this work is something closer to a short book than an essay. I'm excited to keep putting these ideas into the world. But as this piece expanded before my eyes, any remaining chance of sticking to the planned monthly essays format slipped away.
The mid-week instalments are what rescued all this. They are the engine of the newsletter and the product most people associate with NWSH. And they stayed strong, growing longer and deeper without me ever really intending that. After some of you requested it I started recording them as a podcast; thousands now listen rather than read. These instalments found their way into the inboxes of thousands of new (and cherished!) readers, including some influential people, and ensured that our community continued to grow. Overall — and despite the monthly essays misfire — it was a great year for NWSH. That's thanks to the mid-week update, and all of you who share it.
That's a two-minute summary of the last 12 months. The big question, then: what next?
Coming in 2023
My first thought is that the fundamental positioning I outlined last year is one I still stand by.
As loath as I am to quote myself, it's worth revisiting that briefly. Around one year ago to the day, I wrote this on the point of view that NWSH would bring to its mission to understand our shared future:
We live amid a white-hot technological revolution, a culture war, and a crisis of ecological collapse. Amid that, our systems of liberal democracy and technologically mediated consumerism are exhausted. We all know we must change course, yet we continue to march in the same old direction. In 2022, as Gramsci observed of his own society in the early 1930s, "the old is dying, but the new cannot be born". Except it is being born somewhere out there, on the fringes. I want us to travel to those places, literally and figuratively.
I'd still go along with all that.
So it's not the destination that needs to change; only the steps we're taking to get there. Over the Christmas break I sat with that challenge. Here's what I decided:
* The mid-week update remains the flagship instalment
* Longform essays will remain, but they'll be occasional rather than monthly
* Shorter notes will return; also on an occasional schedule, typically on a Sunday
When it comes to the mid-week update, the decision was automatic: if it ain't broke.
On longform essays: I still want space for the deeper thinking and exploration they allow. A monthly cadence didn't work out, but occasional essays can. The first mission here is to finish The Worlds to Come.
The return of shorter notes is the biggest change. I really miss writing the kinds of notes I used to send on a Sunday. And the newsletter needs a space for thoughts that are too long for a segment in the mid-week update, but too short and maybe too fuzzy to make an essay.
But it goes deeper than that. Part of what I love about newsletters — about the email newsletter as a new literary mode — is its intimacy. Sure, we're now amid a newsletter explosion, and I'm sending this to you via a platform created in Silicon Valley and funded by mega-VCs. But despite all that, there's still something going on here that echoes the medium's origins in the long emails from one friend to another that we used to send in the 90s. Last year, NWSH lost touch with that intimacy. This year I want to recover it.
Short notes will allow me to send more personal reflections and do more exploratory thinking. And they can mean new kinds of content, such as reflections on the books I'm reading. This could even spell the beginning of a NWSH book club, which is something people have asked for in the Slack group.
But there I go again, piling on more before we've even started.
Infinite games
There we have it; the roadmap for 2023.
Sure, 2022 didn't work quite as planned. But while I might have expected that to bother me massively, in truth it doesn't. Sitting with that over the break, I realised that this is a product of perspective. More particularly, of the perspective you necessarily take on something when you commit to it for life.
Writing this newsletter, and building this community, is something I'll do forever. Not, in the end, because of the outcomes it produces, but for the meaning and simple joy I find in thinking through ideas about our shared future and then sharing those ideas with others.
And given that, the monthly essays misfire seems only a tiny blip on a long journey.
The lesson here? From my POV, it's that when you embark on a project for the long haul, and when that project is an end in itself rather than only a means to some other end, you're liberated into a new and fruitful way of seeing. One that helps you defeat perfectionism, stay adaptable, and find meaning in the process rather than only the results.
When there's always tomorrow, and next year, and, I hope, next decade — when you're playing what has become known as an infinite game — you have the freedom to experiment, and mistakes don't matter that much. In fact, if you aren't making mistakes, that's probably a sign you're playing it too safe.
That's not a perspective many of us get to enjoy in our work, which is so often target and deadline driven. But it's a powerful one. So I recommend asking yourself: what infinite game are you playing in 2023?
Blast Off
The plan for this year is set. All that remains is to get to work.
And given the moment we're living in from a world-historical perspective — by turns weird, exhilarating, and scary — I couldn't be more excited about what is ahead for our community. Our mission to make sense of a changing world and its collision with human nature — new world, same humans — has never been more urgent.
I'll send the first New Week update next week. And expect the first short note in the coming days, too. One of which will launch a new project that I can't wait to tell you more about.
In the meantime, thanks for joining me on this adventure for another year; it's deeply appreciated. And I hope you're off to a great start on your journey through 2023, too.
Until next week, be well,
David.