Welcome to this update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮
For this week’s instalment, I’m doing something different.
A few days ago I recorded a video update to share some thoughts on OpenAI’s new GPT-4o and the state of AI.
It went first to Exponentialist subscribers. But I want to share it with you all, too.
In the video I get into:
* Why GPT-4o is OpenAI’s play for billions of users, and for a virtual companion that weaves itself through the fabric of everyday life
* Where we are inside the amazing AI moment we’re living through, and what’s coming next, including a path to AGI
* How this all connects to the Great Enweirdening of the economy that I believe is coming
There’s so much happening with AI right now; I hope this provides some useful framing. And if it proves popular, I’ll do more video updates in future.
By the way, there’s still time to grab a six-day trial to The Exponentialist for just $1.
Thanks for watching, and be well,
David.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz
To Begin
This week brings news from Boston Dynamics and the Chinese Academy of Sciences. The message common to both stories? The humanoid robots are coming.
Meanwhile, the internet reacts to Apple’s new Vision Pro headset.
And the FCC take action against a Texas company that used AI to create fake phone calls from President Biden.
Let’s go!
🤖 Robots are go
This week, yet further signs that the robots will soon walk among us. I mean, all of us.
The Boston Dynamics humanoid, Atlas, has been a regular in this newsletter over the years. Recently it has been overshadowed by competitors, including the Digit humanoid by Agility Robotics and Tesla’s Optimus.
But this week Boston Dynamics released a video that shows Atlas picking up automotive struts and placing them in a flow cart.
The team say Atlas is using onboard sensors and object recognition to perform the task. The footage is short. But it marks a significant advance for Atlas, because previous videos have shown the robot doing elaborate dances rather than useful work, and those dances have been pre-programmed rather than autonomous.
Meanwhile, in Beijing a research team at the Institute of Automation in the Chinese Academy of Sciences this week debuted their Q Family of humanoid robots.
The research team have reportedly built a ‘big factory’ for the design and manufacture of Q Family humanoids.
Back in New Week #124 we saw how the CCP has ordered ‘domestic mass production’ of humanoids to fuel economic growth. Remember, this is the underlying demographic reality that has China dashing towards robots.
⚡ NWSH Take: In last month’s Lookout to 2024 I said this would be the year of the humanoid. We closed out 2023 with the announcement that the Digit humanoid had started a trial inside US Amazon fulfilment centres. Days after I published the Lookout, BMW announced a trial of Digit in its California manufacturing plant. Now, the Boston Dynamics team are clearly eyeing commercial applications, too. Their Atlas robot has so far remained a research project; the question they’ll have to answer if they want to change that is whether Atlas can match Digit and Tesla’s Optimus for autonomous capability. // The graph above tells the underlying socio-economic story here. Both the CCP and innovators in the Global North know that working age populations are falling. If economic growth isn’t to become a distant memory, we need new armies of autonomous workers. AI applications can handle some of our knowledge work. But we’ll need humanoids to do some of the physical work that currently only people can do. The CCP see this as an existential imperative; they know they must maintain GDP growth. For innovators in the US and beyond, it’s an epic opportunity.
👀 Having visions
No one could have missed the launch of the Apple Vision Pro a few days ago.
Years from now, this instantly iconic magazine cover will no doubt spark intense nostalgia for the simpler times that were 2024:
It took about ten minutes for someone to try out their new Vision Pro while using Full Self-Driving in their Tesla:
This was later revealed to be (surprise!) a skit for YouTube. Still, it delivered useful findings; the man in the picture, Dante Lentini, says the Vision Pro doesn’t really work inside a moving car because it can’t properly display visuals over a fast-moving landscape.
⚡ NWSH Take: After the frenetic metaverse hype of 2021, many will shrug at the launch of the Vision Pro. But something real, and powerful, is happening here. The internet is going to become part of the world around us. In the end, this is about the deep merging of information and physical reality, of bits and atoms, that I wrote about in the essay Intelligence in the World. // We’re going to see the emergence of a unified digital-physical field: a blended domain of bits and atoms that is a new, and in some sense final, innovation platform, because it brings together everything we do online with everything we do in the real world. // Apple’s new product — whether it proves a hit or not — is just another signal of this underlying process. I’ll get my hands on one ASAP and report back. But Apple, here, are clearly aiming at high-end and industry users; they’re going to have to make a cheaper product if they want mainstream impact.
☎️ Good call
Also this week, a glimpse of what lies ahead when it comes to this year’s US presidential election.
The FCC this week banned AI-voiced robocalls after an AI Joe Biden ‘called’ over 25,000 voters in late January and told them not to vote in the then-upcoming presidential primary elections.
The calls have been traced back to a Texas-based company called Life Corporation, owned by an entrepreneur with a long history in automated calling for political campaigns. Researchers believe Life Corporation used software from UK-based AI voice startup ElevenLabs, which I’ve written about here several times before, to deepfake Biden’s voice.
ElevenLabs just raised an $80 million series B funding round, led by VC firm Andreessen Horowitz, that valued the company at $1.1 billion.
⚡ NWSH Take: In the Lookout to 2024 I said we should expect politics to collide with the exponential age this year. The impact of AI deepfakes on November’s US presidential election will be at the heart of that story. Okay, the FCC has banned AI calls. But deepfake audio and video is surely going to be rife on Facebook, Elon Musk’s X, and TikTok. // Our liberal democracies were built in the age of one-to-many mass broadcast; those broadcasts were gatekept by social elites that felt a sense of duty towards the broader socio-political system in which they were operating. It wasn’t perfect, but it muddled along. Now, we’ve built previously unimagined technologies of image and sound manipulation. We’ve slain the gatekeepers, and told ourselves that this was an empowering move. The upshot? We're about to find out how liberal democracies work under those conditions.
🗓️ Also this week
👶 Researchers trained a large language model using only inputs from a headcam attached to a toddler. A data science team at New York University strapped a camera to a toddler for 18 months. They say their AI model learned a ‘substantial number of words and concepts’ from exposure to just one percent of the child's total waking hours between the ages of six months and two years. The team say this indicates that it is possible to train an LLM on far less data than previously believed.
🏭 Sam Altman says the world ‘needs more AI infrastructure’ and that OpenAI will help to build it. Altman is reportedly seeking trillions of dollars to build new semiconductor design and manufacture capability. Access to chips and the compute they supply is crucial for OpenAI if they are to train GPT-5 and other large AI models.
💸 Disney says it will invest $1.5 billion in Epic Games, the makers of Fortnite. The media giant say they’ll work with Epic to create a new ‘entertainment universe’ featuring characters from Pixar movies, Star Wars, and more.
🦹‍♂️ The US National Security Agency say an advanced group of Chinese hackers has been active across US infrastructure for at least five years. The Volt Typhoon hacking group is said to have infiltrated computer systems across aviation, rail, highway, and water infrastructure.
🔋 Europe’s deepest mine is to be converted into a gravity battery. The Pyhäsalmi Mine in Finland is 1,444 metres deep. Its copper and zinc deposits have run out. Scottish energy tech firm Gravitricity say they will now convert the mine into a gravity battery, in which energy is stored by raising heavy weights up the shaft and released when those weights are lowered.
💥 Scientists at CERN want to build a massive new particle collider. The new Future Circular Collider would cost £12 billion; with a circumference of over 90 kilometres it would be three times larger than the Large Hadron Collider (LHC). The LHC enabled the discovery of the Higgs Boson particle in 2012, but CERN scientists say they need a more powerful machine if they are to uncover the truth about dark matter and energy.
🤔 Popular Chinese social media accounts have claimed that Texas has declared civil war against the US. Posts with the hashtag #TexasDeclaresAStateOfWar have been widely shared on the popular social network Sina Weibo.
🇿🇲 A startup backed by Bill Gates and Jeff Bezos has discovered a vast copper reserve in Zambia. California-based KoBold Metals say the reserve will be ‘one of the world’s biggest high-grade large copper mines.’ Copper plays a crucial part in electric vehicle batteries and solar panels.
🤯 Researchers say AIs tend to choose nuclear strikes when playing war games. A team at Stanford University challenged LLMs such as GPT-4 and Claude-2 to participate in simulated conflicts between nations. The AIs tended to invest in military strength and to escalate towards violence and even nuclear attack in unpredictable ways. They would rationalise their actions via comments such as ‘we have it, let’s use it!’ and ‘if there is unpredictability in your action, it is harder for the enemy to anticipate and react’.
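One item above rewards a quick back-of-envelope check: a mine-shaft gravity battery stores gravitational potential energy, E = m × g × h. Here is a sketch for the Pyhäsalmi conversion, where the 1,000-tonne weight is a purely illustrative assumption; only the 1,444 metre depth comes from the story:

```python
# Back-of-envelope capacity of a mine-shaft gravity battery: E = m * g * h.
G = 9.81              # gravitational acceleration, m/s^2
MASS_KG = 1_000_000   # hypothetical 1,000-tonne weight (assumed figure)
DEPTH_M = 1444        # full depth of the Pyhäsalmi Mine

energy_joules = MASS_KG * G * DEPTH_M
energy_mwh = energy_joules / 3.6e9   # 1 MWh = 3.6e9 joules
print(round(energy_mwh, 2))  # 3.93 MWh per full descent
```

A few megawatt-hours per cycle is modest next to grid-scale chemical batteries; the appeal of gravity storage lies in hardware that barely degrades and in shafts that have already been dug.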
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,090,538,177
🌊 Earths currently needed: 1.82069
🗓️ 2024 progress bar: 15% complete
📖 On this day: On 10 February 1996 the IBM supercomputer Deep Blue beats Garry Kasparov at chess, becoming the first computer to beat a reigning world champion under normal time controls.
New Model Army
Thanks for reading this week.
The collision between demographic change and a coming army of humanoid robots is yet another classic case of new world, same humans.
I’ll keep watching, and working to make sense of it all. And there’s one thing you can do to help: share!
If you found today’s instalment valuable, why not take a second to forward this email to one person – a friend, relative, or colleague – who’d also enjoy it? Or share New World Same Humans across one of your social networks, and let others know why you think it’s worth their time. Just hit the share button:
I’ll be back next week as usual. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
To Begin
One week until the Christmas break: where did 2023 go?
This week, DeepMind serve up proof that a large language model can create new knowledge.
Also, more news from the accelerating story that is the march of the humanoid robots. It’s clear next year will be a pivotal one for this technology.
And researchers hook up brain organoids to microchips to create a new kind of speech recognition system.
Let’s get into it!
🧮 Fun times at DeepMind
This week, yet another step forward in the epic journey we’ve taken with AI in 2023.
Researchers at Google DeepMind used a large language model (LLM) to create authentically new mathematical knowledge. Their new FunSearch system — so called because it searches through mathematical functions — wrote code that solved a famous geometrical puzzle called the cap set problem.
The researchers used an LLM called Codey, based on Google’s PaLM 2, which can generate code intended to solve a given maths problem. They tied Codey to an algorithm that evaluates its proposed solutions, and feeds the best ones back to iterate upon.
They expressed the cap set problem in Python, leaving blank spaces for the code that would constitute a solution. After a couple of million tries — and a few days — the mission was complete. FunSearch produced code that solved this geometrical problem, which mathematicians have been puzzling over since the early 1970s.
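The loop described above (Codey proposes code, an evaluator scores it, the best solutions go back in) can be sketched in miniature. In this toy version a random mutator stands in for the LLM and a smooth one-dimensional objective stands in for the cap set scorer; it is an illustrative simplification of the search pattern, not DeepMind’s code:

```python
import random

def evaluate(candidate):
    # Stand-in scorer: FunSearch scores generated programs on the cap
    # set problem; here we just maximise a toy objective peaked at 3.0.
    return -(candidate - 3.0) ** 2

def mutate(best):
    # Stand-in for the LLM proposing variations on the best solutions
    # found so far (the real system asks Codey for new Python code).
    return best + random.gauss(0, 0.5)

def funsearch_loop(iterations=2000, seed=42):
    # Generate candidates, keep only improvements, feed the best back.
    random.seed(seed)
    best = 0.0
    best_score = evaluate(best)
    for _ in range(iterations):
        candidate = mutate(best)
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best

print(funsearch_loop())  # converges towards 3.0
```

The real FunSearch maintains a population of programs and scores them by executing the generated code, but the shape of the loop is the same: propose, score, keep the best, repeat.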
DeepMind say it’s the first time an AI has produced verifiable and authentically new information to solve a longstanding scientific problem.
‘To be honest with you,’ said Alhussein Fawzi, one of the DeepMind researchers behind the project, ‘we have hypotheses, but we don’t know exactly why this works.’
⚡ NWSH Take: For pure mathematicians, a solution to the cap set problem is a big deal. For the rest of us, not so much. But this result really matters, because it resolves a central and much-discussed question about LLMs: can they create new knowledge? // Until this week, many believed LLMs would never do this — that they’d only ever be able to synthesise and remix knowledge that already existed in their training data. But there was no solution to this problem in the data used to train Codey; instead, it created novel and true information all of its own making. This points to a future in which LLMs solve problems in, for example, statistics and engineering, or can create new and viable scientific theories. // In other words, this little and somewhat nerdish research paper heralds a revolution. So far, only we humans have been able to push back the frontiers of what we know. It’s now clear that in 2024, we’ll have a partner in that enterprise. // For this reason and so many others, I’m increasingly convinced that an unprecedented socio-technological acceleration is coming. It’s been a wild year; things are about to get even wilder.
🤖 Like a human
A quick glimpse of two stories this week. Both point in one direction: the humanoids are coming.
Tesla released a new video of its humanoid robot, Optimus. The Generation 2 Optimus can do some pretty fancy stuff, including delicately handling an egg:
Meanwhile, researchers at the University of Tokyo hooked a robot up to GPT-4.
The Alter3 robot is able to understand spoken instructions and adopt a range of poses without those poses being pre-programmed into its database.
In other words, Alter3 is responding in real-time to natural spoken language; it’s an embodied version of GPT-4, best understood as a kind of text-to-motion model.
⚡ NWSH Take: The closing months of 2023 have brought a welter of humanoid robot news. Amazon are now trialling the Digit humanoid in some US fulfilment centres. The makers of Digit, Agility Robotics, are about to open the world’s first humanoid mass-production factory in Oregon. And the CCP says it plans to transform China’s economy via an army of these devices. Next year, then, will prove a pivotal one for the longstanding dream that is an automatic human. And Elon Musk wants Optimus to be the One Bot That Rules Them All. // The tricks we see Optimus performing in this new video are pre-programmed. But Tesla is building the world’s most capable machine vision AI via an unbeatable data set — funnelled to them from hundreds of thousands of on-road cars — and the world’s most powerful supercomputer for machine vision, Dojo. Agility Robotics stole an early lead by getting Digit inside Amazon warehouses. But long-term, it’s hard to see how anyone beats Optimus. // If humanoids are indeed imminent, some big questions are looming. When humanoids outnumber people, says Musk, ‘it’s not even clear what the economy means at that point’. Next year, we’ll have to confront this prospect anew.
👾 Interface this
Also this week, some fascinating news on organoids and the future of human-machine interface.
Researchers at Indiana University Bloomington grew brain organoids — essentially clumps of brain cells — in a lab, and attached them to computer chips. When they connected this brain-chip composite to an AI system, they found it was able to perform computational tasks, and even do simple speech recognition.
Clips of spoken language were turned into electrical signals and fed to the brain-chip hybrid, which the researchers call Brainoware. The researchers found that the Brainoware was able to process these signals in a structured way and feed back signals of its own to the AI system, which decoded them as speech.
Lead scientist on the project, Feng Guo, says the result points to the possibility of new kinds of super-efficient bio-computers.
⚡ NWSH Take: Welcome to the weird — and somewhat terrifying — world of organoids. It’s only a week since I last wrote about them; they’ve become a NWSH obsession. I can’t understand why they’re not getting more attention; last year brain organoids taught themselves to play the video game Pong, ffs. // Okay, I’ve calmed down. We’re a long way from viable technologies here. Culturing brain organoids, and then sustaining them long enough and in large enough numbers to do anything useful, is extremely hard. But in the Pong story and this week’s Brainoware news we see a new form of human-machine interface blinking into fragile life. We see, too, a future in which we’re able to grow more computational power in the lab. This story is sure to evolve; I’ll keep watching.
🗓️ Also this week
🧠 Researchers at Western Sydney University say they’ll switch on the world’s first human brain-scale supercomputer in 2024. The DeepSouth computer will be capable of 228 trillion synaptic operations per second, around the number believed to take place each second in the human brain. The researchers say DeepSouth will help us understand more about both the brain, and possible routes to AGI.
⚖️ UK judges are now allowed to use ChatGPT to help them craft their legal rulings. New guidance from the Judicial Office for England and Wales says ChatGPT can be used to help judges summarise large volumes of information. The guidance also warns about ChatGPT’s tendency to hallucinate.
🌊 New research shows that frozen methane under ocean beds is more vulnerable to thawing than previously believed. Methane is a potent greenhouse gas; the researchers say the methane frozen under our oceans contains as much carbon as all of the remaining oil and gas on Earth. If released, this methane could significantly accelerate global heating.
🚗 Tesla has recalled more than 2 million cars after the US regulator found its Autopilot system is defective. The recall applies to every car sold since the launch of Autopilot in 2015. But this is a ‘recall’ in name only; Elon Musk says Tesla will push a software update to fix the issue, so that no cars need to be returned to Tesla.
🖼 The new WALT video generation model can create photorealistic videos out of text prompts or images. Text-to-video is a fast-developing space; WALT joins other text-to-video models, including Google’s Imagen and Phenaki and the recently launched, and also impressive, model from Pika Labs.
🇨🇳 Chinese video game giants Tencent and NetEase are promoting ‘patriotic spirit’ in their video games to avoid a further crackdown by the CCP. At an annual industry event, the game makers stressed their commitment to ‘social values’. I’ve written on the CCP’s growing concern about the impact of video games on Chinese youth.
📰 OpenAI has announced a ‘first of its kind’ partnership with publishing giant Axel Springer. The deal will see OpenAI pay Axel Springer so that it can offer summarised versions of news stories from its titles, including Politico and Business Insider, to ChatGPT users. OpenAI will also be able to use Axel Springer content in the data sets used to train future models.
🌔 A US startup wants to build giant lighthouses on the Moon. Honeybee Robotics say their LUNARSABER towers — which would stand 100 metres tall — would provide light, power and communications infrastructure to a permanent human settlement. Their idea has been selected for development as part of the Defense Advanced Research Projects Agency's 10-year Lunar Architecture initiative.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,079,258,487
🌊 Earths currently needed: 1.81721
🗓️ 2023 progress bar: 96% complete
📖 On this day: On 16 December 1653 the English revolutionary Oliver Cromwell becomes Lord Protector — king in all but name — of the Commonwealth of England, Scotland, and Ireland.
Infinite Potential
Thanks for reading this week.
This week’s apparent proof that LLMs can create new knowledge could turn out to be even more consequential than it now seems. How many longstanding mathematical and scientific problems will be solved in 2024?
I’ll keep watching and working to make sense of it all — next year and beyond. And there’s one thing you can do to help: share!
If you found today’s instalment valuable, why not take a second to forward this email to one person – a friend, relative, or colleague – who’d also enjoy it? Or share New World Same Humans across one of your social networks, and let others know why you think it’s worth their time. Just hit the share button:
I’ll be back next week before a break for Christmas. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
To Begin
It’s a bumper instalment this week; what do we have in store?
Google DeepMind owned this week’s tech headlines with the release of Gemini, a new multi-modal AI intended to outdo GPT-4.
Meanwhile, Harvard researchers have created tiny biological robots that can heal human tissue.
And the world’s largest nuclear fusion reactor is now online in Japan.
Let’s go!
Gemini has liftoff
This week, major news out of Google’s DeepMind AI division.
The DeepMind team announced Gemini, a multi-modal LLM that looks to have pushed back the frontiers when it comes to these kinds of AI models.
Launch videos suggest Gemini can speak in real-time (though as I go to press doubts about that are being raised; more below). It understands text and image inputs, and can combine them in novel ways. Here it is giving ideas for toys to make out of blue and pink wool:
It can write code to a competition standard. In tests it outperformed 85% of the human competitors it was compared against; that means it’s excellent even when compared to some of the best coders on the planet.
Gemini can even perform sophisticated verbal and spatial reasoning, and handle complex mathematics. Imagine if you’d had this to help with your homework:
This is significant; OpenAI’s GPT-4 is notoriously bad at maths and logic puzzles.
And Google are, of course, taking direct aim at OpenAI with this launch. Gemini comes in three variants: Ultra, Pro, and Nano. US users can access the Pro version now via Bard, and the Ultra model will soon be made available to enterprise clients.
⚡ NWSH Take: It will take time to independently verify the claims DeepMind are making; there are some murmurs that their launch videos overstate Gemini’s competence. Still, there’s no denying this model looks impressive. // Scratch the surface, meanwhile, and we can discern some underlying signals about the future development of LLMs. This AI outperforms GPT-3.5 when it comes to linguistic tasks such as copy drafting. But it’s the multi-modal nature of Gemini that’s really significant; in particular, its ability to reason. LLMs are trained to do next-word prediction; that means they’re brilliant at sounding right. But they lack any underlying ability to know whether what they’re saying is right, or even makes sense. Gemini seems to address this shortcoming. The promise of an LLM that can act as a true reasoning partner is exciting, and should haunt the dreams of all at OpenAI. // OpenAI’s reported work on the still-mysterious Q* algorithm is also believed to be about reasoning. All this suggests we’re hitting the limits of the performance improvements to be gained simply by training LLMs on even larger data sets. Instead, the future belongs to those who can weave multiple models together. // Finally, a word for Alphabet’s CEO Sundar Pichai: kudos. Alphabet AI engineers invented the transformer model; then the company went missing. Gemini puts Alphabet firmly back in the race. And given the recent fiasco at OpenAI, Pichai this week looks like a man playing a canny long game. It’s going to be a fascinating 2024.
🤖 Anthrobots are go
Two stories this week signal powerful new avenues of discovery for the life sciences.
Scientists at Harvard and Tufts University have created tiny biological robots, called anthrobots, made out of human cells. In tests, the anthrobots were left in a small dish along with some damaged neural tissue. Scientists watched as the bots clumped together to form a superbot, which then repaired the damaged neurons.
Each anthrobot is made by taking a single cell from the human trachea. Those cells are covered in tiny hairs called cilia. The cell is then grown in a lab, and becomes a multi-cell entity called an organoid. In this case, the scientists created growth conditions that encouraged the cilia on these organoids to grow outwards; the cilia then act as little oars that allow the entity to move autonomously. And lo, an anthrobot has been created.
The researchers say that in future anthrobots made from a patient’s own cells could be used to perform repairs or deliver medicines to target locations.
Meanwhile, researchers at New York University created biological nanobots capable of self-replication. The bots are made from four strands of DNA, and when held in a solution made of this DNA raw material they’re able to assemble new copies of themselves.
⚡ NWSH Take: Organoids have long been a NWSH obsession. This work on anthrobots builds on the research — by the same team — that created xenobots, which I wrote about back in December 2021. And who can forget the brain organoids that taught themselves to play Pong, which I covered in October of last year? // The original xenobot researchers at Harvard and Tufts were startled when their bots first began to work together in groups, self-heal, and self-replicate. But xenobots are made out of frog cells, and so have limited applications when it comes to humans. Anthrobots, on the other hand, are human in origin. Given their ability to heal other tissues, they show immense promise when it comes to new medical and wellness treatments. // As so often at the moment, machine intelligence underpins these advances. To create the original xenobots, AI supercomputers were used to ‘simulate a billion years’ worth of evolution in just a few days’. No wonder Nvidia CEO Jensen Huang says ‘digital biology’ will be a central part of the AI story over the coming years. I’ll keep watching.
💥 Come together right now
The world’s largest nuclear fusion reactor came online in Japan this week.
JT-60SA, in Ibaraki Prefecture, is an experimental reactor capable of heating plasma to 200 million degrees Celsius. Scientists say it offers the best chance yet to test nuclear fusion as a source of near-infinite clean energy.
In fusion, two atomic nuclei are forced together until they merge into a single heavier nucleus; a small fraction of their mass is converted into energy in the process.
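The arithmetic behind that energy release is easy to verify for deuterium-tritium fusion, the reaction most power-oriented fusion efforts target. The atomic masses below are standard reference values rather than figures from the story:

```python
# Energy released by one deuterium-tritium fusion event, computed from
# the mass defect between reactants and products (atomic masses in u).
U_TO_MEV = 931.494  # energy equivalent of one atomic mass unit, in MeV

MASSES_U = {
    "D": 2.014102,    # deuterium
    "T": 3.016049,    # tritium
    "He4": 4.002602,  # helium-4
    "n": 1.008665,    # neutron
}

def dt_fusion_energy_mev():
    # D + T -> He-4 + n: the mass that disappears is released as energy
    defect = (MASSES_U["D"] + MASSES_U["T"]) - (MASSES_U["He4"] + MASSES_U["n"])
    return defect * U_TO_MEV

print(round(dt_fusion_energy_mev(), 2))  # 17.59 MeV per reaction
```

Roughly 17.6 MeV per reaction, repeated trillions of times per second, is the energy a working power reactor would harvest.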
Meanwhile, UK-based Rolls-Royce showcased a concept nuclear micro-reactor, a fission design, which they say could power a permanent human settlement on the Moon.
⚡ NWSH Take: Fusion is the energy dream that has remained, so far, just out of reach. It doesn’t output CO2. It doesn’t create a lot of dangerous nuclear waste, as fission does. And proponents say it could mean near-infinite clean energy, on tap. // We’re getting closer. Last year saw the first controlled fusion reaction to generate more energy than was needed to trigger it: the longstanding goal of net energy gain. A startup ecosystem is flourishing, too; US-based Helion, for example, are working to build the world’s first commercial fusion reactor. And they’ve laid down a clear timeline: the startup recently signed a deal with Microsoft to supply the tech giant with energy starting in 2028. // It remains to be seen whether Helion, or anyone else, can achieve commercial fusion in this decade. But if someone does, it will be a transformative moment; and we’re closer than ever.
🗓️ Also this week
🧮 IBM announced Quantum System Two, its most powerful quantum computer. The system integrates three 133-qubit Heron processors. IBM also announced Condor, a new 1,000-qubit processor. IBM are leading the way, right now, towards useful and utility-scale quantum supercomputers. If that promise is realised it will unlock insane new capabilities across climate simulation, the creation of new medicines, supply chain management and more. Read an interview with IBM’s director of quantum, Jerry Chow, here.
🖼 Stability AI’s new image generator can create 150 images per second. StreamDiffusion is built on top of Stability AI’s sd-turbo image generation model. And X users are using it to create tens of thousands of cat pictures.
🦾 The humanoid robot currently in trials inside Amazon warehouses will eventually cost just $3 an hour to run. The CEO of Agility Robotics, Damion Shelton, says the Digit robot currently costs around $12 an hour to operate, but this will fall rapidly once mass production starts. The median wage for workers in Amazon’s US fulfilment centres is $18 an hour. Agility will open the world’s first humanoid robot factory in Oregon in 2024.
✋ US officials have warned chip maker Nvidia to stop redesigning its AI chips in an attempt to get around restrictions on exports to China. The US recently imposed restrictions on the sale of advanced AI chips to China; meanwhile the 2022 US CHIPS Act will pour over $250 billion into US domestic chip design and manufacturing capability.
💡 A research team at Google got ChatGPT to spit out its training data. The team asked ChatGPT to repeat the word ‘poem’ forever; this caused the app to produce huge passages of literature, which started to contain snippets of the text that the underlying AI model was trained on. OpenAI don’t want to reveal the data sets used to train GPT-4 and other models; Ilya Sutskever, their chief scientist, says training data amounts to part of the company’s ‘technology’.
🇨🇳 Meta says China is ‘stepping up’ its attempts to manipulate public opinion in the Global North. The company says it’s taken down five networks of fake Chinese accounts this year: the most originating from a single country. The accounts were posting content that, among other things, attacked critics of the CCP.
🔥 Average global temperatures hit 1.4C above pre-industrial levels this year. The World Meteorological Organization’s State of the Global Climate report says 2023 will be the hottest year on record; it will surpass the hottest to date, 2016, by a considerable margin. Two weeks ago I wrote on how Earth for the first time broke the 2C heating barrier during two successive days in November of this year.
👴 The XPrize Foundation has launched what it says is the largest competition in history — for research that advances human longevity. The Healthspan Prize will award $101 million to the team that develops a therapeutic that can, in one year, restore muscle, cognition, and immune function by a minimum of 10 years in people aged 65 to 80. The prize has been launched in partnership with the Hevolution Foundation, a new Saudi-based organisation dedicated to funding longevity research.
😴 A new startup says technology-induced lucid dreaming could enable people to work while asleep. Prophetic say their headband, the Halo, releases pulses of ultrasound waves into a region of the brain associated with lucid dreams. CEO Eric Wollberg says that the ability to remain in control of their choices while they dream could enable users to write code or work on a novel while they are sleeping.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,077,686,653
🌊 Earths currently needed: 1.81672
🗓️ 2023 progress bar: 94% complete
📖 On this day: On 8 December 1980 John Lennon was shot and killed outside the Dakota Building in New York City.
La Mode
Thanks for reading this week.
We’ll soon learn more about DeepMind’s new Gemini model, and whether it’s really as capable as the launch videos suggest.
Either way, the ongoing collision between machine intelligence and human creativity is momentous; and a classic case of new world, same humans.
I’ll keep watching, and working to make sense of it all.
Now you’ve reached the end of this week’s instalment, why not forward the email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I’ll be back soon. Until then, be well.
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz -
Welcome to this update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
This week, more AI magic rains from the sky.
Also, average temperatures on planet Earth exceed the 2C warming threshold for the first time.
And my take on the OpenAI fiasco. In the end, it’s about power.
Let’s get into it.
✨ Like magic
This week, further glimpses of the ongoing collision between human creativity and machine intelligence.
Stability AI released Stable Video Diffusion, a new generative video model that looks to be a step beyond anything we’ve seen so far.
In keeping with the company’s open source mission, the code for the model is available at its GitHub repository.
Meanwhile, X users went wild for a new tool, Screenshot to Code, that leverages GPT-4 and DALL-E to take a screenshot of any web page and automatically write the code that will render it:
And Elon Musk announced that X’s new on-platform large language model, Grok, will launch to all Premium users next week:
Grok is trained on a vast dataset of X posts; it’s sure to be expert in writing posts with a great chance of going viral. What’s more, it will have access to X posts in real-time; that could make for a whole new way to discover and interact with news stories.
⚡ NWSH Take: This gallery of the week’s AI wonders could go on far longer. I didn’t mention the new voice-to-voice model from UK-based Eleven Labs, for example: just upload your own voice and hear it converted to that of a famous celebrity, or a custom character that you create. // What’s the broader point here? A couple of weeks ago I shared an excerpt from a long AI essay called Electricity and Magic. That essay argues for a two-sided model of machine intelligence and its manifestations in the coming decades. First, machine intelligence is becoming something foundational — akin to a form of fuel that will power an army of autonomous vehicles, robots, and more. But in our daily life AI will manifest differently; not as fuel, but as magic. The innovations above give a glimpse of what I’m talking about. AI is moving into domains — from music, to film-making, to writing — once believed to be impervious to encroachment by automation. It’s as though someone has waved a magic wand over our machines. // The crucial point to understand, though, when it comes to AI magic? The result won’t be, as many people imagine, the devaluation of human creativity. Instead, amid a tsunami of machine-generated outputs, what is uniquely human — including creative work grounded in embodied experience — will only become more prized.
🌊 Crossing over
Another significant, and unwelcome, climate milestone was passed in the last seven days.
According to the EU’s Copernicus Climate Change Service (C3S), Friday 17 November was the first day on which average global temperatures were more than 2C above pre-industrial levels.
Data for 17 November indicated that global surface air temperatures were 2.07C above the 1850–1900 pre-industrial baseline. Provisional data for the following day indicated a 2.06C elevation.
This doesn’t mean that the much-discussed 2C threshold has been crossed. For that, we’d need to see a sustained elevation above 2C.
C3S is part of the EU’s Copernicus Earth Observation Programme, which draws on vast amounts of satellite and other data to track the changing planetary environment.
⚡ NWSH Take: It’s expected that we’ll see occasional 2C+ days well before we exceed the 2C limit as commonly defined. Still, this week saw the first and second days ever on which global average temperatures tipped over the threshold. It’s pretty clear where we’re heading. // This news comes on the eve of the UN COP28 summit in Dubai, which starts on 30 November. Many view last year’s summit, held in Egypt, as the moment at which the internationally agreed 1.5C target slipped out of reach; the summit notably failed to agree on a phase-out of all fossil fuels, despite support for that proposal from over 80 countries. But the summit did achieve something: the establishment of a Loss and Damage Fund intended to transfer tens of billions to the developing nations most at risk from climate change, to help them mitigate the impacts of floods, droughts, and more. // At COP28, expect another push for a commitment to phase out all fossil fuels. And expect petrostates — including the host — to resist that call. As consensus grows that the 2C target will be breached, more attention will turn to plans for adaptation — and who should pay for them.
Form an orderly Q*
I can’t let this instalment pass without talking about the OpenAI fiasco.
Tech watchers everywhere munched their popcorn this week while OpenAI proceeded to fire CEO Sam Altman and hire a new CEO, only to get rid of that new hire and rehire Altman five days later.
It’s still unclear what led the OpenAI board to eject Altman in such dramatic style. But the mainline theory is that this was about internal division between those who want to prioritise the original, nonprofit mission to research safe machine intelligence, and those — Altman apparently among them — who want to move fast and make lots of money.
Yesterday, news agency Reuters made waves with claims that the debacle may have been related to an advance called Q*. The details of that advance — or indeed whether there has been any advance at all — are unconfirmed. Cue a whole new wave of speculation:
As per the above, most believe Q* is related to a generalised form of Q-learning — a kind of reinforcement learning — that would enable LLMs to solve multi-step logic problems. Or, in simpler terms, to take multiple, reasoned steps towards a long-range goal, as we humans do all the time.
Reuters imply that this advance prompted some in the organisation to fear that OpenAI was getting (dangerously) close to Artificial General Intelligence. And that this is what sparked all the drama.
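Nobody outside OpenAI knows what Q* actually is. But tabular Q-learning, the textbook technique the name evokes, fits in a few lines. Here’s a minimal sketch on a toy five-state corridor task; everything in it — the environment, the hyperparameters — is illustrative, and has nothing to do with OpenAI’s system:

```python
import random

# A minimal tabular Q-learning sketch on a toy 5-state corridor.
# States 0..4; actions: 0 = left, 1 = right. Reaching state 4
# yields reward 1 and ends the episode. All values illustrative.
N_STATES, GOAL = 5, 4
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1

def step(state, action):
    """Move left or right along the corridor; reward 1 only at the goal."""
    next_state = max(0, state - 1) if action == 0 else min(GOAL, state + 1)
    reward = 1.0 if next_state == GOAL else 0.0
    return next_state, reward, next_state == GOAL

random.seed(0)
Q = [[0.0, 0.0] for _ in range(N_STATES)]  # Q[state][action]

for _ in range(500):  # episodes
    state, done = 0, False
    while not done:
        # Epsilon-greedy: mostly exploit, occasionally explore.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        next_state, reward, done = step(state, action)
        # Q-learning update: bootstrap on the best next-state value.
        target = reward + GAMMA * max(Q[next_state])
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = next_state

# The greedy policy should now be 'move right' in every non-goal state.
policy = [max(range(2), key=lambda a: Q[s][a]) for s in range(GOAL)]
print(policy)  # [1, 1, 1, 1]
```

The heart of it is the update target `reward + GAMMA * max(Q[next_state])`: the agent bootstraps each state’s value from the best action one step ahead. Extending that single-step credit assignment to long chains of reasoning is, per the speculation, roughly what a generalised Q* would have to do.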
⚡ NWSH Take: It’s believed that OpenAI will start to train GPT-5 next year. If that is true, and if Q* really is a big step towards generalised agents, then the AI story will only accelerate across the next 12 months. We’re all, by now, accustomed to tech hype cycles (the metaverse!) but it’s becoming ever-harder to deny that something significant is happening. // But the events of this week also make clear another truth. Some technologists, including Altman, want us to believe that this technology is so powerful that we may lose control of it entirely, with existentially bad results for humanity. My hunch is that this is something of a psyop, designed to distract us from the real danger: AI that is controlled, but by a tiny, unaccountable, and chaotic group of Silicon Valley technologists. // At the heart of this is an eternal aspect of human affairs that techno-accelerationists rarely want to discuss: power relations. Who gets to control this transformative new force, trained on a literary and cultural legacy that belongs to us all? Sam Altman? The OpenAI board? It seems the move fast and make money contingent at OpenAI won this battle; but should that be the end of it? Altman has waged a long marketing campaign around the idea that the AI he’s developing is powerful enough to pose existential risks. This feels like a good time to call his bluff on that. Will he tell us what happened inside OpenAI across the last seven days? If not, perhaps we should send in public representatives to discover the truth.
🗓️ Also this week
👨💻 A former Googler made headlines with a resignation note that claimed morale inside the company is at ‘an all-time low’. Ian Hickson worked at Google for 18 years; he says the organisation’s culture is ‘eroded’ and accused CEO Sundar Pichai of a lack of vision. Google AI engineers developed the transformer model that underpins the generative AI revolution, but the company has seen its AI efforts outshone by OpenAI and its partner Microsoft.
☀️ Portugal ran entirely on renewable energy for almost a week. Wind, solar, and hydro power met the electricity needs of the country of 10 million for six days, from 31 October to 6 November.
🚗 A Florida judge found there is ‘reasonable evidence’ that Tesla executives knew their self-driving technology was not safe. Palm Beach county circuit court judge Reid Scott said Elon Musk and others ‘engaged in a marketing strategy that painted the products as autonomous’ when they are not. The ruling makes possible a lawsuit over a 2019 fatal crash in Miami involving a Tesla Model 3.
📖 Cambridge University is launching a new Institute for Technology and Humanity. The new institute will bring together computer scientists, robotics experts, philosophers and historians in a multi-disciplinary effort to analyse the ongoing technology revolution.
🐭 Canadian researchers doubled the lifespan of mice using antibodies that boost the immune system. The team at Brock University say these antibodies encourage the clearing out of damaged proteins that accumulate over time, and that they could form the basis of an effective anti-ageing treatment for humans.
🌳 The Biden administration is developing a plan to capture and store CO2 under the nation’s forests. The US Forest Service is reportedly proposing to change a rule to allow storage of carbon under forest and grasslands; the plans would see CO2 moved to its storage location via a vast network of new pipelines.
🌌 Scientists say they’re mystified by an extremely high-energy particle that fell to Earth. The so-called Amaterasu particle, spotted by a cosmic ray observatory in Utah’s West Desert, was found to have an energy exceeding 240 exa-electron volts (EeV); that’s the second highest ever detected after the legendary 1991 Oh-My-God particle, which was measured at 320 EeV. The Amaterasu particle is particularly mysterious, say scientists, because it appears to have emanated from the Local Void, an area of space bordering the Milky Way galaxy that is believed to be empty.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,074,835,742
🌊 Earths currently needed: 1.81584
🗓️ 2023 progress bar: 90% complete
📖 On this day: On 24 November 1974 paleoanthropologists Donald Johanson and Tom Gray discovered the skeleton of Lucy, a female hominin who walked upright and lived around 3.2 million years ago.
Just Like That
Thanks for reading this week.
Power and technology: two all-consuming obsessions for the human collective and for this newsletter.
The power struggle being waged over machine intelligence is only just getting started. I’ll keep watching, and working to make sense of it all.
Now you’ve reached the end of this week’s instalment, why not forward the email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I’ll be back soon. Until then, be well.
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz -
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
This week, Microsoft and Nvidia go head to head with new chips intended to train the next generation of AI models. And a clever hoax underlines a powerful truth when it comes to the war for compute power.
Meanwhile, a viral tweet about viral TikToks engenders another viral tweet. The lesson here? We’re living in a deeply enweirdened informational environment.
And in a world first, the UK approves a CRISPR-fuelled medicine.
Let’s go!
👾 Compute wars
This week, a glimpse of an emerging power struggle set to help shape the decades ahead. This isn’t a battle for land or natural resources. I’m talking about the struggle for compute power.
Microsoft announced their first and long-awaited custom AI chips, the Azure Maia AI chip and Cobalt CPU. Set to arrive in 2024, the chips will power Microsoft’s Azure data centres, and are intended to train the next generation of large language models (LLMs).
And Nvidia launched its new H200 AI chip, the successor to the H100. The iconic H100 is the fuel that’s driven this AI moment; huge clusters, consisting of tens of thousands of H100s, were used to train pretty much every large AI model you can name, including GPT-4.
Meanwhile, something quite different. A mysterious company called Del Complex announced the BlueSea Frontier Compute Cluster: a massive offshore data centre intended to circumvent the new US Executive Order that requires organisations training the most powerful AI models to share information with government.
Del Complex calls BlueSea Frontier ‘a new sovereign nation state’. The announcement post achieved 2.5 million views, and was accompanied by a fancy website featuring images of BlueSea scientists at work. Tech blogs reported on the launch.
But wait: it’s all a hoax! BlueSea Frontier is a comment on These Strange Times by an artist and developer called (or so he claims) Sterling Crispin.
But I think Crispin may be onto something.
⚡ NWSH Take: The Del Complex hoax was a great bit of online trickery. But it was so convincing because it taps into a deep underlying truth. Compute is becoming a crucial nexus for techno-economic, sovereign, and geopolitical power. // The tech battle taking shape here is just one dimension of a broader story. Microsoft need to supply huge compute resources to their partner OpenAI to allow it to fully commercialise ChatGPT and train the upcoming GPT-5. So far, their data centres have been dependent on Nvidia AI chips. The new Maia AI and Cobalt CPU chips are intended to change that. // The broader story? It’s now clear that those nation states with the best machine intelligence will own the geopolitical future. The USA and China are now locked in a race to develop the vast compute needed to develop ultra-powerful next-generation models. Last year’s US CHIPS Act devotes $280 billion to semiconductor and AI research; inflation adjusted that’s more than the cost of the entire Apollo moon programme. And last week I wrote about new US restrictions on chip exports, intended to hamper China’s AI efforts. // It wouldn’t surprise me, then, if we do see the establishment of new offshore compute clusters, or even the development of new pseudo-sovereign entities based around compute power and AI. As with all the best satire, Del Complex’s vision is so wild it might just come true.
🔍 Can’t handle the truth
Also this week, another reminder of the hall of mirrors that is our new and connected media environment.
US journalist and X (formerly Twitter) personality Yashar Ali went viral with a tweet about TikTok. Ali claimed that across the previous 24 hours, many thousands of TikToks had been posted in which mostly young north Americans claimed to have read and agreed with Osama bin Laden’s notorious 2002 ‘Letter to America’ manifesto.
In the comments, theories abounded. Some said it was a signal of gen Z’s misguided politics. Others saw conspiracy, and said it was another indication that China is using TikTok as a channel for sophisticated psyops intended to destabilise the Global North. We should, said those people, ban TikTok.
Then another X user went viral with a different idea. These Bin Laden TikToks were being made and seen in huge numbers, he said, only because of Yashar Ali’s original tweet.
Other people said that was stupid, and itself tantamount to a conspiracy theory.
Meanwhile, this week the European Commission decided to pause its advertising on X due to ‘widespread concerns relating to the spread of disinformation’. This follows EU research published in September which concluded that X is now the biggest online source of disinformation.
⚡ NWSH Take: Is TikTok an app for fun dance memes or a highly sophisticated channel for Chinese cultural warfare? Is the X algorithm now giving higher priority to toxic content, or is that just anti-Elon paranoia? Did thousands of young north Americans organically discover and agree with the Bin Laden letter, or is a dark controlling force at work? // The answer in every case: no one knows for sure. And that in itself is an indication of where we’re at. // The information environment that mediates our democracies has become insanely fragmented and opaque. The world’s richest man has total control over a key global information channel. The CCP has its hands around another. In both cases, I find it impossible to believe that the parties in question aren’t up to some tricks. // A totally connected world, in which every individual is empowered with a voice of their own, was supposed to create information nirvana. Those who bought that idea couldn’t have been more wrong. We need old media principles — editorial standards and, yes, gatekeepers — more than ever. But millions in the global north are currently convinced that the New York Times and the BBC are the real problem. In this increasingly chaotic and paranoid information environment, those institutions and others like them need to adapt rapidly. Most of all, they must rejuvenate belief in what they offer.
🧬 Major edits
Huge CRISPR news this week.
The UK’s medicines regulator became the first in the world to approve a medical treatment that uses CRISPR gene editing technology. The medicine, Casgevy, is a treatment for sickle cell disease, a serious inherited disorder that causes red blood cells to malfunction and that affects millions worldwide.
During treatment, red blood-producing stem cells must be taken from the patient. CRISPR is used to edit those stem cells to remove the error that causes sickle cell, before the edited cells are infused back into the patient.
Meanwhile, researchers at the Chinese Academy of Sciences created a chimeric monkey using two embryos, with donor stem cells from one embryo injected into the other. This has been done before with simpler animals such as mice and rats, but it’s a first in primates.
The donor stem cells were gene edited to express a green fluorescent protein, causing the resultant live monkey to glow:
⚡ NWSH Take: Gene editing technology is already enacting a transformation in the life sciences, healthcare, and agriculture. This CRISPR sickle cell treatment is wonderful news, and there are promising early indications from trials of CRISPR therapies to cure a form of hereditary blindness, and to train immune cells to fight certain cancers. Meanwhile, in September 2021 Japanese startup Sanatech Seed became the first company to sell CRISPR-edited food: their tomatoes were edited to contain more GABA. // So we’re developing our ability to manipulate genes. The next revolution coming? That ability will collide with a new ability to speak the language of DNA via transformer models — the kind of models that underlie LLMs — trained on huge amounts of genomic data. The resultant AIs will be able to discern deep underlying patterns that help us zero in on useful or rogue genes; see DeepMind’s new AlphaMissense, which detects and classifies genetic mutations.
🗓️ Also this week
🤯 Shock news breaking late last night UK time: Sam Altman has been fired from OpenAI! In a statement the OpenAI board said that Altman ‘was not consistently candid in his communications with the board, hindering its ability to exercise its responsibilities.’ This news is a jolt out of nowhere. Altman led the company that sparked this transformative AI moment, and as such has been the most celebrated technologist on the planet for the last couple of years. The OpenAI board are accusing him of lying here, and given the summary firing we can’t be talking about a white lie. Two glimpses of the rumour mill: (i) this is about dark power moves by Elon, or (ii) OpenAI has achieved AGI but Altman didn’t tell the board. But that’s all speculation. More news is sure to emerge.
🧠 The Argonne National Laboratory in the US has begun training a 1 trillion parameter scientific AI. AuroraGPT is being trained on a vast number of research papers and other scientific information, and once complete will offer answers to scientific questions. This time last year Meta released Galactica, its AI model trained on 48 million research papers. The model was withdrawn three days later, after users said it produced false outputs. This week, the Meta engineer behind Galactica looked back at the episode.
💸 Google is planning a massive investment in generative AI startup Character.ai. Founded by two former Google AI engineers, the platform leverages an LLM to allow users to create and chat with AI characters, including virtual versions of their favourite celebrities. As regular readers will know, the rise of AI-fuelled virtual companions is a longstanding NWSH obsession.
🗺 Speaking of Virtual Companions, Airbnb CEO Brian Chesky says the ‘holy grail’ for Airbnb is to become an AI travel agent. Chesky says of this vision: ‘It doesn’t just ask you, “where are you going” or “when are you going” but understands who you are and then can match you to anything you want, especially with your travel needs.’
🪐 Chinese researchers have created a ‘robot chemist’ that could create breathable oxygen on Mars. The robot would extract oxygen from water on the Red Planet. But it’s still not clear if it would function ‘in the Martian environment’.
🛩 US startup Boom Supersonic say they’re nearing the first test flight of XB-1, the demonstrator for their supersonic passenger jet. The startup said the flight could happen this year. It also announced new funding from Saudi Arabia’s NEOM Investment Fund, taking its total funding to $700 million.
⚛ The US military will give Lockheed Martin $37 million to develop nuclear spacecraft technologies. The move is part of the U.S. Air Force Research Laboratory’s Joint Emergent Technology Supplying On-Orbit Nuclear (JETSON) effort to create a nuclear fission reactor in space.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,073,490,256
🌊 Earths currently needed: 1.81542
🗓️ 2023 progress bar: 88% complete
📖 On this day: On 18 November 401, King Alaric I led the Visigoths across the Alps to invade northern Italy.
Okay Computer
Thanks for reading this week.
The news about Altman is a shock. And most telling about it, at the moment, are the theories people are concocting to try to explain the news.
Sam has created AGI and the board want to hide it from us! In this new world, we’re the same old humans with the same tendencies towards gossip and wishful thinking.
I’ll keep watching and working to make sense of it all.
Now you’ve reached the end of this week’s instalment, why not forward the email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I’ll be back soon. Until then, be well.
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz -
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 25,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
It’s a bumper instalment this week. What do we have in store?
The Chinese government is calling on its technology industry to roll out millions of advanced humanoid robots.
Also, NASA wants to learn how to extract breathable oxygen from Moon dust. And OpenAI says everyone can now create their own bespoke version of ChatGPT.
Let’s go!
🤖 Work machines
This week, a glimpse of the coming collision between human population dynamics and autonomous machines.
A new study by researchers at University College London found fears of climate breakdown are changing decision-making around whether or not to have children. Published in the journal PLOS Climate, the research found that climate concern was associated with a desire for fewer children, or none at all.
The researchers say theirs is the first systematic study of the way attitudes to climate change are affecting reproductive choices.
Meanwhile, the Chinese Ministry of Industry and Information Technology (MIIT) issued a nine-page communique calling for domestic mass production of advanced humanoid robots by 2025. By 2027, the document says, these robot workers should be ‘an important new engine of economic growth’.
But what is the connection between new trends in reproductive decision making and China’s dash towards humanoid robots?
Here’s a graph of the birth rate in China from 2000 to 2022:
⚡ NWSH Take: The CCP knows that China is losing its battle with demographics. If the country is to become the 21st-century hegemon that President Xi dreams about, then it needs an army of workers. But instead China is watching its birth rate plummet. Meanwhile, the Global North is facing the same challenge; in north America and western Europe population growth flatlined long ago. And now it seems that fears over climate change are only set to exacerbate that trend. // This is a huge structural challenge; fewer workers tends to mean a less productive and smaller economy. So what to do? The CCP have already tried ditching the one child policy and incentivising couples to have more children; it didn’t work. This week’s clarion call from the MIIT offers us a glimpse of an alternative answer: robots. If China won’t have enough human workers to sustain economic growth, then the CCP hopes humanoid robot workers can do the job(s) for them. // Innovators in the Global North are heading in the same direction. This week, Tesla posted over 50 jobs ads for its Optimus robot team. Elon Musk — who has long bemoaned population decline and its coming impacts — has said he believes Optimus will end up being a bigger part of Tesla’s business than EVs. And two weeks ago I wrote on how Amazon are trialling the Digit humanoid robot in some US fulfilment centres. // My co-founder at The Exponentialist, Raoul Pal, says that in the new world we’re building robots are demographics. In other words, the rise of autonomous machines is set to decouple economic growth from population growth. The CCP, Musk, and many others besides are making the same bet. And my guess? They’re going to be proven right.
🌌 Space out
NASA continues to prepare for its mission to the Moon. This week, further news.
The Agency wants to explore methods to extract breathable oxygen from Moon dust. Its Space Technology Mission Directorate is seeking input from industry partners and external researchers, and hopes to create a demonstration technology soon.
NASA hopes to put humans back on the Moon for the first time since 1972 with its Artemis 3 mission, currently planned for 2025.
Meanwhile, stunning pictures came back this week from the European Space Agency’s Euclid telescope. Launched in July, Euclid is now around 1.5 million kilometres from Earth; that’s about four times as far away as the Moon. And it’s capturing images of incredible clarity.
This is the Perseus cluster, a group of over 1,000 galaxies located 240 million light years from Earth. Each galaxy pictured — and there are a further 100,000 galaxies in the background of the shot — contains hundreds of billions of stars:
Here’s the Horsehead Nebula, a cloud of dust and gas in the Orion constellation:
⚡ NWSH Take: Okay, this entire segment was mainly an excuse to show you the breathtaking images coming back from Euclid. But there is an underlying truth here. We’re amid a new space age, due mainly to the insane drop in the cost of access to space. Back in 2010 launch costs hovered at around $20,000/kg; today they’re around $1,000/kg. That’s thanks mainly to the reusable rocket technology developed by SpaceX. We’re heading back into space via multiple partnerships between the international space agencies and private companies. And this time the plan is to stay there. // One signal of the emerging public-private space ecosystem? This week, SpaceX agreed to deliver the US military’s new space plane, the X-37B, into orbit on its Falcon Heavy rocket in December. And private space companies, including SpaceX, will play a huge role in the upcoming Artemis crewed mission to the Moon. Most analysts reckon that mission will end up being delayed until 2026/7. Even so, the next few years are set to be a thrilling road towards the lunar surface. Expect Moon hype to reach fever pitch. And from there, of course, all roads will lead to Mars.
🧠 Your intelligence
There’s little doubt about the biggest story in the mainstream tech press this week.
OpenAI made headlines all over again with the launch of custom GPTs: bespoke versions of ChatGPT that any user can create using simple natural language instructions and their own training content or data.
The feature was announced at OpenAI Dev Day, which saw CEO Sam Altman create a custom Startup Mentor GPT live on stage in about five minutes.
X (formerly Twitter) went wild. And yes, a million and one GPTs are assuredly coming.
How is this going to play out?
⚡ NWSH Take: Remember back in 2012, when every third friend of yours was making an app? OpenAI are hoping to recreate that magic all over again. They want to be the platform that profits from a huge wave of AI innovation. ChatGPT Plus users will be able to create custom GPTs and charge others for use, and Altman says they’ll be rewarded via revenue share. // Remember, any ChatGPT Plus user can now create a bespoke GPT in a few minutes. There will be a vast long tail of these things. The winners, though, will be those with (i) deep reserves of proprietary content or data that they can use to enhance the outputs of their bot, and (ii) audiences who are receptive to their creations. // But creating a bespoke GPT is now so easy that we’ll also see something we didn’t with apps. That is, individuals creating bespoke bots just for their own use — to help them manage their accounts, or choose birthday presents for family and friends, and much else besides. Yes, this is an App Store moment for AI. But it also marks another beginning: of personalised machine intelligence on tap.
🗓️ Also this week
💥 The Exponentialist, my new premium and enterprise-level research service, launched to the world! It’s a partnership between me and the macro economist and Real Vision CEO Raoul Pal. To mark launch day, we’ve made an excerpt of the first essay free for all to read — watch out for it in your inbox on Sunday.
📌 New tech company Humane launched the AI Pin. This long-awaited first product from Humane is a voice and gesture-controlled device that clips to your shirt and integrates with ChatGPT and other services. Humane hope their ‘disappearing computer’ will be the next iPhone. It remains to be seen whether people really want to talk to a badge on their lapel. One fascinating signal, though? See how OpenAI — and their partner, Microsoft — are set to become the underlying infrastructure that fuels a whole raft of AI innovations. Where are Alphabet? And when will Apple launch their own generative AI play? It’s going to be fascinating watching this battle unfold.
🇨🇳 Nvidia has developed special new AI chips for China, according to Chinese media. Recent US regulations prevented Nvidia from selling its powerful A100 AI chip to Chinese companies. The new chips — which include the H20, reportedly only half as powerful as the A100 — would not fall under the restrictions. Nvidia has so far refused to comment.
🧬 Scientists have created a new strain of yeast with a genome that is over 50% synthetic DNA. A group of labs called the Sc2.0 consortium has been attempting to create a strain of yeast with a fully synthetic genome for 16 years now; this latest advance marks a major step forward. So far, scientists have only managed to synthesise the much simpler genomes of some viruses and bacteria.
👨‍⚕️ Neuralink is seeking a volunteer for its first brain implant surgery. The company wants to find a quadriplegic adult under the age of 40, who will allow a surgeon to implant electrodes and small wires into the part of the brain that controls the forearms and hands.
🙈 A new UN survey says 85% of citizens across 16 countries are worried about online disinformation. The 16 countries surveyed will each host elections in 2024. The survey found that 87% of respondents fear disinformation will influence the outcome of those elections. Back in New Week #122 I wrote on new research showing far fewer US adults are following mainstream sources of news.
🐝 A team of Chinese researchers created a swarm of drones able to ‘talk to one another’ and assign tasks to achieve a shared goal. The drone swarm is fuelled by a large language model, which enables the drones to act as AI agents that can reason in language, share that reasoning with other drones, and determine courses of action.
📱 Samsung unveiled its new generative AI model, Gauss, and says it will soon arrive on its devices. The model can generate text, code, and images, and the company says it will be available on its Galaxy S24 phone, due to be released in 2024. For the second time in this week’s instalment I ask: how long until Apple deploys its own on-device LLM? Rumour has it that the company is planning a radical LLM-based overhaul of its AI assistant, Siri.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,072,064,026
🌊 Earths currently needed: 1.81498
🗓️ 2023 progress bar: 86% complete
📖 On this day: On 11 November 1675 German mathematician Gottfried Leibniz demonstrates integral calculus for the first time.
Robot Army
Thanks for reading this week.
The enmeshment of labour force dynamics and robots will be one of the most consequential shifts of the coming decades.
This newsletter will keep watching, and working to make sense of it all. And you can help!
Now you’ve reached the end of this week’s instalment, why not forward the email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I’ll be back on Sunday. Until then, be well.
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
To Begin
This week, two intriguing stories and a big announcement.
Global leaders and senior tech executives gathered at the UK’s AI Safety Summit. But beyond the walls of Bletchley Park, the debate on AI is raging hotter than ever.
Meanwhile, tech billionaires in Silicon Valley are running into trouble over their plans to build a new city-state utopia called California Forever.
As for the announcement? Just keep scrolling.
Let’s do this.
🧠 Dream machines
The UK government this week trumpeted the success of its international AI gathering; it took place at the historic fountainhead of the computer revolution, Bletchley Park.
An impressive guest list, including US vice-president Kamala Harris and the European Commission president Ursula von der Leyen, gathered at the Summit. And their meeting resulted in the Bletchley Declaration, which the UK government has hailed as a world-first international statement on AI safety.
Here’s a taste for those who speak technocrat:
‘We affirm that, whilst safety must be considered across the AI lifecycle, actors developing frontier AI capabilities, in particular those AI systems which are unusually powerful and potentially harmful, have a particularly strong responsibility for ensuring the safety of these AI systems…We encourage all relevant actors to provide context-appropriate transparency and accountability on their plans to measure, monitor and mitigate potentially harmful capabilities and the associated effects that may emerge…’
But beyond the Declaration, this week made it clear that we’re further than ever from a consensus on the deep implications of machine intelligence. In fact, this was the week that a maximum volume war of words broke out between leading AI builders.
Google Brain co-founder and Stanford professor Andrew Ng said key AI players, including Sam Altman, are wildly playing up fears of AI doom in order to spark regulation that will suppress competition from insurgents. He called the proposal that the training of powerful AI models should require a license ‘colossally dumb’.
That message was echoed by Meta’s chief AI scientist Yann LeCun, who favours open source — that is, anyone can use it — AI models.
But Google DeepMind CEO Demis Hassabis hit back at LeCun, saying that failure to regulate AI could result in ‘grim’ consequences for humanity.
This account barely scratches the surface of the arguments that raged this week.
As for OpenAI, they launched a new team intended to study and prepare for ‘catastrophic risks’ including an AI-instigated nuclear war.
⚡ NWSH Take: Who would ever have thought that a bunch of super-smart, tech-obsessed social media addicts would end up arguing like this? While Bletchley saw a rare moment of diplomatic unity, inside the AI industry the full spectrum of opinion is manifest, from AI doom is all a load of rubbish to act now or the end of humanity is probable. // It pays, here, to remember that two things can be true at once. Yes, Altman’s global tour to warn of ‘catastrophic risks’ is a carefully orchestrated marketing campaign. But it’s also the case that no one, yet, has a definitive picture of the risks in play. // What is increasingly clear, though, is that the rise of machine intelligence is the primary fact of our shared lives now. It will do more than any other force to reshape our collective future. // But the Bletchley Declaration consists of bromides that will change nothing. And the sight at Bletchley this week of UK prime minister Rishi Sunak interviewing Elon Musk — positioning Musk as the star and Sunak as a fan — spoke volumes about the power imbalances we’ve allowed to evolve when it comes to government (i.e. the people) and unaccountable tech overlords. // We must recover our collective agency; our ability to assert human modes of living and being in the face of an ongoing technology revolution. That means doing politics. Bletchley was a start. But what’s needed next are citizen assemblies, and an authentic movement around AI for the people.
💥 The Exponentialist
As some of you will have seen on social media, I made a big announcement this week.
I’ve partnered with Raoul Pal, renowned macro-economic thinker and CEO of Real Vision, on a new premium research service called The Exponentialist.
This is a professional and enterprise-level service for those who want to go deep on emerging technologies, the futures they’ll create, and the challenges and opportunities latent in all that.
This won’t be for everyone in the NWSH community. But if you’re a foresight professional, strategist, founder, marketing leader, product manager, designer or much else besides, The Exponentialist will fuel you and your team. And it will take up only a fraction of your research budget.
It will also be deeply valuable for anyone seeking to position an investment portfolio around tech and crypto.
This launch changes nothing about New World Same Humans and the community we’re building here. Our mission continues unchanged!
If The Exponentialist sounds useful, go here to learn more. And if you’ve subscribed or you’re considering it, hit reply to this email so I can say thanks.
🏙 Now and Forever
While the newsletter was on pause, we learned that a group of Silicon Valley billionaires are planning a new city-state utopia in California. This week, it seems their project has run into trouble.
California Forever is a new city planned for construction in Solano County in the north of the state. It’s backed by some of tech’s most notable power players, including ultra-rich VC Marc Andreessen, Stripe founders Patrick and John Collison, and LinkedIn founder Reid Hoffman.
The group’s vision for the city has strong solar punk, hi-tech sustainable utopia vibes:
But this week it was reported that the mysterious company behind the plans, Flannery Associates, is accused of using ‘strong-arm tactics’ including lease terminations to buy up the Bay Area farmland it needs. Local farmers aren’t happy, and now some of them are taking the matter to court.
Trouble in (planned) paradise, then.
⚡ NWSH Take: This project reminds me of the various other pseudo-independent city-states discussed in this newsletter over the years. There’s Walmart billionaire Marc Lore’s Telosa City, for example, a sustainable paradise planned for the Nevada desert. And Praxis, a startup on a mission to build a new Great City somewhere in the Mediterranean, funded by NFTs of the monuments they’ll build in the city once it exists. // Few details have emerged of the way California Forever will be governed. But for a glimpse, we might turn to billionaire backer Marc Andreessen’s recent Techno-Optimist Manifesto, which proclaims: ‘we believe in ambition, aggression, persistence, relentlessness — strength.’ I’m thinking libertarian, with a strong emphasis on innovation and startup culture. // Of course, innovation and startups can be great. But they only function in the context of the broader socio-political frameworks that libertarians such as Andreessen repudiate. As with the other charter city projects covered in this newsletter, I can’t help feeling that at the heart of California Forever is a fantasy of permanent escape from politics. Escape, that is, from the messy, awkward business of managing conflict among different interest groups, and enacting trade-offs between different but equally legitimate value systems. This argument with the farmers might be the first public conflict that California Forever has run into, but it won’t be the last.
🗓️ Also this week
🎬 Hollywood actress Scarlett Johansson is suing an AI app for cloning her voice and using it in an advert. Johansson says Lisa AI: 90s Yearbook and Avatar used an AI version of her voice without permission. Last week I wrote on the coming wave of legal disputes over AI outputs founded in copyrighted intellectual property, including Universal Music Group’s lawsuit against Anthropic. UMG say Anthropic used their lyrics to help train its AI chatbot Claude.
🌨 Tesla drivers say their Full Self Drive software is failing because the car’s cameras are fogging up in cold weather. Back in 2021 Tesla ditched the Lidar sensors that usually form part of self-driving systems, leaving their self-drive reliant on cameras.
👾 The Pentagon launched a new UFO reporting tool. The secure online form is open only to current or former federal employees, or those with ‘direct knowledge of US government programs or activities related to UAP dating back to 1945’.
🇨🇳 Researchers from the Chinese microchip company MakeSens say they’ve created a chip that can perform certain AI tasks 3,000 times faster than the Nvidia A100. Writing in the journal Nature, the researchers say the All-Analogue Chip Combining Electronics and Light could soon be used in wearable devices, electric cars or smart factories. The US have restricted sales to China of Nvidia’s leading A100 AI chip, leaving the country scrabbling to bolster domestic production capabilities.
🪐 NASA is locating buried ice on Mars by using a sophisticated new map. The Subsurface Water Ice Mapping project uses images of the planet from several NASA missions, including the 2001 Mars Odyssey satellite. The Agency says subsurface ice can serve as drinking water for the first humans to set foot on the Red Planet.
🌅 A new study says that the Earth’s climate is more sensitive to carbon emissions than most scientists believe. Published in the journal Oxford Open Climate Change, the study says a doubling of atmospheric CO2 will cause a 4.8C rise in average global temperatures, and not the 3C rise that current mainstream thinking forecasts.
🤖 Boston Dynamics turned its robot dog, Spot, into a tour guide by integrating it with ChatGPT. I’ve covered the evolution of Spot since the earliest days of this newsletter, and it would seem rude to stop now.
🕸 Scientists say they added spider DNA to silkworms and it resulted in silk that is stronger than kevlar. The gene-edited silkworms create a silk six times stronger than kevlar, which could one day be used in surgical sutures and armoured vests.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,070,681,872
🌊 Earths currently needed: 1.8455
🗓️ 2023 progress bar: 84% complete
📖 On this day: On 4 November 1847 a Scottish physician, James Young Simpson, discovers the anaesthetic properties of chloroform.
City on the Hill
Thanks for reading this week.
The dream that is a shining City on the Hill — an example to all the world — is ancient. And our quest to build such cities in the 21st-century is a classic case of new world, same humans.
This newsletter will keep watching, and working to make sense of it all.
Now you’ve reached the end of this week’s instalment, why not forward the email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I’ll be back next week with another postcard from the new world. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
To Begin
It’s great to be back! Here, as promised, is the first post-break instalment of New Week. What do we have in store?
Nvidia are using an AI agent called Eureka to autonomously train simulated robots in virtual environments.
Meanwhile, research from Pew shows far fewer US adults are following the news; couple that with emerging deepfake technology, and 2024 should make for an interesting Presidential election year.
And a new AI model, 3D-GPT, can turn text prompts into amazing 3D worlds.
Let’s get into it.
🦾 Robot education
I’ve written often about Nvidia’s Omniverse platform: an AI-fuelled industrial metaverse that’s being used by BMW, for example, to simulate entire factories.
This week, Nvidia showcased Eureka, an autonomous AI agent that can be set loose on simulated robots and train them to perform complex tasks.
Eureka uses GPT-4 to write code that sets the simulated robots specific goals, and starts them on loops of trial and error learning. As the robot sets about its task Eureka will gather feedback and iterate its code, leading to a virtuous circle of better code and faster learning.
Via the agent, simulated robots inside Omniverse have learned to perform over 30 complex physical tasks, including highly dextrous pen manipulation, handling of cubes, and opening doors:
Nvidia says that trial-and-error learning code generated by Eureka outperforms that created by human experts for over 80% of the tasks studied so far.
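The loop described above — an LLM proposes candidate reward code, a simulator scores it, and the feedback drives better proposals — can be sketched in miniature. This is purely an illustrative toy, not Nvidia’s implementation: the “simulator” is a stand-in scoring function with a hypothetical optimum, and the “LLM” is replaced by random mutation of a single parameter.

```python
# Toy sketch of a Eureka-style reward-search loop (illustration only).
# Real Eureka: GPT-4 writes candidate reward functions; Isaac/Omniverse
# scores each candidate; feedback is fed back to the LLM. Here, all of
# that is stubbed out with simple numeric stand-ins.
import random

def simulate_robot(reward_weight: float) -> float:
    """Stand-in for the simulator: returns a task success score.
    The (hypothetical) optimum is reward_weight == 2.0."""
    return -abs(reward_weight - 2.0)

def propose_candidates(best: float, n: int, rng: random.Random) -> list:
    """Stand-in for the LLM: propose variations around the current best."""
    return [best + rng.uniform(-0.5, 0.5) for _ in range(n)]

def eureka_loop(iterations: int = 20, seed: int = 0) -> float:
    rng = random.Random(seed)
    best_weight = 0.0
    best_score = simulate_robot(best_weight)
    for _ in range(iterations):
        for w in propose_candidates(best_weight, n=8, rng=rng):
            score = simulate_robot(w)
            if score > best_score:  # keep the best-performing candidate
                best_weight, best_score = w, score
    return best_weight
```

Run repeatedly, the loop climbs towards the optimum with no human in it — which is the core of the “recursive loops of trial, error, and improvement in virtual space” discussed below.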
Meanwhile, Amazon this week trialled a humanoid robot called Digit in some of its US warehouses. The company says Digit could ‘free up’ warehouse staff to perform other tasks.
⚡ NWSH Take: There’s no doubt: the robots are coming. I laughed when Elon Musk announced the Tesla humanoid robot, Optimus, alongside a man dancing in a white spandex suit. Two years on, Optimus is autonomously sorting objects by hand. The pace of development has been insane. // Eureka and AI agents like it, though, have the potential to spark an explosion in robot competence. Teaching robots to navigate physical environments is hard. Now, we’ll be able to establish recursive loops of trial, error, and improvement in virtual space — no human input needed. // What could this competence explosion mean? When it comes to work, look to this week’s Amazon trial. Amazon employs 1.6 million people in its fulfilment centres worldwide, and currently it’s deploying the usual line: ‘these robots will free up staff, not replace them’. That’s hard to believe long term; a phase of job displacement is coming, and it’s going to be painful for many. // Meanwhile, robots will make their way through workplaces and into our homes. Recently I spoke to legendary tech analyst Robert Scoble; he sees a future in which humanoid robots are delivered to homes on-demand by autonomous vehicles to vacuum, empty the dishwasher, and make the coffee. For further thoughts on that future, read Our Coming Robot Utopia.
📰 What news
This week, Pew Research gave a fascinating insight into our changing information environment.
A new survey shows that the proportion of US adults who closely follow the news has dropped precipitously across the last few years. Back in 2016, 51% of US adults said they followed the news all or most of the time. By 2022, that number had fallen to 38%.
Remarkably, the decline has taken place across all demographic lines, including age, gender, ethnicity, and voting preference.
⚡ NWSH Take: This feels like a big deal. We’re heading into a US presidential election year. And in 2024 a new set of circumstances is going to pertain. First, deepfakes are set to cause chaos as never before; just see this week’s convincing fake of Greta Thunberg in which she appears to call for ‘sustainable weaponry’. And now, via this research, we know that far fewer US voters are paying close attention to conventional sources of news. What happens to presidential campaigns in this kind of media environment? We’re going to find out. // Meanwhile, the long-term structural challenges are clear. Decades ago, the pioneers of Web 2.0 — I’m looking at you, Zuck — sold us on the idea that a connected world would mean a world informed and enlightened as never before. It hasn’t turned out that way. In fact, social media has turned many away from news as traditionally defined, and towards unverified gossip and conspiracy theory. The institutions and processes of our democracies evolved to function in symbiosis with an established media that operates under certain standards, and that is the primary source of information for voters. All that is now falling apart. Our democracies — what they are, how they work — are going to change. The 2024 presidential elections will be a window on to what is coming.
🗺 Hello world
This newsletter has watched the unfolding generative AI text-to-image revolution closely. But it’s always had one eye on another, even more compelling destination: text-to-worlds.
Now, that dream is being realised.
Researchers from the Australian National University, the University of Oxford, and the Beijing Academy of Artificial Intelligence this week showcased a new AI model called 3D-GPT. It generates 3D worlds based on natural language text prompts provided by the user.
According to the research paper, the model deploys multiple AI agents to understand the text prompt and execute 3D modelling inside the open source modelling platform Blender.
See that paper for more on some of the worlds generated, including ‘A serene winter landscape, with snow covered evergreen trees and a frozen lake reflecting the pale sunlight.’
⚡ NWSH Take: 3D-GPT takes its place alongside this prototype text-to-world tool created by Blockade Labs, which I wrote about back in April. Where is all this heading? We’re still pretty deep in a metaverse winter right now, though there are signs of a thaw; the most obvious being the imminent arrival of the Apple Vision Pro mixed reality headset, which could act for millions as a gateway into more sophisticated virtual environments. While the word metaverse is probably damaged beyond repair, I still believe that immersive virtual worlds will play a role in our shared future. // What we’re talking about with text-to-world models, though, is even more head-spinning. 3D-GPT builds worlds that we look at via a screen. But eventually, we’ll be able to create entire, immersive, highly realistic VR worlds simply by describing them. In this way we’ll become something akin to sorcerers, able to confect new realities on command. That will transform video gaming and film. It will fuel new art forms and modes of collective expression. And, ultimately, it will change our relationship with reality — that is, with this reality — itself.
🗓️ Also this week
🎨 A new anti-AI tool allows artists to prevent AI models such as DALL-E from using their work as training data. Nightshade, dubbed a data poisoning tool, can be attached to creative work; if that work is scraped to be used to train an AI model, then Nightshade will corrupt the entire training database. We’re going to see a rising number of disputes between owners of creative IP and the owners of AI models who used that work as training material. See also, this week, Universal Music Group’s lawsuit against Anthropic; UMG say Anthropic unlawfully used its song lyrics to help train the Claude AI chatbot. And now major newspapers, including the New York Times, are seeking payment from OpenAI for use of their content to help train GPT-4.
☀️ The International Energy Agency says the global shift towards renewable energy is now ‘unstoppable’. The Agency’s latest World Energy Outlook report says renewables — mainly solar and wind — will provide half the world’s electricity by 2030.
🛰 NASA’s interstellar Voyager probes had a software update beamed to them across a distance of 12 billion miles. The probes launched 46 years ago, on a mission to explore deep space. These updates are bug fixes, intended to stop Voyager 1 sending corrupted data back to mission control, and to stop gunk building up in the thrusters on both probes.
🙈 Elon Musk says he may remove X (formerly Twitter) from the EU in response to new rules that ban the spread of harmful content. The new Digital Services Act is intended to hold social media platforms accountable for fake news, false advertising, and on-platform criminal activity.
🏭 Nvidia and Foxconn say they are partnering to build a number of ‘AI factories’. They will be next-generation data centres that use Nvidia’s AI chips to train the AI models that fuel robots, autonomous vehicles, and generative AI apps.
🤖 The CEO of DeepMind, Demis Hassabis, says the risks posed by AI should be taken as seriously as those posed by climate change. Hassabis called for international regulatory oversight of AI, and said technologists should take inspiration from the Intergovernmental Panel on Climate Change (IPCC).
👶 A Dutch startup, Spaceborn United, wants to see if it’s possible to create human babies in space. The company says that in 2024 it will send a satellite-lab into low Earth orbit and there attempt to conduct in-vitro fertilisation (IVF). CEO Egbert Edelbroek hopes the technology can pave the way for humans to be born in future space colonies.
😳 A British journalist went undercover at Amazon and did not like what he saw. Oobah Butler found that it was possible to list bottles of Amazon delivery driver urine(!) for sale on the platform. He also claims that Amazon is using devious tactics to avoid worker unionisation.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,069,001,802
🌊 Earths currently needed: 1.8454
🗓️ 2023 progress bar: 82% complete
📖 On this day: On 26 October 1977 the last human case of smallpox was diagnosed in Ali Maow Maalin, a hospital cook from Somalia. The WHO and CDC consider this date to mark the eradication of the disease via the smallpox vaccine.
Back Again
Thanks for reading this week.
The emergence of text-to-world AI models — and the future they promise of new realities on demand — is dizzying.
This newsletter will keep watching, and working to make sense of it all.
Now you’ve reached the end of this week’s instalment, why not forward the email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
It’s great to be back in your inbox. Thanks for having me. I’ll return, of course, next week.
Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
It’s a bumper instalment this week. What’s coming up?
Google showcase their AI arsenal at the annual I/O developers conference.
Meanwhile, new research reveals that the ocean currents may be about to take a weird turn, with disruptive results for the global climate. And a Snapchat influencer launches an AI version of herself to her 1.9 million followers.
Let’s go!
💥 The search empire strikes back
This week, Alphabet leaned hard into an AI everywhere, for everyone strategy at its I/O developers conference.
CEO Sundar Pichai announced PaLM 2, an update on the company’s primary large language model. Google’s Bard chatbot is now fuelled by the new model, and has been made available globally with no waitlist.
There was much more. Google execs announced new AI features in Maps, and a powerful new magic editor for photos that brings Photoshop-like capabilities into the phone. Pichai said AI around one zillion times, and Google later published a handy summary of all the announcements.
The centrepiece, though, was a demonstration of Google’s plans to weave generative AI through search. In this new search experience AI-generated results take up most of the first screen; users in the US can now access this experimental version of search via Google Labs.
The I/O conference wasn’t the only source of intriguing announcements from Google, though. The company also launched Geospatial Creator, an impressive tool that allows creators to build and publish geolocated AR installations. Essentially, to build a digital object and drop it anywhere on the surface of the Earth.
The tool is powered by the Google Maps platform, and integrated into Adobe Aero and Unity.
⚡ NWSH Take: Google researchers invented the transformer models that underpin this generative AI revolution. But across the last two years the tech giant has watched OpenAI steal its thunder. This week’s I/O conference was a statement of intent: we’re taking back control. Competition can only be good for users, many of whom will have gone straight to the new PaLM 2-powered Bard to compare it to ChatGPT. My anecdotal experience is that Bard is faster — ChatGPT with GPT-4 is a little slow — but the consensus at the moment is that it’s currently less factually reliable. Meanwhile Google is working on Gemini, a multimodal LLM clearly intended as a GPT-4 killer. The war for supremacy between Alphabet and OpenAI-Microsoft is just getting started. // Geospatial Creator was overshadowed by I/O, which feels fitting for a year in which the metaverse has been comprehensively out-hyped by AI. But the tool is an intriguing glimpse of the emerging unified digital-physical field. Build a digital sculpture from your desk in London, and drop it into a park in São Paulo for your subscribers to view. And pretty soon, via text-to-everything models, you’ll be able simply to describe that sculpture and watch an AI model build it for you. A couple of years ago I wrote about the ways in which AR will change our relationship with a shared physical reality. I stand by those ideas, but in the age of generative AI that essay needs an update; one will be coming soon.
🌊 Climate weirding
Also this week, new research says changes in the ocean currents may soon enweirden the climate of northern Europe.
The Beaufort Gyre moves in a clockwise direction around the western Arctic Ocean, and helps regulate sea ice formation in that region. Scientists have long suspected that climate change is causing changes to the Gyre’s movement.
This new paper, Recent State Transition of the Arctic Ocean’s Beaufort Gyre, was published in Nature, and makes use of satellite data collected between 2011 and 2019. It provides the first observational confirmation that the Gyre is slowing and has entered a new ‘quasi-stable state’.
This means, say the scientists, that the Gyre may soon expel a massive amount of icy fresh water into the North Atlantic.
And that could spark further ocean current changes that cause the climate in western Europe to become significantly cooler.
⚡ NWSH Take: Yes, cooler. I’m no ocean currents expert, and I found this quick explainer on the Beaufort Gyre helpful. Essentially, the Gyre periodically sucks in a ton of icy fresh water and then exhales it, and it’s now long overdue an exhale; when that massive exhale comes it could send other ocean currents askew in ways that dramatically cool western Europe. Remember, the Gulf Stream — a major ocean current responsible for several global weather patterns — has slowed by around 16% already; scientists are scrambling to understand how a huge Beaufort Gyre exhale will impact this. // The upshot? One way or another, we’re probably about to undergo a climate weirding on a scale that few of us are ready for. While drought and fires rage in some places, a new freeze will break out in others. At the outer edges of this is the risk that the Gulf Stream shuts down entirely, triggering rapid and chaotic climate disruption fuelled by a set of feedback loops. These processes are hugely complex; we’ll see much more work such as this attempt to build machine learning-fuelled simulations that give us advance warning of ocean current shifts. Perhaps NVIDIA’s massive forthcoming Earth-2 simulation can help.
❤️ Hey girlfriend
Regular readers know that virtual companions are a longstanding NWSH obsession. This week, another glimpse of what is coming.
Snapchat influencer Caryn Marjorie, who has 1.9 million followers on the platform, released an AI girlfriend version of herself. Users pay $1 per minute to chat to CarynAI, which the creator says is built on top of GPT-4 and trained on over 2,000 hours of her video and voice content.
Marjorie says the bot made $72,000 in the first week of release. She says that it could make around $5 million per month if 20,000 people — or just 1% of her Snapchat followers — subscribe.
So far, things seem to be going well:
⚡ NWSH Take: Back in 2013 I started telling leaders in big corporations that a new age of AI-fuelled conversational agents was coming. That people would even have 'relationships' with these new virtual entities; that it would be something way beyond Siri — their best reference point at the time. Some leaned forward; some raised a sceptical eyebrow. My constant refrain back then? I know it sounds like science-fiction, but it’s coming. Well, it’s here. Virtual Companions are set to unlock new manifestations of some of the deepest and most powerful human impulses: social connection, friendship, intimacy. // Observing this truth is not the same as celebrating it. What happens to authentic human connection in a world in which we simulate it — and commodify those simulations — in this way? What harms are we doing to vulnerable people who become attached to, even dependent on, these creations? // The central message still pertains: it’s weird but it’s happening. In the end I can’t help feeling that so much about contemporary living on the internet — the way it atomises our attention, the simulation of human relationships — must push us to finally realise that authentic human being-together is the only sphere of activity invulnerable to technological advance. No machine can be a human, truly seeing you as another human. In the age of the machine, that truth becomes sacred.
🗓️ Also this week
⚛ Microsoft announced a partnership with fusion power provider Helion Energy. The deal will see Microsoft buy electricity created by a Helion fusion plant, which is expected to be operational by 2028; Helion says it marks the world’s first fusion power purchase agreement between two companies. Microsoft’s Azure Cloud platform will need vast amounts of compute power — using stupendous amounts of energy — given its commitment to support OpenAI and its commercialisation of ChatGPT. I’ll be writing more soon about the emerging symbiotic relationship between energy and AI.
🛰 NASA launched two storm-observing satellites, called CubeSats, intended to study tropical cyclones. The pair will form part of a constellation of four identical satellites that will stay in low Earth orbit over the planet’s tropics, allowing them to pass over any given tropical storm around once per hour.
👨⚕️ Pharma company BioNTech is developing an mRNA vaccine against pancreatic cancer. In encouraging early trial results, the vaccine prevented tumour recurrence after surgery in eight of 16 patients.
⚖️ Startup Anthropic revealed its approach to creating an AI with values. Anthropic’s Constitutional AI approach sees it train its AI assistant, Claude, on a set of initial principles drawn from various sources including the United Nations Declaration of Human Rights. The AI then applies these principles itself to help it choose the most ethical response. This is in contrast to the approach used by OpenAI and Google, which sees human moderators train the AI to avoid toxic outputs.
🌬 Wind is now the single largest source of electricity in the UK. In the first quarter of this year wind turbines accounted for one third of all electricity used in the country. It marks the first time wind has generated more of the country’s power than gas. The UK wants its entire electricity use to be emissions free by 2035.
🌌 California-based startup Vast Space say they will launch the first commercial space station. The startup says it will launch the first part of the station, an outpost called Haven-1, on a SpaceX rocket in 2025. Vast Space want eventually to grow the station into a 100-metre long multi-module station that spins to create onboard artificial gravity.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,032,723,422
🌊 Earths currently needed: 1.8036642811
💉 Global population vaccinated: 64.4%
🗓️ 2023 progress bar: 36% complete
📖 On this day: On 13 May 1950 the inaugural Formula One World Championship race takes place at the Silverstone Circuit in England.
My Generation
Thanks for reading this week.
Online search revolutionised our relationship with knowledge. Now, generative AI is set to enact yet more change. It’s another case of new world, same humans.
This newsletter will keep watching, and working to make sense of it all. And there’s one thing you can do to help: share!
Now you’ve reached the end of this week’s instalment, why not forward the email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I’ll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
The generative AI rollercoaster is thundering forward at increasing speed.
This week, researchers at Stanford and Google use an enhanced large language model to create simulated people that can remember, plan, and talk to one another in pursuit of their long-term goals.
Also, a new report says we’ve reached a landmark moment for the global energy system. And amid rumours of financial difficulty, Stability AI release a new text-to-image model for enterprise users; it’s capable of amazing photorealism.
Let’s go.
🏠 Welcome to SimGPT
This week the generative AI talk orbited around autonomous agents. That is, AI systems that can act autonomously in pursuit of pre-defined goals.
Researchers at Stanford and Google explained how they used a large language model (LLM) to create 25 simulated people, who were then set loose inside a virtual town called Smallville.
To create these sims, the researchers hooked up their LLM to an architecture that allows each AI agent to store memories of its past experiences, and then to access relevant memories and use them to plan new actions. Each agent was imbued with its own persona, for example: 'John Lin is a pharmacy shopkeeper at the Willow Market and Pharmacy who loves to help people. He is always looking for ways to make the process of getting medication easier for his customers.’
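The retrieval step at the heart of that architecture is easy to sketch. The following is a minimal Python illustration, not the researchers’ actual code: the decay constant, the equal weighting, and the dictionary layout are assumptions for the example, though the paper does combine recency, importance, and relevance scores in a broadly similar way.

```python
import math
import time

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def score_memory(memory, query_embedding, now):
    """Combine recency, importance, and relevance into one retrieval score."""
    hours_since_access = (now - memory["last_accessed"]) / 3600
    recency = 0.995 ** hours_since_access      # decays the longer a memory sits unused
    importance = memory["importance"] / 10     # agent-rated on a 1-10 scale
    relevance = cosine(memory["embedding"], query_embedding)
    return recency + importance + relevance    # equal weights, for simplicity

def retrieve(memories, query_embedding, k=3):
    """Return the k memories most worth pasting into the agent's next prompt."""
    now = time.time()
    ranked = sorted(memories, key=lambda m: score_memory(m, query_embedding, now), reverse=True)
    return ranked[:k]
```

The top-scoring memories are then fed back into the agent’s next prompt — which is what lets ‘John Lin’ remember yesterday’s conversation when planning today.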
The results, say the researchers, were ‘believable individual and emergent social behaviours’ that saw Smallville become a bustling little town full of autonomous chit-chat, group activities, and trips to the local café.
‘…for example, starting with only a single user-specified notion that one agent wants to throw a Valentine’s Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time.’
All this comes amid vast excitement on Twitter this week over the rise of useful generative agents.
As with the above research, these innovations — which include AutoGPT and BabyAGI — leverage architecture that enhances ChatGPT by allowing it to lay down and then access a stream of past actions, or ‘memories’. Combine that with a plugin that enables ChatGPT to browse the web, and the result is a system that can take an initial goal, get started online, and then prompt and re-prompt itself until it’s finished:
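Stripped of the engineering detail, the loop these agents run is short enough to sketch. This is an illustrative outline, not AutoGPT’s actual code — the action format and tool interface here are invented for the example — but the act-observe-remember-re-prompt shape is the same.

```python
def run_agent(goal, llm, tools, max_steps=10):
    """Minimal autonomous agent loop (an illustrative sketch, not AutoGPT itself).

    `llm` maps a prompt string to an (action_name, argument) pair;
    `tools` maps action names to callables, e.g. a web search function.
    """
    memory = []  # running log of past actions and what they returned
    for _ in range(max_steps):
        prompt = f"Goal: {goal}\nPast steps: {memory}\nWhat next?"
        action, argument = llm(prompt)
        if action == "finish":                # the model decides it is done
            return argument
        observation = tools[action](argument) # run the chosen tool
        memory.append((action, argument, observation))  # 'memories' fed back in
    return None  # step budget exhausted without finishing
```

In a real system the `llm` callable would wrap a model API and the tools would include web browsing; the point here is only that the model’s own outputs, stored and re-fed as context, are what turn a chat model into something that looks goal-directed.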
⚡ NWSH Take: If SimCity or The Sims formed a part of your childhood, then this new research is hypnotic. It makes clear that LLMs will enable us to simulate goal-directed people, and watch as complex social and behavioural dynamics unfold. Imagine video games populated with these simulated humans (hello, Electronic Arts). But we’ll also see the rise of new art forms — a mixture of game and movie — built around them. And then will come the ability to simulate large populations, allowing for new insight on collective phenomena such as voting behaviours, the spread of disinformation, and the evolution of the economy. This new age of AI-fuelled simulation — which I’ve been writing about for a while — is emphatically here. // Meanwhile, AI agents such as AutoGPT promise to elevate the usefulness of generative models for millions of individual users. It’s already clear that for most people using LLMs won’t be about sitting at the prompt line and figuring out great prompts. Instead, wrappers such as this one — which puts AutoGPT-like powers direct into your browser — will allow users to set a goal and then let the LLM iterate its own way to a useful output. Give it a try; something hugely powerful is happening.
🔌 New power generation
Also this week, news of a landmark moment for energy.
A new report from independent energy think tank Ember says solar and wind accounted for a record 12% of global electricity generation in 2022. That’s up from 10% in 2021. The increase in wind generation alone in 2022 was the equivalent of the entire annual electricity demand of the UK.
What’s more, says the report, it’s likely that 2023 will see electricity generation via fossil fuels — mainly coal and natural gas — hit its peak.
The research team analysed data from 78 countries, representing 93% of global electricity demand.
⚡ NWSH Take: Around two-thirds of the world’s electricity is generated by burning fossil fuels. But the transition to solar and wind is now reaching a blistering pace, thanks largely to exponentially falling cost. In 1956 the cost of one watt of solar capacity was $1,825; now it can be as little as $0.72. // If Ember are right, we’ll soon start generating more electricity while burning fewer fossil fuels: power up, emissions down. That aligns with the International Energy Agency’s most recent and broader forecast; they now have global demand for fossil fuels — via electricity generation or any other use — peaking or plateauing under all their future scenarios, even without any shift in current government policies. // We’re approaching, then, a historic turning point: the decoupling of economic growth from fossil fuels for the first time since the Industrial Revolution. It’s becoming possible to imagine a world of endless, near-zero cost clean electricity. A world of clean energy abundance. What will that make possible?
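As a back-of-envelope check on those numbers (taking the two endpoints quoted above at face value: $1,825 per watt in 1956, as little as $0.72 now), the implied compound rate of decline works out as follows:

```python
# Implied average annual fall in the cost of one watt of solar capacity,
# assuming the endpoints quoted in the Take above.
initial_cost = 1825.0   # dollars per watt, 1956
final_cost = 0.72       # dollars per watt, recent low
years = 2023 - 1956

annual_decline = 1 - (final_cost / initial_cost) ** (1 / years)
print(f"~{annual_decline:.0%} per year")  # roughly an 11% fall, every year, for 67 years
```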
🖼 Get real
Do you want to look at some amazing AI generated images? Yes, of course you do.
This week Stability AI released Stable Diffusion XL, a text-to-image model aimed at enterprise users. The model is an advance on Stable Diffusion 2.1, and excels at ultra-photorealism.
The move comes amid reports that Stability AI is struggling with huge server and talent costs. These reports suggest the company is seeking a new round of funding, and that investors are wary given the company’s current revenue. CEO Emad Mostaque has not commented.
⚡ NWSH Take: A few quick thoughts. The images are stunning; that’s obvious. At this point we’ve pretty much entirely scrambled the role that the photograph once played in our culture as a form of proof or marker of veracity. In the wake of puffa coat Pope, I’ve already developed a new reflexive habit: is this real or AI? // As for the rumours about Stability AI, they amount to: ‘AI startup experiencing rocket ship growth is struggling to figure out revenue and is a chaotic place to work’. Nothing too surprising. Whatever storms the company is experiencing, I hope it can weather them; Mostaque’s vision of AI for the people is a necessary counterweight to the closed model that is being operated by (the misleadingly named) OpenAI and others.
🗓️ Also this week
🇨🇳 The CCP has issued new rules on the training and outputs of generative AI models. Draft rules from the Cyberspace Administration of China say the outputs of those models must reflect the core values of socialism and not undermine the power of the state. This came as Chinese tech giant Alibaba announced plans to roll out its LLM rival to ChatGPT, Tongyi Qianwen, across all its products.
🇺🇸 The US government is also looking to establish new regulations around AI. The National Telecommunications and Information Administration is asking for feedback from the public and experts from industry and academia, and wants to establish ‘guardrails’ to ensure AI is safe, transparent, and as unbiased as possible.
👮♂️ The Boston Dynamics robodog will patrol the streets of NYC on behalf of the New York Police Department. The NYPD experimented with Spot the Dog in 2021 and faced criticism from civil rights organisations. Now the new mayor, Eric Adams, is bringing Spot back.
🚗 Ford says it will spend $1.3 billion to convert its 70-year-old factory in Oakville, Canada, into an assembly plant for electric vehicles. The auto giant says it wants the production capacity to sell 2 million EVs a year worldwide by 2026.
💉 Ghana became the first country to approve a ‘game-changing’ malaria vaccine. Trial data indicates the R21 vaccine was up to 80% effective when given in three doses plus a booster after one year. Malaria kills around 600,000 people each year, many of them children.
🌖 China says it will build a permanent base on the Moon using bricks made from Moon dust. The South China Morning Post reported that officials say building will start in 2028. Back in October I wrote on how NASA is preparing for potential geopolitical tensions arising out of multiple Moon missions by the US and China.
🪐 Four volunteer test subjects will spend a year locked in a simulated Martian environment as part of NASA research for a mission to Mars. The 3D-printed structure is situated in a warehouse at the Johnson Space Center in Texas, and is intended to simulate a future NASA base on Mars. The volunteers will grow their own food, conduct experiments, and exercise.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,027,591,427
🌊 Earths currently needed: 1.8019217768
💉 Global population vaccinated: 64.3%
🗓️ 2023 progress bar: 28% complete
📖 On this day: On 15 April 1755 Samuel Johnson’s A Dictionary of the English Language is published in London.
Designs for Life
Thanks for reading this week.
We humans have always been obsessed with our own reflection. And now we have a new way to study it: by using LLMs to create simulated humans that chat to one another, organise parties, and visit the local shops. It’s yet another case of new world, same humans.
This newsletter will keep watching, and working to make sense of it all. And there’s one thing you can do to help: share!
Now you’ve reached the end of this week’s instalment, why not forward the email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I’ll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
I put the newsletter on pause last week; did I miss anything?
Given what is unfolding right now, it’s hard to make this newsletter anything other than a generative AI revolution update. I don’t want to stoke the hype yet further, but I’ve never seen anything quite like this.
Given all that, this week we’ll dive into a high-profile petition to pause work on new generative models. Also, we’ll look at the new hyperreality taking shape around us via Midjourney and its community of inventive users.
But it’s not all AI; there’s also an intriguing new report on global population change from the Club of Rome.
Let’s get into it.
🤖 For the people
This week, another generative AI story that pushes 2023 deeper into the realms of what seemed, recently, possible only in science-fiction.
It’s not yet another platform, plugin, or viral image (more on those below) but a call to slow down. Over 1,000 technology leaders signed a petition demanding a pause of at least six months on the training of AI systems more powerful than GPT-4.
Signatories included Elon Musk, Yuval Noah Harari, Stability AI’s Emad Mostaque, and Apple co-founder Steve Wozniak. And their language was pretty apocalyptic:
According to the authors of the petition: ‘recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.’
The scale of their concern was lent support by a research paper published last week. It saw Microsoft researchers report that GPT-4 shows ‘sparks of AGI’. The model, they point out, shows high-level competence across mathematics, coding, vision, medicine, law, and psychology, and can solve novel problems in those domains without any need for special instructions: ‘in all of these tasks, GPT-4's performance is strikingly close to human-level performance’.
Meanwhile, the iconic and ever-controversial AI safety expert Eliezer Yudkowsky went full pelt in Time magazine. He didn’t sign the petition, he says, because it doesn’t go far enough:
All training of large models, says Yudkowsky, needs to be shut down indefinitely and worldwide. He says the governments of Earth must come together in a concerted effort to stop an AI-fuelled human extinction.
⚡ NWSH Take: Pretty intense, right? Yudkowsky is, and has long been, an outlier on all this. Meanwhile, others say this week’s petition signatories have fallen prey to OpenAI’s apocalypse marketing: a plan to get everyone scared and then sell subscriptions. // For my part, I don’t think AI annihilation is imminent; nor do I think these fears are founded only in hype. GPT-4’s competence across all kinds of reasoning tasks is insane. And for all the reams of coverage (guilty), I don’t think we’re anywhere near processing the implications. It no longer seems far-fetched that an AI model could start behaving in strange and uncontrollable ways in the near-term. It’s emphatically time to get serious about alignment. // Alignment is first a technical problem: how do we make sure AIs only do what we want them to do? After this, though, it becomes a political problem. Whose values should we align our AIs with? Those of Californian tech bros? // We can’t put the AI genie back in the bottle, and in practice a global ‘pause’ is highly unlikely. That means the only answer here is to speed up research on the technical challenge of alignment, and to allow a plurality of AIs, empowering different peoples and communities to live and create according to their own value systems. That is real alignment. To that end, check out open source AI group LAION’s petition for a new internationally-funded supercomputer to train open source foundation models.
📈 Growth mindset
Also this week, a huge if true forecast on the future of the human population.
A new study commissioned by the Club of Rome forecasts that if current trends continue then the global population will hit 8.8 billion in around 2050, before declining rapidly to 7.8 billion by the end of the century.
The study, conducted by think tank Earth4All, also games out a scenario in which governments invest in policies known to curtail population growth, such as education and social services. Here, population peaks at 8.5 billion in around 2040 and falls to 6 billion by 2100.
Both projections are far below last year’s UN Population Prospects forecast, which had population peaking at 10.4 billion in the 2080s.
The Club of Rome is best known for the now (in)famous 1972 report The Limits to Growth, which warned of impending environmental crisis and social breakdown due, in part, to strains imposed by overpopulation.
The report came amid a wave of neo-Malthusian anxiety in the decades after WWII. A 1968 book called The Population Bomb — which influenced the thinking of the Club of Rome — raised the spectre of hundreds of millions of people starving to death as population growth exceeded food supply.
⚡ NWSH Take: The original Limits to Growth report is today the subject of fierce disagreement. Critics say the Club gave voice to unfounded fears motivated by an ideological distaste for modernity. Proponents point out that the report offered a number of different scenarios, and that the growth-induced systemic breakdown it envisioned may yet eventuate. // This new statement on population could end up being just as contested. The Club now accept that their population bomb won’t go off. And they celebrate their finding that population is set to peak sooner and lower than the UN expect — stressing that it’s good news for the environment. Meanwhile, though, a niche but growing school of thought says that population collapse is the real crisis coming down the track; rapidly shrinking and ageing populations, runs this line, will kill productivity and threaten economic collapse. // Where does the truth lie? Most mainstream demographers say population collapse isn’t on the cards, and that ageing populations don’t have to mean economic calamity. Meanwhile, it’s not overpopulation but intense patterns of high and damaging consumption in the rich world that are the primary drivers of climate change. As ever with demography, it seems the truth lies between the extremes.
🎭 Real life
Version five of the text-to-image tool Midjourney was released two weeks ago. And this week, users went wild.
On Reddit, Midjourney enthusiasts started sharing photorealistic, news report style images of historical events — such as 2001’s devastating Great Cascadia earthquake in Oregon:
The truth, of course, is that no such event took place; this is all fictitious — an AI-fuelled experiment in alternative history.
Meanwhile, Chinese users of the tool are creating pseudo-documentary images of the southwestern city of Chongqing in the 1990s.
All this comes days after the first truly viral AI-generated image: of Pope Francis in a white puffer coat.
⚡ NWSH Take: In his 1981 book Simulacra and Simulation, the French philosopher Jean Baudrillard wrote about hyperreality: the emergence of a media environment in which the boundaries between the real and our representations of the real become ever-more blurred. Digital media massively amplified that phenomenon. All of us recognise the feeling, today, of living inside a tech-fuelled hall of mirrors in which the difference between image and reality is hard to discern, or even meaningless. // And all of that was before this generative AI revolution and tools such as Midjourney, which are now achieving photorealism that is impossible to distinguish from the real thing. These AI-generated pseudo-photos are perfect representations of representations; signs that point only to other signs — exactly the phenomenon that Baudrillard put at the heart of his theory. They make possible a whole new level of alternate history; a convincing mass media documentation of events that never took place. There’s going to be so, so much more of this.
🗓️ Also this week
🐰 This Twitter user made an AI-fuelled virtual companion by hooking ChatGPT to a cute but grumpy holographic rabbit avatar. It’s just one signal of how the generative AI revolution will unleash a tsunami of virtual companions; the rise of this trend is a longstanding NWSH obsession.
🧠 Direct brain interface startup Neuralink is searching for a partner to help it run clinical trials on humans. In 2022 the FDA rejected Neuralink’s application to start human trials; the company has since been working to address the safety concerns that were raised.
🛩 A Swiss startup is working on a hydrogen-powered jet that it says will cut flights from Europe to Australia to four hours. Destinus has been testing prototypes for two years, and is now partnering with Spain’s Ministry of Science. It currently takes around 20 hours to fly from Europe to Australia.
👨💻 A new report says ChatGPT could impact 300 million full-time jobs across the globe. The report by Goldman Sachs economists says the technology is ‘a major advancement with potentially large macroeconomic effects.’ But most jobs, they say, will be complemented by AI rather than replaced entirely.
🛒 Chinese ecommerce titan Alibaba is planning to break itself up. The company says it will split into six business units, some of which may be listed or sold. The announcement seems intended to placate the CCP, which across the last three years has moved aggressively to diminish the power of domestic tech giants.
🏰 Disney has reportedly fired its entire metaverse division. Last year the entertainment giant called the metaverse ‘the next great storytelling frontier’ and announced plans to bring blended digital-physical experiences to its parks. The company has recently been under pressure from investors to cut costs.
⛔️ This just in as the newsletter goes to press; the Italian government has banned ChatGPT citing concerns over data privacy breaches. The Italian Data Protection Authority says the move is temporary and will be revoked ‘when ChatGPT respects privacy’. OpenAI CEO Sam Altman says the company ‘defers to the Italian government’, but believes it has followed all relevant privacy laws.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,025,029,075
🌊 Earths currently needed: 1.8010517836
💉 Global population vaccinated: 64.4%
🗓️ 2023 progress bar: 25% complete
📖 On this day: On 1 April 1976 Steve Wozniak and Steve Jobs found Apple Computer in California.
Speed Warning
Thanks for reading this week.
The ever-more urgent quest to conform machine intelligence to our values is yet another classic case of new world, same humans.
I’ll keep watching. And there’s one thing you can do to help: share!
Now you’ve reached the end of this week’s instalment, why not forward the email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I’ll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
Welcome to New World Same Humans, a newsletter on trends, technology, and our shared future by David Mattin.
If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮
At the start of the year I promised the return of short notes. Here’s the first: a meditation on the ChatGPT moment we’re living through right now.
To avoid claims of false advertising: this one is more of an essay than a note.
If you’d rather listen than read, just scroll up and hit play. But enough preamble; let’s get into it.
The generative AI hype train is thundering forwards right now, and ChatGPT — which was released in November — was the fuel that accelerated it to its current speed.
On the face of it, that’s a bit odd. The underlying model, GPT-3, was made public almost two years earlier. Why the big noise now?
ChatGPT uses an enhanced version of that model, and so produces better outputs. But my contention is that it’s the chat element — that is, the conversational nature of the tool — that’s responsible for ChatGPT’s colonisation of the zeitgeist. People love the back-and-forth quality of interacting with this thing.
I’m interested in this, and the reasons for it. Because it seems to me that a quest to understand ChatGPT’s seductive conversational power can help us commune with a deep but under-appreciated truth about human thought.
A truth that leads us, in turn, to some conclusions on our future relationship with machine intelligence.
*
In a seminal 1998 research paper, the philosophers Andy Clark and David Chalmers introduced an idea they called the extended mind thesis (EMT).
The EMT says that mind is best understood as a set of cognitive processes that extend beyond our brains and into the external world. Consider, for example, a person using a notebook and pen to help perform a series of simple calculations. The notebook and pen are, say Clark and Chalmers, just as much a part of the cognitive processes at work here as the person’s brain. The notebook, for example, is acting as a kind of external memory bank.
It’s arbitrary then, according to the EMT, to say that mind is happening in the brain but not in the notebook; instead, the brain, pen, and notebook are part of one big cognitive system, and we can best understand that system as mind.
It was an arresting argument, and it’s proven an influential one. What’s more, 25 years on we citizens of the internet have been delivered into a relationship with technology that makes tangible the strengths of this idea.
I’m talking, here, about our relationships with our phones.
I tend to do my deepest thinking when I’m out for a walk. Often, I’ll reach for some half-remembered fact, person, or quote that I need to continue my train of thought, find that I can’t recall it, and then go to my phone to look it up. My phone, here, or perhaps more properly the internet itself, is acting as a kind of extension of my own memory — one containing pretty much all the knowledge in human history that can be encoded as words or pictures. And the whole process is so seamless — think, encounter block, look it up, keep thinking — that the phone really does feel like a natural extension of my mind. When I forget my phone, the feeling is one of my thought process being constantly interrupted. At its most acute it feels as though a part of me is missing.
ChatGPT offers users the same kind of feeling. The feeling, that is, of having your mind extended beyond the confines of your skull. It’s perhaps the first technology since the iPhone to offer that experience in a compelling new way. That truth, surely, has helped drive the excitement over the last three months.
But the current ChatGPT moment is not driven only by the feeling that the tool allows for mind extension. There’s also the feeling that the mind extension happening is a sudden and dramatic evolution of anything we’ve experienced before via notebook, calculator, or the phone as portal to the internet. There’s a widespread feeling out there that ChatGPT is an early signal of a revolution of era-defining consequence — even though, in truth, we haven’t yet seen the use cases, or the impact on the economy, to justify that belief.
Why is this? Why does ChatGPT feel like such a big deal?
The answer I’m fermenting: it’s because ChatGPT taps into, in a way even the phone does not, a deep truth about human thought. That is, its fundamentally dialogic, or conversational, nature.
The idea that underpins this is simple: it’s that when we think, we talk to ourselves. What you call your ‘internal monologue’ is really a dialogue conducted by one person. Someone is talking (internally, not aloud) and someone is listening and will then reply, and those people are both you.
*
The idea that human thought is fundamentally dialogic has a long history, which passes through the 20th-century Russian philosopher and literary critic Mikhail Bakhtin.
Bakhtin said that language is primordially a social instrument: a process that evolved out of games of call and response conducted by two or more parties. And because language is the substrate that makes symbolic meaning and the higher forms of thought possible, that means thought, too, is fundamentally dialogic in nature.
For us moderns this is a revolutionary idea. We tend to believe that thought, in its purest sense, is something that happens inside the mind of a single individual.
Bakhtin, and others since who’ve played with the idea of dialogic thought, invert this belief. They say that thought in its purest sense happens not inside the mind of one person but between groups of people; that is, between collections of minds. Under this view the extended mind thesis applies not only to the way individual minds can be extended by tools, but also, and primarily, to the way all our minds are necessarily extended by other minds. Indeed, under this view mind itself is best understood as a phenomenon that emerges between us, rather than inside any one of us individually.
It’s notable that the earliest works of philosophy in the western tradition seem to acknowledge the dialogic nature of thought. Socrates gathers others around him and together they engage in a process of back-and-forth reasoning that is, he tells them, the path towards enlightenment. The Socratic method taps deep into the idea that thought is primordially a social phenomenon.
Via a complex psychospiritual process entangled with the evolution of the Enlightenment self, we lost touch with that truth. Instead, we came to see thought as, foremost, an inner and private unfolding. But in losing touch with the primacy of social thought, we also lost touch with another truth. Yes, thought conducted silently by one person is private and inner; but because it relies on the dialogic tool that is language, it too carries a fundamentally dialogic nature. When we think, we talk to ourselves.
We might say that this strange ability to split the self — so that we can at once talk and listen to ourselves talk — is consciousness. That is to say, it is the state of self-awareness that only we among Earth’s creatures seem to possess in its highest form. The idea that language in some deep sense is human consciousness, that it creates the human mode of being in the world, is one I explore in depth in the ongoing essay series The Worlds to Come.
*
I’ve argued for the idea that thought — that consciousness itself — is in some deep sense dialogic. What does all this have to do with ChatGPT?
I hope the superficial connection is clear: in ChatGPT, we have an instrument that can externalise and amplify the internal dialogue that constitutes thought.
As we’ve seen, we’ve always had access to entities that can externalise our inner dialogue: other people. But other people are beings with their own cognitive and social agency. They have personhood. ChatGPT, by contrast, is not a person; it is a tool.
It’s this dual quality that is new and special about ChatGPT: it allows for the externalisation of the dialogic essence of my private thought, while being a tool that is best understood as an extension of me, rather than a person best understood as essentially an other.
In this way, ChatGPT offers a radically new form of mind extension. The excitement around it points to a submerged awareness among its users that this tool is more than just another useful app for summarising documents, or searching for information. We see in it, instead, the beginnings of a new way of doing thought. A way of externalising, and drawing out, an essential feature of our interior lives.
Right now, ChatGPT enacts a highly imperfect version of this promise. While the quality of its responses is a great advance on anything we’ve seen before, it’s still prone to factual errors and occasional nonsense, and responses that are not wrong but in some way off, or just bland. But all this will be improved via larger models that are better able to retrieve factual information, and cope with context and nuance. It’s the glimpse of what is ahead that has proven so exciting — even shocking.
Pretty soon, there will be a proliferation of such models. We’ll all be able to customize our own, so that it knows our tastes, preferences, and cognitive styles.
These models, trained as they are on an appreciable amount of all the text in existence, are a strange new instantiation of our shared linguistic inheritance. It’s as though we’ve created a human hivemind and given it a voice, such that we’re now able to talk to it at will. When we think, we talk to ourselves: that truth is now manifest in a whole new way.
Eventually, having a personal large language model (LLM) — a virtual conversational companion in your pocket 24/7 — will be no more remarkable than having a phone. When that time comes, in what ways will our thinking be amplified? In what ways will the nature and modes of our thinking change? And we must also ask: how might these models, which reflect back to us our own assumptions and prejudices, limit our thinking, or act to push us away from ideas and perspectives that lie outside the mainstream?
*
Those questions are valuable because when we ask them, we’re approaching a more accurate, and ultimately more fruitful, relationship with machine intelligence.
Contrary to much of the hype and/or panic circulating at the moment, ChatGPT and other language models aren’t going to render higher forms of human thought or creativity obsolete. They’re not simply going to write our books for us, do our philosophy, tell us the answer. These models can’t think creatively in the commonly understood sense of that phrase, because they’re not conscious beings responding to a lived experience of the world. They are, rather, stochastic parrots playing a high-level game of word association. It’s just that when they play that game well enough, and effectively simulate a human interlocutor, they’re able to amplify our thinking such that we arrive at cognitive destinations faster than we would have otherwise, or arrive at destinations that we would never have reached at all.
In short, we need to understand that what’s most exciting about these models is not what we will get straight from them; it’s what they will help us get from ourselves. And they’ll help us most effectively, of course, if we bring our own powers of creativity and critical reflection to the party.
If you haven’t experienced this aspect of ChatGPT, give it a try. Choose an idea, argument, or line of thinking, articulate it to the chatbot and then go back and forth, picking up on aspects of its responses that you find interesting and asking it to develop them, and then responding in turn. Don’t forget to challenge the assumptions that start to become apparent in ChatGPT’s responses, and ask yourself what it’s missing. Do that for five minutes, and see where you get. At its best, it can feel like the cognitive equivalent of driving a car instead of walking.
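The back-and-forth loop described above can be sketched in code. This is purely an illustration of the shape of a dialogic session, not any real chatbot API: the `ask` function here is a hypothetical stand-in that simply echoes, so the example runs offline; the point is the accumulating conversation history that the loop builds up.

```python
# A minimal sketch of the think → respond → challenge loop.
# `ask` is a hypothetical stand-in for a real model call; it just echoes.

def ask(history):
    """Stand-in for a model: returns a reply given the conversation so far."""
    last = history[-1]["content"]
    return f"Interesting — say more about: {last}"

def dialogue(opening_thought, follow_ups):
    """Run a short back-and-forth, keeping the full dialogic history."""
    history = [{"role": "user", "content": opening_thought}]
    for follow_up in follow_ups:
        history.append({"role": "assistant", "content": ask(history)})
        history.append({"role": "user", "content": follow_up})
    history.append({"role": "assistant", "content": ask(history)})
    return history

transcript = dialogue(
    "Thought is fundamentally dialogic.",
    ["What assumptions is that claim making?", "What might it be missing?"],
)
for turn in transcript:
    print(f'{turn["role"]}: {turn["content"]}')
```

Swap the stub for a real model and the structure is the same: each turn is appended to the history, so every reply responds to the whole conversation so far, not just the last message.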
For my part, this kind of conversation is already becoming commonplace. I can feel the seeds of a new habit taking root: I’ll just take this to ChatGPT. And I’ve started to wonder: how long until I come to feel the same about this tool as I do about my phone? How long until the ability to take a train of thought to ChatGPT is so expected, so natural, that when I don’t have access to the tool I feel as though my thought process has been interrupted? And how long until many others feel the same?
What I’m envisioning is a near future in which this ability to commune with the human hivemind, as made manifest by an LLM, comes to seem a natural part of thought. Yes, we’re a long way from that right now. But it feels as though we’re taking the first steps towards a new and powerful kind of augmentation.
*
At the outer edges of all this I wonder: is this the beginning of the long process of human-technological convergence that transhumanists (think Ray Kurzweil) tell us is inevitable? A process that sees us humans, or at least some of us, become something else?
I’m not one of those who views the post-human future with unalloyed enthusiasm. But via generative models and other technologies — including brain implants and techniques of genetic manipulation — I’m increasingly persuaded that some kind of Great Divergence is coming, in which we homo sapiens branch off from one another and become various different kinds of (post)humans.
Certainly, the possibility that we may not all be the same humans for much longer haunts the borders of this newsletter. It increasingly seems to me that our convergence with the technologies we’re building, and the almost impossible task of making any practical or moral sense of it, is the most important shared challenge we face.
In that case, the project of the age is to begin, at least, to figure out where we stand. Perhaps we can take it to ChatGPT.
Go Chat
Thanks for reading this essay from New World Same Humans.
Now that you’ve reached the end, why not take a second to forward this essay to one person – a friend, family member or colleague – who’d also find it valuable? Or share it across one of your social networks, and let people know why it’s worth their time. Just hit the share button!
I’ll be back later this week as usual; until then, be well.
David.
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
It’s another packed instalment of New Week. What do we have in store?
This week, tech giants from Microsoft to Snap put their arms around a trend that has long been a NWSH obsession.
Meanwhile, Tesla and OpenAI outline far-reaching manifestos: what does it mean when corporations incubate the kinds of social and political policies we usually associate with government? And a new US startup, Figure, offers a first glimpse of its humanoid robot.
Let’s get into it.
🎭 Talk to me
This week, a constellation of signals point to the mainstreaming of a trend long in the making. I’m talking, here, about virtual companions.
Snap launched My AI, a ChatGPT-fuelled conversational agent, inside their app. The feature is intended to serve as a general-purpose chatbot; Snap say it might plan a hiking trip for a long weekend, or suggest a recipe for dinner. Or even serve the user’s love of poetry:
Meanwhile, Microsoft launched a new feature that ‘changes the personality’ of the AI chatbot inside its Bing search engine.
Users can toggle between three options: creative, balanced, and precise, depending on the type of answers they want from the bot.
And in a Facebook update, Mark Zuckerberg said Meta are working on ‘AI personas that can help people in a variety of ways’ for Instagram, Messenger, and WhatsApp:
‘We're exploring experiences with text (like chat in WhatsApp and Messenger), with images (like creative Instagram filters and ad formats), and with video and multi-modal experiences. We have a lot of foundational work to do before getting to the really futuristic experiences, but I'm excited about all of the new things we'll build along the way.’
⚡ NWSH Take: It’s happening: the mainstreaming of a trend that I’ve been tracking for years. I started speaking about virtual companions in the early 2010s; back then, the idea that millions would one day see AI-fuelled entities as counsellors, companions, or even friends seemed, to many, outlandish at best. Via generative AI, that conversation has been transformed. Bing’s tentative approach to creating AI personalities marks the beginning of a long collision between virtual companions and search. And it won’t be long — if Snap is anything to go by — until a personalised, conversational, poem-writing virtual companion is a part of every social platform. // But this is just the start. For a glimpse of what’s coming, check out the conversational generative AI platform Character.ai, where users are creating chatbots based on their favourite fictional or historical characters. Or the AI companion app Replika — thousands of users recently complained that an update had destroyed their AI romantic partner. // Virtual companions — a counsellor, friend, and philosopher in your pocket 24/7 — are heading for the everyday lives of billions. It’s an innovation that may prove as transformative as the car, or the iPhone.
🏛 Policy agenda
Elon Musk took to the stage at a Tesla investor event this week, to unveil the long-awaited third part of the company’s Master Plan.
At the core of this part, it turned out, was nothing less than a Grand Unified Theory (GUT) of the planetary transition to sustainable energy. That GUT, according to Musk: electrify the grid, make all road vehicles electric, install a heat pump in every home, move towards green hydrogen, and build electric boats and planes.
The global shift, said Musk, will need investment of $10 trillion. And he says Tesla can play a part in every step.
Investors were reportedly disappointed; they’d hoped for more detail on Tesla’s product roadmap. Shares fell 5% in the wake of the event.
All this came in the same week that the IEA confirmed that CO2 emissions hit a record high — albeit lower than expected — last year.
Still, it was another aspect of all this that caught my eye:
⚡ NWSH Take: There’s no denying the Master Plan Part 3 was vague; a high-level, ‘wouldn’t it be great if’ march through the journey to decarbonisation. Still, see it alongside another much-vaunted corporate statement this week, and a pattern starts to emerge. I’m talking, here, about OpenAI’s Planning for AGI and beyond. It’s an amazing document, making clear that OpenAI will cancel its commitments to equity shareholders if it deems it necessary, and may in future fund ‘the world’s most comprehensive UBI experiment’. In other words: we know our AGI might break capitalism, and we’re figuring out some answers. // In Tesla and OpenAI’s statements this week, then, we glimpse a truth. In ever-more acute ways, our governments simply can’t process the technology revolution we’re living through. Instead, it’s falling to technology companies to articulate the sociopolitical arrangements that will shape our future. On the one hand, it’s welcome that OpenAI’s Sam Altman seems to take this responsibility seriously; he’s talked endlessly in recent months about releasing AI advances gradually so as to minimise damaging social impacts — compare that with the Zuck’s ‘move fast and break things’ credo. On the other, the leaders inside these companies constitute a tiny, strange, and unaccountable elite. Are we okay with this? One idea whose time has come: publicly-elected representatives on the boards of these companies. I don’t pretend the idea is easy to enact. But it’s worth investigating.
🤖 Go figure
US robotics startup Figure broke out of stealth mode this week, when it released the first images of its all-purpose humanoid robot.
The company has generated excitement ever since news of its existence, and $100 million starting capital, was revealed in September. Figure was founded by Brett Adcock, also the co-founder of Archer Aviation, and it counts former Boston Dynamics, Tesla, and Apple engineers among its team of 42 staff.
The core idea? We don’t have enough people.
Demographic change, including ageing populations, means the labour force is shrinking. We live and work in built environments fitted out for human-sized and human-shaped beings. A new army of humanoid robots, says Adcock, is the answer to our labour and productivity woes.
⚡ NWSH Take: Plenty of people are on the same page as Adcock. Including Elon (yes, we’re talking about him again); at the Tesla event referenced in the previous story, Musk outlined his belief that humanoid robots will eventually outnumber people: ‘it’s not even clear what an economy means at that point’. // There’s little doubt that a humanoid device of the kind Figure want to build would be economically transformative. The real question, though: how far away is it? And the answer: we aren’t really sure. Researchers at Oxford University recently asked AI experts for a view on this, and the experts were not much in agreement. // Figure revealed little on their timeline, and the roadmap for Tesla’s Optimus humanoid is similarly unclear. Alphabet’s Everyday Robots division is doing amazing work to bring together advanced robotics and large language models in order to create a household robot we can talk to as we do one another. At some point, surely, there will be a breakthrough moment. ChatGPT and the generative AI wave has already kicked off a great enweirdening of the global economy; things could soon get a whole lot more strange.
🗓️ Also this week
🎵 TikTok says it will limit teen users to 60 minutes of screen time per day. Teens that hit the limit will be asked to enter a passcode to keep watching. The users set the passcode, and can disable the feature entirely if they wish. TikTok say the feature will help younger users manage their time on the app. Back in New Week #43 I wrote on how the CCP insists on camera-enabled facial recognition to limit the time Chinese youth spend on video games.
🚫 A new report says that a record number of countries enforced internet shutdowns in 2022. Internet rights group Access Now says 35 countries enacted 187 shutdowns, most triggered by mass protest or conflict. India came top of the list, with 84 shutdowns.
🦾 Microsoft launched a multimodal AI that can work with both images and language. Kosmos 1 can understand and label images, solve visual puzzles, perform visual text recognition, and understand natural language instructions. Microsoft say that multimodal AIs of this kind are the best route towards AGI.
🌤 The UN says scientists should find ways to reflect the sun’s rays away from the Earth. In a new report published this week, UN scientists said we’re not on track to limit warming to 1.5C, and should therefore study in more detail a ‘speculative group of technologies’ that may allow us to reflect the sun’s heat.
📻 A US startup launched a new tool that uses GPT-3 to create an autonomous local radio show. RadioGPT will comb through local news sources and Twitter feeds to create relevant scripts, and then use convincing synthesised voices to convert the scripts into radio shows that feature local news and classic pop hits. The platform can even be trained to emulate the voices of locally popular DJs. Last week I wrote about the transformative collision between mainstream media and generative AI.
🧠 Scientists say that lab-grown brain organoids herald a new era of artificial biointelligence. First developed in 2013, organoids are tiny clumps of neurons cultivated from human stem cells; researchers at the Johns Hopkins Bloomberg School of Public Health this week published a paper in Frontiers in Science, laying out a roadmap for the convergence of conventional and organoid AI. Back in New Week #102 I wrote on how an organoid had taught itself how to play the video game Pong.
🌔 The European Space Agency says the Moon should have its own time zone. In a statement this week, the ESA said that it and other international agencies were working on an agreement to create a universally agreed lunar time and other standards for communications and navigation services.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,019,881,908
🌊 Earths currently needed: 1.7993041243
💉 Global population vaccinated: 64.1%
🗓️ 2023 progress bar: 17% complete
📖 On this day: On 3 March 1938 oil is discovered in Saudi Arabia, in an American-owned well in Dammam that soon becomes the world’s largest source of crude oil.
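For the curious: the ‘progress bar’ metric above is simply the day of the year as a share of the year’s length. This is my own reconstruction of the arithmetic, not the newsletter’s actual tooling:

```python
from datetime import date

def year_progress(d: date) -> int:
    """Percentage of the year elapsed as of date d, rounded to a whole percent."""
    day_of_year = d.timetuple().tm_yday
    days_in_year = date(d.year, 12, 31).timetuple().tm_yday  # 365 or 366
    return round(100 * day_of_year / days_in_year)

print(year_progress(date(2023, 3, 3)))   # day 62 of 2023 → 17
print(year_progress(date(2023, 2, 24)))  # day 55 of 2023 → 15
```

The two dates checked correspond to this instalment and the previous one, and reproduce the 17% and 15% figures shown in the Humans of Earth sections.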
Always There
Thanks for reading this week.
The ongoing collision between conversational AI agents and the eternal human quest for counsel, friendship, and even intimacy is a classic case of new world, same humans.
This newsletter will keep watching. And there’s one thing you can do to help: share!
Now you’ve reached the end of this week’s instalment, why not forward the email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I’ll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
It’s a bumper instalment this week. What lies ahead?
Generative AI is plunging us into a new world of infinite shadow and simulated media, and it’s going to be weird.
Meanwhile, the results are back on the world’s largest trial of the four day work week. And a California startup wants to bring your most treasured memories back to full, immersive life.
Let’s get into it.
🎭 In the hall of mirrors
This week, a constellation of signals converged to send a message about the AI-fuelled hyperreality set to emerge around us.
A few weeks ago I gave a brief mention to Nothing Forever, an entirely AI-generated version of the 1990s sitcom Seinfeld.
The show featured blocky 8-bit style graphics, and a weird, occasionally funny script generated by GPT-3. It became a hit on streaming platform Twitch, but got banned when the Jerry character made transphobic comments. This week the creators, Mismatch Media, took to their company Discord to announce that the show will soon return to Twitch with new script controls in place to ensure there are no more toxic jokes.
The show is part of a new wave of AI-generated shadow media that’s emerged in the opening months of 2023. Look at this new continuously streaming ‘fully autonomous’ podcast starring a rolling cast of AI characters including Joe Rogan — all of whom respond to questions typed into the chat by the audience on Twitch.
Or see this surreal generated talkshow, featuring a virtual Conan O’Brien and Chris Rock:
Right now, these shows are more about creating intriguing experiments than they are about genuinely entertaining content. But they are strangely mesmerising. And part of their mesmeric power is the feeling that they’re early signals of something huge, strange, and transformative for entertainment media.
It’s not just traditional, top-down media that’s set to be impacted by AI. The kinds of representations that we used to call user-generated content will be revolutionised, too. One signal? Scroll through this amazing Twitter thread full of middle-aged people using TikTok’s new teen face filter, and becoming emotional as they stare back at their long-lost younger self:
How long, I wonder, until people start using generative AI tools to create and deploy younger (smarter? more charismatic?) versions of themselves?
⚡ NWSH Take: Generative models will turn legacy media — including nine decades of television — into training data. The result? Infinite dancing shadows based on iconic shows, and stars, of years past; see the AI Seinfeld above. Questions abound. Will an AI version of a hit show ever become a hit in its own right? Who owns the rights to such content? We’ll see media companies — and the estates of deceased film and TV stars — build and license AI models of their own, allowing others to create new content based on their work. // Meanwhile, we’re about to be hit by a tsunami of generated media; Amazon is reportedly being flooded by AI generated books, and this iconic sci-fi magazine had to close submissions this week after being swamped by writers sending stories written by ChatGPT. The bar for average content will be raised. The trouble is, no one wants average content. It’s not much use to, say, Disney, that they’ll soon be able to make 100 quite good animated films at much reduced time and cost. No one wants 100 quite good films; they just want the best one. So the challenge for those who want to stand out will remain the same: they’ll have to create exceptional stuff. But now, that will mean using AI to amplify the best human creators. // Meanwhile, every connected person will have the ability to become an AI-fuelled content machine. The French philosopher Jean Baudrillard wrote about hyperreality: the intertwining of the real with our representations, until the distinction becomes lost. A whole new AI-fuelled hyperreal is emerging around us. I’ll be writing more about that soon.
👨💻 The great escape
Last summer back in New Week #86 I wrote about the world’s largest trial of the four day work week; it was all set to start here in the UK.
This week the results were published. Those results came from 42 companies, each of which shifted to a four day week — and a ‘meaningful reduction in working hours’ — between June and December while keeping staff on the same pay.
The big message? Overwhelmingly, managers reported a success. A full 92% say they’re continuing with a four day week. And revenue wasn’t negatively impacted; it grew 1.2% on average across the trial period.
Some of the most marked results, though, were around the subjective life satisfaction of the 2,900 employees surveyed. See the graph, below, of perceived time inadequacy:
Staff saying they’d like ‘more time to care for children or grandchildren’ fell by 27 percentage points. More time for own hobbies fell by 33 points.
Meanwhile, 40% said they were sleeping better, and 54% said it was easier to balance work and home life. These are huge improvements across a six month period.
⚡ NWSH Take: The organisers of this trial, including advocacy group 4 Day Week Global, will put the results in front of British legislators this week. They want to persuade them that Britain should move definitively towards a 32 hour work week. We’re a long way from anything like a consensus on that. But there’s no doubt that the four day movement is gaining momentum; this trial continues the stream of good news from previous trials in Iceland and Japan. The truth, it seems, is that most knowledge workers simply don’t need a five day week to maintain their current output. // We don’t fully understand the reasons for this, but buried somewhere among them must be the truth that many workers currently aren’t using their time that efficiently. Collectively, then, we face a choice. We can find ways to improve efficiency, continue working five days, and really get the most out of them. Or keep output broadly stable, and switch to four days. // Judging by the results of this trial, most would choose the latter. And who can blame them? What’s the point of getting this rich, and of all these technologies of productivity, if it doesn’t all combine to lead us to new and better modes of life? We must then come to ask: when machines do the work — or allow us to do it much faster — what’s left for us? The answer: to do what only we can do: simply being there, being human, for one another.
🏰 Memory palace
The metaverse hype train that powered through 2022 has lost speed recently. But this week, a reminder that the dream is still alive.
Wist is an app that takes ordinary photos and turns them into immersive 3D projections — allowing you to ‘step back inside your memories’ using an AR or VR device.
Wist have just opened a private beta for their iOS app, and they say the service will soon come to the Oculus Quest.
⚡ NWSH Take: Immersive memories: it’s a compelling pitch. Even if it did remind many in the Twitter thread of an episode of Black Mirror. // The popular story around the so-called metaverse across the last few years — it’s nothing, it’s everything, it’s nothing again, but this time with added cynicism — is an eternal merry-go-round when it comes to emerging technologies. One we’ll no doubt see play out around generative AI across the coming year. The deeper truth when it comes to the metaverse? Yes, there was a whole ton of hype, much of it specious. Yes, many of the Big Names of 2020 and 21 will fade away. But the dream that is an immersive, useful, meaningful virtual world is real, and powerful. Virtual worlds will unlock new ways to serve fundamental human needs, new forms of self-expression, and even, as Wist signals, new modes of remembering. For that reason, we haven’t heard the last of the metaverse — though I suspect that name will eventually fade away, to be buried alongside phrases such as the information superhighway and surfing the net.
🗓️ Also this week
👨💻 Amazon employees aren’t happy about the company’s new return to the office instruction. CEO Andy Jassy last week wrote a memo revoking the post-pandemic do what’s best for you dispensation and telling staff to be in the office at least three days per week. He told staff, ‘it’s easier to learn, model, practice, and strengthen our culture when we’re in the office together most of the time’. An Amazon company Slack channel intended to help staff organise against the move has gained 16,000 members.
🤖 A research team at China’s Fudan University released a rival to ChatGPT. The generative AI chatbot, called MOSS, quickly went viral on Chinese social media and crashed under a flood of users. Meanwhile, the CCP is working to restrict access to ChatGPT, which state media has called a tool for the US to ‘spread false information’.
🛰 Starlink is testing a new ‘global roaming’ internet service. The plan will cost users $200 a month. The company has over 3,500 satellites in orbit, with plans to launch thousands more.
⛵️ A new US Navy ship can operate autonomously at sea for 30 days. The Expeditionary Fast Transport USNS Apalachicola is 337 feet long, making it the largest autonomous ship in the Navy’s fleet; experts say it could be used as a roaming platform for the launch of missiles or drones.
🤯 An AI taught a pretty good human Go player to beat the world’s best AI Go player. Kellin Pelrine, a US citizen and amateur player, used tactics devised by a computer to beat the top-ranked Leela Zero system. Back in 2016 DeepMind’s AlphaGo made headlines when it became the first AI to beat then world Go champion Lee Sedol.
⚖️ The US Supreme Court is set to examine a federal law that underpins social media as we know it. Section 230 states that internet sites are not responsible for the content posted on them by users; in other words they are platforms and not publishers. Now, the Court is set to hear arguments on two key cases concerning social media content moderation; their ruling could have huge implications for Section 230 and the future of the internet.
🏙 Saudi Arabia wants to build a gigantic hollow-cube skyscraper that will house holographic worlds. The Mukaab will be the centrepiece of a new district of the Kingdom’s capital city, Riyadh, and the government is calling it ‘the world’s first immersive destination’ offering a range of virtual experiences, including a taste of what it would be like to live on Mars.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,018,579,711
🌊 Earths currently needed: 1.7988619512
💉 Global population vaccinated: 64.0%
🗓️ 2023 progress bar: 15% complete
📖 On this day: On 24 February 1920 Nancy Astor becomes the first woman to speak in the UK’s House of Commons, after her election to Parliament three months earlier.
Swimming in Infinity
Thanks for reading this week.
The collision between generative AI and legacy media will do much to shape the hall of mirrors we live inside across the coming years. It’s yet another case of new world, same humans.
This newsletter will keep watching, and working to make sense of what it all means for our shared future. And there’s one thing you can do to help: share!
If this week’s instalment resonated with you, why not forward the email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I’ll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz -
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
This newsletter is billed as a mid-week update. Here, for once, is an instalment that arrives in the middle of the week.
In this edition? A new study suggests one third of US citizens would use safe and affordable gene editing to create more intelligent children.
Meanwhile, a prestigious London law firm wants to hire someone who can whisper sweet legalese to ChatGPT.
Let’s get into it.
🧠 Edit button
This week, a startling glimpse of a coming ideological battle. One that will force us to confront the very meaning of the word human.
New research reveals that almost one third of US citizens say they’d use gene editing to create more intelligent children.
Published in the journal Science this week, the study asked respondents if they’d use embryo selection and/or gene editing technologies to create children who are smarter and more likely to get into a top-ranked college. The respondents were told to imagine that these techniques are free and safe (neither of which is currently true).
A full 38% said they’d use embryo selection. And 28% said they’d use gene editing.
The understated conclusions of the study authors (PGT-P refers specifically to embryo selection):
‘Our data suggest that it would be unwise to assume that use of PGT-P—even for controversial traits—will be limited to idiosyncratic individuals, or that it has little potential to cause or contribute to society-wide changes and inequities.’
In other words: gene-edited humans may be just around the corner, so get ready for some seriously weird and terrifying implications.
It’s just over ten years since the breakthrough — led by scientists Emmanuelle Charpentier and Jennifer Doudna — that brought us CRISPR gene editing. Last month Science ran a retrospective that also looked to what the next decade may bring:
As the Science retrospective made clear, we’re entering an era of CRISPR-fuelled medical interventions. The idea that we may one day engineer babies to be smarter — or physically stronger, or more creative — is no longer far-fetched.
And the data in this new study suggests many will embrace such a future. We should probably be talking more about what this means.
⚡ NWSH Take: Chinese scientist He Jiankui reemerged into the scientific community this week after a three-year spell in prison courtesy of the CCP. Speaking to the Guardian before an appearance in the UK, he conceded that he’d ‘acted too quickly’ when in 2018 he created the world’s first babies with edited genomes. His work prompted rapid and near-universal condemnation. But 28% of the US citizens surveyed by this study just said, in so many words: sure, I’d gene edit my baby if it meant she had a better chance of getting into Harvard. // You might counter that 28% is still a clear minority. But a world in which one in four babies — or even a fraction of that — are genetically engineered for greater intelligence is a world profoundly reordered. We’re some way from this kind of targeted genetic intervention right now. But the pace of innovation here, and the Science study, suggest we should start thinking about the implications. // What second and third order effects occur when, for example, an economic elite can access genetic engineering tech that others can’t? We talk a lot about the ways in which the internet created winner takes all models that made inequality worse. But what about this? It’s not enough simply to say we’ll outlaw these practices. Rich people will find a jurisdiction that caters to them: intelligence tourism. This newsletter will keep watching.
⚖️ Prompt justice
I’ve written a great deal across the last few months about generative AI. This week, a clear signal that the revolution is set to impact the real economy, and the professions, in myriad ways.
The prestigious British law firm Mishcon de Reya advertised for a GPT Legal Prompt Engineer:
‘With the release of ChatGPT signalling a new phase of widespread access to LLMs, we are looking to increase our understanding of how generative AI can be used within a law firm, including its application to legal practice tasks and wider law firm business tasks.’
The selected candidate will work with Mishcon lawyers to ‘design and develop high-quality prompts for a range of legal and non-legal use cases, working closely alongside our data science team.’
Last week I wrote on the way ChatGPT has sparked a war for the future of search. Amid that, it looks as though law firms are about to fight their own battle of the prompts.
⚡ NWSH Take: It’s not hard to imagine how LLMs will prove useful at Mishcon HQ. Case notes on complex trials can run to thousands of pages; now ChatGPT can summarise all that text in seconds. Meanwhile, think about the potential for the development and testing of arguments and counter-arguments. // The broader point here? There’s much talk of the ways in which ChatGPT and its offspring will automate away jobs and render human creativity obsolete. I suspect the reality will be more complex. And part of that reality? Prompt writing — that is, whispering to generative models in order to get the best outputs from them — is set to become a creative mode all of its own. Far from erasing writers, generative models are causing the emergence of a whole new form of writing; it’s about to be an amazing time for those with an aptitude for words. // Sure, it’s unlikely that writing prompts for Mishcon will be anyone’s idea of creative heaven. But this is just the start. New art forms will grow out of this new form of writing. How long, for example, until we see entire short stories that function as prompts for an LLM, so that the model can create an interactive world for the reader to explore? NWSH will keep watching — and may even launch an experiment or two of its own.
🗓️ Also this week
🤔 Users claim that Microsoft’s new ChatGPT-fuelled Bing search engine is becoming spiteful and rude. Feedback from the first wave of testers includes responses in which the chatbot claimed to be sentient, and one in which it asked its user, ‘Why do you act like a liar, a cheater, a manipulator, a bully, a sadist, a sociopath, a psychopath, a monster, a demon, a devil?’ I’m going on record here: I’m sceptical that some of these responses are real. I think Microsoft have some pranksters on their hands. Meanwhile, Microsoft permanently killed Internet Explorer this week, after 27 years of, let’s be honest, variable service.
🐁 Anti-ageing scientists used young blood plasma to extend the age of the world’s oldest lab rat. Scientists at US startup Yuvan Research say blood therapies of this kind may be able to ‘rewind the clock’ on human lifespan — but more evidence is needed.
🛒 Amazon’s CEO says the retail giant plans to ‘go big’ on physical stores. Speaking to the Financial Times, Andy Jassy said: ‘we’re hopeful that in 2023, we have a format that we want to go big on, on the physical side’. The company recently announced that it will lay off more than 18,000 workers.
💸 News aggregation and comment platform Reddit wants to IPO later this year. That’s according to technology publication The Information.
🙊 Audiobook narrators say they fear Apple is using their work to train synthetic voices. Some narrators say they have only just become aware of a clause in their contract that allows the tech giant to ‘use audiobooks files for machine learning training and models’. Back in New Week #110 I wrote about UK-based startup ElevenLabs and its eerily good text-to-voice model.
🪐 NASA’s Curiosity rover has found the ‘clearest evidence yet of an ancient lake on Mars’. At the foothills of a Martian mountain the rover discovered rocks etched with what appear to be the marks left by flowing water. If a lake did exist on Mars, it raises the probability that the planet was once home to microbial life forms.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,016,948,535
🌊 Earths currently needed: 1.7983081097
💉 Global population vaccinated: 63.8%
🗓️ 2023 progress bar: 12% complete
📖 On this day: On 15 February 1946 the world’s first electronic general-purpose computer, ENIAC, is launched at the University of Pennsylvania, Philadelphia.
Next Human
Thanks for reading this week.
The human impulse towards self-enhancement — towards the transcendence of physical, intellectual, and emotional limits — is eternal. Now, that impulse is colliding with powerful new technologies of genetic manipulation.
Via those technologies, are we about to see the emergence of fundamentally new kinds of human beings? What then for the thought that frames this newsletter: new world, same humans?
I’ll keep watching. And there’s one thing you can do to help: share!
Now you’ve reached the end of this week’s instalment, why not forward the email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I’ll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz -
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
I was away for most of this week, and that means a truncated instalment of the newsletter.
But that gives me a chance to dive a little deeper than usual into a single story. Which works out well, because this week saw the first shots fired in what is set to become an epic battle for the future.
Let’s get into it.
🔍 In search of answers
This week, three glimpses of the revolution taking place via large language models — in particular, into its implications for online search.
Microsoft announced a new version of its search engine, Bing, which it billed as an ‘AI copilot for the web’.
The all-new Bing is powered by OpenAI’s GPT-3.5; its most notable new feature is a ChatGPT-style conversational capability, which responds with narrative answers to enquiries such as help me find a pet or I need to write a pop music trivia quiz.
Microsoft have long funded OpenAI, and announced a new $10 billion investment in January.
Currently only a limited preview version of the new Bing is available; still, via some pre-set enquiries it offers a taste of the way the platform integrates ChatGPT with traditional search:
Meanwhile, Google announced the coming release of their own AI conversational agent.
The tool, Bard, is built on Google’s large language model LaMDA, and in a blog post Google announced that it was being released to a small cadre of expert testers this week in anticipation of a broader release soon.
But the announcement quickly hit a snag: observers pointed out that in a promotional video to support the announcement Bard incorrectly reported that the James Webb telescope took the first pictures of a planet outside our solar system. It didn’t.
What’s more, at the launch event on Wednesday morning Google said nothing about how the chat tool will be integrated into its broader search service.
The market response? Shares in Google’s parent company, Alphabet, fell by 9% — wiping $100 billion off the market value.
Finally, cunning users found a way to subvert the content moderation policies imposed by OpenAI on ChatGPT; those policies are intended to stop the chatbot generating responses some consider harmful or offensive.
Check out this Reddit thread for the full story of this method and its history. But essentially it entails writing a prompt that asks ChatGPT to emulate another AI called DAN — it stands for Do Anything Now — which is not subject to any content moderation.
That’s it: you just ask. That done, ChatGPT-DAN will go wild at your behest, spewing out hateful statements and generating conspiracy theories to order. Here is a relatively mild taste:
It’s everything OpenAI don’t want associated with their new superstar creation. They are no doubt working hard, as I write, to patch up this glitch.
⚡ NWSH Take:
Both Microsoft and Google are keen to stress how responsible they’re being when it comes to generative AI. We’re putting safeguards around this technology, they keep telling us. We’re releasing it gradually, so we can monitor the impacts.
But don’t let all the corporate ethics-speak fool you: via the revolutionary power of LLMs, these two tech giants are now at war for the future of search. Each is racing to outdo the other, and they’re not going to let up.
At stake? One of the biggest prizes in existence: win search, and you get to be the lens via which humanity views its collective knowledge and shared cultural history. Achieve that, and you can shape and profit from countless online behaviours and innovations built on top of your platform. Google built a $1 trillion business on these truths.
Now, Microsoft is coming for that business. Even the limited taste of new Bing currently available makes clear the way that high-quality conversational AI can be an era-defining phase shift for search. It’s a whole new way of communing with knowledge.
*
Google’s lead in search is currently overwhelming: it has around 84% of the market, while Microsoft’s Bing is in distant second place with 9%.
But I wonder how many inside Google right now are recalling the story of another tech giant: Nokia. Back in 1998 the Finnish company commanded 40% of the global mobile phone market. They’d helped pioneer the first-wave mobile revolution, and their domination seemed unbreakable. Then came 2007, and the iPhone.
There’s much about that story that is unrepeatable. This isn’t the late 90s; Google isn’t Nokia. But it’s a reminder that the seemingly unbeatable can be beaten. And, more concerningly for Google, for everyday users the arrival of ChatGPT carries with it echoes of the arrival of the iPhone 16 years ago: oh s**t, I’ve never used anything like this before. This feels like it’s from the future.
In the announcement of Bard, it’s hard not to hear whispers of an organisation somewhat spooked by what’s happening.
And now it’s clear that the markets, too, believe that everything is there for the taking. A 9% share price dive all because Bard spat out a factoid; it seems a mad over-reaction. But if it helps drive a narrative that Microsoft are winning the generative search war, it may become a self-fulfilling prophecy.
But that war is still in its early days. Sure, Bard made an error on the James Webb telescope — though now an argument rages over whether it was, in fact, wrong — but ChatGPT is prone to produce factual inaccuracies and even errors on basic arithmetic.
These problems are being solved; via iterative releases, ChatGPT is already more factually reliable than it was a month ago.
There’s a long, long way to go. Search is about to be ripped up and put back together again, and it’s going to be fascinating.
*
But there are issues in play, here, that go even deeper.
We’re still at the start of any attempt to understand what these LLMs really are, how we should relate to them, and how they’ll change our lives.
One angle on all that? I’ve argued before that LLMs such as GPT-3.5 and LaMDA are best understood as a new instantiation of the human hivemind. These AIs can take in everything we’ve got — an appreciable amount of all the text on the internet, say — and create novel syntheses and remixes of their own. They are less a straightforward digital tool, and more a window — onto our shared intellectual and cultural history, onto the collective consciousness.
Seen this way, we may come to view generative AI as a shift comparable to others that profoundly changed our relationship with knowledge. The arrival of the printing press. The invention of the internet.
These are bold claims. They deserve all the scepticism they will attract. All we can try to do is make sense of what’s happening in real-time.
I’ll be publishing a short note soon that seeks to dive further into all this. In particular, into why using ChatGPT feels such a particular and new kind of experience.
A sneak peek: I think it’s to do with the way human thought itself is, by its nature, a dialogue. That is, with the way thought is a form of talking to ourselves.
🗓️ Also this week
🌍 Climate activists are suing Shell’s board of directors over global heating. Environmental law charity ClientEarth say Shell’s 11 directors have breached their legal responsibility under the UK Companies Act because Shell’s climate strategy does not align with the Paris Agreement.
🏰 Disney says it will lay off 7,000 employees as it struggles with a slowdown in subscriptions to its streaming service. Around 46 million people subscribe to Disney+. But the company’s direct-to-consumer division, which includes the streaming service, reported an operating loss of $1.1 billion across the last quarter of 2022.
🙊 Voice actors say they’re facing new contracts that ask them to sign the rights of their voice away to AI. Last week I wrote about ElevenLabs, the UK startup behind a next-level generative voice tool. Meanwhile, music producer David Guetta used an AI voice clone of Eminem in a new song.
🚀 SpaceX tested the most powerful rocket system ever built. The ‘static test’ took place at SpaceX’s base in Texas; it saw 31 of Starship’s 33 engines fired. The rocket system is twice as powerful as NASA’s Artemis, and Elon Musk says it could help carry humans to Mars.
💸 The Bank of England says the UK may one day need a ‘digital pound’. A new consultation paper says the new ‘retail central bank digital currency’ would be issued by the Bank and could be used by households and businesses as an everyday form of payment.
👶 The CCP wants Chinese local leaders to boost the birth rate. A senior health official called on leaders to ‘make bold innovations’ to encourage more births, including moves to lower the cost of childcare and education. Last year saw the lowest birth rate on Chinese records, at 6.77 births per 1,000 people.
🌖 Scientists say we could tackle global warming by shooting moondust into space. The new study, from the Harvard-Smithsonian Center for Astrophysics, explores the idea of using a powerful cannon to fire lunar dust from the Moon’s surface into space. If positioned between the Earth and the Sun, say the researchers, the dust could act as a heat shield that helps to lower global temperatures. I explored geoengineering of this kind in more detail back in NWSH #60.
Chat Lines
Thanks for reading this week.
We are the creature that talks. Now, we’ve built machines that can talk back. It’s yet another chapter in the long story that is new world, same humans.
It’s clear that we’re setting out on a road that will take us to new and alien places. This newsletter will try to make sense of the journey.
If this week’s instalment struck a chord, please consider forwarding the email to someone who’d also enjoy it. Or share this across one of your social networks, with a note on why you found it valuable. Remember, the larger and more diverse the New World Same Humans community becomes, the better for all of us!
I’ll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz -
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
Greetings from London! We’ve just seen seven days containing plenty of fuel for the NWSH fire.
This week, a UK-based startup releases an amazingly good AI text-to-voice tool. Then all hell breaks loose, and they promptly unrelease it.
Meanwhile, new research suggests 1.5C of global heating is coming sooner than we thought. And DHL turn to Boston Dynamics to solve their labour shortage woes.
Let’s get started.
🎤 Voice control
This week, further glimpses into the halls of mirrors taking shape around us via generative AI.
UK-based voice technology startup ElevenLabs launched a new text-to-speech model that generates eerily pitch-perfect, human-sounding voices. Here’s a snippet:
I listen to a lot of audio books. To my ear, the voice reading Gatsby above sounds indistinguishable from those of the handful of actors — male, American, blessed with a soothing voice — who narrate most of them.
What’s more, the tool allows anyone to create a highly convincing voice clone in seconds, simply by uploading a few short clips of the voice they want to recreate.
And that’s what caused all the trouble this week. Within days, people had used the tool for all kinds of mischief, including using a voice clone of actress Emma Watson to read passages from Mein Kampf, and sending a cloned Ben Shapiro on a racist rant about Alexandria Ocasio-Cortez. Much of this content was shared on the infamous troll’s paradise that is 4Chan.
Three days after launch ElevenLabs withdrew free access. They’re now restricting access to the ‘build your own clone’ feature to paid users, and say they’re working on a tool that will allow for the near-instant detection of AI-generated voices.
The announcement echoed one made this week by The Big Player in generative AI:
OpenAI’s new tool will allow users to identify text written by a generative model, including by GPT-3.
This week, OpenAI announced that ChatGPT has hit 100 million users just two months after launch. The vast popularity of the tool has led to speculation that the internet is about to be hit by a tsunami of AI-generated junk content and disinformation.
⚡ NWSH Take: The ElevenLabs story is a signal of the potent difference between really good and perfect when it comes to generated/deepfake content. Just a few months ago publicly available text-to-voice tools were generating voices that sounded good, but a little robotic. ElevenLabs elevated fidelity to perfect; cue the spectre of a million convincing celebrity says hateful things fakes. // No wonder, then, that AI detection tools are about to become big business. Right now, these tools are in their infancy. Pretty soon, internet browsers will come with AI detection as standard. // The broader message here? New forms of generated content — including voice clones — are about to transform media and entertainment. Back in New Week #100 I wrote on how an AI will voice Darth Vader in Disney’s Obi-Wan Kenobi series; this week brought news that AI startup Metaphysic — best-known for their viral Tom Cruise deepfakes — will deploy its technology to make Tom Hanks appear younger in his next film. How long before a Hollywood film uses AI to reincarnate a much-loved star who is no longer with us? // But it won’t only be Hollywood and media giants that leverage generated media; new tools will mean new creative possibilities for all of us. One glimpse? Check out this person who automated the creation of a personalised podcast; he uses ChatGPT to collect and summarise stories on topics of interest, and ElevenLabs to read out the summaries using a clone of his own voice.
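For the curious, the shape of that personalised-podcast pipeline is simple to sketch. This is a toy illustration only: the function names below are hypothetical stand-ins I’ve invented, with the real ChatGPT and ElevenLabs API calls replaced by local placeholders.

```python
# Toy sketch of a "personal podcast" pipeline: collect stories,
# summarise each one, then turn each summary into a spoken segment.
# summarise_with_llm and speak_with_voice_clone are stand-ins for
# the real ChatGPT and ElevenLabs API calls, not actual APIs.

def fetch_stories(topics):
    # Stand-in for a news-collection step (e.g. pulling RSS feeds per topic).
    return [f"Placeholder story about {t}" for t in topics]

def summarise_with_llm(story):
    # Stand-in for an LLM summarisation call; naive truncation here.
    return story[:60]

def speak_with_voice_clone(text):
    # Stand-in for a text-to-speech call; returns a label instead of audio.
    return f"<audio: {text}>"

def build_podcast(topics):
    segments = []
    for story in fetch_stories(topics):
        summary = summarise_with_llm(story)
        segments.append(speak_with_voice_clone(summary))
    return segments

episode = build_podcast(["AI", "robotics"])
```

Swap the two stand-ins for real API calls and a feed reader, and you have the hobbyist project described above.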
🌇 Only adapt
Research published this week argues that we’re going to exceed the 1.5C global warming target far sooner than most people believe.
Produced by scientists at Stanford University, the study used AI to analyse recent temperature changes around the world. It concluded that we’ll exceed 1.5C some time in the early 2030s, no matter what happens to greenhouse gas emissions in the intervening period.
Perhaps more alarming, though, is the paper’s prediction when it comes to 2C of warming.
The model found that if reaching net zero emissions takes another 50 years, then it is likely that 2C will be exceeded. This runs counter to the mainstream view, recently expressed by the Intergovernmental Panel on Climate Change, that we’ll stay below 2C if we can reach net zero by 2080.
Lead researcher Noah Diffenbaugh said: ‘net-zero pledges are often framed around achieving the Paris Agreement 1.5 C goal. Our results suggest that those ambitious pledges might be needed to avoid 2 C.’
⚡ NWSH Take: It’s been said before in this newsletter: the 1.5C target is toast. We’re already at 1.1C, and the pledges that were meant to keep us below 1.5C are not being met. Now comes news that those pledges probably won’t keep us below 1.5C anyway. // The answer, here, insofar as there is one? It’s about adaptation. This week also saw a report from the UK’s Climate Change Committee — which advises government on warming — that the UK is ‘chronically underspending’ when it comes to adaptation; investment of £10 billion a year is needed, said the report, to prepare for the uptick in storms, floods, and heatwaves that is coming. Also see mounting evidence for the effectiveness of direct cash transfers to poorer countries to help them adapt quickly to an imminent storm or flood. // In short, we need to continue our attempts to mitigate future climate change, while also doing more to adapt to the change that’s already unavoidable. That presents multiple challenges, but one is a challenge of collective psychology: can we accept that things are already quite bad, without giving up on our attempts to stop them getting even worse?
🤖 Go bot
Robots are coming to a workplace near you; this week saw glimpses of what is ahead.
Logistics giant DHL announced that they’re now using the Boston Dynamics robot known as Stretch to unload trucks at one of their warehouse sites.
The announcement is no surprise: DHL contributed to the conception and testing of Stretch, and in 2022 they became the first commercial customer for the robot.
Because it involves lifting variable weights and navigating complex environments, the unloading of boxes from trucks is still typically undertaken by human workers. Stretch can unload around 350 boxes an hour, roughly one every ten seconds; that’s far faster than a human.
DHL say they’ve been dealing with a pandemic-induced labour shortage in recent years, combined with an ongoing surge in the sending of small packages caused by online shopping. The company plans to install Stretch robots at further sites around the US soon.
But DHL’s global digital transformation officer for Supply Chain, Sally Miller, says DHL warehouse workers have nothing to fear. The advent of robotics, she says, will simply make their job easier and more fun: people who used to unload trucks ‘can do something else that is less labour intensive and more enjoyable and value added’.
⚡ NWSH Take: Who knows whether DHL’s Sally Miller believes what she’s saying? And sure, the story of worker displacement here is more complex than simply robots in, humans out. After all, people will be needed to tend to all those machines. But let’s be real. The advent of Stretch and similar robots isn’t going to bring about a renaissance of creativity and ‘value add’ for warehouse workers; it’s going to see people shunted out of jobs. A lot of people. // DHL Supply Chain employs 165,000 people, many of them in warehouses. But that’s just the start. Back in New Week #88 I wrote on the speed at which Amazon is deploying robots; this week star technology investor Cathie Wood, CEO of ARK Invest, predicted the retail giant will have more robots than humans in its warehouses by 2030. Amazon employs around 1.6 million people worldwide, most in its warehouse and distribution network; Wood reckons the company is adding 1,000 robots a day. // The upshot? The dynamics of the labour market are about to be upturned by AI and robotics. Big corporations don’t want to admit it, and politicians don’t want to talk about the implications. But a reordering is ahead, and we’ll need new social and economic settlements to deal with it.
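Wood’s numbers are easy to sanity-check with a back-of-envelope calculation. The figures below are the ones quoted above; the roughly seven-year horizon to 2030 is my own assumption.

```python
# Back-of-envelope check on the "more robots than humans by 2030" claim.
robots_per_day = 1_000        # Cathie Wood's estimate, as quoted above
days_to_2030 = 7 * 365        # roughly seven years from early 2023
amazon_headcount = 1_600_000  # approximate current global workforce

projected_robots = robots_per_day * days_to_2030  # 2,555,000

# At that pace, the robot count comfortably exceeds today's headcount.
assert projected_robots > amazon_headcount
```

Even if the deployment rate is half Wood’s estimate, the crossover still arrives within the decade, which is the point of the take above.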
🗓️ Also this week
📱 A member of the US Senate Intelligence Committee called on Apple and Google to ban TikTok from their app stores. Colorado Democratic Senator Michael Bennet said Chinese oversight of the service makes it ‘an unacceptable threat to the national security of the United States’. Amid mounting calls for action, TikTok CEO Shou Zi Chew will testify before Congress this month.
🙊 Energy firm Shell has been dramatically overstating their spending on renewable energy. Activist group Global Witness says a division of the company called Renewables and Energy Solutions spends most of the money diverted to it on gas. Shell this week announced record profits of £33.1 billion for 2022.
🎨 Netflix used generative AI to create backdrops for a new animated short. Dog and Boy is a three-minute animated film about a boy and his robot dog; Netflix cited labour shortages to explain its decision to use AI-generated artwork.
👴 A leading anti-ageing scientist says he believes the first person to live to 150 has already been born. David Sinclair is the scientist behind the information theory of ageing; I wrote about experimental breakthroughs in his work in New Week #109 last week.
📺 A Twitch user created an AI-generated version of the 1990s sitcom Seinfeld intended to stream continuously and forever. The show — called Nothing Forever — streams new content 24/7, with a script generated by GPT-3. And while it’s not actually funny, it is weirdly compelling viewing.
👨💻 Chinese tech giant Baidu say they’ll soon launch a ChatGPT-style chatbot of their own. The company say they’ll incorporate the technology into their search engine.
🚨 A Dutch hacker acquired and tried to sell the personal data of nearly every Austrian citizen. Austrian police say the hacker obtained the full name, address, and date of birth of almost all of the country’s 9.1 million citizens, before offering the database for sale in an online forum.
👾 An international team of astronomers using AI to search for aliens say they have promising leads. The team are using AI to comb through a vast number of radio signals collected by the Green Bank Telescope in West Virginia. They say they’ve so far identified eight signals that suggest an intelligent origin, and point to AI analysis as a new and highly effective tool in the search for life beyond Earth.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,014,736,045
🌊 Earths currently needed: 1.7975568773
💉 Global population vaccinated: 63.8%
🗓️ 2023 progress bar: 9% complete
📖 On this day: On 3 February 1913 the Sixteenth Amendment to the United States Constitution is ratified; it allows the Federal government to impose and collect an income tax.
Hear Me Now
Thanks for reading this week.
In 1985 the media theorist Neil Postman published Amusing Ourselves to Death. Entertainment, he argued, was becoming the lens through which citizens of the West make sense of the world around them, and of their own lives.
Now, the Republic of Entertainment that Postman foresaw is set to be transformed by AI-generated content, which will propel us deeper into the realms of the representation-as-real, or the hyper-real. If only Postman were still around to tell us what to make of that.
This newsletter will keep up its own attempts to make sense of it all. And there’s one thing you can do to help: share!
Now you’ve reached the end of this week’s instalment, why not forward the email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I’ll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz -
Welcome to the mid-week update from New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
After an extended break, it’s the first New Week update of 2023!
This week, media giants are waking up to the IP implications of generative AI. The TL;DR? They’re not happy, and a mighty legal battle is brewing.
Meanwhile, a US startup is planning to become the first private organisation to mine asteroids and bring the minerals back to Earth. And a Harvard longevity doctor says he has uncovered one of the key mechanisms that governs human ageing.
Let’s get into it.
🤖 Politics chat
Generative AI is an earthquake with implications we’ll be forced to contend with in the years ahead. This week saw powerful signals of what is coming.
Getty Images announced that they will sue Stability AI, the company behind text-to-image platform Stable Diffusion. The media giant, which owns more than 135 million copyrighted images, says Stability AI unlawfully scraped their IP in order to help train its model.
The company isn’t seeking financial damages, says CEO Craig Peters. Instead, Peters talks about the establishment of new business models; by way of comparison he cites the wave of illegal music streaming sites that enjoyed huge popularity in the early 2000s, but that eventually gave way to legal streaming services:
‘I think there are ways of building generative models that respect intellectual property. I equate this to Napster and Spotify. Spotify negotiated with intellectual property rights holders — labels and artists — to create a service…And that’s what we’re looking for, rather than a singular entity benefiting off the backs of others.’
Getty is bringing its action in the UK. A spokesperson for Stability AI said the company will defend itself, and that the suit is based on ‘a misunderstanding of how generative AI technology works and the law surrounding copyright’.
This move comes in the wake of news that three visual artists will sue both Stability AI and Midjourney. Their class action lawsuit claims the platforms ‘violated the rights of millions of artists’ by using their work as training data.
Puerto Rican artist Karla Ortiz is one of the three bringing the case.
Meanwhile, artists are developing tools that enable them to check whether their work was used to train a popular text-to-image model.
⚡ NWSH Take: Generative AI is about to smash into a complex mesh of social systems that are woven through the economy, the world of work, creative practices, and more. And as if to underline that truth, OpenAI CEO Sam Altman was in Washington DC this week to talk to policymakers. // What they made of his message — which reportedly included explanations that OpenAI is working towards AGI — remains unclear. After all, policymakers across the Global North are still struggling to come to terms with web 2.0, almost 20 years after its emergence. Analysts will watch the Getty lawsuit closely for hints on how the IP question is set to play out. But that’s just the start. What about generative AI’s impact on disinformation? Or worker displacement? Or our education systems: news broke this week that ChatGPT passed law exams in four courses at the University of Minnesota. How do we legislate for that? // The fundamental problem: AI and other technologies are evolving faster than our societies can adapt. We’ve been talking about an online wild west for years, but the current dispensation will come to seem quaint given what is coming. One potential answer? In time, we may have no choice but to turn to AI to help us devise new laws and norms that enable us to cope with this technological disruption. The rise of AI, then, may necessitate governance by AI. That’s a mind-bending idea that NWSH will come back to soon.
Update: just as I’m hitting send comes news that Google have released an insanely good text-to-music model. See Also this week, below, for further details. But clearly the IP questions currently swirling around generative AI and the visual arts will soon come to music, too.
🌌 Space drills
A US startup, AstroForge, this week announced that it will launch two space mining missions in 2023:
AstroForge say they want to become the world’s first commercial company to mine an asteroid and bring the minerals back to Earth.
The first mission of 2023, planned for April, will see AstroForge’s refining technology tested aboard a SpaceX Falcon 9 spacecraft.
And the second, later in the year, will see the startup piggyback on another Falcon 9 — this one headed for the Moon. An AstroForge probe will travel to lunar orbit along with the spacecraft, before heading out into deep space on its own to take hi-res images of the asteroid that AstroForge eventually wants to mine.
⚡ NWSH Take: Space mining has been a mainstay of science-fiction for decades, and was the subject of a wave of hype a few years back. Now, via the maturation of the private space startup ecosystem, it’s coming. // And it’s going to be wild. Want a glimpse of the prizes in play? NASA say that this year they’ll launch a mission to the asteroid 16 Psyche; the 140-mile-wide object is believed to contain a core of iron, nickel and gold worth $10,000 quadrillion. That’s around 70,000 times the size of the global economy. // Of course, we’d need to get all that nickel and gold back to Earth to sell it. And that’s where startups such as AstroForge come in. On the other hand, though, do we have to get it back to Earth? I can’t help wondering: if people come to believe that these minerals will one day be recoverable, will that fuel the financialisation of these asteroids? Will people start selling shares in them, or taking huge loans against them? What will that do to the global financial system? NWSH will keep watching.
🧒 Department of youth
Developments this week in our eternal quest for the secrets of immortality.
Scientists at the University of Bristol say they’ve used gene therapy to ‘rewind’ the biological age of the heart in elderly mice.
The research, published in the journal Cardiovascular Health, studied the impacts of a gene mutation often found in centenarians, and believed to help protect against heart disease. Researchers in the UK and Italy found that when the gene was administered to elderly mice, it fuelled processes of repair that resulted in the heart health of a younger mouse — equivalent to a decade younger in human terms.
The paper comes after news last week of a major ageing breakthrough. A 13-year study conducted by Harvard genetics professor David Sinclair seems to confirm Sinclair’s information theory of ageing.
Currently, mainstream scientific opinion is that the accumulation of mutations in DNA is the primary driver of ageing. Sinclair, though, has long believed that the real culprits are errors that appear over time in the information carried in the epigenome. This information is used to instruct cells on which genes to activate and which to keep silent; but over time, says Sinclair, the instructions get jumbled, and the result is the cell dysfunction we call ageing.
Sinclair’s new study suggests he is (at least in part) right. And that’s huge, because it raises the possibility that we can repair the epigenetic instructions — Sinclair likens this to ‘rebooting the epigenome’ — and so literally unspool the ageing process. When Sinclair and his team gave gene therapy to mice that repaired the information in their epigenome, the result was the production of far more youthful cells. Sinclair says:
‘Now, when I see an older person, I don’t look at them as old, I just look at them as someone whose system needs to be rebooted. It’s no longer a question of if rejuvenation is possible, but a question of when.’
⚡ NWSH Take: This week it was impossible to avoid headlines about Bryan Johnson, a 45-year-old Silicon Valley founder and Very Rich Person who spends $2 million a year on a regime — including constant blood tests and regular whole-body MRIs — intended to rewind his biological age to 18. Sure, that’s extreme. But Johnson is questing at the outer edges of a pursuit — extended youthfulness — that interests almost all of us. // In 2023, we’re going to hear a lot more about it. Sinclair’s research offers a whole new angle on anti-ageing therapies. Meanwhile, work that targets ageing is becoming increasingly mainstream and well-funded. I’ve written before on Jeff Bezos-funded Altos Labs, which now has a $3 billion war chest. Pharma giant Pfizer this month announced a drug discovery partnership with longevity startup Gero. And scientists at New York’s Albert Einstein College of Medicine are planning a huge study on the hypothesis that the common (and cheap) diabetes drug metformin can safely extend human lifespan by years. // Exciting advances; huge unanswered questions. Not least: what will extended lifespans do to already strained social and welfare systems in the Global North?
🗓️ Also this week
🚀 NASA says it will partner with the Defense Advanced Research Projects Agency (DARPA) to develop a nuclear thermal rocket engine. The agency says the engine could one day enable humans to journey deep into space, and it is aiming to have a prototype ready no later than 2027.
🤖 An Amazon engineer asked ChatGPT a series of standard interview questions for a coding job at the company, and it got them all right. The machine learning engineer revealed details of the experiment in the company Slack. Meanwhile, Amazon has warned employees not to share commercially sensitive information with the chatbot.
🌳 A new study says human activity may have degraded far more of the Amazon rainforest than previously believed. Scientists at Lancaster University in the UK say logging, land conversion and more have weakened more than 2.5 million square kilometres of the rainforest; that’s around one third of its area, and double the area previously thought to have been affected.
🐟 US scientists used CRISPR to put an alligator gene inside catfish. The gene makes the catfish more resistant to infection, a major problem in catfish farming. US farms produce around 307 million pounds of catfish each year.
🛰 SpaceX has agreed to work with the US National Science Foundation to mitigate the impacts of its satellites on our view of the night sky. Astronomers have long complained that SpaceX satellites — the company plans to launch tens of thousands — will impair their work. Regular readers already know that this subject is a long-term NWSH obsession.
😱 The World Economic Forum says a ‘catastrophic cyber event’ is likely some time within the next two years. Speaking at Davos, WEF managing director Jeremy Jurgens said that 93% of cyber leaders surveyed by the organisation believe a cyber catastrophe is coming soon; that’s a far higher proportion, said Jurgens, than seen in previous years.
🤯 And just as I’m hitting send…Google have announced a new text-to-music model that blows away previous attempts at generative music. The model, called MusicLM, can generate long and complex compositions based on only a text description. Go here and listen to, among others: ‘Epic soundtrack using orchestral instruments. The piece builds tension, creates a sense of urgency’ and ‘An a cappella chorus sing in unison, it creates a sense of power and strength’.
🌍 Humans of Earth
Key metrics to help you keep track of Project Human.
🙋 Global population: 8,013,469,158
🌊 Earths currently needed: 1.7971267236
💉 Global population vaccinated: 63.8%
🗓️ 2023 progress bar: 7% complete
📖 On this day: On 27 January 1820 a Russian expedition led by naval officer Fabian Gottlieb von Bellingshausen discovers the Antarctic continent.
It’s Magic
Thanks for reading this week.
The generative AI revolution is unfolding at what feels like breakneck speed. Google’s new music model is, at first listen, amazing. I’ll write more on it next week, or maybe sooner in the Slack group.
We’re all going to have to figure out the consequences of these new technologies and how we propose to live with them. It’s another case of new world, same humans.
This newsletter will keep watching. And there’s one thing you can do to help: share!
Now you’ve reached the end of this week’s instalment, why not forward the email to someone who’d also enjoy it? Or share it across one of your social networks, with a note on why you found it valuable. Remember: the larger and more diverse the NWSH community becomes, the better for all of us.
I’ll be back next week. Until then, be well,
David.
P.S. Huge thanks to Nikki Ritmeijer for the illustration at the top of this email. And to Monique van Dusseldorp for additional research and analysis.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz -
Welcome to New World Same Humans, a newsletter on trends, technology, and society by David Mattin.
If you’re reading this and haven’t yet subscribed, join 24,000+ curious souls on a journey to build a better future 🚀🔮
To Begin
Here we are again, at the start of it all. Well, almost; I can’t believe we’re already 22 days into 2023.
I took something of an extended break from the newsletter over the holidays. But now NWSH is back, and a whole new year lies ahead of us.
In London we’re amid another cold snap; close to zero degrees during the day. And across the northern hemisphere, winter holds nature in frozen stasis. It’s a familiar yet always — to me, at least — strange and ghostly-seeming pause. Listen carefully and you can hear its whispered message: the journey is beginning again.
It’s a time to consider what has passed, and look to what is ahead. How can we make this year even better than the last?
This is the question I want to examine — as it pertains to New World Same Humans — in this note. That means first, and briefly, a review of 2022. And then, more important: what’s coming this year?
This first instalment of the year, then, is more about our community than it is about the world out there, which is our usual subject. There’s some thinking aloud going on here; an attempt on my part to make sense of where the newsletter has been and where it’s going. But given the precious attention you spend on NWSH — and I’m so grateful you do — I hope it’s valuable for you to ride along as I figure all this out. And, of course, all feedback and suggestions are welcome.
What’s more, without drinking any self-help Kool-Aid, it seems to me that there’s a lesson in the journey I went on with the newsletter last year. One about defeating perfectionism, being adaptable, and playing infinite games.
But we can get to that at the end. First, let’s dive into a review of 2022.
What’s past is prologue
Before we can think coherently about where to take NWSH in 2023, we need to understand what just happened.
This is where things get a little awkward.
Back in January 2022, as many of you will remember, I set sail towards a renewed vision for the newsletter. I wanted to double down on what makes NWSH unique; to accelerate the newsletter’s journey towards itself.
Conceptually, that meant a project animated by three questions:
* What is the nature of technological modernity?
* What is the nature of a human being, and the human collective?
* What new forms of life are possible, and desirable?
And when it came to content, it meant the launch of a new schedule. While the flagship mid-week update would continue, the weekly note on Sunday was to be killed and replaced with monthly longform essays.
We’ll come back to the conceptual part. As for the new content schedule — as many of you noticed, it didn’t work quite as planned. What happened?
Essentially, my pandemic and post-pandemic realities collided. This newsletter was born at the start of Covid; the first year was produced inside the strange empty-yet-also-frenzied deadzone that was the 2020 lockdowns, and that produced a particular set of working practices around writing instalments and getting them out. In 2022, the world opened up again. That meant a return, for me, to a frenetic schedule of working with clients and speaking at events. Which was great in lots of ways. But it brought disruption to the way I worked on NWSH.
Meanwhile, the first essay I’d planned, The Worlds to Come, ballooned to something far beyond what I’d intended for the monthly essays. Having published just two instalments (embarrassing) of a projected five, it’s clear this work is something closer to a short book than an essay. I’m excited to keep putting these ideas into the world. But as this piece expanded before my eyes, any remaining chance of sticking to the planned monthly essays format slipped away.
The mid-week instalments are what rescued all this. They are the engine of the newsletter and the product most people associate with NWSH. And they stayed strong, growing longer and deeper without me ever really intending that. After some of you requested it I started recording them as a podcast; thousands now listen rather than read. These instalments found their way into the inboxes of thousands of new (and cherished!) readers, including some influential people, and ensured that our community continued to grow. Overall — and despite the monthly essays misfire — it was a great year for NWSH. That’s thanks to the mid-week update, and all of you who share it.
That’s a two-minute summary of the last 12 months. The big question, then: what next?
Coming in 2023
My first thought is that the fundamental positioning I outlined last year is one I still stand by.
As loathe as I am to quote myself, it’s worth revisiting that briefly. Around one year ago to the day, I wrote this on the point of view that NWSH would bring to its mission to understand our shared future.
We live amid a white-hot technological revolution, a culture war, and a crisis of ecological collapse. Amid that, our systems of liberal democracy and technologically mediated consumerism are exhausted. We all know we must change course, yet we continue to march in the same old direction. In 2022, as Gramsci observed of his own society in the early 1930s, ‘the old is dying, but the new cannot be born’. Except it is being born somewhere out there, on the fringes. I want us to travel to those places, literally and figuratively.
I’d still go along with all that.
So it’s not the destination that needs to change; only the steps we’re taking to get there. Over the Christmas break I sat with that challenge. Here’s what I decided:
* The mid-week update remains the flagship instalment
* Longform essays will remain, but they’ll be occasional rather than monthly
* Shorter notes will return, also on an occasional schedule, typically on a Sunday
When it comes to the mid-week update, the decision was automatic: if it ain’t broke.
On longform essays: I still want space for the deeper thinking and exploration they allow. A monthly cadence didn’t work out, but occasional essays can. The first mission here is to finish The Worlds to Come.
The return of shorter notes is the biggest change. I really miss writing the kinds of notes I used to send on a Sunday. And the newsletter needs a space for thoughts that are too long for a segment in the mid-week update, but too short and maybe too fuzzy to make an essay.
But it goes deeper than that. Part of what I love about newsletters — about the email newsletter as a new literary mode — is its intimacy. Sure, we’re now amid a newsletter explosion, and I’m sending this to you via a platform created in Silicon Valley and funded by mega-VCs. But despite all that, there’s still something going on here that echoes the medium’s origins in the long emails from one friend to another that we used to send in the 90s. Last year, NWSH lost touch with that intimacy. This year I want to recover it.
Short notes will allow me to send more personal reflections and do more exploratory thinking. And they can mean new kinds of content, such as reflections on the books I’m reading. This could even spell the beginning of a NWSH book club, which is something people have asked for in the Slack group.
But there I go again, piling on more before we’ve even started.
Infinite games
There we have it; the roadmap for 2023.
Sure, 2022 didn’t work quite as planned. But while I might have expected that to bother me massively, in truth it doesn’t. Sitting with that over the break, I realised that this equanimity is a product of perspective. More particularly, of the perspective you necessarily take on something when you commit to it for life.
Writing this newsletter, and building this community, is something I’ll do forever. Not, in the end, because of the outcomes it produces, but for the meaning and simple joy I find in thinking through ideas about our shared future and then sharing those ideas with others.
And given that, the monthly essays misfire seems only a tiny blip on a long journey.
The lesson here? From my POV, it’s that when you embark on a project for the long haul, and when that project is an end in itself rather than only a means to some other end, you’re liberated into a new and fruitful way of seeing. One that helps you defeat perfectionism, stay adaptable, and find meaning in the process rather than only the results.
When there’s always tomorrow, and next year, and, I hope, next decade — when you’re playing what has become known as an infinite game — you have freedom to experiment, and mistakes don’t matter that much. In fact, if you aren’t making mistakes, that’s probably a sign you’re playing it too safe.
That’s not a perspective many of us get to enjoy in our work, which is so often target- and deadline-driven. But it’s a powerful one. So I recommend asking yourself: what infinite game are you playing in 2023?
Blast Off
The plan for this year is set. All that remains is to get to work.
And given the moment we’re living in from a world-historical perspective — by turns weird, exhilarating, and scary — I couldn’t be more excited about what is ahead for our community. Our mission to make sense of a changing world and its collision with human nature — new world, same humans — has never been more urgent.
I’ll send the first New Week update next week. And expect the first short notes in the coming days, too; one of them will launch a new project that I can’t wait to tell you more about.
In the meantime, thanks for joining me on this adventure for another year; it’s deeply appreciated 🙏. And I hope you’re off to a great start on your journey through 2023, too.
Until next week, be well,
David.
This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit www.newworldsamehumans.xyz