It’s crunch time on climate change, and companies, governments, philanthropists, and NGOs around the world are starting to take action, be it through donating huge sums of money to the cause, building a database for precise tracking of carbon emissions, creating a plan for a clean hydrogen economy, or advocating for solar geoengineering—among many other initiatives.
But according to Nvidia, to really know where and how to take action on climate change, we need more data, better modeling, and faster computers. That’s why the company is building what it calls “the world’s most powerful AI supercomputer dedicated to predicting climate change.”
The system will be called Earth-2 and will be built using Nvidia’s Omniverse, a multi-GPU development platform for 3D simulation based on Pixar’s Universal Scene Description. In a blog post announcing Earth-2 late last week, Nvidia’s founder and CEO Jensen Huang described his vision for the system as a “digital twin” of Earth.
Digital twins aren’t a new concept; they’ve become popular in manufacturing as a way to simulate a product’s performance and tweak the product based on feedback from the simulation. But advances in computing power and AI mean these simulations have become much more granular and powerful, with the ability to drive meaningful change—and that’s just what Huang is hoping for with Earth-2.
“We need to confront climate change now. Yet, we won’t feel the impact of our efforts for decades,” he wrote. “It’s hard to mobilize action for something so far in the future. But we must know our future today—see it and feel it—so we can act with urgency.”
Plenty of climate models already exist. They quantify factors like air pressure, wind magnitude, and temperature and plug them into equations to get a view of climate patterns in a given region, representing those regions as 3D grids. The smaller the region, the more accurate a model can be before becoming unwieldy (in other words, models must solve more equations to achieve higher resolution, but trying to take on too many equations will make a model so slow that it stops being useful).
This means most existing climate models lack both granularity and accuracy. The solution? A bigger, better, faster computer. “Greater resolution is needed to model changes in the global water cycle,” Huang wrote. “Meter-scale resolution is needed to simulate clouds that reflect sunlight back to space. Scientists estimate that these resolutions will demand millions to billions of times more computing power than what’s currently available.”
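To get a feel for why those numbers balloon so quickly, consider a rough back-of-envelope sketch (the scaling exponent and grid sizes below are illustrative assumptions, not Nvidia's figures): halving the grid spacing of a 3D model multiplies the number of cells by eight, and the shorter time step needed for numerical stability adds roughly another factor of two, so compute cost grows with about the fourth power of linear resolution.

```python
# Back-of-envelope scaling for grid-based climate models (illustrative only).
# Halving the grid spacing multiplies the 3D cell count by 2**3, and the
# smaller time step required for numerical stability adds roughly another
# factor of 2, so cost scales with about the 4th power of linear resolution.

def relative_cost(current_res_km: float, target_res_km: float) -> float:
    """Approximate compute-cost multiplier for refining a model grid."""
    return (current_res_km / target_res_km) ** 4

print(f"{relative_cost(25, 1.0):.1e}")  # ~3.9e+05: a 1 km grid from a 25 km one
print(f"{relative_cost(25, 0.1):.1e}")  # ~3.9e+09: a 100 m grid reaches the
                                        # "millions to billions" regime Huang cites
```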
Earth-2 will employ three technologies to achieve ultra-high-resolution climate modeling: GPU-accelerated computing; deep learning and breakthroughs in physics-informed neural networks; and AI supercomputers—and a ton of data.
The ultimate aim of this digital twin of our planet is to spur action that will drive meaningful change, both in terms of mitigating the negative impacts of climate change on populations and mitigating climate change itself. Extreme weather events like hurricanes, wildfires, heat waves, and flash floods are increasingly taking lives, damaging property, and forcing people to flee from their homes; you’ve doubtless seen the dire headlines and heartbreaking images on the news. If we could accurately predict these events much further in advance, those headlines would change.
Huang hopes Nvidia’s model will be able to predict extreme weather changes in designated regions decades ahead of time. People would then know to either not move to certain areas at all, or to build the infrastructure in those areas in a way that’s compatible with the impending climate events. The model will also aim to help find solutions, running simulations of various courses of actions to figure out which would have the greatest impact at the lowest cost.
Nvidia has not shared a timeline for Earth-2’s development nor when the supercomputer will be ready to launch. But if its Cambridge-1 supercomputer for healthcare resea...
-
Thanks to deep learning, the central mysteries of structural biology are falling like dominos.
Just last year, DeepMind shocked the biomedical field with AlphaFold, an algorithm that predicts protein structures with jaw-dropping accuracy. The University of Washington (UW) soon unveiled RoseTTAFold, an AI that rivaled AlphaFold in predictive ability. A few weeks later, DeepMind released a near complete catalog of all protein structures in the human body.
Together, the teams essentially solved a 50-year-old grand challenge in biology, and because proteins are at the heart of most medications, they may also have seeded a new era of drug development. For the first time, we have unprecedented insight into the protein engines of our cells, many of which had remained impervious to traditional lab techniques.
Yet one glaring detail was missing. Proteins don’t operate alone. They often associate into complexes—small groups that interact to carry out critical tasks in our cells and bodies.
This month, the UW team upped their game. Tapping into both AlphaFold and RoseTTAFold, they tweaked the programs to predict which proteins are likely to tag-team, and sketched the resulting complexes as 3D models.
Using AI, the team predicted hundreds of complexes—many of which are entirely new—that regulate DNA repair, govern the cell’s digestive system, and perform other critical biological functions. These under-the-hood insights could impact the next generation of DNA editors and spur new treatments for neurodegenerative disorders or anti-aging therapies.
“It’s a really cool result,” said Dr. Michael Snyder at Stanford University, who was not involved in the study, to Science.
Like a compass, the results can guide experimental scientists as they test the predictions and search for new insights into how our cells grow, age, die, malfunction, and reproduce. Several predictions further highlighted how our cells absorb external molecules—a powerful piece of information that could help us coerce normally reluctant cells to gulp up medications.
“It gives you a lot of potential new drug targets,” said study author Dr. Qian Cong at the University of Texas Southwestern Medical Center.
The Cell’s Lego Blocks
Our bodies are governed by proteins, each of which intricately folds into 3D shapes. Like unique Lego bricks, these shapes allow the proteins to combine into larger structures, which in turn conduct the biological processes that propel life.
Too abstract? An example: when cells live out their usual lifespan, they go through a process called apoptosis—in Greek, the falling of the leaves—in which the cell gently falls apart without disturbing its neighbors by leaking toxic chemicals. The entire process is a cascade of protein-protein interactions. One protein grabs onto another protein to activate it. The now-activated protein is subsequently released to stir up the next protein in the chain, and so on, eventually causing the aging or diseased cell to sacrifice itself.
Another example: in neurons during learning, synapses (the hubs that connect brain cells) call upon a myriad of proteins that form a complex together. This complex, in turn, spurs the neuron’s DNA to make proteins that etch the new memory into the brain.
“Everything in biology works in complexes. So, knowing who works with who is critical,” said Snyder.
For decades, scientists have relied on painfully slow processes to parse out those interactions. One approach is computational: map out a protein’s structure down to the atomic level and predict “hot spots” that might interact with another protein. Another is experimental: using both biological lab prowess and physics ingenuity, scientists can isolate protein complexes from cells—like sugar precipitating from lemonade when there’s too much of it—and use specialized equipment to analyze the proteins. It’s tiresome, expensive, and often plagued with errors.
Here Comes the Sun
Deep learning is now shining light on the whole enterprise....
-
With the pace of emissions reductions looking unlikely to prevent damaging climate change, controversial geoengineering approaches are gaining traction. But aversion to even studying such a drastic option makes it hard to have a sensible conversation, say researchers.
Geoengineering refers to large-scale interventions designed to alter the Earth’s climate system in response to global warming. Some have suggested it may end up being a crucial part of the toolbox for tackling global warming, given that efforts to head off warming by reducing emissions seem well behind schedule.
One major plank of geoengineering is the idea of removing excess CO2 from the atmosphere, either through reforestation or carbon capture technology that will scrub emissions from industrial exhausts or directly from the air. There are limits to nature-based CO2 removal, though, and so-called “negative emissions technology” is a long way from maturity.
The other option is solar geoengineering, which involves deflecting sunlight away from the Earth by boosting the reflectivity of the atmosphere or the planet’s surface. Leading proposals involve injecting tiny particles into the stratosphere, making clouds whiter by spraying sea water into the atmosphere, or thinning out high cirrus clouds that trap heat.
In theory, this could reduce global warming fairly cheaply and quickly, but interfering with the Earth’s climate system carries unpredictable and potentially enormous risks. This has led to widespread opposition to even basic research into the idea. Earlier this year, a test of the approach by Sweden’s space agency was cancelled following concerted opposition.
But this lack of research means policymakers are flying blind when weighing the pros and cons of the approach, researchers write in a series of articles in the latest issue of Science. They outline why research into the approach is necessary and how social science in particular can help us better understand the potential trade-offs.
In an editorial, Edward A. Parson from the University of California, Los Angeles, notes that critics often point to the fact that solar geoengineering is a short-term solution to a long-term problem that is likely to be imperfect and whose effects could be uneven and unjust. More importantly, if solar geoengineering becomes acceptable to use, we may end up over-relying on it and putting less effort into emissions reductions or carbon removal.
This point is often used to argue that solar geoengineering can never be acceptable, and therefore research into it isn’t warranted. But Parson argues that both the potential harms and benefits of solar geoengineering are currently hypothetical due to a lack of research.
Rejecting an activity due to unknown harms might be justified in extreme circumstances and when the alternative is acceptable, he writes. But the alternative to solar geoengineering is potentially catastrophic climate change—unless we drastically ramp up emissions reductions and removals, which is far from a sure thing.
Part of the rationale for preventing solar geoengineering research is that it will drive socio-political lock-in that makes its deployment more likely. But Parson points out that rather than preventing its deployment, blocking research into solar geoengineering may actually lead to less-informed, more dangerous deployments by desperate policymakers further down the line.
One way to overcome some of the resistance to research in this area might be to make the debate around it more constructive, writes David W. Keith from Harvard University in a policy paper. And the best way to do that is to disentangle the technical, political, and ethical aspects of the debate.
Appraising the pros and cons of solar geoengineering involves many different fields, from engineering to climate science to economics. But often, experts in one of these areas will give an overall judgment on the technology despite not being in a position to assess critical aspects of it.
The...
-
Just a few months ago, most of us had never heard of an NFT. Even once we figured out what they were, it seemed like maybe they’d be a short-lived fad, a distraction for the tech-savvy to funnel money into as the pandemic dragged on.
But it seems NFTs are here to stay. The internet has exploded with digital artwork that’s being bought and sold like crazy; in the third quarter of this year, trading volume of NFTs hit $10.67 billion, a more than 700-percent increase from the second quarter. Last month, both Coinbase and Sotheby’s announced plans to launch NFT marketplaces.
As a quick refresher in case you, like me, still don’t totally get it: NFT stands for non-fungible token, and it’s a digital certificate that represents ownership of a digital asset. The certificates are one-of-a-kind (that’s the non-fungible part), are verified by and stored on a blockchain, and allow digital assets to be transferred or sold.
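In code, the core idea is small enough to fit in a few lines. The sketch below is a toy model of those relationships, with made-up field names; real NFTs live in blockchain smart contracts (for example, Ethereum's ERC-721 standard), which differ in the details.

```python
# A toy model of what an NFT boils down to: a unique token ID tied to an owner
# and to metadata pointing at the digital asset. Illustrative only; real NFTs
# are records in smart contracts, not Python objects.
from dataclasses import dataclass

@dataclass
class NFT:
    token_id: int        # unique: the "non-fungible" part
    owner: str           # wallet address of the current owner
    metadata_uri: str    # points at the artwork, which usually lives off-chain

def transfer(token: NFT, new_owner: str) -> None:
    """Ownership changes hands; the token ID and asset reference never do."""
    token.owner = new_owner

art = NFT(token_id=1, owner="0xAlice", metadata_uri="ipfs://example-hash")
transfer(art, "0xBob")
print(art.owner)  # 0xBob
```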
Depending on who you ask, NFTs were a thing as early as 2013 or 2014—but they didn’t really hit headlines until earlier this year, when artists like Grimes and Beeple sold their digital creations for millions of dollars. Soon everyone from Jack Dorsey to George Church to the NBA started jumping on the NFT bandwagon. And you’ve probably heard about the bizarre phenomenon that is the Bored Ape Yacht Club. This is just the beginning of an ever-growing list of artists, celebrities, crypto-enthusiasts, and others who are betting NFTs are the future of collectible art.
Re-enter Beeple, the American artist (given name Mike Winkelmann) whose collage of 5,000 pieces of digital art, titled Everydays: The First 5000 Days, sold for $69 million in a Christie’s auction in March. Another piece of his sold this week, and though it went for less than half what Everydays did, it’s bringing a whole new twist to the NFT art world.
The new work, titled Human One, is a life-sized 3D video sculpture, and Winkelmann called it “the first portrait of a human born in the metaverse.” It shows a person in silver clothing and boots, wearing a backpack and a helmet (which is something of a cross between that of an astronaut and a motorcyclist) trekking purposefully across a changing landscape. It was purchased for $29 million at Christie’s 21st Century Evening Sale on Tuesday by Ryan Zurrer, a Swiss venture capitalist.
“introducing HUMAN ONE,” tweeted beeple (@beeple) on October 28, 2021.
The piece is a box whose four walls are video screens, with a computer at its base. It’s over seven feet tall and can be viewed from any angle. But its key feature is the fact that it will be continuously updated, supposedly for the rest of Winkelmann’s life. “I want to make something that people can continue to come back to and find new meaning in. And the meaning will continue to evolve,” he said. “That to me is super-exciting. It feels like I now have this whole other canvas.”
The artist plans to change the imagery that appears in the box regularly. It will be sort of like having one of those digital photo frames, except instead of family and friends on a small, flat screen, who-knows-what will appear in 3D and larger than life. If some of the images in Everydays are any indication, Zurrer may end up seeing some pretty striking political commentary in his living room, or office, or wherever he chooses to keep Human One.
“You could come downstairs in the morning and the piece looks one way,” Winkelmann said. “Then you come home from work, and it looks another way.” However, he won’t be changing the piece according to any sort of schedule, but rather as the fancy strikes him—and, he noted, in response to current events.
If Zurrer chooses to keep the piece in his home or another private location, that would establish a sort of artistic intimacy between him and Winkelmann, with Zurrer being privy to the artist’s ideas and creativity in real time. Though Human One was doubtless an expensive and highly complex project, it’s likely just the beginning of a whole new type of “l...
-
House prices have soared during the last year and a half, and the implications aren’t great (unless you’re a homeowner looking just to sell and not buy). Homelessness and housing insecurity have risen dramatically over the course of the pandemic, with millions of people unable to afford to live where they want to, and many unable to afford to live anywhere at all. A supply shortage is just one among many factors contributing to these problems, but it’s a big one; houses take a long time to build, require all sorts of permissions and inspections and approvals, and are, of course, expensive.
A Seattle-based company wants to be part of changing this, and they’ve just joined forces with a partner to make home building more sustainable and efficient while driving down its costs. Last week, construction tech company NODE, which got its start at Y Combinator, announced a merger with Green Canopy, a vertically-integrated developer, designer, general contractor, and fund manager. The new company’s goal is to offer accessible, green housing options at scale.
“The construction industry is ripe for disruption and evolution,” said NODE co-founder Bec Chapin. “It’s a giant industry that has been losing productivity over decades and is not meeting our most crucial demands for housing.”
NODE’s approach is similar to that of Las Vegas-based Boxabl, which ships pre-fabricated “foldable” houses to its customers in a 20-foot-wide load that can be up and running in as little as a day. In a 2018 GeekWire interview, Chapin said the company was “developing a component technology in which the walls, floors and ceilings are in separate pieces, as well as all of the things needed to make it a complete house: kitchens, baths, heating systems, etc. These houses can be packed more efficiently, then easily assembled on site.”
NODE homes come in flat-pack kits that fit in standard shipping containers, and they don’t require specialists to assemble; they’re essentially the IKEA furniture of houses (though IKEA furniture can, admittedly, be much harder to put together than the company would have you think, as you know if you’ve ever purchased one of their bed frames or shelving units). Their assembly is guided by software and can be done by generalist construction workers, or even by homeowners themselves.
A 2019 McKinsey report noted that modular construction is seeing a comeback, largely thanks to the impact of new digital tools. Consumer perception of prefab housing is becoming more positive as the design of these homes gets more modern and visually appealing. Most importantly, modular construction assisted by digital technologies can make home building up to 50 percent faster, at a cost that’s comparable to or lower than traditional building costs.
Green Canopy NODE’s homes range from $90,000 (for a 260-square-foot home that somehow fits in a kitchen and a bathroom) to $150,000 for a 500-square-foot model. These figures are well below the cost of using traditional building methods for homes of comparable size in the company’s native Seattle area.
So they seem to be on the right track. But where they’re really looking to set themselves apart from competitors is in their focus on sustainability. “We started Node because buildings account for 47 percent of carbon emissions, yet all of the technology exists for buildings to be carbon negative,” Chapin said.
The company’s homes are designed to be carbon neutral or carbon negative; they’re ultra energy-efficient and they use non-toxic materials. Their insulation, for example, is made of recycled denim, glass, and sand instead of fiberglass. The homes can also be outfitted with solar panels or mini wind turbines, and thus could end up generating more energy than they consume, enabling homeowners to sell power back to the grid.
The newly-merged company recently raised $10 million in new funding, and expects to double in size over the next year (it currently has 31 employees). Initially focused on the “...
-
Most people know that the land masses on which we all live represent just 30 percent of Earth’s surface, and the rest is covered by oceans.
The emergence of the continents was a pivotal moment in the history of life on Earth, not least because they are the humble abode of most humans. But it’s still not clear exactly when these continental landmasses first appeared on Earth, and what tectonic processes built them.
Our research, published in Proceedings of the National Academy of Sciences, estimates the age of rocks from the most ancient continental fragments (called cratons) in India, Australia, and South Africa. The sand that created these rocks would once have formed some of the world’s first beaches.
We conclude that the first large continents were making their way above sea level around three billion years ago, much earlier than the 2.5 billion years estimated by previous research.
A Three-Billion-Year-Old Beach
When continents rise above the oceans, they start to erode. Wind and rain break rocks down into grains of sand, which are transported downstream by rivers and accumulate along coastlines to form beaches.
These processes, which we can observe in action during a trip to the beach today, have been operating for billions of years. By scouring the rock record for signs of ancient beach deposits, geologists can study episodes of continent formation that happened in the distant past.
The Singhbhum craton, an ancient piece of continental crust that makes up the eastern parts of the Indian subcontinent, contains several formations of ancient sandstone. These layers were originally formed from sand deposited in beaches, estuaries, and rivers, which was then buried and compressed into rock.
We determined the age of these deposits by studying microscopic grains of a mineral called zircon, which is preserved within these sandstones. This mineral contains tiny amounts of uranium, which very slowly turns into lead via radioactive decay. This allows us to estimate the age of these zircon grains, using a technique called uranium-lead dating, which is well suited to dating very old rocks.
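The math behind the technique is compact. As a hedged sketch (the measured ratio below is a made-up illustrative value, not one from the study), the age follows from the exponential decay law applied to the uranium-238 to lead-206 chain:

```python
# Uranium-lead dating in one line of algebra: U-238 decays as
# N(t) = N(0) * exp(-lambda * t), and each decayed atom ends up as Pb-206,
# so a measured Pb-206/U-238 ratio implies an age t = ln(1 + Pb/U) / lambda.
import math

HALF_LIFE_U238_YR = 4.468e9                    # half-life of U-238, in years
LAMBDA = math.log(2) / HALF_LIFE_U238_YR       # decay constant, per year

def age_from_ratio(pb206_per_u238: float) -> float:
    """Age in years implied by a measured Pb-206/U-238 atomic ratio."""
    return math.log(1 + pb206_per_u238) / LAMBDA

# An illustrative ratio: ~0.59 corresponds to roughly the three-billion-year
# ages reported for the Singhbhum zircons.
print(f"{age_from_ratio(0.59) / 1e9:.2f} billion years")  # ~2.99
```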
The zircon grains reveal that the Singhbhum sandstones were deposited around three billion years ago, making them some of the oldest beach deposits in the world. This also suggests a continental landmass had emerged in what is now India by at least three billion years ago.
Interestingly, sedimentary rocks of roughly this age are also present in the oldest cratons of Australia (the Pilbara and Yilgarn cratons) and South Africa (the Kaapvaal Craton), suggesting multiple continental landmasses may have emerged around the globe at this time.
Rise Above It
How did rocky continents manage to rise above the oceans? A unique feature of continents is their thick, buoyant crust, which allows them to float on top of Earth’s mantle, just like a cork in water. Like icebergs, the top of continents with thick crust (typically more than 45 km thick) sticks out above the water, whereas continental blocks with crusts thinner than about 40 km remain submerged.
So if the secret of the continents’ rise is due to their thickness, we need to understand how and why they began to grow thicker in the first place.
Most ancient continents, including the Singhbhum Craton, are made of granites, which formed through the melting of pre-existing rocks at the base of the crust. In our research, we found the granites in the Singhbhum Craton formed at increasingly greater depths between about 3.5 billion and 3 billion years ago, implying the crust was becoming thicker during this time window.
Because granites are one of the least dense types of rock, the ancient crust of the Singhbhum Craton would have become progressively more buoyant as it grew thicker. We calculate that by around three billion years ago, the continental crust of the Singhbhum Craton had grown to be about 50 km thick, making it buoyant enough to begin rising above sea level.
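The cork-in-water analogy can be made quantitative with a simple Airy isostasy calculation. The sketch below uses typical textbook densities rather than values from the study, and ignores sea level and oceanic crust, so it only illustrates the principle:

```python
# Airy isostasy: continental crust floats on denser mantle like an iceberg in
# water, so freeboard = thickness * (1 - rho_crust / rho_mantle).
# Densities are typical textbook values, not figures from the study.
RHO_CRUST = 2800.0   # kg/m^3, average granitic continental crust
RHO_MANTLE = 3300.0  # kg/m^3, uppermost mantle

def freeboard_km(crust_thickness_km: float) -> float:
    """Height of a floating crustal column above its level of support."""
    return crust_thickness_km * (1 - RHO_CRUST / RHO_MANTLE)

for thickness in (40, 45, 50):
    print(f"{thickness} km crust -> {freeboard_km(thickness):.1f} km of freeboard")
# Whether that freeboard actually clears the ocean surface also depends on sea
# level and the surrounding oceanic crust, which this toy model omits.
```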
The rise of continents had a profound inf...
-
When it comes to brain computing, timing is everything. It’s how neurons wire up into circuits. It’s how these circuits process highly complex data, leading to actions that can mean life or death. It’s how our brains can make split-second decisions, even when faced with entirely new circumstances. And we do so without frying the brain from extensive energy consumption.
To rephrase, the brain makes an excellent example of an extremely powerful computer to mimic—and computer scientists and engineers have taken the first steps towards doing so. The field of neuromorphic computing looks to recreate the brain’s architecture and data processing abilities with novel hardware chips and software algorithms. It may be a pathway towards true artificial intelligence.
But one crucial element is lacking. Most algorithms that power neuromorphic chips only care about the contribution of each artificial neuron—that is, how strongly they connect to one another, dubbed “synaptic weight.” What’s missing—yet fundamental to our brain’s inner workings—is timing.
This month, a team affiliated with the Human Brain Project, the European Union’s flagship big data neuroscience endeavor, added the element of time to a neuromorphic algorithm. The results were then implemented on physical hardware—the BrainScaleS-2 neuromorphic platform—and pitted against state-of-the-art GPUs and conventional neuromorphic solutions.
“Compared to the abstract neural networks used in deep learning, the more biological archetypes still lag behind in terms of performance and scalability” due to their inherent complexity, the authors said.
In several tests, the algorithm compared “favorably, in terms of accuracy, latency, and energy efficiency” on a standard benchmark test, said Dr. Charlotte Frenkel at the University of Zurich and ETH Zurich in Switzerland, who was not involved in the study. By adding a temporal component into neuromorphic computing, we could usher in a new era of highly efficient AI that moves from static data tasks—say, image recognition—to one that better encapsulates time. Think videos, biosignals, or brain-to-computer speech.
To lead author Dr. Mihai Petrovici, the potential goes both ways. “Our work is not only interesting for neuromorphic computing and biologically inspired hardware. It also acknowledges the demand to transfer so-called deep learning approaches to neuroscience and thereby further unveil the secrets of the human brain,” he said.
Let’s Talk Spikes
At the root of the new algorithm is a fundamental principle in brain computing: spikes.
Let’s take a look at a highly abstracted neuron. It’s like a tootsie roll, with a bulbous middle section flanked by two outward-reaching wrappers. One side is the input—an intricate tree that receives signals from a previous neuron. The other is the output, blasting signals to other neurons using bubble-like ships filled with chemicals, which in turn trigger an electrical response on the receiving end.
Here’s the crux: for this entire sequence to occur, the neuron has to “spike.” If, and only if, the neuron receives a high enough level of input—a nicely built-in noise reduction mechanism—the bulbous part will generate a spike that travels down the output channels to alert the next neuron.
But neurons don’t just use one spike to convey information. Rather, they spike in a time sequence. Think of it like Morse Code: the timing of when an electrical burst occurs carries a wealth of data. It’s the basis for neurons wiring up into circuits and hierarchies, allowing highly energy-efficient processing.
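A toy latency code shows how a single spike time can stand in for a value (this is a generic illustration of the scheme, not the study's BrainScaleS-2 implementation):

```python
# Time-to-first-spike ("latency") coding: stronger inputs fire sooner, weaker
# ones later, so one spike time per neuron carries the whole value.
import numpy as np

def first_spike_times(intensities: np.ndarray, t_max: float = 10.0) -> np.ndarray:
    """Map input intensities in [0, 1] to spike latencies in ms: bright -> early."""
    intensities = np.clip(intensities, 0.0, 1.0)
    return t_max * (1.0 - intensities)   # simple linear latency code

pixels = np.array([0.9, 0.5, 0.1])       # e.g. three pixel brightnesses
print(first_spike_times(pixels))         # [1. 5. 9.]: earliest spike = strongest input
```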
So why not adopt the same strategy for neuromorphic computers?
A Spartan Brain-Like Chip
Instead of mapping out a single artificial neuron’s spikes—a Herculean task—the team homed in on a single metric: how long it takes for a neuron to fire.
The idea behind “time-to-first-spike” code is simple: the longer it takes a neuron to spike, the lower its activity levels. Compared to counting spikes, it’s an extremely sp...
-
While getting humans to Mars is likely to be one of the grandest challenges humanity has ever undertaken, getting them back could be even tougher. Researchers think sending genetically engineered microbes to the Red Planet could be the solution.
Both NASA and SpaceX are mulling human missions to Mars in the coming decades. But carrying enough fuel to make sure it’s a round trip adds a lot of extra weight, which dramatically increases costs and also makes landing on the planet much riskier.
As a result, NASA has been investigating a variety of strategies that would make it possible to produce some or all of the required fuel on Mars using locally-sourced ingredients. While the planet may be pretty barren, its atmosphere is 95 percent carbon dioxide and there is abundant water ice in certain areas.
That could provide all the ingredients needed to create hydrocarbon rocket fuels and the liquid oxygen needed to support combustion. The most ambitious of NASA’s plans would be to use electrolysis to generate hydrogen and oxygen from water and then use the Sabatier reaction to combine the hydrogen with Martian CO2 to create methane for use as a fuel.
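The chemistry boils down to three reactions, and the stoichiometry explains why oxygen dominates the propellant budget. Here's a sketch with standard molar masses (the one-kilogram figure is just an example, not a NASA number):

```python
# The reactions behind NASA's most ambitious in-situ plan:
#   Electrolysis:  2 H2O      -> 2 H2 + O2
#   Sabatier:      CO2 + 4 H2 -> CH4 + 2 H2O
#   Combustion:    CH4 + 2 O2 -> CO2 + 2 H2O
# Stoichiometry of that last step shows how much oxygen the fuel demands.
M_CH4, M_O2 = 16.04, 32.00   # molar masses in g/mol

def oxygen_needed_kg(methane_kg: float) -> float:
    """Kilograms of O2 to burn a given mass of CH4 at the stoichiometric ratio."""
    return methane_kg * (2 * M_O2) / M_CH4

print(f"{oxygen_needed_kg(1.0):.2f} kg of O2 per kg of CH4")  # ~3.99: most of
# the propellant mass a return vehicle needs is oxygen, not methane.
```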
The technology to do that at scale is still immature, though, so the more likely option would see methane shipped from Earth and oxygen generated in place using solid oxide carbon dioxide electrolysis (SOCE). That would still require 7.5 tons of fuel and 1 ton of SOCE equipment to be transported to Mars, though.
Researchers from the Georgia Institute of Technology have outlined a new strategy in a paper in Nature Communications, which would use genetically engineered microbes to produce all the fuel and oxygen required for a return trip on Mars.
“Carbon dioxide is one of the only resources available on Mars,” first author Nick Kruyer said in a press release. “Knowing that biology is especially good at converting CO2 into useful products makes it a good fit for creating rocket fuel.”
The researchers’ proposal involves building four football fields’ worth of photobioreactors—essentially liquid-filled transparent tubes—which will be used to grow photosynthetic cyanobacteria.
While it is possible to get these microbes to produce fuels themselves, they are fairly inefficient at it. So instead, they will be fed into another reactor where enzymes will break them down into simple sugars, which are then fed to genetically modified E. coli bacteria that produce a chemical called 2,3-butanediol.
On Earth this chemical is primarily used to make rubber, and burns too inefficiently to be used as a fuel. But thanks to Mars’ low gravity, it is more than capable of powering a rocket engine there, and also uses less oxygen than methane.
“You need a lot less energy for lift-off on Mars, which gave us the flexibility to consider different chemicals that aren’t designed for rocket launch on Earth,” said Pamela Peralta-Yahya, who led the research.
The process also generates 44 tons of excess oxygen that could be used for life support. The one catch is that if the system was built with today’s state-of-the-art technology, it would require 2.8 times as much material to be delivered to Mars compared to the most likely NASA strategy.
However, once there it would use 32 percent less power, and resupply missions would only need to carry 3.7 tons of nutrients and chemicals rather than 6.5 tons of methane every time. And modeling studies suggest that by optimizing the biological processes involved and designing lighter-weight materials, a future system could actually weigh 13 percent less than the NASA solution and use 59 percent less power.
The biggest barrier at the minute might be the fact that current NASA regulations prohibit sending microbes to Mars due to fears of contaminating the pristine environment. The researchers acknowledge that they will have to develop foolproof biological containment strategies before the proposal could be seriously considered.
But if we want to make round trips to Mars a regular f...
-
AI research wunderkind DeepMind has long been all fun and games.
The London-based organization, owned by Google parent company Alphabet, has used deep learning to train algorithms that can take down world champions at the ancient game of Go and top players of the popular strategy video game Starcraft.
Then last year, things got serious when DeepMind trounced the competition at a protein folding contest. Predicting the structure of proteins, the complex molecules underpinning all biology, is notoriously difficult. But DeepMind’s AlphaFold2 made a quantum leap in capability, producing results that matched experimental data down to a resolution of a few atoms.
In July, the company published a paper describing AlphaFold2, open-sourced the code, and dropped a library of 350,000 protein structures with a promise to add 100 million more.
This week, Alphabet announced it will build on DeepMind’s AlphaFold2 breakthrough by creating a new company, Isomorphic Labs, in an effort to apply AI to drug discovery.
“We are at an exciting moment in history now where these techniques and methods are becoming powerful and sophisticated enough to be applied to real-world problems including scientific discovery itself,” wrote Demis Hassabis, DeepMind founder and CEO, in a post announcing the company. “Now the time is right to push this forward at pace, and with the dedicated focus and resources that Isomorphic Labs will bring.”
Hassabis is Isomorphic’s founder and will serve as its CEO while the fledgling company finds its feet, setting the agenda and culture, building a team, and connecting the effort to DeepMind. The two companies will collaborate, but be largely independent.
“You can think of [Isomorphic] as a sort of sister company to DeepMind,” Hassabis told Stat. “The idea is to really forge ahead with the potential for computational AI methods to reimagine the whole drug discovery process.”
While AlphaFold2’s success sparked the effort, protein folding is only one step—arguably simpler than others—in the arduous drug discovery process.
Hassabis is thinking bigger.
Though details are scarce, it appears the new company will build a line of AI models to ease key choke points in the process. Instead of identifying and developing drugs themselves, they’ll sell a platform of models as a service to pharmaceutical companies. Hassabis told Stat these might tackle how proteins interact, the design of small molecules, how well molecules bind, and the prediction of toxicity.
That the work will be separated from DeepMind itself is interesting.
The company’s not insignificant costs have largely been dedicated to pure research. DeepMind turned its first profit in 2020, but its customers are mostly Alphabet companies. Some have wondered if it’d face more pressure to focus on commercial products. The decision to create a separate enterprise based on DeepMind research seems to indicate that’s not yet the case. If it can keep pushing the field ahead as a whole, perhaps it makes sense to fund a new organization—or organizations, seeded by future breakthroughs—as opposed to diverting resources from DeepMind’s more foundational research.
Isomorphic Labs has plenty of company in its drug discovery efforts.
In 2020, AI for cancer, molecular biology, and drug discovery attracted more private investment than any other area of the field: over $13.8 billion, more than quadruple 2019’s figure. There have been three AI drug discovery IPOs in the last year, and mature startups—including Exscientia, Insilico Medicine, Insitro, Atomwise, and Valo Health—have earned hundreds of millions in funding. Companies like Genentech, Pfizer, and Merck are likewise working to embed AI in their processes.
To a degree, Isomorphic will be building its business from the ground up. AlphaFold2 is without a doubt a big deal, but protein modeling is the tip of the drug discovery iceberg. Also, while AlphaFold2 had the benefit of access to hundreds of thousands of freely available, already modeled protein s...
-
Four and a half years ago, a robot named Flippy made its burger-cooking debut at a fast food restaurant called CaliBurger. The bot consisted of a cart on wheels with an extending arm, complete with a pneumatic pump that let the machine swap between tools: tongs, scrapers, and spatulas. Flippy’s main jobs were pulling raw patties from a stack and placing them on the grill, tracking each burger’s cook time and temperature, and transferring cooked burgers to a plate.
This initial iteration of the fast-food robot—or robotic kitchen assistant, as its creators called it—was so successful that a commercial version launched last year. Its maker Miso Robotics put Flippy on the market for $30,000, and the bot was no longer limited to just flipping burgers; the new and improved Flippy could cook 19 different foods, including chicken wings, onion rings, french fries, and the Impossible Burger. It got sleeker, too: rather than sitting on a wheeled cart, the new Flippy was a “robot on a rail,” with the rail located along the hood of restaurant stoves.
This week, Miso Robotics announced an even newer, more improved Flippy robot called Flippy 2 (hey, they’re consistent). Most of the updates and improvements on the new bot are based on feedback the company received from restaurant chain White Castle, the first big restaurant chain to go all-in on the original Flippy.
So how is Flippy 2 different? The new robot can do the work of an entire fry station without any human assistance, and can do more than double the number of food preparation tasks its older sibling could do, including filling, emptying, and returning fry baskets.
These capabilities have made the robot more independent, eliminating the need for a human employee to step in at the beginning or end of the cooking process. When foods are placed in fry bins, the robot’s AI vision identifies the food, picks it up, and cooks it in a fry basket designated for that food specifically (i.e., onion rings won’t be cooked in the same basket as fish sticks). When cooking is complete, Flippy 2 moves the ready-to-go items to a hot-holding area.
Miso Robotics says the new robot’s throughput is 30 percent higher than that of its predecessor, which adds up to around 60 baskets of fried food per hour. So much fried food. Luckily, Americans can’t get enough fried food, in general and especially as the pandemic drags on. Even more importantly, the current labor shortages we’re seeing mean restaurant chains can’t hire enough people to cook fried food, making automated tools like Flippy not only helpful, but necessary.
“Since Flippy’s inception, our goal has always been to provide a customizable solution that can function harmoniously with any kitchen and without disruption,” said Mike Bell, CEO of Miso Robotics. “Flippy 2 has more than 120 configurations built into its technology and is the only robotic fry station currently being produced at scale.”
At the beginning of the pandemic, many foresaw that Covid-19 would push us into quicker adoption of many technologies that were already on the horizon, with automation of repetitive tasks being high on the list. They were right, and we’ve been lucky to have tools like Zoom to keep us collaborating and Flippy to keep us eating fast food (to whatever extent you consider eating fast food an essential activity; I mean, you can’t cook every day). Now if only there was a tech fix for inflation and housing shortages.
Seeing as how there’ve been three different versions of Flippy rolled out in the last four and a half years, there are doubtless more iterations coming, each with new skills and improved technology. But the burger robot is just one of many new developments in automation of food preparation and delivery. Take this pizzeria in Paris: there are no humans involved in the cooking, ordering, or pick-up process at all. And just this week, IBM and McDonald’s announced a collaboration to create drive-through lanes run by AI.
So it may not be long before you c...
-
Self-driving cars are taking longer to become a reality than many experts predicted. But that doesn’t mean there isn’t steady progress being made; on the contrary, autonomous driving technology is consistently making incremental advancements in all sorts of vehicles, from cars to trucks to buses. Last week, General Electric (GE) and Swedish freight tech company Einride announced a partnership to launch a fleet of autonomous electric trucks.
According to the press release, the fleet will be the first of its kind to operate in the US, with trucks running on GE’s 750-acre Appliance Park campus in Louisville, Kentucky, as well as at GE facilities in Tennessee and Georgia.
Einride was founded in 2016, raised $25 million in Series A funding in 2019, and most recently raised $110 million this past May. Though the company makes electric trucks that are driven by humans, it’s primarily known for another vehicle that can’t quite be called a truck; the company’s Pods not only lack drivers, they lack cabs for drivers to sit in. The vehicles are similar to Volvo’s Vera trucks, which also lack cabs, though the Pods are meant for medium distance and capacity, like moving goods from a distribution center to a series of local stores.
The Pods will need regulatory approval to operate on public roads, so for now they’ll be limited to driving around within customers’ campuses (a safety driver isn’t required if the vehicles are on private property). According to Reuters, Einride just hired its first remote driver (based in the US) who will be able to take over control of the Pods if they get into a situation where human decision-making is required.
Despite being confined to corporate campuses for now, Einride envisions its technology making a measurable difference in terms of emissions, estimating that the GE partnership will save the company 970 tons of carbon dioxide emissions in the first year, with that quantity increasing as more trucks are added.
“Between seven and eight percent of global CO2 emissions come from heavy road freight transport,” Robert Falck, CEO and founder of Einride, told TechCrunch. “One of the drivers for starting Einride is that I’m very worried that by optimizing and making road freight transport autonomous, but based on diesel, it’s likely that we will actually increase emissions because it would become that much cheaper to operate.”
This would be a textbook example of the Jevons Paradox: when technological progress increases the efficiency with which a resource is used and pushes its cost down, consumption can rise on the back of growing demand, ultimately canceling out some or all of the savings.
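A toy constant-elasticity demand model makes the paradox concrete (the numbers are illustrative, not estimates for the freight market):

```python
# Jevons Paradox in a toy constant-elasticity demand model. An efficiency gain
# cuts the resource used per unit of service and the price per unit; if demand
# is elastic enough, total resource use rises instead of falling.
def resource_use_vs_baseline(efficiency_gain: float, elasticity: float) -> float:
    price_ratio = 1.0 / (1.0 + efficiency_gain)     # service gets cheaper
    quantity_ratio = price_ratio ** (-elasticity)   # demand responds
    return price_ratio * quantity_ratio             # (resource/unit) * units

for elasticity in (0.5, 1.0, 1.5):
    print(f"elasticity {elasticity}: "
          f"{resource_use_vs_baseline(0.30, elasticity):.2f}x baseline")
# 0.5 -> 0.88x (savings stick), 1.0 -> 1.00x (a wash), 1.5 -> 1.14x (Jevons)
```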
Electrification, then, is a key aspect of Einride’s strategy, and one of its three core focus points (along with digitization and automation; the company created a digital platform that handles customers’ planning, scheduling, routing, invoices, and billing). “You can actually electrify up to 40 percent of the US road freight transport system with a competitive business case using existing technology,” Falck said. “It’s more about deploying a new way of thinking rather than just improving the hardware.”
Einride also has contracts in place with tire maker Bridgestone and oat milk manufacturer Oatly. The company plans to open a US headquarters in New York next year, as well as offices in Austin and San Francisco.
Image Credit: Einride
-
An astonishing 82 percent decrease in the cost of solar photovoltaic (PV) energy since 2010 has given the world a fighting chance to build a zero-emissions energy system which might be less costly than the fossil-fueled system it replaces. The International Energy Agency projects that PV solar generating capacity must grow ten-fold by 2040 if we are to meet the dual tasks of alleviating global poverty and constraining warming to well below 2°C.
Critical challenges remain. Solar is “intermittent,” since sunshine varies during the day and across seasons, so energy must be stored for when the sun doesn’t shine. Policy must also be designed to ensure solar energy reaches the furthest corners of the world and places where it is most needed. And there will be inevitable tradeoffs between solar energy and other uses for the same land, including conservation and biodiversity, agriculture and food systems, and community and indigenous uses.
Colleagues and I have now published in the journal Nature the first global inventory of large solar energy generating facilities. “Large” in this case refers to facilities that generate at least 10 kilowatts when the sun is at its peak (a typical small residential rooftop installation has a capacity of around 5 kilowatts).
We built a machine learning system to detect these facilities in satellite imagery and then deployed the system on over 550 terabytes of imagery using several human lifetimes of computing.
We searched almost half of Earth’s land surface area, filtering out remote areas far from human populations. In total we detected 68,661 solar facilities. Using the area of these facilities, and controlling for the uncertainty in our machine learning system, we obtain a global estimate of 423 gigawatts of installed generating capacity at the end of 2018. This is very close to the International Renewable Energy Agency’s (IRENA) estimate of 420 GW for the same period.
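Turning mapped panel area into installed capacity takes one more assumption: how many watts fit in a square meter of facility footprint. The figure below is a commonly cited rough value for utility-scale PV, not the paper's exact calibration:

```python
# From detected facility footprint to installed capacity. The power density is
# a rough, commonly cited figure for utility-scale PV, not the study's exact
# calibration factor.
W_PER_M2 = 70.0   # installed watts per square meter of facility footprint

def capacity_gw(total_area_km2: float) -> float:
    """Installed capacity in GW implied by a total facility footprint."""
    return total_area_km2 * 1e6 * W_PER_M2 / 1e9

print(f"{capacity_gw(6000):.0f} GW")  # ~6,000 km^2 of facilities -> ~420 GW,
# in the ballpark of the study's 423 GW estimate.
```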
Tracking the Growth of Solar Energy
Our study shows solar PV generating capacity grew by a remarkable 81 percent between 2016 and 2018, the period for which we had timestamped imagery. Growth was led particularly by increases in India (184 percent), Turkey (143 percent), China (120 percent) and Japan (119 percent).
Facilities ranged in size from sprawling gigawatt-scale desert installations in Chile, South Africa, India, and north-west China, through to commercial and industrial rooftop installations in California and Germany, rural patchwork installations in North Carolina and England, and urban patchwork installations in South Korea and Japan.
The Advantages of Facility-Level Data
Country-level aggregates of our dataset are very close to IRENA’s country-level statistics, which are collected from questionnaires, country officials, and industry associations. Compared to other facility-level datasets, we address some critical coverage gaps, particularly in developing countries, where the diffusion of solar PV is critical for expanding electricity access while reducing greenhouse gas emissions. In developed and developing countries alike, our data provides a common benchmark unbiased by reporting from companies or governments.
Geospatially-localized data is of critical importance to the energy transition. Grid operators and electricity market participants need to know precisely where solar facilities are in order to know accurately the amount of energy they are generating or will generate. Emerging in-situ or remote systems are able to use location data to predict increased or decreased generation caused by, for example, passing clouds or changes in the weather.
This increased predictability allows solar to reach higher proportions of the energy mix. As solar becomes more predictable, grid operators will need to keep fewer fossil fuel power plants in reserve, and fewer penalties for over- or under-generation will mean more marginal projects will be unlocked.
Using the back catalogue of satellite imagery, we were able to estimate ins...
-
The rules of inheritance are supposedly easy. Dad’s DNA mixes with mom’s to generate a new combination. Over time, random mutations will give some individuals better adaptability to the environment. The mutations are selected through generations, and the species becomes stronger.
But what if that central dogma is only part of the picture?
A new study in Nature Immunology is ruffling feathers by recontextualizing evolution. Mice infected with a non-lethal dose of bacteria, once recovered, can pass on a turbo-boosted immune system to their kids and grandkids—all without changing any DNA sequences. The trick seems to be epigenetic changes—that is, how genes are turned on or off—in their sperm. In other words, compared to millennia of evolution, there’s a faster route for a species to thrive. For any individual, it’s possible to gain survivability and adaptability in a single lifetime, and those changes can be passed on to offspring.
“We wanted to test if we could observe the inheritance of some traits to subsequent generations, let’s say independent of natural selection,” said study author Dr. Jorge Dominguez-Andres at Radboud University Nijmegen Centre.
“The existence of epigenetic heredity is of paramount biological relevance, but the extent to which it happens in mammals remains largely unknown,” said Drs. Paola de Candia at the IRCCS MultiMedica, Milan, and Giuseppe Matarese at the Treg Cell Lab, Dipartimento di Medicina Molecolare e Biotecnologie Mediche at the Università degli Studi di Napoli in Naples, who were not involved in the study. “Their work is a big conceptual leap.”
Evolution on Steroids
The paper is controversial because it complicates Darwin’s original theory of evolution.
You know this example: giraffes don’t have long necks because they had to stretch their necks to reach higher leaves. Rather, random mutations in the DNA that codes for long necks were eventually selected, mostly because those giraffes were the ones that survived and procreated.
Yet recent studies have thrown a wrench into the long-standing dogma around how species adapt. At their root is epigenetics, a mechanism “above” DNA that regulates how our genes are expressed. It’s helpful to think of DNA as a base, low-level code—ASCII in computers. To execute the code, it needs to be translated into a higher-level language: proteins.
Similar to a programming language, it’s possible to silence DNA with additional bits of code. It’s how our cells develop into vastly different organs and body parts—like the heart, kidneys, and brain—even though they have the same DNA. This level of control is dubbed epigenetics, or “above genetics.” One of the most common ways to silence DNA is to add a chemical group to a gene so that, like a wheel lock, the gene gets “stuck” as it’s trying to make a protein. This silences the genetic code without damaging the gene itself.
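To push that programming analogy one step further, here is a toy model (not biologically accurate machinery) of how a methyl mark "locks" a gene without touching its sequence:

```python
# The genome as low-level code, methylation as a flag that blocks execution.
# A toy illustration of the analogy, not real molecular machinery.
genome = {"GENE_A": "ATGGCC...", "GENE_B": "ATGAGC..."}  # same DNA in every cell
methyl_marks = {"GENE_B"}                                # epigenetic silencing

def express(gene: str) -> str:
    if gene in methyl_marks:
        return f"{gene}: silenced (methyl mark set, sequence untouched)"
    return f"{gene}: transcribed into protein"

for gene in genome:
    print(express(gene))
```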
These chemical markers are dotted along our genes, and represent a powerful way to control our basic biology—anything from stress to cancer to autoimmune diseases or psychiatric struggles. But unlike DNA, the chemical tags are thought to be completely wiped out in the embryo, resulting in a blank slate for the next generation to start anew.
Not so much. A now-famous study showed that a famine during the winters of 1944 and 1945 altered the metabolism of kids who, at the time, were growing fetuses. The consequence was that those kids were more susceptible to obesity and diabetes, even though their genes remained unchanged. Similar studies in mice showed that fear and trauma in parents can be passed on to pups—and grandkids—making them more susceptible, whereas some types of drug abuse increased the pups’ resilience against addiction.
Long story short? DNA inheritance isn’t the only game in town.
Superpowered Immunity
The new study plays on a similar idea: that an individual’s experiences in life can change the epigenetic makeup of his or her offspring. Here, the authors focused on trained immunity—the par...
-
As Moore’s Law slows, people are starting to look for alternatives to the silicon chips we’ve long been reliant on. A new optical switch up to 1,000 times faster than normal transistors could one day form the basis of new computers that use light rather than electricity.
The attraction of optical computing is obvious. Unlike the electrons that modern computers rely on, photons travel at the speed of light, and a computer that uses them to process information could theoretically be much faster than one that uses electronics.
The bulkiness of conventional optical equipment long stymied the idea, but in recent years the field of photonics has rapidly improved our ability to produce miniaturized optical components using many of the same techniques as the semiconductor industry.
This has not only led to a revived interest in optical computing, but could also have significant impact for the optical communications systems used to shuttle information around in data centers, supercomputers, and the internet.
Now, researchers from IBM and the Skolkovo Institute of Science and Technology in Russia have created an optical switch—a critical component in many photonic devices—that is both incredibly fast and energy-efficient.
It consists of a 35-nanometer-thick film made out of an organic semiconductor sandwiched between two mirrors that create a microcavity, which keeps light trapped inside. When a bright “pump” laser is shone onto the device, photons from its beam couple with the material to create a conglomeration of quasiparticles known as a Bose-Einstein condensate, a collection of particles that behaves like a single atom.
A second weaker laser can be used to switch the condensate between two levels with different numbers of quasiparticles. The level with more particles represents the “on” state of a transistor, while the one with fewer represents the “off” state.
What’s most promising about the new device, described in a paper in Nature, is that it can be switched between its two states a trillion times a second, which is somewhere between 100 and 1,000 times faster than today’s leading commercial transistors. It can also be switched by just a single photon, which means it requires far less energy to drive than a transistor.
Other optical switching devices with similar sensitivity have been created before, but they need to be kept at cryogenic temperatures, which severely limits their practicality. In contrast, this new device operates at room temperature.
There’s still a very long way to go until the technology appears in general-purpose optical computers, though, study senior author Pavlos Lagoudakis told IEEE Spectrum. “It took 40 years for the first electronic transistor to enter a personal computer,” he said. “It is often misunderstood how long it takes for a discovery in fundamental physics research to enter the market.”
One of the challenges is that, while the device requires very little energy to switch, it still requires constant input from the pump laser. In a statement, the researchers said they are working with collaborators to develop perovskite supercrystal materials that exhibit superfluorescence to help lower this source of power consumption.
But even if it might be some time until your laptop is sporting a chip made out of these switches, Lagoudakis thinks they could find nearer-term applications in optical accelerators that perform specialized operations far faster than conventional chips, or as ultra-sensitive light detectors for the LIDAR scanners used by self-driving cars and drones.
Image Credit: Tomislav Jakupec from Pixabay
-
Ever wonder how and when animals swanned onto the evolutionary stage? When, where, and why did they first appear? What were they like?
Life has existed for much of Earth’s 4.5-billion-year history, but for most of that time it consisted exclusively of bacteria.
Although scientists have been investigating the evidence of biological evolution for over a century, some parts of the fossil record remain maddeningly enigmatic, and finding evidence of Earth’s earliest animals has been particularly challenging.
Hidden Evolution
Information about evolutionary events hundreds of millions of years ago is mainly gleaned from fossils. Familiar fossils are shells, exoskeletons, and bones that organisms make while alive. These so-called “hard parts” first appear in rocks deposited during the Cambrian explosion, slightly less than 540 million years ago.
The seemingly sudden appearance of diverse, complex animals, many with hard parts, implies that there was a preceding interval during which early soft-bodied animals with no hard parts evolved from simpler animals. Unfortunately, until now, possible evidence of fossil animals in the interval of “hidden” evolution has been very rare and difficult to understand, leaving the timing and nature of evolutionary events unclear.
This conundrum, known as Darwin’s dilemma, remains tantalizing and unresolved 160 years after the publication of On the Origin of Species.
Required Oxygen
There is indirect evidence regarding how and when animals may have appeared. Animals by definition ingest pre-existing organic matter, and their metabolisms require a certain level of ambient oxygen. It has been assumed that animals could not appear, or at least not diversify, until after a major oxygen increase in the Neoproterozoic Era, sometime between 815 and 540 million years ago, resulting from accumulation of oxygen produced by photosynthesizing cyanobacteria, also known as blue-green algae.
It is widely accepted that sponges are the most basic animals on the animal evolutionary tree, and therefore were probably the first to appear. Yes, sponges are animals: they use oxygen and feed by sucking water containing organic matter through their bodies. The earliest animals were probably sponge-related (the “sponge-first” hypothesis) and may have emerged hundreds of millions of years before the Cambrian, as suggested by a genetic method called molecular phylogeny, which analyzes genetic differences.
Based on these reasonable assumptions, sponges may have existed as much as 900 million years ago. So, why have we not found fossil evidence of sponges in rocks from those hundreds of millions of intervening years?
Part of the answer to this question is that sponges do not have standard hard parts (shells, bones). Although some sponges have an internal skeleton made of microscopic mineralized rods called spicules, no convincing spicules have been found in rocks dating from the interval of hidden early animal evolution. However, some sponge types have a skeleton made of tough protein fibers called spongin, forming a distinctive, microscopic, three-dimensional meshwork identical to that of a bath sponge.
Work on modern and fossil sponges has shown that these sponges can be preserved in the rock record when their soft tissue is calcified during decay. If the calcified mass hardens around spongin fibers before they too decay, a distinctive microscopic meshwork of complexly branching tubes appears in the rock. The branching configuration is unlike that of algae, bacteria, or fungi, and is well known from limestones younger than 540 million years.
Unusual Fossils
I am a geologist and paleobiologist who works on very old limestone. Recently, I described this exact microstructure in 890-million-year-old rocks from northern Canada, proposing that it could be evidence of sponges that are several hundred million years older than the next-youngest uncontested sponge fossil.
Although my proposal may initially seem outrageous, it is consiste...
-
AI is slowly getting more creative, and as it does it’s raising questions about the nature of creativity itself, who owns works of art made by computers, and whether conscious machines will make art humans can understand. In the spooky spirit of Halloween, one engineer used an AI to produce a very specific, seasonal kind of “art”: a haunted house.
It’s not a brick-and-mortar house you can walk through, unfortunately; like so many things these days, it’s virtual, and was created by research scientist and writer Janelle Shane. Shane runs a machine learning humor blog called AI Weirdness where she writes about the “sometimes hilarious, sometimes unsettling ways that machine learning algorithms get things wrong.”
For the virtual haunted house, Shane used CLIP, a neural network built by OpenAI, and VQGAN, a neural network architecture that combines convolutional neural networks (which are typically used for images) with transformers (which are typically used for language).
CLIP (short for Contrastive Language–Image Pre-training) learns visual concepts from natural language supervision, using images and their descriptions to rate how well a given image matches a phrase. Because it learns from open-ended language rather than a fixed set of labels, it is capable of zero-shot learning: it can recognize objects and images it was never explicitly trained on, with far less reliance on labeled data.
The phrase Shane focused on for this experiment was “haunted Victorian house,” starting with a photo of a regular Victorian house then letting the AI use its feedback to modify the image with details it associated with the word “haunted.”
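For the curious, the scoring step at the heart of this feedback loop is easy to sketch with OpenAI’s open-source CLIP package. The image file and prompt below are illustrative stand-ins, not Shane’s actual pipeline, which additionally uses VQGAN to repeatedly update the image so the score goes up.

```python
# Minimal sketch: scoring how well an image matches a phrase with CLIP.
# Requires the open-source package: pip install git+https://github.com/openai/CLIP.git
# "victorian_house.jpg" is an illustrative stand-in image.
import torch
import clip
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

image = preprocess(Image.open("victorian_house.jpg")).unsqueeze(0).to(device)
text = clip.tokenize(["a haunted Victorian house"]).to(device)

with torch.no_grad():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)

# Cosine similarity between the embeddings serves as the "hauntedness" score;
# a VQGAN+CLIP loop nudges the generated image to push this number up.
image_features = image_features / image_features.norm(dim=-1, keepdim=True)
text_features = text_features / text_features.norm(dim=-1, keepdim=True)
print(f"CLIP match score: {(image_features @ text_features.T).item():.3f}")
```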
The results are somewhat ghoulish, though also perplexing. In the first iteration, the home’s wood has turned to stone, the windows are covered in something that could be cobwebs, the cloudy sky has a dramatic tilt to it, and there appears to be fire on the house’s lower level.
Shane then upped the ante and instructed the model to create an “extremely haunted” Victorian house. The second iteration looks a little more haunted, but also a little less like a house in general, partly because there appears to be a piece of night sky under the house’s roof near its center.
Shane then tried taking the word “haunted” out of the instructions, and things just got more bizarre from there. She wrote in her blog post about the project, “Apparently CLIP has learned that if you want to make things less haunted, add flowers, street lights, and display counters full of snacks.”
“All the AI’s changes tend to make the house make less sense,” Shane said. “That’s because it’s easier for it to look at tiny details like mist than the big picture like how a house fits together. In a lot of what AI does, it’s working on the level of surface details rather than deeper meaning.”
Shane’s description matches up with where AI stands as a field. Despite impressive progress in fields like protein folding, RNA structure, natural language processing, and more, AI has not yet approached “general intelligence” and is still very much in the “narrow” domain. Researcher Melanie Mitchell argues that common fallacies in the field, like using human language to describe machine intelligence, are hampering its advancement. Computers don’t really “learn” or “understand” in the way humans do, and adjusting the language we use to describe AI systems could help do away with some of the misunderstandings around their capabilities.
Shane’s haunted house is a clear example of this lack of understanding, and a playful reminder that we should move cautiously in allowing machines to make decisions with real-world impact.
Banner Image Credit: Janelle Shane, AI Weirdness
-
What secret alchemical knowledge could be so important it required sophisticated encryption?
The setting was Amsterdam, 2019. A conference organized by the Society for the History of Alchemy and Chemistry had just concluded at the Embassy of the Free Mind, in a lecture hall opened by historical fiction author Dan Brown.
At the conference, Science History Institute postdoctoral researcher Megan Piorko presented a curious manuscript belonging to English alchemists John Dee (1527–1608) and his son Arthur Dee (1579–1651). In the pre-modern world, alchemy was a means to understand nature through ancient secret knowledge and chemical experiment.
Within Dee’s alchemical manuscript was a cipher table, followed by encrypted ciphertext under the heading “Hermeticae Philosophiae medulla”—or Marrow of the Hermetic Philosophy. The table would end up being a valuable tool in decrypting the cipher, but could only be interpreted correctly once the hidden “key” was found.
It was during post-conference drinks in a dimly lit bar that Megan decided to investigate the mysterious alchemical cipher—with the help of her colleague, University of Graz postdoctoral researcher Sarah Lang.
A Recipe for the Elixir of Life
Megan and Sarah shared their initial analysis on a history of chemistry blog and presented the historical discovery to cryptology experts from around the world at the 2021 HistoCrypt conference.
Based on the rest of the notebook’s contents, they believed the ciphertext contained a recipe for the fabled Philosophers’ Stone—an elixir that supposedly prolongs the owner’s life and grants the ability to produce gold from base metals.
The mysterious cipher received much interest, and Sarah and Megan were soon inundated with emails from would-be code-breakers. That’s when Richard Bean entered the picture. Less than a week after the HistoCrypt proceedings went live, Richard contacted Megan and Sarah with exciting news: he’d cracked the code.
Megan and Sarah’s initial hypothesis was confirmed; the encrypted ciphertext was indeed an alchemical recipe for the Philosophers’ Stone. Together, the trio began to translate and analyze the 177-word passage.
The Alchemist Behind the Cipher
But who wrote this alchemical cipher in the first place, and why encrypt it?
Alchemical knowledge was shrouded in secrecy, as practitioners believed it could only be understood by true adepts.
Encrypting the most valuable trade secret, the Philosophers’ Stone, would have provided an added layer of protection against alchemical fraud and the unenlightened. Alchemists spent their lives searching for this vital substance, with many believing they had the key to successfully unlocking the secret recipe.
Arthur Dee was an English alchemist who spent most of his career as royal physician to Tsar Michael I of Russia. He continued to add to the alchemical manuscript after his father’s death—and the cipher appears to be in Arthur’s handwriting.
We don’t know the exact date John Dee, Arthur’s father, started writing in this manuscript, or when Arthur added the cipher table and encrypted text he titled “The Marrow of Hermetic Philosophy.”
However, we do know Arthur wrote another manuscript in 1634 titled Arca Arcanorum—or Secret of Secrets—where he celebrates his alchemical success with the Philosophers’ Stone, claiming he discovered the true recipe.
He decorated Arca Arcanorum with an emblem copied from a medieval alchemical scroll, illustrating the allegorical process of alchemical transmutation necessary for the Philosophers’ Stone.
Cracking the Code
What clues led to decrypting the mysterious Marrow of the Hermetic Philosophy passage?
Adjacent to the encrypted text is a table resembling one used in a traditional style of cipher called a Bellaso/Della Porta cipher—invented in 1553 by Italian cryptologist Giovan Battista Bellaso, and written about in 1563 by Giambattista della Porta. This was the first clue.
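To give a feel for how such a cipher works, here is a minimal Python sketch of the textbook Bellaso/Della Porta scheme, in which key letters pair up to select reciprocal substitutions between the two halves of the alphabet. The exact table in Dee’s manuscript may differ from this variant, and the key “DEE” below is purely illustrative.

```python
# Minimal sketch of a Bellaso/Della Porta cipher (1553): a reciprocal,
# keyword-driven substitution over an alphabet split into halves A-M / N-Z.
# This is the textbook variant; the table in Dee's manuscript may differ.

def porta(text: str, key: str) -> str:
    out = []
    key_letters = [k for k in key.upper() if k.isalpha()]
    ki = 0
    for ch in text.upper():
        if not ch.isalpha():
            out.append(ch)  # pass punctuation/spaces through unchanged
            continue
        # Key letters pair up into 13 rows: AB -> 0, CD -> 1, ..., YZ -> 12.
        row = (ord(key_letters[ki % len(key_letters)]) - ord("A")) // 2
        i = ord(ch) - ord("A")
        if i < 13:  # first half maps into second half, shifted by the row
            out.append(chr(ord("A") + 13 + (i + row) % 13))
        else:       # second half maps back into first half (reciprocal)
            out.append(chr(ord("A") + (i - 13 - row) % 13))
        ki += 1
    return "".join(out)

ciphertext = porta("MEDULLA", "DEE")          # encrypting...
assert porta(ciphertext, "DEE") == "MEDULLA"  # ...and the same call decrypts
```

Because the substitution is reciprocal, one function serves for both encryption and decryption, which is part of what made this style of cipher so practical in the pre-modern world.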
The Latin title indicated the text itself was also in Latin. This was ...
-
From buses to taxis to ambulances, the number and type of vehicles set to take to the skies in the allegedly near future keeps growing. Now another one is joining their ranks, and it seems to defy classification—it’s not a flying car, nor a drone; the closest to an accurate description may be a flying all-terrain vehicle, or the designation its creators have given it, which is a “personal electric aerial vehicle.” And it shares its name with a widely-beloved futuristic cartoon family: the Jetsons.
All sorts of technology that used to exist only in cartoon form has become reality since the show launched in 1962, from jet packs to 3D-printed food to smartwatches.
Swedish startup Jetson Aero’s tiny electric aircraft, the Jetson One, is sort of like George Jetson’s “car”—except there’s only space for one person, there’s not a closed cabin, and you can’t press a button to be dropped out the bottom. You can, however, press a button to activate a ballistic parachute, but Jetson Aero is really hoping none of its customers will ever have to use this feature.
The parachute is a last-resort option, built into the Jetson One along with several other redundancies for passenger safety. The vehicle runs on battery power, with eight electric motors, and is like a helicopter in that it takes off and lands vertically (though the fact that it has eight propellers makes it a “multicopter”). The fastest it goes is 63 miles per hour (102 kilometers per hour), so about the same as highway driving in or near an urban area.
At 9.3 feet long by 8 feet wide by 3.4 feet tall, the Jetson One is quite small, at least as far as aircraft go, and weighs 198 pounds (90 kilograms). It can carry a passenger who weighs up to 210 pounds, though the lighter you are, the longer you can fly: a pilot weighing 187 pounds can fly for 20 minutes before the vehicle’s batteries need recharging. In the US the aircraft is classified as “ultralight,” meaning you don’t need a pilot’s license to fly it.
Given its compact footprint, the vehicle could take off right from owners’ driveways; the company encourages potential customers to “make your garden or terrace your private airport.” It certainly would be nice to be able to fly without the hassle of first traveling to an airport or the expense of the associated storage and usage fees (though if you’re buying one of these little flying toys, those expenses probably won’t concern you much; the Jetson One goes for $92,000, which isn’t outrageous for what is essentially a personal mini aircraft). The downside of making your garden your personal airport, though, is that if you lose control of the vehicle or have a shaky takeoff or landing, you could go crashing right through your home’s roof or wall.
In the spirit of its widely-known compatriot company, Ikea, Jetson Aero delivers its aircraft to customers as a partially-assembled kit, accompanied by “detailed build instructions.” It’s a far cry from a shelving unit to a personal eVTOL, though, and letting customers assemble the vehicle themselves seems an odd choice given the risks of getting even one piece wrong.
Customers don’t seem worried, though. The company has already sold its entire 2022 production run (which, to be fair, was only 12 units) and is now taking orders for delivery in 2023.
Image Credit: Jetson Aero
-
Neurons live in a society, and scientists just found the ones that may allow us to thrive in ours.
Like humans, individual neurons are strikingly unique. Also like humans, they’re constantly in touch with each other. They hook up into neural circuit “friend groups,” break apart when things change, and rewire into new cliques. This flexibility lets their collective society (the brain) and their owners (us) learn about and adapt to an ever-changing world.
To rebuild neural circuits, neurons constantly monitor the status of their neighbors through sinewy branches that sprout from a rotund body. These aren’t just passive phone lines. Dotted along each branch are little dual-function units called synapses, which allow neurons to both chat with a neuron partner and record previous conversations. Each synapse retains a “log” of past communications in its physical and molecular structure. By “consulting” this log, a neuron can passively determine whether or not to form a network with that particular neuron partner—or even set of partners—or to avoid any interactions in the near future.
Apparently, neurons can do the same for us. This week, by listening to the electrical chatter of single neurons in rhesus macaque monkeys, scientists at Harvard, led by Dr. Ziv Williams, homed in on a peculiar subset of neurons that helps us tell friends from foes.
The neurons, sprinkled across the frontal parts of the brain, are strikingly powerful. As their monkey hosts hung out for game night, the little computers tracked how each player behaved—were they more cooperative, or selfish? Over time, these “social agent identity cells” stealthily map out the entire group dynamic. Using this info, the monkeys can then decide whether to team up with other monkeys or shun them.
It’s not just correlation. By modeling the electrical activity of these neurons, scientists were able to accurately determine any monkey’s past decisions—and mind-blowingly, predict its future ones, basically “mind-reading” the animal’s next move. When the team tampered with the neurons’ activity with a short burst of electrical zaps, the monkeys lost their social judgment. Like the new kid in school, they could no longer decide who to befriend.
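The paper’s decoding methods aren’t detailed here, but the core idea of reading a choice out of neural activity can be sketched with a simple classifier trained on firing rates. The data below are simulated purely for illustration; the neuron count, tuning model, and classifier are all assumptions, not the study’s actual pipeline.

```python
# Toy sketch of neural decoding: predict a binary choice (cooperate vs. defect)
# from simulated firing rates of "social agent identity" neurons.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_neurons = 200, 50
choices = rng.integers(0, 2, n_trials)   # 0 = defect, 1 = cooperate
tuning = rng.normal(0, 1, n_neurons)     # each neuron's preference for the choice
# Baseline firing plus a choice-dependent modulation on each trial:
rates = rng.normal(5, 1, (n_trials, n_neurons)) + np.outer(choices, tuning)

decoder = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(decoder, rates, choices, cv=5).mean()
print(f"Decoding accuracy: {accuracy:.2f}")  # well above the 0.5 chance level
```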
“In the frontal cortex, these neurons appear to be tuned for possible action from peers, representing them as communication partners, competitors, and collaborators,” wrote Dr. Julia Sliwa at Sorbonne Université in Paris, who was not involved in the study.
While the results are from monkeys, they’re some of the first to connect the activity of individual neurons to an extremely complicated yet necessary aspect of our lives. These data “are major steps in identifying the neural mechanisms for maneuvering in complex social structure,” she said.
My Brain’s View of You
We often think of neurons and circuits as hardware components that represent us: our perception, memory, decisions, feelings. Yet large parts of the brain are dedicated to representing other people in our external world.
One famous example is the “Jennifer Aniston neuron.” Back in 2005, an experiment showed that a single neuron in a person’s brain could react to a particular face—for example, Aniston’s. A lightbulb moment for neuroscience and computer vision, the study raised a brazen idea: that a single neuron has the computing power to encode a person’s physical identity.
In the real world, it gets more complicated than just identifying a face. A person’s face comes with history—is it my first time meeting them? What’s their reputation? How do I feel about this person?
Much work on how our brains handle social interaction comes from studying bands in music studios or teacher-student dynamics in classrooms. Here, brain activity is captured with wearables, which measure brain waves that wash over parts of the brain. These studies show that when making music or watching a movie together, our brain waves sync up. In other words, our brains tune into other people’s—but when,...
-
The deep learning neural networks at the heart of modern artificial intelligence are often described as “black boxes” whose inner workings are inscrutable. But new research calls that idea into question, with significant implications for privacy.
Unlike traditional software whose functions are predetermined by a developer, neural networks learn how to process or analyze data by training on examples. They do this by continually adjusting the strength of the links between their many neurons.
By the end of this process, the way they make decisions is tied up in a tangled network of connections that can be impossible to follow. As a result, it’s often assumed that even if you have access to the model itself, it’s more or less impossible to work out the data that the system was trained on.
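That tangle is easy to see in miniature. The toy sketch below trains a small network in PyTorch; afterward, everything it has “learned” lives in grids of floating-point weights that reveal nothing legible about the examples used to produce them.

```python
# Minimal sketch of "learning by adjusting link strengths": a tiny network
# trained on stand-in examples, using PyTorch.
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(4, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1)
)
optimizer = torch.optim.SGD(net.parameters(), lr=0.1)
X, y = torch.randn(64, 4), torch.randn(64, 1)  # stand-in training examples

for _ in range(100):
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(net(X), y)
    loss.backward()   # compute how each connection contributed to the error
    optimizer.step()  # nudge every connection strength accordingly

print(net[0].weight)  # just a grid of floats; the training data isn't legible here
```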
But a pair of recent papers have brought this assumption into question, according to MIT Technology Review, by showing that two very different techniques can be used to identify the data a model was trained on. This could have serious implications for AI systems trained on sensitive information like health records or financial data.
The first approach takes aim at generative adversarial networks (GANs), the AI systems behind deepfake images. These systems are increasingly being used to create synthetic faces that are supposedly completely unrelated to real people.
But researchers from the University of Caen Normandy in France showed that they could easily link generated faces from a popular model to real people whose data had been used to train the GAN. They did this by getting a second facial recognition model to compare the generated faces against training samples to spot if they shared the same identity.
The images aren’t an exact match, as the GAN has modified them, but the researchers found multiple examples where generated faces were clearly linked to images in the training data. In a paper describing the research, they point out that in many cases the generated face is simply the original face in a different pose.
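The linking step itself is conceptually simple: embed faces with an off-the-shelf recognition model and flag suspiciously close pairs. The sketch below uses the open-source face_recognition package and its conventional 0.6 distance threshold as illustrative stand-ins; the researchers’ actual model and matching criteria may differ.

```python
# Sketch of the identity-linking idea: compare a GAN-generated face against
# training images via face-embedding distance. File names are illustrative.
import face_recognition
import numpy as np

generated = face_recognition.load_image_file("generated_face.jpg")
gen_encoding = face_recognition.face_encodings(generated)[0]

for path in ["train_001.jpg", "train_002.jpg"]:  # stand-ins for the training set
    candidate = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(candidate)
    if not encodings:
        continue  # no face detected in this image
    distance = np.linalg.norm(gen_encoding - encodings[0])
    if distance < 0.6:  # the library's usual same-identity heuristic
        print(f"{path} likely shares an identity with the generated face")
```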
While the approach is specific to face-generation GANs, the researchers point out that similar ideas could be applied to things like biometric data or medical images. Another, more general approach to reverse engineering neural nets can go further, recovering training data without needing any reference examples at all.
A group from Nvidia has shown that they can infer the data a model was trained on without ever seeing examples of that training data. They used an approach called model inversion, which effectively runs the neural net in reverse. The technique is often used to analyze neural networks, but using it to recover input data had only been achieved on simple networks under very specific sets of assumptions.
In a recent paper, the researchers described how they were able to scale the approach to large networks by splitting the problem up and carrying out inversions on each of the networks’ layers separately. With this approach, they were able to recreate training data images using nothing but the models themselves.
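Nvidia’s layer-by-layer method is beyond a blog sketch, but the basic single-shot version of model inversion is short: freeze the model’s weights and run gradient descent on the input instead. The toy classifier, input shape, and hyperparameters below are illustrative assumptions, not the paper’s setup.

```python
# Toy model inversion in PyTorch: optimize the *input* (not the weights) so a
# frozen classifier assigns it a chosen label.
import torch
import torch.nn.functional as F

# Stand-in for a trained victim model; in a real attack the weights are given.
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
model.eval()
for p in model.parameters():
    p.requires_grad_(False)

target_class = torch.tensor([3])                   # an arbitrary label to invert
x = torch.randn(1, 3, 32, 32, requires_grad=True)  # start from pure noise
optimizer = torch.optim.Adam([x], lr=0.05)

for _ in range(500):
    optimizer.zero_grad()
    # Push the model toward the target class; a small norm penalty keeps x tame.
    loss = F.cross_entropy(model(x), target_class) + 1e-4 * x.norm()
    loss.backward()   # gradients flow to the input, not the weights
    optimizer.step()

# x now approximates an input the model strongly associates with the target
# class; on overfit models, such inputs can resemble actual training examples.
```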
While carrying out either attack is a complex process that requires intimate access to the model in question, both highlight the fact that AIs may not be the black boxes we thought they were, and determined attackers could extract potentially sensitive information from them.
Given that it’s becoming increasingly easy to reverse engineer someone else’s model using your own AI, the requirement to have access to the neural network isn’t even that big of a barrier.
The problem isn’t restricted to image-based algorithms. Last year, researchers from a consortium of tech companies and universities showed that they could extract news headlines, JavaScript code, and personally identifiable information from the large language model GPT-2.
These issues are only going to become more pressing as AI systems push their way into sensitive areas like health, finance, and defense. There are some solutions on the horizon, such as differential privacy, where mode...