Hi, we're new here.
We want to make a newsfeed with just news. No bots, no baby pictures, and no weird uncles sharing questionable articles. Just what's happening in the world.
Please take a look around and let us know what you think.
- 18 HOURS AGO
Purdue Pharma Pleads Guilty to Criminal Charges for Opioid Sales
The Justice Department announced an $8 billion settlement with the company. Members of the Sackler family will pay $225 million in civil penalties but criminal investigations continue.
- 18 HOURS AGO
NASA's OSIRIS-REx Successfully Touches Asteroid Bennu
The spacecraft attempted to collect samples from the asteroid for eventual return to Earth -- Read more on ScientificAmerican.com
- 18 HOURS AGO
Human Challenge Trials Will Deliberately Infect Dozens in the UK
Proponents of the trials say they can be run safely and help to identify effective vaccines, but others have questioned their value -- Read more on ScientificAmerican.com
- 17 HOURS AGO
Racism and Sexism in Science Haven't Disappeared
Those who argue that the system will magically self-correct are kidding themselves -- Read more on ScientificAmerican.com
- 18 HOURS AGO
The F.D.A. Wanted to Ban Some Hair Straighteners. It Never Happened.
In 2016, agency scientists deemed hair straighteners containing formaldehyde to be unsafe, according to newly obtained emails.
- 16 HOURS AGO
The World Needs to Ramp Up Solutions for Greener Cooling
A proliferation in traditional air conditioning meant to protect people from intense heat could also exacerbate warming -- Read more on ScientificAmerican.com
- 15 HOURS AGO
NASA's OSIRIS-REx Mission Touched Bennu Asteroid. How Much Will It Bring Home?
The OSIRIS-REx mission succeeded in pogo-sticking off the space rock, but is waiting to see whether it will need to swoop in again.
- 2 DAYS AGO
How an Ill-Fated Fishing Voyage Helped Us Understand Covid-19
The threat posed by the virus makes randomized controlled trials extremely difficult. That means “real-life experiments” are especially important.
- 8 HOURS AGO
Dinosaur Asteroid Hit Worst-Case Place
The mass-extinction asteroid happened to strike a spot where the rock contained lots of organic matter, sending soot into the stratosphere, where it could block sunlight for years. -- Read more on ScientificAmerican.com
- A DAY AGO
N.Y. Accuses Religious Health Cost-Sharing Group of Misleading Consumers
Regulators say a major group is misrepresenting cost-sharing plans, saddling people with unpaid medical bills.
- 14 HOURS AGO
NASA’s OSIRIS-REx Mission Completes Quick Touch of Bennu Asteroid
The spacecraft attempted to suck up rocks and dirt from the asteroid, which could aid humanity’s ability to divert one that might slam into Earth.
- NAUTILUS · 10 HOURS AGO
Why Physics Can’t Tell Us What Life Is - Issue 92: Frontiers
There is just something obviously reasonable about the following notion: If all life is built from atoms that obey precise equations we know—which seems to be true—then the existence of life might just be some downstream consequence of these laws that we haven’t yet gotten around to calculating. This is essentially a physicist’s way of thinking, and to its credit, it has already done a great deal to help us understand how living things work.

Thanks to pioneers like Max Delbrück, who crossed over from physics to biology in the middle of the 20th century, the influence of quantitative analyses from the physical sciences helped to give rise to mechanistic, molecular approaches in cell biology and biochemistry that led to many revolutionary discoveries. Imaging techniques such as X-ray crystallography, nuclear magnetic resonance, and super-resolution microscopy have provided a vivid portrait of the DNA, proteins, and other structures smaller than a single cell that make life tick on a molecular scale.[1] Moreover, by cracking the genetic code, we have become able to harness the machinery of living cells to do our bidding by assembling new macromolecules of our own devising. As we have gained an ever more accurate picture of how life’s tiniest and simplest building blocks fit together to form the whole, it has become increasingly tempting to imagine that biology’s toughest puzzles may only be solved once we figure out how to tackle them on physics’ terms.

But approaching the subject of life with this attitude will fail us, for at least two reasons. The first reason we might call the fallacy of reductionism.
Reductionism is the presumption that any piece of the universe we might choose to study works like some specimen of antique, windup clockwork, so that it is easy (or at least eminently possible) to predict the behavior of the whole once you know the rules governing how each of its parts pushes on and moves with the others.

The dream of explaining and predicting everything from a few simple rules has long captured the imagination of many scientists, particularly physicists. And, in all fairness, a great deal of good science has been propelled forward by the hunger of some researchers for a more completely reductive explanation of the phenomenon that interests them. After all, there are things in the world that can be understood as the result of known interactions among various simpler pieces. From the rise and fall of ocean tides with the moon’s gravitational tug, to the way that some genetic diseases can be traced to molecular events arising from the altered chemistry of one tiny patch on a protein’s surface, sometimes the thing we are studying looks like a comprehensible sum of its parts.

Alas, the hope that all scientific puzzles would be conquered through reductionism was more popular with physicists before the 20th century rolled around. Since then, multiple Nobel laureates in physics (and countless others as well) have written lucidly about how and why reductionist thinking often fails.[2] You cannot use Newton’s laws or quantum theory to predict the stock market, nor to predict even much simpler properties of “many-particle” systems, such as a turbulent fluid or a supercooled magnet.[3] In all such cases, the physical laws supposedly “governing” it all are swamped by the immensity of what we do not know, cannot measure, or lack the ability to compute directly.
Physics still works on such systems, but not solely by starting with fundamental equations governing the microscopic parts.

The second mistake in how people have viewed the boundary between life and non-life is still rampant in the present day and originates in the way we use language. A great many people imagine that if we understand physics well enough, we will eventually comprehend what life is as a physical phenomenon in the same way we now understand how and why water freezes or boils. Indeed, it often seems people expect that a good enough physical theory could become the new gold standard for saying what is alive and what is not.

However, this approach fails to acknowledge that our own role in giving names to the phenomena of the world precedes our ability to say with any clarity what it means to even call something alive. A physicist who wants to devise theories of how living things behave or emerge has to start by making intuitive choices about how to translate the characteristics of the examples of life we know into a physical language. After one has done so, it quickly becomes clear that the boundary between what is alive and what is not is something that already got drawn at the outset, through a different way of talking than physics provides.

To some degree, a hopeful inclination toward reductionism is expressed in the very asking of the question of where life comes from. We look at a living organism and cannot help but wonder whether such breathtaking success in form and function could simply be the result of a bunch of more basic pieces bouncing off of each other like simple and predictable billiard balls. Is there something more in the machine other than all its dumbly vibrating parts? If there isn’t, shouldn’t that mean we can eventually understand how the whole thing fits together?
Put another way, wouldn’t any proposed explanation for the emergence of life have to break it all down into a series of rationalized steps, where each next one follows sensibly and predictably from the last? If so, how is that not the same thing as saying we want to reduce life to a choreographed performance directed by a simple, calculable set of known physical rules?

It must be granted that physicists have already identified some rules that prove to make highly accurate predictions in systems that once seemed hopelessly and mysteriously complicated. Thanks to the ideas of people like Kepler and Newton, the motion of heavenly bodies is now an open book, and our ability to compute where these bright lights in the sky go is such an unremarked banality that it is now possible to get an extensive education in physics at many a great university without ever delving into the specialty sideshow of rigorous orbital mechanics. Imagine, though, being a brilliant natural philosopher at any point during most of human history, and marveling at the seemingly intractable complexity of how the sun, moon, and stars seem to continually rearrange themselves in the firmament as the days and years pass. The idea that a terse pair of equations describing gravitation and motion under force could bring distant galaxies, the wandering planets, and boxes dangling by coiled springs all into one comprehensive theoretical frame must have been inconceivable even to the greatest genius of every era for thousands of years. The scope and significance of the revolution that started with Newton and his contemporaries are hard to overstate.

And then came the 20th century!
Einstein began by contemplating the equations that describe the motion of light, and through sheer force of insight ended up reimagining the origins of gravity, so as to finally explain the last remaining puzzle of planetary motion that Newton could not touch (namely, Mercury). Meanwhile, Erwin Schrödinger’s quantum mechanical wave equation unlocked the atom, providing an elegant quantitative explanation for the colors of light emitted from various types of electrified gases. This was a bizarre, unintuitive theory of the mathematical inner workings of objects too small to be seen or touched, yet it could still match experimental measurements with stunning accuracy. In the wake of these grand scientific victories, one might forgive the odd scientist or two for feeling like all unpredictability might eventually be swept away as newer and ever more brilliant theories arrived.

On closer inspection, however, this hit parade of wins for reductive theoretical science reveals some bias. What these and many other examples of successful physical theories have in common is that they perform best when trying to predict a well-isolated piece of the world described by a relatively simple mathematical formulation involving a few different things one can measure—the one-planet solar system, the single, solitary hydrogen atom, and so on. In each of these cases, the theory succeeds by filtering out the rest of the universe and focusing on a few equations that accurately describe the relationships among a small number of physical quantities.

The fact is, there are many ways in which the extreme reductionist, armed with a powerful supercomputer, is going to miss the mark by miles when trying to compute the behavior of the whole directly from the simple rules obeyed by its parts. As physics Nobel laureate P.W.
Anderson once famously wrote: “More is different.”[4] And while we may well succeed at coming up with very good physical theories of things like freezing crystals or viscous fluids, it will not be because we have started by perfecting our detailed models of the atoms or subatomic particles out of which these things are built.

There’s no question that molecular biology has its own long and venerable history as a hard science in its own right. Thanks to countless experiments on molecules, cells, tissues, and whole organisms, it is now abundantly clear that the marvelously diverse functional capabilities of a living thing all have sound bases in the physical properties of their material parts.

However, this is not to say that reductionism reigns; on the contrary, the “more is different” idea of emergent properties rears its head everywhere in the study of how life works. Blood, for example, is a liquid that flows through veins and carries oxygen, and its biochemical capacity to absorb and release oxygen is well understood in terms of the atomic structure of a protein on red blood cells known as hemoglobin. At the same time, though, a quantity such as the viscosity of blood (which in theory results from mixing water molecules with plasma proteins and many other components) would be utterly impossible for anyone to predict precisely from first principles. The number of different factors contributing to how a given cell or molecule slides by another in such a heterogeneous mixture is so particular and complexly sensitive to small differences in the interaction properties of each pair of components that there will never be a computation as reliable and informative as just doing the experiment to measure what the empirical answer is.

Yet this empirical answer is important!
Life thrives in the realm of the particular, where quite specific and precise properties are achieved by its components that could trigger catastrophic failures if they turned out differently. We cannot assume that any small change to how sluggishly blood slides through a vessel, for example, or to the DNA sequence that instructs the cell how to build a particular protein, will necessarily only make a small difference to how the living thing functions as a whole. Life is a grab bag of different pieces, some of whose physical properties are easier to predict mechanistically than others, and it is certainly the case that at least some of the factors that matter a great deal to how a living thing works will fall into the category of highly non-universal emergent properties that are impossible to derive from first principles.

At base, this challenge will always keep popping up, because talking in physical terms is never the same thing as talking in biological ones, and so biologically important questions are not picked for their physical tractability. Instead, biological and physical ways of talking ground themselves in very different conceptual spaces.

Physics is an approach to science that roots itself in the measurement of particular quantities: distance, mass, duration, charge, temperature, and the like. Whether we are talking about making empirical observations or developing theories to make predictions, the language of physics is inherently metrical and mathematical. The phenomena of physics are always expressed in terms of how one set of measurable numbers behaves when other sets of measurable numbers are held fixed or varied.
This is why the genius of Newton’s Second Law, F = ma, was not merely that it proposed a successful equation relating force (F), mass (m), and acceleration (a), but rather that it realized that these were all quantities in the world that could be independently measured and compared in order to discover such a general relationship.

This is not how the science of biology works. It is true that doing excellent research in biology involves trafficking in numbers, especially these days: For example, statistical methods help one gain confidence in trends discovered through repeated observations (such as a significant but small increase in the rate of cell death when a drug is introduced). Nonetheless, there is nothing fundamentally quantitative about the scientific study of life. Instead, biology takes the categories of living and nonliving things for granted as a starting point, and then uses the scientific method to investigate what is predictable about the behavior and qualities of life. Biologists did not have to go around convincing humanity that the world actually divides into things that are alive and things that are not; instead, in much the same way that it is quite popular across the length and breadth of human language to coin terms for commonplace things like stars, rivers, and trees, the difference between being alive and not being alive gets denoted with vocabulary.

In short, biology could not have been invented without the preexisting concept of life to inspire it, and all it needed to get going was for someone to realize that there were things to be discovered by reasoning scientifically about things that were alive. This means, though, that biology most certainly is not founded on mathematics in the way that physics is. Discovering that plants need sunlight to grow, or that fish will suffocate when taken out of water, requires no quantification of anything whatsoever.
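The statistical reasoning mentioned in passing above, gaining confidence that a small increase in the rate of cell death is real rather than luck, can be sketched as a simple permutation test. This is only an illustrative sketch; the cell-death figures below are invented, not drawn from any study:

```python
import random

def permutation_test(control, treated, trials=10_000, seed=0):
    """Estimate how often a mean increase at least as large as the
    observed one appears when group labels are shuffled at random."""
    rng = random.Random(seed)
    observed = sum(treated) / len(treated) - sum(control) / len(control)
    pooled = list(control) + list(treated)
    n = len(treated)
    hits = 0
    for _ in range(trials):
        rng.shuffle(pooled)
        # Difference in means under a random relabeling of the dishes.
        diff = sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n)
        if diff >= observed:
            hits += 1
    return hits / trials

# Hypothetical cell-death rates (percent), six dishes per group.
control = [4.1, 3.8, 4.5, 4.0, 3.9, 4.2]
treated = [4.9, 5.1, 4.6, 5.0, 4.8, 4.7]

p = permutation_test(control, treated)
print(f"estimated p-value: {p:.4f}")
```

A small p-value here says the observed increase is unlikely to be an accident of labeling, which is exactly the kind of quantitative confidence the essay grants biology without making biology itself a mathematical science.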
Of course, we could learn more by measuring how much sunlight the plant got, or timing how long it takes for the fish-out-of-water to expire. But the basic empirical law in biological terms only concerns itself with what conditions will enable or prevent thriving, and what it means to thrive comes from our qualitative and holistic judgment of what it looks like to succeed at being alive. If we are honest with ourselves, the ability to make this judgment was not taught to us by scientists, but comes from a more common kind of knowledge: We are alive ourselves, and constantly mete out life and death to bugs and flowers in our surroundings. Science may help us to discover new ways to make things live or die, but only once we tell the scientists how to use those words. We did not know any physics when we invented the word “life,” and it would be strange if physics only now began suddenly to start dictating to us what the word means.

Jeremy England is senior director in artificial intelligence at GlaxoSmithKline, principal research scientist at Georgia Tech, and the former Thomas D. and Virginia W. Cabot career development associate professor of physics at MIT. This essay is adapted from England’s new book Every Life Is on Fire: How Thermodynamics Explains the Origins of Living Things.

Read our interview with Jeremy England, “The Physicist’s New Book of Life.”

Footnotes

1. Watson, J.D. & Crick, F.H.C. Molecular structure of nucleic acids. Nature 171, 737–738 (1953); Wüthrich, K. Protein structure determination in solution by NMR spectroscopy. Journal of Biological Chemistry 265, 22059–22062 (1990); Rust, M.J., Bates, M., & Zhuang, X. Sub-diffraction-limit imaging by stochastic optical reconstruction microscopy (STORM). Nature Methods 3, 793 (2006).

2. Laughlin, R.B. & Pines, D. The theory of everything. Proceedings of the National Academy of Sciences 97, 28–31 (2000); Anderson, P.W. More is different.
Science 177, 393–396 (1972). One should say that both Anderson and Laughlin do not mean to argue that systems with many components are wholly unpredictable; on the contrary, they both made their careers discovering predictability in such devilishly complex systems. However, what often happens in the so-called world of hard condensed matter (i.e., metals and more exotic solid-state materials) is that the way of cutting through the multitudes and seeing order in the whole is to realize that the collective behavior must be governed by some very specific symmetries of the system at hand. This can get quite mathematically rarefied, but for a simple example imagine a flat, planar lattice of arrows pointing every which way in the plane. Suppose that each arrow’s energy is lower to the extent that it points in the same direction as its neighbors. Clearly, the energy is therefore lowest for the collective when all the arrows point in the same direction. Yet symmetry tells us that the lowest energy state should not exhibit an average bias to point in any one direction, because the overall way we determine the energy of the system looks exactly the same when we rotate our perspective. The resolution is to realize that there are infinitely many equivalent lowest-energy states, with all arrows aligned with each other, but with each collectively aligned state pointing in a different direction.

3. It is worth stating specifically why one might have ever imagined something so outlandish as the idea that quantum theory could be used to predict the stock market. The point is that, from the perspective of a physicist, all the people and documents and computers and phones and factories and mines and forests and winds (and everything else) that act to determine the price of a stock are made of atoms. The way these atoms bind together into molecules is described quite well by known equations that govern the interaction of electrical charge, light, and matter on the tiniest of scales.
So why do we not try to predict stocks (and indeed, all the events of the world influencing the stocks) using these equations? Not only does the sheer scale of the computation required to represent such fine details put the task far beyond reach, but we also have little way of knowing most of the numbers that would serve as input to the model. Accordingly, much the same way that the shareholders may not easily get to know all that is ailing a publicly traded company, we also, by default, know very little about exactly what each atom or molecule on the planet is doing. Instead of trying to measure every one of those details, we are much better served making predictive models that paint a simpler picture of the thing we are trying to model (for example, by just positing that prices are determined by a balance between supply and demand).

4. Anderson, P.W. More is different. Science 177, 393–396 (1972).

Lead image: Sergey Nivens
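The arrow lattice sketched in footnote 2 is easy to poke at numerically. Below is a minimal toy model (the minus-cosine coupling between neighboring arrows is my own choice of convention, not something the essay specifies) showing that every uniformly aligned configuration has the same, lowest energy, no matter which direction the arrows collectively point:

```python
import math

def lattice_energy(angles, rows, cols):
    """Energy of a planar lattice of arrows: each nearest-neighbor pair
    contributes -cos(difference of angles), so aligned neighbors lower
    the total energy."""
    energy = 0.0
    for r in range(rows):
        for c in range(cols):
            a = angles[r * cols + c]
            if c + 1 < cols:  # bond to the right-hand neighbor
                energy -= math.cos(a - angles[r * cols + c + 1])
            if r + 1 < rows:  # bond to the neighbor below
                energy -= math.cos(a - angles[(r + 1) * cols + c])
    return energy

rows = cols = 4
# Uniformly aligned states, pointing in three different directions.
energies = [lattice_energy([theta] * (rows * cols), rows, cols)
            for theta in (0.0, 1.0, 2.5)]
print(energies)  # identical values: the lowest energy is direction-blind

# A misaligned state costs more energy than any aligned one.
misaligned = lattice_energy([0.0, 1.0] * (rows * cols // 2), rows, cols)
print(misaligned > energies[0])
```

Rotating every arrow by the same angle leaves the energy unchanged, which is the symmetry the footnote appeals to: the ground state is not one state but an infinite family of aligned configurations, one for each direction.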
- NAUTILUS · 10 HOURS AGO
The Self-Driving Car Is a Red Herring - Issue 92: Frontiers
Ten years ago this fall, Google gave us a glimpse of a new device unlike any it had ever built before—a computer-controlled car. It seemed such a strange thing for an Internet company to spend its time and energy on, a “moonshot” as the company’s engineers called such massive efforts. But with a single blog post, the search giant promised to reinvent our cars, and our communities, too.

It was a big vision for a single invention to carry. And the details were scant. But we quickly filled in the blanks. Software was going to replace our dangerous, congested, sprawling roads with something utterly safe, seamless and organized. Humans would take the back seat in a new network of “ghost roads,” as I call them. Ghost roads didn’t demand a massive mobilization of government. The technology of autonomous driving would roll onto existing highways, invisibly weaving a new transportation system. Only this one would be modeled on the Internet. Computers would outnumber people. Code would call all the shots.

Google almost succeeded. The company’s bold move spurred an arms race, drawing in the rest of Silicon Valley and automakers the world over. Hundreds of billions of dollars flowed into the quest to develop autonomous vehicles (AVs). Still in the lead, Google sister company Waymo now operates a limited self-driving ride-hail service in the Phoenix metro area.

FLEXIBLE DENSITY: The COVID-19 pandemic has offered us a vision of how cities can weather existential shocks by adapting to them. Already we see roads surrendered to delivery vehicles and people converting apartments to creative workspaces. (Image: Dash Marshall)

But the future has a way of veering off the road. A generation in, fully automated driving has proven far more challenging than many thought.
More than a decade after university researchers aced the Defense Department’s 2007 Urban Challenge, the first proof-of-concept of autonomous driving in populated areas, Google’s robovans are still mostly confined to the modern subdivisions of the Sun Belt. That’s because scant rainfall and wide, well-marked streets ease the engineering of split-second computer vision systems considerably.

Some experts now predict that we may never achieve the original dream of a reliable fully self-driving car. And even if we do, spreading the technology beyond the Arizona suburbs will require costly retrofits to add navigational hardware to existing roads. This is a non-starter in the post-pandemic age of austerity. Most state and local governments now struggle to merely maintain a much simpler, yet more fundamental technology for orienting both human and computer drivers—the painted markings that guide us along the pavement.

Meanwhile, as we’ve obsessed over driverless sedans and SUVs, the magic of autonomous driving has seeped into everything else on wheels. In 2019, University of Michigan urban technology professor Bryan Boyer and his students conducted a census of more than 80 commercially available robots and AVs designed for urban use. Unlike self-driving cars, these machines don’t need AV-only ghost roads to operate. They’re happy to mix it up with people, and defer to human supremacy on sidewalks, in bike lanes, and on streets.

Flipping through the pages of Boyer’s Robot Survey 2019, styled after the old Jane’s guides to military equipment that filled my Cold War youth, the truth is hard to deny. The self-driving car was a red herring. An entire class of AV species will replace the dinosaurs of the automotive age, as the mammals once did.

These strange silicon-powered beasts will arrive just in time.
That’s because—as we did when COVID-19 struck—spreading out and shutting down is going to be a tactic we’ll employ over and over again to weather the shocks of the 21st century. How well we put these machines to use to ferry goods and people around in clever new ways, and tend to the dull, dirty, and dangerous work of municipal upkeep, will mean the difference between keeping our cities humming along or abandoning them altogether.

Cities have come apart before when confronted with the outbreak of disease. But never has a city flayed itself as deftly and visibly as New York City following the arrival of COVID-19. In March and April alone, an estimated 420,000 people left the Big Apple. The majority fled into the surrounding suburbs. But resort communities and rural towns across a vast hinterland stretching from the Canadian border to the Virginia Tidewater absorbed many emigres. Tens of thousands fled as far as Florida and California.

Even 10 years ago, such a shift would have been unthinkable. But for these self-selected refugees—whiter, richer, and more likely to hold a job suitable for remote work—videoconferencing was a game-changer. Despite the unfairness of this urban exodus, new technology also improved conditions for many of those left behind. An array of services, digitally dispatched with a scale and precision never before seen, stepped in during lockdowns to bring food, medicine, and self-care supplies to the homebound. Delivery volumes doubled week on week, straining but never breaking down.

As the pandemic drama crawls toward denouement in the coming year—whether by deployment of an effective vaccine or a natural decline in transmission—we’re left to wonder what, if anything, of this ad hoc dispersal will persist. Will the digital diaspora put down roots or return to the city? Will stores reopen and if so, will people go out to shop again?

MICROSPRAWL: In 2040, outer New York City neighborhoods are bigger than ever.
Snobs call it “microsprawl,” but the people who live there love the affordable housing and open space that was out of reach in the old subway-centered districts. (Image: Dash Marshall)

Our ambivalence on these choices is palpable. Nowhere more so than in Silicon Valley, where the world’s most powerful tech firms have given workers a free pass to work from home indefinitely—while also announcing massive plans for new downtown office quarters. Facebook moved early, committing to shift a big portion of its workforce to remote over the coming decade. Yet all along, the company continued to follow through on a pre-pandemic negotiation, signing a deal to lease the landmark Farley Post Office in New York City—a building whose considerable value comes from its proximity to the Pennsylvania Station commuter rail hub, North America’s busiest. Since the start of the pandemic, despite touting work-from-home arrangements, Amazon, Google, Facebook, and Apple collectively expanded their New York City workforce by over 2,600 employees, more than 10 percent.

The tension of dispersal is grinding us down at home, too, despite high initial enthusiasm. Women have faced an especially turbulent set of new forces—freedom from commutes has given them new choices to manage work and family responsibilities, but the shutdown of schools and child care has further increased the already disproportionate burden of child care and housekeeping they bear. For all of us, the costs of prolonged isolation to productivity, recruitment, career development, creativity, and morale are still largely unquantified. I suspect most organizations are in bad shape, but unable or unwilling to face it. At some point, they will be pulled apart. The failures could be sudden.
Stuck between a static condition that becomes more permanent with each passing week, and the gravitational field of human social networks slowly reasserting itself—what’s our next move? My hunch is that much like the tech giants, we should embrace the tension, and refashion our cities to be as good at spreading out during future emergencies as they are at crowding together during peacetime.

I call this approach “flexible density,” and this kind of thinking is already visible in our responses to the COVID-19 pandemic. Restaurants shut down dining rooms but shift out into the street, colonizing curbside parking for “streateries” and expanding into the digital realm through multiple delivery platforms. Amazon adds over 1,000 local shipping hubs, slashing delivery times so much that shoppers stop thinking twice about going out to stores themselves. DJs have been locked out of dance clubs but promote live streams to backyard pod parties instead. Everyone discovers that doctor’s visits by screen and prescriptions by mail beat waiting rooms full of sick people. While many of us will go back to the familiar old ways and places after the threat of infection subsides, these new offerings won’t vanish completely when the pandemic does. A lot of them will stick around, and they’ll be there for us again the next time the alarm sounds.

Until now, automation has lurked in the shadows of this transformation. Routing algorithms dispatch drivers. Conveyor droids race across warehouse floors fulfilling our just-clicked desires. Amazon has more than 100,000 conveyors already, toiling away around the clock in more than 100 town-sized shipping sheds. Inside, with humans pushed to the edges, goods-hauling AVs rocket down alleys too narrow for humans at three times the pace of a brisk walk. These robot-dominated interiors are truly inhuman. But extending these ghost roads out into the surrounding world may be the booster shot that’s needed to bolster urban immunity for future crises.
Amazon certainly seems to think so. The company bought Silicon Valley AV startup Zoox for $1 billion at the height of this spring’s outbreak.

While automation is one powerful tool for flexible density, paving the way for it through post-crisis cities can’t be left to companies. Flexible density must be designed comprehensively into cities over the years and decades to come. We’ll need buildings that are better suited to adapt when demands for space, security, energy, and ventilation change suddenly. Infrastructure must be pliable enough to extend to dispersed locations on short notice. And a wide array of essential services must be able to find and deliver to constituents and customers wherever they may be. Special attention must be paid to ensure that vulnerable and isolated groups are not left out of the new safety net. All of this will require new investments, new regulations, and new leadership. But as we belatedly start to price in the risks of human settlement in the 21st century, it may prove to be a bargain indeed.

Fast forward to 2040, a world where iron-willed conservation and extreme adaptation go hand-in-hand. The big brush strokes that reimagine our world are painted with laser light, the scanning beams of helpful robots and automated vehicles feeling their way across the urban landscape. In New York City, they now outnumber people. For every one of some 10 million inhabitants, a half-dozen or more artificial ones have taken up residence. Yet while the city is more crowded than ever before, these machines make it possible to spread out even as we pack more tightly together.

The edges of the old city are where the most profound unraveling is underway. In the far reaches of Brooklyn and Queens a housing boom is in full swing. “Software trains” of automated buses—four and five coaches long, linked together only by a wireless connection—push at high speed along a vast new network of dedicated lanes.
These transit-ready ghost roads are only a few feet narrower than the ones we used to lay out for human bus drivers, but with lateral gyrations limited by precise computer control, this belt-tightening makes it easier to squeeze them into a much wider range of city streets. Plowing effortlessly through neighborhoods the subway never reached or fully served, they’re unlocking opportunities to up-zone entire swaths of the city for more apartments.

These new neighborhoods don’t feel like the old ones, though, as new ways of getting around take hold. Zipping around on “rovers”—a vast category of electric, automated runabouts that includes bikes, scooters, and even wheelchairs—is the way to go now. Summoned with a tap, they’ve crowded out pedal-pushers in the bike lanes, and even displaced cars and trucks from many local streets. Sporting minds of their own, these shared vehicles predictively swarm to where they’re likely to be needed next—outside schools before dismissal, at the stop for an arriving bus, or by the pub at last call. Meanwhile, two-thirds of the stuff residents used to buy in person at local shops now gets delivered by a bewildering array of sidewalk-rolling, stair-climbing, and low-flying bots.

The collective impacts of these changes take some getting used to. Neighborhoods are bigger than ever, as rovers open up a range some five times what people can reach on foot. Snobs call it “microsprawl,” but the people who live there don’t seem to mind the abundance of affordable housing and open space that was always out of reach in the old subway-centered districts. They adapt to the contradictions. Walking is passé, but cars have vanished, leaving more space for parks.
Beloved shops on the high streets are long gone, but so too are most of the big-box stores, which are now ball fields (at least the ones that weren’t converted to distribution centers).

Despite all the new machine motion on the streets, people hardly take notice. That’s because the cloud is a clever choreographer, one who makes her most important curtain call late at night. Only then do the ghost roads of the city truly come to life. Heavy haulers are sent out, keeping the streets clear of big commercial loads during the day. Municipal robofleets are unleashed to do the dull, dirty, and dangerous work of urban upkeep. Wayward bikes and scooters scurry back to charging docks and disinfection baths. The cute eight-passenger shuttles that have replaced school buses moonlight as parcel vans. Bending time with technology, it turns out, is one of flexible density’s secret weapons for sharing the city without overcrowding.

The hot, close salsa step of big-city life isn’t the only dance this city knows, though. When a crisis comes—be it pandemic, bombardment, flooding, or worse—it can shift stance to a spread-out, loose-formation waltz with ease.

For starters, this robot-powered metropolis is far more prepared to shelter in place. Municipal bots surge into hotspots—disinfecting, repairing flood barriers, extinguishing fires, or whatever the crisis calls for. And the same automated supply chain that powers e-commerce to our front doors in peacetime is re-tasked in an instant to carry relief supplies.

When the situation worsens, though, and it comes time to evacuate, these machines prove their mettle. This city doesn’t panic; it simply spreads out in an orderly fashion. Software trains change their routes to provide instant transit service out of the danger zone. Urban “ushers,” roving street furniture with a mind of its own, close streets and direct evacuations.
“Civic caravans,” entire mobile buildings that house essential government services, pick up and crawl under precise, slow-motion computer control to higher ground overnight. Overhead, drones monitor the shifting scene and relay essential communications.

COVID-19 was a gentle warning compared to the stresses that will threaten cities throughout the 21st century. It also exposed striking inequalities in who will bear the brunt of these forces in large cities—migrants, racial and ethnic minorities, the poor, women, and the elderly were all hit hard. The elite’s preparations to flee are morally inexcusable. But simply hardening cities to ease the pain for those left behind is not enough. We need to shape cities in a way that both eliminates the need for such flight and bakes a more controlled, calculated version of it more deeply into the urban DNA.

Seen in this light, flexible density will be a shocking proposition for city builders and their advisors. For centuries, they’ve responded to pandemics by hardwiring social distancing into the urban fabric. After surviving the arrival of bubonic plague in Milan in 1485, Leonardo da Vinci sketched a series of unrealized designs for the city that included broader spaces for circulation. Four hundred years later, the early 20th-century modernists obsessed over windows and plazas in an effort to bring more sunlight and fresh air into buildings and neighborhoods. Because of their rigidity, however, these schemes didn’t weather well. The permanent separation they baked in was too high a cost for city dwellers to bear. Over time, many drifted back to the old packed-in places that pre-dated the new ones. Those classic, tight-knit neighborhoods are still the places people love most. And they are our best models for sustainable, human-centered urban design. But they are not impervious to the shocks ahead.

Flexible density can help us navigate these tensions.
It recognizes that dispersal is a strategy for resilience, but need not be permanent, or incongruent with the virtues of compact development. It recognizes the existential nature of the threats that cities face in the 21st century, and that the static way we’ve been thinking about resilience to date may not be enough. Yet it offers hope too. It shows that even as the risks we face grow, so too do our capabilities. And it puts technology in its place, as a tool for boosting those capabilities. Flexible density highlights the potential for advances in automated mobility to unlock new design possibilities for buildings, infrastructure, and services—without wholesale re-organizing the future around their demands. There is much to be worked out about the details of how flexible density will work in the real world. But it is clear that we can and should challenge ourselves to peek over the horizon and imagine a tomorrow where flexible density is part of how we make ourselves at home on the other side of this very weird and unsettling present.

Dr. Anthony Townsend is Urbanist in Residence at Cornell Tech’s Jacobs Institute, and the author of Ghost Road: Beyond the Driverless Car (2020) and Smart Cities: Big Data, Civic Hackers, and the Quest for a New Utopia (2013).

Lead image: Golden Sikorka / Shutterstock
- NAUTILUS, 10 HOURS AGO
The Physicist’s New Book of Life - Issue 92: Frontiers
Who is Jeremy England? There are many answers to that question. He is a biochemistry graduate who became an MIT assistant professor in physics when he was 29 years old. He is an ordained rabbi. He is the grandson of Holocaust survivors. He is a descendant of the first life-form on Earth. He can also be described as an assemblage of atoms that exhibits complex, life-like behavior. England might describe himself as one of the many dissipators of energy in the universe—this, he says, seems to be a useful way to answer the question that humans have asked for so many millennia: What is life, and how did it arise?

This question—and England’s answer—form the basis of his new book Every Life Is On Fire: How Thermodynamics Explains the Origin of Living Things, which explores the idea that burning up energy is the base activity of life. But England has no simple, neat tale to tell: This is a complex, multilayered subject, and must be treated as more than a scientific issue, he says. That’s why Every Life Is On Fire daringly brings in ideas from the Hebrew Scriptures and uses them to unpack the science. Cultural and religious traditions have long been exploring this territory, he says, and can complement scientific angles on the question of where we ultimately came from. If we really want to understand ourselves, he suggests, we’ll need more than science.

BOTH SIDES NOW: Jeremy England. Photo: Katherine Taylor

How did you get into combining physics and biology to come up with ideas about life’s origins?

I always wanted to do physics because I liked the predictive power of simple principles. At the same time, I was fascinated by the relationships between form and function in biology—especially when you see that it’s still there when you get down to the molecular level.
I started out working in a structural biology and a cell biology wet lab, and was very bad at that! By the time I was finishing undergraduate school, I was working in a theoretical lab looking at protein folding. So it feels natural for me to be drawn to this set of questions.

Does it matter that no one has come up with a watertight definition of life?

Not really. We get the notion of what life is. You can do plenty of great science while saying, “Let me accept that there is a category of things where fish belongs to it, and trees belong to it, but rocks don’t belong to it, and ice doesn’t belong to it.” We can continue to use the word while admitting that we don’t really have a scientific pedigree for how the word developed. And yes, there will be some difficult cases, such as viruses. But we can accept the category as given, and study, to the best of our ability, the properties of the things in that category that are of interest to us.

Part of the quest involves seeking out life-like behavior in inanimate things too, doesn’t it?

In a way, yes. We had a paper in Physical Review Letters a few years ago about a simulation of a bunch of balls and springs, just jiggling, with the springs hooking and unhooking from the balls. Then you wiggle one of the springs at a certain frequency and it all jumbles and hooks together in a different way. Now you have a resonator that’s better at absorbing energy at the frequency you’re driving it at. Learning to harvest energy better from its surroundings is a feedback process that sounds lifelike. On the other hand, if you held it up to someone and said, “Look at this jiggling mass of balls and springs, it’s alive,” they would just laugh at you, and rightly so.

So it’s clear that this is a territory where there’s going to be a lot of different arguments.
Some might say, “Well, the fundamental thing about life is that it does X.” And someone else might say, “Well, the fundamental thing about life is that it does Y.” What I find is that when you focus on any one of those properties, you can always find examples and counter-examples. If you were loose enough in your understanding of what it means to copy yourself, for instance, then a spreading fire is a self-copying phenomenon. But to call a fire alive is a really contentious extension of the domain of that word.

What we term life is this multifarious bundle of all these different things together: You’re good at self-replication, energy harvesting, and so on. When you study each one of those things on its own, it’s a physical phenomenon that has more primitive examples. But those examples are where we have a chance of understanding the fundamental principle better.

A lot of physicists’ efforts to understand life seem to invoke ideas from thermodynamics, such as entropy. To me, that feels uncomfortable, because thermodynamics was developed for another purpose entirely. Are you wedded to a thermodynamics approach?

It’s necessary to talk about entropy for historical reasons, and if we are conservative enough about how we use the term, it can still be useful as a shorthand. What I advocate for now is that we try to make theories that talk about the probability of things happening. And yes, it’s true that entropy, which counts the number of ways something could happen, is part of what weighs on the scale in determining probabilities—but it is not the only thing that impacts probability. So trying to talk about whether entropy should increase becomes very distracting. I think a better way to talk about it is to consider what the likely outcome is, given the starting point, given the way the system is being driven, and given the sources of fluctuation in the system.
Entropy will be one of the things that matters there, but not the only one.

You say that life seems to demand an explanation. Do you think that you’ve found one?

The focus of my line of research is more about whether we can develop the capability to bring about the different aspects of what I would call “lifelikeness” in experimental settings, with control and with theoretical principles that can be clearly articulated. We may not know exactly how our particular example of life got put together, but we start to see how one puts a bunch of things together in general. The starting point for that is to break things apart into these different phenomena like energy-harvesting, self-replication, et cetera. With each of those, we’ve made some progress. There’s more to do, but we can start to see how a story might come together.

Energy harvesting is central to your ideas. You suggest a key aspect of life’s emergence is down to structures that adapt to their environment by dissipating energy. Can you elaborate on that?

Imagine I have a collection of matter under the influence of an environment. The environment is essentially sources of energy that are kicking the matter and knocking into it and allowing it to change shape. I’m interested in which configurations of that matter will be likely to exist at some point in the future. That likelihood depends, in part, on how much extra energy was absorbed and dissipated on the way. Over the course of the whole history of the system, highly dissipative histories are going to lead to highly likely outcomes.

What’s an example of a likely outcome?

An example might be a self-copying bacterium that eats some sugar. It uses the sugar to build another copy of itself. Now I have two of them and they eat the sugar even faster, and then they make four of them, and then they eat the sugar even faster. So the chemical dissipation is accelerating toward a likely outcome, which is that I have more bacteria in my future than I had in my past.
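England’s sugar-eating bacterium is essentially a compounding process: every copy made raises the rate at which energy is dissipated, which in turn makes still more copies likely. That accelerating dissipation can be sketched in a few lines of code (the function name and all parameter values here are illustrative assumptions, not from the interview):

```python
# Toy model of England's bacterium example: each generation of cells
# eats sugar to build copies of itself, so the rate of consumption
# (and hence of energy dissipation) accelerates until the sugar is gone.
# All parameter values are illustrative assumptions.

def dissipation_history(sugar=1000.0, eat_rate=1.0, generations=10):
    """Return a list of (population, sugar_dissipated) per generation."""
    population = 1
    history = []
    for _ in range(generations):
        consumed = min(sugar, population * eat_rate)  # each cell eats at a fixed rate
        sugar -= consumed
        history.append((population, consumed))
        if sugar <= 0:
            break  # fuel exhausted; growth stops
        population *= 2  # the eaten sugar becomes new copies
    return history

if __name__ == "__main__":
    for gen, (pop, burned) in enumerate(dissipation_history()):
        print(f"generation {gen}: {pop} cells, {burned:.0f} units dissipated")
```

The per-generation dissipation never decreases while the sugar lasts, which is the “likely outcome” England describes: a future with more bacteria in it than the past had.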
The balls and springs work that way as well. It’s a positive feedback process where you’re exploring a space of combinations of matter. There’s an energy source. And the flow of energy through the system leads to a positive feedback relationship where you find a better energy absorber and it helps you absorb even more energy, and then you find another, even better energy-absorbing state.

What’s the general idea of dissipative adaptation?

There’s a feedback process that’s positive: I end up in a particular place because I was in a state in my past that was good at absorbing energy, and it carried me irreversibly in a certain direction that I can’t go back from. It left its mark. So the general idea with dissipative adaptation is that the current state of the system holds the signature of how I had to be in some special state in my past to absorb a lot of energy. That helped me change my shape in consequential ways. Sometimes that leads to growing energy absorption over time, and sometimes it leads to extinction of energy absorption over time. And both of those things can leave very noticeable fingerprints that are different aspects of lifelike behavior.

And can we see this in biological experiments?

I haven’t been able to apply these ideas rigorously to anything like living cells yet—certainly not in experiments, but also not even with theoretical models. It’s much messier and more complicated to try to get things done in the biological context. But I don’t think that doing that kind of experiment is a long way away.
Usually if I show a biologist a living cell and I say, “Look, it’s behaving in this way where it’s being very smart in how it’s reacting to something its environment is doing,” the default assumption is, “Well, there’s something you don’t understand yet about the biology, and that’s the explanation for what you’re seeing.” The design of experiments will have to be done very carefully.

If you can’t yet use biology, what can you use to explore these ideas?

There are membrane-less droplets, for instance, that self-organize inside cells under different conditions. They seem to have very plastic and flexible properties that help the cell respond to different functional needs. A biologist might say, “Oh, well, it has all of these evolved abilities that come from eons of natural selection, making it better and better at what it does.” But it’s starting to be hard to imagine that every kind of response like this has its own separate program, as though it’s all been learned from the past. There’s a growing list of experimental biologists who are interested in these kinds of emergent adaptive behaviors in biological systems.

What kind of experiments are you doing along these lines?

We’ve been working with primitive abiotic examples. The place we’re looking is called “active matter.” It can involve proteins chewing through chemical fuels and binding and unbinding from each other. But you can also do it with larger objects. I have a collaboration at Georgia Tech where we do this with robot swarms. There are also examples of “colloidal particles” that have special coatings on their surfaces—they’re like little chemical jet packs. And they already exhibit really interesting collective behaviors. Active matter is a nice experimental base camp. You don’t have to try to make sense of the living cell, where in addition to everything else you have all of the impacts of natural selection at the level of the organism.
We can just study the collective behavior of things that are like soups of interacting proteins that are more primitive.

Every Life Is On Fire is still not the long-sought origin-of-life story, though?

It’s true: There is a lot more to fill in. I’m sure there are people who will read this book and say, “Well, you’ve talked about different kinds of lifelikeness and how they might emerge, but that’s not the same thing as a full story from start to finish of how life as we know it gets put together.” Maybe we can understand how self-replicators might start to emerge, and how mechanisms that respond to the patterns in their environment by accurately predicting their surroundings might emerge. And maybe energy harvesting is something whose emergence we can understand. So that certainly recalibrates our sense of how to imagine a prebiotic situation and think about what’s difficult or easy to accomplish with what would be lying around. But, no, it is not the same thing as telling a blow-by-blow story. I’m sure anyone who’s looking for that level of detail in a story that is convincing and testable will have to wait a while. Doing forensics at that kind of distance in history is pretty difficult.

What’s the next stage in trying to get a handle on the origin of life?

For the short term, it’s going to be about how far we can push this idea that, subjected to the right kinds of patterns, naive matter can exhibit computing and learning behaviors. I’m trying to do that right now with some of my collaborators—Dan Goldman at Georgia Tech and others who are part of this effort to control robot swarms. We want to push that envelope and show a smoking gun for that kind of effect, creating something that can be tested and proven empirically in the laboratory. That will put the physical principles on a very firm footing.
The more we can achieve impressive results in that way, the more we are going to be able to redouble our efforts to understand the wider implications of the theory and tie it back into other things. To be honest, the broader question of how we start to talk about how life comes together is something I find more difficult to predict: I don’t claim that I can see which way that goes yet.

One thing that makes your book particularly interesting is that it is not entirely focused on science, but weaves religious narratives—in particular, the story of Moses from Hebrew scripture—into the scientific narrative. What made you want to do that?

Talking about the origin of life, or the boundary between what’s alive and what isn’t, involves broader questions that aren’t in the narrow domain of what you can understand scientifically. I didn’t want to stick my head in the sand about that. I want to understand how things work if I reason about them scientifically. But I am also a human being with other interests. I’m a practicing religious Jew—I’m an ordained Orthodox rabbi—and I care very deeply about these things. So I would feel foolish putting the scientific ideas out there but not making my own comment about a larger conversation that includes more perspectives on what some of this could mean. When I decided to write this book, I quickly realized I wanted to go and look in the Torah and see if I could find a commentary that responds to what I’m already thinking about with the science. I certainly think that it’s possible to contemplate the boundary between life and not-life from that perspective, and the text, I would argue, clearly contains such a contemplation.

Do you think that including all these different perspectives is important in our quest to make sense of ourselves?

It’s clear that the question of what happened in the past is not a low-stakes question.
You see that in how people argue about history and in the very emotional disputes people end up having about the prehistoric, or about cosmology. Ultimately, and this is something I learned from the Torah, how you describe the past is not ideologically neutral. The way you talk about who we are, and where we come from, matters to people—partly because it makes some people powerful, and enables them to convince others to do certain things. So I certainly don’t want to eliminate any frameworks of meaning that we need for talking about the past—we can’t just have frameworks that involve concepts like fundamental fields or prebiotic chemical reactions.

I sometimes think fundamental physicists gin up the notion that when we’re done, we’ll just talk about strings and that will be everything. But we already know you can’t describe the interesting phenomena of the world if you just start with Coulomb’s law and the Schrödinger equation. It doesn’t work. You need different languages. We certainly shouldn’t be trying to have fewer of them. The difference between physics and biology is that they are different languages for talking about the same world. It’s a mistake to be looking for one language that will replace or subsume all others. The fact that I can talk about a person and see them as a collection of atoms should not supplant the fact that I can also talk about that person as a participant in an economy, or a moral being, or a participant in a relationship. These other frameworks of meaning are important, and we should grab them and hold on to them and insist on them.

Is it more important to wrestle with issues around our origins than to solve them?

This is not a conversation that we should be hoping to exhaust. People who think that we’re done sorting it out are misguided in one way or another. People need to keep talking respectfully, with intellectual honesty, and in different languages, and sharing those languages with each other.
That’s how we’ll progress in our understanding.

Michael Brooks holds a Ph.D. in physics and is the author of The Quantum Astrologer’s Handbook.

Read “Why Physics Can’t Tell Us What Life Is” by Jeremy England, also in this issue.

Lead image: ping198 / Shutterstock
- 20 HOURS AGO
Trump’s Antibody ‘Cure’ Will Be in Short Supply
All the weak points of American health care — testing delays, communication breakdowns, inequity — are working against this potential treatment.