New camera on Palomar telescope will seek out supernovas, asteroids and more

A new eye on the variable sky just opened. The Zwicky Transient Facility, a robotic camera designed to rapidly scan the sky nightly for objects that move, flash or explode, took its first image on November 1.

The camera, mounted on a telescope at Caltech’s Palomar Observatory near San Diego, succeeds the Palomar Transient Factory. Between 2009 and 2017, the Palomar Transient Factory caught two separate supernovas hours after they exploded, one in 2011 (SN: 9/24/11, p. 5) and one earlier this year (SN: 2/13/17). It also found the longest-lasting supernova ever, from a star that seems to explode over and over (SN: 11/8/17).

The Zwicky survey will spot similar short-lived events and other cosmic blips, like stars being devoured by black holes (SN: 4/1/17, p. 5), as well as asteroids and comets. But Zwicky will work much faster than its predecessor: It will operate 10 times as fast, cover seven times as much of the sky in a single image and take 2.5 times as many exposures each night. Computers will search the images for any astronomical object that changes from one scan to the next.
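
That scan-to-scan search boils down to image differencing: subtract a reference image of the same patch of sky from each new exposure and flag whatever changed significantly. Here is a minimal Python sketch of the idea, a toy illustration rather than ZTF's actual pipeline, with made-up array sizes and a simple noise threshold:

```python
import numpy as np

def find_transients(new_image, reference_image, threshold=5.0):
    """Flag pixels that changed significantly between two aligned exposures.

    A toy version of difference imaging: real surveys first match the two
    images' point-spread functions and noise levels before subtracting.
    """
    diff = new_image - reference_image
    noise = np.std(diff)                       # crude noise estimate
    return np.argwhere(np.abs(diff) > threshold * noise)

# Toy example: a "new source" brightens in one pixel of a 100x100 field.
rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, (100, 100))
new = ref + rng.normal(0.0, 0.1, (100, 100))
new[42, 17] += 50.0                            # a transient appears
print(find_transients(new, ref))               # -> [[42 17]]
```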

The camera is named for Caltech astronomer Fritz Zwicky, who first used the term “supernova” in 1931 to describe the explosions that mark a star’s death (SN: 10/24/13).

Simulating the universe using Einstein’s theory of gravity may solve cosmic puzzles

If the universe were a soup, it would be more of a chunky minestrone than a silky-smooth tomato bisque.

Sprinkled with matter that clumps together due to the insatiable pull of gravity, the universe is a network of dense galaxy clusters and filaments — the hearty beans and vegetables of the cosmic stew. Meanwhile, relatively desolate pockets of the cosmos, known as voids, make up a thin, watery broth in between.

Until recently, simulations of the cosmos’s history haven’t given the lumps their due. The physics of those lumps is described by general relativity, Albert Einstein’s theory of gravity. But that theory’s equations are devilishly complicated to solve. To simulate how the universe’s clumps grow and change, scientists have fallen back on approximations, such as the simpler but less accurate theory of gravity devised by Isaac Newton.
Relying on such approximations, some physicists suggest, could be mucking with measurements, resulting in a not-quite-right inventory of the cosmos’s contents. A rogue band of physicists suggests that a proper accounting of the universe’s clumps could explain one of the deepest mysteries in physics: Why is the universe expanding at an increasingly rapid rate?

The accepted explanation for that accelerating expansion is an invisible pressure called dark energy. In the standard theory of the universe, dark energy makes up about 70 percent of the universe’s “stuff” — its matter and energy. Yet scientists still aren’t sure what dark energy is, and finding its source is one of the most vexing problems of cosmology.

Perhaps, the dark energy doubters suggest, the speeding up of the expansion has nothing to do with dark energy. Instead, the universe’s clumpiness may be mimicking the presence of such an ethereal phenomenon.
Most physicists, however, feel that a proper accounting of the clumps won’t have such a drastic impact. Robert Wald of the University of Chicago, an expert in general relativity, says that lumpiness is “never going to contribute anything that looks like dark energy.” So far, observations of the universe have been remarkably consistent with predictions based on simulations that rely on approximations.
As observations become more detailed, though, even slight inaccuracies in simulations could become troublesome. Already, astronomers are charting wide swaths of the sky in great detail, and planning more extensive surveys. To translate telescope images of starry skies into estimates of properties such as the amount of matter in the universe, scientists need accurate simulations of the cosmos’s history. If the detailed physics of clumps is important, then simulations could go slightly astray, sending estimates off-kilter. Some scientists already suggest that the lumpiness is behind a puzzling mismatch of two estimates of how fast the universe is expanding.

Researchers are attempting to clear up the debate by conquering the complexities of general relativity and simulating the cosmos in its full, lumpy glory. “That is really the new frontier,” says cosmologist Sabino Matarrese of the University of Padua in Italy, “something that until a few years ago was considered to be science fiction.” In the past, he says, scientists didn’t have the tools to complete such simulations. Now researchers are sorting out the implications of the first published results of the new simulations. So far, dark energy hasn’t been explained away, but some simulations suggest that certain especially sensitive measurements of how light is bent by matter in the universe might be off by as much as 10 percent.

Soon, simulations may finally answer the question: How much do lumps matter? The idea that cosmologists might have been missing a simple answer to a central problem of cosmology incessantly nags some skeptics. For them, results of the improved simulations can’t come soon enough. “It haunts me. I can’t let it go,” says cosmologist Rocky Kolb of the University of Chicago.

Smooth universe
By observing light from different eras in the history of the cosmos, cosmologists can compute the properties of the universe, such as its age and expansion rate. But to do this, researchers need a model, or framework, that describes the universe’s contents and how those ingredients evolve over time. Using this framework, cosmologists can perform computer simulations of the universe to make predictions that can be compared with actual observations.
After Einstein introduced his theory in 1915, physicists set about figuring out how to use it to explain the universe. It wasn’t easy, thanks to general relativity’s unwieldy, difficult-to-solve suite of equations. Meanwhile, observations made in the 1920s indicated that the universe wasn’t static as previously expected; it was expanding. Eventually, researchers converged on a solution to Einstein’s equations known as the Friedmann-Lemaître-Robertson-Walker metric. Named after its discoverers, the FLRW metric describes a simplified universe that is homogeneous and isotropic, meaning that it appears identical at every point in the universe and in every direction. In this idealized cosmos, matter would be evenly distributed, no clumps. Such a smooth universe would expand or contract over time.
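
In its standard textbook form, the FLRW metric compresses all of that simplification into a single scale factor a(t) that stretches space uniformly:

```latex
ds^2 = -c^2\,dt^2 + a(t)^2 \left[ \frac{dr^2}{1 - k r^2} + r^2 \left( d\theta^2 + \sin^2\theta\, d\varphi^2 \right) \right]
```

Here k encodes the overall spatial curvature (zero for a flat universe). Because the universe is assumed homogeneous and isotropic, no term depends on position or direction, and a(t) alone tracks the expansion or contraction.
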
A smooth-universe approximation is sensible, because when we look at the big picture, averaging over the structures of galaxy clusters and voids, the universe is remarkably uniform. It’s similar to the way that a single spoonful of minestrone soup might be mostly broth or mostly beans, but from bowl to bowl, the overall bean-to-broth ratios match.

In 1998, cosmologists revealed that not only was the universe expanding, but its expansion was also accelerating (SN: 2/2/08, p. 74). Observations of distant exploding stars, or supernovas, indicated that the space between us and them was expanding at an increasing clip. But gravity should slow the expansion of a universe evenly filled with matter. To account for the observed acceleration, scientists needed another ingredient, one that would speed up the expansion. So they added dark energy to their smooth-universe framework.

Now, many cosmologists follow a basic recipe to simulate the universe — treating the cosmos as if it has been run through an imaginary blender to smooth out its lumps, adding dark energy and calculating the expansion via general relativity. On top of the expanding slurry, scientists add clumps and track their growth using approximations, such as Newtonian gravity, which simplifies the calculations.
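
Applied to the FLRW metric, general relativity reduces to the Friedmann equation, which governs how the smoothed-out slurry expands; dark energy enters as the cosmological constant term Λ:

```latex
\left( \frac{\dot{a}}{a} \right)^2 = \frac{8\pi G}{3}\,\rho - \frac{k c^2}{a^2} + \frac{\Lambda c^2}{3}
```

As the matter density ρ thins out with expansion, the constant Λ term eventually dominates, and the expansion accelerates.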

In most situations, Newtonian gravity and general relativity are near-twins. Throw a ball while standing on the surface of the Earth, and it doesn’t matter whether you use general relativity or Newtonian mechanics to calculate where the ball will land — you’ll get the same answer. But there are subtle differences. In Newtonian gravity, matter directly attracts other matter. In general relativity, gravity is the result of matter and energy warping spacetime, creating curves that alter the motion of objects (SN: 10/17/15, p. 16). The two theories diverge in extreme gravitational environments. In general relativity, for example, hulking black holes produce inescapable pits that reel in light and matter (SN: 5/31/14, p. 16). The question, then, is whether the difference between the two theories has any impact in lumpy-universe simulations.
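
The contrast is easy to state in equations. Newtonian gravity is a single linear equation for a potential Φ sourced by mass density alone, while general relativity is ten coupled, nonlinear equations relating spacetime curvature to all forms of matter and energy:

```latex
\nabla^2 \Phi = 4\pi G \rho
\qquad \text{versus} \qquad
G_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}
```

In weak, slowly varying gravitational fields, Einstein's equations reduce to Newton's, which is why the thrown ball can't tell the difference.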

Most cosmologists are comfortable with the status quo simulations because observations of the heavens seem to fit neatly together like interlocking jigsaw puzzle pieces. Predictions based on the standard framework agree remarkably well with observations of the cosmic microwave background — ancient light released when the universe was just 380,000 years old (SN: 3/21/15, p. 7). And measurements of cosmological parameters — the fraction of dark energy and matter, for example — are generally consistent, whether they are made using the light from galaxies or the cosmic microwave background.

However, the reliance on Newton’s outdated theory irks some cosmologists, creating a lingering suspicion that the approximation is causing unrecognized problems. And some cosmological question marks remain. Physicists still puzzle over what makes up dark energy, along with another unexplained cosmic constituent, dark matter, an additional kind of mass that must exist to explain observations of how galaxies and galaxy clusters rotate. “Both dark energy and dark matter are a bit of an embarrassment to cosmologists, because they have no idea what they are,” says cosmologist Nick Kaiser of École Normale Supérieure in Paris.

Dethroning dark energy
Some cosmologists hope to explain the universe’s accelerating expansion by fully accounting for the universe’s lumpiness, with no need for the mysterious dark energy.

These researchers argue that clumps of matter can alter how the universe expands, when the clumps’ influence is tallied up over wide swaths of the cosmos. That’s because, in general relativity, the expansion of each local region of space depends on how much matter is within. Voids expand faster than average; dense regions expand more slowly. Because the universe is mostly made up of voids, this effect could produce an overall expansion and potentially an acceleration. Known as backreaction, this idea has lingered in obscure corners of physics departments for decades, despite many claims that backreaction’s effect is small or nonexistent.
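
The standard way to formalize this idea is Buchert's averaging scheme. For a matter-filled region D of the universe, the averaged acceleration picks up an extra "kinematical backreaction" term Q_D that measures how unevenly different parts of the region expand:

```latex
3\, \frac{\ddot{a}_D}{a_D} = -4\pi G \langle \rho \rangle_D + Q_D,
\qquad
Q_D = \frac{2}{3} \left\langle \left( \theta - \langle \theta \rangle_D \right)^2 \right\rangle_D - 2 \left\langle \sigma^2 \right\rangle_D
```

Here θ is the local expansion rate, σ is the shear, and the angle brackets denote averages over the region. If the variance term made Q_D exceed 4πG⟨ρ⟩_D, the averaged expansion would accelerate with no dark energy at all; whether it ever gets that large is precisely what is in dispute.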

Backreaction continues to appeal to some researchers because they don’t have to invent new laws of physics to explain the acceleration of the universe. “If there is an alternative which is based only upon traditional physics, why throw that away completely?” Matarrese asks.

Most cosmologists, however, think explaining away dark energy based on the universe’s lumps alone is unlikely. Previous calculations have indicated that any effect would be too small to account for dark energy and would produce an acceleration that changes over time in a way that disagrees with observations.

“My personal view is that it’s a much smaller effect,” says astrophysicist Hayley Macpherson of Monash University in Melbourne, Australia. “That’s just basically a gut feeling.” Theories that include dark energy explain the universe extremely well, she points out. How could that be if the whole approach is flawed?

New simulations by Macpherson and others that model how lumps evolve in general relativity may be able to gauge the importance of backreaction once and for all. “Up until now, it’s just been too hard,” says cosmologist Tom Giblin of Kenyon College in Gambier, Ohio.

To perform the simulations, researchers needed to get their hands on supercomputers capable of grinding through the equations of general relativity as the simulated universe evolves over time. Because general relativity is so complex, such simulations are much more challenging than those that use approximations, such as Newtonian gravity. But a seemingly distinct topic helped lay some of the groundwork: gravitational waves, or ripples in the fabric of spacetime.
The Advanced Laser Interferometer Gravitational-Wave Observatory, LIGO, searches for the tremors of cosmic dustups such as colliding black holes (SN: 10/28/17, p. 8). In preparation for this search, physicists honed their general relativity skills on simulations of the spacetime storm kicked up by black holes, predicting what LIGO might see and building up the computational machinery to solve the equations of general relativity. Now, cosmologists have adapted those techniques and unleashed them on entire, lumpy universes.

The first lumpy universe simulations to use full general relativity were unveiled in the June 2016 Physical Review Letters. Giblin and colleagues reported their results simultaneously with Eloisa Bentivegna of the University of Catania in Italy and Marco Bruni of the University of Portsmouth in England.

So far, the simulations have not been able to account for the universe’s acceleration. “Nearly everybody is convinced [the effect] is too small to explain away the need for dark energy,” says cosmologist Martin Kunz of the University of Geneva. Kunz and colleagues reached the same conclusion in their lumpy-universe simulations, which have one foot in general relativity and one in Newtonian gravity. They reported their first results in Nature Physics in March 2016.

Backreaction aficionados still aren’t dissuaded. “Before saying the effect is too small to be relevant, I would, frankly, wait a little bit more,” Matarrese says. And the new simulations have potential caveats. For example, some simulated universes behave like an old arcade game — if you walk to one edge of the universe, you cross back over to the other side, like Pac-Man exiting the right side of the screen and reappearing on the left. That geometry would suppress the effects of backreaction in the simulation, says Thomas Buchert of the University of Lyon in France. “This is a good beginning,” he says, but there is more work to do on the simulations. “We are in infancy.”

Different assumptions in a simulation can lead to disparate results, Bentivegna says. As a result, she doesn’t think that her lumpy, general-relativistic simulations have fully closed the door on efforts to dethrone dark energy. For example, tricks of light might be making it seem like the universe’s expansion is accelerating, when in fact it isn’t.

When astronomers observe far-away sources like supernovas, the light has to travel past all of the lumps of matter between the source and Earth. That journey could make it look like there’s an acceleration when none exists. “It’s an optical illusion,” Bentivegna says. She and colleagues see such an effect in a simulation reported in March in the Journal of Cosmology and Astroparticle Physics. But, she notes, this work simulated an unusual universe, in which matter sits on a grid — not a particularly realistic scenario.

For most other simulations, the effect of optical illusions remains small. That leaves many cosmologists, including Giblin, even more skeptical of the possibility of explaining away dark energy: “I feel a little like a downer,” he admits.

Surveying the skies
Subtle effects of lumps could still be important. In Hans Christian Andersen’s “The Princess and the Pea,” the princess felt a tiny pea beneath an impossibly tall stack of mattresses. Likewise, cosmologists’ surveys are now so sensitive that even if the universe’s lumps have a small impact, estimates could be thrown out of whack.

The Dark Energy Survey, for example, has charted 26 million galaxies using the Victor M. Blanco Telescope in Chile, measuring how the light from those galaxies is distorted by the intervening matter on the journey to Earth. In a set of papers posted online August 4 at arXiv.org, scientists with the Dark Energy Survey reported new measurements of the universe’s properties, including the amount of matter (both dark and normal) and how clumpy that matter is (SN: 9/2/17, p. 32). The results are consistent with those from the cosmic microwave background — light emitted billions of years earlier.

To make the comparison, cosmologists took the measurements from the cosmic microwave background, early in the universe, and used simulations to extrapolate to what galaxies should look like later in the universe’s history. It’s like taking a baby’s photograph, precisely computing the number and size of wrinkles that should emerge as the child ages and finding that your picture agrees with a snapshot taken decades later. The matching results so far confirm cosmologists’ standard picture of the universe — dark energy and all.

“So far, it has not yet been important for the measurements that we’ve made to actually include general relativity in those simulations,” says Risa Wechsler, a cosmologist at Stanford University and a founding member of the Dark Energy Survey. But, she says, for future measurements, “these effects could become more important.” Cosmologists are edging closer to Princess and the Pea territory.

Those future surveys include the Dark Energy Spectroscopic Instrument, DESI, set to kick off in 2019 at Kitt Peak National Observatory near Tucson; the European Space Agency’s Euclid satellite, launching in 2021; and the Large Synoptic Survey Telescope in Chile, which is set to begin collecting data in 2023.

If cosmologists keep relying on simulations that don’t use general relativity to account for lumps, certain kinds of measurements of weak lensing — the bending of light due to matter acting like a lens — could be off by up to 10 percent, Giblin and colleagues reported at arXiv.org in July. “There is something that we’ve been ignoring by making approximations,” he says.

That 10 percent could screw up all kinds of estimates, from how dark energy changes over the universe’s history to how fast the universe is currently expanding, to the calculations of the masses of ethereal particles known as neutrinos. “You have to be extremely certain that you don’t get some subtle effect that gets you the wrong answers,” Geneva’s Kunz says, “otherwise the particle physicists are going to be very angry with the cosmologists.”

Some estimates may already be showing problem signs, such as the conflicting estimates of the cosmic expansion rate (SN: 8/6/16, p. 10). Using the cosmic microwave background, cosmologists find a slower expansion rate than they do from measurements of supernovas. If this discrepancy is real, it could indicate that dark energy changes over time. But before jumping to that conclusion, there are other possible causes to rule out, including the universe’s lumps.

Until the issue of lumps is smoothed out, scientists won’t know how much lumpiness matters to the cosmos at large. “I think it’s rather likely that it will turn out to be an important effect,” Kolb says. Whether it explains away dark energy is less certain. “I want to know the answer so I can get on with my life.”

Collision illuminates the mysterious makeup of neutron stars

On astrophysicists’ charts of star stuff, there’s a substance that still merits the label “here be dragons.” That poorly understood material is found inside neutron stars — the collapsed remnants of once-mighty stars — and is now being mapped out, as scientists better characterize the weird matter.

The detection of two colliding neutron stars, announced in October (SN: 11/11/17, p. 6), has accelerated the pace of discovery. Since the event, which scientists spied with gravitational waves and various wavelengths of light, several studies have placed new limits on the sizes and masses possible for such stellar husks and on how squishy or stiff they are.
“The properties of neutron star matter are not very well known,” says physicist Andreas Bauswein of the Heidelberg Institute for Theoretical Studies in Germany. Part of the problem is that the matter inside a neutron star is so dense that a teaspoonful would weigh a billion tons, so the substance can’t be reproduced in any laboratory on Earth.

In the collision, the two neutron stars merged into a single behemoth. This remnant may have immediately collapsed into a black hole. Or it may have formed a bigger, spinning neutron star that, propped up by its own rapid rotation, existed for a few milliseconds — or potentially much longer — before collapsing. The speed of the object’s demise is helping scientists figure out whether neutron stars are made of material that is relatively soft, compressing when squeezed like a pillow, or whether the neutron star stuff is stiff, standing up to pressure. This property, known as the equation of state, determines the radius of a neutron star of a particular mass.
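
The relationship between the equation of state and a star's size comes from the Tolman-Oppenheimer-Volkoff equation, general relativity's version of hydrostatic equilibrium, which balances pressure against gravity at each radius r inside the star:

```latex
\frac{dP}{dr} = -\frac{G \left( \rho + P/c^2 \right) \left( m(r) + 4\pi r^3 P/c^2 \right)}{r^2 \left( 1 - 2 G m(r) / (r c^2) \right)}
```

Choosing a candidate equation of state (a relation between pressure P and density ρ) and integrating outward from the center yields a definite radius for each mass. Stiff matter, whose pressure climbs steeply with density, produces larger stars that hold out longer against collapse.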

An immediate collapse seems unlikely, two teams of researchers say. Telescopes spotted a bright glow of light after the collision. That glow could only appear if there were a delay before the merged neutron star collapsed into a black hole, says physicist David Radice of Princeton University, because when the remnant collapses, “all the material around falls inside of the black hole immediately.” Instead, the neutron star stuck around for at least several milliseconds, the scientists propose.

Simulations indicate that if neutron stars are soft, they will collapse more quickly because they will be smaller than stiff neutron stars of the same mass. So the inferred delay allows Radice and colleagues to rule out theories that predict neutron stars are extremely squishy, the researchers report in a paper published November 13 at arXiv.org.
Using similar logic, Bauswein and colleagues rule out some of the smallest sizes that neutron stars of a particular mass might be. For example, a neutron star 60 percent more massive than the sun can’t have a radius smaller than 10.7 kilometers, they determine. These results appear in a paper published November 29 in the Astrophysical Journal Letters.

Other researchers set a limit on the maximum mass a neutron star can have. Above a certain heft, neutron stars can no longer support their own weight and collapse into a black hole. If this maximum possible mass were particularly large, theories predict that the newly formed behemoth neutron star would have lasted hours or days before collapsing. But, in a third study, two physicists determined that the collapse came much more quickly than that, on the scale of milliseconds rather than hours. A long-lasting, spinning neutron star would dissipate its rotational energy into the material ejected from the collision, making the stream of glowing matter more energetic than what was seen, physicists Ben Margalit and Brian Metzger of Columbia University report. In a paper published November 21 in the Astrophysical Journal Letters, the pair concludes that the maximum possible mass is smaller than about 2.2 times that of the sun.

“We didn’t have many constraints on that prior to this discovery,” Metzger says. The result also rules out some of the stiffer equations of state because stiffer matter tends to support larger masses without collapsing.

Some theories predict that bizarre forms of matter are created deep inside neutron stars. Neutron stars might contain a sea of free-floating quarks — particles that are normally confined within larger particles like protons or neutrons. Other physicists suggest that neutron stars may contain hyperons, particles made with heavier quarks known as strange quarks, not found in normal matter. Such unusual matter would tend to make neutron stars softer, so pinning down the equation of state with additional neutron star crashes could eventually resolve whether these exotic beasts of physics indeed lurk in this unexplored territory.

In a first, Galileo’s gravity experiment is re-created in space

Galileo’s most famous experiment has taken a trip to outer space. The result? Einstein was right yet again. The experiment confirms a tenet of Einstein’s theory of gravity with greater precision than ever before.

According to science lore, Galileo dropped two balls from the Leaning Tower of Pisa to show that they fell at the same rate no matter their composition. Although it seems unlikely that Galileo actually carried out this experiment, scientists have performed a similar, but much more sensitive experiment in a satellite orbiting Earth. Two hollow cylinders within the satellite fell at the same rate over 120 orbits, or about eight days’ worth of free-fall time, researchers with the MICROSCOPE experiment report December 4 in Physical Review Letters. The cylinders’ accelerations match within two-trillionths of a percent.

The result confirms a foundation of Einstein’s general theory of relativity known as the equivalence principle. That principle states that an object’s inertial mass, which sets the amount of force needed to accelerate it, is equal to its gravitational mass, which determines how the object responds to a gravitational field. As a result, items fall at the same rate — at least in a vacuum, where air resistance is eliminated — even if they have different masses or are made of different materials.
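
Equivalence principle tests are conventionally quoted in terms of the Eötvös parameter, the fractional difference between the measured accelerations of the two test bodies:

```latex
\eta = 2\, \frac{\lvert a_1 - a_2 \rvert}{a_1 + a_2}
```

The equivalence principle demands η = 0 exactly; MICROSCOPE's match to within two-trillionths of a percent corresponds to η of roughly 2 × 10⁻¹⁴.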

The result is “fantastic,” says physicist Stephan Schlamminger of OTH Regensburg in Germany, who was not involved with the research. “It’s just great to have a more precise measurement of the equivalence principle because it’s one of the most fundamental tenets of gravity.”
In the satellite, which is still collecting additional data, a hollow cylinder, made of platinum alloy, is centered inside a hollow, titanium-alloy cylinder. According to standard physics, gravity should cause the cylinders to fall at the same rate, despite their different masses and materials. A violation of the equivalence principle, however, might make one fall slightly faster than the other.

As the two objects fall in their orbit around Earth, the satellite uses electrical forces to keep the pair aligned. If the equivalence principle didn’t hold, adjustments needed to keep the cylinders in line would vary with a regular frequency, tied to the rate at which the satellite orbits and rotates. “If we see any difference in the acceleration it would be a signature of violation” of the equivalence principle, says MICROSCOPE researcher Manuel Rodrigues of the French aerospace lab ONERA in Palaiseau. But no hint of such a signal was found.

With about 10 times the precision of previous tests, the result is “very impressive,” says physicist Jens Gundlach of the University of Washington in Seattle. But, he notes, “the results are still not as precise as what I think they can get out of a satellite measurement.”

Performing the experiment in space eliminates certain pitfalls of modern-day land-based equivalence principle tests, such as groundwater flow altering the mass of surrounding terrain. But temperature changes in the satellite limited how well the scientists could confirm the equivalence principle, as these variations can cause parts of the apparatus to expand or contract.

MICROSCOPE’s ultimate goal is to beat other measurements by a factor of 100, comparing the cylinders’ accelerations to see whether they match within a tenth of a trillionth of a percent. With additional data yet to be analyzed, the scientists may still reach that mark.

Confirmation of the equivalence principle doesn’t mean that all is hunky-dory in gravitational physics. Scientists still don’t know how to combine general relativity with quantum mechanics, the physics of the very small. “The two theories seem to be very different, and people would like to merge these two theories,” Rodrigues says. But some attempts to do that predict violations of the equivalence principle on a level that’s not yet detectable. That’s why scientists think the equivalence principle is worth testing to ever more precision — even if it means shipping their experiments off to space.

Elongated heads were a mark of elite status in an ancient Peruvian society

Bigwigs in a more than 600-year-old South American population were easy to spot. Their artificially elongated, teardrop-shaped heads screamed prestige, a new study finds.

During the 300 years before the Incas’ arrival in 1450, intentional head shaping among prominent members of the Collagua ethnic community in Peru increasingly centered on a stretched-out look, says bioarchaeologist Matthew Velasco of Cornell University. Having long, narrow noggins cemented bonds among members of a power elite — a unity that may have helped pave the way for a relatively peaceful incorporation into the Incan Empire, Velasco proposes in the February Current Anthropology.
“Increasingly uniform head shapes may have encouraged a collective identity and political unity among Collagua elites,” Velasco says. These Collagua leaders may have negotiated ways to coexist with the encroaching Inca rather than fight them, he speculates. But the fate of the Collaguas and a neighboring population, the Cavanas, remains hazy. Those populations lived during a conflict-ridden time — after the collapse of two major Andean societies around 1100 (SN: 8/1/09, p. 16) and before the expansion of the Inca Empire starting in the 15th century.

For at least the past several thousand years, human groups in various parts of the world have intentionally modified skull shapes by wrapping infants’ heads with cloth or binding the head between two pieces of wood (SN: 4/29/17, p. 18). Researchers generally assume that this practice signified membership in ethnic or kin groups, or perhaps social rank.
The Collagua people lived in Colca Valley in southeastern Peru and raised alpacas for wool. By tracking Collagua skull shapes over 300 years, Velasco found that elongated skulls became increasingly linked to high social status. By the 1300s, for instance, Collagua women with deliberately distended heads suffered much less skull damage from physical attacks than other females did, he reports. Chemical analyses of bones indicate that long-headed women ate a particularly wide variety of foods.
Until now, knowledge of head-shaping practices in ancient Peru primarily came from Spanish accounts written in the 1500s. Those documents referred to tall, thin heads among Collaguas and wide, long heads among Cavanas, implying that a single shape had always characterized each group.

“Velasco has discovered that the practice of cranial modification was much more dynamic over time and across social [groups],” says bioarchaeologist Deborah Blom of the University of Vermont in Burlington.

Velasco examined 211 skulls of mummified humans interred in either of two Collagua cemeteries. Burial structures built against a cliff face were probably reserved for high-ranking individuals, whereas common burial grounds in several caves and under nearby rocky overhangs belonged to regular folk.
Radiocarbon analyses of 13 bone and sediment samples allowed Velasco to sort Collagua skulls into early and late pre-Inca groups. A total of 97 skulls, including all 76 found in common burial grounds, belonged to the early group, which dated to between 1150 and 1300. Among these skulls, 38 — or about 39 percent — had been intentionally modified. Head shapes included sharply and slightly elongated forms as well as skulls compressed into wide, squat configurations.

Of the 14 skulls with extreme elongation, 13 came from low-ranking individuals, a pattern that might suggest regular folk first adopted elongated head shapes. But with only 21 skulls from elites, the finding may underestimate the early frequency of elongated heads among the high-status crowd. Various local groups may have adopted their own styles of head modification at that time, Velasco suggests.

In contrast, among 114 skulls from elite burial sites in the late pre-Inca period, dating to between 1300 and 1450, 84 — or about 74 percent — displayed altered shapes. A large majority of those modified skulls — about 64 percent — were sharply elongated. Shortly before the Incas’ arrival, prominent Collaguas embraced an elongated style as their preferred head shape, Velasco says. No skeletal evidence has been found to determine whether low-ranking individuals also adopted elongated skulls as a signature look in the late pre-Inca period.

Are computers better than people at predicting who will commit another crime?

In courtrooms around the United States, computer programs give testimony that helps decide who gets locked up and who walks free.

These algorithms are criminal recidivism predictors, which use personal information about defendants — like family and employment history — to assess a defendant’s likelihood of committing future crimes. Judges factor those risk ratings into verdicts on everything from bail to sentencing to parole.

Computers get a say in these life-changing decisions because their crime forecasts are supposedly less biased and more accurate than human guesswork.
But investigations into algorithms’ treatment of different demographics have revealed how machines perpetuate human prejudices. Now there’s reason to doubt whether crime-prediction algorithms can even boast superhuman accuracy.

Computer scientist Julia Dressel recently analyzed the prognostic powers of a widely used recidivism predictor called COMPAS. This software determines whether a defendant will commit a crime within the next two years based on six defendant features — although what features COMPAS uses and how it weighs various data points is a trade secret.

Dressel, who conducted the study while at Dartmouth College, recruited 400 online volunteers, who were presumed to have little or no criminal justice expertise. The researchers split their volunteers into groups of 20, and had each group read descriptions of 50 defendants. Using such information as sex, age and criminal history, the volunteers predicted which defendants would reoffend.
A comparison of the volunteers’ answers with COMPAS’ predictions for the same 1,000 defendants found that both were about 65 percent accurate. “We were like, ‘Holy crap, that’s amazing,’” says study coauthor Hany Farid, a computer scientist at Dartmouth. “You have this commercial software that’s been used for years in courts around the country — how is it that we just asked a bunch of people online and [the results] are the same?”
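
The bottom-line comparison is easy to reproduce in outline: score each predictor's calls against the two-year outcomes and compare accuracies. A minimal Python sketch, using made-up data and a stand-in 65 percent hit rate for both predictors (the shape of the comparison, not the study's actual data):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000
reoffended = rng.random(n) < 0.45            # hypothetical two-year outcomes

def noisy_predictions(truth, accuracy):
    """Predictions that independently match the truth at the given rate."""
    correct = rng.random(truth.size) < accuracy
    return np.where(correct, truth, ~truth)

crowd = noisy_predictions(reoffended, 0.65)   # pooled volunteer votes
compas = noisy_predictions(reoffended, 0.65)  # stand-in for COMPAS risk calls

for name, pred in [("crowd", crowd), ("COMPAS", compas)]:
    print(f"{name}: {(pred == reoffended).mean():.2f}")   # both near 0.65
```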

There’s nothing inherently wrong with an algorithm that only performs as well as its human counterparts. But this finding, reported online January 17 in Science Advances, should be a wake-up call to law enforcement personnel who might have “a disproportionate confidence in these algorithms,” Farid says.

“Imagine you’re a judge, and I tell you I have this highly secretive, highly proprietary, expensive software built on big data, and it says the person standing in front of you is high risk” for reoffending, he says. “The judge would be like, ‘Yeah, that sounds quite serious.’ But now imagine if I tell you, ‘Twenty people online said this person is high risk.’ I imagine you’d weigh that information a little bit differently.” Maybe these predictions deserve the same amount of consideration.

Judges could get some better perspective on recidivism predictors’ performance if the Department of Justice or the National Institute of Standards and Technology established a vetting process for new software, Farid says. Researchers could test computer programs against a large, diverse dataset of defendants and OK algorithms for courtroom use only if they get a passing grade for prediction.

Farid has his doubts that computers can show much improvement. He and Dressel built several simple and complex algorithms that used two to seven defendant features to predict recidivism. Like COMPAS, all their algorithms maxed out at about D-level accuracy. That makes Farid wonder whether trying to predict crime with anything approaching A+ accuracy is an exercise in futility.
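
A “simple algorithm” in this sense can be as bare as a linear classifier on two features, such as age and number of prior convictions. Below is a hypothetical sketch of that kind of model; the data and coefficients are synthetic, not the researchers' own:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
n = 5000
age = rng.integers(18, 70, n)                 # hypothetical defendant ages
priors = rng.poisson(2.0, n)                  # hypothetical prior convictions

# Synthetic ground truth: younger defendants with more priors reoffend more.
logit = -0.05 * (age - 40) + 0.4 * priors - 0.8
reoffended = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([age, priors])
model = LogisticRegression().fit(X, reoffended)
# Even given the true generating features, accuracy tops out well short of 100%.
print(f"training accuracy: {model.score(X, reoffended):.2f}")
```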

“Maybe there will be huge breakthroughs in data analytics and machine learning over the next decade that [help us] do this with a high accuracy,” he says. But until then, humans may make better crime predictors than machines. After all, if a bunch of average Joe online recruits gave COMPAS a run for its money, criminal justice experts — like social workers, parole officers, judges or detectives — might just outperform the algorithm.

Even if computer programs aren’t used to predict recidivism, that doesn’t mean they can’t aid law enforcement, says Chelsea Barabas, a media researcher at MIT. Instead of creating algorithms that use historic crime data to predict who will reoffend, programmers could build algorithms that examine crime data to find trends that inform criminal justice research, Barabas and colleagues argue in a paper to be presented at the Conference on Fairness, Accountability and Transparency in New York City on February 23.

For instance, if a computer program studies crime statistics and discovers that certain features — like a person’s age or socioeconomic status — are highly related to repeated criminal activity, that could inspire new studies to see whether certain interventions, like therapy, help those at-risk groups. In this way, computer programs would do one better than just predict future crime. They could help prevent it.

Watch an experimental space shield shred a speeding bullet

Engineers are taking a counterintuitive approach to protecting future spacecraft: shooting at their experiments. High-speed video captured a 2.8-millimeter aluminum bullet plowing through a test material for a space shield at 7 kilometers per second. The work is an effort to find structures that could stand up to the impact of space debris.

Earth is surrounded by a cloud of debris, both natural — such as micrometeorites and comet dust, which create meteor showers — and unnatural, including dead satellites and the cast-off detritus of space launches. Those pieces of flotsam can damage other spacecraft if they collide at high speeds, and bits smaller than about a centimeter are hard to track and avoid, says ESA materials engineer Benoit Bonvoisin in a statement.
To defend future spacecraft from taking a hit, Bonvoisin and colleagues are developing armor made from fiber metal laminates, or several thin metal layers bonded together. The laminates are arranged in multiple layers separated by 10 to 30 centimeters, a configuration called a Whipple shield.

In this experiment at the Fraunhofer Institute for High-Speed Dynamics in Germany, the first layer shatters the aluminum bullet into a cloud of smaller pieces, which the second layer is able to deflect. This configuration has been used for decades, but the materials are new. The next step is to test the shield in orbit with a small CubeSat, Bonvoisin says.

These petunias launch seeds that spin 1,660 times a second

Nature may have a few things to teach tennis players about backspin.

The hairyflower wild petunia (Ruellia ciliatiflora) shoots seeds that spin up to 1,660 times per second, which helps them fly farther, researchers report March 7 in the Journal of the Royal Society Interface. These seeds have the fastest known rotations of any plant or animal, the authors say. Plants that disperse seeds a greater distance are likely to be more successful in reproducing and spreading.
Glue that holds the flower’s podlike fruit together breaks down on contact with water, allowing the fruit to split explosively, launching millimeter-sized seeds. Little hooks inside the pod help fling these flattened discs at speeds of around 10 meters per second.

Using high-speed cameras that record 20,000 frames per second, the researchers analyzed the seeds’ flight. “Our first thought was: ‘Why doesn’t this throw like a Frisbee?’” says Dwight Whitaker, an applied physicist at Pomona College, in Claremont, Calif. Instead of spinning horizontally, most seeds spin counterclockwise vertically, like a bicycle wheel in reverse.

Whitaker and his colleagues calculated that backspin should help stabilize the seeds as they travel through the air, reducing drag. Experiments backed this up: Stable “spinners” had less drag on average than “floppers,” seeds that tumbled as they fell. Simulations predict that lower drag lets spinners travel 6.7 meters on average — more than twice as far as floppers.
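
The qualitative claim, that lower drag means a longer flight, is easy to reproduce with a toy ballistic model. This sketch integrates a 10-meter-per-second launch under quadratic air drag using two illustrative drag constants; both values are stand-ins, not measured seed coefficients:

```python
import numpy as np

def flight_range(v0=10.0, angle_deg=30.0, drag=0.2, dt=1e-4):
    """Horizontal distance for a point projectile with quadratic drag.

    drag is the constant c in a_drag = -c * |v| * v (units 1/m); the
    values used below are illustrative only.
    """
    g = 9.81
    vx = v0 * np.cos(np.radians(angle_deg))
    vy = v0 * np.sin(np.radians(angle_deg))
    x, y = 0.0, 1.0                       # launch from ~1 m above the ground
    while y > 0.0:
        speed = np.hypot(vx, vy)
        vx -= drag * speed * vx * dt      # drag decelerates horizontal motion
        vy -= (g + drag * speed * vy) * dt
        x += vx * dt
        y += vy * dt
    return x

print(f"spinner (low drag):  {flight_range(drag=0.2):.1f} m")
print(f"flopper (high drag): {flight_range(drag=1.0):.1f} m")
```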

STEVE the aurora makes its debut in mauve

Meet STEVE, a newfound type of aurora that drapes the sky with a mauve ribbon and bedazzling green bling.

This feature of the northern lights, recently photographed and named by citizen scientists in Canada, now has a scientific explanation. The streak of color, which appears to the south of the main aurora, may be a visible version of a typically invisible process involving drifting charged particles, or ions, physicist Elizabeth MacDonald and colleagues report March 14 in Science Advances.
Measurements from ground-based cameras and a satellite that passed when STEVE was in full swing show that the luminous band was associated with a strong flow of ions in the upper atmosphere, MacDonald, of NASA’s Goddard Space Flight Center in Greenbelt, Md., and colleagues conclude. But the researchers can’t yet say how a glow arises from this flow.

Part of a project called Aurorasaurus (SN Online: 4/3/15), the citizen scientists initially gave the phenomenon its moniker before its association with ion drift was known. MacDonald and colleagues kept the name, but gave it a backronym: “Strong Thermal Emission Velocity Enhancement.”

We’ll just stick with STEVE.

Live heart cells make this material shift color like a chameleon

To craft a new color-switching material, scientists have again taken inspiration from one of nature’s masters of disguise: the chameleon.

Thin films made of heart cells and hydrogel change hues when the films shrink or stretch, much like chameleon skin. This material, described online March 28 in Science Robotics, could be used to test new medications or possibly to build camouflaging robots.

The material is made of a paper-thin hydrogel sheet engraved with nanocrystal patterns, topped with a layer of living heart muscle cells from rats. These cells contract and expand — just as they would inside an actual rat heart to make it beat — causing the underlying hydrogel to shrink and stretch too. That movement changes the way light bounces off the etched crystal, making the material reflect more blue light when it contracts and more red light when it’s relaxed.
This design is modeled after nanocrystals embedded in chameleon skin, which also reflect different colors of light when stretched (SN Online: 3/13/15).
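
Structural color of this sort generally follows a Bragg-type reflection rule: the wavelength that reflects most strongly scales with the spacing of the repeating pattern. As a generic relation (not the paper's specific optics), at normal incidence:

```latex
\lambda_{\text{peak}} \approx 2\, n_{\text{eff}}\, d
```

where d is the spacing of the engraved nanopattern and n_eff is the material's effective refractive index. Contraction shrinks d and shifts the reflection peak toward blue; relaxation restores a larger d and a redder reflection.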

When researchers treated the material with a drug normally used to boost heart rate, the films changed color more quickly — indicating the heart cells were pulsating more rapidly. That finding suggests the material could help drug developers monitor how heart cells react to new medications, says study coauthor Luoran Shang, a physicist at Southeast University in Nanjing, China. Or these kinds of films could also be used to make color-changing skins for soft robots, Shang says.