On astrophysicists’ charts of star stuff, there’s a substance that still merits the label “here be dragons.” That poorly understood material is found inside neutron stars — the collapsed remnants of once-mighty stars — and is now being mapped out, as scientists better characterize the weird matter.
The detection of two colliding neutron stars, announced in October (SN: 11/11/17, p. 6), has accelerated the pace of discovery. Since the event, which scientists spied with gravitational waves and various wavelengths of light, several studies have placed new limits on the sizes and masses possible for such stellar husks and on how squishy or stiff they are. “The properties of neutron star matter are not very well known,” says physicist Andreas Bauswein of the Heidelberg Institute for Theoretical Studies in Germany. Part of the problem is that the matter inside a neutron star is so dense that a teaspoonful would weigh a billion tons, so the substance can’t be reproduced in any laboratory on Earth.
In the collision, the two neutron stars merged into a single behemoth. This remnant may have immediately collapsed into a black hole. Or it may have formed a bigger, spinning neutron star that, propped up by its own rapid rotation, existed for a few milliseconds — or potentially much longer — before collapsing. The speed of the object’s demise is helping scientists figure out whether neutron stars are made of material that is relatively soft, compressing when squeezed like a pillow, or whether the neutron star stuff is stiff, standing up to pressure. This property, known as the equation of state, determines the radius of a neutron star of a particular mass.
An immediate collapse seems unlikely, two teams of researchers say. Telescopes spotted a bright glow of light after the collision. That glow could only appear if there were a delay before the merged neutron star collapsed into a black hole, says physicist David Radice of Princeton University, because when the remnant collapses, “all the material around falls inside of the black hole immediately.” Instead, the neutron star stuck around for at least several milliseconds, the scientists propose.
Simulations indicate that if neutron stars are soft, they will collapse more quickly because they will be smaller than stiff neutron stars of the same mass. So the inferred delay allows Radice and colleagues to rule out theories that predict neutron stars are extremely squishy, the researchers report in a paper published November 13 at arXiv.org. Using similar logic, Bauswein and colleagues rule out some of the smallest sizes that neutron stars of a particular mass might be. For example, a neutron star 60 percent more massive than the sun can’t have a radius smaller than 10.7 kilometers, they determine. These results appear in a paper published November 29 in the Astrophysical Journal Letters.
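To get a feel for what that limit means, here is a quick back-of-the-envelope calculation (illustrative only, not taken from either paper): the compactness of a star 60 percent more massive than the sun at the 10.7-kilometer radius floor, compared with the radius at which an object of that mass would be a black hole.

```python
# Illustrative compactness check for the limit quoted above (not from the papers).
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_sun = 1.989e30     # solar mass, kg

M = 1.6 * M_sun      # a neutron star 60 percent more massive than the sun
R = 10.7e3           # minimum radius from the study, in meters

# Dimensionless compactness GM/(R c^2); a black hole corresponds to 0.5,
# since its radius is R = 2GM/c^2.
compactness = G * M / (R * c**2)
schwarzschild_radius_km = 2 * G * M / c**2 / 1e3

print(f"compactness GM/(R c^2) ~ {compactness:.2f}")                          # ~0.22
print(f"black hole radius for this mass ~ {schwarzschild_radius_km:.1f} km")  # ~4.7 km
```

Even at the smallest allowed radius, such a star would still be more than twice the size of a black hole of the same mass.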
Other researchers set a limit on the maximum mass a neutron star can have. Above a certain heft, neutron stars can no longer support their own weight and collapse into a black hole. If this maximum possible mass were particularly large, theories predict that the newly formed behemoth neutron star would have lasted hours or days before collapsing. But, in a third study, two physicists determined that the collapse came much more quickly than that, on the scale of milliseconds rather than hours. A long-lasting, spinning neutron star would dissipate its rotational energy into the material ejected from the collision, making the stream of glowing matter more energetic than what was seen, physicists Ben Margalit and Brian Metzger of Columbia University report. In a paper published November 21 in the Astrophysical Journal Letters, the pair concludes that the maximum possible mass is smaller than about 2.2 times that of the sun.
“We didn’t have many constraints on that prior to this discovery,” Metzger says. The result also rules out some of the stiffer equations of state because stiffer matter tends to support larger masses without collapsing.
Some theories predict that bizarre forms of matter are created deep inside neutron stars. Neutron stars might contain a sea of free-floating quarks — particles that are normally confined within larger particles like protons or neutrons. Other physicists suggest that neutron stars may contain hyperons, particles made with heavier quarks known as strange quarks, not found in normal matter. Such unusual matter would tend to make neutron stars softer, so pinning down the equation of state with additional neutron star crashes could eventually resolve whether these exotic beasts of physics indeed lurk in this unexplored territory.
Galileo’s most famous experiment has taken a trip to outer space. The result? Einstein was right yet again. The experiment confirms a tenet of Einstein’s theory of gravity with greater precision than ever before.
According to science lore, Galileo dropped two balls from the Leaning Tower of Pisa to show that they fell at the same rate no matter their composition. Although it seems unlikely that Galileo actually carried out this experiment, scientists have performed a similar, but much more sensitive, experiment in a satellite orbiting Earth. Two hollow cylinders within the satellite fell at the same rate over 120 orbits, or about eight days’ worth of free-fall time, researchers with the MICROSCOPE experiment report December 4 in Physical Review Letters. The cylinders’ accelerations match within two-trillionths of a percent.
The result confirms a foundation of Einstein’s general theory of relativity known as the equivalence principle. That principle states that an object’s inertial mass, which sets the amount of force needed to accelerate it, is equal to its gravitational mass, which determines how the object responds to a gravitational field. As a result, items fall at the same rate — at least in a vacuum, where air resistance is eliminated — even if they have different masses or are made of different materials.
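One common way to state such a comparison quantitatively is the Eötvös ratio, the fractional difference between the two accelerations. The sketch below is illustrative only and is not the MICROSCOPE analysis; the acceleration values are made up, with the difference set to the roughly two-trillionths-of-a-percent level reported.

```python
# Illustrative sketch (not the MICROSCOPE pipeline): the Eotvos ratio
# quantifies how closely two test bodies' free-fall accelerations agree.

def eotvos_ratio(a_1, a_2):
    """Fractional difference between the accelerations of two falling bodies."""
    return 2 * abs(a_1 - a_2) / (a_1 + a_2)

# Hypothetical numbers for illustration: identical accelerations except for
# a difference at about two-trillionths of a percent (2e-14).
a_platinum = 7.9                       # m/s^2, rough acceleration scale in low orbit
a_titanium = a_platinum * (1 + 2e-14)

print(f"Eotvos ratio ~ {eotvos_ratio(a_platinum, a_titanium):.0e}")   # ~2e-14
```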
The result is “fantastic,” says physicist Stephan Schlamminger of OTH Regensburg in Germany, who was not involved with the research. “It’s just great to have a more precise measurement of the equivalence principle because it’s one of the most fundamental tenets of gravity.” In the satellite, which is still collecting additional data, a hollow cylinder, made of platinum alloy, is centered inside a hollow, titanium-alloy cylinder. According to standard physics, gravity should cause the cylinders to fall at the same rate, despite their different masses and materials. A violation of the equivalence principle, however, might make one fall slightly faster than the other.
As the two objects fall in their orbit around Earth, the satellite uses electrical forces to keep the pair aligned. If the equivalence principle didn’t hold, adjustments needed to keep the cylinders in line would vary with a regular frequency, tied to the rate at which the satellite orbits and rotates. “If we see any difference in the acceleration it would be a signature of violation” of the equivalence principle, says MICROSCOPE researcher Manuel Rodrigues of the French aerospace lab ONERA in Palaiseau. But no hint of such a signal was found.
With about 10 times the precision of previous tests, the result is “very impressive,” says physicist Jens Gundlach of the University of Washington in Seattle. But, he notes, “the results are still not as precise as what I think they can get out of a satellite measurement.”
Performing the experiment in space eliminates certain pitfalls of modern-day land-based equivalence principle tests, such as groundwater flow altering the mass of surrounding terrain. But temperature changes in the satellite limited how well the scientists could confirm the equivalence principle, as these variations can cause parts of the apparatus to expand or contract.
MICROSCOPE’s ultimate goal is to beat other measurements by a factor of 100, comparing the cylinders’ accelerations to see whether they match within a tenth of a trillionth of a percent. With additional data yet to be analyzed, the scientists may still reach that mark.
Confirmation of the equivalence principle doesn’t mean that all is hunky-dory in gravitational physics. Scientists still don’t know how to combine general relativity with quantum mechanics, the physics of the very small. “The two theories seems to be very different, and people would like to merge these two theories,” Rodrigues says. But some attempts to do that predict violations of the equivalence principle on a level that’s not yet detectable. That’s why scientists think the equivalence principle is worth testing to ever more precision — even if it means shipping their experiments off to space.
Bigwigs in a more than 600-year-old South American population were easy to spot. Their artificially elongated, teardrop-shaped heads screamed prestige, a new study finds.
During the 300 years before the Incas’ arrival in 1450, intentional head shaping among prominent members of the Collagua ethnic community in Peru increasingly centered on a stretched-out look, says bioarchaeologist Matthew Velasco of Cornell University. Having long, narrow noggins cemented bonds among members of a power elite — a unity that may have helped pave the way for a relatively peaceful incorporation into the Inca Empire, Velasco proposes in the February Current Anthropology. “Increasingly uniform head shapes may have encouraged a collective identity and political unity among Collagua elites,” Velasco says. These Collagua leaders may have negotiated ways to coexist with the encroaching Inca rather than fight them, he speculates. But the fate of the Collaguas and a neighboring population, the Cavanas, remains hazy. Those populations lived during a conflict-ridden time — after the collapse of two major Andean societies around 1100 (SN: 8/1/09, p. 16) and before the expansion of the Inca Empire starting in the 15th century.
For at least the past several thousand years, human groups in various parts of the world have intentionally modified skull shapes by wrapping infants’ heads with cloth or binding the head between two pieces of wood (SN: 4/29/17, p. 18). Researchers generally assume that this practice signified membership in ethnic or kin groups, or perhaps social rank. The Collagua people lived in Colca Valley in southeastern Peru and raised alpaca for wool. By tracking Collagua skull shapes over 300 years, Velasco found that elongated skulls became increasingly linked to high social status. By the 1300s, for instance, Collagua women with deliberately distended heads suffered much less skull damage from physical attacks than other females did, he reports. Chemical analyses of bones indicate that long-headed women ate a particularly wide variety of foods. Until now, knowledge of head-shaping practices in ancient Peru primarily came from Spanish accounts written in the 1500s. Those documents referred to tall, thin heads among Collaguas and wide, long heads among Cavanas, implying that a single shape had always characterized each group.
“Velasco has discovered that the practice of cranial modification was much more dynamic over time and across social [groups],” says bioarchaeologist Deborah Blom of the University of Vermont in Burlington.
Velasco examined 211 skulls of mummified humans interred in either of two Collagua cemeteries. Burial structures built against a cliff face were probably reserved for high-ranking individuals, whereas common burial grounds in several caves and under nearby rocky overhangs belonged to regular folk. Radiocarbon analyses of 13 bone and sediment samples allowed Velasco to sort Collagua skulls into early and late pre-Inca groups. A total of 97 skulls, including all 76 found in common burial grounds, belonged to the early group, which dated to between 1150 and 1300. Among these skulls, 38 — or about 39 percent — had been intentionally modified. Head shapes included sharply and slightly elongated forms as well as skulls compressed into wide, squat configurations.
Of the 14 skulls with extreme elongation, 13 came from low-ranking individuals, a pattern that might suggest regular folk first adopted elongated head shapes. But with only 21 skulls from elites, the finding may underestimate the early frequency of elongated heads among the high-status crowd. Various local groups may have adopted their own styles of head modification at that time, Velasco suggests.
In contrast, among 114 skulls from elite burial sites in the late pre-Inca period, dating to between 1300 and 1450, 84 — or about 74 percent — displayed altered shapes. A large majority of those modified skulls — about 64 percent — were sharply elongated. Shortly before the Incas’ arrival, prominent Collaguas embraced an elongated style as their preferred head shape, Velasco says. No skeletal evidence has been found to determine whether low-ranking individuals also adopted elongated skulls as a signature look in the late pre-Inca period.
In courtrooms around the United States, computer programs give testimony that helps decide who gets locked up and who walks free.
These algorithms are criminal recidivism predictors, which use personal information about defendants — like family and employment history — to assess that person’s likelihood of committing future crimes. Judges factor those risk ratings into verdicts on everything from bail to sentencing to parole.
Computers get a say in these life-changing decisions because their crime forecasts are supposedly less biased and more accurate than human guesswork. But investigations into algorithms’ treatment of different demographics have revealed how machines perpetuate human prejudices. Now there’s reason to doubt whether crime-prediction algorithms can even boast superhuman accuracy.
Computer scientist Julia Dressel recently analyzed the prognostic powers of a widely used recidivism predictor called COMPAS. This software estimates how likely a defendant is to commit a crime within the next two years based on six defendant features — although what features COMPAS uses and how it weighs various data points is a trade secret.
Dressel, who conducted the study while at Dartmouth College, recruited 400 online volunteers, who were presumed to have little or no criminal justice expertise. The researchers split their volunteers into groups of 20, and had each group read descriptions of 50 defendants. Using such information as sex, age and criminal history, the volunteers predicted which defendants would reoffend. A comparison of the volunteers’ answers with COMPAS’ predictions for the same 1,000 defendants found that both were about 65 percent accurate. “We were like, ‘Holy crap, that’s amazing,’” says study coauthor Hany Farid, a computer scientist at Dartmouth. “You have this commercial software that’s been used for years in courts around the country — how is it that we just asked a bunch of people online and [the results] are the same?”
There’s nothing inherently wrong with an algorithm that only performs as well as its human counterparts. But this finding, reported online January 17 in Science Advances, should be a wake-up call to law enforcement personnel who might have “a disproportionate confidence in these algorithms,” Farid says.
“Imagine you’re a judge, and I tell you I have this highly secretive, highly proprietary, expensive software built on big data, and it says the person standing in front of you is high risk” for reoffending, he says. “The judge would be like, ‘Yeah, that sounds quite serious.’ But now imagine if I tell you, ‘Twenty people online said this person is high risk.’ I imagine you’d weigh that information a little bit differently.” Maybe these predictions deserve the same amount of consideration.
Judges could get some better perspective on recidivism predictors’ performance if the Department of Justice or National Institute of Standards and Technology established a vetting process for new software, Farid says. Researchers could test computer programs against a large, diverse dataset of defendants and OK algorithms for courtroom use only if they get a passing grade for prediction.
Farid has his doubts that computers can show much improvement. He and Dressel built several simple and complex algorithms that used two to seven defendant features to predict recidivism. Like COMPAS, all their algorithms maxed out at about 65 percent, D-level accuracy. That makes Farid wonder whether trying to predict crime with anything approaching A+ accuracy is an exercise in futility.
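For a sense of what such a bare-bones predictor looks like, here is a minimal sketch of a classifier built on just two defendant features. The choice of logistic regression and the synthetic data are assumptions made for illustration; they are not Dressel and Farid's actual models or data.

```python
# Minimal sketch of a two-feature recidivism classifier (illustrative only;
# the model choice and the synthetic data are not from the study).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
age = rng.integers(18, 70, size=n)
priors = rng.poisson(2, size=n)

# Fabricated outcomes: reoffending made noisily more likely for younger
# defendants with more prior convictions. Purely synthetic.
p = 1 / (1 + np.exp(0.05 * (age - 35) - 0.4 * priors))
reoffended = rng.random(n) < p

X = np.column_stack([age, priors])
X_train, X_test, y_train, y_test = train_test_split(
    X, reoffended, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
print(f"accuracy on held-out synthetic data: {model.score(X_test, y_test):.2f}")
```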
“Maybe there will be huge breakthroughs in data analytics and machine learning over the next decade that [help us] do this with a high accuracy,” he says. But until then, humans may make better crime predictors than machines. After all, if a bunch of average Joe online recruits gave COMPAS a run for its money, criminal justice experts — like social workers, parole officers, judges or detectives — might just outperform the algorithm.
Even if computer programs aren’t used to predict recidivism, that doesn’t mean they can’t aid law enforcement, says Chelsea Barabas, a media researcher at MIT. Instead of creating algorithms that use historic crime data to predict who will reoffend, programmers could build algorithms that examine crime data to find trends that inform criminal justice research, Barabas and colleagues argue in a paper to be presented at the Conference on Fairness, Accountability and Transparency in New York City on February 23.
For instance, if a computer program studies crime statistics and discovers that certain features — like a person’s age or socioeconomic status — are highly related to repeated criminal activity, that could inspire new studies to see whether certain interventions, like therapy, help those at-risk groups. In this way, computer programs would do one better than just predict future crime. They could help prevent it.
Engineers are taking a counterintuitive approach to protecting future spacecraft: shooting at their experiments. The image above and high-speed video below capture a 2.8-millimeter aluminum bullet plowing through a test material for a space shield at 7 kilometers per second. The work is an effort to find structures that could stand up to the impact of space debris.
Earth is surrounded by a cloud of debris, both natural — such as micrometeorites and comet dust, which create meteor showers — and unnatural, including dead satellites and the cast-off detritus of space launches. Those pieces of flotsam can damage other spacecraft if they collide at high speeds, and bits smaller than about a centimeter are hard to track and avoid, says ESA materials engineer Benoit Bonvoisin in a statement. To defend future spacecraft from taking a hit, Bonvoisin and colleagues are developing armor made from fiber metal laminates, or several thin metal layers bonded together. The laminates are arranged in multiple layers separated by 10 to 30 centimeters, a configuration called a Whipple shield.
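A rough sense of why even tiny fragments matter comes from the projectile used in the test above. The estimate below assumes a solid aluminum sphere; the actual projectile's shape and mass are not given here.

```python
# Back-of-the-envelope impact energy for the test projectile described above
# (assumes a solid aluminum sphere; shape and density are assumptions).
import math

diameter = 2.8e-3        # m, projectile size in the test
velocity = 7_000         # m/s, impact speed in the test
rho_aluminum = 2_700     # kg/m^3

radius = diameter / 2
mass = rho_aluminum * (4 / 3) * math.pi * radius**3
kinetic_energy = 0.5 * mass * velocity**2

print(f"projectile mass ~ {mass * 1e3:.3f} g")        # ~0.031 g
print(f"kinetic energy  ~ {kinetic_energy:.0f} J")    # ~760 J
```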
In this experiment at the Fraunhofer Institute for High-Speed Dynamics in Germany, the first layer shatters the aluminum bullet into a cloud of smaller pieces, which the second layer is able to deflect. This configuration has been used for decades, but the materials are new. The next step is to test the shield in orbit with a small CubeSat, Bonvoisin says.
Immediately after a 19-year-old shot and killed 17 people and wounded 17 others at a Florida high school on Valentine’s Day, people leaped to explain what had caused the latest mass slaughter.
By now, it’s a familiar drill: Too many readily available guns. Too much untreated mental illness. Too much warped masculinity. Don’t forget those shoot-’em-up video games and movies. Add (or repeat, with voice raised) your own favorite here.
Now the national debate has received an invigorated dose of activism. Inspired by students from the targeted Florida high school, as many as 500,000 people are expected to rally against gun violence and in favor of stricter gun laws on March 24 in Washington, D.C., with sister marches taking place in cities across the world. But a big problem haunts the justifiable outrage over massacres of innocents going about their daily affairs: Whatever we think we know about school shootings, or mass public shootings in general, is either sheer speculation or wrong. A science of mass shootings doesn’t exist.
“There is little good research on what are probably a host of problems contributing to mass violence,” says criminologist Grant Duwe of the Minnesota Department of Corrections in St. Paul. Duwe has spent more than two decades combing through federal crime records and newspaper accounts to track trends in mass killings. Perhaps this dearth of data is no surprise. Research on any kind of gun violence gets little federal funding (SN Online: 3/9/18; SN: 5/14/16, p. 16). Criminologist James Alan Fox of Northeastern University in Boston has argued for more than 20 years that crime researchers mostly ignore mass shootings. Some of these researchers assume that whatever causes people to commit any form of murder explains mass shootings. Others regard mass killings as driven by severe mental disorders, thus falling outside the realm of crime studies.
When a research vacuum on a matter of public safety meets a 24-hour news cycle juiced up on national anguish, a thousand speculations bloom. “Everybody’s an expert on this issue, but we’re relying on anecdotes,” says sociologist Michael Rocque of Bates College in Lewiston, Maine.
Rocque and Duwe published a review of what’s known about reasons for mass public shootings, sometimes called rampage shootings, in the February Current Opinion in Psychology. Their conclusion: not much. Scientific ignorance on this issue is especially concerning given that Rocque and Duwe describe a slight, but not unprecedented, recent uptick in the national rate of rampage shootings.

Shooting stats

Defining mass public shootings to track their frequency is tricky. A consensus among researchers is emerging that these events occur in public places, include at least four people killed by gunshots within a 24-hour period and are not part of a robbery or any other separate crime, Rocque and Duwe say. Such incidents include workplace and school shootings. Overall, mass public shootings are rare, Duwe says, though intense media coverage may suggest the opposite. Even less obvious is that rampage shootings have been occurring for at least 100 years.
Using Federal Bureau of Investigation homicide reports, Congressional Research Service data on mass shootings and online archives of news accounts about multiple murders, Duwe has tracked U.S. rates of mass public shootings from 1915 to 2017.
He has identified a total of 185 such events through 2017, 150 of which have occurred since 1966. (In 2016, he published results up to 2013 in the Wiley Handbook of the Psychology of Mass Shootings.) In the earliest known case, from 1915, a Georgia man shot five people dead in the street, after killing an attorney he blamed for financial losses, and wounded 32 others. Another lawyer, who came to the crime scene upon hearing gunshots and was wounded by a bullet, ended the rampage when he grabbed a pistol from a hardware store and killed the shooter.
What stands out more than a century later is that, contrary to popular opinion, mass public shooting rates have not ballooned to record highs. While the average rate of these crimes has increased since 2005, it’s currently no greater than rates for some earlier periods. Crime trends are usually calculated as rates per 100,000 people for, say, robberies and assaults. But because of the small number of mass public shootings, Duwe calculates annual rates per 100 million people in the United States.
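The conversion itself is simple, as the toy example below shows; the yearly count used here is hypothetical.

```python
# The rate convention used above: incidents per 100 million people
# (the count in this example is hypothetical).
def rate_per_100_million(shootings_in_a_year, population):
    return shootings_in_a_year / population * 100_000_000

print(rate_per_100_million(4, 320_000_000))   # 1.25 per 100 million
```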
The average annual rate of mass public shootings since 2010 is about 1.44 per 100 million people. That roughly equals the 1990s rate of 1.41, Duwe finds.
The average annual rate from 1988 to 1993 reached 1.52, about the same as the 1.51 rate from 2007 to 2012. After dropping to just below 1 per 100 million people in 2013 and 2014, rates increased to nearly 1.3 the next three years.
From 1994 to 2004, rates mostly hovered around 1 per 100 million people or below, but spiked to over 2.5 in 1999. That’s the year two teens killed 13 people at Columbine High School in Colorado.
In contrast, rates were minuscule from 1950 to 1965, when only three mass public shootings were recorded. The average annual rate for 1970 to 1979 reached 0.52, based on 13 mass public shootings.
Numbers of people killed and wounded per shooting incident have risen in the last decade, though. Two events in 2012 were particularly horrific. Shootings at a movie theater in Aurora, Colo., and an elementary school in Newtown, Conn., resulted in 40 murders, many of them children, and 60 nonfatal gunshot wounds. Whether this trend reflects an increasing use of guns with large-capacity magazines or other factors “is up for grabs,” Duwe says.

The unknowns

No good evidence exists that either limiting or loosening gun access would reduce mass shootings, Rocque says. Virtually no research has examined whether a federal ban on assault weapons from 1994 to 2004 contributed to the relatively low rate of mass public shootings during that period. The same questions apply to concealed-carry laws, promoted as a way to deter rampage killers. As a gun owner and longtime hunter in his home state of Maine, Rocque calls for “an evidence-based movement” to establish links between gun laws and trends in mass shootings.
Mental illness also demands closer scrutiny, Duwe says. Of 160 mass public shooters from 1915 to 2013, about 60 percent had been assigned a psychiatric diagnosis or had shown signs of serious mental illness before the attack, Duwe has found. In general, mental illness is not linked to becoming violent. But, he says, many mass shooters are tormented and paranoid individuals who want to end their painful lives after evening the score with those they feel have wronged them.
Masculinity also regularly gets raised as a contributor to mass public shootings. It’s a plausible idea, since males committed all but one of the tragedies in Duwe’s review. Sociologist Michael Kimmel of Stony Brook University in New York contends that a sense of wounded masculinity as a result of various life failures inspires rage and even violence. But researchers have yet to examine how any facet of masculinity plays into school or workplace shootings, Rocque says.
Although school shooters often report feeling a desperate need to make up for having been inadequate as men, many factors contribute to their actions, argues clinical psychologist Peter Langman. Based in Allentown, Pa., Langman has interviewed and profiled several dozen school shooters in the United States and other countries. He divides perpetrators into three psychological categories: psychopathic (lacking empathy and concern for others), psychotic (experiencing paranoid delusions, hearing voices and having poor social skills) and traumatized (coming from families marked by drug addiction, sexual abuse and other severe problems).
But only a few of the millions of people who qualify for those categories translate their personal demons into killing sprees. Any formula to tag mass shooters in the making will inevitably round up lots of people who would never pose a deadly threat.
“There is no good evidence on what differentiates a bitter, aggrieved man from a bitter, aggrieved and dangerous man,” says psychologist Benjamin Winegard of Carroll College in Helena, Mont.
Nor does any published evidence support claims that being a bully or a victim of bullying, or watching violent video games and movies, leads to mass public shootings, Winegard contends. Bullying affects a disturbingly high proportion of youngsters and has been linked to later anxiety and depression (SN: 5/30/15, p. 12) but not to later violence. In laboratory studies, youngsters who play violent computer games or watch violent videos generally don’t become more aggressive or violent in experimental situations. Investigators have found that some school shooters, including the Newtown perpetrator, preferred playing nonviolent video games, Winegard says.
He and a colleague presented this evidence in the Wiley Handbook of the Psychology of Mass Shootings. Northeastern’s Fox also coauthored a chapter in that publication.
Still, a small but tragic group of kids lead lives that somehow turn them into killers of classmates or random strangers (SN: 5/27/06, p. 328). If some precise mix of, say, early brain damage, social ineptitude, paranoia and fury over life’s unfair twists cooks up mass killers, scientists don’t know the toxic recipe. And it won’t be easy to come up with one given the small number of mass public shooters to study.
Duwe recommends that researchers first do a better job of documenting the backgrounds of individual mass shooters and any events or experiences that may have precipitated their deadly actions. Then investigators can address broader social influences on mass shootings, including gun legislation and media coverage.
But more than a century after a distraught Georgia man mowed down six of his fellow citizens, research on mass violence still takes a backseat to public fear and outrage. “If we’re bemoaning the state of research,” Duwe says, “we have no one to blame but ourselves.”
To craft a new color-switching material, scientists have again taken inspiration from one of nature’s masters of disguise: the chameleon.
Thin films made of heart cells and hydrogel change hues when the films shrink or stretch, much like chameleon skin. This material, described online March 28 in Science Robotics, could be used to test new medications or possibly to build camouflaging robots.
The material is made of a paper-thin hydrogel sheet engraved with nanocrystal patterns, topped with a layer of living heart muscle cells from rats. These cells contract and expand — just as they would inside an actual rat heart to make it beat — causing the underlying hydrogel to shrink and stretch too. That movement changes the way light bounces off the etched crystal, making the material reflect more blue light when it contracts and more red light when it’s relaxed. This design is modeled after nanocrystals embedded in chameleon skin, which also reflect different colors of light when stretched (SN Online: 3/13/15).
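The direction of the color shift is what structural color generally predicts: the peak reflected wavelength scales with the spacing of the repeating nanostructure, so a shrinking lattice reflects shorter, bluer wavelengths. The toy calculation below uses a simple Bragg-type relation with made-up spacings and an assumed refractive index; it is not the model from the paper.

```python
# Toy structural-color calculation (not the paper's model): peak reflected
# wavelength from a first-order Bragg-type relation, lambda ~ 2 * n * d.
def reflected_wavelength_nm(spacing_nm, effective_index=1.4):
    # effective_index is an assumed, illustrative value
    return 2 * effective_index * spacing_nm

relaxed_nm = reflected_wavelength_nm(230)      # ~644 nm, toward the red
contracted_nm = reflected_wavelength_nm(170)   # ~476 nm, toward the blue
print(relaxed_nm, contracted_nm)
```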
When researchers treated the material with a drug normally used to boost heart rate, the films changed color more quickly — indicating the heart cells were pulsating more rapidly. That finding suggests the material could help drug developers monitor how heart cells react to new medications, says study coauthor Luoran Shang, a physicist at Southeast University in Nanjing, China. Or these kinds of films could also be used to make color-changing skins for soft robots, Shang says.
The center of the Milky Way may be abuzz with black holes. For the first time, a dozen small black holes have been spotted within the inner region of the galaxy in an area spanning just a few light-years — and there could be thousands more.
Astrophysicist Charles Hailey of Columbia University and his colleagues spotted the black holes thanks to the holes’ interactions with stars slowly spiraling inward, the team reports in Nature on April 4. Isolated black holes emit no light, but black holes stealing material from orbiting stars will heat that material until it emits X-rays. In 12 years of telescope data from NASA’s orbiting Chandra X-ray Observatory, Hailey and colleagues found 12 objects emitting the right X-ray energy to be black holes with stellar companions. Based on theoretical predictions of how many black holes are paired with stars, there should be up to 20,000 invisible solo black holes just in that small part of the galaxy. The discovery follows decades of astronomers searching for small black holes in the galactic center, where a supermassive black hole lives (SN: 3/4/17, p. 8). Theory predicted that the galaxy should contain millions or even 100 million black holes overall, with a glut of black holes piled up near the center (SN: 9/16/17, p. 7). But none had been found. “It was always kind of a mystery,” Hailey says. “If there’s so many that are supposed to be jammed into the central parsec [about 3.26 light-years], why haven’t we seen any evidence?” Finding the 12 was “really hard,” he admits.
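The jump from a dozen detections to thousands of black holes rests on a scaling argument: only a small fraction of black holes are expected to have stellar companions bright enough in X-rays to be seen. The fraction below is made up purely to show the arithmetic and reproduce the reported scale; it is not the paper's value.

```python
# Sketch of the extrapolation logic (the binary fraction is hypothetical,
# chosen only to illustrate the scaling, not taken from the paper).
detected_xray_binaries = 12
assumed_detectable_fraction = 0.0006

implied_total = detected_xray_binaries / assumed_detectable_fraction
print(f"implied black holes in the region: ~{implied_total:,.0f}")   # ~20,000
```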
It’s unclear how the black holes got to the galaxy’s center. Gravity could have tugged them toward the supermassive black hole. Or a new theory from Columbia astronomer Aleksey Generozov suggests black holes could be born in a disk around the supermassive black hole.
The researchers ruled out other objects emitting X-rays, such as neutron stars and white dwarfs, but acknowledged that up to half of the sources they found could be fast-spinning stellar corpses called millisecond pulsars rather than black holes. That could add to the debate over whether a mysterious excess in gamma rays at the galactic center is from pulsars or dark matter (SN: 12/23/17, p. 12).
“The theorists are going to have to slug it out and figure out what’s going on,” Hailey says.
Every few years, a buzz fills the air in the southeastern United States as adolescent cicadas crawl out from the soil to molt and make babies. After a childhood spent sipping tree sap underground, some species emerge every 13 years, others every 17 years, rarely overlapping. Yet somehow in this giant cicada orgy, hybridization happens between species that should be out of sync.
Researchers have sought to explain how the two life cycle lengths developed. A new study published online April 19 in Communications Biology fails to pin the difference on genetics, but finds some interesting things along the way. Cicadas fall into three species groups that diverged from one another about 3.9 million to 2.5 million years ago. Within each of those groups, species on a 13-year schedule diverged from 17-year-cycle cicadas about 200,000 to 100,000 years ago, the researchers from the United States and Japan report.
But the researchers also found that the 17-year and 13-year broods within each group share genetic code — evidence of hybridization. It’s possible that neighboring broods swapped DNA when their emergence overlapped — something that happens every 221 years — or if stragglers emerged early or late.
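The 221-year figure follows directly from the two cycle lengths: 13 and 17 share no common factor, so broods on the two schedules line up only every least common multiple of the two, 13 × 17 = 221 years.

```python
# Why co-emergence is so rare: 13 and 17 are coprime, so the cycles
# realign only every lcm(13, 17) = 221 years.
import math

print(math.lcm(13, 17))   # 221
```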
Everybody agrees that medical treatments should be based on sound evidence. Hardly anybody agrees on what sort of evidence counts as sound.
Sure, some people say the “gold standard” of medical evidence is the randomized controlled clinical trial. But such trials have their flaws, and translating their findings into sound real-world advice isn’t so straightforward. Besides, the best evidence rarely resides within any single study. Sound decisions come from considering the evidentiary database as a whole. That’s why meta-analyses are also a popular candidate for best evidence. And in principle, meta-analyses make sense. By aggregating many studies and subjecting them to sophisticated statistical analysis, a meta-analysis can identify beneficial effects (or potential dangers) that escape detection in small studies. But those statistical techniques are justified only if all the studies done on the subject can be obtained and if they all use essentially similar methods on sufficiently similar populations. Those criteria are seldom met. So it is usually not wise to accept a meta-analysis as the final word.
Still, meta-analysis is often a part of what some people consider to be the best way of evaluating medical evidence: the systematic review.
A systematic review entails using “a predetermined structured method to search, screen, select, appraise and summarize study findings to answer a narrowly focused research question,” physician and health care researcher Trisha Greenhalgh of the University of Oxford and colleagues write in a new paper. “Using an exhaustive search methodology, the reviewer extracts all possibly relevant primary studies, and then limits the dataset using explicit inclusion and exclusion criteria.”
Systematic reviews are highly focused; while hundreds or thousands of studies are initially identified, most are culled out so only a few are reviewed thoroughly with respect to the evidence they provide on a specific medical issue. The resulting published paper reaches a supposedly objective conclusion, often based on a quantitative analysis of the data. Sounds good, right? And in fact, systematic reviews have gained a reputation as a superior form of medical evidence. In many quarters of medical practice and publishing, systematic reviews are considered the soundest evidence you can get.
But “systematic” is not synonymous with “high quality,” as Greenhalgh, Sally Thorne (University of British Columbia, Vancouver) and Kirsti Malterud (Uni Research Health, Bergen, Norway) point out in their paper, accepted for publication in the European Journal of Clinical Investigation. Sometimes systematic reviews are valuable, they acknowledge. “But sometimes, the term ‘systematic review’ allows a data aggregation to claim a more privileged position within the knowledge hierarchy than it actually deserves.”
Greenhalgh and colleagues question, for instance, why systematic reviews should be regarded as superior to “narrative” reviews. In a narrative review, an expert in the field surveys relevant publications and then interprets and critiques them. Such a review’s goal is to produce “an authoritative argument, based on informed wisdom,” Greenhalgh and colleagues write. Rather than just producing a paper that announces a specific conclusion, a narrative review reflects the choices and judgments by an expert about what research is worth considering and how to best interpret the body of evidence and apply it to a variety of medical issues and questions. Systematic reviews are like products recommended to you by Amazon’s computers; narrative reviews are birthday presents from friends who’ve known you long and well.
For some reason, though, an expert reviewer’s “informed wisdom” is considered an inferior source of reliable advice for medical practitioners, Greenhalgh and colleagues write. “Reviews crafted through the experience and judgment of experts are often viewed as untrustworthy (‘eminence-based’ is a pejorative term).”
Yet if you really want the best evidence, it might be a good idea to seek the counsel of people who know good evidence when they see it.
A systematic review might be fine for answering “a very specific question about how to treat a particular disease in a particular target group,” Greenhalgh and colleagues write. “But the doctor in the clinic, the nurse on the ward or the social worker in the community will encounter patients with a wide diversity of health states, cultural backgrounds, illnesses, sufferings and resources.” Real-life patients often have little in common with participants in research studies. A meaningful synthesis of evidence relevant to real life requires a reviewer to use “creativity and judgment” in assessing “a broad range of knowledge sources and strategies.”
Narrative reviews come in many versions. Some are systematic in their own way. But a key difference is that the standard systematic review focuses on process (search strategies, exclusion criteria, mathematical method) while narrative reviews emphasize thinking and interpretation. Ranking systematic reviews superior to narrative reviews “elevates the mechanistic processes of exhaustive search, wide exclusion and mathematical averaging over the thoughtful, in-depth, critically reflective processes of engagement with ideas,” Greenhalgh and collaborators assert.
Tabulating data and calculating confidence intervals are important skills, they agree. But the rigidity of the systematic review approach has its downsides. It omits the outliers, the diversity and variations in people and their diseases, diminishing the depth and nuance of medical knowledge. In some cases, a systematic review may be the right approach to a specific question. But “the absence of thoughtful, interpretive critical reflection can render such products hollow, misleading and potentially harmful,” Greenhalgh and colleagues contend.
And even when systematic reviews are useful for answering a particular question, they don’t serve many other important purposes — such as identifying new questions also in need of answers. A narrative review can provide not only guidance for current treatment but also advice on what research is needed to improve treatment in the future. Without the perspective provided by more wide-ranging narrative reviews, research funding may flow “into questions that are of limited importance, and which have often already been answered.”
Their point extends beyond the realm of medical evidence. There is value in knowledge, wisdom and especially judgment that is lost when process trumps substance. In many realms of science (and life in general), wisdom is often subordinated to following rules. Some rules, of course, are worthwhile guides to life (see Gibbs’ list, for example). But as the writing expert Robert Gunning once articulated nicely, rules are substitutes for thought.
In situations where thought is unnecessary, or needlessly time-consuming, obeying the rules is a useful strategy. But many other circumstances call for actual informed thinking and sound judgment. All too often in such cases the non-thinkers of the world rely instead on algorithms, usually designed to implement business models, with no respect for the judgments of informed and wise human experts.
In other words, bots are dolts. They are like a disease. Finding the right treatment will require gathering sound evidence. You probably won’t get it from a systematic review.