Antarctica’s iceberg graveyard could reveal the ice sheet’s future

Just beyond the tip of the Antarctic Peninsula lies an iceberg graveyard.

There, in the Scotia Sea, many of the icebergs escaping from Antarctica begin to melt, depositing sediment from the continent, once trapped in the ice, onto the seafloor. Now, a team of researchers has embarked on a two-month expedition to excavate the deposited debris, hoping to discover secrets from the southernmost continent’s climatic past.

That hitchhiking sediment, the researchers say, can help piece together how Antarctica’s vast ice sheet has waxed and waned over millennia. And knowing how much the ice melted in some of those warmest periods, such as the Pliocene Epoch about 3 million years ago, may provide clues to the ice sheet’s future. That includes how quickly the ice may melt in today’s warming world and by how much, says paleoclimatologist Michael Weber of the University of Bonn in Germany.
Weber and Maureen Raymo, a paleoclimatologist at Lamont-Doherty Earth Observatory in Palisades, N.Y., are leading the expedition, which set sail on March 25.

“By looking at material carried by icebergs that calved off of the continent, we should be able to infer which sectors of the ice sheet were most unstable in the past,” Raymo says. “We can correlate the age and mineralogy of the ice-rafted debris to the bedrock in the section of Antarctica from which the bergs originated.”
Icebergs breaking off from the edges of Antarctica’s ice sheet tend to stay close to shore, floating counterclockwise around the continent. But when the bergs reach the Weddell Sea, on the eastern side of the peninsula, they are shunted northward through a region known as Iceberg Alley toward warmer waters in the Scotia Sea.

Because so many icebergs from all around the continent converge in one region, it is the ideal place to collect sediment cores and take stock of the debris that the bergs have dropped over millions of years.

“That area in the Scotia Sea is so exciting, because it’s a focus point between South America and the Antarctic Peninsula where the currents flow through, and there are a lot of icebergs,” says Gerhard Kuhn, a marine geologist at the Alfred Wegener Institute in Bremerhaven, Germany. “You get a picture of more or less [all of] Antarctica in that area,” says Kuhn, who has studied the region but is not aboard the current cruise.
The expedition, known as Expedition 382 of the International Ocean Discovery Program, plans to drill at six different sites in the Scotia Sea. At three sites, the team plans to penetrate about 600 meters into the seafloor. “That would likely bring us back to the mid-Miocene, which could translate into 12 million to 18 million years back in time,” Weber says.

At another site, the team plans to drill even deeper, 900 meters, to go further back in time, in hopes of finding sediments that date to the opening of the Drake Passage about 41 million years ago. That passage, a body of water that now lies between South America and Antarctica, opened a link between the Atlantic and Pacific oceans and may have played a role in building up Antarctica’s ice sheets at different times in its history.

A graveyard turned crystal ball
How much a melting Antarctica might have contributed to global sea-level rise following the last great ice age, which ended about 19,000 years ago, has been a subject of debate. Seas rose by about 130 meters from 19,000 to 8,000 years ago, Weber says, and much of the melting happened in the Northern Hemisphere.

But Antarctica may have played a larger role than once thought. In a study published in Nature in 2014, Kuhn, Weber and other colleagues reported that ice-rafted debris from that time period, as recorded in relatively short sediment cores from Iceberg Alley, often occurred in large pulses lasting a few centuries to millennia. Those data suggested that the southernmost continent was shedding lots of bergs much more quickly during those times than once thought.

Now, the researchers want to see even further into the past, to understand how quickly Antarctica’s ice sheet might have melted during even warmer periods, and how much it may have contributed to episodes of past sea-level rise.

The new drilling expedition targets several periods when the climate is thought to have warmed dramatically. One is a warm period in the middle Pliocene about 3.3 million to 3 million years ago, when average global temperatures were 2 to 3 degrees Celsius warmer than today; another is the ending of an older ice age about 130,000 years ago, when sea levels stood about 5 to 9 meters higher than today.

Such periods may serve as analogs to the continent’s future behavior due to anthropogenic global warming. Currently, global average temperatures on Earth are projected to increase by between about 1.5 degrees and 4 degrees Celsius relative to preindustrial times, depending on greenhouse gas emissions to the atmosphere over the next few decades (SN: 10/22/18, p. 18).

“The existing [sediment] record from Iceberg Alley taught us Antarctica lost ice through a threshold reaction,” Weber says. That means that when the continent reached a certain transition point, there was sudden and massive ice loss rather than just a slow, gradual melt.

“We have rather firm evidence that this threshold is passed once the ice sheet loses contact with the underlying ocean floor,” he says, adding that at that point, the shedding of ice becomes self-sustaining, and can go on for centuries. “With mounting evidence of recent ice-mass loss in many sectors of West Antarctica of a similar fashion, we need to be concerned that a new ice-mass loss event is already underway, and there is no stopping it.”

Chickens stand sentinel against mosquito-borne disease in Florida

For 40 years, they’ve held the front line in Florida’s fight against mosquito-borne diseases. And it turns out that the chickens standing sentinel in cities, marshes, woodlands and residential backyards are clucking good at their job.

Last year, chickens in 268 coops in over a third of Florida’s counties provided scientists weekly blood samples that revealed whether the birds had been bitten by mosquitoes carrying West Nile virus or the Eastern equine encephalitis or St. Louis encephalitis viruses.
If a chicken’s blood tests positive for antibodies to one of those viruses, authorities know that the pathogen is circulating. And if enough birds have the antibodies, state officials can ratchet up mosquito-killing measures such as pesticide spraying to help halt disease spread.
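That decision rule is essentially a threshold trigger. Here is a minimal Python sketch of the idea; the coop names, counts and cutoff are invented for illustration, not Florida's actual criteria:

```python
# Toy escalation trigger for sentinel surveillance. The counts and threshold
# are hypothetical, not Florida's actual criteria.
weekly_positives = {"coop_12": 3, "coop_48": 0, "coop_77": 5}

SPRAY_THRESHOLD = 4  # hypothetical count of antibody-positive birds

for coop, positives in weekly_positives.items():
    if positives >= SPRAY_THRESHOLD:
        print(f"{coop}: ratchet up mosquito control, such as pesticide spraying")
    elif positives > 0:
        print(f"{coop}: pathogen circulating; keep monitoring")
    else:
        print(f"{coop}: no signal this week")
```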

The sentinel chicken surveillance programs are “a really good way of monitoring” for certain virus activity, says Thomas Unnasch, a biologist who studies vector-borne diseases at the University of South Florida in Tampa. The birds “are sampling literally hundreds or thousands of mosquitoes every day,” he says. (The chickens can’t keep tabs on dengue or Zika; the mosquitoes carrying those viruses prefer to bite people rather than birds.)
In 2018, 833 chickens tested positive for West Nile virus antibodies in Florida, but only 39 people did, according to data from the state’s health department. For Eastern equine encephalitis virus, 154 chickens tested positive in 2018, compared with only three people.
Chickens that test positive for the viruses being surveyed don’t transmit them, and neither do people. Both are considered “dead-end hosts,” meaning that the viral concentration in the blood doesn’t get high enough to pass the virus on to the next mosquito that bites. Infected cardinals, robins and other backyard birds are the animal reservoirs that help keep the three viruses spreading in the area.
Sentinel chickens, by detecting where and when disease-carrying mosquitoes are buzzing, are also providing valuable data on how a virus can spread. Data from 2005 to 2016 revealed that Eastern equine encephalitis virus is active year-round in the Florida panhandle, making the area a source from which the virus moves elsewhere in the state and along the eastern United States, Unnasch and his colleagues report online March 11 in the American Journal of Tropical Medicine and Hygiene.

In people, the viral diseases monitored by the chickens are relatively rare, but can be deadly. The chickens don’t get especially sick, though. “You don’t usually see any symptoms at all,” Unnasch says.

Any chicken whose blood tests positive for the antibodies is removed from the coops since that bird can no longer alert authorities to a new infection. For these chickens, retirement may be spent on a farm, with school or 4-H clubs, or in a backyard coop, depending on the county. The sentinel chicken programs are ready with replacements, raising chicks to supply new birds to signal “where we have a threat to human health,” Unnasch says.

The first picture of a black hole opens a new era of astrophysics

This is what a black hole looks like.

A world-spanning network of telescopes called the Event Horizon Telescope zoomed in on the supermassive monster in the galaxy M87 to create this first-ever picture of a black hole.

“We have seen what we thought was unseeable. We have seen and taken a picture of a black hole,” Sheperd Doeleman, EHT Director and astrophysicist at the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass., said April 10 in Washington, D.C., at one of seven concurrent news conferences. The results were also published in six papers in the Astrophysical Journal Letters.

“We’ve been studying black holes so long, sometimes it’s easy to forget that none of us have actually seen one,” France Córdova, director of the National Science Foundation, said in the Washington, D.C., news conference. Seeing one “is a Herculean task,” she said.
That’s because black holes are notoriously hard to see. Their gravity is so extreme that nothing, not even light, can escape across the boundary at a black hole’s edge, known as the event horizon. But some black holes, especially supermassive ones dwelling in galaxies’ centers, stand out by voraciously accreting bright disks of gas and other material. The EHT image reveals the shadow of M87’s black hole on its accretion disk. Appearing as a fuzzy, asymmetrical ring, it unveils for the first time a dark abyss of one of the universe’s most mysterious objects.

“It’s been such a buildup,” Doeleman said. “It was just astonishment and wonder… to know that you’ve uncovered a part of the universe that was off limits to us.”

The much-anticipated big reveal of the image “lives up to the hype, that’s for sure,” says Yale University astrophysicist Priyamvada Natarajan, who is not on the EHT team. “It really brings home how fortunate we are as a species at this particular time, with the capacity of the human mind to comprehend the universe, to have built all the science and technology to make it happen.” (SN Online: 4/10/19)

The image aligns with expectations of what a black hole should look like based on Einstein’s general theory of relativity, which predicts how spacetime is warped by the extreme mass of a black hole. The picture is “one more strong piece of evidence supporting the existence of black holes. And that, of course, helps verify general relativity,” says physicist Clifford Will of the University of Florida in Gainesville, who is not on the EHT team. “Being able to actually see this shadow and to detect it is a tremendous first step.”

Earlier studies have tested general relativity by looking at the motions of stars (SN: 8/18/18, p. 12) or gas clouds (SN: 11/24/18, p. 16) near a black hole, but never at its edge. “It’s as good as it gets,” Will says. Tiptoe any closer and you’d be inside the black hole — unable to report back on the results of any experiments.
“Black hole environments are a likely place where general relativity would break down,” says EHT team member Feryal Özel, an astrophysicist at the University of Arizona in Tucson. So testing general relativity in such extreme conditions could reveal deviations from Einstein’s predictions.

Just because this first image upholds general relativity “doesn’t mean general relativity is completely fine,” she says. Many physicists think that general relativity won’t be the last word on gravity because it’s incompatible with another essential physics theory, quantum mechanics, which describes physics on very small scales.
The image also provides a new measurement of the black hole’s size and heft. “Our mass determination by just directly looking at the shadow has helped resolve a longstanding controversy,” Sera Markoff, a theoretical astrophysicist at the University of Amsterdam, said in the Washington, D.C., news conference. Estimates made using different techniques have ranged between 3.5 billion and 7.22 billion times the mass of the sun. But the new EHT measurements show that its mass is about 6.5 billion solar masses.

The team has also determined the behemoth’s size — its diameter stretches 38 billion kilometers — and that the black hole spins clockwise. “M87 is a monster even by supermassive black hole standards,” Markoff said.

EHT trained its sights on both M87’s black hole and Sagittarius A*, the supermassive black hole at the center of the Milky Way. But, it turns out, it was easier to image M87’s monster. That black hole is 55 million light-years from Earth in the constellation Virgo, about 2,000 times as far as Sgr A*. But it’s also about 1,000 times as massive as the Milky Way’s giant, which weighs the equivalent of roughly 4 million suns. That extra heft nearly balances out M87’s distance. “The size in the sky is pretty darn similar,” says Özel.
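That balancing act is easy to check on the back of an envelope. For a nonspinning black hole, general relativity puts the shadow's diameter near 5.2 Schwarzschild radii; plugging in the masses and distances quoted above (with Sgr A*'s distance derived from the "about 2,000 times as far" figure) gives sky sizes in the same ballpark. A rough Python sketch:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
LIGHT_YEAR = 9.461e15                       # meters
RAD_TO_UAS = math.degrees(1) * 3600 * 1e6   # radians -> microarcseconds

def shadow_angle_uas(mass_suns, dist_ly):
    """Angular diameter of a nonspinning black hole's shadow.

    General relativity puts the shadow diameter near
    2 * sqrt(27) * GM/c^2, about 5.2 Schwarzschild radii.
    """
    diameter_m = 2 * math.sqrt(27) * G * (mass_suns * M_SUN) / C**2
    return diameter_m / (dist_ly * LIGHT_YEAR) * RAD_TO_UAS

m87 = shadow_angle_uas(6.5e9, 55e6)       # mass and distance from the article
sgr = shadow_angle_uas(4e6, 55e6 / 2000)  # ~4 million suns, ~1/2,000 the distance
print(f"M87*:   ~{m87:.0f} microarcseconds")   # ~40
print(f"Sgr A*: ~{sgr:.0f} microarcseconds")   # ~49, indeed similar in the sky
```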
Due to its gravitational oomph, gases swirling around M87’s black hole move and vary in brightness more slowly than they do around the Milky Way’s. “During a single observation, Sgr A* doesn’t sit still, whereas M87 does,” Özel says. “Just based on this ‘Does the black hole sit still and pose for me?’ point of view, we knew M87 would cooperate more.”

After more data analysis, the team hopes to solve some long-standing mysteries about black holes, such as how M87’s behemoth spews a bright jet of charged particles thousands of light-years into space.

This first image is like the “shot heard round the world” that kicked off the American Revolutionary War, says Harvard University astrophysicist Avi Loeb, who isn’t on the EHT team. “It’s very significant; it gives a glimpse of what the future might hold, but it doesn’t give us all the information that we want.”
Hopes are still high for a much-anticipated glimpse of Sgr A*. The EHT team was able to collect some data on the Milky Way’s behemoth and is continuing to analyze those data, in the hopes of adding its image to the new black hole portrait gallery.

Since the appearance of that black hole changes so quickly, the team is having to develop new techniques to analyze the data. “We’re very excited to work on Sgr A*,” Daniel Marrone, an astrophysicist at the University of Arizona in Tucson, said in the Washington, D.C., news conference. “We’re doing that shortly. We’re not promising anything but we hope to get that very soon.”

Studying such different environments could reveal more details of how black holes behave, Loeb says. “The Milky Way is a very different galaxy from M87.”
The next look at the M87 and Milky Way behemoths will have to wait.

Scientists got a lucky stretch of good weather at all eight sites that made up the Event Horizon Telescope in 2017. Then the team was stymied by bad weather in 2018 and by technical difficulties that canceled the 2019 observing run.

The good news is that by 2020, there will be more observatories to work with. The Greenland Telescope joined the consortium in 2018, and the Kitt Peak National Observatory outside Tucson, Ariz., and the NOrthern Extended Millimeter Array (NOEMA) in the French Alps will join EHT in 2020.

Adding more telescopes could allow the team to extend the image, to better capture the jets that spew from the black hole. The researchers also plan to make observations using light of slightly higher frequency, which can further sharpen the image. And even bigger plans are on the horizon: “World domination is not enough for us; we also want to go to space,” Doeleman said.

These extra eyes may be just what’s needed to bring black holes into even greater focus.

Wildfires in boreal forests released a record amount of CO2 in 2021

WASHINGTON — In 2021, wildfires pillaged the world’s carbon-rich snow forests.

That year, burning boreal forests released 1.76 billion metric tons of carbon dioxide, researchers reported March 2 in a news conference at the annual meeting of the American Association for the Advancement of Science.

That’s a new record for the region, which stores about one-third of the world’s land-based carbon. “It’s also roughly double the emissions in that year from aviation,” said earth system scientist Steven Davis of the University of California, Irvine. The trend, if it continues, threatens to make fighting climate change even more difficult.
Boreal forests are part of the taiga, a vast region that necklaces the Earth just south of the Arctic Circle. Blazes in tropical forests like the Amazon tend to garner more attention for their potential to contribute large amounts of climate-warming gases to the atmosphere (SN: 9/28/17). But scientists estimate that on a per area basis, boreal forests store about twice as much carbon in their trees and soils as tropical forests.

Climate change is causing the taiga to warm about twice as fast as the global average. And wildfires are growing more widespread in the region, releasing more of the trapped carbon, which in turn can worsen climate change (SN: 5/19/21).

Davis and his colleagues analyzed satellite data on carbon emissions from boreal regions from 2000 to 2021. In 2021, emissions from boreal wildfires made up a whopping 23 percent of all the CO2 emitted by wildfires around the world, the researchers report in the March 3 Science. In contrast, boreal fires accounted for only about 10 percent of global wildfire CO2 emissions in an average year from 2000 to 2021.
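Those two percentages hint at how unusual 2021 was. A quick arithmetic sketch using only the figures above, plus the assumption (ours, not the study's) that a typical year's global wildfire total is broadly similar:

```python
boreal_2021 = 1.76e9    # metric tons of CO2 from boreal wildfires, per the study
share_2021 = 0.23       # boreal share of global wildfire CO2 in 2021
share_typical = 0.10    # boreal share in an average year, 2000-2021

global_total_2021 = boreal_2021 / share_2021
typical_boreal = global_total_2021 * share_typical  # assumes a similar global total

print(f"implied global wildfire CO2, 2021: {global_total_2021 / 1e9:.1f} billion t")
print(f"rough typical-year boreal CO2:     {typical_boreal / 1e9:.1f} billion t")
# ~7.7 billion t globally; a typical boreal year near 0.8 billion t means
# 2021's 1.76 billion t roughly doubled the usual output.
```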

The record-breaking emissions coincided with widespread heat waves and droughts in Siberia and northern Canada, probably fueled by human-caused climate change.

There’s no data yet to show if 2022 saw a similar surge in emissions. But, Davis said, “there’s not actually that much evidence that this record will stand for long.”

The fastest claw in the sea belongs to young snapping shrimp

Full-grown snapping shrimp were already known to have some of the fastest claws under the waves. But it turns out they’re nothing compared with their kids.

Juvenile snapping shrimp produce the highest known underwater accelerations of any reusable body part, researchers report February 28 in the Journal of Experimental Biology. While the claws’ top speed isn’t terribly impressive, they go from zero to full throttle in record time.

To deter predators or competitors, snapping shrimp create shock waves with their powerful claws. The shrimp store energy in the flexing exoskeleton of their claw as it opens, latching it in place much like a bow-and-arrow mechanism, says Jacob Harrison, a biologist at Georgia Tech in Atlanta.
Firing the claw and releasing this elastic energy produces a speeding jet of water. Bubbles form behind it and promptly implode, liberating a huge amount of energy, momentarily flashing as hot as the sun and creating a deafening crack (SN: 10/3/01).

But it was unclear how early in their lives the shrimp could use this weaponry. “We knew that the snapping shrimp did this really impressive behavior,” Harrison says. “But we really didn’t know anything about how this mechanism developed.”

While Harrison was a grad student at Duke University, he and his adviser, biomechanist Sheila Patek, reared bigclaw snapping shrimp (Alpheus heterochaelis) from eggs in the laboratory. At 1 month old, the tiny shrimp — less than a centimeter long — began firing their claws when disturbed. The researchers took high-speed video footage of these snaps and calculated their speed.

The wee shrimp could create the collapsing bubbles just like adults. Though the juveniles are a tenth the adults’ size or smaller, their claws accelerated 20 times as fast when firing. This acceleration — about 600 kilometers per second per second — is on “the same order of magnitude as a 9-millimeter bullet leaving a gun,” Harrison says.
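That bullet comparison can be sanity-checked with constant-acceleration kinematics. A minimal Python sketch, assuming typical 9-millimeter ballistics of roughly 360 meters per second of muzzle velocity developed over a 10-centimeter barrel (figures assumed for the comparison, not taken from the study):

```python
# Claw acceleration from the study, converted to SI units
claw_accel = 600 * 1000  # 600 km/s^2 -> 6.0e5 m/s^2

# Rough 9 mm pistol figures (assumed for the comparison, not from the study):
# ~360 m/s muzzle velocity developed over a ~0.10 m barrel.
muzzle_velocity = 360.0  # m/s
barrel_length = 0.10     # m

# Constant acceleration: v^2 = 2*a*d  =>  a = v^2 / (2*d)
bullet_accel = muzzle_velocity**2 / (2 * barrel_length)

print(f"claw:   {claw_accel:.1e} m/s^2")    # 6.0e+05
print(f"bullet: {bullet_accel:.1e} m/s^2")  # 6.5e+05 -- same order of magnitude
```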
Dracula ants (Mystrium camillae) and some termites produce more explosive bites but aren’t pushing against water. The stinging cells of jellyfish launch their venomous harpoons about 100 times as fast, but their firing mechanism is inherently single use. Snapping shrimp, on the other hand, can fire their claws again and again.
The juveniles’ firing and bubble creation weren’t very reliable at the smallest sizes, but the shrimp routinely tried snapping anyway. The team wonders if the young shrimp could be practicing and training the necessary musculature.

If so, that training might ultimately be crucial to the claw’s function, says Kate Feller, a visual ecologist at Union College in Schenectady, N.Y., who studies similarly ultrafast mantis shrimp and was not involved in the new study. “If you were to somehow manipulate the claws so that they couldn’t properly close and they couldn’t snap,” she wonders, “would that affect their ability to develop these mechanisms?”

Understanding the storage of elastic energy in biological materials and how it flows through them is “tricky,” Harrison says. Figuring out how such tiny claws store so much energy without fracturing may help researchers illuminate this superpower.

‘We Are Electric’ delivers the shocking story of bioelectricity

It took just a 9-volt battery and a little brain zapping to turn science writer Sally Adee into a stone-cold sharpshooter.

She had flown out to California to test an experimental DARPA technology that used electric jolts to speed soldiers’ sniper training. When the juice was flowing, Adee could tell. In a desert simulation that pitted her against virtual bad guys, she hit every one.

“Getting my neurons slapped around by an electric field instantly sharpened my ability to focus,” Adee writes in her new book, We Are Electric. That brain-stimulating experience ignited her 10-year quest to understand how electricity and biology intertwine. And she’s not just talking neurons.
Bioelectricity, Adee makes the case, is a shockingly underexplored area of science that spans all parts of the body. Its story is one of missed opportunity, scientific threads exposed and abandoned, tantalizing clues and claims, “electroquacks” and unproven medical devices — and frogs. Oh so many frogs.

Adee takes us back to the 18th century lab of Luigi Galvani, an Italian scientist hunting for what gives animals the spark of life. His gruesome experiments on twitching frog legs offered proof that animal bodies generate their own electricity, an idea that was hotly debated at the time. (So many scientists repeated Galvani’s experiments, in fact, that Europe began to run out of frogs.)

But around the same time, Galvani critic Alessandro Volta, another Italian scientist, invented the electric battery. It was the kind of razzle-dazzle, history-shaking device that stole the spotlight from animal electricity, and the fledgling field fizzled. “The idea had been set,” Adee writes. “Electricity was not for biology. It was for machines, and telegraphs, and chemical reactions.”
It took decades for scientists to pick up Galvani’s experimental threads and get the study of bioelectricity back on track. Since then, we’ve learned just how much electricity orchestrates our lives, and how much more remains to be discovered. Electricity zips through our neurons, makes our hearts tick and flows in every cell of the body. We’re made up of 40 trillion tiny rechargeable batteries, Adee writes.

She describes how cells use ion channels to usher charged molecules in and out. One thing readers might not expect from a book that illustrates the intricacies of ion channels: It’s surprisingly funny.
Chloride ions, for example, are “perpetually low-key ashamed” because they carry a measly -1 charge. Bogus medical contraptions (here’s looking at you, electric penis belts) were “electro-foolery.” In her acknowledgements, Adee jokes about the “life-saving powers of Voltron” and thanks people for enduring her caffeine jitters. That energy thrums through the book, charging her storytelling like a staticky balloon.

Adee is especially electrifying in a chapter about spinal nerve regeneration and why initial experiments juddered to a halt. Decades ago, scientists tried coaxing severed nerves to link up again by applying an electric field. The controversial technique sparked scientific drama, but the idea of using electricity to heal may have been ahead of its time. Fast-forward to 2020, and DARPA has awarded $16 million to researchers with a similar concept: a bioelectric bandage that speeds wound healing.

Along with zingy Band-Aids of the future, Adee describes other sci-fi–sounding devices in the works. One day, for example, surgeons may sprinkle your brain with neurograins, neural lace or neural dust, tiny electronic implants that could help scientists monitor brain activity or even help people control robotic arms or other devices (SN: 9/3/16, p. 10).

Such implants bring many challenges — like how to marry electronics to living tissue — but Adee’s book leaves readers with a sense of excitement. Not only could bioelectricity inspire new and improved medical devices, it could also reveal a current of unexpected truths about the body.

As Adee writes: “We are electrical machines whose full dimensions we have not even yet dreamed of.”

Nepal quake’s biggest shakes relatively spread out

The April 25 Nepal earthquake killed more than 8,000 people and caused several billion dollars in damage, but new research suggests the toll could have been a lot worse.

GPS readings taken during the quake indicate that most of the tremors vibrated through the ground as long shakes rather than quick pulses. That largely spared the low-rise buildings that make up much of Nepal’s capital, Kathmandu, geophysicists report online August 6 in Science. Those same low-frequency rumbles, though, toppled Kathmandu’s handful of larger buildings, such as the historic 62-meter-tall Dharahara Tower.

Understanding why the fault produced a quake at such low frequencies could help seismologists better identify future seismic hazards, says Jean-Philippe Avouac of the University of Cambridge. “This could be some good news not only for this major fault, but also potentially for similar faults around the world.”

Nepal sits over a tectonic boundary where the Indian Plate slips under the Eurasian Plate. In places, the two plates snag together, building stress that abruptly releases as an earthquake (SN: 5/16/15, p. 12).
Earthquakes stronger than April’s magnitude 7.8 temblor have hit Nepal before, including a magnitude 8.0 quake in 1934. Despite the recent quake’s lower intensity, its tremors somehow destroyed large buildings that had previously endured mightier earthquakes.

Avouac and colleagues monitored April’s quake using a network of 35 solar-powered GPS stations, the first time such an accurate system was in place during a major quake on this type of fault. The stations measured ground movements five times each second. The earthquake shook most intensely at 0.25 hertz, or one full wave every four seconds, with only moderate shaking above 1 hertz, or one or more complete waves each second.

A building is most vulnerable when shaken near its resonance frequency, a range where even small outside forces can result in big vibrations in the structure. Because taller structures have lower resonance frequencies, the April quake’s low-frequency rumbles caused larger buildings to sway and crumble while largely sparing smaller dwellings, the researchers found.
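A common engineering rule of thumb (an assumption here, not the study's model) puts a building's fundamental period at roughly 0.1 second per story, so resonance frequency drops as buildings grow taller. A short Python sketch shows why a quake peaking at 0.25 hertz menaces towers while barely troubling low-rise homes:

```python
QUAKE_PEAK_HZ = 0.25  # strongest shaking in the study: one full wave every 4 s

def resonance_hz(stories):
    """Rule-of-thumb fundamental frequency: period ~0.1 s per story,
    so frequency ~10/stories Hz. An engineering heuristic, not the study's model."""
    return 10.0 / stories

for stories in (2, 5, 10, 20, 40):
    print(f"{stories:>2} stories: ~{resonance_hz(stories):.2f} Hz")

# A 2-story home sits near 5 Hz, far above the quake's 0.25 Hz peak, while a
# structure with a ~4-second fundamental period lands right on it -- which is
# why the low-frequency rumbles toppled tall buildings and spared small ones.
```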

The low frequencies resulted from the smooth and relatively long duration of the tectonic slipping that initiated the quake, the researchers propose. The low-frequency waves then echoed across the region and produced protracted violent shaking.

Determining where future low-frequency quakes will strike could save lives by identifying which building types are most vulnerable to collapse, says geologist Kristin Morell of the University of Victoria in Canada. “These are things that should be built into building codes.”

Claim of memory transfer made 50 years ago

Memory Transfer Seen — Experiments with rats, showing how chemicals from one rat brain influence the memory of an untrained animal, indicate that tinkering with the brain of humans is also possible.

In the rat tests, brain material from an animal trained to go for food either at a light flash or at a sound signal was injected into an untrained rat. The injected animals then “remembered” whether light or sound meant food.
Update:
After this report, scientists from eight labs attempted to repeat the memory transplants. They failed, as they reported in Science in 1966.

Science fiction authors and futurists often predict that a person’s memories might be transferred to another person or a computer, but the idea is likely to remain speculation, says neuroscientist Eric Kandel, who won a Nobel Prize in 2000 for his work on memory. Brain wiring is too intricate and complicated to be exactly replicated, and scientists are still learning about how memories are made, stored and retrieved.

Climate ‘teleconnections’ may link droughts and fires across continents

Large-scale climate patterns that can impact weather across thousands of kilometers may have a hand in synchronizing multicontinental droughts and stoking wildfires around the world, two new studies find.

These profound patterns, known as climate teleconnections, typically occur as recurring phases that can last from weeks to years. “They are a kind of complex butterfly effect, in that things that are occurring in one place have many derivatives very far away,” says Sergio de Miguel, an ecosystem scientist at Spain’s University of Lleida and the Joint Research Unit CTFC-Agrotecnio in Solsona, Spain.
Major droughts arise around the same time at drought hot spots around the world, and the world’s major climate teleconnections may be behind the synchronization, researchers report in one study. What’s more, these profound patterns may also regulate the scorching of more than half of the area burned on Earth each year, de Miguel and colleagues report in the other study.

The research could help countries around the world forecast and collaborate to deal with widespread drought and fires, researchers say.

The El Niño-Southern Oscillation, or ENSO, is perhaps the most well-known climate teleconnection (SN: 8/21/19). ENSO entails phases during which weakened trade winds cause warm surface waters to amass in the eastern tropical Pacific Ocean, known as El Niño, and opposite phases of cooler tropical waters called La Niña.

These phases influence wind, temperature and precipitation patterns around the world, says climate scientist Samantha Stevenson of the University of California, Santa Barbara, who was not involved in either study. “If you change the temperature of the ocean in the tropical Pacific or the Atlantic … that energy has to go someplace,” she explains. For instance, a 1982 El Niño caused severe droughts in Indonesia and Australia and deluges and floods in parts of the United States.

Past research has predicted that human-caused climate change will provoke more intense droughts and worsen wildfire seasons in many regions (SN: 3/4/20). But few studies have investigated how shorter-lived climate variations — teleconnections — influence these events on a global scale. Such work could help countries improve forecasting efforts and share resources, says climate scientist Ashok Mishra of Clemson University in South Carolina.

In one of the new studies, Mishra and his colleagues tapped data on drought conditions from 1901 to 2018. They used a computer to simulate the world’s drought history as a network of drought events, drawing connections between events that occurred within three months of each other.
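The network construction can be illustrated with a toy version: treat each drought as a (region, onset) event and connect any two events that begin within three months of each other. In this Python sketch, only the three-month window comes from the study; the regions and dates are invented:

```python
from itertools import combinations

# Toy drought events as (region, onset month index). The region names echo
# the study's hot spots; the dates are invented for illustration.
events = [
    ("US West", 120), ("Amazon", 121), ("South Africa", 122),
    ("Scandinavia", 300), ("Arabian deserts", 302),
]

WINDOW = 3  # months; the study linked events starting within 3 months

# Nodes are events; edges join near-simultaneous ones.
edges = [(a, b) for a, b in combinations(events, 2)
         if abs(a[1] - b[1]) <= WINDOW]

for (reg_a, m_a), (reg_b, m_b) in edges:
    print(f"{reg_a} (month {m_a}) <-> {reg_b} (month {m_b})")
# Densely connected clusters mark hot spots whose droughts tend to synchronize.
```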

The researchers identified major drought hot spots across the globe — places in which droughts tended to appear simultaneously or within just a few months. These hot spots included the western and midwestern United States, the Amazon, the eastern slope of the Andes, South Africa, the Arabian deserts, southern Europe and Scandinavia.
“When you get a drought in one, you get a drought in others,” says climate scientist Ben Kravitz of Indiana University Bloomington, who was not involved in the study. “If that’s happening all at once, it can affect things like global trade, [distribution of humanitarian] aid, pollution and numerous other factors.”

A subsequent analysis of sea surface temperatures and precipitation patterns suggested that major climate teleconnections were behind the synchronization of droughts on separate continents, the researchers report January 10 in Nature Communications. El Niño appeared to be the main driver of simultaneous droughts spanning parts of South America, Africa and Australia. ENSO is known to exert a widespread influence on precipitation patterns (SN: 4/16/20). So that finding is “a good validation of the method,” Kravitz says. “We would expect that to appear.”
In the second study, published January 27 in Nature Communications, de Miguel and his colleagues investigated how climate teleconnections influence the amount of land burned around the world. Researchers knew that the climate patterns can influence the frequency and intensity of wildfires. In the new study, the researchers compared satellite data on global burned area from 1982 to 2018 with data on the strength and phase of the globe’s major climate teleconnections.

Variations in the yearly pattern of burned area strongly aligned with the phases and range of climate teleconnections. In all, these climate patterns regulate about 53 percent of the land burned worldwide each year, the team found. According to de Miguel, teleconnections directly influence the growth of vegetation and other conditions such as aridity, soil moisture and temperature that prime landscapes for fires.

The Tropical North Atlantic teleconnection, a pattern of shifting sea surface temperatures just north of the equator in the Atlantic Ocean, was associated with about one-quarter of the global burned area — making it the most powerful driver of global burning, especially in the Northern Hemisphere.

These researchers are showing that wildfire scars around the world are connected to these climate teleconnections, and that’s very useful, Stevenson says. “Studies like this can help us prepare how we might go about constructing larger scale international plans to deal with events that affect multiple places at once.”

A chemical imbalance doesn’t explain depression. So what does?

You’d be forgiven for thinking that depression has a simple explanation.

The same mantra — that the mood disorder comes from a chemical imbalance in the brain — is repeated in doctors’ offices, medical textbooks and pharmaceutical advertisements. Those ads tell us that depression can be eased by tweaking the chemicals that are off-kilter in the brain. The only problem — and it’s a big one — is that this explanation isn’t true.

The phrase “chemical imbalance” is too vague to be true or false; it doesn’t mean much of anything when it comes to the brain and all its complexity. Serotonin, the chemical messenger often tied to depression, is not the one key thing that explains depression. The same goes for other brain chemicals.
The hard truth is that despite decades of sophisticated research, we still don’t understand what depression is. There are no clear descriptions of it, and no obvious signs of it in the brain or blood.

The reasons we’re in this position are as complex as the disease itself. Commonly used measures of depression, created decades ago, neglect some important symptoms and overemphasize others, particularly among certain groups of people. Even if depression could be measured perfectly, the disorder exists amid myriad levels of complexity, from biological confluences of minuscule molecules in the brain all the way out to the influences of the world at large. Countless combinations of genetics, personality, history and life circumstances may all conspire to create the disorder in any one person. No wonder the science is stuck.

It’s easy to see why a simple “chemical imbalance” explanation holds appeal, even if it’s false, says Awais Aftab, a psychiatrist at Case Western Reserve University in Cleveland. What causes depression is nuanced, he says — “not something that can easily be captured in a slogan or buzzword.”

So here, up front, is your fair warning: There will be no satisfying wrap-up at the end of this story. You will not come away with a scientific explanation for depression, because one does not exist. But there is a way forward for depression researchers, Aftab says. It requires grappling with nuances, complexity and imperfect data.

Those hard examinations are under way. “There’s been some really interesting and exciting scientific and philosophical work,” Aftab says. That forward motion, however slow, gives him hope and may ultimately benefit the millions of people around the world weighed down by depression.

How is depression measured?
Many people who feel depressed go into a doctor’s office and get assessed with a checklist. “Yes” to trouble sleeping, “yes” to weight loss and “yes” to a depressed mood would all yield points that get tallied into a cumulative score. A high enough score may get someone a diagnosis. The process seems straightforward. But it’s not. “Even basic issues regarding measurement of depression are actually still quite open for debate,” Aftab says.
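The tallying itself amounts to a few lines of code, which is part of what makes the checklist's apparent simplicity so seductive. A bare-bones Python sketch, with illustrative items and a hypothetical cutoff rather than any validated instrument:

```python
# A bare-bones sketch of checklist scoring. Items and cutoff are illustrative,
# not any specific validated instrument.
answers = {
    "trouble sleeping": True,
    "weight loss": True,
    "depressed mood": True,
    "loss of interest": False,
}

score = sum(answers.values())  # one point per "yes"
CUTOFF = 3                     # hypothetical diagnostic threshold

verdict = "meets" if score >= CUTOFF else "does not meet"
print(f"score {score}/{len(answers)}: {verdict} the screening threshold")
```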

That’s why there are dozens of methods to assess depression, including the standard description set by the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders, or DSM-5. This manual is meant to standardize categories of illness.

Variety in measurement is a real problem for the field and points to the lack of understanding of the disease itself, says Eiko Fried, a clinical psychologist at Leiden University in the Netherlands. Current ways of measuring depression “leave you with a really impoverished, tiny look,” Fried says.

Scales can miss important symptoms, leaving people out. “Mental pain,” for instance, was described by patients with depression and their caregivers as an important feature of the illness, researchers reported in 2020 in Lancet Psychiatry. Yet the term doesn’t show up on standard depression measurements.

One reason for the trouble is that the experience of depression is, by its nature, deeply personal, says clinical psychologist Ioana Alina Cristea of the University of Pavia in Italy. Individual patient complaints are often the best tool for diagnosing the disorder, she says. “We can never let these elements of subjectivity go.”

In the middle of the 20th century, depression was diagnosed through subjective conversation and psychoanalysis, and considered by some to be an illness of the soul. In 1960, psychiatrist Max Hamilton attempted to course-correct toward objectivity. Working at the University of Leeds in England, he published a depression scale. Today, that scale, known by its acronyms HAM-D or HRSD, is one of the most widely used depression screening tools, often used in studies measuring depression and evaluating the promise of possible treatments.
“It’s a great scheme for a scale that was made in 1960,” Fried says. Since the HRSD was published, “we have put a man on the moon, invented the internet and created powerful computers small enough to fit in people’s pockets,” Fried and his colleagues wrote in April in Nature Reviews Psychology. Yet this 60-year-old tool remains a gold standard.

Hamilton developed his scale by observing patients who had already been diagnosed with depression. They exhibited symptoms such as weight loss and slowed speech. But those mixtures of symptoms don’t apply to everyone with depression, nor do they capture nuance in symptoms.

To spot these nuances, Fried looked at 52 depression symptoms across seven different scales for depression, including Hamilton’s scale. On average, each symptom appeared in three of the seven scales. A whopping 40 percent of the symptoms appeared in only one scale, Fried reported in 2017 in the Journal of Affective Disorders. The only specific symptom common to all seven scales? “Sad mood.”
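Fried's comparison boils down to counting, for each symptom, how many scales ask about it. A toy Python version with invented scale contents (the real analysis spanned 52 symptoms and seven scales):

```python
from collections import Counter

# Invented stand-ins for real scales; Fried's analysis covered 52 symptoms
# across seven instruments, including Hamilton's.
scales = {
    "scale_A": {"sad mood", "insomnia", "weight loss"},
    "scale_B": {"sad mood", "fatigue"},
    "scale_C": {"sad mood", "insomnia", "guilt"},
}

counts = Counter(sym for items in scales.values() for sym in items)

print("in only one scale:", sorted(s for s, n in counts.items() if n == 1))
print("in every scale:   ", sorted(s for s, n in counts.items() if n == len(scales)))
# In the real data, 40 percent of symptoms appeared on just one scale,
# and only "sad mood" appeared on all seven.
```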

In a study that examined depression symptoms reported by 3,703 people, Fried and Randolph Nesse, an evolutionary psychiatrist at the University of Michigan Medical School in Ann Arbor, found 1,030 unique symptom profiles. Roughly 14 percent of participants had combinations of symptoms that were not shared with anyone else, the researchers reported in 2015 in the Journal of Affective Disorders.

Before reliable thermometers, the concept of temperature was murky. How do you understand the science of hot and cold without the tools to measure it? “You don’t,” Fried says. “You make a terrible measurement, and you have a terrible theory of what it is.” Depression presents a similar challenge, he says. Without good measurements, how can you possibly diagnose depression, determine whether symptoms get better with treatments or even prevent it in the first place?

Depression differs by gender, race and culture
The story gets murkier when considering who these depression scales were made for. Symptoms differ among groups of people, making the diagnosis even less relevant for certain groups.
Behavioral researcher Leslie Adams of Johns Hopkins Bloomberg School of Public Health studies depression in Black men. “It’s clear that [depression] is negatively impacting their work lives, social lives and relationships. But they’re not being diagnosed at the same rate” as other groups, she says. For instance, white people have a lifetime risk of major depressive disorder of almost 18 percent; Black people’s lifetime risk is 10.4 percent, researchers reported in 2007 in JAMA Psychiatry. This discrepancy led Adams to ask: “Could there be a problem with diagnostic tools?”

Turns out, there is. Black men with depression have several characteristics that common scales miss, such as feelings of internal conflict, not communicating with others and feeling the burdens of societal pressure, Adams and colleagues reported in 2021 in BMC Public Health. A lot of depression measurements are based on questions that don’t capture these symptoms, Adams says. “ ‘Are you very sad?’ ‘Are you crying?’ Some people do not emote in the same way,” she says. “You may be missing things.”

American Indian women living in the Southeast United States also experience symptoms that aren’t adequately caught by the scales, Adams and her team found in a separate study. These women also reported experiences that do not necessarily signal depression for them but generally do for wider populations.

On common scales, “there are some items that really do not capture the experience of depression for these groups,” Adams says. For instance, a common question asks how well someone agrees with the sentence: “I felt everything I did was an effort.” That “can mean a lot of things, and it’s not necessarily tied to depression,” Adams says. The same goes for items such as, “People dislike me.” A person of color faced with racism and marginalization might agree with that, regardless of depression, she says.

Our ways to measure depression capture only a tiny slice of the big picture. The same can be said about our understanding of what’s happening in the brain.

The flawed serotonin hypothesis
Serotonin came into the spotlight in part because of the serendipitous discovery of drugs that affect serotonin levels, called selective serotonin reuptake inhibitors, or SSRIs. After getting its start in the late 1960s, the “serotonin hypothesis” flourished in the late ’90s, as advertisers ran commercials that told viewers that SSRIs fixed the serotonin deficit that can accompany depression. These messages changed the way people talked and thought about depression. Having a simple biological explanation helped some people and their doctors, in part by easing the shame some people felt for not being able to snap out of it on their own. It gave doctors ways to talk with people about the mood disorder.

But it was a simplified picture. A recent review of evidence, published in July in Molecular Psychiatry, finds no consistent data supporting the idea that low serotonin causes depression. Some headlines declared that the study was a grand takedown of the serotonin hypothesis. To depression researchers, the findings weren’t a surprise. Many had already realized this simple description wasn’t helpful.

There’s plenty of data suggesting that serotonin, and other chemical messengers such as dopamine and norepinephrine, are somehow involved in depression, including a study by neuropharmacologist Gitte Moos Knudsen of the University of Copenhagen. She and colleagues recently found that 17 people who were in the midst of a depressive episode released, on average, less serotonin in certain brain areas than 20 people who weren’t depressed. The study is small, but it’s one of the first to look at serotonin release in living human brains of people with depression.

But Knudsen cautions that those results, published in October in Biological Psychiatry, don’t mean that depression is fully caused by low serotonin levels. “It’s easy to defer to simple explanations,” she says.

SSRIs essentially form a molecular blockade, stopping serotonin from being reabsorbed into nerve cells and keeping the levels high between the cells. Those high levels are thought to influence nerve cell activity in ways that help people feel better.

Because the drugs can ease symptoms in about half of people with depression, it seemed to make sense that depression was caused by problems with serotonin. But just because a treatment works by doing something doesn’t mean the disease works in the opposite way. That’s backward logic, psychiatrist Nassir Ghaemi of Tufts University School of Medicine in Boston wrote in October in a Psychology Today essay. Aspirin can ease a headache, but a headache isn’t caused by low aspirin.

“We think we have a much more nuanced picture of what depression is today,” Knudsen says. The trouble is figuring out the many details. “We need to be honest with patients, to say that we don’t know everything about this,” she says.

The brain contains seven distinct classes of receptors that sense serotonin. That’s not even accounting for sensors for other messengers such as dopamine and norepinephrine. And these receptors sit on a wide variety of nerve cells, some that send signals when they sense serotonin, some that dampen signals. And serotonin, dopamine and norepinephrine are just a few of dozens of chemicals that carry information throughout a multitude of interconnected brain circuits. This complexity is so great that it renders the phrase “chemical imbalance” meaningless.

Overly simple claims — low serotonin causes depression, or low serotonin isn’t involved — serve only to keep us stymied, Aftab says. “[It] just keeps up that unhelpful binary.”
Depression research can’t ignore the world
In the 1990s, Aftab says, depression researchers got intensely focused on the brain. “They were trying to find the broken part of the brain that causes depression.” That limited view “really hurt depression research,” Aftab says. In the last 10 years or so, “there’s a general recognition that that sort of mind-set is not going to give us the answers.”

Reducing depression to specific problems of biology in the brain didn’t work, Cristea says. “If you were a doctor 10 years ago, the dream was that the neuroscience would give us the markers. We would look at the markers and say, ‘OK. You [get] this drug. You, this kind of therapy.’ But it hasn’t happened.” Part of that, she says, is because depression is an “existentially complicated disorder” that’s tough to simplify, quantify and study in a lab.

Our friendships, our loves, our setbacks and our stress can all influence our health. Take a recent study of first-year doctors in the United States. The more these doctors worked, the higher the rate of depression, scientists reported in October in the New England Journal of Medicine. Similar trends exist for caregivers of people with dementia and health care workers who kept emergency departments open during the COVID-19 pandemic. Their high-stress experiences may have prompted depression in some way.

“Depression is linked to the state of the world — and there is no denying it,” Aftab says.
Today’s research on depression ought to be more pluralistic, Adams says. “There are so many factors at play that we can’t just rest on one solution,” she says. Research from neuroscience and genetics has helped identify brain circuits, chemical messengers, cell types, molecules and genes that all may be involved in the disorder. But researchers aren’t satisfied with that. “There is other evidence that remains unexplored,” Adams says. “With our neuroscience advances, there should be similar advances in public health and psychiatric work.”

That’s happening. For her part, Adams and colleagues have just begun a study looking at moment-to-moment stressors in the lives of Black adolescents, ages 12 to 18, as measured by cell phone questionnaires. Responses, she hopes, will yield clues about depression and risk of suicide.

Other researchers are trying to fit together all of these different ways of seeing the problem. Fried, for example, is developing new concepts of depression that acknowledge the interacting systems. You tug on one aspect of it — using an antidepressant for instance, or changing sleep patterns — and see how the rest of the system reacts.

Approaches like these recognize the complexity of the problem and aim to figure out ways to handle it. We will never have a simple explanation for depression; we are now learning that one cannot possibly exist. That may sound like cold comfort to people in depression’s grip. But seeing the challenge with clear eyes may be the thing that moves us forward.