An experimental treatment for endometriosis, a painful gynecological disease that affects some 190 million people worldwide, may one day offer new hope for easing symptoms.
Monthly antibody injections reversed telltale signs of endometriosis in monkeys, researchers report February 22 in Science Translational Medicine. The antibody targets IL-8, a molecule that whips up inflammation inside the scattered, sometimes bleeding lesions that mark the disease. Once the antibody neutralized IL-8, those hallmark lesions shrank, the team found.
The new treatment is “pretty potent,” says Philippa Saunders, a reproductive scientist at the University of Edinburgh who was not involved with the work. The study’s authors haven’t reported a cure, she points out, but their antibody does seem to have an impact. “I think it’s really very promising,” she says.
Many scientists think endometriosis occurs when bits of the uterine lining — the endometrium — slough off during menstruation. Instead of exiting via the vagina, they voyage in the other direction: up through the fallopian tubes. Those bits of tissue then trespass through the body, sprouting lesions where they land.

They’ll glom onto the ovaries, fallopian tubes, bladder and other spots outside of the uterus and take on a life of their own, Saunders says. The lesions can grow nerve cells, form tough nubs of tissue and even bleed during menstrual cycles. They can also kick off chronic bouts of pelvic pain. If you have endometriosis, you can experience “pain when you urinate, pain when you defecate, pain when you have sex, pain when you move around,” Saunders says. People with the disease can also struggle with infertility and depression, she adds. “It’s really nasty.”

Once diagnosed, patients face a dearth of treatment options — there’s no cure, only therapies to alleviate symptoms. Surgery to remove lesions can help, but symptoms often come back.
The disease affects at least 10 percent of girls, women and transgender men in their reproductive years, Saunders says. And people typically suffer for years — about eight on average — before a diagnosis. “Doctors consider menstrual pelvic pain a very common thing,” says Ayako Nishimoto-Kakiuchi, a pharmacologist at Chugai Pharmaceutical Co. Ltd. in Tokyo. Endometriosis “is underestimated in the clinic,” she says. “I strongly believe that this disease has been understudied.”
Hormonal drugs that stop ovulation and menstruation can also offer relief, says Serdar Bulun, a reproductive endocrinologist at Northwestern University Feinberg School of Medicine in Chicago not involved with the new study. But those drugs come with side effects and aren’t ideal for people trying to become pregnant. “I see these patients day in and day out,” he says. “I see how much they suffer, and I feel like we are not doing enough.”
Nishimoto-Kakiuchi’s team engineered an antibody that grabs onto the inflammatory factor IL-8, a protein that scientists have previously fingered as one potential culprit in the disease. The antibody acts like a garbage collector, Nishimoto-Kakiuchi says. It grabs IL-8, delivers it to the cell’s waste disposal machinery, and then heads out to snare more IL-8.
The team tested the antibody in cynomolgus monkeys that were surgically modified to have the disease. (Endometriosis rarely shows up spontaneously in these monkeys, the scientists discovered previously after screening more than 600 females.) The team treated 11 monkeys with the antibody injection once a month for six months. In these animals, lesions shriveled, and the adhesive tissue that glues them to the body thinned out, too. Before this study, Nishimoto-Kakiuchi says, the team didn’t think such signs of endometriosis were reversible.

Her company has now started a Phase I clinical trial to test the safety of the therapy in humans. The treatment is one of several endometriosis therapies scientists are testing (SN: 7/19/19). Other trials will test new hormonal drugs, robot-assisted surgery and behavioral interventions.
Doctors need new options to help people with the disease, Saunders says. “There’s a huge unmet clinical need.”
LAS VEGAS — It’s a bold claim: The quest to create a superconductor that works under practical conditions is finally fulfilled, a team of researchers says. But controversy has dogged the team’s earlier claim of record-breaking superconductivity, and the new result is already facing extreme scrutiny.
The ultimate test will be whether the result can be confirmed by other researchers, says physicist Mikhail Eremets of the Max Planck Institute for Chemistry in Mainz, Germany. “I repeat it like [a] mantra: ‘Reproduce.’”

Many materials become superconductors, able to transmit electricity with no resistance, provided they’re cooled to very low temperatures. A few superconductors work under warmer conditions, but those must be squeezed to crushing pressures, so they’re impractical to use.
Now physicist Ranga Dias of the University of Rochester in New York and colleagues say they have created a superconductor that works at both room temperature and relatively low pressure. A superconductor that operates under such commonplace conditions could herald a new age of high-efficiency machines, supersensitive instrumentation and revolutionary electronics.
“This is the start of the new type of material that’s useful for practical applications,” Dias said March 7 at the American Physical Society meeting, where he reported the feat.
The superconductor is made of hydrogen mixed with nitrogen and a rare earth element called lutetium, Dias and colleagues report March 8 in Nature. The team combined the elements and squeezed them in a device known as a diamond anvil cell. The researchers then varied the pressure and temperature and measured the resistance to electrical flow in the compound.
At temperatures as high as 294 kelvins (about 21° Celsius or 70° Fahrenheit), the material seemed to lose any electrical resistance. It still required pressures of 10 kilobar, which is about 10,000 times the pressure of Earth’s atmosphere. But that’s far lower than the millions of atmospheres of pressure typically required for superconductors that operate near room temperature. If confirmed, that makes the material much more promising for real-world applications.
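To put those pressures on a common scale, here is a minimal unit-conversion sketch in Python; the conversion factors are standard, and the comparison values come from the figures quoted above:

```python
# Sanity-check the pressures quoted above (standard conversion factors).
KBAR_TO_BAR = 1_000        # 1 kilobar = 1,000 bar
BAR_TO_ATM = 0.986923      # 1 bar is about 0.986923 standard atmospheres
BAR_TO_GPA = 1e-4          # 1 bar = 0.0001 gigapascals

p_bar = 10 * KBAR_TO_BAR   # the reported 10 kilobar
print(f"{p_bar * BAR_TO_ATM:,.0f} atm")  # ~9,869 atm: roughly 10,000x atmospheric pressure
print(f"{p_bar * BAR_TO_GPA:.1f} GPa")   # 1.0 GPa, versus the ~100+ GPa (millions of
                                         # atmospheres) needed by earlier hydride superconductors
```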
The material displayed several hallmarks of a superconductor, the team reports. Not only did the electrical resistance suddenly drop as it became superconducting, but the material also expelled magnetic fields and exhibited an abrupt change in its heat capacity, Dias says.
When the researchers put the squeeze on the material in the diamond anvil cell, it suddenly turned from a bluish hue to hot pink. “I had never seen a color change like this in a material,” Dias says. “It was like, wow.” That color change indicated a shift in the electrical properties of the material as it became a superconductor, Dias says.

This superconductor might be able to escape the confines of a diamond anvil cell, Dias says, opening it up to practical applications. A technique called strain engineering, for example, could mimic the required pressure. In such a process, researchers grow a material on a surface that constrains growth, putting a strain on the material that replicates the effects of externally applied squeezing.
Still, the research faces significant skepticism, in part because of the firestorm over the team’s earlier publication that claimed the discovery of superconductivity in a compound of carbon, sulfur and hydrogen at 15° C (SN: 10/14/20). Editors at Nature retracted that paper, over the objection of Dias and his coauthors, citing irregularities in the researchers’ data handling that undermined the editors’ confidence in the claims (SN: 10/3/22).
Several experts have expressed a lack of confidence in the new results presented by Dias’ group, based on that history. Not only was the previous result retracted, but other researchers were unable to reproduce it, says Eremets, including his own group at the Max Planck Institute. “The main test of validity — reproducibility — was failed, and from my point of view that’s the most important thing.”
The stakes are high. “If it’s true, it’s a great discovery,” says physicist Eugene Gregoryanz of the University of Edinburgh. But he views the researchers with suspicion. “Whether it’s true or not, I guess time will show.”
Others are more positive. “It’s an excellent study,” says materials chemist Russell Hemley of the University of Illinois Chicago. “The data as presented, in terms of evidence for superconductivity, is very strong.” Hemley was not involved with the study but has collaborated with Dias in the past, including on a follow-up to the retracted superconductor paper. That paper, submitted February 16 at arXiv.org and not yet peer-reviewed, reports that the previously claimed superconductor does function near room temperature.
The new superconductor is a hydrogen-rich type known as a hydride. Scientists predict that pure hydrogen should be a room-temperature superconductor, but only at extremely high pressures that make it difficult to produce. To lower the pressure, scientists have added in other elements, making hydride superconductors.
In 2015, Eremets and colleagues produced a compound of sulfur and hydrogen that was superconducting up to −70° C, a record high temperature at the time (SN: 12/15/15). A few years later, a compound of lanthanum and hydrogen was found to superconduct under still chilly conditions, but even closer to room temperature (SN: 9/10/18). Both materials require pressures too high for practical use.
It’s difficult to understand how the new superconductor fits in with other hydrides. Theoretical calculations of how similar hydrides behave wouldn’t suggest that such a material would be superconducting at the reported temperatures and pressures, says theoretical physicist Lilia Boeri of the Sapienza University of Rome. “For me, it looks very strange,” Boeri says. “It’s something completely unexpected…. If it’s true, it’s very different from the other hydrides.”
WASHINGTON — The tale of the first horseback riders may be written on the bones of the ancient Yamnaya people.
Five excavated skeletons dated to about 3000 to 2500 B.C. show clear signs of physical stress that hint these Yamnaya individuals may have frequently ridden horses, researchers reported March 3 at the American Association for the Advancement of Science Annual Meeting and in Science Advances. That makes the Yamnaya the earliest humans identified as likely horseback riders so far.

Five thousand years ago, the Yamnaya migrated widely, spreading Indo-European languages and altering the human gene pool across Europe and Asia (SN: 11/15/17; SN: 9/5/19). Their travels eventually stretched from modern-day Hungary to Mongolia, roughly 4,500 kilometers, and are thought to have taken place over only a couple of centuries.
“In many ways, [the Yamnaya] changed the history of Eurasia,” says archaeologist Volker Heyd of the University of Helsinki.
Horse domestication became widely established around 3500 B.C., probably for milk and meat (SN: 7/6/17). Some researchers have suggested the Botai people in modern-day Kazakhstan started riding horses during that time, but that’s debated (SN: 3/5/09). The Yamnaya had horses as well, and archaeologists have speculated that the people probably rode them, but evidence was lacking.
But the oldest known depictions of horseback riding are from about 2000 B.C. Complicating efforts to determine when the behavior emerged, possible riding gear would have been made of long-decayed natural materials, and scientists rarely, if ever, find complete horse skeletons from that time.

Heyd and colleagues weren’t seeking evidence of horsemanship. They were working on a massive project called the Yamnaya Impact on Prehistoric Europe to understand every aspect of the people’s lives.
While assessing over 200 human skeletons excavated from countries including Romania, Bulgaria and Hungary, bioanthropologist Martin Trautmann noticed that one individual’s bones carried distinct traits on the femur and elsewhere that he’d seen before. He immediately suspected horseback riding.
“It was just kind of a surprise,” says Trautmann, also of the University of Helsinki.
If it were a one-off case, he says he would have dismissed it. But as he continued analyzing skeletons, he noticed that several had the same traits.
Trautmann, Heyd and colleagues assessed all the skeletons for the presence of six physical signs of horseback riding that have been documented in previous research, a constellation of traits dubbed horsemanship syndrome. These signs included pelvis and femur marks that could have come from the biomechanical stress of sitting with spread legs while holding onto a horse, as well as healed vertebrae damage from injuries that could have come from falling off. The team also created a scoring system to account for the skeletal traits’ severity, preservation and relative importance.
“Bones are living tissue,” Trautmann says. “So they react to any type of environmental stimulus.”
The team deemed five Yamnaya male individuals frequent horseback riders because they had four or more signs of horsemanship. Nine other Yamnaya males probably rode horses, but the researchers were less confident because those skeletons each displayed only three markers.

“Hypothetically speaking, it’s very logical,” says bioarchaeologist Maria Mednikova of the Russian Academy of Sciences in Moscow, who was not involved in the new study. The Yamnaya were very close to horses, she says, so at some point, they probably experimented with riding.
She now plans to check for the horse-riding traits in the Yamnaya skeletons she has access to. “The human skeletal system is like a book — if you have some knowledge, you can read it,” Mednikova says.
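The trait-counting logic behind those classifications can be sketched in a few lines of Python. This is an illustration only: the trait names below are invented stand-ins, and the study’s actual scoring also weighted each marker’s severity, preservation and relative importance.

```python
# Hypothetical illustration of the four-or-more criterion; trait names are
# invented stand-ins, and the real scoring also weighted severity,
# preservation and relative importance of each marker.
HORSEMANSHIP_TRAITS = {
    "pelvis_stress_mark", "femur_stress_mark", "hip_socket_wear",
    "healed_vertebrae_damage", "femur_shaft_remodeling", "acetabulum_rim_change",
}  # six documented signs of "horsemanship syndrome" (names illustrative)

def classify(observed: set[str]) -> str:
    n = len(observed & HORSEMANSHIP_TRAITS)
    if n >= 4:
        return "frequent rider"        # the study's five confident cases
    if n == 3:
        return "probable rider"        # the nine less certain cases
    return "insufficient evidence"

print(classify({"pelvis_stress_mark", "femur_stress_mark",
                "healed_vertebrae_damage", "hip_socket_wear"}))  # -> frequent rider
```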
Archaeologist Ursula Brosseder, who also was not involved in the work, warns not to interpret this finding as equestrianism reaching its full bloom within the Yamnaya culture. Brosseder, formerly of the University of Bonn in Germany, sees the paper’s discovery as humans still figuring out what they could do with horses as part of early domestication.
As for Heyd, he says he has long suspected that the Yamnaya rode horses, considering that they had the animals and expanded so rapidly across such a large area. “Now, we have proof.”
For generations of dogs, home is the radioactive remains of the Chernobyl Nuclear Power Plant.
In the first genetic analysis of these animals, scientists have discovered that dogs living in the power plant industrial area are genetically distinct from dogs living farther away.
Though the team could distinguish between dog populations, the researchers did not pinpoint radiation as the reason for any genetic differences. But future studies that build on the findings, reported March 3 in Science Advances, may help uncover how radioactive environments leave their mark on animal genomes. That could have implications for other nuclear disasters and even human space travel, says Timothy Mousseau, an evolutionary ecologist at the University of South Carolina in Columbia. “We have high hopes that what we learn from these dogs … will be of use for understanding human exposures in the future,” he says.
Since his first trip in 1999, Mousseau has stopped counting how many times he’s been to Chernobyl. “I lost track after we hit about 50 visits.”
He first encountered Chernobyl’s semi-feral dogs in 2017, on a trip with the Clean Futures Fund+, an organization that provides veterinary care to the animals. Not much is known about how local dogs survived after the nuclear accident. In 1986, an explosion at one of the power plant’s reactors kicked off a disaster that lofted vast amounts of radioactive isotopes into the air. Contamination from the plant’s radioactive cloud largely settled nearby, in a region now called the Chernobyl Exclusion Zone.
Dogs have lived in the area since the disaster, fed by Chernobyl cleanup workers and tourists. Some 250 strays were living in and around the power plant, among spent fuel-processing facilities and in the shadow of the ruined reactor. Hundreds more roam farther out in the exclusion zone, an area about the size of Yosemite National Park.

During Mousseau’s visits, his team collected blood samples from these dogs for DNA analysis, which let the researchers map out the dogs’ complex family structures. “We know who’s related to who,” says Elaine Ostrander, a geneticist at the National Human Genome Research Institute in Bethesda, Md. “We know their heritage.”
The canine packs are not just a hodgepodge of wild feral dogs, she says. “There are actually families of dogs breeding, living, existing in the power plant,” she says. “Who would have imagined?”
Dogs within the exclusion zone share ancestry with German shepherds and other shepherd breeds, like many other free-breeding dogs from Eastern Europe, the team reports. And though their work revealed that dogs in the power plant area look genetically different from dogs in Chernobyl City, about 15 kilometers away, the team does not know whether radiation caused these differences or not, Ostrander says. The dogs may be genetically distinct simply because they’re living in a relatively isolated area.
The new finding is not so surprising, says Jim Smith, an environmental scientist at the University of Portsmouth in England. He was not part of the new study but has worked in this field for decades. He’s concerned that people might assume “that the radiation has something to do with it,” he says. But “there’s no evidence of that.”
Scientists have spent decades trying to pin down how radiation exposure at Chernobyl has affected wildlife (SN: 5/2/14). “We’ve been looking at the consequences for birds and rodents and bacteria and plants,” Mousseau says. His team has found animals with elevated mutation rates, shortened life spans and early-onset cataracts.
It’s not easy to tease out the effects of low-dose radiation among other factors, Smith says. “[These studies] are so hard … there’s lots of other stuff going on in the natural environment.” What’s more, animals can reap some benefits when humans leave contaminated zones, he says.
How, or if, radiation damage is piling up in dogs’ genomes is something the team is looking into now, Ostrander says. Knowing the dogs’ genetic backgrounds will make it easier to spot any radiation red flags, says Bridgett vonHoldt, an evolutionary geneticist at Princeton University, who was not involved in the work.
“I feel like it’s a cliffhanger,” she says. “I want to know more.”
SpaceX’s rapidly growing fleet of Starlink internet satellites now makes up half of all active satellites in Earth orbit.
On February 27, the aerospace company launched 21 new satellites to join its broadband internet Starlink fleet. That brought the total number of active Starlink satellites to 3,660, or about 50 percent of the nearly 7,300 active satellites in orbit, according to an analysis by astronomer Jonathan McDowell using data from SpaceX and the U.S. Space Force.

“These big low-orbit internet constellations have come from nowhere in 2019, to dominating the space environment in 2023,” says McDowell, of the Harvard-Smithsonian Center for Astrophysics in Cambridge, Mass. “It really is a massive shift and a massive industrialization of low orbit.”
SpaceX has been launching Starlink satellites since 2019 with the goal of bringing broadband internet to remote parts of the globe. And for just as long, astronomers have been warning that the bright satellites could mess up their view of the cosmos by leaving streaks on telescope images as they glide past (SN: 3/12/20).
Even the Hubble Space Telescope, which orbits more than 500 kilometers above Earth’s surface, is vulnerable to these satellite streaks, as well as those from other satellite constellations. From 2002 to 2021, the percentage of Hubble images affected by light from low-orbit satellites increased by about 50 percent, astronomer Sandor Kruk of the Max Planck Institute for Extraterrestrial Physics in Garching, Germany, and colleagues report March 2 in Nature Astronomy.
The fraction of images partially blocked by satellites is still small, the team found, rising from nearly 3 percent of images taken between 2002 and 2005 to just over 4 percent between 2018 and 2021 for one of Hubble’s cameras. But there are already thousands more Starlink satellites now than there were in 2021.
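To see why that fraction is expected to climb steeply, consider a toy probability model: if each satellite independently has a small chance of crossing the telescope’s field of view during an exposure, the chance that at least one does is 1 − (1 − p)^N, which grows rapidly with the number of satellites N. A rough sketch, with the per-satellite chance calibrated to the figures above (the numbers are illustrative round figures, not the paper’s model):

```python
# Toy model: chance that at least one of n independent satellites crosses
# the field of view during an exposure, each with probability p_single.
def p_any_crossing(n_sats: int, p_single: float) -> float:
    return 1 - (1 - p_single) ** n_sats

# Calibrate illustratively: assume ~4% of exposures were crossed when
# roughly 3,000 relevant satellites were in low orbit (made-up round numbers).
p_single = 1 - (1 - 0.04) ** (1 / 3000)

for n in (3_000, 10_000, 30_000):
    print(f"{n:>6} satellites: {p_any_crossing(n, p_single):.0%} of exposures crossed")
# -> 4%, 13%, 34%: constellation growth alone pushes the crossing
#    probability toward the 20-50 percent range predicted for the 2030s.
```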
“The fraction of [Hubble] images crossed by satellites is currently small with a negligible impact on science,” Kruk and colleagues write. “However, the number of satellites and space debris will only increase in the future.” The team predicts that by the 2030s, the probability of a satellite crossing Hubble’s field of view any time it takes an image will be between 20 and 50 percent.

The sudden jump in Starlink satellites also poses a problem for space traffic, says astronomer Samantha Lawler of the University of Regina in Canada. Starlink satellites all orbit at a similar distance from Earth, just above 500 kilometers.
“Starlink is the densest patch of space that has ever existed,” Lawler says. The satellites are constantly navigating out of each other’s way to avoid collisions (SN: 2/12/09). And it’s a popular orbital altitude — Hubble is there, and so is the International Space Station and the Chinese space station. “If there is some kind of collision [between Starlinks], some kind of mishap, it could immediately affect human lives,” Lawler says.
SpaceX launches Starlink satellites roughly once per week — it launched 51 more on March 3. And it’s not the only company launching constellations of internet satellites. By the 2030s, there could be 100,000 satellites crowding low Earth orbit.
So far, there are no international regulations to curb the number of satellites a private company can launch or to limit which orbits they can occupy.
“The speed of commercial development is much faster than the speed of regulation change,” McDowell says. “There needs to be an overhaul of space traffic management and space regulation generally to cope with these massive commercial projects.”
The oldest known fossils of pollen-laden insects are of earwig-like ground-dwellers that lived in what is now Russia about 280 million years ago, researchers report. Their finding pushes back the fossil record of insects transporting pollen from one plant to another, a key aspect of modern-day pollination, by about 120 million years.
The insects — from a pollen-eating genus named Tillyardembia first described in 1937 — were typically about 1.5 centimeters long, says Alexander Khramov, a paleoentomologist at the Borissiak Paleontological Institute in Moscow. Flimsy wings probably kept the creatures mostly on the forest floor, he says, leaving them to climb trees to find and consume their pollen.
Recently, Khramov and his colleagues scrutinized 425 fossils of Tillyardembia in the institute’s collection. Six had clumps of pollen grains trapped on their heads, legs, thoraxes or abdomens, the team reports February 28 in Biology Letters. A proportion that small isn’t surprising, Khramov says, because the fossils were preserved in what started out as fine-grained sediments. The early stages of fossilization in such material would tend to wash away pollen from the insects’ remains.

The pollen-laden insects had only a couple of types of pollen trapped on them, the team found, suggesting that the critters were very selective in the tree species they visited. “That sort of specialization is in line with potential pollinators,” says Michael Engel, a paleoentomologist at the University of Kansas in Lawrence who was not involved in the study. “There’s probably vast amounts of such specialization that occurred even before Tillyardembia, we just don’t have evidence of it yet.”
Further study of these fossils might reveal if Tillyardembia had evolved special pollen-trapping hairs or other such structures on their bodies or heads, says Conrad Labandeira, a paleoecologist at the National Museum of Natural History in Washington, D.C., also not part of the study. It would also be interesting, he says, to see if something about the pollen helped it stick to the insects. If the pollen grains had structures that enabled them to clump more readily, for example, then those same features may have helped them grab Velcro-like onto any hairlike structures on the insects’ bodies.
A form of lightning with a knack for sparking wildfires may surge under climate change.
An analysis of satellite data suggests “hot lightning” — strikes that channel electrical charge for an extended period — may be more likely to set landscapes ablaze than more ephemeral flashes, researchers report February 10 in Nature Communications. Each 1 degree Celsius of warming could spur a 10 percent increase in the most incendiary of these Promethean bolts, boosting their flash rate to about four times per second by 2090 — up from nearly three times per second in 2011. That’s dangerous, warns physicist Francisco Javier Pérez-Invernón of the Institute of Astrophysics of Andalusia in Granada, Spain. “There will be more risk of lightning-ignited wildfires.”
Among all the forces of nature, lightning sets off the most blazes. Flashes that touch down amid minimal or no rainfall — known as dry lightning — are especially effective fire starters. These bolts have initiated some of the most destructive wildfires in recent years, such as the 2020 blazes in California (SN: 12/21/20).
But more than parched circumstances can influence a blast’s ability to spark flames. Field observations and laboratory experiments have suggested the most enduring form of hot lightning — “long continuing current lightning”— may be especially combustible. These strikes channel current for more than 40 milliseconds. Some last longer than one-third of a second — the typical duration of a human eye blink.
“This type of lightning can transport a huge amount of electrical discharge from clouds to the ground or to vegetation,” Pérez-Invernón says. Hot lightning’s flair for fire is analogous to lighting a candle; the more time a wick or vegetation is exposed to incendiary energy, the easier it kindles.
Previous research has proposed lightning may surge under climate change (SN: 11/13/14). But it has remained less clear how hot lightning — and its ability to spark wildfires — might evolve.
Pérez-Invernón and his colleagues examined the relationship between hot lightning and U.S. wildfires, using lightning data collected by a weather satellite and wildfire data from 1992 to 2018.
Long continuing current lightning could have sparked up to 90 percent of the roughly 5,600 blazes encompassed in the analysis, the team found. Since less than 10 percent of all lightning strikes during the summer in the western United States have long continuing current, the relatively high ignition count led the researchers to infer that flashes of hot lightning were more prone to sparking fire than typical bolts.

The researchers also probed the repercussions of climate change. They ran computer simulations of the global activity of lightning from 2009 to 2011 and from 2090 to 2095, under a future scenario in which annual greenhouse gas emissions peak in 2080 and then decline.
The team found that in the later period, climate change may boost updraft within thunderstorms, causing hot lightning flashes to increase in frequency to about 4 strikes per second globally — about a 40 percent increase from 2011. Meanwhile, the rate of all cloud-to-ground strikes might increase to nearly 8 flashes per second, a 28 percent increase.
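Those projections are roughly consistent with the 10-percent-per-degree estimate quoted earlier, if the increase compounds with each degree of warming. A quick check in Python; the implied warming of about 3.5 degrees Celsius is an inference for illustration, not a figure from the study:

```python
# If hot-lightning rates rise ~10% per degree Celsius and compound:
rate_2011 = 2.85      # flashes per second, the "nearly three" baseline (illustrative)
per_degree = 1.10     # +10 percent per degree C of warming

for warming_c in (1.0, 2.0, 3.0, 3.5):
    rate = rate_2011 * per_degree ** warming_c
    print(f"+{warming_c} C: {rate:.2f} flashes/s")
# About 3.5 C of warming yields ~4.0 flashes per second, matching the
# ~40 percent increase over 2011 that the simulations project for 2090.
```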
After accounting for changes in precipitation, humidity and temperature, the researchers predicted wildfire risk will significantly increase in Southeast Asia, South America, Africa and Australia, and risk will go up most dramatically in North America and Europe. However, risk may decrease in many polar regions, where rainfall is projected to increase while hot lightning rates remain constant.
It’s valuable to show that risk may evolve differently in different places, says Earth systems scientist Yang Chen of the University of California, Irvine, who was not involved in the study. But, he notes, the analysis uses sparse data from polar regions, so there is a lot of uncertainty. Harnessing additional data from ground-based lightning detectors and other data sources could help, he says. “That [region is] important, because a lot of carbon can be released from permafrost.”
Pérez-Invernón agrees more data will help improve projections of rates of lightning-induced wildfire, not just in the polar regions, but also in Africa, where blazes are common but fire reports are lacking.
To shrink error rates in quantum computers, sometimes more is better. More qubits, that is.
The quantum bits, or qubits, that make up a quantum computer are prone to mistakes that could render a calculation useless if not corrected. To reduce that error rate, scientists aim to build a computer that can correct its own errors. Such a machine would combine the powers of multiple fallible qubits into one improved qubit, called a “logical qubit,” that can be used to make calculations (SN: 6/22/20).
Scientists now have demonstrated a key milestone in quantum error correction. Scaling up the number of qubits in a logical qubit can make it less error-prone, researchers at Google report February 22 in Nature.

Future quantum computers could solve problems impossible for even the most powerful traditional computers (SN: 6/29/17). To build those mighty quantum machines, researchers agree that they’ll need to use error correction to dramatically shrink error rates. While scientists have previously demonstrated that they can detect and correct simple errors in small-scale quantum computers, error correction is still in its early stages (SN: 10/4/21).
The new advance doesn’t mean researchers are ready to build a fully error-corrected quantum computer, “however, it does demonstrate that it is indeed possible, that error correction fundamentally works,” physicist Julian Kelly of Google Quantum AI said in a news briefing February 21.

Logical qubits store information redundantly in multiple physical qubits. That redundancy allows a quantum computer to check if any mistakes have cropped up and fix them on the fly. Ideally, the larger the logical qubit, the smaller the error rate should be. But if the original qubits are too faulty, adding in more of them will cause more problems than it solves.
Using Google’s Sycamore quantum chip, the researchers studied two different sizes of logical qubits, one consisting of 17 qubits and the other of 49 qubits. After making steady improvements to the performance of the original physical qubits that make up the device, the researchers tallied up the errors that still slipped through. The larger logical qubit had a lower error rate, about 2.9 percent per round of error correction, compared with the smaller logical qubit’s rate of about 3.0 percent, the researchers found.

That small improvement suggests scientists are finally tiptoeing into the regime where error correction can begin to squelch errors by scaling up. “It’s a major goal to achieve,” says physicist Andreas Wallraff of ETH Zurich, who was not involved with the research.
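In the surface-code architecture Google uses, a distance-3 logical qubit takes 17 physical qubits and a distance-5 one takes 49, and the logical error rate per round is expected to fall by a roughly constant factor, often written Λ (lambda), for each two-step increase in code distance. A minimal sketch of that scaling logic using the reported rates; the extrapolation is naive and assumes Λ stays fixed as the code grows:

```python
# Surface-code scaling ansatz: each +2 in code distance divides the
# logical error rate per round by a roughly constant factor, lambda.
p_d3 = 0.030  # ~3.0% per round, 17-qubit (distance-3) logical qubit
p_d5 = 0.029  # ~2.9% per round, 49-qubit (distance-5) logical qubit

lam = p_d3 / p_d5  # suppression factor per distance step; ~1.03 here
print(f"lambda = {lam:.3f}")

# Naive extrapolation to larger distances, assuming lambda stays fixed:
for d in (7, 9, 11):
    p_d = p_d3 / lam ** ((d - 3) / 2)
    print(f"distance {d}: ~{100 * p_d:.2f}% per round")
# With lambda barely above 1, adding qubits buys almost nothing; useful
# error correction needs lambda comfortably above 1, which is why better
# physical qubits are still required.
```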
However, the result is only on the cusp of showing that error correction improves as scientists scale up. A computer simulation of the quantum computer’s performance suggests that, if the logical qubit’s size were increased even more, its error rate would actually get worse. Additional improvement to the original faulty qubits will be needed to enable scientists to really capitalize on the benefits of error correction.
Still, milestones in quantum computation are so difficult to achieve that they’re treated like pole vaulting, Wallraff says. You just aim to barely clear the bar.
Fungi may help some tree-killer beetles turn a tree’s natural defense system against itself.
The Eurasian spruce bark beetle (Ips typographus) has massacred millions of conifers in forests across Europe. Now, research suggests that fungi associated with these bark beetles are key players in the insect’s hostile takeovers. These fungi warp the chemical defenses of host trees to create an aroma that attracts beetles to burrow, researchers report February 21 in PLOS Biology.
This fungi-made perfume might explain why bark beetles tend to swarm the same tree. As climate change makes Europe’s forests more vulnerable to insect invasions, understanding this relationship could help scientists develop new countermeasures to ward off beetle attacks.

Bark beetles are insects found around the world that feed and breed inside trees (SN: 12/17/10). In recent years, several bark beetle species have aggressively attacked forests from North America to Australia, leaving ominous stands of dead trees in their wake.
But trees aren’t defenseless. Conifers — which include pine and fir trees — are veritable chemical weapons factories. The evergreen smell of Christmas trees and alpine forests comes from airborne varieties of these chemicals. But while they may smell delightful, these chemicals’ main purpose is to trap and poison invaders.
Or at least, that’s what they’re meant to do.
“Conifers are full of resin and other stuff that should do horrible things to insects,” says Jonathan Gershenzon, a chemical ecologist at the Max Planck Institute for Chemical Ecology in Jena, Germany. “But bark beetles don’t seem to mind at all.”
This ability of bark beetles to overcome the powerful defense system of conifers has led some scientists to wonder if fungi might be helping. Fungi break down compounds in their environment for food and protection (SN: 11/30/21). And some types of fungi — including some species in the genus Grosmannia — are always found in association with Eurasian spruce bark beetles.

Gershenzon and his colleagues compared the chemicals released by spruce bark infested with Grosmannia and other fungi to the chemical profile of uninfected trees. The presence of the fungi fundamentally changed the chemical profile of spruce trees, the team found. More than half the airborne chemicals — made by fungi breaking down monoterpenes and other chemicals that are likely part of the tree defense system — were unique to infected trees after 12 days.
This is surprising because researchers had previously assumed that invading fungi hardly changed the chemical profile of trees, says Jonathan Cale, a fungal ecologist at the University of Northern British Columbia in Prince George, Canada, who was not involved with the research.

Later experiments revealed that bark beetles can detect many of these fungi-made chemicals. The team tested this by attaching tiny electrodes to bark beetles’ heads and recording electrical activity as the chemicals wafted past their antennae. What’s more, the smell of these chemicals combined with beetle pheromones led the insects to burrow at higher rates than the smell of pheromones alone.
The study suggests that these fungi-made chemicals can help beetles tell where to feed and breed, possibly by advertising that the fungi have taken down some of the tree’s defenses. The attractive nature of the chemicals could also explain the beetles’ swarming behavior, which drives the death of healthy adult trees.
But while the fungi aroma might doom trees, it could also lead to the beetles’ demise. Beetle traps in Europe currently use only beetle pheromones to attract their victims. Combining pheromones with fungi-derived chemicals might be the secret to entice more beetles into traps, making them more effective.
The results present “an exciting direction for developing new tools to manage destructive bark beetle outbreaks” for other beetle species as well, Cale says. In North America, mild winters and drought have put conifer forests at greater risk from mountain pine beetle (Dendroctonus ponderosae) attacks. Finding and using fungi-derived chemicals might be one way to fend off the worst of the bark beetle invasions in years to come.
When New Zealand Prime Minister Jacinda Ardern, who garnered international praise for how she handled the pandemic in her country, recently announced her intention to resign, here’s how she summed up her surprise decision: “I know what the job takes, and I know that I no longer have enough in the tank to do it justice.”
Social scientists and journalists worldwide largely interpreted Ardern’s words in her January 19 speech as a reference to burnout. “She’s talking about an empty tank,” says Christina Maslach, a psychological researcher who has been interviewing and observing workers struggling with workplace-related distress for decades. In almost 50 years of interviews, says Maslach of the University of California, Berkeley, “that phrase [has come] up again and again and again.”
Numerous studies and media reports suggest that burnout, already high before the pandemic, has since skyrocketed worldwide, particularly among workers in certain professions, such as health care, teaching and service. The pandemic makes clear that the jobs needed for a healthy, functioning society are burning people out, Maslach says.
But disagreement over how to define and measure burnout is pervasive, with some researchers even questioning if the syndrome is simply depression by another name. Such controversy has made it difficult to estimate the prevalence of burnout or identify how to best help those who are suffering.
Here are some key questions researchers are asking to get a handle on the problem.
When did today’s understanding of burnout emerge? Some researchers argue that burnout is a strictly modern-day phenomenon, brought on by overwork and hustle culture. But others contend that burnout is merely the latest iteration of a long line of exhaustion disorders, starting with the ancient Greek concept of acedia. This condition, wrote the 5th century monk and theologian John Cassian, is marked by “bodily listlessness and yawning hunger.”
The more contemporary notion of burnout originated in the 1970s. Herbert Freudenberger, the consulting psychologist for volunteers working with drug addicts at St. Mark’s Free Clinic in New York City, used the term to describe the volunteers’ gradual loss of motivation, emotional depletion and reduced commitment to the cause.

Roughly simultaneously, Maslach was interviewing social service workers in California and began observing similar characteristics. That prompted Maslach and her then–graduate student, Susan Jackson, now at Rutgers University in Piscataway, N.J., to develop the first tool to measure burnout, the Maslach Burnout Inventory. The duo defined burnout as comprising three components: exhaustion, cynicism and inefficacy, or persistent feelings of low personal accomplishment.
Respondents rated statements on a scale from 0 (“never”) to 6 (“daily”). Sample statements read: “I feel emotionally drained from my work” for exhaustion; “I doubt the significance of my work” for cynicism; and “I have accomplished many worthwhile things in this job” for inefficacy. High scores on the exhaustion and cynicism statements, together with low scores on the accomplishment statement, indicated that a person was struggling with burnout.
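A minimal sketch of that scoring scheme in Python, using the three sample statements above as one-item subscales. Real inventories use several items per subscale, and, as discussed below, the designers never published official cutoff scores, so the sketch reports subscale scores without declaring anyone burnt out:

```python
# Illustrative Maslach-style scoring: items rated 0 ("never") to 6 ("daily"),
# grouped into three subscales. No cutoffs are applied, because the
# inventory's designers never defined a score at which burnout begins.
from statistics import mean

SUBSCALES = {
    "exhaustion": ["I feel emotionally drained from my work"],
    "cynicism": ["I doubt the significance of my work"],
    "accomplishment": ["I have accomplished many worthwhile things in this job"],
}  # real versions use several items per subscale

responses = {  # one hypothetical respondent
    "I feel emotionally drained from my work": 5,
    "I doubt the significance of my work": 4,
    "I have accomplished many worthwhile things in this job": 2,
}

for name, items in SUBSCALES.items():
    print(f"{name}: {mean(responses[item] for item in items):.1f}")
# High exhaustion and cynicism scores paired with a low accomplishment
# score is the pattern the inventory associates with burnout.
```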
Maslach’s scale turned burnout into a legitimate area of inquiry, says Renzo Bianchi, an occupational health psychologist at the Norwegian University of Science and Technology in Trondheim. “Before [the Maslach Burnout Inventory], burnout was pop psychology.”
What is the best way to define burnout? Maslach’s inventory remains the most widely used tool to study burnout. But many criticize that definition of the syndrome (SN: 10/26/22).
Conceptualizing burnout as a combination of exhaustion, cynicism and inefficacy is “arbitrary,” wrote organizational psychologists Wilmar Schaufeli and Dirk Enzmann in their 1998 book, The Burnout Companion to Study and Practice: A Critical Analysis. “What would have happened if other items had been included? Most likely, other dimensions would have appeared.”
Moreover, those three components and what’s causing them are themselves poorly defined, says work and organizational psychologist Evangelia Demerouti of Eindhoven University of Technology in the Netherlands. For instance, numerous nonwork factors can trigger exhaustion, such as health problems and caregiving responsibilities.
Disagreements over what constitutes burnout, and how to measure the phenomenon, have led to a chaotic body of literature. A key point of contention is how to use Maslach’s inventory. Maslach never designated a cutoff point at which a worker tips from not burnt out to burnt out. Rather, the inventory was designed as a tool to help researchers identify patterns of burnout within a given work environment or profession.
But in practice, Maslach has little control over how researchers use the inventory. A review of 182 studies on physician burnout in 45 countries reported in September 2018 in JAMA is illustrative. Almost 86 percent of studies in that review used a version of the Maslach Burnout Inventory. But roughly a quarter of those studies used unofficial versions of Maslach’s scale, such as halving the number of statements or measuring exhaustion only. Those versions are clinically invalid, Maslach contends.
Moreover, most researchers using the inventory, or a modified version, did designate cutoff scores, though teams’ definitions for high, medium and low burnout showed little agreement. Consequently, estimates for the prevalence of physician burnout varied from 0 to 80.5 percent — figures that are impossible to interpret, the researchers note.
What’s more, across all the studies, the JAMA team identified 142 definitions of burnout. And among the subset of studies not using a version of the inventory, the researchers identified 11 unique methods for measuring burnout.
Those many concerns are prompting some researchers to call for a return to the drawing board on how to define and measure burnout. That process should start with qualitative interviews to see how people struggling at work speak about their own experiences, Demerouti says. “We don’t [have] a good conceptualization and diagnosis of burnout.… We need to start from scratch.”
Do researchers agree on any features of burnout? Surprisingly, yes. Researchers concur that exhaustion is a core feature of the syndrome, wrote Bianchi and his team in March 2021 in Clinical Psychological Science.
Research in the past two decades is also converging on the idea that burnout appears to involve changes to cognition, such as problems with memory and concentration. Those cognitive problems can take the form of people becoming forgetful — missing a recurring meeting or struggling to perform routine tasks, for instance, says Charlie Renaud, an occupational health psychologist at the University of Rennes in France. Such struggles can carry over into people’s personal lives, causing leisure activities, such as reading and watching movies, to become laborious.
As these findings mount, some researchers have begun to incorporate questions on cognitive changes into their burnout scales, Renaud says.

Is burnout a form of depression? At first glance, the two concepts appear contradictory. Depression is typically seen as stemming from within the individual and burnout as stemming from societal forces, chiefly the workplace (SN: 2/12/23). But some researchers have begun to question if burnout exists as a standalone diagnosis. The concepts are not mutually exclusive, research shows. Chronic stress in one’s environment can trigger depression, and certain temperaments can make one more prone to burnout.
For instance, scoring high for the personality trait neuroticism — characterized by irritability and a tendency to worry — better predicted a person’s likelihood of experiencing burnout than certain work-related factors, such as poor supervisor support and lack of rapport with colleagues, Bianchi and his team reported in 2018 in Psychiatry Research.
Moreover, exhaustion occurred together with depression more frequently than with either cynicism or inefficacy, Bianchi and his team reported in the 2021 paper. If burnout is characterized by a suite of symptoms, then exhaustion and depression appear a more promising combination than the Maslach trifecta, the team reported.
“The real problem is that we want to believe that burnout is not a depressive condition, [or] as severe as a depressive condition,” Bianchi says. But that, he adds, simply isn’t true.
Should people be able to get a diagnosis of “burnout”? Not everyone thinks that’s a good idea. “Burnout was never, ever thought of as a clinical diagnosis,” Maslach says.
Bianchi and his team disagree. The researchers have developed their own scale, the Occupational Depression Inventory, which assesses nine core symptoms associated with major depression, including cognitive impairment and suicidal thinking, through the lens of work. For instance, instead of rating a statement like “I feel like a failure,” participants rate the statement, “My experience at work made me feel like a failure.”
If burnout is a form of depression, then it can be treated as such, Bianchi says. And, unlike burnout, treatments for depression, such as therapy and, in severe cases, medication, are already established. “Hopefully the interventions, the treatments, the forms of support that exist for depressed people can then be applied for occupational depression,” he says.
But treating the individual, while often a necessary first step, does nothing to alleviate the work-related stress that triggered the crisis, says occupational health psychologist Kirsi Ahola of the Finnish Institute of Occupational Health in Helsinki. “[Imagine] the person is on sick leave, for example, for a few weeks and recuperates and rests … and he comes back to the exactly same situation where the demands are too high and no support and whatever. Then he or she starts burning out again.” That cycle is difficult to break.
Burnout is not included in the American Psychiatric Association’s current Diagnostic and Statistical Manual. The World Health Organization adopted Maslach’s conceptualization of burnout when it outlined the syndrome in its 2019 International Classification of Diseases. Burnout constitutes “an occupational phenomenon,” not a medical condition, the agency noted.
With the evidence so murky, is there any help for people struggling at work? Most researchers agree that interventions must target work-related distress at all levels, from the individual to the workplace to governing bodies.
Interventions at the individual level include therapy, exercise, developing hobbies outside of work and crafting one’s job to better fit one’s goals (SN: 1/10/23). Additionally, cognitive training programs that help restore memory, attention and other cognitive deficits have shown promise in alleviating the cognitive problems associated with burnout, Renaud and University of Rennes developmental psychologist Agnès Lacroix reported January 2 in the International Journal of Stress Management.
At the workplace level, simple fixes, such as fewer video meetings and reducing distractions during the workday, can alleviate distress (SN: 4/7/21). It’s time to chip away at all the little changes that have increased people’s workload over time, Maslach says. “Everybody adds stuff to people’s work. They never subtract.”
Ultimately, though, it may take systemic changes, such as more stringent labor laws, to combat burnout in countries like the United States, where sick leave is seldom guaranteed and few rules protect employees from overwork and job insecurity.
But even without regulations forcing employers’ hands, governments and companies that prioritize healthy workplaces have a competitive advantage. “When people are feeling well and cope well and have energy, they are also better workers,” Ahola says.