Red flowers have a ‘magic trait’ to attract birds and keep bees away

For flowering plants, reproduction is a question of the birds and the bees. Attracting the right pollinator can be a matter of survival – and new research shows that how flowers do it is more intriguing than anyone realised, and might even involve a little bit of magic.

In our new paper, published in Current Biology, we discuss how a single “magic” trait of some flowering plants simultaneously camouflages them from bees and makes them stand out brightly to birds.

How animals see

We humans typically have three types of light receptors in our eyes, which enable our rich sense of colours.

These are cells sensitive to blue, green or red light. From the input of these cells, the brain generates many more colours, including yellow, via what is called colour opponent processing.

The way colour opponent processing works is that signals from different receptor types are processed by the brain in opposition. For example, a signal is perceived as either red or green, but never both at once – there is no such colour as reddish-green.
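As a toy illustration of that opposition, the red–green and blue–yellow channels can be modelled as simple signed differences of receptor activations. This is a deliberately simplified sketch, not the model used in the research:

```python
# Toy sketch of colour-opponent processing (illustrative only).
# Receptor responses are arbitrary activations between 0 and 1.

def opponent_channels(blue, green, red):
    """Map three receptor activations to two opponent signals.

    A positive red-green value is perceived as reddish, a negative one
    as greenish; the channel can never signal both at once, which is
    why no colour looks 'reddish-green'.
    """
    red_green = red - green
    blue_yellow = blue - (red + green) / 2  # yellow ~ red + green input
    return red_green, blue_yellow

# A ripe-red stimulus: strong red receptor input, little green or blue.
rg, by = opponent_channels(blue=0.1, green=0.2, red=0.9)
print(rg > 0)  # True: the red-green channel signals "reddish"
```

Swapping the green and red inputs flips the sign of the red–green channel, which is the essence of why one signal excludes the other.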

Many other animals also see colour, and show evidence of using opponent processing too.

Bees see their world using cells that sense ultraviolet, blue and green light, while birds have a fourth type sensitive to red light as well.

Our colour perception, illustrated with the spectral bar, differs from that of bees, which are sensitive to UV, blue and green light, and from that of birds, which have four colour photoreceptors including red sensitivity. Adrian Dyer & Klaus Lunau, CC BY

The problem flowering plants face

So what do these differences in colour vision have to do with plants, genetics and magic?

Flowers need to attract pollinators of the right size, so that pollen ends up on the correct part of an animal’s body and is efficiently carried to another flower to enable pollination.

Accordingly, birds tend to visit larger flowers. These flowers in turn need to provide large volumes of nectar for the hungry foragers.

But when large amounts of sweet-tasting nectar are on offer, there’s a risk bees will come along to feast on it – and in the process, collect valuable pollen. And this is a problem because bees are not the right size to efficiently transfer pollen between larger flowers.

Flowers “signal” to pollinators with bright colours and patterns – but these plants need a signal that will attract birds without drawing the attention of bees.

We know bee pollination and flower signalling evolved before bird pollination. So how could plants efficiently make the change to being pollinated by birds, which enables the transfer of pollen over long distances?

Avoiding bees or attracting birds?

A walk through nature lets us see with our own eyes that most red flowers are visited by birds, rather than bees. So bird-pollinated flowers have successfully made the transition. Two different theories have been developed that may explain what we observe.

One theory is the bee avoidance hypothesis: bird-pollinated flowers simply use a colour that is hard for bees to see.

A second theory is that birds might prefer red.

But neither theory seemed complete. Inexperienced birds don’t demonstrate an innate preference for a stronger red hue, which undermines the second idea. And bird-pollinated flowers have a very distinct red hue, which suggests avoiding bees can’t solely explain why consistently salient red flower colours evolved.

Most red flowers are visited by birds, rather than bees. Jim Moore/iNaturalist, CC BY

A magical solution

In evolutionary science, the term magic trait refers to an evolved solution where one genetic modification may yield fitness benefits in multiple ways.

Earlier this month, a team working on how this might apply to flowering plants showed that a gene that modulates UV-absorbing pigments in flower petals can indeed have multiple benefits. This is because of how bees and birds view colour signals differently.

Bee-pollinated flowers come in a diverse range of colours. Bees even pollinate some plants with red flowers. But these flowers tend to also reflect a lot of UV, which helps bees find them.

The magic gene has the effect of reducing the amount of UV light reflected from the petal, making flowers harder for bees to see. But (and this is where the magic comes in) reducing UV reflection from a petal of a red flower simultaneously makes it look redder for animals – such as birds – which are believed to have a colour opponent system.

Red flowers look much the same to humans, but as flowers evolved for bird vision, a genetic change down-regulated UV reflection, making flowers more colourful for birds and less visible to bees. Adrian Dyer & Klaus Lunau, CC BY

Birds that visit these bright red flowers gain rewards – and with experience, they learn to go repeatedly to the red flowers.

One small gene change affecting colour signalling in the UV thus yields multiple beneficial outcomes: it helps flowers avoid bees while displaying an enhanced colour that entices repeat visits from birds.

We humans are fortunate that our red perception also lets us see the result of this clever little trick of nature: beautiful red flower colours. So on your next walk on a nice day, take a minute to view one of nature’s great experiments in finding a clever solution to a complex problem.

Adrian Dyer, Associate Professor, Department of Physiology, Monash University and Klaus Lunau, Professor, Institute of Sensory Ecology, Heinrich Heine Universität Düsseldorf

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Simply Shining Light on Skin Can Replace Finger Pricks for People With Diabetes

Blood-glucose monitor uses light to spare diabetes patients from finger pricks – Credit: Christine Daniloff / MIT

A new method for measuring blood glucose levels, developed at MIT, could save diabetes patients from having to prick their fingers several times a day.

The MIT team used a technique that reveals the chemical composition of tissue by shining near-infrared light on it—and developed a shoebox-sized device that can measure blood glucose levels without any needles.

The researchers found that the measurements from their device were similar to those obtained by commercial continuous glucose monitoring sensors that require a wire to be implanted under the skin. While the device presented in this study is too large to be used as a wearable sensor, the researchers have since developed a wearable version that they are now testing in a small clinical study.


“For a long time, the finger stick has been the standard method for measuring blood sugar, but nobody wants to prick their finger every day, multiple times a day,” says Jeon Woong Kang, an MIT research scientist and the senior author of the study.

“Naturally, many diabetic patients are under-testing their blood glucose levels, which can cause serious complications. If we can make a noninvasive glucose monitor with high accuracy, then almost everyone with diabetes will benefit from this new technology.”

MIT postdoc Arianna Bresci is the lead author of the new study published this month in the journal Analytical Chemistry.

Some patients use wearable monitors, which have a sensor inserted just under the skin to provide glucose measurements from the interstitial fluid—but they can cause skin irritation and they need to be replaced every 10 to 15 days.

The MIT team based their noninvasive sensor on Raman spectroscopy, a technique that reveals the chemical composition of tissue or cells by analyzing how near-infrared light is scattered, or deflected, as it encounters different kinds of molecules.

A recent breakthrough allowed them to directly measure glucose Raman signals from the skin. Normally, this glucose signal is too small to pick out from all of the other signals generated by molecules in tissue. The MIT team found a way to filter out much of the unwanted signal by shining near-infrared light onto the skin at a different angle from the one at which they collected the resulting Raman signal.

Typically, a Raman spectrum may contain about 1,000 bands. However, the MIT team found that they could determine blood glucose levels by measuring just three bands—one from the glucose plus two background measurements. This approach allowed the researchers to reduce the amount and cost of equipment needed, allowing them to perform the measurement with a cost-effective device about the size of a shoebox.
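To give a feel for how a three-band readout might be turned into a glucose number, here is a deliberately simplified sketch. The coefficients and offset below are invented for illustration; the article does not describe the device’s actual calibration:

```python
# Hypothetical three-band Raman readout. The real MIT calibration is
# not published here; all numeric values below are made up.

def glucose_estimate(glucose_band, bg1, bg2,
                     coeffs=(1.8, -0.6, -0.4), offset=20.0):
    """Combine one glucose band and two background bands linearly.

    In practice the coefficients would be fit against reference
    finger-stick or CGM readings during a calibration step.
    """
    a, b, c = coeffs
    return offset + a * glucose_band + b * bg1 + c * bg2

# Example: band intensities (arbitrary units) from one measurement.
reading = glucose_estimate(glucose_band=80.0, bg1=30.0, bg2=25.0)
print(round(reading, 1))
```

The point of the three-band trick is exactly this reduction: instead of processing roughly 1,000 spectral bands, the device only needs optics and electronics for three.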

“With this new approach, we can change the components commonly used in Raman-based devices, and save space, time, and cost,” Bresci told MIT News.

Toward a watch-sized sensor

In a clinical study performed at the MIT Center for Clinical Translation Research (CCTR), the researchers used the new device to take readings from a healthy volunteer over a four-hour period, as the subject rested their arm on top of the device.

Each measurement takes a little more than 30 seconds, and the researchers took a new reading every five minutes.

During the study, the subject consumed two 75-gram glucose drinks, allowing the researchers to monitor significant changes in blood glucose concentration. They found that the Raman-based device showed accuracy levels similar to those of two commercially available, invasive glucose monitors worn by the subject.

Since finishing that study, the researchers have developed a smaller prototype, about the size of a cellphone, that they’re currently testing at the MIT CCTR as a wearable monitor in healthy and pre-diabetic volunteers.

The researchers are also working on making the device even smaller, about the size of a watch, and next year they plan to run a larger study with a local hospital, which will include people with diabetes. Edited from an article by Anne Trafton | MIT News

Polar bears are adapting to climate change at a genetic level – and it could help them avoid extinction

Alice Godden, University of East Anglia: The Arctic Ocean is at its warmest in the last 125,000 years, and temperatures continue to rise. Due to these warming temperatures, more than two-thirds of polar bears are expected to be extinct by 2050, with total extinction predicted by the end of this century.

But in our new study, my colleagues and I found that the changing climate is driving changes in the polar bear genome, potentially allowing the bears to more readily adapt to warmer habitats. Provided they can find enough food and breeding partners, this suggests they may survive these new, challenging climates.

We discovered a strong link between rising temperatures in south-east Greenland and changes in polar bear DNA. DNA is the instruction book inside every cell, guiding how an organism grows and develops. In processes called transcription and translation, DNA is copied to generate RNA (molecules that reflect gene activity), which can lead to the production of proteins. The genome also contains transposable elements (TEs), also known as “jumping genes” – mobile pieces of the genome that can move around and influence how other genes work.

In carrying out our recent research, we found big differences between the temperatures observed in the north-east and south-east regions of Greenland. Our team used publicly available polar bear genetic data from a research group at the University of Washington, US, to support our study. This dataset was generated from blood samples collected from polar bears in both northern and south-eastern Greenland.

Our work built on the University of Washington study, which discovered that this south-eastern population of Greenland polar bears is genetically distinct from the north-eastern population. It found that south-east bears had migrated from the north and became isolated approximately 200 years ago.

Researchers from the University of Washington had extracted RNA from polar bear blood samples and sequenced it. We used this RNA sequencing to look at RNA expression – the messenger molecules that show which genes are active – in relation to the climate. This gave us a detailed picture of gene activity, including the behaviour of TEs. Temperatures in Greenland have been closely monitored and recorded by the Danish Meteorological Institute, so we linked this climate data with the RNA data to explore how environmental changes may be influencing polar bear biology.

Does temperature change anything?

From our analysis we found that temperatures in the north-east of Greenland were colder and less variable, while south-east temperatures fluctuated and were significantly warmer. The figure below shows our data as well as how temperature varies across Greenland, with warmer and more volatile conditions in the south-east. This creates many challenges and changes to the habitats for the polar bears living in these regions.

In the south-east of Greenland, the margin of the ice sheet (which covers 80% of Greenland) is rapidly receding, causing vast ice and habitat loss.

The loss of ice is a substantial problem for the polar bears, as this reduces the availability of hunting platforms to catch seals, leading to isolation and food scarcity. The north-east of Greenland is a vast, flat Arctic tundra, while south-east Greenland is covered by forest tundra (the transitional zone between coniferous forest and Arctic tundra). The south-east climate has high levels of rain, wind, and steep coastal mountains.

Temperature across Greenland and bear locations

Author data visualisation using temperature data from the Danish Meteorological Institute. Locations of bears in south-east (red icons) and north-east (blue icons). CC BY-NC-ND

How climate is changing polar bear DNA

Over time the DNA sequence can slowly change and evolve, but environmental stress, such as warmer climate, can accelerate this process.

TEs are like puzzle pieces that can rearrange themselves, sometimes helping animals adapt to new environments. Approximately 38.1% of the polar bear genome is made up of TEs. TEs come in many different families with slightly different behaviours, but in essence they are all mobile fragments that can reinsert randomly anywhere in the genome.

In the human genome, TEs make up 45%, and in plants the figure can be over 70%. Small protective molecules called piwi-interacting RNAs (piRNAs) can silence the activity of TEs.

Despite this, when an environmental stress is too strong, these protective piRNAs cannot keep up with the invasive activity of TEs. In our work we found that the warmer south-east climate led to a mass mobilisation of these TEs across the polar bear genome, changing its sequence. We also found that these TE sequences appeared younger and more abundant in the south-east bears, with over 1,500 of them “upregulated”, which suggests recent genetic changes that may help bears adapt to rising temperatures.

Some of these elements overlap with genes linked to stress responses and metabolism, hinting at a possible role in coping with climate change. By studying these jumping genes, we uncovered how the polar bear genome adapts and responds, in the shorter term, to environmental stress and warmer climates.

Our research found that some genes linked to heat stress, ageing and metabolism are behaving differently in the south-east population of polar bears. This suggests they might be adjusting to their warmer conditions. Additionally, we found active jumping genes in parts of the genome tied to fat processing – important when food is scarce. This could mean that polar bears in the south-east are slowly adapting to the rougher plant-based diets found in warmer regions. Northern populations of bears eat mainly fatty seals.

Overall, climate change is reshaping polar bear habitats, leading to genetic changes, with south-eastern bears evolving to survive these new terrains and diets. Future research could include other polar bear populations living in challenging climates. Understanding these genetic changes helps researchers see how polar bears might survive in a warming world – and which populations are most at risk.


Alice Godden, Senior Research Associate, School of Biological Sciences, University of East Anglia

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Cement Supercapacitors Could Turn the Concrete Around Us into Massive Energy Storage Systems

credit – MIT Sustainable Concrete Lab

Scientists from MIT have created a conductive “nanonetwork” inside a unique concrete mixture that could enable everyday structures like walls, sidewalks, and bridges to store and release electrical energy.

It’s perhaps the most ubiquitous man-made material on Earth by weight, but every square foot of it could, with the addition of some extra materials, power the world that it has grown to cover.

Known as ec³ (“e-c-cubed”), the electron-conductive carbon concrete is made by adding an ultra-fine paracrystalline form of carbon known as carbon black, along with electrolytes, forming a conductive carbon network at the nanoscale.

The technology is not new: MIT reported in 2023 that 45 cubic meters of ec3, roughly the amount of concrete used in a typical basement, could power a whole home. But advancements in materials science and manufacturing processes have since improved the efficiency roughly ninefold.

Now, just 5 cubic meters can do the job thanks to an improved electrolyte.

“A key to the sustainability of concrete is the development of ‘multifunctional concrete,’ which integrates functionalities like this energy storage, self-healing, and carbon sequestration,” said Admir Masic, lead author of the new study and associate professor of civil and environmental engineering at MIT.

“Concrete is already the world’s most-used construction material, so why not take advantage of that scale to create other benefits?”

The improved energy density was made possible by a deeper understanding of how the nanocarbon black network inside ec3 functions and interacts with electrolytes. The team across the EC³ Hub and MIT Concrete Sustainability Hub used focused ion beams to sequentially remove thin layers of the ec3 material, then imaged each slice at high resolution with a scanning electron microscope.

This allowed the team to reconstruct the conductive nanonetwork at the highest resolution yet, and to discover that the network is essentially a fractal-like “web” surrounding the ec3 pores – which is what allows the electrolyte to infiltrate and current to flow through the system.

“Understanding how these materials ‘assemble’ themselves at the nanoscale is key to achieving these new functionalities,” adds Masic.

Equipped with their new understanding of the nanonetwork, the team experimented with different electrolytes and their concentrations to see how they impacted energy storage density. As Damian Stefaniuk, first author and EC³ Hub research scientist, highlights, “we found that there is a wide range of electrolytes that could be viable candidates for ec3. This even includes seawater, which could make this a good material for use in coastal and marine applications, perhaps as support structures for offshore wind farms.”

At the same time, the team streamlined the way they added electrolytes to the mix. Rather than curing ec3 electrodes and then soaking them in electrolyte, they added the electrolyte directly into the mixing water. Since electrolyte penetration was no longer a limitation, the team could cast thicker electrodes that stored more energy.

The team achieved the greatest performance when they switched to organic electrolytes, especially those that combined quaternary ammonium salts — found in everyday products like disinfectants — with acetonitrile, a clear, conductive liquid often used in industry. A cubic meter of this version of ec3—about the size of a refrigerator—can store over 2 kilowatt-hours of energy. That’s about enough to power an actual refrigerator for a day.
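The quoted figures are easy to sanity-check. The 2 kWh per cubic metre density comes from the article; the refrigerator and household consumption numbers below are rough assumptions for illustration:

```python
# Back-of-the-envelope check of the ec3 storage figures.
ENERGY_DENSITY_KWH_PER_M3 = 2.0   # quoted for the organic-electrolyte ec3
FRIDGE_KWH_PER_DAY = 2.0          # assumed typical refrigerator draw
HOME_KWH_PER_DAY = 10.0           # assumed modest daily household use

# One cubic metre ("about the size of a refrigerator") vs one fridge-day:
print(ENERGY_DENSITY_KWH_PER_M3 >= FRIDGE_KWH_PER_DAY)  # True

# Volume needed to buffer one day of household use:
volume_m3 = HOME_KWH_PER_DAY / ENERGY_DENSITY_KWH_PER_M3
print(volume_m3)  # 5.0
```

Under these assumed consumption figures, the arithmetic lands on the same 5 cubic metres quoted earlier for powering a home.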

While batteries maintain a higher energy density, ec3 can in principle be incorporated directly into a wide range of architectural elements—from slabs and walls to domes and vaults—and last as long as the structure itself.

“The Ancient Romans made great advances in concrete construction. Massive structures like the Pantheon stand to this day without reinforcement. If we keep up their spirit of combining material science with architectural vision, we could be at the brink of a new architectural revolution with multifunctional concretes like ec3,” proposes Masic.

Taking inspiration from Roman architecture, the team built a miniature ec3 arch to show how structural form and energy storage can work together. Operating at 9 volts, the arch supported its own weight and additional load while powering an LED light.

The latest developments in ec³ technology bring it a step closer to real-world scalability. It’s already been used to heat sidewalk slabs in Sapporo, Japan, due to its thermally conductive properties, representing a potential alternative to salting.

“What excites us most is that we’ve taken a material as ancient as concrete and shown that it can do something entirely new,” says James Weaver, a co-author on the paper who is an associate professor of design technology and materials science and engineering at Cornell University, as well as a former EC³ Hub researcher. “By combining modern nanoscience with an ancient building block of civilization, we’re opening a door to infrastructure that doesn’t just support our lives, it powers them.”

Indian researchers develop smart portable device to detect toxic pesticides in water, food

IANS Photo

New Delhi, (IANS): A team of researchers from the Indian Institute of Technology (IIT) Madras and Panjab University has developed a portable, automated optical device capable of detecting extremely low concentrations of pesticide residues in water, food, and the environment that can pose serious risks to human and environmental health.

Conventional laboratory methods for detecting such residues, particularly the commonly used organophosphate Malathion, are expensive, time-consuming, and require skilled personnel.

The new research, supported by the Department of Science and Technology under its ‘Technology Development and Transfer’ Programme, addressed the challenge by designing a field-deployable, user-friendly device that offers real-time, ultra-sensitive pesticide detection.

The new ‘Smart MDD (Malathion Detection Device)’ is a colourimetric detection system that employs gold nanoparticles (AuNPs) paired with an aptamer molecule engineered to recognise Malathion specifically.

The interaction causes a visible colour shift -- from red to blue -- indicating the presence of the pesticide, a change that the device’s built-in optical system precisely measures. This automated process eliminates manual handling and enables quick, reliable results, said the team. The findings were published in the peer-reviewed journal Review of Scientific Instruments.

“This technology can have a significant real-world impact. It can help farmers, food safety agencies, and environmental regulators rapidly monitor pesticide contamination on-site -- whether in irrigation water, produce, or soil -- thereby ensuring compliance with safety standards and reducing public health risks," Prof. Sujatha Narayanan Unni, Department of Applied Mechanics and Biomedical Engineering, IIT Madras, told IANS.

"It can also aid in tracking pesticide runoff in water bodies, a major environmental concern,” Unni added.

The team demonstrated a detection limit of about 250 picomolar, along with strong correlation with lab spectrophotometer results -- metrics rarely seen in portable devices.
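To put 250 picomolar into more familiar mass-concentration units, we can convert using Malathion’s standard molar mass (about 330.36 g/mol); the arithmetic below is ours, not from the paper:

```python
# Converting the reported 250 picomolar detection limit into a mass
# concentration. Malathion's molar mass (~330.36 g/mol) is a standard
# chemical datum; the rest is unit arithmetic.

MALATHION_G_PER_MOL = 330.36
limit_mol_per_l = 250e-12            # 250 pM expressed in mol/L

limit_g_per_l = limit_mol_per_l * MALATHION_G_PER_MOL
limit_ng_per_l = limit_g_per_l * 1e9  # grams -> nanograms

print(round(limit_ng_per_l, 1))       # ~82.6 ng/L, i.e. roughly 0.08 ug/L
```

In other words, the device can flag Malathion at well under a tenth of a microgram per litre of water.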

Currently tested under laboratory conditions, the device will next undergo validation with real-world samples such as fruits, vegetables, and field water sources. "We plan to extend the platform to detect a broader range of pesticides, strengthening its role in sustainable agricultural management and environmental monitoring,” Dr. Rohit Kumar Sharma, Department of Chemistry and Centre for Advanced Studies in Chemistry, Panjab University, told IANS.

New Airship-style Wind Turbine Can Find Gusts at Higher Altitudes for Constant, Cheaper Power

The S1500 from Sawes – credit, handout

A new form of wind energy is under development that promises more consistent power and lower deployment costs by adapting the design of a dirigible, or zeppelin.

Suspended 1,000 feet up where the wind is always blowing, it presents as an ideal energy source for rural communities, disaster areas, or places where wind turbines aren’t feasible to build.

The design has grown through the innovation of dozens of engineers and scientists, but an MIT startup called Altaeros and the Beijing-based startup Sawes Energy Technology have taken it to market. Both have already produced prototypes that boast some serious performance.


In 2014, Altaeros’ Buoyant Air Turbine (or BAT) was ready for commercial deployment in rural Alaska, where diesel generators are still heavily relied on for power. Its 35-foot-long inflatable shell, made of the same materials as modern blimps, provided 30 kilowatts of wind energy.

As a power provider, though, Altaeros could never get off the ground, and it has since adapted much of its technology to providing wireless telecommunication services for civil and commercial contracting.

Heir to Altaeros’ throne, Sawes has managed to greatly exceed the former’s power generation, and now hopes to achieve nothing less than contributing a Chinese solution to the world’s energy transition.

Altaeros’ BAT – credit, Altaeros, via MIT

During a mid-September test, Sawes’ airship-like S1500, as long and wide as a basketball court and as tall as a 13-storey building, generated 1 megawatt of power which it delivered through its tether cable down to a generator below.

Conducted in the windy western desert region of Xinjiang, the test saw the S1500 deliver 10 times the output of its predecessor turbine, which achieved 100 kilowatts in October of last year.

Dun Tianrui, the company’s CEO and chief designer, called the megawatt-mark “a critical step towards putting the product into real-world use” which would happen next year when the company expects to begin mass production.

At the same time, the Sawes R&D team is looking into advances in materials science and manufacturing optimization to ensure that the cost of supplying that megawatt to rural grids will be around $0.01 per kilowatt-hour – literally 100 times cheaper than the cost theorized for Altaeros’ model 10 years ago.

One of the major positives of these buoyant turbines is that by floating 1,000 to 2,000 feet above the ground, they render irrelevant the main gripe about wind energy – that some days the wind doesn’t blow. A conventional turbine reaches only 100 to 300 feet up, putting birds at risk while missing much of the wind blowing over the landscape.

Sawes’ unit is about 40% cheaper to build and deploy than a normal turbine, presenting the opportunity for a roughly 30% lower cost for the wind energy it sells. According to a piece in the Beijing Daily, reported on by the South China Morning Post, challenges remain before commercial deployment can begin, including what to do during storms and whether the system can compete in communities with an existing coal-power supply.

Biodegradable Plastic Made from Bamboo Is Stronger and Easy to Recycle

Bamboo forest – credit Bady Abbas, via Unsplash

GNN has reported previously on how versatile bamboo is for construction and craft, so it maybe shouldn’t be a surprise that researchers in China have found a way to turn this miracle plant into plastic.

While many biodegradable materials have already been developed for replacing lighter, flexible plastic, durable or rigid plastic replacements are few. The kinds of plastic used for tools, car interiors, and appliance exteriors have few if any biodegradable replacements.

Enter Dawei Zhao at Shenyang University of Chemical Technology in China’s far northeast, who has developed a method for turning cellulose from bamboo into a rigid yet biodegradable plastic that outperforms not only alternative biodegradable options but conventional plastic itself in mechanical strength and thermo-mechanical properties.

“Bamboo’s rapid growth makes it a highly renewable resource, providing a sustainable alternative to traditional timber sources, but its current applications are still largely limited to more traditional woven products,” Zhao told New Scientist.

His method takes cellulose from bamboo and subjects it to zinc chloride and a simple acid to break up the complex polysaccharide bonds that hold the plant fiber together. Ethanol is then added to the soup of smaller molecules, from which a plastic is derived for use in injection molding and machining.

One major drawback is the bamboo plastic’s inflexibility, which limits its use across the full gamut of products that petroleum-based plastics serve. On the other hand, rigid plastics are often the ones that remain in the ecosystem longest and are the hardest to recycle, so replacing them still represents a valuable contribution to reducing the overall plastic burden in the environment and waste streams.

Zhao and his team published a paper on the process and properties of the bamboo plastic in Nature, including a cost analysis which finds that the bioplastic’s recyclability makes it cost-competitive with conventional plastic.

The Subtle Power of Unhearable Sound: Mood and Cognition-Altering Agents

For representational purpose (Image by Gerd Altmann from Pixabay)

Shreyas Kannan, Plaksha University: The human ear has a maximum hearing range of 20 Hz to 20,000 Hz. However, in all reality the range at which we are most sensitive is from 1000 Hz to 4000 Hz at which most natural speech occurs. As frequency decreases, the sound energy or decibels needed to hear sounds increases, which makes the sound effectively “too soft” unless played at a high enough volume. What this means is that the lower and higher frequencies are both difficult to perceive normally, and frequencies outside of this range entirely, Infrasound, which vibrates below 20 Hz, and ultrasound, which is above 20,000 Hz, are simply imperceptible.

These imperceptible sounds, however, have a very perceptible effect. Vic Tandy, a British engineer, believed his laboratory was haunted—until he discovered that a silent 19 Hz sound wave, produced by a fan, was resonating with his eyeballs and triggering shadowy hallucinations. Even though the sound was below the threshold of human hearing, it could still alter mood, physiology, and cognition.

Infrasound and ultrasound can also have indirect subliminal effects. They can very subtly and over long durations of time have a negative or positive effect on the psyche of the listener. Infrasound, although inaudible, can cause a range of adverse effects, including fatigue, sleep disturbance, and cognitive dysfunction.

How does this work, especially for sounds we cannot even hear? Sounds in the ultrasonic range tend to stimulate the emotional centers of the brain, notably the amygdala and hippocampus. One study that tracked this found that sounds containing inaudible high-frequency components induced activation in deep brain structures associated with emotion and reward, demonstrating a reflexive, unconscious emotional response, positive or negative, to a specific band of sound frequencies.

The issues do not end there. There is also a persistent worry that chronic exposure to ordinary audible sound, not just ultrasound or infrasound, has long-term effects on the brain. One occupational study observed that physical symptoms such as chronic fatigue, repeated headache, and backache were highly associated with low- and mid-octave-band noise exposure among the sampled workers, and that among psychological symptoms, irritability showed the same association. In short, even when noise is not painfully loud, its frequency content can still degrade physical and mental health over time, which should raise ethical and public health concerns.

These effects, as can be surmised, are highly weaponizable. As one study of acoustic attacks warns: “smart consumer devices produce possibly imperceptible sound at both high (17–21kHz) and low (60–100Hz) frequencies, at the maximum available volume setting, potentially turning them into acoustic cyber-weapons.”

Consider the physical and systemic effects that long exposure could cause, given that such sound can technically come from our own devices, and given what the infrasonic and ultrasonic bands can potentially do. The same study continues: “Overall, we find that many of the devices tested are capable of reproducing frequencies within both high and low ranges, at levels exceeding those recommended in published guidelines. Such attacks are often trivial to develop and, in many cases, could be added to existing malware payloads, as they may be attractive to adversaries with specific motivations or targets.”

One particular patent actually claims that a frequency of around 0.5 Hz affects the autonomic nervous system and can produce a variety of effects, including eyelid drooping (ptosis), relaxation and drowsiness, a feeling of pressure on the forehead, visual effects with the eyes closed, stomach sensations, and tenseness (at certain frequencies). It goes on to propose law-enforcement applications: non-lethal crowd control, creating disorientation in standoff situations, and remote manipulation from a distance. It also lists a different set of effects for the 2.5 Hz range.

However, not all sound effects are harmful. Applied in the right way, sound can actually help treat mental health issues. One example is binaural beats, a form of imperceptible or subtle auditory stimulation being studied for its effects on mood regulation, anxiety, and depression. Binaural beats are created by playing a slightly different frequency into each ear; the brain perceives a third “beat” at the difference between the two, making this a non-invasive, sound-based intervention that can influence brainwave activity. A systematic review conducted to this end found positive effects in the short term, while stressing that further research is needed to determine the full scope of long-term benefits.
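The mechanism can be sketched in a few lines of NumPy (an illustrative toy, not a clinical tool; the 200 Hz and 210 Hz carrier frequencies are arbitrary example values):

```python
import numpy as np

RATE = 44_100  # samples per second (CD-quality audio)

def binaural_beat(f_left: float, f_right: float, seconds: float = 1.0):
    """Return stereo samples, one pure tone per ear. The brain perceives
    a 'beat' at the difference between the two frequencies."""
    t = np.linspace(0, seconds, int(RATE * seconds), endpoint=False)
    left = np.sin(2 * np.pi * f_left * t)
    right = np.sin(2 * np.pi * f_right * t)
    return np.column_stack([left, right]), abs(f_right - f_left)

stereo, beat_hz = binaural_beat(200.0, 210.0)
print(beat_hz)  # 10.0 -> a perceived 10 Hz beat
```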

It should also be noted that the same frequencies that can be used negatively can equally be used positively. Playing the right type of sound, be it music or a particular frequency, at a volume too low to be consciously heard tended to elicit a positive effect on mood and well-being.

From the literature and patent claims, it can be surmised that with the exact know-how, a mapping of which frequency affects a person in which way, someone could be manipulated into feeling a certain way about a topic they might actually dislike. Any emotion could be aroused as needed. Furthermore, it could be done through the speakers in everyday devices! An advertisement could play the right sounds to make you view a product more favourably; documentaries could use this to make you feel worse about a topic and so increase their impact; electoral candidates could subtly improve their image by playing the right sounds at the right time; interviewees could be made to feel uneasy for no ‘explainable’ reason as a form of sabotage. The potential for abuse of the sounds we cannot even hear is extreme.

How can we protect ourselves from these phenomena? There is no easy answer, especially in an age when sound comes from everywhere around us. The best response is to call for scientific transparency, proper protocols to monitor what sounds are actually being played, and strict awareness of one’s surroundings. In this day and age, we must learn to listen to the sounds we cannot hear.

Shreyas Kannan is a B.Tech student in Robotics and Cyber-Physical Systems (RCPS) at Plaksha University, and part of its inaugural graduating batch. He has an ardent passion for all things related to movement and propulsion in vehicles, and brings boundless curiosity and energy to projects that make objects move, whether on land, underwater, or in space. From autonomous underwater navigation to aerospace systems, Shreyas is eager to explore and contribute to the frontier of motion-driven technologies.

The Subtle Power of Unhearable Sound: Mood and Cognition-Altering Agents | MorungExpress | morungexpress.com
Read More........

Blue, green, brown, or something in between – the science of eye colour explained

You’re introduced to someone and your attention catches on their eyes. They might be a rich, earthy brown, a pale blue, or the rare green that shifts with every flicker of light. Eyes have a way of holding us, of sparking recognition or curiosity before a single word is spoken. They are often the first thing we notice about someone, and sometimes the feature we remember most.

Across the world, human eyes span a wide palette. Brown is by far the most common shade, especially in Africa and Asia, while blue is most often seen in northern and eastern Europe. Green is the rarest of all, found in only about 2% of the global population. Hazel eyes add even more diversity, often appearing to shift between green and brown depending on the light.

So, what lies behind these differences?

It’s all in the melanin

The answer rests in the iris, the coloured ring of tissue that surrounds the pupil. Here, a pigment called melanin does most of the work.

Brown eyes contain a high concentration of melanin, which absorbs light and creates their darker appearance. Blue eyes contain very little melanin. Their colour doesn’t come from pigment at all but from the scattering of light within the iris, a physical effect known as the Tyndall effect, a bit like the effect that makes the sky look blue.

In blue eyes, the shorter wavelengths of light (such as blue) are scattered more effectively than longer wavelengths like red or yellow. Due to the low concentration of melanin, less light is absorbed, allowing the scattered blue light to dominate what we perceive. This blue hue results not from pigment but from the way light interacts with the eye’s structure.
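The size of this effect can be estimated with the approximate Rayleigh 1/wavelength⁴ scattering law (an idealisation here, since scattering in the iris involves larger structures than air molecules, so treat the numbers as indicative; the 450 nm and 650 nm values are round figures for blue and red light):

```python
def scatter_ratio(short_nm: float, long_nm: float) -> float:
    """How much more strongly the shorter wavelength is scattered,
    under the approximate Rayleigh 1/wavelength**4 law."""
    return (long_nm / short_nm) ** 4

# Blue light (~450 nm) versus red light (~650 nm):
print(round(scatter_ratio(450, 650), 1))  # ~4.4x more scattering for blue
```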

Green eyes result from a balance, a moderate amount of melanin layered with light scattering. Hazel eyes are more complex still. Uneven melanin distribution in the iris creates a mosaic of colour that can shift depending on the surrounding ambient light.

What have genes got to do with it?

The genetics of eye colour is just as fascinating.

For a long time, scientists believed a simple “brown beats blue” model, controlled by a single gene. Research now shows the reality is much more complex. Many genes contribute to determining eye colour. This explains why children in the same family can have dramatically different eye colours, and why two blue-eyed parents can sometimes have a child with green or even light brown eyes.

Eye colour also changes over time. Many babies of European ancestry are born with blue or grey eyes because their melanin levels are still low. As pigment gradually builds up over the first few years of life, those blue eyes may shift to green or brown.

In adulthood, eye colour tends to be more stable, though small changes in appearance are common depending on lighting, clothing, or pupil size. For example, blue-grey eyes can appear very blue, very grey or even a little green depending on ambient light. More permanent shifts are rarer but can occur as people age, or in response to certain medical conditions that affect melanin in the iris.

The real curiosities

Then there are the real curiosities.

Heterochromia, where one eye is a different colour from the other, or one iris contains two distinct colours, is rare but striking. It can be genetic, the result of injury, or linked to specific health conditions. Celebrities such as Kate Bosworth and Mila Kunis are well-known examples. Musician David Bowie’s eyes appeared as different colours because of a permanently dilated pupil after an accident, giving the illusion of heterochromia.

In the end, eye colour is more than just a quirk of genetics and physics. It’s a reminder of how biology and beauty intertwine. Each iris is like a tiny universe, rings of pigment, flecks of gold, or pools of deep brown that catch the light differently every time you look.

Eyes don’t just let us see the world, they also connect us to one another. Whether blue, green, brown, or something in-between, every pair tells a story that’s utterly unique, one of heritage, individuality, and the quiet wonder of being human.

Davinia Beaver, Postdoctoral research fellow, Clem Jones Centre for Regenerative Medicine, Bond University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More........

The science behind a freediver’s 29-minute breath hold world record

Croatian freediver Vitomir Maričić. Facebook.com @molchanovs, Instagram.com @maverick2go, Facebook.com @Vitomir Maričić, CC BY 

Most of us can hold our breath for between 30 and 90 seconds.

A few minutes without oxygen can be fatal, so we have an involuntary reflex to breathe.

But freediver Vitomir Maričić recently held his breath for a new world record of 29 minutes and three seconds, lying on the bottom of a 3-metre-deep pool in Croatia.

Vitomir Maričić set a new Guinness World Record for “the longest breath held voluntarily under water using oxygen”.

This is about five minutes longer than the previous world record set in 2021 by another Croatian freediver, Budimir Šobat.

Interestingly, all world records for breath holds are by freedivers, who are essentially professional breath-holders.
They do extensive physical and mental training to hold their breath under water for long periods of time.

So how do freedivers delay a basic human survival response and how was Maričić able to hold his breath about 60 times longer than most people?

Increased lung volumes and oxygen storage

Freedivers do cardiovascular training – physical activity that increases your heart rate, breathing and overall blood flow for a sustained period – and breathwork to increase how much air (and therefore oxygen) they can store in their lungs.

This includes exercise such as swimming, jogging or cycling, and training their diaphragm, the main muscle of breathing.

Diaphragmatic breathing and cardiovascular exercise train the lungs to expand to a larger volume and hold more air.

This means the lungs can store more oxygen and sustain a longer breath hold.

Freedivers can also control their diaphragm and throat muscles to move the stored oxygen from their lungs to their airways. This maximises oxygen uptake into the blood to travel to other parts of the body.

To increase the oxygen in his lungs even more before his world record breath-hold, Maričić inhaled pure (100%) oxygen for ten minutes.

This gave Maričić a larger store of oxygen than if he breathed normal air, which is only about 21% oxygen.
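As a back-of-envelope calculation (this ignores oxygen stored in the blood and tissues, and the dynamics of nitrogen washout, so it only bounds the gain in the lungs themselves):

```python
AIR_O2_FRACTION = 0.21   # oxygen fraction of normal air
PURE_O2_FRACTION = 1.00  # breathing 100% oxygen

# For the same lung volume, pre-breathing pure oxygen multiplies
# the oxygen stored in the lungs by roughly:
lung_o2_gain = PURE_O2_FRACTION / AIR_O2_FRACTION
print(round(lung_o2_gain, 1))  # ~4.8x the oxygen of normal air
```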

This is classified as an oxygen-assisted breath-hold in the Guinness Book of World Records.

Even without extra pure oxygen, Maričić can hold his breath for 10 minutes and 8 seconds.

Resisting the reflex to take another breath

Oxygen is essential for all our cells to function and survive. But it is high carbon dioxide, not low oxygen, that causes the involuntary reflex to breathe.

When cells use oxygen, they produce carbon dioxide, a damaging waste product.

Carbon dioxide can only be removed from our body by breathing it out.

When we hold our breath, the brain senses the build-up in carbon dioxide and triggers us to breathe again.

Freedivers practice holding their breath to desensitise their brains to high carbon dioxide and eventually low oxygen. This delays the involuntary reflex to breathe again.

When someone holds their breath beyond this, they reach a “physiological break-point”. This is when their diaphragm involuntarily contracts to force a breath.

This is physically challenging and only elite freedivers who have learnt to control their diaphragm can continue to hold their breath past this point.

Indeed, Maričić said that holding his breath longer:

got worse and worse physically, especially for my diaphragm, because of the contractions. But mentally I knew I wasn’t going to give up.

Mental focus and control is essential

Those who freedive believe it is not only physical but also a mental discipline.

Freedivers train to manage fear and anxiety and maintain a calm mental state. They practice relaxation techniques such as meditation, breath awareness and mindfulness.

Interestingly, Maričić said:

after the 20-minute mark, everything became easier, at least mentally.

Reduced mental and physical activity, reflected in a very low heart rate, reduces how much oxygen is needed. This makes the stored oxygen last longer.

That is why Maričić achieved this record lying still on the bottom of a pool.

Don’t try this at home

Beyond competitive breath-hold sports, many other people train to hold their breath for recreational hunting and gathering.

For example, ama divers who collect pearls in Japan, and Haenyeo divers from South Korea who harvest seafood.

But there are risks of breath holding.

Maričić described his world record as:

a very advanced stunt done after years of professional training and should not be attempted without proper guidance and safety.

Indeed, both high carbon dioxide and a lack of oxygen can quickly lead to loss of consciousness.

Breathing in pure oxygen can cause acute oxygen toxicity due to free radicals, which are highly reactive chemicals that can damage cells.

Unless you’re trained in breath holding, it’s best to leave this to the professionals.

Theresa Larkin, Associate Professor of Medical Sciences, University of Wollongong and Gregory Peoples, Senior Lecturer - Physiology, University of Wollongong

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More........

Retooled Cancer Drugs Eliminated Aggressive Tumors in ‘Remarkable’ New Trial


Cancer drugs that have been used for two decades were retooled until they were able to eliminate aggressive tumors in a “remarkable” clinical trial.

Two of the patients—one with the deadliest form of skin cancer called melanoma and another with breast cancer—were told their tumors disappeared completely.

Scientists at Rockefeller University in New York engineered an upgrade to an antibody that improved a class of drugs—called CD40 agonist antibodies—which have struggled to make good on their early promise, but showed great potential.

While effectively activating the immune system to kill cancer cells in animal models, the drugs had only “limited” impact on humans, while also triggering dangerous adverse reactions.

So, five years ago, the team at the New York university engineered an enhanced CD40 agonist antibody, improving its efficiency and limiting serious side effects in mice, with the next step being a clinical trial with cancer patients.

The results from the phase 1 clinical trial of the drug, dubbed 2141-V11, showed that six out of 12 cancer patients saw their tumors shrink, including two that saw them disappear completely.

“Seeing these significant shrinkages and even complete remission in such a small subset of patients is quite remarkable,” said study first author Dr. Juan Osorio.

He said the effect wasn’t limited to tumors that were injected with the drug; tumors elsewhere in the body either got smaller or were destroyed by immune cells.

“This effect—where you inject locally but see a systemic response—that’s not something seen very often in any clinical treatment,” said Professor Jeffrey Ravetch who oversaw the study.

“It’s another very dramatic and unexpected result from our trial.”

Oral squamous cancer cell (white) being attacked by two cytotoxic T cells (red) – Credit: NIH

He explained that CD40 is a cell surface receptor and member of the tumor necrosis factor (TNF) receptor “superfamily”—proteins that are largely expressed by immune cells. When triggered, CD40 prompts the rest of the immune system to spring into action, promoting anti-tumor immunity and developing tumor-specific T cell responses.

In 2018, Prof. Ravetch’s lab engineered 2141-V11, a CD40 antibody that binds tightly to human CD40 receptors and is modified to enhance its cross-linking by also engaging a specific Fc receptor.

It proved to be 10 times more powerful in its capacity to elicit an anti-tumor immune response.

The research team then changed how they administered the drug. When previously given intravenously, too many non-cancerous cells picked it up, leading to the well-known toxic side effects.

They instead injected the drug directly into tumors. When they did that, they saw “only mild toxicity”, said Prof. Ravetch.

The new trial included 12 patients who had various types of cancer, and of those 12, six experienced systemic tumor reduction, of which two had their cancers (notorious for being aggressive and recurring) disappear entirely.

“The melanoma patient had dozens of metastatic tumors on her leg and foot, and we injected just one tumor up on her thigh. After multiple injections of that one tumor, all the other tumors disappeared,” Ravetch said.

“The same thing happened in the patient with metastatic breast cancer, who also had tumors in her skin, liver, and lung. And even though we only injected the skin tumor, we saw all the tumors disappear.”

Tissue samples from the tumor sites revealed the immune activity that the drug stimulated.

“We were quite surprised to see that the tumors became full of immune cells—including different types of dendritic cells, T cells, and mature B cells—that formed aggregates resembling something like a lymph node,” said Dr. Osorio.

“The drug creates an immune micro-environment within the tumor, and essentially replaces the tumor with these tertiary lymphoid structures, which are associated with improved prognosis and response to immunotherapy.”

The team also found these tertiary lymphoid structures in the tumors they didn’t inject.

“Once the immune system identifies the cancer cells, immune cells migrate to the non-injected tumor sites,” said Dr. Osorio.

The findings, published in the journal Cancer Cell, sparked several other clinical trials that the Ravetch lab is currently working on with researchers at Memorial Sloan Kettering and Duke University.

The trials are investigating 2141-V11’s effect on specific cancers, including bladder cancer, prostate cancer, and glioblastoma—all aggressive and hard to treat. Nearly 200 people are enrolled in the various studies that the researchers hope will explain why some patients respond to 2141-V11 and others do not—and how to potentially change that.

Retooled Cancer Drugs Eliminated Aggressive Tumors in ‘Remarkable’ New Trial
Read More........

What’s a ‘Strombolian eruption’? A volcanologist explains what happened at Mount Etna

Thermal camera images show the eruption and flows of lava down the side of Mount Etna. National Institute of Geophysics and Volcanology, CC BY

Teresa Ubide, The University of Queensland

On Monday morning local time, a huge cloud of ash, hot gas and rock fragments began spewing from Italy’s Mount Etna.

An enormous plume was seen stretching several kilometres into the sky from the mountain on the island of Sicily, which is the largest active volcano in Europe.

While the blast created an impressive sight, the eruption resulted in no reported injuries or damage and barely even disrupted flights on or off the island. Mount Etna eruptions are commonly described as “Strombolian eruptions” – though as we will see, that may not apply to this event.

What happened at Etna?

The eruption began with an increase of pressure in the hot gases inside the volcano. This led to the partial collapse of one of the craters atop Etna.

The collapse allowed what is called a pyroclastic flow: a fast-moving cloud of ash, hot gas and fragments of rock bursting out from inside the volcano.


Next, lava began to flow in three different directions down the mountainside. These flows are now cooling down. On Monday evening, Italy’s National Institute of Geophysics and Volcanology announced the volcanic activity had ended.

Etna is one of the most active volcanoes in the world, so this eruption is reasonably normal.

What is a Strombolian eruption?

Volcanologists classify eruptions by how explosive they are. More explosive eruptions tend to be more dangerous, because they move faster and cover a larger area.

At the mildest end are Hawaiian eruptions. You have probably seen pictures of these: lava flowing sedately down the slope of the volcano. The lava damages whatever it runs into, but it’s a relatively local effect.

As eruptions grow more explosive, they send ash and rock fragments flying further afield.

At the more explosive end of the scale are Plinian eruptions. These include the famous eruption of Mount Vesuvius in 79AD, described by the Roman writer Pliny the Younger, which buried the Roman towns of Pompeii and Herculaneum under metres of ash.

In a Plinian eruption, hot gas, ash, and rock can explode high enough to reach the stratosphere – and when the eruption column collapses, the debris falls to Earth and can wreak terrifying destruction over a huge area.

What about Strombolian eruptions? These relatively mild eruptions are named after Stromboli, another Italian volcano which belches out a minor eruption every 10 to 20 minutes.

In a Strombolian eruption, chunks of rock and cinders may travel tens or hundreds of metres through the air, but rarely further. The pyroclastic flow from yesterday’s eruption at Etna was rather more explosive than this – so it wasn’t strictly Strombolian.

Can we forecast volcano eruptions?

Volcanic eruptions are a bit like weather. They are very hard to predict in detail, but we are a lot better than we used to be at forecasting them.

To understand what a volcano will do in the future, we first need to know what is happening inside it right now. We can’t look inside directly, but we do have indirect measurements.

For example, before an eruption magma travels from deep inside the Earth up to the surface. On the way, it pushes rocks apart and can generate earthquakes. If we record the vibrations of these quakes, we can track the magma’s journey from the depths.

Rising magma can also make the ground near a volcano bulge upwards very slightly, by a few millimetres or centimetres. We can monitor this bulging, for example with satellites, to gather clues about an upcoming eruption.

Some volcanoes release gas even when they are not strictly erupting. We can measure the chemicals in this gas – and if they change, it can tell us that new magma is on its way to the surface.

When we have this information about what’s happening inside the volcano, we also need to understand its “personality” to know what the information means for future eruptions.

Are volcanic eruptions more common than in the past?

As a volcanologist, I often hear from people that it seems there are more volcanic eruptions now than in the past. This is not the case.

What is happening, I tell them, is that we have better monitoring systems now, and a very active global media system. So we know about more eruptions – and even see photos of them.

Monitoring is extremely important. We are fortunate that many volcanoes in places such as Italy, the United States, Indonesia and New Zealand have excellent monitoring in place.

This monitoring allows local authorities to issue warnings when an eruption is imminent. For a visitor or tourist out to see the spectacular natural wonder of a volcano, listening to these warnings is all-important.

Teresa Ubide, ARC Future Fellow and Associate Professor in Igneous Petrology/Volcanology, The University of Queensland

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More........

Scientists Define a Color Never Before Seen by Human Eyes, Called 'Olo'–a Blue-Green of Intense Saturation

Photo by Hamish on Unsplash

An experiment in stimulating human photoreceptors allowed scientists to define a new color, invisible under natural viewing conditions, that lies along the blue-green spectrum yet is distinct from both blue and green.

The team, who experimented on themselves and others, hope their findings could one day help improve tools for studying color blindness or lead to new technologies for creating colors in digital imagery.

“Theoretically, novel colors are possible through bypassing the constraints set by the cone spectral sensitivities…” the authors write in their abstract. “In practice, we confirm a partial expansion of colorspace toward that theoretical ideal.”

The team from University of California, Berkeley and the University of Washington used pioneering laser technology which they called “Oz” to “directly control the human eye’s photoreceptor activity via cell-by-cell light delivery.”

Color is generated in our vision through the transmission of light to cells called photoreceptors. Eye tissue contains a set of cone cells for this task, labeled L, S, or M cones.

In normal color vision, the authors explain, any light that stimulates an M cone cell must also stimulate its neighboring L and/or S cones because the M cone spectral response function lies between that of the L and S cones.

“However, Oz stimulation can by definition target light to only M cones and not L or S, which in principle would send a color signal to the brain that never occurs in natural vision,” they add.

Described as a kind of blue-green with “unprecedented saturation”, the new color, which the researchers named “olo”, was confirmed as being beyond the normal blue-green spectrum by each participant who saw it, as they needed to add substantial amounts of white for olo to fit somewhere within that spectrum.

“The Oz system represents a new experimental platform in vision science, aiming to control photoreceptor activation with great precision,” the study says.


Although the authors are confident that olo has never been seen before by humans, the blue-green region of the spectrum has attracted international attention in vision research before.

A groundbreaking study of the Himba people in Namibia, conducted in 2005 and published in a journal of the American Psychological Association, demonstrated that these traditional landowners seemed to perceive various colors as the same because they used the same word for them. A grouping of colors we in the West would separate into pink, red, and orange is all serandu to them.

That was only half of the cause for fascination with the study. The other half came from the Himba people’s remarkable sensitivity to the blue-green spectrum, such that they could reliably pick out the faintest differences in green that Western viewers, by comparison, missed.

This also corresponded with more words for shades of green, which Westerners would never bother specifying. In fact, the Himba had a harder time pointing out that a blue square was different from green squares when shown a chart, but could reliably select the square with a slightly different shade of green from the rest.

But then it got even stranger. Further studies in the following years included genetic testing on the Himba, which showed they possess an increased number of cone cells in their eyes. This higher density of cones enables them to perceive more shades and nuances of color than the average person, according to the lead author of the genetic research.

Scientists Define a Color Never Before Seen by Human Eyes, Called 'Olo'–a Blue-Green of Intense Saturation
Read More........

Discovery of Genetically-Varied Worms in Chernobyl Could Help Human Cancer Research

Worms collected in the Chornobyl Exclusion Zone – SWNS / New York University

The 1986 disaster at the Chernobyl nuclear power plant transformed the surrounding area into the most radioactive landscape on Earth, and now the discovery of a worm that seems to be right at home in the rads is believed to be a boon for human cancer research.

Though humans were evacuated after the meltdown of Reactor 4, many plants and animals continued to live in the region, despite the high levels of radiation that have persisted to our time.

In recent years, researchers have found that some animals living in the Chernobyl Exclusion Zone are physically and genetically different from their counterparts elsewhere, raising questions about the impact of chronic radiation on DNA.

In particular, a new study led by researchers at New York University finds that exposure to chronic radiation from Chernobyl has not damaged the genomes of microscopic worms living there today, and the team suggests the invertebrates have become exceptionally resilient.

The finding could offer clues as to why humans with a genetic predisposition to cancer develop the disease, while others do not.

“Chernobyl was a tragedy of incomprehensible scale, but we still don’t have a great grasp on the effects of the disaster on local populations,” said Sophia Tintori, a postdoctoral associate in the Department of Biology at NYU and the first author of the study, published in the Proceedings of the National Academy of Sciences.

“Did the sudden environmental shift select for species, or even individuals within a species, that are naturally more resistant to ionizing radiation?”

Tintori and her colleagues turned to nematodes, tiny worms with simple genomes and rapid reproduction, which makes them particularly useful for understanding basic biological phenomena.

“These worms live everywhere, and they live quickly, so they go through dozens of generations of evolution while a typical vertebrate is still putting on its shoes,” said Matthew Rockman, a professor of biology at NYU and the study’s senior author.

“I had seen footage of the Exclusion Zone and was surprised by how lush and overgrown it looked—I’d never thought of it as teeming with life,” added Tintori. “If I want to find worms that are particularly tolerant to radiation exposure, this is a landscape that might have already selected for that.”

In collaboration with scientists in Ukraine and U.S. colleagues, including biologist Timothy Mousseau of the University of South Carolina, who studies the effects of radiation from the Chernobyl and Fukushima disasters, Tintori and Rockman visited the Chernobyl Exclusion Zone in 2019 to see if chronic radiation has had a detectable impact on the region’s worms.

With Geiger counters in hand to measure local levels of radiation and personal protective gear to guard against radioactive dust, they gathered worms from samples of soil, rotting fruit, and other organic material.
The ruins of Reactor 4, Chernobyl Exclusion Zone. Credit: Matt Shalvatis, CC BY-SA 4.0

Worms were collected from locations throughout the zone with different amounts of radiation, ranging from low levels on par with New York City (negligibly radioactive) to high-radiation sites on par with outer space (dangerous for humans, though it was unclear whether such levels would be dangerous to worms).

After collecting samples in the field, the team brought them to Mousseau’s field lab in a former residential home in Chernobyl, where they separated hundreds of nematodes from the soil or fruit. From there, they headed to a Kyiv hotel where, using travel microscopes, they isolated and established cultures from each worm.

Back in the lab at NYU, the researchers continued studying the worms, beginning by freezing them.

“We can cryopreserve worms, and then thaw them for study later. That means that we can stop evolution from happening in the lab, something impossible with most other animal models, and very valuable when we want to compare animals that have experienced different evolutionary histories,” said Rockman.

They focused their analyses on 15 worms of a nematode species called Oscheius tipulae, which has been used in genetic and evolutionary studies. They sequenced the genomes of the 15 O. tipulae worms from Chernobyl and compared them with the genomes of five O. tipulae from other parts of the world.

The researchers were surprised to find that, using several different analyses, they could not detect a signature of radiation damage on the genomes of the worms from Chernobyl.

“This doesn’t mean that Chernobyl is safe—it more likely means that nematodes are really resilient animals and can withstand extreme conditions,” noted Tintori. “We also don’t know how long each of the worms we collected was in the Zone, so we can’t be sure exactly what level of exposure each worm and its ancestors received over the past four decades.”

Wondering whether the lack of genetic signature was because the worms living in Chernobyl are unusually effective at protecting or repairing their DNA, the researchers designed a system to compare how quickly populations of worms grow and used it to measure how sensitive the descendants of each of the 20 genetically distinct worms were to different types of DNA damage.

The surprise in this story is that while the lineages of worms differed from each other in how well they tolerated DNA damage, these differences didn't correspond to the radiation levels at each collection site. In other words, unlike the origin stories of several superheroes, radiation exposure doesn't seem to create super worms, any more than it could turn you or me into Spider-Man or the Hulk.

Instead, the team's findings suggest that worms from Chernobyl are not necessarily more tolerant of radiation, and that the radioactive landscape has not forced them to evolve.

The results give researchers clues into how DNA repair can vary from individual to individual—and despite the genetic simplicity of O. tipulae, could lead to a better understanding of natural variation in humans.

“Now that we know which strains of O. tipulae are more sensitive or more tolerant to DNA damage, we can use these strains to study why different individuals are more likely than others to suffer the effects of carcinogens,” said Tintori.

How different individuals in a species respond to DNA damage is top of mind for cancer researchers seeking to understand why some humans with a genetic predisposition to cancer develop the disease, while others do not.

“Thinking about how individuals respond differently to DNA-damaging agents in the environment is something that will help us have a clear vision of our own risk factors,” added Tintori.