The future remains bleak for corals – but not all reefs are doomed

 
Christopher Cornwall, CC BY-NC-ND
Christopher Cornwall, Te Herenga Waka — Victoria University of Wellington and Orlando Timmerman, University of Cambridge

A recent report on global tipping points warned that coral reefs face widespread dieback and have reached a point from which they cannot recover.

But in our new research, we show this might not be the case for some reefs if corals can gain tolerance to rising temperatures, or if we can cut greenhouse gas emissions and restore reefs with heat-tolerant corals at scale.

Nevertheless, the outlook likely remains bleak.

 
All coral reefs are under threat but some may be more tolerant to warming waters. Christopher Cornwall, CC BY-NC-ND

Coral reefs provide habitat for thousands of other species in tropical oceans. They deliver economic value through fisheries and tourism and provide shoreline protection from storm surges and extreme weather by dampening the impact of waves.

However, coral reefs are vulnerable to the effects of climate change. Our study combines previously published assessments of climate impacts on different coral reefs and reviews the scientific consensus to examine how long reef structures could persist as climate change intensifies.

Ocean warming, acidification, darkening and deoxygenation all threaten the persistence of coral reefs. Ocean warming brings marine heatwaves, which are the leading cause of mass coral bleaching that has led to a global decline in coral cover.

Marine heatwaves have already led to a global decline in coral reefs. Christopher Cornwall, CC BY-NC-ND

Corals are animals that house microalgae within their tissues that provide sugar in exchange for nitrogen. When temperatures become too hot, corals expel these symbiotic microalgae, leaving behind white skeletons.

Ocean acidification reduces the ability of corals to build their skeletons through a process called calcification. Warming, darkening and deoxygenation can also reduce calcification.

When corals expel their symbiotic algae, all that remains are bleached skeletons. Chris Perry, CC BY-NC-ND

Coral reefs are built by adding calcium carbonate, coming mostly from corals but also coralline algae and other calcareous seaweeds. But as the ocean’s pH (a measure of acidity) is reduced, processes called bio-erosion and dissolution act to remove calcium carbonate.

Our meta-analysis examined how climate change affects the calcification and bio-erosion of coral reefs and we then applied these results to a global data set of reef growth.

There is no scientific consensus on which organisms will build future coral reefs. We explore the four most likely scenarios:

1. Present-day extreme reefs represent the future of coral reefs. These are locations where temperatures are already warmer, waters are becoming more acidic and oxygen has dropped to conditions similar to those expected at the end of the century. These reefs are dominated by coralline algae and slow-growing heat-resistant corals.

Some reefs already experience conditions expected at the end of the century. Steeve Comeau, CC BY-NC-ND

2. Presently degraded reefs take over future reefs. These reefs are dominated by bio-eroders such as sponges and sea urchins and have low coral cover.

3. Corals can gain heat tolerance to an extent that keeps pace with low to moderate greenhouse gas emissions scenarios. Under these scenarios, only about 36% of global corals would be lost and there would be a moderate reduction in growth. These heat-tolerant reefs are dominated by faster growing corals with symbiotic microalgae that can evolve heat tolerance.

4. Reefs where restoration practices include using heat-tolerant corals that can then disperse to other regions. These restored reefs would have lower coral cover in remote regions lacking restoration or with unsuccessful restoration practices. This kind of reef restoration would need to cover half of global coral reefs to maintain net growth – an unlikely scenario.

We found coral reefs transition to net erosion under all scenarios, even under low to moderate greenhouse gas emissions, meaning they are dissolving or being eaten faster than they can grow. Only reefs with heat-tolerant corals could prevent this from occurring.
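
The carbonate-budget logic behind this finding can be sketched in a few lines of Python. The function and rates below are illustrative assumptions, not values from the study: a reef tips into net erosion once bio-erosion and dissolution together outpace calcification.

```python
# Hypothetical sketch of a reef carbonate budget. A reef transitions to
# net erosion when calcium carbonate removal (bio-erosion + dissolution)
# outpaces production (calcification). Rates are illustrative placeholders.

def net_carbonate_budget(calcification, bioerosion, dissolution):
    """All rates in kg CaCO3 per m^2 per year; positive result = net growth."""
    return calcification - (bioerosion + dissolution)

# Example: warming cuts calcification while acidification boosts removal.
present = net_carbonate_budget(calcification=10.0, bioerosion=3.0, dissolution=1.0)
future = net_carbonate_budget(calcification=4.0, bioerosion=5.0, dissolution=2.0)

print(present > 0)  # True: net growth today
print(future < 0)   # True: net erosion under climate change
```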

The next step for the scientific community is to determine which reefs can persist into the future, using global efforts to combine information. The major issue is that we are missing measurements from large parts of the Pacific, and we do not know how deoxygenation or coastal darkening will affect coral reefs. The processes of reef bio-erosion and dissolution are also poorly described.

Although the climate has already been altered to the point of threatening their future survival, coral reefs are not yet doomed if we act now.

Another question is how long reef structures will persist after living corals are removed. We do not have an answer yet. It will take global efforts to rapidly obtain these measurements to better manage and protect coral reefs before climate change intensifies.

It is up to governments everywhere, including New Zealand, to better support these initiatives before it is too late.

Christopher Cornwall, Lecturer in Marine Biology, Te Herenga Waka — Victoria University of Wellington and Orlando Timmerman, Doctoral Candidate in Earth Sciences, University of Cambridge

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Microbes in Antarctica survive the freezing and dark winter by living on air

Ry Holland, Monash University

Winter in Antarctica is long and dark. Temperatures remain well below freezing. In many places, the Sun sets in April and does not rise above the horizon again until August. Without sunlight, photosynthetic life such as plants, mosses and algae cannot make energy.

But that’s not to say all life stops.

In a new study published in The ISME Journal, my colleagues and I show that Antarctic microbes make energy from the air at temperatures as low as –20°C. This finding improves our understanding of how life survives at temperature extremes in Antarctica – and how climate change will affect this important process.

How to make energy from air

In 2017, scientists showed that a large number of Antarctic microbes can generate energy from atmospheric gases present at very low concentrations.

This process is called “aerotrophy”. By using enzymes that are very finely tuned to “sniff out” the hydrogen and carbon monoxide in the atmosphere, these microbes have found a way to make energy from the air itself – a huge advantage in Antarctica’s nutrient-poor desert soils.

What remained unknown until now was the temperature limits of this process. Could aerotrophy be a way to power the continent’s soil communities through the winter?

Taking the lab down south

Measuring how quickly these microbes consume such a small amount of fuel can be difficult.

From 2022–24, we collected surface soil samples from different areas across East Antarctica and analysed them in our lab.

We measured how quickly they can use the atmospheric gases. We also extracted all the DNA from the soil microbes and sequenced it. This tells us what microbes are present, what genes they have, and what they are capable of using as energy sources.

We showed aerotrophy happening in the lab at representative summer (4°C) and winter (–20°C) temperatures. This means hydrogen and carbon monoxide are a viable food source not just over the summer months, but year-round. What was even more surprising though, was the upper temperature limit.

Soil temperatures in Antarctica rarely rise above 20°C. Yet we found microbes in these soils that continued to generate energy from hydrogen up to a staggering 75°C. It seems as though microbes in Antarctic soils are well adapted to the continent’s cold temperatures, but not restricted to them. It’s a bit like seeing a penguin thrive in a tropical jungle.

We also wanted to see this process occurring in Antarctica itself, so two years ago we brought the lab down south. We collected fresh soil samples, sealed them in glass vials, and took gas samples.

For the first time, it was clear that under real-world conditions these soil microbes were still munching their way through hydrogen.

The primary producers of Antarctica

DNA sequencing has shown us that the vast majority of microbes in Antarctic soils encode the genes to gain energy from hydrogen. Many of these bacteria also have genes to take carbon from the atmosphere.

These aerotrophs are “primary producers”, generating new biomass from the air itself.

In most land-based ecosystems, photosynthesis is thought to be the bottom of the food chain. Photosynthesis takes energy from sunlight and carbon from the atmosphere and turns it into yummy organic compounds.

It’s what makes plants grow. Plants are primary producers that are eaten by herbivores, which are then eaten by carnivores.

In Antarctica’s desert soils, photosynthesis is relatively rare. Instead, we hypothesise that aerotrophy fulfils the primary producer role in many places.

This makes sense because, unlike sunlight-dependent photosynthesis, we now know that aerotrophy can happen year-round. Another benefit is that it doesn’t require liquid water, whereas photosynthesis does.

Hydrogen in a heating world

Aerotrophy clearly has an important role in Antarctic ecosystems. So next, we wanted to determine how global warming might affect this process.

Under low-emissions scenarios, we predict a 4% increase in how quickly aerotrophs use atmospheric hydrogen. Under very high-emissions scenarios, this increase rises to 35%. The numbers are similar for carbon monoxide.

Although hydrogen isn’t a greenhouse gas itself, it is important because it affects how long some greenhouse gases, including methane, hang around in the atmosphere.

Soils (including the microbes that live in them) are responsible for 82% of all hydrogen consumed on Earth. In other words, they are a hydrogen sink – a crucial component of the global hydrogen cycle.

There are a lot of factors that determine how microorganisms will respond to climate change. Temperature is just one of them. This study is an important piece of the puzzle as scientists figure out how resilient Antarctica’s unique microbial ecosystems are.

Ry Holland, Research Fellow in Microbial Ecology, Monash University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Why your brain has to work harder in an open-plan office than in a private office: study

Since the pandemic, offices around the world have quietly shrunk. Many organisations don’t need as much floor space or as many desks, given many staff now do a mix of hybrid work from home and the office.

But on days when more staff are required to be in, office spaces can feel noticeably busier and noisier. Despite so much focus on getting workers back into offices, there has been far less focus on the impacts of returning to open-plan workspaces.

Now, more research confirms what many suspected: our brains have to work harder in open-plan spaces than in private offices.

What the latest study tested

In a recently published study, researchers at a Spanish university fitted 26 people, aged in their mid-20s to mid-60s, with wireless electroencephalogram (EEG) headsets. EEG testing can measure how hard the brain is working by tracking electrical activity through sensors on the scalp.

Participants completed simulated office tasks, such as monitoring notifications, reading and responding to emails, and memorising and recalling lists of words.

Each participant was monitored while completing the tasks in two different settings: an open-plan workspace with colleagues nearby, and a small enclosed work “pod” with clear glazed panels on one side.

The researchers focused on the frontal regions of the brain, responsible for attention, concentration, and filtering out distractions. They measured different types of brain waves.

As neuroscientist Susan Hillier explains in more detail, different brain waves reveal distinct mental states:

  • “gamma” is linked with states or tasks that require more focused concentration
  • “beta” is linked with higher anxiety and more active states, with attention often directed externally
  • “alpha” is linked with being very relaxed, and passive attention (such as listening quietly but not engaging)
  • “theta” is linked with deep relaxation and inward focus
  • and “delta” is linked with deep sleep.
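
These bands correspond to conventional EEG frequency ranges, which can be sketched in Python. The boundaries below are typical textbook values; exact cut-offs vary slightly between labs.

```python
# Map an EEG frequency (in Hz) to its conventional band name.
# Typical boundaries: delta < 4 Hz, theta 4-8, alpha 8-13, beta 13-30,
# gamma > 30. These are standard conventions, not values from the study.

def eeg_band(freq_hz):
    if freq_hz < 4:
        return "delta"
    elif freq_hz < 8:
        return "theta"
    elif freq_hz < 13:
        return "alpha"
    elif freq_hz < 30:
        return "beta"
    else:
        return "gamma"

print(eeg_band(2))   # delta: deep sleep
print(eeg_band(10))  # alpha: relaxed, passive attention
print(eeg_band(40))  # gamma: focused concentration
```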

The Spanish study found that the same tasks done inside the enclosed pod vs the open-plan workspace produced completely opposite patterns.

It takes effort to filter out distractions

In the work pod, the study found beta waves – associated with active mental processing – dropped significantly over the experiment, as did alpha waves linked to passive attention and overall activity in the frontal brain regions.

This meant people’s brains needed progressively less effort to sustain the same work.

The open-plan office testing showed the reverse.

Gamma waves, linked to complex mental processing, climbed steadily. Theta waves, which track both working memory and mental fatigue, increased. Two key measures also rose significantly: arousal (how alert and activated the brain is) and engagement (how much mental effort is being applied).

In other words, in the open-plan office participants’ brains had to work harder to maintain performance.

Even when we try to ignore distractions, our brain has to expend mental effort to filter them out.

In contrast, the pod eliminated most background noise and visual disruptions, allowing participants’ brains to work more efficiently.

Researchers also found much wider variability in the open office. Some people’s brain activity increased dramatically, while others showed modest changes. This suggests individual differences in how distracting we find open-plan spaces.

With only 26 participants, this was a relatively small study. But its findings echo a significant body of research from the past decade.

What past research has shown

In our 2021 study, my colleagues and I found a significant causal relationship between open-plan office noise and physiological stress. Studying 43 participants in controlled conditions – using heart rate, skin conductivity and AI facial emotion recognition – we found negative mood in open-plan offices increased by 25% and physiological stress by 34%.

Another study showed background conversations and noisy environments can degrade cognitive task performance and increase distraction for workers.

And a 2013 analysis of more than 42,000 office workers in the United States, Finland, Canada and Australia found those in open-plan offices were less satisfied with their work environment than those in private offices. This was largely due to increased, uncontrollable noise and lack of privacy.

Just as we now recognise poorly designed chairs cause physical strain, years of research has shown how workspace design can result in cognitive strain.

What to do about it

The ability to focus and concentrate without interruption and distraction is a fundamental requirement for modern knowledge work.

Yet the value of uninterrupted work continues to be undervalued in workplace design.

Creating zones where workers can match their workplace environment to the task is essential.

Responding to having more staff doing hybrid work post-pandemic, LinkedIn redesigned its flagship San Francisco office. LinkedIn halved the number of workstations in open plan areas, instead experimenting with 75 types of work settings, including work areas for quiet focus.

For organisations looking to look after their workers’ brains, there are practical measures to consider. These include setting up different work zones, acoustic treatments and sound-masking technologies, and thoughtfully placed partitions to reduce visual and auditory distractions.

While adding those extra features may cost more upfront than an open-plan office, they can be worth it. Research has shown the significant hidden toll of poor office design on productivity, health and employee retention.

Providing workers with more choice in how much they’re exposed to noise and other interruptions is not a luxury. To get more done, with less strain on our brains, better design at work should be seen as a necessity.

Libby (Elizabeth) Sander, MBA Director & Associate Professor of Organisational Behaviour, Bond Business School, Bond University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Deep-sea fish larvae rewrite the rules of how eyes can be built

Fabio Cortesi, The University of Queensland and Lily Fogg, University of Helsinki

The deep sea is cold, dark and under immense pressure. Yet life has found a way to prevail there, in the form of some of Earth’s strangest creatures.

Since deep-sea critters have adapted to near darkness, their eyes are particularly distinctive – pitch-black and fearsome in dragonfish, enormous in giant squid, barrel-shaped in telescope fish. This helps them catch the remaining rays of sunlight penetrating to depth and see the faint glow of bioluminescence.

Deep-sea fishes, however, typically start life in shallower waters in the twilight zone of the ocean (roughly 50–200 metres deep). This is a safe refuge to feed on plankton and grow while avoiding becoming a snack for larger predators.

Our new study, published in Science Advances, shows deep-sea fish larvae have evolved a unique way to maximise their vision in this dusky environment – a finding that challenges scientific understanding of vertebrate vision.

The nightmare of seeing in the twilight zone

The vertebrate retina, located at the back of the eye, has two main types of light-sensitive photoreceptor cells: rod-shaped for dim light and cone-shaped for bright light.

The rods and cones slowly change position inside the retina when moving between dim and bright conditions, which is why you temporarily go blind when you flick on the light switch on your way to the bathroom at night.

While vertebrates that are active during the daytime and predominantly inhabit bright light environments favour cone-dominated vision, animals that live in dim conditions, such as the deep sea or caves, have lost or reduced their cone cells in favour of more rods.

However, vision in twilight is a bit of a nightmare – neither rods nor cones are working at their best. This raises the question of how some animals, such as larval deep-sea fishes, can overcome the limitations of the cone-and-rod retina not only to survive but even to thrive in twilight conditions.

Starting where the fish start

To understand how newly born deep-sea fishes see, we had to start where they do: in the twilight zone of the ocean.

We caught larval fish from the Red Sea using fine-meshed nets towed from near the surface to a depth of around 200m. This way we got hold of three different species – the lightfish (Vinciguerria mabahiss) and the hatchetfish (Maurolicus mucronatus), both members of the dragonfishes, and a member of the lanternfishes, the skinnycheek lanternfish (Benthosema pterotum). Next, we studied what their photoreceptor cells looked like on the outside and how they were wired on the inside.

First, we used high-resolution microscopy to examine the cells’ shape in great detail. Then we investigated retinal gene expression to identify which vision genes were activated as the fish grew. Finally, we got some experts in computational modelling of visual proteins on board to simulate which wavelengths of light these tiny fishes may perceive.

By combining all the approaches, we were able to piece together a picture of how these animals see their world. This sounds relatively simple, but working with deep-sea fishes is anything but easy.

While these animals are generally thought of as monsters of the deep, in reality, most reach only about the size of a thumb – even when fully grown. They are also very fragile and difficult to get.

Working with larval specimens that are only a few millimetres long is even more difficult. However, by leveraging support from the deep-sea research community, we were fortunate enough to combine specimens from multiple research expeditions to piece together an unusually complete picture of visual development in these elusive animals.

So, what did we discover?

For decades, scientists have thought that, as vertebrates grow, the development of their retina follows a predictable pattern: cones form first, then rods. But the deep-sea fish we studied do not follow this rule.

We found that, as larvae, they mostly use a mix-and-match type of hybrid photoreceptor. The cells they are using early on look like rods but use the molecular machinery of cones, making them rod-like cones.

In some of the species we studied, these hybrid cells were a temporary solution, replaced by “normal” rods as the fish grew and migrated into deeper, darker waters.

However, in the hatchetfish, which spends its whole life in twilight, the adults keep their rod-like cone cells throughout life, essentially building their entire visual system around this extra type of cell.

Our research shows this is not a minor tweak to the system. Instead, it represents a fundamentally different developmental pathway for vertebrate vision.

Biology doesn’t fit into neat boxes

So why bother with these hybrid cells?

It seems that to overcome the visual limitations of the twilight zone, rod-like cones offer the best of both worlds: the light-capturing ability of rods combined with the faster, less bright-light sensitive properties of cones. For a tiny fish trying to survive in the murky midwater, this could mean the difference between spotting dinner or becoming it.

For more than a century, biology textbooks have taught that vertebrate vision is built from two clearly defined cell types. Our findings show these tidy categories are much more blurred.

Deep-sea fish larvae combine features of both rods and cones into a single, highly specialised cell optimised for life in between light and darkness. In the murky depths of the ocean, deep-sea fish larvae have quietly rewritten the rules of how eyes can be built, and in doing so, remind us that biology rarely fits into neat boxes.

Fabio Cortesi, ARC Future Fellow, Faculty of Science, The University of Queensland and Lily Fogg, Postdoctoral Researcher, Helsinki Institute of Life Science, University of Helsinki

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Highly Fatal Virus May Finally Be Treatable with First Vaccine–Clinical Trials Starting

The Nipah virus pictured in red – credit, US NIH

In January, India recorded a mini-outbreak of the Nipah virus, an often lethal disease spread by contact between humans and animals.

There was little that could be done for the victims, as no specialized treatment for Nipah virus exists beyond supportive care: managing symptoms, rest, and hydration.

Some well-studied antiviral medications, such as ribavirin, remdesivir, acyclovir and favipiravir, have been used on a speculative basis during certain outbreaks, but their real efficacy is unclear.

Now though, the University of Tokyo’s Research Center for Advanced Science and Technology has developed a potential Nipah virus vaccine by inserting some of the virus’ genetic material into the modified measles vaccine. Early trials in hamsters have shown it to be safe and effective.

Nipah virus fatality rates range from 40% to 75%. It typically spreads through contact between humans and bats, often when people consume tree fruit contaminated with bat saliva. Once contracted, it can spread quickly between humans via any form of fluid exchange.

The virus is present in the tropics and often in rural areas where access to medical care may be limited.

Tokyo University’s vaccine candidate is now on its way to Belgium for Phase 1 testing in humans, where, with the help of a nonprofit called the European Vaccine Initiative, it will be examined for safety in 60 trial participants. The trials are set to begin in April.

Red flowers have a ‘magic trait’ to attract birds and keep bees away

For flowering plants, reproduction is a question of the birds and the bees. Attracting the right pollinator can be a matter of survival – and new research shows how flowers do it is more intriguing than anyone realised, and might even involve a little bit of magic.

In our new paper, published in Current Biology, we discuss how a single “magic” trait of some flowering plants simultaneously camouflages them from bees and makes them stand out brightly to birds.

How animals see

We humans typically have three types of light receptors in our eyes, which enable our rich sense of colours.

These are cells sensitive to blue, green or red light. From the input from these cells, the brain generates many colours including yellow via what is called colour opponent processing.

The way colour opponent processing works is that different sensed colours are processed by the brain in opposition. For example, we see some signals as red and some as green – but never a colour in between.
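
A toy version of the red–green opponent channel can make this concrete. The receptor responses below are hypothetical numbers for illustration, not measured data: the sign of the difference decides the perceived hue, which is why a single signal is never seen as both red and green.

```python
# Minimal sketch of colour opponent processing: the red-green channel
# compares long-wavelength ("red") and medium-wavelength ("green")
# receptor responses. The sign of the difference decides the hue,
# so red and green can never be perceived in the same signal at once.

def red_green_opponent(red_response, green_response):
    signal = red_response - green_response
    if signal > 0:
        return "red"
    elif signal < 0:
        return "green"
    return "neutral"

print(red_green_opponent(0.8, 0.2))  # red
print(red_green_opponent(0.3, 0.7))  # green
```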

Many other animals also see colour and show evidence of also using opponent processing.

Bees see their world using cells that sense ultraviolet, blue and green light, while birds have a fourth type sensitive to red light as well.

Our colour perception illustrated with the spectral bar is different to bees that are sensitive to UV, blue and green, or birds with four colour photoreceptors including red sensitivity. Adrian Dyer & Klaus Lunau, CC BY

The problem flowering plants face

So what do these differences in colour vision have to do with plants, genetics and magic?

Flowers need to attract pollinators of the right size, so their pollen ends up on the correct part of an animal’s body so it’s efficiently flown to another flower to enable pollination.

Accordingly, birds tend to visit larger flowers. These flowers in turn need to provide large volumes of nectar for the hungry foragers.

But when large amounts of sweet-tasting nectar are on offer, there’s a risk bees will come along to feast on it – and in the process, collect valuable pollen. And this is a problem because bees are not the right size to efficiently transfer pollen between larger flowers.

Flowers “signal” to pollinators with bright colours and patterns – but these plants need a signal that will attract birds without drawing the attention of bees.

We know bee pollination and flower signalling evolved before bird pollination. So how could plants efficiently make the change to being pollinated by birds, which enables the transfer of pollen over long distances?

Avoiding bees or attracting birds?

A walk through nature lets us see with our own eyes that most red flowers are visited by birds, rather than bees. So bird-pollinated flowers have successfully made the transition. Two different theories have been developed that may explain what we observe.

One theory is the bee-avoidance hypothesis, in which bird-pollinated flowers simply use a colour that is hard for bees to see.

A second theory is that birds might prefer red.

But neither of these theories seemed complete, as inexperienced birds don’t demonstrate a preference for a stronger red hue. However, bird-pollinated flowers do have a very distinct red hue, which suggests avoiding bees can’t solely explain why consistently salient red flower colours evolved.

Most red flowers are visited by birds, rather than bees. Jim Moore/iNaturalist, CC BY

A magical solution

In evolutionary science, the term magic trait refers to an evolved solution where one genetic modification may yield fitness benefits in multiple ways.

Earlier this month, a team working on how this might apply to flowering plants showed that a gene that modulates UV-absorbing pigments in flower petals can indeed have multiple benefits. This is because of how bees and birds view colour signals differently.

Bee-pollinated flowers come in a diverse range of colours. Bees even pollinate some plants with red flowers. But these flowers tend to also reflect a lot of UV, which helps bees find them.

The magic gene has the effect of reducing the amount of UV light reflected from the petal, making flowers harder for bees to see. But (and this is where the magic comes in) reducing UV reflection from a petal of a red flower simultaneously makes it look redder for animals – such as birds – which are believed to have a colour opponent system.
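
The effect of down-regulating UV reflection can be illustrated with a toy opponent calculation. The pairing of red against UV and the reflectance values below are assumptions for illustration, not measurements from the study:

```python
# Hedged sketch of why lower UV reflectance makes a red petal look
# "redder" to an animal with an opponent channel pitting red against UV.
# Opponent pairing and reflectance values are illustrative assumptions.

def bird_red_signal(red_reflectance, uv_reflectance):
    return red_reflectance - uv_reflectance

before = bird_red_signal(red_reflectance=0.9, uv_reflectance=0.4)
after = bird_red_signal(red_reflectance=0.9, uv_reflectance=0.05)  # UV down-regulated

print(after > before)  # True: same petal, stronger red signal for birds
```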

Red flowers look similar for humans, but as flowers evolved for bird vision a genetic change down-regulates UV reflection, making flowers more colourful for birds and less visible to bees. Adrian Dyer & Klaus Lunau, CC BY

Birds that visit these bright red flowers gain rewards – and with experience, they learn to go repeatedly to the red flowers.

One small gene change for colour signalling in the UV yields multiple beneficial outcomes by avoiding bees and displaying enhanced colours to entice multiple visits from birds.

We lucky humans are fortunate that our red perception can also see the result of this clever little trick of nature to produce beautiful red flower colours. So on your next walk on a nice day, take a minute to view one of nature’s great experiments on finding a clever solution to a complex problem.

Adrian Dyer, Associate Professor, Department of Physiology, Monash University and Klaus Lunau, Professor, Institute of Sensory Ecology, Heinrich Heine Universität Düsseldorf

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Simply Shining Light on Skin Can Replace Finger Pricks for People With Diabetes

Blood-glucose monitor uses light to spare diabetes patients from finger pricks – Credit: Christine Daniloff / MIT

A new method for measuring blood glucose levels, developed at MIT, could save diabetes patients from having to prick their fingers several times a day.

The MIT team used a technique that reveals the chemical composition of tissue by shining near-infrared light on it – and developed a shoebox-sized device that can measure blood glucose levels without any needles.

The researchers found that the measurements from their device were similar to those obtained by commercial continuous glucose monitoring sensors that require a wire to be implanted under the skin. While the device presented in this study is too large to be used as a wearable sensor, the researchers have since developed a wearable version that they are now testing in a small clinical study.


“For a long time, the finger stick has been the standard method for measuring blood sugar, but nobody wants to prick their finger every day, multiple times a day,” says Jeon Woong Kang, an MIT research scientist and the senior author of the study.

“Naturally, many diabetic patients are under-testing their blood glucose levels, which can cause serious complications. If we can make a noninvasive glucose monitor with high accuracy, then almost everyone with diabetes will benefit from this new technology.”

MIT postdoc Arianna Bresci is the lead author of the new study published this month in the journal Analytical Chemistry.

Some patients use wearable monitors, which have a sensor inserted just under the skin to provide glucose measurements from the interstitial fluid—but they can cause skin irritation and they need to be replaced every 10 to 15 days.

The MIT team's noninvasive sensor is based on Raman spectroscopy, a technique that reveals the chemical composition of tissue or cells by analyzing how near-infrared light is scattered, or deflected, as it encounters different kinds of molecules.

A recent breakthrough allowed them to directly measure glucose Raman signals from the skin. Normally, this glucose signal is too small to pick out from all of the other signals generated by molecules in tissue. The MIT team found a way to filter out much of the unwanted signal by shining near-infrared light onto the skin at a different angle from which they collected the resulting Raman signal.

Typically, a Raman spectrum may contain about 1,000 bands. However, the MIT team found that they could determine blood glucose levels by measuring just three bands—one from glucose plus two background measurements. This approach allowed the researchers to reduce the amount and cost of equipment needed, letting them perform the measurement with a device about the size of a shoebox.
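As a rough illustration of the three-band idea, baseline correction plus a linear calibration might look like the sketch below. The function names, band intensities, and calibration constants are all hypothetical; the paper's actual signal processing is not described here.

```python
# Illustrative sketch only: the study's real calibration is not reproduced here.
# Assume three measured Raman band intensities: one at a glucose-associated
# band and two background bands used to estimate and subtract baseline signal.

def estimate_glucose_signal(i_glucose, i_bg1, i_bg2):
    """Estimate the baseline under the glucose band as the average of two
    background bands, then subtract it from the glucose-band intensity."""
    baseline = (i_bg1 + i_bg2) / 2.0
    return i_glucose - baseline

# A linear calibration would then map the corrected signal to a concentration.
# SLOPE and INTERCEPT are made-up constants standing in for a fitted model.
SLOPE, INTERCEPT = 40.0, 5.0

def to_concentration(signal):
    return SLOPE * signal + INTERCEPT

corrected = estimate_glucose_signal(3.2, 1.0, 1.2)  # arbitrary example intensities
print(round(to_concentration(corrected), 1))        # prints 89.0 (illustrative)
```

The point of the three-band trick is that only these few intensities need to be measured, rather than a full 1,000-band spectrum, which is what shrinks the optics and the cost.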

“With this new approach, we can change the components commonly used in Raman-based devices, and save space, time, and cost,” Bresci told MIT News.
Toward a watch-sized sensor

In a clinical study performed at the MIT Center for Clinical Translation Research (CCTR), the researchers used the new device to take readings from a healthy volunteer over a four-hour period, as the subject rested their arm on top of the device.

Each measurement takes a little more than 30 seconds, and the researchers took a new reading every five minutes.

During the study, the subject consumed two 75-gram glucose drinks, allowing the researchers to monitor significant changes in blood glucose concentration. They found that the Raman-based device showed accuracy levels similar to those of two commercially available, invasive glucose monitors worn by the subject.

Since finishing that study, the researchers have developed a smaller prototype, about the size of a cellphone, that they’re currently testing at the MIT CCTR as a wearable monitor in healthy and pre-diabetic volunteers.

The researchers are also working on making the device even smaller, about the size of a watch, and next year they plan to run a larger study in partnership with a local hospital, which will include people with diabetes. Edited from an article by Anne Trafton | MIT News

Polar bears are adapting to climate change at a genetic level – and it could help them avoid extinction

Alice Godden, University of East Anglia: The Arctic Ocean is currently at its warmest in the last 125,000 years, and temperatures continue to rise. Due to these warming temperatures, more than two-thirds of polar bears are expected to be extinct by 2050, with total extinction predicted by the end of this century.

But in our new study my colleagues and I found that the changing climate was driving changes in the polar bear genome, potentially allowing them to more readily adapt to warmer habitats. Provided these polar bears can source enough food and breeding partners, this suggests they may potentially survive these new challenging climates.

We discovered a strong link between rising temperatures in south-east Greenland and changes in polar bear DNA. DNA is the instruction book inside every cell, guiding how an organism grows and develops. In processes called transcription and translation, DNA is copied to generate RNA (molecules that reflect gene activity), which can lead to the production of proteins, and to copies of transposons (TEs), also known as "jumping genes": mobile pieces of the genome that can move around and influence how other genes work.

In carrying out our recent research we found that there were big differences in the temperatures observed in the north-east, compared with the south-east regions of Greenland. Our team used publicly available polar bear genetic data from a research group at the University of Washington, US, to support our study. This dataset was generated from blood samples collected from polar bears in both northern and south-eastern Greenland.

Our work built on the University of Washington study, which discovered that this south-eastern population of Greenland polar bears was genetically distinct from the north-eastern population. South-east bears had migrated from the north and become isolated approximately 200 years ago, it found.

Researchers from Washington had extracted RNA from polar bear blood samples and sequenced it. We used this RNA sequencing to look at RNA expression (the molecules that act like messengers, showing which genes are active) in relation to the climate. This gave us a detailed picture of gene activity, including the behaviour of TEs. Temperatures in Greenland have been closely monitored and recorded by the Danish Meteorological Institute, so we linked this climate data with the RNA data to explore how environmental changes may be influencing polar bear biology.

Does temperature change anything?

From our analysis we found that temperatures in the north-east of Greenland were colder and less variable, while south-east temperatures fluctuated and were significantly warmer. The figure below shows our data as well as how temperature varies across Greenland, with warmer and more volatile conditions in the south-east. This creates many challenges and changes to the habitats for the polar bears living in these regions.

In the south-east of Greenland, the ice-sheet margin, which is the edge of the ice sheet and spans 80% of Greenland, is rapidly receding, causing vast ice and habitat loss.

The loss of ice is a substantial problem for the polar bears, as this reduces the availability of hunting platforms to catch seals, leading to isolation and food scarcity. The north-east of Greenland is a vast, flat Arctic tundra, while south-east Greenland is covered by forest tundra (the transitional zone between coniferous forest and Arctic tundra). The south-east climate has high levels of rain, wind, and steep coastal mountains.

Temperature across Greenland and bear locations

Author data visualisation using temperature data from the Danish Meteorological Institute. Locations of bears in south-east (red icons) and north-east (blue icons). CC BY-NC-ND

How climate is changing polar bear DNA

Over time the DNA sequence can slowly change and evolve, but environmental stress, such as warmer climate, can accelerate this process.

TEs are like puzzle pieces that can rearrange themselves, sometimes helping animals adapt to new environments. Approximately 38.1% of the polar bear genome is made up of TEs. TEs come in many different families with slightly different behaviours, but in essence they are all mobile fragments that can reinsert randomly anywhere in the genome.

In the human genome, 45% is comprised of TEs and in plants it can be over 70%. There are small protective molecules called piwi-interacting RNAs (piRNAs) that can silence the activity of TEs.

However, when an environmental stress is too strong, these protective piRNAs cannot keep up with the invasive actions of TEs. In our work we found that the warmer south-east climate led to a mass mobilisation of these TEs across the polar bear genome, changing its sequence. We also found that these TE sequences appeared younger and more abundant in the south-east bears, with over 1,500 of them "upregulated", which suggests recent genetic changes that may help bears adapt to rising temperatures.

Some of these elements overlap with genes linked to stress responses and metabolism, hinting at a possible role in coping with climate change. By studying these jumping genes, we uncovered how the polar bear genome adapts and responds, in the shorter term, to environmental stress and warmer climates.

Our research found that some genes linked to heat stress, ageing and metabolism are behaving differently in the south-east population of polar bears. This suggests they might be adjusting to their warmer conditions. Additionally, we found active jumping genes in parts of the genome tied to fat processing – important when food is scarce. This could mean that polar bears in the south-east are slowly adapting to eating the rougher plant-based diets found in the warmer regions. Northern populations of bears eat mainly fatty seals.

Overall, climate change is reshaping polar bear habitats, leading to genetic changes, with south-eastern bears evolving to survive these new terrains and diets. Future research could include other polar bear populations living in challenging climates. Understanding these genetic changes helps researchers see how polar bears might survive in a warming world – and which populations are most at risk.


Alice Godden, Senior Research Associate, School of Biological Sciences, University of East Anglia

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Cement Supercapacitors Could Turn the Concrete Around Us into Massive Energy Storage Systems

credit – MIT Sustainable Concrete Lab

Scientists from MIT have created a conductive “nanonetwork” inside a unique concrete mixture that could enable everyday structures like walls, sidewalks, and bridges to store and release electrical energy.

It’s perhaps the most ubiquitous man-made material on Earth by weight, but every square foot of it could, with the addition of some extra materials, power the world that it has grown to cover.

Known as ec3 ("e-c-cubed"), the electron-conductive carbon concrete is made by adding an ultra-fine paracrystalline form of carbon known as carbon black, together with electrolytes, creating a conductive carbon network at the nanoscale.

The technology is not new: MIT reported in 2023 that 45 cubic meters of ec3, roughly the amount of concrete used in a typical basement, could power a whole home. But advances in materials science and manufacturing processes have since improved the efficiency nearly tenfold.

Now, just 5 cubic meters can do the job thanks to an improved electrolyte.

“A key to the sustainability of concrete is the development of ‘multifunctional concrete,’ which integrates functionalities like this energy storage, self-healing, and carbon sequestration,” said Admir Masic, lead author of the new study and associate professor of civil and environmental engineering at MIT.

“Concrete is already the world’s most-used construction material, so why not take advantage of that scale to create other benefits?”

The improved energy density was made possible by a deeper understanding of how the nanocarbon black network inside ec3 functions and interacts with electrolytes. Using focused ion beams to sequentially remove thin layers of the ec3 material, followed by high-resolution imaging of each slice with a scanning electron microscope, the team across the EC³ Hub and MIT Concrete Sustainability Hub was able to reconstruct the conductive nanonetwork at the highest resolution yet. This approach allowed the team to discover that the network is essentially a fractal-like "web" that surrounds ec3 pores, which is what allows the electrolyte to infiltrate and current to flow through the system.

“Understanding how these materials ‘assemble’ themselves at the nanoscale is key to achieving these new functionalities,” adds Masic.

Equipped with their new understanding of the nanonetwork, the team experimented with different electrolytes and their concentrations to see how they impacted energy storage density. As Damian Stefaniuk, first author and EC³ Hub research scientist, highlights, “we found that there is a wide range of electrolytes that could be viable candidates for ec3. This even includes seawater, which could make this a good material for use in coastal and marine applications, perhaps as support structures for offshore wind farms.”

At the same time, the team streamlined the way they added electrolytes to the mix. Rather than curing ec3 electrodes and then soaking them in electrolyte, they added the electrolyte directly into the mixing water. Since electrolyte penetration was no longer a limitation, the team could cast thicker electrodes that stored more energy.

The team achieved the greatest performance when they switched to organic electrolytes, especially those that combined quaternary ammonium salts — found in everyday products like disinfectants — with acetonitrile, a clear, conductive liquid often used in industry. A cubic meter of this version of ec3—about the size of a refrigerator—can store over 2 kilowatt-hours of energy. That’s about enough to power an actual refrigerator for a day.
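The refrigerator comparison is easy to sanity-check. Assuming a typical household fridge draws roughly 1.5 kWh per day (an assumed figure, not one from the study), a cubic meter of this ec3 formulation would indeed run it for about a day:

```python
# Back-of-envelope check of the refrigerator claim (typical figures assumed).
capacity_kwh = 2.0          # energy stored in ~1 cubic meter of ec3, per the article
fridge_kwh_per_day = 1.5    # assumed daily draw of a modern household fridge
days = capacity_kwh / fridge_kwh_per_day
print(f"{days:.1f} days")   # prints "1.3 days" -- on the order of a day
```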

While batteries maintain a higher energy density, ec3 can in principle be incorporated directly into a wide range of architectural elements—from slabs and walls to domes and vaults—and last as long as the structure itself.

“The Ancient Romans made great advances in concrete construction. Massive structures like the Pantheon stand to this day without reinforcement. If we keep up their spirit of combining material science with architectural vision, we could be at the brink of a new architectural revolution with multifunctional concretes like ec3,” proposes Masic.

Taking inspiration from Roman architecture, the team built a miniature ec3 arch to show how structural form and energy storage can work together. Operating at 9 volts, the arch supported its own weight and additional load while powering an LED light.

The latest developments in ec³ technology bring it a step closer to real-world scalability. It’s already been used to heat sidewalk slabs in Sapporo, Japan, due to its thermally conductive properties, representing a potential alternative to salting.

“What excites us most is that we’ve taken a material as ancient as concrete and shown that it can do something entirely new,” says James Weaver, a co-author on the paper who is an associate professor of design technology and materials science and engineering at Cornell University, as well as a former EC³ Hub researcher. “By combining modern nanoscience with an ancient building block of civilization, we’re opening a door to infrastructure that doesn’t just support our lives, it powers them.”

Indian researchers develop smart portable device to detect toxic pesticides in water, food

IANS Photo

New Delhi, (IANS): A team of researchers from the Indian Institute of Technology (IIT) Madras and Panjab University has developed a portable, automated optical device capable of detecting extremely low concentrations of pesticide residues in water, food, and the environment that can pose serious risks to human and environmental health.

Conventional laboratory methods for detecting such residues, particularly the commonly used organophosphate Malathion, are expensive, time-consuming, and require skilled personnel.

The new research, supported by the Department of Science and Technology, under its ‘Technology Development and Transfer’ Programme, addressed the challenge by designing a field-deployable, user-friendly device that offers real-time, ultra-sensitive pesticide detection.

The new ‘Smart MDD (Malathion Detection Device)’ is a colourimetric detection system that employs gold nanoparticles (AuNPs) and comes with an aptamer molecule engineered to recognise Malathion specifically.

The interaction causes a visible colour shift -- from red to blue -- indicating the presence of the pesticide, a change that the device’s built-in optical system precisely measures. This automated process eliminates manual handling and enables quick, reliable results, said the team. The findings were published in the peer-reviewed journal Review of Scientific Instruments.
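The red-to-blue shift lends itself to a simple numerical readout. The sketch below illustrates the general idea of an aggregation-ratio test for gold nanoparticle assays; the wavelengths, function names, and threshold are hypothetical and are not taken from the device described here.

```python
# Illustrative sketch (hypothetical values): gold nanoparticle aggregation
# shifts absorbance from ~520 nm (red) toward ~620 nm (blue), so the ratio
# A620/A520 rises when the target pesticide is present.

def aggregation_ratio(a620, a520):
    """Ratio of absorbance at ~620 nm to ~520 nm; higher means more aggregation."""
    return a620 / a520

THRESHOLD = 0.5   # hypothetical decision threshold from calibration

def malathion_detected(a620, a520):
    return aggregation_ratio(a620, a520) > THRESHOLD

print(malathion_detected(0.42, 0.60))  # prints True: colour has shifted toward blue
```

In the actual device, the built-in optical system automates this comparison, which is what removes the need for manual handling.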

“This technology can have a significant real-world impact. It can help farmers, food safety agencies, and environmental regulators rapidly monitor pesticide contamination on-site -- whether in irrigation water, produce, or soil -- thereby ensuring compliance with safety standards and reducing public health risks," Prof. Sujatha Narayanan Unni, Department of Applied Mechanics and Biomedical Engineering, IIT Madras, told IANS.

"It can also aid in tracking pesticide runoff in water bodies, a major environmental concern,” Unni added.

The team demonstrated a detection limit of about 250 picomolar and correlation with lab spectrophotometer results -- metrics that are rarely seen in portable devices.

Currently tested under laboratory conditions, the device will next undergo validation with real-world samples such as fruits, vegetables, and field water sources. "We plan to extend the platform to detect a broader range of pesticides, strengthening its role in sustainable agricultural management and environmental monitoring,” Dr. Rohit Kumar Sharma, Department of Chemistry and Centre for Advanced Studies in Chemistry, Panjab University, told IANS.

New Airship-style Wind Turbine Can Find Gusts at Higher Altitudes for Constant, Cheaper Power

The S1500 from Sawes – credit, handout

A new form of wind energy is under development that promises more consistent power and lower deployment costs by adapting the design of a dirigible, or zeppelin.

Suspended 1,000 feet up where the wind is always blowing, it presents as an ideal energy source for rural communities, disaster areas, or places where wind turbines aren’t feasible to build.

The design has grown through the innovation of dozens of engineers and scientists, but an MIT startup called Altaeros and the Beijing-based startup Sawes Energy Technology have taken it to market. Both have already produced prototypes that boast serious performance.


In 2014, Altaeros’ Buoyant Air Turbine (or BAT) was ready for commercial deployment in rural Alaska, where diesel generators are still heavily relied on for power. Its 35-foot-long inflatable shell, made of the same materials as modern blimps, provided 30 kilowatts of wind energy.

As a power provider, though, Altaeros could never get off the ground, and it has since adapted much of its technology to providing wireless telecommunication services for civil and commercial contracting.

Heir to Altaeros’ throne, Sawes has managed to greatly exceed the former’s power generation, and now hopes to achieve nothing less than contributing a Chinese solution to the world’s energy transition.

Altaeros’ BAT – credit, Altaeros, via MIT

During a mid-September test, Sawes’ airship-like S1500, as long and wide as a basketball court and as tall as a 13-storey building, generated 1 megawatt of power, which it delivered through its tether cable down to a generator below.

In the test, conducted in the windy western desert region of Xinjiang, the S1500 delivered 10 times the output of its predecessor turbine, which achieved 100 kilowatts in October of last year.

Dun Tianrui, the company’s CEO and chief designer, called the megawatt-mark “a critical step towards putting the product into real-world use” which would happen next year when the company expects to begin mass production.

At the same time, the Sawes R&D team is looking into advances in materials science and manufacturing optimization to ensure the cost of supplying that megawatt to rural grids will be around $0.01 per kilowatt-hour, roughly 100 times cheaper than the cost theorized for Altaeros’ model 10 years ago.

One of the major positives of the BAT design is that by floating 1,000 to 2,000 feet above the ground, it renders irrelevant the main gripe about wind energy—that some days the wind doesn’t blow. A conventional turbine reaches only 100 to 300 feet up, putting birds at risk while failing to capture the winds blowing higher over the landscape.

Sawes’ unit is about 40% cheaper to build and deploy than a normal turbine, presenting the opportunity for a 30% lower price for the wind energy it produces. According to a piece in the Beijing Daily, reported on by the South China Morning Post, challenges remain before commercial deployment can begin, including what to do during storms and whether it can compete in communities with an existing coal-power supply.

Biodegradable Plastic Made from Bamboo Is Stronger and Easy to Recycle

Bamboo forest – credit Bady Abbas, via Unsplash

GNN has reported previously on how versatile bamboo is for construction and craft, so it should perhaps be no surprise that researchers in China have found a way to turn this miracle plant into plastic.

While many biodegradable materials have already been developed for replacing lighter, flexible plastic, durable or rigid plastic replacements are few. The kinds of plastic used for tools, car interiors, and appliance exteriors have few if any biodegradable replacements.

Enter Dawei Zhao at Shenyang University of Chemical Technology in China’s far northeast, who has developed a method for turning cellulose from bamboo into a rigid yet biodegradable plastic that outperforms not only alternative biodegradable options but conventional plastic itself in mechanical strength and thermo-mechanical properties.

“Bamboo’s rapid growth makes it a highly renewable resource, providing a sustainable alternative to traditional timber sources, but its current applications are still largely limited to more traditional woven products,” Zhao told New Scientist.

His method takes cellulose from bamboo and subjects it to zinc chloride and a simple acid to break up the complex polysaccharide bonds that hold the plant fiber together. Next, ethanol is added to the resulting soup of smaller molecules, yielding a plastic suitable for injection molding and machining.

One major drawback is the bamboo plastic’s inflexibility, which keeps it out of the full gamut of products that petroleum-based plastics can fulfil. However, rigid plastics are often the ones that remain in the ecosystem longest and are the hardest to recycle, so replacing them still represents a valuable contribution to reducing the overall plastic burden in the environment and in waste streams.

Zhao and his team published a paper on the process and properties of the bamboo plastic in Nature, including a cost analysis which finds that the bioplastic’s recyclability makes it cost-competitive with conventional plastic.

The Subtle Power of Unhearable Sound: Mood and Cognition-Altering Agents

For representational purpose (Image by Gerd Altmann from Pixabay)

Shreyas Kannan, Plaksha University: The human ear can hear frequencies from roughly 20 Hz to 20,000 Hz. In reality, though, we are most sensitive between 1,000 Hz and 4,000 Hz, the range where most natural speech occurs. As frequency decreases, the sound energy (decibels) needed to hear a sound increases, making it effectively "too soft" unless played at a high enough volume. Lower and higher frequencies are therefore difficult to perceive, and frequencies outside the audible range entirely, infrasound below 20 Hz and ultrasound above 20,000 Hz, are simply imperceptible.

These imperceptible sounds, however, have a very perceptible effect. Vic Tandy, a British engineer, believed his laboratory was haunted—until he discovered that a silent 19 Hz sound wave, produced by a fan, was resonating with his eyeballs and triggering shadowy hallucinations. Even though the sound was below the threshold of human hearing, it could still alter mood, physiology, and cognition.

Infrasound and ultrasound can also have indirect subliminal effects. They can very subtly and over long durations of time have a negative or positive effect on the psyche of the listener. Infrasound, although inaudible, can cause a range of adverse effects, including fatigue, sleep disturbance, and cognitive dysfunction.

How does this work, especially for sounds we can’t even hear? Sounds in the ultrasonic range tend to stimulate the emotional centres of the brain, such as the amygdala and hippocampus. One study tracked this and found that sounds containing inaudible high-frequency components induced activation in deep brain structures associated with emotion and reward. This demonstrates a reflexive, unconscious emotional response, positive or negative, to specific bands of sound frequencies.

The issues do not end here. There is also a persistent worry that chronic exposure to ordinary audible sound, not just ultrasonic or infrasonic sound, has long-term effects on the brain. One occupational study observed that symptoms such as chronic fatigue, repeated headaches, and backache were highly associated with low- and mid-octave-band noise exposure among the sampled workers, and that among psychological symptoms, irritability was the one most strongly associated with those frequency bands. In short, even when noise isn’t painfully loud, its frequency content can still degrade physical and mental health over time, which should raise ethical and public health concerns.

These effects, as can be surmised, are highly weaponizable. One security study warns that “smart consumer devices produce possibly imperceptible sound at both high (17–21kHz) and low (60–100Hz) frequencies, at the maximum available volume setting, potentially turning them into acoustic cyber-weapons.”

Consider the physical and systemic effects of long exposure to something that can originate from our own devices, especially given what the infrasonic and ultrasonic bands can potentially do. The same study found that many of the devices tested were capable of reproducing frequencies in both high and low ranges at levels exceeding published guidelines, and that such attacks are often trivial to develop and could in many cases be added to existing malware payloads, making them attractive to adversaries with specific motivations or targets.

One particular patent claims that a 0.5 Hz frequency affects the autonomic nervous system and can produce a variety of effects, including eyelid drooping (ptosis), relaxation and drowsiness, a feeling of pressure on the forehead, visual effects with the eyes closed, stomach sensations, and tenseness (at certain frequencies). It goes on to propose law-enforcement uses: non-lethal crowd control, creating disorientation in standoff situations, and remote manipulation from a distance. It also lists a different set of effects for the 2.5 Hz range.

However, not all sound effects are bad. Sound can be applied in ways that actually help treat mental health issues. One example is binaural beats, a form of imperceptible or subtle auditory stimulation being studied for its effects on mood regulation, anxiety, and depression. Binaural beats influence brainwave activity by playing a slightly different frequency into each ear, creating a perceived third “beat” in the brain, making them a non-invasive, sound-based intervention. A systematic review found positive short-term effects while stressing that further research is needed to determine the full scope of longer-term benefits.
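The mechanism can be sketched numerically. This is an illustrative fragment, not a clinical tool; the 200 Hz and 210 Hz tones are example values chosen so the perceived beat lands at 10 Hz, in the alpha brainwave range.

```python
import math

# Minimal sketch: a binaural beat arises when each ear receives a slightly
# different frequency; the brain perceives their difference as a slow "beat".
f_left, f_right = 200.0, 210.0          # Hz, one pure tone per ear (example values)
beat_frequency = abs(f_right - f_left)  # perceived beat: 10 Hz

def tone(freq_hz, t_seconds):
    """Instantaneous amplitude of a pure sine tone at time t."""
    return math.sin(2 * math.pi * freq_hz * t_seconds)

# Physically mixing the two tones (a monaural beat) shows the same 10 Hz
# envelope that, in the binaural case, the auditory system constructs itself.
samples = [tone(f_left, t / 1000) + tone(f_right, t / 1000) for t in range(100)]
print(round(beat_frequency, 1))  # prints 10.0
```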

It should also be noted that while these frequencies can be used negatively, they can equally be used positively. Playing the right type of sound, be it music or a particular frequency, at a volume too low to be heard tended to elicit a positive response in mood and wellbeing.

From the literature and patent claims, it can be surmised that with the exact know-how, a mapping of which frequency affects a person in which way, someone could be manipulated into feeling a certain way about a topic they might otherwise dislike. Any emotion could be aroused as needed. Furthermore, it can be done through the speakers in everyday devices. An advertisement could play the right sounds to make you view a product more favourably; documentaries could use this to make you feel worse about a topic to heighten their impact; electoral candidates could subtly burnish their image by playing the right sounds at the right time; interviewees could be made to feel uneasy for no ‘explainable’ reason as a form of sabotage. The potential for abuse of the sounds we cannot even hear is extreme.

How can we protect ourselves from these phenomena? It is quite difficult, especially in an age when sounds come at us from everywhere. The answer is to call for scientific transparency, proper protocols to monitor the sounds actually being played, and strictly maintained awareness of one’s surroundings. In this day and age we must learn to listen to sounds that we cannot hear.

Shreyas Kannan is a B.Tech student in Robotics and Cyber-Physical Systems (RCPS) at Plaksha University, and part of its inaugural graduating batch. He has an ardent passion for all things related to movement and propulsion in vehicles, and brings boundless curiosity and energy to projects that make objects move—whether on land, underwater, or in space. From autonomous underwater navigation to aerospace systems, Shreyas is eager to explore and contribute to the frontier of motion-driven technologies.

Blue, green, brown, or something in between – the science of eye colour explained

You’re introduced to someone and your attention catches on their eyes. They might be a rich, earthy brown, a pale blue, or the rare green that shifts with every flicker of light. Eyes have a way of holding us, of sparking recognition or curiosity before a single word is spoken. They are often the first thing we notice about someone, and sometimes the feature we remember most.

Across the world, human eyes span a wide palette. Brown is by far the most common shade, especially in Africa and Asia, while blue is most often seen in northern and eastern Europe. Green is the rarest of all, found in only about 2% of the global population. Hazel eyes add even more diversity, often appearing to shift between green and brown depending on the light.

So, what lies behind these differences?

It’s all in the melanin

The answer rests in the iris, the coloured ring of tissue that surrounds the pupil. Here, a pigment called melanin does most of the work.

Brown eyes contain a high concentration of melanin, which absorbs light and creates their darker appearance. Blue eyes contain very little melanin. Their colour doesn’t come from pigment at all but from the scattering of light within the iris, a physical effect known as the Tyndall effect, a bit like the effect that makes the sky look blue.

In blue eyes, the shorter wavelengths of light (such as blue) are scattered more effectively than longer wavelengths like red or yellow. Due to the low concentration of melanin, less light is absorbed, allowing the scattered blue light to dominate what we perceive. This blue hue results not from pigment but from the way light interacts with the eye’s structure.
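A quick back-of-envelope calculation shows why blue dominates. Assuming Rayleigh-like 1/λ⁴ scaling (small-particle Tyndall scattering behaves broadly similarly) and representative wavelengths of 450 nm for blue and 650 nm for red, both values chosen here purely for illustration:

```python
# Rough illustration of why short wavelengths dominate the scattered light:
# for Rayleigh-type scattering, intensity scales as 1 / wavelength^4.
blue_nm, red_nm = 450, 650   # representative wavelengths, nanometers
ratio = (red_nm / blue_nm) ** 4
print(f"blue light is scattered ~{ratio:.1f}x more strongly than red")
# prints: blue light is scattered ~4.4x more strongly than red
```

With so little melanin to absorb light, that several-fold bias toward scattered blue is enough to set the eye's apparent colour.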

Green eyes result from a balance: a moderate amount of melanin layered with light scattering. Hazel eyes are more complex still. Uneven melanin distribution in the iris creates a mosaic of colour that can shift depending on the surrounding ambient light.

What have genes got to do with it?

The genetics of eye colour is just as fascinating.

For a long time, scientists believed a simple “brown beats blue” model, controlled by a single gene. Research now shows the reality is much more complex. Many genes contribute to determining eye colour. This explains why children in the same family can have dramatically different eye colours, and why two blue-eyed parents can sometimes have a child with green or even light brown eyes.

Eye colour also changes over time. Many babies of European ancestry are born with blue or grey eyes because their melanin levels are still low. As pigment gradually builds up over the first few years of life, those blue eyes may shift to green or brown.

In adulthood, eye colour tends to be more stable, though small changes in appearance are common depending on lighting, clothing, or pupil size. For example, blue-grey eyes can appear very blue, very grey or even a little green depending on ambient light. More permanent shifts are rarer but can occur as people age, or in response to certain medical conditions that affect melanin in the iris.

The real curiosities

Then there are the real curiosities.

Heterochromia, where one eye is a different colour from the other, or one iris contains two distinct colours, is rare but striking. It can be genetic, the result of injury, or linked to specific health conditions. Celebrities such as Kate Bosworth and Mila Kunis are well-known examples. Musician David Bowie’s eyes appeared as different colours because of a permanently dilated pupil after an accident, giving the illusion of heterochromia.

In the end, eye colour is more than just a quirk of genetics and physics. It’s a reminder of how biology and beauty intertwine. Each iris is like a tiny universe: rings of pigment, flecks of gold, or pools of deep brown that catch the light differently every time you look.

Eyes don’t just let us see the world, they also connect us to one another. Whether blue, green, brown, or something in-between, every pair tells a story that’s utterly unique, one of heritage, individuality, and the quiet wonder of being human. The Conversation

Davinia Beaver, Postdoctoral research fellow, Clem Jones Centre for Regenerative Medicine, Bond University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More........

The science behind a freediver’s 29-minute breath hold world record

Croatian freediver Vitomir Maričić. Facebook.com @molchanovs, Instagram.com @maverick2go, Facebook.com @Vitomir Maričić, CC BY 

Most of us can hold our breath for between 30 and 90 seconds.

A few minutes without oxygen can be fatal, so we have an involuntary reflex to breathe.

But freediver Vitomir Maričić recently held his breath for a new world record of 29 minutes and three seconds, lying on the bottom of a 3-metre-deep pool in Croatia.

Vitomir Maričić set a new Guinness World Record for “the longest breath held voluntarily under water using oxygen”.

This is about five minutes longer than the previous world record set in 2021 by another Croatian freediver, Budimir Šobat.

Interestingly, all world records for breath holds are set by freedivers, who are essentially professional breath-holders. They do extensive physical and mental training to hold their breath under water for long periods of time.

So how do freedivers delay a basic human survival response and how was Maričić able to hold his breath about 60 times longer than most people?
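The "about 60 times" figure can be checked with simple arithmetic against the numbers above (29 minutes and 3 seconds versus a typical 30–90 second hold):

```python
# Compare the record with a typical untrained breath hold of 30-90 seconds.
record_s = 29 * 60 + 3  # 29 minutes 3 seconds = 1,743 seconds

for typical_s in (30, 90):
    ratio = record_s / typical_s
    print(f"vs a {typical_s} s hold: about {ratio:.0f}x longer")
```

The roughly 60-fold figure corresponds to the 30-second end of the typical range; even against a 90-second hold, the record is still about 19 times longer.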

Increased lung volumes and oxygen storage

Freedivers do cardiovascular training – physical activity that increases your heart rate, breathing and overall blood flow for a sustained period – and breathwork to increase how much air (and therefore oxygen) they can store in their lungs.

This includes exercise such as swimming, jogging or cycling, and training their diaphragm, the main muscle of breathing.

Diaphragmatic breathing and cardiovascular exercise train the lungs to expand to a larger volume and hold more air.

This means the lungs can store more oxygen and sustain a longer breath hold.

Freedivers can also control their diaphragm and throat muscles to move the stored oxygen from their lungs to their airways. This maximises oxygen uptake into the blood to travel to other parts of the body.

To increase the oxygen in his lungs even more before his world record breath-hold, Maričić inhaled pure (100%) oxygen for ten minutes.

This gave Maričić a larger store of oxygen than if he breathed normal air, which is only about 21% oxygen.

This is classified as an oxygen-assisted breath-hold by Guinness World Records.

Even without extra pure oxygen, Maričić can hold his breath for 10 minutes and 8 seconds.

Resisting the reflex to take another breath

Oxygen is essential for all our cells to function and survive. But it is high carbon dioxide, not low oxygen, that causes the involuntary reflex to breathe.

When cells use oxygen, they produce carbon dioxide, a damaging waste product.

Carbon dioxide can only be removed from our body by breathing it out.

When we hold our breath, the brain senses the build-up in carbon dioxide and triggers us to breathe again.

Freedivers practice holding their breath to desensitise their brains to high carbon dioxide and eventually low oxygen. This delays the involuntary reflex to breathe again.

When someone holds their breath beyond this, they reach a “physiological break-point”. This is when their diaphragm involuntarily contracts to force a breath.

This is physically challenging and only elite freedivers who have learnt to control their diaphragm can continue to hold their breath past this point.

Indeed, Maričić said that holding his breath longer:

got worse and worse physically, especially for my diaphragm, because of the contractions. But mentally I knew I wasn’t going to give up.

Mental focus and control is essential

Those who freedive believe it is not only physical but also a mental discipline.

Freedivers train to manage fear and anxiety and maintain a calm mental state. They practice relaxation techniques such as meditation, breath awareness and mindfulness.

Interestingly, Maričić said:

after the 20-minute mark, everything became easier, at least mentally.

Reduced mental and physical activity, reflected in a very low heart rate, reduces how much oxygen is needed. This makes the stored oxygen last longer.

That is why Maričić achieved this record lying still on the bottom of a pool.

Don’t try this at home

Beyond competitive breath-hold sports, many other people train to hold their breath for hunting and gathering underwater.

For example, ama divers who collect pearls in Japan, and Haenyeo divers from South Korea who harvest seafood.

But there are risks of breath holding.

Maričić described his world record as:

a very advanced stunt done after years of professional training and should not be attempted without proper guidance and safety.

Indeed, both high carbon dioxide and a lack of oxygen can quickly lead to loss of consciousness.

Breathing in pure oxygen can cause acute oxygen toxicity due to free radicals, which are highly reactive chemicals that can damage cells.

Unless you’re trained in breath holding, it’s best to leave this to the professionals. The Conversation

Theresa Larkin, Associate Professor of Medical Sciences, University of Wollongong and Gregory Peoples, Senior Lecturer - Physiology, University of Wollongong

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More........