Raincoat no longer waterproof? A textile scientist explains why – and how to fix it

You pull on your rain jacket, step out into the storm, and within half an hour your undershirt is soaked. The jacket you purchased as “waterproof” seems to have stopped working, and all the marketing claims feel a bit suspect.

In reality, the jacket probably hasn’t failed overnight: a mix of how it’s built, the exact level of water protection it offers, and years of sweat, skin oil and dirt have all played a part.

But there are a few simple ways you can care for your rain jacket to ensure you stay dry, even when it’s pouring.

The science behind rain jackets

Most proper rain jackets are built around a waterproof “membrane” sandwiched inside the fabric. Gore-Tex is the best-known of these technologies: it uses a very thin layer of a polymer called PTFE (polytetrafluoroethylene), or expanded PTFE (ePTFE), which is riddled with microscopic pores.

Those pores are far smaller than liquid water droplets, but big enough for individual water vapour molecules to pass through. So rain on the outside can’t push in, while sweat vapour from your body can escape outwards.
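The size gap is easiest to appreciate as a back-of-the-envelope comparison. The figures below are rough, commonly quoted orders of magnitude, not measurements of any particular fabric:

```python
# Scale comparison behind a microporous membrane such as ePTFE.
# All sizes are rough orders of magnitude assumed for illustration.
PORE_UM = 0.2                # typical ePTFE pore diameter, micrometres
DRIZZLE_DROP_UM = 500.0      # a smallish raindrop, micrometres
VAPOUR_MOLECULE_UM = 0.0004  # a single water molecule, ~0.4 nanometres

print(round(DRIZZLE_DROP_UM / PORE_UM))     # 2500: a drop is ~2,500x wider than a pore
print(round(PORE_UM / VAPOUR_MOLECULE_UM))  # 500: a pore is ~500x wider than a molecule
```

Even granting the roughness of these figures, a raindrop is orders of magnitude too wide to squeeze through a pore, while a lone vapour molecule passes easily.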

Other fabrics use solid, non-porous membranes made from polyurethane or polyester that move water vapour by absorbing it and passing it through the material molecule by molecule rather than via tiny holes. This can make them a bit more tolerant of dirt.

The outer fabric is sometimes treated with a very thin chemical finish that makes water roll off the surface instead of soaking into the fibres – a bit like wax on a car. This finish is known as “Durable Water Repellent” and helps stop the outer layer of the jacket from becoming saturated.

In the past, many of these chemical finishes used “forever chemicals” (PFAS) that repelled both water and oil, but persist in the environment and build up in wildlife and people.

Because of this, brands and regulators have started using alternatives based on silicones or hydrocarbons. These still repel water but are generally less hazardous.

It’s also useful to understand the words you see on labels.

A waterproof jacket is built to stop rain coming through, even in heavy or prolonged downpours, and usually combines a membrane, a chemical finish and fully taped seams.

“Water resistant” means the fabric slows water down and copes with light showers but will eventually let water through. It often relies on a tight weave and a chemical finish but no true membrane.

“Water repellent” just describes that beading effect from the chemical finish. It can apply to both waterproof and non-waterproof fabrics.

Some brands also say “rainproof” or “weatherproof” as a friendlier way of saying “pretty much waterproof”, but there’s rarely a separate test behind those words.

 
The outer fabric of a rain jacket is sometimes treated with a very thin chemical finish that makes water roll off the surface instead of soaking into the fibres. Claudio Schwarz/Unsplash

Why do rain jackets degrade over time?

When you realise your jacket isn’t waterproof anymore, the first thing that has usually gone wrong isn’t the membrane. It’s the chemical finish on the outside.

That ultra-thin surface layer gets scuffed by backpack straps and seat belts, baked by sun, and contaminated by mud, smoke and city grime.

These coatings gradually lose their water-repellent properties through abrasion – and through washing, if harsh detergents or aggressive cycles are used – and bits of them are shed into the environment over time.

Body oils, sunscreen and insect repellent also play a role, as they build up in the fabric over time. Outdoor gear care guides and lab work on waterproof fabrics both point out that these oily contaminants can damage the chemical finish and clog the pores of the membrane, making it harder both for rain to be repelled and for sweat vapour to escape.

Over many years, slow physical ageing also takes a toll. Constant flexing can cause a membrane to thin or develop tiny cracks and the finish to deteriorate. Seam tapes can also start to peel away, especially on shoulders where backpack straps press.

How to keep a jacket waterproof

The single best thing you can do for both your comfort and the planet is to keep a good jacket working for as long as possible, because making new technical fabrics has a significant environmental footprint.

Gentle washing will help extend the life of your rain jacket, as it removes the build up of contamination such as dirt and body oils. Brands and care guides recommend closing zips and Velcro, then washing on a gentle cycle with a cleaner designed for waterproof fabrics or a very mild soap, avoiding normal detergents and softeners that leave residues.

Depending on the type of chemical finish, this coating can be reapplied using spray-on or wash-in products sold commercially. Some finishes can also be reactivated by gentle heat (a low dryer setting or a cool iron): heat makes the water-repelling molecules stand back up after they have been “flattened” by use and contamination.

Although the above will help keep your jacket waterproof, it is best to follow the manufacturer’s care instructions, as they vary with the composition of the fabric.

In any case, it is important to avoid leaving the jacket wet and scrunched up for weeks, and be mindful of heavy sunscreens and repellents.

Carolina Quintero Rodriguez, Senior Lecturer and Program Manager, Bachelor of Fashion (Enterprise) program, RMIT University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Why do nose and ear hairs become longer and thicker as we age?

Christian Moro, Bond University and Charlotte Phelps, Bond University

Growing older often brings unexpected grooming challenges. This is particularly apparent when areas we could safely ignore in our youth start to develop hair.

This includes our nose and ears, where hair grows thicker and longer as we age. But why do hairs in these areas act like this?

The answer predominantly lies in our sex hormones.

Two types of hair

There are two types of hair that grow across our bodies.

Vellus hair is fine and colourless. This hair (also called “peach fuzz”) grows across most of our body, including our arms and neck.

Terminal hair is stiff, thick and darker. It stands up from our skin and is usually very obvious. Adult males have terminal hair on about 90% of their body, with females growing it on about 30% of their bodies.

Terminal hair stands up when we’re cold (giving goosebumps) and helps trap heat to keep us warm. It also protects us from the sun (such as hair on our scalp), and keeps dust and dirt out of our eyes through eyebrows and eyelashes.

As vellus hair is smaller, thinner and colourless, it is not usually an aesthetic problem (although it can be altered in some diseases). Instead, it is the terminal hair that is often noticed, and the primary target of our razor.

The normal process of hair development involves a growth phase (anagen), follicle-shrinking phase (catagen), and then a short resting phase (telogen) before the hair falls out and is replaced as the cycle begins again. Some 90% of the hair on our body is in the growth phase at any given time.

Nose, ear, eyelash and eyebrow hairs don’t usually grow too long. This is because the growth phase of the follicles only lasts about 100–150 days, meaning there is a limit to how long they can get.

By contrast, the hair on your head has a growth phase that lasts several years, so it can grow to more than one metre in length if you don’t get it cut.

Why do we have hair in our nose and ears?

We have about 120 hairs growing in each of our nasal cavities, with an average length of about 1 centimetre.

As you breathe through your nostrils, the hair in your nose works with the mucus to block and collect dust, pollen and other particles that could make their way to your lungs.

The hair in the ears also plays a protective role, trapping foreign objects and working with the earwax to facilitate self-cleaning processes.

What is the effect of ageing?

Androgens are a group of sex hormones that play a key role in puberty, development, and sexual health. The most common androgen is testosterone.

These androgens influence hair growth, and are the key to understanding why we have longer and thicker hairs in our nose and ears.

Hairs in different parts of the body respond to androgens differently. Some hairs are stimulated at puberty (such as pubic hair and facial hair in males), while others, such as the eyelashes, don’t respond to androgens at all. Others again respond very slowly – ear canal hair can take up to 30 years to increase in size.

Females have lower levels of androgens in the body, so major hair growth changes are more localised to the underarms and pubic regions.

We don’t have much data to support various conclusions about hair growth in later life, as most studies have focused on why we lose hair (such as balding) rather than why we have too much.

Nonetheless, there are still some hypotheses about why we grow more ear and nose hair as we age.

  1. As we age, the body is exposed to androgens for a long time. This prolonged exposure makes some parts of the body more sensitive to testosterone, potentially stimulating the growth of hairs.

  2. Over time, and long-term exposure to testosterone, some of the fine vellus hairs may undergo a conversion and become the darker, longer terminal hairs. This terminal hair then sticks out of our noses and ears.

  3. Alongside increased levels of androgens as we go through puberty, a protein called SHBG (sex hormone binding globulin) is also released. This protein helps control the amount of testosterone and estrogen reaching your tissues. During ageing, SHBG levels may decrease faster than androgen levels, leaving testosterone free to stimulate ear and nose hair growth.

  4. Hair simply changes with age. This can result in changes in colour, thinning, and follicle alterations. There might be variations occurring in the follicles that respond to our body’s changing environment, stimulating longer hair growth.

Most of the impact of hairy ears and noses is observed in males, as they have larger amounts of testosterone.

Should we be worried?

It’s not usually a problem. Having a hairy ear (auricular hypertrichosis) does not appear to impact hearing at all. Note that if you are using hearing aids, excessive hair can impact their effectiveness, so in these rarer cases it is worth having a chat with your doctor.

The largest issue appears to be the appearance of these hairs, which can make some people self-conscious.

To address this, avoid plucking hairs out (such as with tweezers), as this can lead to infections, ingrown hairs and inflammation.

Instead, it is safest to reach for the trimmers (or employ laser hair removal processes) to clean up the area a little.

Christian Moro, Associate Professor of Science & Medicine, Bond University and Charlotte Phelps, Senior Teaching Fellow in Medicine, Bond University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Winter Olympians often compete in freezing temperatures – physiology and advances in materials science help keep them warm

Cara Ocobock, University of Notre Dame and Gabriel R. Burks, University of Notre Dame

The Winter Olympics and Paralympics are upon us once again. This year the games come to Milan and Cortina d’Ampezzo, Italy, where weather forecasts are predicting temperatures in the upper 30s to mid-40s Fahrenheit (1 to 10 degrees Celsius).

These temperatures are a good deal warmer than one might expect for winter, particularly in a mountainous area. They’re warm enough that athletes will need to adjust how they are preparing their equipment for competition, yet still cold enough to affect the physiology of athletes and spectators alike.

As a biological anthropologist and a materials scientist, we’re interested in how the human body responds to different conditions and how materials can help people improve performance and address health challenges. Both of these components will play a key role for Olympic athletes hoping to perform at their peak in Italy.

Athletes in the cold

The athletes taking part in outdoor events are no strangers to cold and unpredictable weather conditions. It is an inherent part of their sports. Though it is highly unlikely the athletes this year will be exposed to extreme cold, the outdoor conditions will still affect their performance.

One concern is dehydration, which can be less noticeable, as sweating is typically less frequent and intense in cold conditions. However, cold temperatures also mean lower relative humidity. This dry air means the body needs to use more of its own water to moisten the air before it reaches the delicate lungs. Athletes breathing heavily during competition are losing more body water that way than they would in more temperate conditions.

When cold, the body also tends to narrow its blood vessels to better maintain core body temperature. Narrower blood vessels lose less heat to the cooler air, but this results in the body pushing more fluid out of the circulatory system and toward the kidneys, which then increases urine output.

Though the athletes may not be sweating to the same degree as they would in warmer temperatures, they are still sweating. Athletes dress to improve their performance and protect themselves from cold. The layers of clothing and material used in conjunction with the heat produced from physical activity can lead to sweating and create a hot, wet space between the athlete’s body and what they are wearing.

This space is not only another site of water loss, but also a potential problem for athletes who need to take part in different rounds or runs for their competition – for example, the initial heats for skiing or snowboarding.

These athletes are physically active and working up a sweat, and then they wait around for their next heat. During this waiting period, that damp layer of sweat will make them more vulnerable to body heat loss and cold injury such as frostbite or hypothermia. Athletes must stay warm between rounds of competition.

Science of winter apparel

Staying warm is all about materials selection and construction.

Many apparel companies adopt a three-layer system approach to keep wearers warm, dry and comfortable. Specifically, there is a bottom layer – in direct contact with the skin – that is typically composed of a moisture-wicking synthetic fabric such as nylon or a natural fabric such as wool.

The second layer in winter apparel is an insulating one that is generally porous to trap warm air generated by the body and to slow heat loss. Great options for this are down and fleece.

The final layer is the external protection layer, which keeps you dry and protected from the elements. This layer needs to be waterproof and breathable to keep the inner insulating layers dry but to simultaneously let out sweat. Polyester and acrylic are good options here, as they are lightweight, durable and resist moisture.

The gear athletes wear can be customized to their needs. For example, the synthetic fabrics used on the innermost layer are versatile, and engineers can introduce new properties and functionalities for users. Adding a specific coating to a fabric like nylon can give it new properties – such as wind and water resistance.

Frequently, both the synthetic fibers and the coatings materials scientists add to them are made up of polymers, which are long chains of molecules. They can be human-made and petroleum-based, like polyethylene trash bags, polyester and Teflon. But polymers can also be natural: your DNA and the proteins in your body are examples of polymers.

In addition to polymer technology, battery-powered heated jackets are a more conventional option.

Smart materials

As an added bonus, there is also a class of smart materials called phase change materials that are made of polymers and composite materials. They automatically absorb excess body heat when too much is created and release it again to the body when needed to passively regulate your body temperature. These materials release or take in heat as they transition between solid and liquid states and respond to the body’s natural cues.

Phase change materials are less about warming you up. Instead, they work by keeping your temperature balanced.

While these aren’t commonly used in the gear athletes wear, NASA has been experimenting with them for a long time, and many commercially available products leverage this technology. Cooling fabrics, such as bedding and towels, are often made of phase change textiles because they do not overheat you.
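The buffering effect comes from latent heat: while the material melts, it soaks up energy at a near-constant temperature. A minimal back-of-the-envelope sketch, using generic paraffin-style figures assumed purely for illustration, not taken from any particular product:

```python
# Rough sketch of how a phase change material (PCM) buffers heat.
# All numbers are generic textbook-style assumptions, not product specs.
LATENT_HEAT_J_PER_G = 200.0  # paraffin-type PCMs: roughly 150-250 J/g
pcm_mass_g = 50.0            # mass of PCM embedded in a garment panel
excess_heat_w = 100.0        # surplus body heat routed into the panel, watts

# Heat absorbed while the PCM melts, and how long it can buffer that surplus:
energy_absorbed_j = pcm_mass_g * LATENT_HEAT_J_PER_G
buffer_time_s = energy_absorbed_j / excess_heat_w

print(round(energy_absorbed_j))  # 10000 J soaked up during melting
print(round(buffer_time_s))      # 100 s of buffering before fully melted
```

Once fully melted, the panel stops absorbing; as the wearer cools, the PCM refreezes and releases the same heat back to the body.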

Risks to the rest of us

Athletes are not the only ones at risk for cold injury.

While most of us will be watching the Games with the comfort of indoor heating, thousands of people and support staff will be watching or working those outdoor events in person. Unlike the athletes, these individuals will not have the added benefit of their bodies producing extra heat from exercise. The nonathletes in attendance will be at greater risk in the cold.

If you’re planning to spectate or work at an event this winter, drink more water than usual and time your bathroom breaks accordingly. Plan to wear several layers of clothing you can add and remove as needed, and pay special attention to the more vulnerable parts of the body, such as the hands, feet and nose.

Colder temperatures elicit a variety of metabolic responses in the body. One example is shivering, caused by tiny muscle contractions that produce heat. Your body’s brown adipose tissue – a type of fat – also becomes active, burning energy to produce heat rather than storing it.

Both of these processes burn extra calories, so expect to be hungrier if you’re out in the cold for a while. Trips to the bathroom or to get food are a welcome opportunity to warm up – especially those hands and feet.

It is easy to think of Olympians as exceptional athletes at the mercy of Mother Nature’s cold wrath. However, both the human body’s natural physiology and the impressive advances scientists have made in winter apparel technology will keep these athletes warm and performing at their best.

Cara Ocobock, Assistant Professor of Anthropology, University of Notre Dame and Gabriel R. Burks, Assistant Professor of Chemical and Biomolecular Engineering, University of Notre Dame

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The science of weight loss – and why your brain is wired to keep you fat

When you lose weight, your body reacts as if it were a threat to survival. pexels/pavel danilyuk, CC BY
Valdemar Brimnes Ingemann Johansen, University of Copenhagen and Christoffer Clemmensen, University of Copenhagen

For decades, we’ve been told that weight loss is a matter of willpower: eat less, move more. But modern science has proven this isn’t actually the case.

More on that in a moment. But first, let’s go back a few hundred thousand years to examine our early human ancestors. Because we can blame a lot of the difficulty we have with weight loss today on these distant predecessors – maybe the ultimate case of “blame the parents”.

For our early ancestors, body fat was a lifeline: too little could mean starvation, too much could slow you down. Over time, the human body became remarkably good at guarding its energy reserves through complex biological defences wired into the brain. But in a world where food is everywhere and movement is optional, those same systems that once helped us survive uncertainty now make it difficult to lose weight.

When someone loses weight, the body reacts as if it were a threat to survival. Hunger hormones surge, food cravings intensify and energy expenditure drops. These adaptations evolved to optimise energy storage and usage in environments with fluctuating food availability. But today, with our easy access to cheap, calorie-dense junk food and sedentary routines, those same adaptations that once helped us to survive can cause us a few issues.

As we found in our recent research, our brains also have powerful mechanisms for defending body weight – and can sort of “remember” what that weight used to be. For our ancient ancestors, this meant that if weight was lost in hard times, their bodies would be able to “get back” to their usual weight during better times.

But for us modern humans, it means that our brains and bodies remember any excess weight gain as though our survival and lives depend upon it. So in effect, once the body has been heavier, the brain comes to treat that higher weight as the new normal – a level it feels compelled to defend.

The fact that our bodies have this capacity to “remember” our previous heavier weight helps to explain why so many people regain weight after dieting. But as the science shows, this weight regain is not due to a lack of discipline; rather, our biology is doing exactly what it evolved to do: defend against weight loss.

Hacking biology

This is where weight-loss medications such as Wegovy and Mounjaro have offered fresh hope. They work by mimicking gut hormones that tell the brain to curb appetite.

But not everyone responds well to such drugs. For some, the side effects can make them difficult to stick with, and for others, the drugs don’t seem to lead to weight loss at all. It’s also often the case that once treatment stops, biology reasserts itself – and the lost weight returns.

Advances in obesity and metabolism research may make it possible for future therapies to turn down the signals that drive the body back to its original weight, even beyond the treatment period.

Research is also showing that good health isn’t the same thing as “a good weight”. As in, exercise, good sleep, balanced nutrition, and mental wellbeing can all improve heart and metabolic health, even if the number on the scales barely moves.

A whole society approach

Of course, obesity isn’t just an individual problem – it takes a society-wide approach to truly tackle the root causes. And research suggests that a number of preventative measures might make a difference – things such as investing in healthier school meals, reducing the marketing of junk food to children, designing neighbourhoods where walking and cycling are prioritised over cars, and restaurants having standardised food portions.

Scientists are also paying close attention to key early-life stages – from pregnancy to around the age of seven – when a child’s weight regulation system is particularly malleable.

Indeed, research has found that things like what parents eat, how infants are fed, and early lifestyle habits can all shape how the brain controls appetite and fat storage for years to come.

If you’re looking to lose weight, there are still things you can do – mainly by focusing less on crash diets and more on sustainable habits that support overall wellbeing. Prioritising sleep helps regulate appetite, for example, while regular activity – even walking – can improve your blood sugar levels and heart health.

The bottom line though is that obesity is not a personal failure, but rather a biological condition shaped by our brains, our genes, and the environments we live in. The good news is that advances in neuroscience and pharmacology are offering new opportunities in terms of treatments, while prevention strategies can shift the landscape for future generations.

So if you’ve struggled to lose weight and keep it off, know that you’re not alone, and it’s not your fault. The brain is a formidable opponent. But with science, medicine and smarter policies, we’re beginning to change the rules of the game.


This article was commissioned as part of a partnership between Videnskab.dk and The Conversation. You can read the Danish version of this article here.

Valdemar Brimnes Ingemann Johansen, PhD Fellow in the Faculty of Health and Medical Sciences, University of Copenhagen and Christoffer Clemmensen, Associate Professor and Group Leader, Novo Nordisk Foundation Center for Basic Metabolic Research, University of Copenhagen

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Who invented the light bulb?

Ernest Freeberg, University of Tennessee

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


Who invented the light bulb? – Preben, age 5, New York City


When people name the most important inventions in history, light bulbs are usually on the list. They were much safer than earlier light sources, and they made more activities, for both work and play, possible after the Sun went down.

More than a century after its invention, illustrators still use a lit bulb to symbolize a great idea. Credit typically goes to inventor and entrepreneur Thomas Edison, who created the first commercial light and power system in the United States.

But as a historian and author of a book about how electric lighting changed the U.S., I know that the actual story is more complicated and interesting. It shows that complex inventions are not created by a single genius, no matter how talented he or she may be, but by many creative minds and hands working on the same problem.

Thomas Edison didn’t invent the basic design of the incandescent light bulb, but he made it reliable and commercially viable.

Making light − and delivering it

In the 1870s, Edison raced against other inventors to find a way of producing light from electric current. Americans were keen to give up their gas and kerosene lamps for something that promised to be cleaner and safer. Candles offered little light and posed a fire hazard. Some customers in cities had brighter gas lamps, but they were expensive, hard to operate and polluted the air.

When Edison began working on the challenge, he learned from many other inventors’ ideas and failed experiments. They all were trying to figure out how to send a current through a thin carbon thread encased in glass, making it hot enough to glow without burning out.

In England, for example, chemist Joseph Swan patented an incandescent bulb and lit his own house in 1878. Then in 1881, at a great exhibition on electricity in Paris, Edison and several other inventors demonstrated their light bulbs.

Edison’s version proved to be the brightest and longest-lasting. In 1882 he connected it to a full working system that lit up dozens of homes and offices in downtown Manhattan.

But Edison’s bulb was just one piece of a much more complicated system that included an efficient dynamo – the powerful machine that generated electricity – plus a network of underground wires and new types of lamps. Edison also created the meter, a device that measured how much electricity each household used, so that he could tell how much to charge his customers.

Edison’s invention wasn’t just a science experiment – it was a commercial product that many people proved eager to buy.

Inventing an invention factory

As I show in my book, Edison did not solve these many technical challenges on his own.

At his farmhouse laboratory in Menlo Park, New Jersey, Edison hired a team of skilled technicians and trained scientists, and he filled his lab with every possible tool and material. He liked to boast that he had only a fourth-grade education, but he knew enough to recruit men who had the skills he lacked. Edison also convinced banker J.P. Morgan and other investors to provide financial backing to pay for his experiments and bring them to market.

Historians often say that Edison’s greatest invention was this collaborative workshop, which he called an “invention factory.” It was capable of launching amazing new machines on a regular basis. Edison set the agenda for its work – a role that earned him the nickname “the wizard of Menlo Park.”

Here was the beginning of what we now call “research and development” – the network of universities and laboratories that produce technological breakthroughs today, ranging from lifesaving vaccines to the internet, as well as many improvements in the electric lights we use now.

Sparking an electric revolution

Many people found creative ways to use Edison’s light bulb. Factory owners and office managers installed electric light to extend the workday past sunset. Others used it for fun purposes, such as movie marquees, amusement parks, store windows, Christmas trees and evening baseball games.

Theater directors and photographers adapted the light to their arts. Doctors used small bulbs to peer inside the body during surgery. Architects and city planners, sign-makers and deep-sea explorers adapted the new light for all kinds of specialized uses. Through their actions, humanity’s relationship to day and night was reinvented – often in ways that Edison never could have anticipated.

Today people take for granted that they can have all the light they need at the flick of a switch. But that luxury requires a network of power stations, transmission lines and utility poles, managed by teams of trained engineers and electricians. To deliver it, electric power companies grew into an industry monitored by insurance companies and public utility regulators.

Edison’s first fragile light bulbs were just one early step in the electric revolution that has helped create today’s richly illuminated world.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Ernest Freeberg, Professor of History, University of Tennessee

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Why do giraffes have such long legs? Animal simulations reveal a surprising answer

If you’ve ever wondered why giraffes have such long necks, the answer seems clear: they let the animals reach succulent leaves atop tall acacia trees in Africa.

Only giraffes have direct access to those leaves, while smaller mammals must compete with one another near the ground. This exclusive food source appears to allow the giraffe to breed throughout the year and to survive droughts better than shorter species.

But the long neck comes at a high cost. The giraffe’s heart must produce enough pressure to pump its blood a couple of metres up to its head. The blood pressure of an adult giraffe is typically over 200 mm Hg – more than twice that of most mammals.

As a result, the heart of a resting giraffe uses more energy than the entire body of a resting human, and indeed more energy than the heart of any other mammal of comparable size. However, as we show in a new study published in the Journal of Experimental Biology, the giraffe’s heart has some unrecognised helpers in its battle against gravity: the animal’s long, long legs.

Meet the ‘elaffe’

In our new study, we quantified the energy cost of pumping blood for a typical adult giraffe and compared it to what it would be in an imaginary animal with short legs but a longer neck to reach the same treetop height.

This beast was a Frankenstein-style combination of the body of a common African eland and the neck of a giraffe. We called it an “elaffe”.

We found the animal would spend a whopping 21% of its total energy budget on powering its heart, compared with 16% in the giraffe and 6.7% in humans.

By raising its heart closer to its head by means of long legs, the giraffe “saves” a net 5% of the energy it takes in from food. Over the course of a year, this energy saving would add up to more than 1.5 tonnes of food – which could make the difference between life and death on the African savannah.

How giraffes work

In his book How Giraffes Work, zoologist Graham Mitchell reveals that the ancestors of giraffes had long legs before they evolved long necks.

This makes sense from an energy point of view. Long legs make the heart’s job easier, while long necks make it work harder.

However, the evolution of long legs came with a price of its own. Giraffes are forced to splay their forelegs while drinking, which makes them slow and awkward to rise and escape if a predator should appear.

Statistics show giraffes are the most likely of all prey mammals to leave a water hole without getting a drink.

How long can a neck be?

In life, the Giraffatitan dinosaur would most likely have been unable to lift its head this high. Shadowgate / Wikimedia, CC BY

The energy cost of the heart increases in direct proportion to the height of the neck, so there must be a limit. A sauropod dinosaur, the Giraffatitan, towers 13 metres above the floor of the Berlin Natural History Museum.

Its neck is 8.5 m high, which would require a blood pressure of about 770 mm Hg if it were to get blood to its head – almost eight times what we see in the average mammal. This is implausible because the heart’s energy cost to pump that blood would have exceeded the energy cost of the entire rest of the body.
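
The hydrostatic arithmetic behind these pressure figures can be checked with a short sketch (the blood density and unit-conversion constants are generic textbook values, not numbers from the study):

```python
# Hydrostatic pressure needed to raise a column of blood to a given height.
# Round-number textbook assumptions, not values taken from the study:
RHO_BLOOD = 1050        # blood density, kg/m^3
G = 9.81                # gravitational acceleration, m/s^2
PA_PER_MMHG = 133.322   # pascals per millimetre of mercury

def pressure_mmhg(height_m):
    """Extra pressure (mm Hg) needed to lift blood height_m above the heart."""
    return RHO_BLOOD * G * height_m / PA_PER_MMHG

# A giraffe head roughly 2 m above the heart needs about 150 mm Hg of lift
# alone, before any perfusion pressure at the head is added:
giraffe_lift = pressure_mmhg(2.0)

# A head 8.5 m above the heart would need roughly 660 mm Hg of lift,
# consistent with the ~770 mm Hg total once perfusion pressure is included:
sauropod_lift = pressure_mmhg(8.5)
```

Because the pressure scales linearly with the height of the column, the heart’s pumping cost rises in direct proportion to neck height – which is exactly the relationship the study exploits.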

Sauropod dinosaurs could not lift their heads that high without passing out. In fact, it is unlikely that any land animal in history could exceed the height of an adult male giraffe.

Roger S. Seymour, Professor Emeritus of Physiology, University of Adelaide and Edward Snelling, Faculty of Veterinary Science, University of Pretoria

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Why Do Animals Play? Because They Need To Play – Just Like Children Do

Photo by Tambako The Jaguar, CC license

As much as it’s a time for growing and learning, childhood is also a time for unabashed joy. Pastimes like careening down a snowy hillside on your sled, flying off a rope swing into a cool lake on a hot summer day, or even just a game of catch are part and parcel of growing up.

But the joys of playtime aren’t just reserved for human kids—animal offspring are just as likely to get into the act as well, and some of their activities are startlingly similar to our own.

Young ravens hold body-surfing “competitions” down the slopes of wintery rooftops; juvenile elephants create impromptu waterslides along muddy riverbanks; herring gulls engage in their own version of airborne hacky sack, substituting seashells for bean-filled projectiles.

Scientists believe that for certain animal species, some fun and games is strictly that—play for the sake of play—but as with humans, other forms of diversion are preparing youngsters for the rigors of adulthood.

“Play is essential to development because it contributes to the cognitive, physical, social, and emotional well-being of children and youth,” wrote Kenneth R. Ginsburg in the American Journal of Pediatrics. “Play also offers an ideal opportunity for parents to engage fully with their children.”

Those same tenets, it seems, hold true in the animal kingdom as well.

“Horses…are known to engage in play almost as soon as they are born. Once they can walk, they immediately start to gallop, frolic and buck, again, honing the motor skills they may need when they’re mature,” notes BBC Earth.

Play with purpose

But along with social and motor skills, play also teaches animals essential hunting and survival skills.

Inge Wallmrod

The antics of cute cavorting kittens have spawned a myriad of viral videos, but whether it’s an opportunity to take down an errant mouse or to avoid harm in the face of unexpected danger, those ninja-like moves may in fact be helping kittens learn to be ready when life hands them a surprise.

Even natural-born predators, such as kestrels, use play to hone their hunting skills by practicing with targets that look like real prey when they’re young.

In the oceans, dolphins chase underwater air rings to fine-tune their sonar skills.

And while it’s unclear why bear cubs are so quintessentially playful, zoologists believe at least some of their shenanigans have a more serious purpose that aids in their survival as adults.

One of the most important teaching aspects of play is socialization. These days, for human kids, that usually means the basics like learning to share, teamwork, and knowing boundaries.

For animals, especially those that live in packs, flocks, or herds, play (often in the form of play fighting) imparts an understanding of where each animal fits into the community hierarchy.

In ways that are remarkably similar to the training children of traditional tribal cultures receive, it is through the rules of play that lion cubs, kangaroo joeys, and wolf pups discover and establish the roles they’ll be expected to perform as adults.

But for animals, not all socializing play is about fighting or establishing dominance. Some of it’s about learning to be better parents—and that involves playing with dolls. While they might lack a perambulator and a fancy wardrobe, female chimpanzees are known to lavish their doll babies with love and emulate their own mothers’ attentive care.

So whether it’s frolicking in the pasture, hanging from a tree, or rollicking in the surf, it seems that play will always be an intrinsic—and fun—part of both human and animal development. And we’re pretty sure when those ninja-kitten TikTok stars stop climbing that curtain, they’ll be thrilled to hear about it.

A guide: Uranium and the nuclear fuel cycle


Yellowcake (Image: Dean Calma/IAEA)

The nuclear fuel cycle is the series of industrial processes that turns uranium into electricity. Claire Maden takes a look at the steps that make up the cycle, the major players and the potential pinch-points.

The nuclear fuel cycle starts with the mining of uranium ore and ends with the disposal of nuclear waste. (Ore is simply the naturally occurring material from which a mineral or minerals of economic value can be extracted).

We talk about the front end of the fuel cycle - that is, the processes needed to mine the ore, extract uranium from it, refine it, and turn it into a fuel assembly that can be loaded into a nuclear reactor - and the back end of the fuel cycle - what happens to the fuel after it's been used. If the used fuel is treated as waste, and disposed of, this is known as an "open" fuel cycle. It can also be reprocessed to recover uranium and other fissile materials which can be reused in what is known as a "closed" fuel cycle.

The World Nuclear Association's Information Library has a detailed overview of the fuel cycle here. But in a nutshell, the front end of the fuel cycle is made up of mining and milling, conversion, enrichment and fuel fabrication. Fuel then spends typically about three years inside a reactor, after which it may go into temporary storage before reprocessing, and recycling before the waste produced is disposed of - these steps are the back end of the fuel cycle.

The processes that make up the fuel cycle are carried out by companies all over the world. Some companies specialise in one particular area or service; some offer services in several areas of the fuel cycle. Some are state-owned, some are in the private sector. Underpinning all these separate offerings is the transport sector to get the materials to where they need to be - and overarching all of it is the global market for nuclear fuel and fuel cycle services.

(Image: World Nuclear Association)


How do they do it?


Let's start at the very front of the front end: uranium mining.

Depending on the type of mineralisation and the geological setting, uranium can be mined by open pit or underground mining methods, or by dissolving and recovering it via wells. This is known as in-situ recovery - ISR - or in-situ leaching, and is now the most widely used method: Kazakhstan produces more uranium than any other country, and all by in-situ methods.

Uranium mined by conventional methods is recovered at a mill where the ore is crushed, ground and then treated with sulphuric acid (or a strong alkaline solution, depending on the circumstances) to dissolve the uranium oxides, a process known as leaching.

Whether the uranium was leached in-situ or in a mill, the next stage of the process is similar for both routes: the uranium is separated by ion exchange.

Ion exchange is a method of removing dissolved uranium ions from a solution using a specially selected resin or polymer. The uranium ions bind reversibly to the resin, while impurities are washed away. The uranium is then stripped from the resin into another solution from which it is precipitated, dried and packed, usually as uranium oxide concentrate (U3O8) powder - often referred to as "yellowcake".

More than a dozen countries produce uranium, although about two thirds of world production comes from mines in three countries - Kazakhstan, Canada and Australia. Namibia, Niger and Uzbekistan are also significant producers.

The next stage in the process is conversion - a chemical process to refine the U3O8 to uranium dioxide (UO2), which can then be converted into uranium hexafluoride (UF6) gas. This is the raw material for the next stage of the cycle: enrichment.

Unenriched, or natural, uranium contains about 0.7% of the fissile uranium-235 (U-235) isotope. ("Fissile" means it's capable of undergoing the fission process by which energy is produced in a nuclear reactor). The rest is the non-fissile uranium-238 isotope. Most nuclear reactors need fuel containing between 3.5% and 5% U-235. This is also known as low-enriched uranium, or LEU. Advanced reactor designs that are now being developed - and many small modular reactors - will require higher enrichments still. This material, containing between 5% and 20% U-235, is known as high-assay low-enriched uranium, or HALEU. And some reactors - for example the Canadian-designed Candu - use natural uranium as their fuel and don’t require enrichment services. But more on that later.

Enrichment increases the concentration of the fissile isotope by passing the gaseous UF6 through gas centrifuges, in which a fast spinning rotor inside a vacuum casing makes use of the very slight difference in mass between the fissile and non-fissile isotopes to separate them. As the rotor spins, the concentration of molecules containing heavier, non-fissile, isotopes near the outer wall of the cylinder increases, with a corresponding increase in the concentration of molecules containing the lighter U-235 isotope towards the centre. World Nuclear Association’s information paper on uranium enrichment contains more details about the enrichment process and technology.
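
To give a feel for the material balance involved, the standard separative-work (SWU) formulas from any nuclear engineering textbook can be sketched as follows. This is a generic illustration, not data for any particular plant: the 0.711% natural assay is standard, and the 0.25% tails assay is an assumed typical value.

```python
from math import log

def value_fn(x):
    """Separation potential V(x) for an assay expressed as a fraction x."""
    return (2 * x - 1) * log(x / (1 - x))

def enrichment_requirements(product_kg, xp, xf=0.00711, xt=0.0025):
    """Natural-uranium feed (kg) and separative work (SWU) needed to make
    product_kg of uranium at product assay xp, from feed assay xf,
    leaving tails at assay xt (all assays as fractions)."""
    feed_kg = product_kg * (xp - xt) / (xf - xt)   # mass balance on U-235
    tails_kg = feed_kg - product_kg
    swu = (product_kg * value_fn(xp)
           + tails_kg * value_fn(xt)
           - feed_kg * value_fn(xf))
    return feed_kg, swu

# Roughly what it takes to make 1 kg of 5%-enriched LEU from natural uranium:
feed, swu = enrichment_requirements(1.0, 0.05)
```

With these assumptions, each kilogram of 5%-enriched product requires about 10 kg of natural uranium feed and roughly 8 SWU - which is why enrichment capacity, measured in SWU, is such a closely watched number in the fuel market.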

Enriched uranium is then reconverted from the fluoride to the oxide - a powder - for fabrication into nuclear fuel assemblies.

So that's the front end of the fuel cycle. Then, there is the back end: the management of the used fuel after its removal from a nuclear reactor. This might be reprocessed to recover fissile and fertile materials in order to provide fresh fuel for existing and future nuclear power plants.

In-situ recovery (in-situ leach) operations in Kazakhstan (Image: Kazatomprom)

Who, where and when

That's a pared-down look at the processes that make up the front end of the fuel cycle - the "how" of getting uranium from the ground and into the reactor. But how does that work on a global scale when much of the world's uranium is produced in countries that do not (yet) use nuclear power? And that brings us to: the market.

The players in the nuclear fuel market are the producers and suppliers (the uranium miners, converters, enrichers and fuel fabricators), the consumers of nuclear fuel (nuclear utilities, both public and privately owned), and various other participants such as agents, traders, investors, intermediaries and governments.

As well as the uranium, there is also the market for the services needed to turn it into fuel assemblies ready for loading into a power plant. And the nuclear fuel cycle's international dimension means that uranium mined in Australia, for example, may be converted in Canada, enriched in the UK and fabricated in Sweden, for a reactor in South Africa. In practice, nuclear materials are often exchanged - swapped - to avoid the need to transport materials from place to place as they go through the various processing stages in the nuclear fuel cycle.

Uranium is traded in two ways: the spot market, for which prices are reported daily, and mid- to long-term contracts, sometimes referred to as the term market. Utilities buy some uranium on the spot market - but so do players from the financial community. In recent years, such investors have been buying physical stocks of uranium for investment purposes.

Most uranium trade is via 3-15 year long-term contracts with producers selling directly to utilities at a higher price than the spot market - although prices specified in term contracts tend to be tied to the spot price at the time of delivery. And like all mineral commodity markets, the uranium market tends to be cyclical, with prices that rise and fall depending on demand and perceptions of scarcity.

The spot market in uranium is a physical market, with traders, brokers, producers and utilities acting bilaterally. Unlike many other commodities such as gold or oil, there is no formal exchange for uranium. Uranium price indicators are developed and published by a small number of private business organisations, notably UxC, LLC and Tradetech, both of which have long-running price series.

Likewise, conversion and enrichment services are bought and sold on both spot and term contracts, but fuel fabrication services are not procured in quite the same way. Fuel assemblies are specifically designed for particular types of reactors and are made to exacting standards and regulatory requirements. In the words of World Nuclear Association's flagship fuel cycle report, nuclear fuel is not a fungible commodity, but a high-tech product accompanied by specialist support.

Drums of uranium from Cameco's Key Lake mill are transported to the company's facilities at Blind River, Ontario, for further processing (Image: Cameco)

Bottlenecks and challenges

Uranium is mined and milled at many sites around the world, but the subsequent stages of the fuel cycle are carried out in a limited number of specialised facilities.

Anyone unfamiliar with the sector might wonder why all the different stages of mining, enrichment, conversion and fabrication are not done at the same location. Simply put, conversion and enrichment services tend to be centralised because of the specialised nature and the sheer scale of the plants, and also because of the international regime to prevent the risk of nuclear weapons proliferation.

Commercial conversion plants are found in Canada, China, France, Russia and the USA.

Uranium enrichment is strategically sensitive from a non-proliferation standpoint so there are strict international controls to ensure that civilian enrichment plants are not used to produce uranium of much higher enrichment levels (90% U-235 and above) that could be used in nuclear weapons. Enrichment is also very capital intensive. For these reasons, there are relatively few commercial enrichment suppliers operating a limited number of facilities worldwide.

There are three major enrichment producers at present: Orano, Rosatom and Urenco, which operate large commercial enrichment plants in France, Germany, the Netherlands, the UK, the USA and Russia. CNNC is a major domestic supplier in China.

So the availability of capacity, particularly in conversion and enrichment, can potentially lead to bottlenecks and challenges to the nuclear fuel supply chain. Likewise, interruptions to transport routes and geopolitical issues can also potentially impact the supply of nuclear materials. For example, current US enrichment capacity is not sufficient to fulfil all the requirements of its domestic nuclear power plants, and the USA relies on overseas enrichment services. But in 2024, US legislation was enacted banning the import of Russian-produced LEU until the end of 2040, with Russia placing tit-for-tat restrictions on exports of the material to the USA.

The fabrication of that LEU into reactor fuel is the last step in the process of turning uranium into nuclear fuel rods. Fuel rods are batched into assemblies that are specifically designed for particular types of reactors and are made to exacting standards by specialist companies. Most of the main fuel fabricators are also reactor vendors (or owned by them), and they usually supply the initial cores and early reloads for reactors built to their own designs. The World Nuclear Association information paper on Nuclear Fuel and its Fabrication gives a deeper dive into this sector.

So - that’s an introduction to the nuclear fuel cycle - and we haven't even touched on the so-called back end, which is what happens to that fuel after it has spent around three years in the reactor core generating electricity, and the ways in which used fuel could be recycled to continue providing energy for years to come.

Melting Antarctic ice will slow the world’s strongest ocean current – and the global consequences are profound

Flowing clockwise around Antarctica, the Antarctic Circumpolar Current is the strongest ocean current on the planet. It’s five times stronger than the Gulf Stream and more than 100 times stronger than the Amazon River.

It forms part of the global ocean “conveyor belt” connecting the Pacific, Atlantic, and Indian oceans. The system regulates Earth’s climate and pumps water, heat and nutrients around the globe.

But fresh, cool water from melting Antarctic ice is diluting the salty water of the ocean, potentially disrupting the vital ocean current.

Our new research suggests the Antarctic Circumpolar Current will be 20% slower by 2050 as the world warms, with far-reaching consequences for life on Earth.

The Antarctic Circumpolar Current keeps Antarctica isolated from the rest of the global ocean, and connects the Atlantic, Pacific and Indian oceans. Sohail, T., et al (2025), Environmental Research Letters, CC BY

Why should we care?

The Antarctic Circumpolar Current is like a moat around the icy continent.

The current helps to keep warm water at bay, protecting vulnerable ice sheets. It also acts as a barrier to invasive species such as southern bull kelp and any animals hitching a ride on these rafts, spreading them out as they drift towards the continent. It also plays a big part in regulating Earth’s climate.

Unlike better known ocean currents – such as the Gulf Stream along the United States East Coast, the Kuroshio Current near Japan, and the Agulhas Current off the coast of South Africa – the Antarctic Circumpolar Current is not as well understood. This is partly due to its remote location, which makes obtaining direct measurements especially difficult.

Understanding the influence of climate change

Ocean currents respond to changes in temperature, salt levels, wind patterns and sea-ice extent. So the global ocean conveyor belt is vulnerable to climate change on multiple fronts.

Previous research suggested one vital part of this conveyor belt could be headed for a catastrophic collapse.

Theoretically, warming water around Antarctica should speed up the current. This is because density changes and winds around Antarctica dictate the strength of the current. Warm water is less dense (or heavy) and this should be enough to speed up the current. But observations to date indicate the strength of the current has remained relatively stable over recent decades.

This stability persists despite the melting of surrounding ice – a factor that earlier studies had not fully explored.

What we did

Advances in ocean modelling allow a more thorough investigation of the potential future changes.

We used Australia’s fastest supercomputer and climate simulator in Canberra to study the Antarctic Circumpolar Current. The underlying model, ACCESS-OM2-01, has been developed by Australian researchers from various universities as part of the Consortium for Ocean-Sea Ice Modelling in Australia.

The model captures features others often miss, such as eddies. So it’s a far more accurate way to assess how the current’s strength and behaviour will change as the world warms. It picks up the intricate interactions between ice melting and ocean circulation.

In this future projection, cold, fresh melt water from Antarctica migrates north, filling the deep ocean as it goes. This causes major changes to the density structure of the ocean. It counteracts the influence of ocean warming, leading to an overall slowdown in the current of as much as 20% by 2050.

Far-reaching consequences

The consequences of a weaker Antarctic Circumpolar Current are profound and far-reaching.

As the main current that circulates nutrient-rich waters around Antarctica, it plays a crucial role in the Antarctic ecosystem.

Weakening of the current could reduce biodiversity and decrease the productivity of fisheries that many coastal communities rely on. It could also aid the entry of invasive species such as southern bull kelp to Antarctica, disrupting local ecosystems and food webs.

A weaker current may also allow more warm water to penetrate southwards, exacerbating the melting of Antarctic ice shelves and contributing to global sea-level rise. Faster ice melting could then lead to further weakening of the current, commencing a vicious spiral of current slowdown.

This disruption could extend to global climate patterns, reducing the ocean’s ability to regulate climate change by absorbing excess heat and carbon from the atmosphere.

Ocean currents around the world (NASA)

Need to reduce emissions

While our findings present a bleak prognosis for the Antarctic Circumpolar Current, the future is not predetermined. Concerted efforts to reduce greenhouse gas emissions could still limit melting around Antarctica.

Establishing long-term studies in the Southern Ocean will be crucial for monitoring these changes accurately.

With proactive and coordinated international actions, we have a chance to address and potentially avert the effects of climate change on our oceans.

The authors thank Polar Climate Senior Researcher Dr Andreas Klocker, from the NORCE Norwegian Research Centre and Bjerknes Centre for Climate Research, for his contribution to this research, and Professor Matthew England from the University of New South Wales, who provided the outputs from the model simulation for this analysis.

Taimoor Sohail, Postdoctoral Researcher, School of Geography, Earth and Atmospheric Sciences, The University of Melbourne and Bishakhdatta Gayen, ARC Future Fellow & Associate Professor, Mechanical Engineering, The University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.


2025 will see huge advances in quantum computing. So what is a quantum chip and how does it work?

In recent years, the field of quantum computing has been experiencing fast growth, with technological advances and large-scale investments regularly making the news.

The United Nations has designated 2025 as the International Year of Quantum Science and Technology.

The stakes are high – having quantum computers would mean access to tremendous data processing power compared to what we have today. They won’t replace your normal computer, but having this kind of awesome computing power will provide advances in medicine, chemistry, materials science and other fields.

So it’s no surprise that quantum computing is rapidly becoming a global race, and private industry and governments around the world are rushing to build the world’s first full-scale quantum computer. To achieve this, first we need to have stable and scalable quantum processors, or chips.

What is a quantum chip?

Everyday computers – like your laptop – are classical computers. They store and process information in the form of binary numbers or bits. A single bit can represent either 0 or 1.

By contrast, the basic unit of a quantum chip is a qubit. A quantum chip is made up of many qubits. These are typically subatomic particles such as electrons or photons, controlled and manipulated by specially designed electric and magnetic fields (known as control signals).

Unlike a bit, a qubit can be placed in a state of 0, 1, or a combination of both, also known as a “superposition state”. This distinct property allows quantum processors to store and process extremely large data sets exponentially faster than even the most powerful classical computer.
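
The idea of a superposition state can be made concrete with a tiny state-vector simulation – a generic NumPy sketch, not any particular vendor’s toolkit:

```python
import numpy as np

# A qubit state is a 2-component complex vector: amplitudes for |0> and |1>.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate rotates |0> into an equal mix of |0> and |1>.
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)

state = H @ ket0                 # the superposition state (|0> + |1>)/sqrt(2)

# Measurement probabilities are the squared magnitudes of the amplitudes:
probs = np.abs(state) ** 2       # equal chance of reading out 0 or 1
```

Squaring the amplitudes gives the measurement statistics: this equal superposition yields 0 or 1 with 50% probability each, and the probabilities always sum to 1.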

There are different ways to make qubits – one can use superconducting devices, semiconductors, photonics (light) or other approaches. Each method has its advantages and drawbacks.

Companies like IBM, Google and QuEra all have roadmaps to drastically scale up quantum processors by 2030.

Industry players that use semiconductors are Intel and Australian companies like Diraq and SQC. Key photonic quantum computer developers include PsiQuantum and Xanadu.

Qubits: quality versus quantity

How many qubits a quantum chip has is actually less important than the quality of the qubits.

A quantum chip made up of thousands of low-quality qubits will be unable to perform any useful computational task.

So, what makes for a quality qubit?

Qubits are very sensitive to unwanted disturbances, also known as errors or noise. This noise can come from many sources, including imperfections in the manufacturing process, control signal issues, changes in temperature, or even just an interaction with the qubit’s environment.

Being prone to errors reduces the reliability of a qubit, known as fidelity. For a quantum chip to stay stable long enough to perform complex computational tasks, it needs high-fidelity qubits.

When researchers compare the performance of different quantum chips, qubit fidelity is one of the crucial parameters they use.

How do we correct the errors?

Fortunately, we don’t have to build perfect qubits.

Over the last 30 years, researchers have designed theoretical techniques which use many imperfect or low-fidelity qubits to encode an abstract “logical qubit”. A logical qubit is protected from errors and, therefore, has very high fidelity. A useful quantum processor will be based on many logical qubits.

Nearly all major quantum chip developers are now putting these theories into practice, shifting their focus from qubits to logical qubits.

In 2024, many quantum computing researchers and companies made great progress on quantum error correction, including Google, QuEra, IBM and CSIRO.

Quantum chips consisting of over 100 qubits are already available. They are being used by many researchers around the world to evaluate how good the current generation of quantum computers are and how they can be made better in future generations.

For now, developers have only made single logical qubits. It will likely take a few years to figure out how to put several logical qubits together into a quantum chip that can work coherently and solve complex real-world problems.

What will quantum computers be useful for?

A fully functional quantum processor would be able to solve extremely complex problems. This could lead to revolutionary impact in many areas of research, technology and economy.

Quantum computers could help us discover new medicines and advance medical research by finding new connections in clinical trial data or genetics that current computers don’t have enough processing power for.

They could also greatly improve the safety of various systems that use artificial intelligence algorithms, such as banking, military targeting and autonomous vehicles, to name a few.

To achieve all this, we first need to reach a milestone known as quantum supremacy – where a quantum processor solves a problem that would take a classical computer an impractical amount of time to do.

Late last year, Google’s quantum chip Willow finally demonstrated quantum supremacy for a contrived task – a computational problem designed to be hard for classical supercomputers but easy for quantum processors due to their distinct way of working.

Although it didn’t solve a useful real-world problem, it’s still a remarkable achievement and an important step in the right direction that’s taken years of research and development. After all, to run, one must first learn to walk.

What’s on the horizon for 2025 and beyond?

In the next few years, quantum chips will continue to scale up. Importantly, the next generation of quantum processors will be underpinned by logical qubits, able to tackle increasingly useful tasks.

While quantum hardware (that is, processors) has been progressing at a rapid pace, we also can’t overlook an enormous amount of research and development in the field of quantum software and algorithms.

Using quantum simulations on normal computers, researchers have been developing and testing various quantum algorithms. This will make quantum computing ready for useful applications when the quantum hardware catches up.

Building a full-scale quantum computer is a daunting task. It will require simultaneous advancements on many fronts, such as scaling up the number of qubits on a chip, improving the fidelity of the qubits, better error correction, quantum software, quantum algorithms, and several other sub-fields of quantum computing.

After years of remarkable foundational work, we can expect 2025 to bring new breakthroughs in all of the above.

Muhammad Usman, Head of Quantum Systems and Principal Research Scientist, CSIRO

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Countries Are Breathing the Cleanest Air in Centuries and Offer Lessons to the Rest of Us


An article at Our World in Data recently explored trends in air quality across a selection of high and middle-income countries, and found that not only is the West breathing better air than at perhaps any point since urbanization, but that developing nations likely won’t need 100 years or more to arrive at similar outcomes.

Published by Hannah Ritchie, the article focuses on two kinds of gases emitted from industrial activity: sulfur dioxide (SO2) and nitrogen oxides (NOx). Both enter the air we breathe from the burning of fossil fuels—coal in particular—while the latter is emitted mostly from internal combustion engines.

Bad air quality is responsible for millions of lost life years worldwide from respiratory problems, cardiovascular issues, and neurological disease—all of which can develop and become exacerbated under prolonged exposure to air pollutants.

UK sulphur dioxide emissions – credit Community Emissions Data System (CEDS) 2024, CC BY license.

As seen in this chart, emissions of SO2 have just dipped under levels seen at the earliest periods of British industrialization. Before this, city and town air quality would have been badly tainted through emissions of wood smoke, so it’s safe to assume that 2022 marked the best British air in many centuries, not just the last two.

SO2 enters the ambient air primarily in urban environments through the burning of coal, and the significant reduction in coal use across the West has seen this number plummet.

But for middle-income countries like China and India that still rely on coal for electricity, all is not lost, as the next chart shows.

Sulfur dioxide emissions around the world – credit Community Emissions Data System (CEDS) 2024, CC BY license.

While UK coal consumption and SO2 emissions have fallen in lockstep, the US and China present excellent case studies for nations, like India, the fourth example, which still rely on coal for electricity.

Even while coal consumption is increasing, SO2 emissions can fall, even below historical levels, with the diligent application of existing technologies for "scrubbing" the gases produced by burning coal.

“In 1990, the US included a cap-and-trade scheme on SO2 as part of its Clean Air Act Amendments,” Ritchie writes. “Each coal plant was given a ‘cap’ for how much SO2 it could emit, forcing it to either implement technologies to reduce its emissions, trade credits with other plants, or pay a large fine for every tonne of extra sulfur it emitted.”

This was hugely successful: over just a single decade, emissions dropped by double-digit percentages.

A scrubber is an apparatus that cleans the gases passing through the smokestack of a coal-burning power plant. Scrubbers are large towers in which aqueous mixtures of lime or limestone absorbents are sprayed through the emissions, known as flue gases, exiting a coal boiler. The lime or limestone absorbs some of the sulfur from the flue gases.
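In chemical terms (standard wet limestone scrubbing, a detail the article doesn't spell out), the limestone slurry captures sulfur dioxide roughly as:

```latex
\mathrm{CaCO_3 + SO_2 \rightarrow CaSO_3 + CO_2}
```

The resulting calcium sulfite is typically then oxidized with air to gypsum (CaSO4·2H2O), a byproduct that can be sold for wallboard production.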

These have been used to tremendous effect in China, which, despite tripling its coal use since 2000, has actually reduced SO2 emissions to pre-2000 levels. India neither uses nor mandates coal scrubbers, which explains its upward trajectory in both use and emissions.

One important note the article fails to mention: if a country is burning coal, it usually isn't burning wood or dung. While these fuels seem more natural than coal or oil, they produce their own, more significant health hazards, as the particulate matter in wood smoke is both larger and more abundant than that in smoke from fossil fuels.

A city's air quality will improve when it switches from wood and dung to coal, in the same way that switching from coal to natural gas improves it further. Additionally, more years of life are lost to having no electricity than to coal-powered electricity.

But not all emissions come from power production. Nitrogen oxides (NOx) are generated by the burning of gasoline, diesel, and kerosene in internal combustion engines, and much like SO2, NOx emissions trended gradually upward throughout the 20th century.

Nitrogen oxide emissions around the world – credit Community Emissions Data System (CEDS) 2024, CC BY license.

In the UK, NOx emissions have fallen to levels seen in 1950, even as the number of road-driven miles in the country has steadily increased to near the highest levels in the country’s history.


This was largely accomplished through improvements in fuel efficiency and exhaust systems on automobiles mandated by the EU in the 1990s. The Euro 1 standard was introduced in 1992, and the bloc is now on Euro 6.

“To comply with regulations, car manufacturers have had to innovate on technologies that can reduce the emissions of NOx and other pollutants from car exhausts,” writes Ritchie. “These technologies have included catalytic converters, filters for particulate matter, gas recirculation—which lowers the temperature of combustion and therefore produces less NOx from the exhaust…”

In the chart above, South Africa, Brazil, and China are the nations that have adopted similar emissions standards, while those below them have not, demonstrating how quickly these harmful emissions can be cut from the air when smart regulation is imposed.

Beijing, once synonymous with face masks and grimy skies, now enjoys a routine weather phenomenon called the "Beijing Blue": in other words, a blue sky. This, GNN reported, was accomplished through a "war on pollution" that increased life expectancy by four years for the average Beijing resident.
