Who invented the light bulb?

Ernest Freeberg, University of Tennessee

Curious Kids is a series for children of all ages. If you have a question you’d like an expert to answer, send it to CuriousKidsUS@theconversation.com.


Who invented the light bulb? – Preben, age 5, New York City


When people name the most important inventions in history, light bulbs are usually on the list. They were much safer than earlier light sources, and they made more activities, for both work and play, possible after the Sun went down.

More than a century after its invention, illustrators still use a lit bulb to symbolize a great idea. Credit typically goes to inventor and entrepreneur Thomas Edison, who created the first commercial light and power system in the United States.

But as a historian and author of a book about how electric lighting changed the U.S., I know that the actual story is more complicated and interesting. It shows that complex inventions are not created by a single genius, no matter how talented he or she may be, but by many creative minds and hands working on the same problem.

Thomas Edison didn’t invent the basic design of the incandescent light bulb, but he made it reliable and commercially viable.

Making light − and delivering it

In the 1870s, Edison raced against other inventors to find a way of producing light from electric current. Americans were keen to give up their candles and kerosene lamps for something that promised to be cleaner and safer. Candles offered little light and posed a fire hazard. Some customers in cities had brighter gas lamps, but they were expensive, hard to operate and polluted the air.

When Edison began working on the challenge, he learned from many other inventors’ ideas and failed experiments. They all were trying to figure out how to send a current through a thin carbon thread encased in glass, making it hot enough to glow without burning out.

In England, for example, chemist Joseph Swan patented an incandescent bulb and lit his own house in 1878. Then in 1881, at a great exhibition on electricity in Paris, Edison and several other inventors demonstrated their light bulbs.

Edison’s version proved to be the brightest and longest-lasting. In 1882 he connected it to a full working system that lit up dozens of homes and offices in downtown Manhattan.

But Edison’s bulb was just one piece of a much more complicated system that included an efficient dynamo – the powerful machine that generated electricity – plus a network of underground wires and new types of lamps. Edison also created the meter, a device that measured how much electricity each household used, so that he could tell how much to charge his customers.

Edison’s invention wasn’t just a science experiment – it was a commercial product that many people proved eager to buy.

Inventing an invention factory

As I show in my book, Edison did not solve these many technical challenges on his own.

At his farmhouse laboratory in Menlo Park, New Jersey, Edison hired a team of skilled technicians and trained scientists, and he filled his lab with every possible tool and material. He liked to boast that he had only a fourth grade education, but he knew enough to recruit men who had the skills he lacked. Edison also convinced banker J.P. Morgan and other investors to provide financial backing to pay for his experiments and bring them to market.

Historians often say that Edison’s greatest invention was this collaborative workshop, which he called an “invention factory.” It was capable of launching amazing new machines on a regular basis. Edison set the agenda for its work – a role that earned him the nickname “the wizard of Menlo Park.”

Here was the beginning of what we now call “research and development” – the network of universities and laboratories that produce technological breakthroughs today, ranging from lifesaving vaccines to the internet, as well as many improvements in the electric lights we use now.

Sparking an electric revolution

Many people found creative ways to use Edison’s light bulb. Factory owners and office managers installed electric light to extend the workday past sunset. Others used it for fun purposes, such as movie marquees, amusement parks, store windows, Christmas trees and evening baseball games.

Theater directors and photographers adapted the light to their arts. Doctors used small bulbs to peer inside the body during surgery. Architects and city planners, sign-makers and deep-sea explorers adapted the new light for all kinds of specialized uses. Through their actions, humanity’s relationship to day and night was reinvented – often in ways that Edison never could have anticipated.

Today people take for granted that they can have all the light they need at the flick of a switch. But that luxury requires a network of power stations, transmission lines and utility poles, managed by teams of trained engineers and electricians. To deliver it, electric power companies grew into an industry monitored by insurance companies and public utility regulators.

Edison’s first fragile light bulbs were just one early step in the electric revolution that has helped create today’s richly illuminated world.


Hello, curious kids! Do you have a question you’d like an expert to answer? Ask an adult to send your question to CuriousKidsUS@theconversation.com. Please tell us your name, age and the city where you live.

And since curiosity has no age limit – adults, let us know what you’re wondering, too. We won’t be able to answer every question, but we will do our best.

Ernest Freeberg, Professor of History, University of Tennessee

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Why do giraffes have such long legs? Animal simulations reveal a surprising answer

If you’ve ever wondered why the giraffe has such a long neck, the answer seems clear: it lets them reach succulent leaves atop tall acacia trees in Africa.

Only giraffes have direct access to those leaves, while smaller mammals must compete with one another near the ground. This exclusive food source appears to allow the giraffe to breed throughout the year and to survive droughts better than shorter species.

But the long neck comes at a high cost. The giraffe’s heart must produce enough pressure to pump its blood a couple of metres up to its head. The blood pressure of an adult giraffe is typically over 200mm Hg – more than twice that of most mammals.

As a result, the heart of a resting giraffe uses more energy than the entire body of a resting human, and indeed more energy than the heart of any other mammal of comparable size. However, as we show in a new study published in the Journal of Experimental Biology, the giraffe’s heart has some unrecognised helpers in its battle against gravity: the animal’s long, long legs.

Meet the ‘elaffe’

In our new study, we quantified the energy cost of pumping blood for a typical adult giraffe and compared it to what it would be in an imaginary animal with short legs but a longer neck to reach the same treetop height.

This beast was a Frankenstein-style combination of the body of a common African eland and the neck of a giraffe. We called it an “elaffe”.

We found the animal would spend a whopping 21% of its total energy budget on powering its heart, compared with 16% in the giraffe and 6.7% in humans.

By raising its heart closer to its head by means of long legs, the giraffe “saves” a net 5% of the energy it takes in from food. Over the course of a year, this energy saving would add up to more than 1.5 tonnes of food – which could make the difference between life and death on the African savannah.

How giraffes work

In his book How Giraffes Work, zoologist Graham Mitchell reveals that the ancestors of giraffes had long legs before they evolved long necks.

This makes sense from an energy point of view. Long legs make the heart’s job easier, while long necks make it work harder.

However, the evolution of long legs came with a price of its own. Giraffes are forced to splay their forelegs while drinking, leaving them slow and awkward to rise and escape if a predator appears.

Statistics show giraffes are the most likely of all prey mammals to leave a water hole without getting a drink.

How long can a neck be?

 
In life, the Giraffatitan dinosaur would most likely have been unable to lift its head this high. Shadowgate / Wikimedia, CC BY

The energy cost of the heart increases in direct proportion to the height of the neck, so there must be a limit. A sauropod dinosaur, the Giraffatitan, towers 13 metres above the floor of the Berlin Natural History Museum.

Its head sits about 8.5m above its heart, which would require a blood pressure of about 770mm Hg to get blood that high - almost eight times what we see in the average mammal. This is implausible because the heart’s energy cost to pump that blood would have exceeded the energy cost of the entire rest of the body.
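The pressure figures above follow from simple hydrostatics: the heart must support a column of blood reaching the head, plus some extra pressure to actually perfuse the brain. Here is a back-of-envelope sketch in Python; the blood density (~1050 kg/m³) and the ~100 mmHg perfusion allowance are assumed values, not from the article, but they land in the same ballpark as the figures quoted.

```python
# Back-of-envelope check of the blood pressures quoted above.
# Assumed values (not from the article): blood density ~1050 kg/m^3,
# g = 9.81 m/s^2, and ~100 mmHg extra to perfuse the brain.
RHO_BLOOD = 1050.0    # kg/m^3
G = 9.81              # m/s^2
PA_PER_MMHG = 133.322

def pump_pressure_mmhg(heart_to_head_m, perfusion_mmhg=100.0):
    """Hydrostatic column pressure plus assumed brain perfusion pressure."""
    hydrostatic = RHO_BLOOD * G * heart_to_head_m / PA_PER_MMHG
    return hydrostatic + perfusion_mmhg

print(f"Giraffe (2.0 m head above heart): {pump_pressure_mmhg(2.0):.0f} mmHg")   # ~255
print(f"Giraffatitan (8.5 m):             {pump_pressure_mmhg(8.5):.0f} mmHg")   # ~757
```

The giraffe estimate comes out just above the "over 200mm Hg" the article reports, and the Giraffatitan estimate is close to the ~770mm Hg quoted, so the numbers hang together under these assumptions.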

Sauropod dinosaurs could not lift their heads that high without passing out. In fact, it is unlikely that any land animal in history could exceed the height of an adult male giraffe.

Roger S. Seymour, Professor Emeritus of Physiology, University of Adelaide and Edward Snelling, Faculty of Veterinary Science, University of Pretoria

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Why Do Animals Play? Because They Need To Play – Just Like Children Do

Photo by Tambako The Jaguar, CC license

As much as it’s a time for growing and learning, childhood is also a time for unabashed joy. Pastimes like careening down a snowy hillside on your sled, flying off a rope swing into a cool lake on a hot summer day, or even just a game of catch are part and parcel of growing up.

But the joys of playtime aren’t just reserved for human kids—animal offspring are just as likely to get into the act as well, and some of their activities are startlingly similar to our own.

Young ravens hold body-surfing “competitions” down the slopes of wintery rooftops; juvenile elephants create impromptu waterslides along muddy riverbanks; herring gulls engage in their own version of airborne hacky sack, substituting seashells for bean-filled projectiles.

Scientists believe that for certain animal species, some fun and games are strictly that—play for the sake of play—but as with humans, other forms of diversion are preparing youngsters for the rigors of adulthood.

“Play is essential to development because it contributes to the cognitive, physical, social, and emotional well-being of children and youth,” wrote Kenneth R. Ginsburg in Pediatrics, the journal of the American Academy of Pediatrics. “Play also offers an ideal opportunity for parents to engage fully with their children.”

Those same tenets, it seems, hold true in the animal kingdom as well.

“Horses…are known to engage in play almost as soon as they are born. Once they can walk, they immediately start to gallop, frolic and buck, again, honing the motor skills they may need when they’re mature,” notes BBC Earth.
Play with purpose

But along with social and motor skills, play also teaches animals essential hunting and survival skills.


The antics of cute cavorting kittens have spawned a myriad of viral videos, but whether it’s an opportunity to take down an errant mouse or to avoid harm in the face of unexpected danger, their ninja-like moves may in fact be helping kittens learn to be ready when life hands them a surprise.

Even natural-born predators, such as kestrels, use play to hone their hunting skills by practicing with targets that look like real prey when they’re young.

In the oceans, dolphins chase underwater air rings to fine-tune their sonar skills.

And while it’s unclear why bear cubs are so quintessentially playful, zoologists believe at least some of their shenanigans have a more serious purpose that aids in their survival as adults.

One of the most important teaching aspects of play is socialization. These days, for human kids, that usually means basics like learning to share, working as a team, and respecting boundaries.

For animals, especially those that live in packs, flocks, or herds, play (often in the form of play fighting) imparts an understanding of where each animal fits into the community hierarchy.

In ways that are remarkably similar to the training children of traditional tribal cultures receive, it is through the rules of play that lion cubs, kangaroo joeys, and wolf pups discover and establish the roles they’ll be expected to perform as adults.

But for animals, not all socializing play is about fighting or establishing dominance. Some of it’s about learning to be better parents—and that involves playing with dolls. While they might lack a perambulator and a fancy wardrobe, female chimpanzees are known to lavish their doll babies with love and emulate their own mothers’ attentive care.

So whether it’s frolicking in the pasture, hanging from a tree, or rollicking in the surf, it seems that play will always be an intrinsic—and fun—part of both human and animal development. And we’re pretty sure when those ninja-kitten TikTok stars stop climbing that curtain, they’ll be thrilled to hear about it.

A guide: Uranium and the nuclear fuel cycle


Yellowcake (Image: Dean Calma/IAEA)

The nuclear fuel cycle is the series of industrial processes that turns uranium into electricity. Claire Maden takes a look at the steps that make up the cycle, the major players and the potential pinch-points.

The nuclear fuel cycle starts with the mining of uranium ore and ends with the disposal of nuclear waste. (Ore is simply the naturally occurring material from which a mineral or minerals of economic value can be extracted).

We talk about the front end of the fuel cycle - that is, the processes needed to mine the ore, extract uranium from it, refine it, and turn it into a fuel assembly that can be loaded into a nuclear reactor - and the back end of the fuel cycle - what happens to the fuel after it's been used. If the used fuel is treated as waste, and disposed of, this is known as an "open" fuel cycle. It can also be reprocessed to recover uranium and other fissile materials which can be reused in what is known as a "closed" fuel cycle.

The World Nuclear Association's Information Library has a detailed overview of the fuel cycle here. But in a nutshell, the front end of the fuel cycle is made up of mining and milling, conversion, enrichment and fuel fabrication. Fuel then spends typically about three years inside a reactor, after which it may go into temporary storage, then possibly reprocessing and recycling, before the waste produced is disposed of - these steps are the back end of the fuel cycle.

The processes that make up the fuel cycle are carried out by companies all over the world. Some companies specialise in one particular area or service; some offer services in several areas of the fuel cycle. Some are state-owned, some are in the private sector. Underpinning all these separate offerings is the transport sector to get the materials to where they need to be - and overarching all of it is the global market for nuclear fuel and fuel cycle services.

(Image: World Nuclear Association)


How do they do it?


Let's start at the very front of the front end: uranium mining.

Depending on the type of mineralisation and the geological setting, uranium can be mined by open pit or underground mining methods, or by dissolving and recovering it via wells. This is known as in-situ recovery - ISR - or in-situ leaching, and is now the most widely used method: Kazakhstan produces more uranium than any other country, and all by in-situ methods.

Uranium mined by conventional methods is recovered at a mill where the ore is crushed, ground and then treated with sulphuric acid (or a strong alkaline solution, depending on the circumstances) to dissolve the uranium oxides, a process known as leaching.

Whether the uranium was leached in-situ or in a mill, the next stage of the process is similar for both routes: the uranium is separated by ion exchange.

Ion exchange is a method of removing dissolved uranium ions from a solution using a specially selected resin or polymer. The uranium ions bind reversibly to the resin, while impurities are washed away. The uranium is then stripped from the resin into another solution from which it is precipitated, dried and packed, usually as uranium oxide concentrate (U3O8) powder - often referred to as "yellowcake".

More than a dozen countries produce uranium, although about two thirds of world production comes from mines in three countries - Kazakhstan, Canada and Australia. Namibia, Niger and Uzbekistan are also significant producers.

The next stage in the process is conversion - a chemical process to refine the U3O8 to uranium dioxide (UO2), which can then be converted into uranium hexafluoride (UF6) gas. This is the raw material for the next stage of the cycle: enrichment.

Unenriched, or natural, uranium contains about 0.7% of the fissile uranium-235 (U-235) isotope. ("Fissile" means it's capable of undergoing the fission process by which energy is produced in a nuclear reactor). The rest is the non-fissile uranium-238 isotope. Most nuclear reactors need fuel containing between 3.5% and 5% U-235. This is also known as low-enriched uranium, or LEU. Advanced reactor designs that are now being developed - and many small modular reactors - will require higher enrichments still. This material - containing between 5% and 20% U-235 - is known as high-assay low-enriched uranium, or HALEU. And some reactors - for example the Canadian-designed Candu - use natural uranium as their fuel and don’t require enrichment services. But more of that later.

Enrichment increases the concentration of the fissile isotope by passing the gaseous UF6 through gas centrifuges, in which a fast spinning rotor inside a vacuum casing makes use of the very slight difference in mass between the fissile and non-fissile isotopes to separate them. As the rotor spins, the concentration of molecules containing heavier, non-fissile, isotopes near the outer wall of the cylinder increases, with a corresponding increase in the concentration of molecules containing the lighter U-235 isotope towards the centre. World Nuclear Association’s information paper on uranium enrichment contains more details about the enrichment process and technology.
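The material flows through an enrichment plant follow from a simple mass balance on U-235: the feed splits into enriched product and depleted tails, and total uranium and total U-235 must each be conserved. A minimal sketch, assuming a natural feed assay of 0.711% and a tails assay of 0.25% (typical values, not stated in this article):

```python
# Enrichment plant mass balance:
#   F = P + W   (feed = product + tails, by mass)
#   F*xf = P*xp + W*xw   (U-235 is conserved)
# Solving for feed needed per unit of product: F/P = (xp - xw) / (xf - xw).
def feed_per_kg_product(xp, xf=0.00711, xw=0.0025):
    """kg of natural-uranium feed per kg of enriched product.
    xf (feed assay) and xw (tails assay) are typical assumed values."""
    return (xp - xw) / (xf - xw)

print(f"5% LEU:       {feed_per_kg_product(0.05):.1f} kg feed per kg product")    # ~10.3
print(f"19.75% HALEU: {feed_per_kg_product(0.1975):.1f} kg feed per kg product")  # ~42.3
```

The steep jump for HALEU illustrates why higher-assay fuel puts so much more demand on mining and enrichment capacity.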

Enriched uranium is then reconverted from the fluoride to the oxide - a powder - for fabrication into nuclear fuel assemblies.

So that's the front end of the fuel cycle. Then, there is the back end: the management of the used fuel after its removal from a nuclear reactor. This might be reprocessed to recover fissile and fertile materials in order to provide fresh fuel for existing and future nuclear power plants.
In-situ recovery (in-situ leach) operations in Kazakhstan (Image: Kazatomprom)

Who, where and when

That's a pared-down look at the processes that make up the front end of the fuel cycle - the "how" of getting uranium from the ground and into the reactor. But how does that work on a global scale when much of the world's uranium is produced in countries that do not (yet) use nuclear power? And that brings us to: the market.

The players in the nuclear fuel market are the producers and suppliers (the uranium miners, converters, enrichers and fuel fabricators), the consumers of nuclear fuel (nuclear utilities, both public and privately owned), and various other participants such as agents, traders, investors, intermediaries and governments.

As well as the uranium, there is also the market for the services needed to turn it into fuel assemblies ready for loading into a power plant. And the nuclear fuel cycle's international dimension means that uranium mined in Australia, for example, may be converted in Canada, enriched in the UK and fabricated in Sweden, for a reactor in South Africa. In practice, nuclear materials are often exchanged - swapped - to avoid the need to transport materials from place to place as they go through the various processing stages in the nuclear fuel cycle.

Uranium is traded in two ways: the spot market, for which prices are reported daily, and mid- to long-term contracts, sometimes referred to as the term market. Utilities buy some uranium on the spot market - but so do players from the financial community. In recent years, such investors have been buying physical stocks of uranium for investment purposes.

Most uranium trade is via 3-15 year long-term contracts with producers selling directly to utilities at a higher price than the spot market - although prices specified in term contracts tend to be tied to the spot price at the time of delivery. And like all mineral commodity markets, the uranium market tends to be cyclical, with prices that rise and fall depending on demand and perceptions of scarcity.

The spot market in uranium is a physical market, with traders, brokers, producers and utilities acting bilaterally. Unlike many other commodities such as gold or oil, there is no formal exchange for uranium. Uranium price indicators are developed and published by a small number of private business organisations, notably UxC, LLC and TradeTech, both of which have long-running price series.

Likewise, conversion and enrichment services are bought and sold on both spot and term contracts, but fuel fabrication services are not procured in quite the same way. Fuel assemblies are specifically designed for particular types of reactors and are made to exacting standards and regulatory requirements. In the words of World Nuclear Association's flagship fuel cycle report, nuclear fuel is not a fungible commodity, but a high-tech product accompanied by specialist support.

Drums of uranium from Cameco's Key Lake mill are transported to the company's facilities at Blind River, Ontario, for further processing (Image: Cameco)

Bottlenecks and challenges

Uranium is mined and milled at many sites around the world, but the subsequent stages of the fuel cycle are carried out in a limited number of specialised facilities.

Anyone unfamiliar with the sector might wonder why all the different stages of mining, enrichment, conversion and fabrication are not done at the same location. Simply put, conversion and enrichment services tend to be centralised because of the specialised nature and the sheer scale of the plants, and also because of the international regime to prevent the risk of nuclear weapons proliferation.

Commercial conversion plants are found in Canada, China, France, Russia and the USA.

Uranium enrichment is strategically sensitive from a non-proliferation standpoint so there are strict international controls to ensure that civilian enrichment plants are not used to produce uranium of much higher enrichment levels (90% U-235 and above) that could be used in nuclear weapons. Enrichment is also very capital intensive. For these reasons, there are relatively few commercial enrichment suppliers operating a limited number of facilities worldwide.

There are three major enrichment producers at present - Orano, Rosatom and Urenco - operating large commercial enrichment plants in France, Germany, the Netherlands, the UK, the USA and Russia. CNNC is a major domestic supplier in China.

So the availability of capacity, particularly in conversion and enrichment, can potentially lead to bottlenecks and challenges to the nuclear fuel supply chain. Likewise, interruptions to transport routes and geopolitical issues can also potentially impact the supply of nuclear materials. For example, current US enrichment capacity is not sufficient to fulfil all the requirements of its domestic nuclear power plants, and the USA relies on overseas enrichment services. But in 2024, US legislation was enacted banning the import of Russian-produced LEU until the end of 2040, with Russia placing tit-for-tat restrictions on exports of the material to the USA.

The fabrication of that LEU into reactor fuel is the last step in the process of turning uranium into nuclear fuel rods. Fuel rods are batched into assemblies that are specifically designed for particular types of reactors and are made to exacting standards by specialist companies. Most of the main fuel fabricators are also reactor vendors (or owned by them), and they usually supply the initial cores and early reloads for reactors built to their own designs. The World Nuclear Association information paper on Nuclear Fuel and its Fabrication gives a deeper dive into this sector.

So - that’s an introduction to the nuclear fuel cycle - and we haven't even touched on the so-called back end, which is what happens to that fuel after it has spent around three years in the reactor core generating electricity, and the ways in which used fuel could be recycled to continue providing energy for years to come.

Melting Antarctic ice will slow the world’s strongest ocean current – and the global consequences are profound

Flowing clockwise around Antarctica, the Antarctic Circumpolar Current is the strongest ocean current on the planet. It’s five times stronger than the Gulf Stream and more than 100 times stronger than the Amazon River.

It forms part of the global ocean “conveyor belt” connecting the Pacific, Atlantic, and Indian oceans. The system regulates Earth’s climate and pumps water, heat and nutrients around the globe.

But fresh, cool water from melting Antarctic ice is diluting the salty water of the ocean, potentially disrupting the vital ocean current.

Our new research suggests the Antarctic Circumpolar Current will be 20% slower by 2050 as the world warms, with far-reaching consequences for life on Earth.

The Antarctic Circumpolar Current keeps Antarctica isolated from the rest of the global ocean, and connects the Atlantic, Pacific and Indian oceans. Sohail, T., et al. (2025), Environmental Research Letters, CC BY

Why should we care?

The Antarctic Circumpolar Current is like a moat around the icy continent.

The current helps to keep warm water at bay, protecting vulnerable ice sheets. It acts as a barrier to invasive species such as southern bull kelp and any animals hitching a ride on these rafts, spreading them out as they drift towards the continent. And it plays a big part in regulating Earth’s climate.

Unlike better known ocean currents – such as the Gulf Stream along the United States East Coast, the Kuroshio Current near Japan, and the Agulhas Current off the coast of South Africa – the Antarctic Circumpolar Current is not as well understood. This is partly due to its remote location, which makes obtaining direct measurements especially difficult.

Understanding the influence of climate change

Ocean currents respond to changes in temperature, salt levels, wind patterns and sea-ice extent. So the global ocean conveyor belt is vulnerable to climate change on multiple fronts.

Previous research suggested one vital part of this conveyor belt could be headed for a catastrophic collapse.

Theoretically, warming water around Antarctica should speed up the current. This is because density changes and winds around Antarctica dictate the strength of the current. Warm water is less dense (lighter), and this alone should be enough to speed up the current. But observations to date indicate the strength of the current has remained relatively stable over recent decades.

This stability persists despite the melting of surrounding ice - a phenomenon that past research had not fully explored.

What we did

Advances in ocean modelling allow a more thorough investigation of the potential future changes.

We used Australia’s fastest supercomputer and climate simulator in Canberra to study the Antarctic Circumpolar Current. The underlying model, ACCESS-OM2-01, has been developed by Australian researchers from various universities as part of the Consortium for Ocean-Sea Ice Modelling in Australia.

The model captures features others often miss, such as eddies. So it’s a far more accurate way to assess how the current’s strength and behaviour will change as the world warms. It picks up the intricate interactions between ice melting and ocean circulation.

In this future projection, cold, fresh melt water from Antarctica migrates north, filling the deep ocean as it goes. This causes major changes to the density structure of the ocean. It counteracts the influence of ocean warming, leading to an overall slowdown in the current of as much as 20% by 2050.

Far-reaching consequences

The consequences of a weaker Antarctic Circumpolar Current are profound and far-reaching.

As the main current that circulates nutrient-rich waters around Antarctica, it plays a crucial role in the Antarctic ecosystem.

Weakening of the current could reduce biodiversity and decrease the productivity of fisheries that many coastal communities rely on. It could also aid the entry of invasive species such as southern bull kelp to Antarctica, disrupting local ecosystems and food webs.

A weaker current may also allow more warm water to penetrate southwards, exacerbating the melting of Antarctic ice shelves and contributing to global sea-level rise. Faster ice melting could then lead to further weakening of the current, commencing a vicious spiral of current slowdown.

This disruption could extend to global climate patterns, reducing the ocean’s ability to regulate climate change by absorbing excess heat and carbon from the atmosphere.

Ocean currents around the world (NASA)

Need to reduce emissions

While our findings present a bleak prognosis for the Antarctic Circumpolar Current, the future is not predetermined. Concerted efforts to reduce greenhouse gas emissions could still limit melting around Antarctica.

Establishing long-term studies in the Southern Ocean will be crucial for monitoring these changes accurately.

With proactive and coordinated international actions, we have a chance to address and potentially avert the effects of climate change on our oceans.

The authors thank Polar Climate Senior Researcher Dr Andreas Klocker, from the NORCE Norwegian Research Centre and Bjerknes Centre for Climate Research, for his contribution to this research, and Professor Matthew England from the University of New South Wales, who provided the outputs from the model simulation for this analysis.

Taimoor Sohail, Postdoctoral Researcher, School of Geography, Earth and Atmospheric Sciences, The University of Melbourne and Bishakhdatta Gayen, ARC Future Fellow & Associate Professor, Mechanical Engineering, The University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.


2025 will see huge advances in quantum computing. So what is a quantum chip and how does it work?

In recent years, the field of quantum computing has been experiencing fast growth, with technological advances and large-scale investments regularly making the news.

The United Nations has designated 2025 as the International Year of Quantum Science and Technology.

The stakes are high – having quantum computers would mean access to tremendous data processing power compared to what we have today. They won’t replace your normal computer, but having this kind of awesome computing power will provide advances in medicine, chemistry, materials science and other fields.

So it’s no surprise that quantum computing is rapidly becoming a global race, and private industry and governments around the world are rushing to build the world’s first full-scale quantum computer. To achieve this, first we need to have stable and scalable quantum processors, or chips.

What is a quantum chip?

Everyday computers – like your laptop – are classical computers. They store and process information in the form of binary numbers or bits. A single bit can represent either 0 or 1.

By contrast, the basic unit of a quantum chip is a qubit. A quantum chip is made up of many qubits. These are typically tiny quantum systems such as electrons or photons, controlled and manipulated by specially designed electric and magnetic fields (known as control signals).

Unlike a bit, a qubit can be placed in a state of 0, 1, or a combination of both, also known as a “superposition state”. This distinct property allows quantum processors to store and process extremely large data sets exponentially faster than even the most powerful classical computer.
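A toy way to see what a superposition state looks like mathematically is to track a qubit’s two amplitudes in ordinary Python. This is only a pencil-and-paper illustration, not how a real quantum chip is controlled; the Hadamard gate used here is the standard textbook operation for creating an equal superposition:

```python
import math

# A qubit's state can be written as two complex "amplitudes" (a, b) for
# the basis states |0> and |1>; measuring yields 0 with probability |a|^2
# and 1 with probability |b|^2.
def hadamard(state):
    """Apply the standard Hadamard gate to a two-amplitude state."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1 + 0j, 0 + 0j)        # qubit prepared in the definite state |0>
superposed = hadamard(zero)    # equal superposition of |0> and |1>

p0 = abs(superposed[0]) ** 2   # probability of measuring 0
p1 = abs(superposed[1]) ** 2   # probability of measuring 1
print(round(p0, 3), round(p1, 3))
```

In a superposition, the probabilities of measuring 0 and 1 (here 0.5 each) always sum to one.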

There are different ways to make qubits – one can use superconducting devices, semiconductors, photonics (light) or other approaches. Each method has its advantages and drawbacks.

Companies like IBM, Google and QuEra all have roadmaps to drastically scale up quantum processors by 2030.

Industry players using semiconductor approaches include Intel and Australian companies like Diraq and SQC. Key photonic quantum computer developers include PsiQuantum and Xanadu.

Qubits: quality versus quantity

How many qubits a quantum chip has is actually less important than the quality of the qubits.

A quantum chip made up of thousands of low-quality qubits will be unable to perform any useful computational task.

So, what makes for a quality qubit?

Qubits are very sensitive to unwanted disturbances, also known as errors or noise. This noise can come from many sources, including imperfections in the manufacturing process, control signal issues, changes in temperature, or even just an interaction with the qubit’s environment.

Errors reduce the reliability of a qubit, a measure known as fidelity. For a quantum chip to stay stable long enough to perform complex computational tasks, it needs high-fidelity qubits.

When researchers compare the performance of different quantum chips, qubit fidelity is one of the crucial parameters they use.

How do we correct the errors?

Fortunately, we don’t have to build perfect qubits.

Over the last 30 years, researchers have designed theoretical techniques which use many imperfect or low-fidelity qubits to encode an abstract “logical qubit”. A logical qubit is protected from errors and, therefore, has very high fidelity. A useful quantum processor will be based on many logical qubits.
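Real quantum error-correcting codes (such as the surface code) are far subtler, because quantum states can’t simply be copied. But the basic idea of building one reliable logical unit out of several unreliable physical ones can be sketched with a classical repetition code:

```python
from collections import Counter
import random

# Classical analogue of error correction: encode one "logical" bit as
# three noisy "physical" bits and decode by majority vote.
def encode(bit):
    return [bit, bit, bit]

def add_noise(bits, flip_prob, rng):
    # Each physical bit independently flips with probability flip_prob.
    return [b ^ 1 if rng.random() < flip_prob else b for b in bits]

def decode(bits):
    # Majority vote recovers the logical bit unless 2+ bits flipped.
    return Counter(bits).most_common(1)[0][0]

rng = random.Random(42)
trials = 10_000
physical_error = 0.05  # each physical bit flips 5% of the time
logical_errors = sum(
    decode(add_noise(encode(1), physical_error, rng)) != 1
    for _ in range(trials)
)
print(logical_errors / trials)  # far below the 5% physical error rate
```

With a 5% physical error rate, majority voting suppresses the logical error rate to roughly 3p² ≈ 0.75% – the kind of suppression quantum codes aim for with far more elaborate machinery.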

Nearly all major quantum chip developers are now putting these theories into practice, shifting their focus from qubits to logical qubits.

In 2024, many quantum computing researchers and companies made great progress on quantum error correction, including Google, QuEra, IBM and CSIRO.

Quantum chips consisting of over 100 qubits are already available. They are being used by many researchers around the world to evaluate how good the current generation of quantum computers are and how they can be made better in future generations.

For now, developers have only made single logical qubits. It will likely take a few years to figure out how to put several logical qubits together into a quantum chip that can work coherently and solve complex real-world problems.

What will quantum computers be useful for?

A fully functional quantum processor would be able to solve extremely complex problems. This could lead to revolutionary impacts in many areas of research, technology and the economy.

Quantum computers could help us discover new medicines and advance medical research by finding new connections in clinical trial data or genetics that current computers don’t have enough processing power for.

They could also greatly improve the safety of various systems that use artificial intelligence algorithms, such as banking, military targeting and autonomous vehicles, to name a few.

To achieve all this, we first need to reach a milestone known as quantum supremacy – where a quantum processor solves a problem that would take a classical computer an impractical amount of time to do.

Late last year, Google’s quantum chip Willow finally demonstrated quantum supremacy for a contrived task – a computational problem designed to be hard for classical supercomputers but easy for quantum processors due to their distinct way of working.

Although it didn’t solve a useful real-world problem, it’s still a remarkable achievement and an important step in the right direction that’s taken years of research and development. After all, to run, one must first learn to walk.

What’s on the horizon for 2025 and beyond?

In the next few years, quantum chips will continue to scale up. Importantly, the next generation of quantum processors will be underpinned by logical qubits, able to tackle increasingly useful tasks.

While quantum hardware (that is, processors) has been progressing at a rapid pace, we also can’t overlook an enormous amount of research and development in the field of quantum software and algorithms.

Using quantum simulations on normal computers, researchers have been developing and testing various quantum algorithms. This will make quantum computing ready for useful applications when the quantum hardware catches up.

Building a full-scale quantum computer is a daunting task. It will require simultaneous advancements on many fronts, such as scaling up the number of qubits on a chip, improving the fidelity of the qubits, better error correction, quantum software, quantum algorithms, and several other sub-fields of quantum computing.

After years of remarkable foundational work, we can expect 2025 to bring new breakthroughs in all of the above.

Muhammad Usman, Head of Quantum Systems and Principal Research Scientist, CSIRO

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Countries Are Breathing the Cleanest Air in Centuries and Offer Lessons to the Rest of Us


An article at Our World in Data recently explored trends in air quality across a selection of high and middle-income countries, and found that not only is the West breathing better air than at perhaps any point since urbanization, but that developing nations likely won’t need 100 years or more to arrive at similar outcomes.

Published by Hannah Ritchie, the article focuses on two kinds of gases emitted from industrial activity: sulfur dioxide (SO2) and nitrogen oxides (NOx). Both enter the air we breathe from the burning of fossil fuels—the former largely from coal—while the latter comes mostly from internal combustion engines.

Bad air quality is responsible for millions of lost life years worldwide from respiratory problems, cardiovascular issues, and neurological disease—all of which can develop and become exacerbated under prolonged exposure to air pollutants.

UK sulphur dioxide emissions – credit Community Emissions Data System (CEDS) 2024, CC BY license.

As seen in this chart, emissions of SO2 have just dipped under levels seen at the earliest periods of British industrialization. Before this, city and town air quality would have been badly tainted through emissions of wood smoke, so it’s safe to assume that 2022 marked the best British air in many centuries, not just the last two.

SO2 enters the ambient air primarily in urban environments through the burning of coal, and the significant reduction in coal use across the West has seen this number plummet.

But for middle-income countries like China and India that still rely on coal for electricity, all is not lost, as the next chart shows.

Sulfur dioxide emissions around the world – credit Community Emissions Data System (CEDS) 2024, CC BY license.

While the UK consumption of coal and emissions of SO2 have fallen in lockstep, the US and China present as excellent case studies for nations—like India, the fourth example—that rely on coal for electricity.

Even if coal consumption is increasing, SO2 emissions can still fall below baseline with the diligent application of existing technologies for “scrubbing” coal.

“In 1990, the US included a cap-and-trade scheme on SO2 as part of its Clean Air Act Amendments,” Ritchie writes. “Each coal plant was given a ‘cap’ for how much SO2 it could emit, forcing it to either implement technologies to reduce its emissions, trade credits with other plants, or pay a large fine for every tonne of extra sulfur it emitted.”

This was hugely successful: over just a single decade, emissions dropped by double-digit percentages.

Scrubbers are devices that clean the gases passing through the smokestack of a coal-burning power plant. They are large towers in which aqueous mixtures of lime or limestone absorbers are sprayed through the emissions, known as flue gases, exiting a coal boiler. The lime/limestone absorbs some of the sulfur from the flue gases.

These have been used to tremendous effect in China, which, despite tripling its coal use since 2000, has actually reduced SO2 emissions to pre-2000 levels. India neither uses nor mandates coal scrubbers, which explains its upward trajectory in both use and emissions.

One important note that the article failed to mention: if a country is burning coal, it means it isn’t burning wood or dung. While seemingly more natural than coal or oil, these fuels produce their own, more significant health hazards, as the particulate matter in wood smoke is both larger and more abundant than that in smoke from fossil fuels.

Air quality in a city will improve if it switches from wood and dung to coal, in the same way that switching from coal to natural gas improves it further. Additionally, more years of life will be lost to having no electricity at all than to coal-powered electricity.

But not all emissions come from power production. Nitrogen oxides (NOx) are generated through the burning of gasoline, diesel, and kerosene in internal combustion engines, and much like SO2 emissions, they trended gradually upward throughout the 20th century.

Nitrogen oxide emissions around the world – credit Community Emissions Data System (CEDS) 2024, CC BY license.

In the UK, NOx emissions have fallen to levels seen in 1950, even as the number of road-driven miles in the country has steadily increased to near the highest levels in the country’s history.


This was largely accomplished by the improvements in fuel efficiency and exhaust systems on automobiles mandated by the EU in the 1990s. The Euro 1 rating was introduced in 1992, and the bloc is now on Euro 6.

“To comply with regulations, car manufacturers have had to innovate on technologies that can reduce the emissions of NOx and other pollutants from car exhausts,” writes Ritchie. “These technologies have included catalytic converters, filters for particulate matter, gas recirculation—which lowers the temperature of combustion and therefore produces less NOx from the exhaust…”

In the chart above, South Africa, Brazil, and China are the nations that have adopted similar emissions standards, while the rest are those that haven’t, demonstrating how quickly these harmful emissions can be cut out of the air if smart regulation is imposed.

Beijing, once synonymous with face masks and grimy skies, now enjoys a routine weather phenomenon called the “Beijing Blue”: in other words, a blue sky. This, GNN reported, was accomplished by a “war on pollution” that led to an average life expectancy increase of 4 years for the average Beijingren.


What’s the difference between climate and weather models? It all comes down to chaos

Weather forecasts help you decide whether to go for a picnic, hang out your washing or ride your bike to work. They also provide warnings for extreme events, and predictions to optimise our power grid.

To achieve this, services such as the Australian Bureau of Meteorology use complex mathematical representations of Earth and its atmosphere – weather and climate models.

The same software is also used by scientists to predict our future climate in the coming decades or even centuries. These predictions allow us to plan for, or avoid, the impacts of future climate change.

Weather and climate models are highly complex. The Australian Community Climate and Earth System Simulator, for example, comprises millions of lines of computer code.

Without climate and weather models we would be flying blind, both for short-term weather events and for our long-term future. But how do they work – and how are they different?

The same physical principles

Weather is the short-term behaviour of the atmosphere – the temperature on a given day, the wind, whether it’s raining and how much. Climate is about long-term statistics of weather events – the typical temperature in summer, or how often thunderstorms or floods happen each decade.

The reason we can use the same modelling tools for both weather and climate is because they are both based on the same physical principles.

These models compile a range of factors – the Sun’s radiation, air and water flow, land surface, clouds – into mathematical equations. These equations are solved on a bunch of tiny three-dimensional grid boxes and pieced together to predict the future state.

These boxes are sort of like pixels that come together to make the big picture.

These solutions are calculated on a computer – where using more grid boxes (finer resolution) gives better answers, but takes more computing resources. This is why the best predictions need a supercomputer, such as the National Computational Infrastructure’s Gadi, located in Canberra.
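A rough back-of-the-envelope calculation shows why finer resolution is so expensive. The surface area and number of vertical levels below are illustrative assumptions, not the settings of any particular model:

```python
# Back-of-the-envelope cost of resolution: halving the horizontal grid
# spacing quadruples the number of columns covering Earth's surface
# (and real models also need shorter time steps on a finer grid).
EARTH_SURFACE_KM2 = 510e6   # approximate surface area of Earth in km^2
VERTICAL_LEVELS = 70        # an illustrative number of model levels

def grid_boxes(spacing_km):
    """Total 3D grid boxes for a given horizontal spacing in km."""
    columns = EARTH_SURFACE_KM2 / spacing_km ** 2
    return int(columns * VERTICAL_LEVELS)

for spacing in (100, 50, 25):
    print(f"{spacing} km spacing -> {grid_boxes(spacing):,} boxes")
```

Each halving of the spacing multiplies the box count by four here; once the shorter time step a finer grid demands is included, the real computational cost grows closer to a factor of eight or more.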

Because weather and climate are governed by the same physical processes, we can use the same software to predict the behaviour of both.

But that is where most of the similarities end.

The starting point

The main differences between weather and climate come down to a single concept: “initialisation”, or the starting point of a model.

In many cases, the simplest prediction for tomorrow’s weather is the “persistence” forecast: tomorrow’s weather will be similar to today. It means that, irrespective of how good your model is, if you start from the wrong conditions for today, you have no hope of predicting tomorrow.

Persistence forecasts are often quite good for temperature, but they’re less effective for other aspects of weather such as rainfall or wind. Since these are often the most important aspects of weather to predict, meteorologists need more sophisticated methods.
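A persistence forecast is simple enough to write down directly. The daily temperatures below are invented for illustration; the point is the method, not the numbers:

```python
# Toy "persistence" forecast: the prediction for each day is simply the
# previous day's value. The data here are invented example temperatures.
temps = [24.1, 25.3, 24.8, 27.9, 21.5, 22.0, 22.4]

persistence_forecast = temps[:-1]   # each day's forecast for the next day
observed = temps[1:]                # what actually happened the next day

errors = [abs(f - o) for f, o in zip(persistence_forecast, observed)]
mae = sum(errors) / len(errors)     # mean absolute error of the forecast
print(round(mae, 2))
```

Real forecast systems are judged partly by how much they beat simple baselines like this one.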

So, weather models use complex mathematics to combine weather observations (from yesterday and today) into the best possible estimate of the current state, and then make a good prediction of tomorrow. These predictions are a big improvement on persistence forecasts, but they won’t be perfect.

In addition, the further ahead you try to predict, the more information you forget about the initial state and the worse your forecast performs. So you need to regularly update and rerun (or, to use modelling parlance, “initialise”) the model to get the best prediction.

Weather services today can reliably predict three to seven days ahead, depending on the region, the season and the type of weather systems involved.

Chaos reigns

If we can only accurately predict weather systems about a week ahead before chaos takes over, climate models have no hope of predicting a specific storm next century.

Instead, climate models use a completely different philosophy. They aim to produce the right type and frequency of weather events, but not a specific forecast of the actual weather.

The cumulative effect of these weather events produces the climate state. This includes factors such as the average temperature and the likelihood of extreme weather events.

So, a climate model doesn’t give us an answer based on weather information from yesterday or today – it is run for centuries to produce its own equilibrium for a simulated Earth.

Because it is run for so long, a climate (also known as Earth system) model will need to account for additional, longer-term processes not factored into weather models, such as ocean circulation, the cryosphere (the frozen portions of the planet), the natural carbon cycle and carbon emissions from human activities.

The additional complexity of these extra processes, combined with the need for century-long simulations, means these models use a lot of computing power. Constraints on computing means that we often include fewer grid boxes (that is, lower resolution) in climate models than weather models.

A machine learning revolution?

Is there a faster way?

Enormous strides have been made in the past couple of years to predict the weather with machine learning. In fact, machine-learning models can now outperform physics-based models on some forecasting tasks.

But these models need to be trained. And right now, we have insufficient weather observations to train them. This means their training still needs to be supplemented by the output of traditional models.

And despite some encouraging recent attempts, it’s not clear that machine learning models will be able to simulate future climate change. The reason again comes down to training – in particular, global warming will shift the climate system to a different state for which we have no observational data whatsoever to train or verify a predictive machine learning model.

Now more than ever, climate and weather models are crucial digital infrastructure. They are powerful tools for decision makers, as well as research scientists. They provide essential support for agriculture, resource management and disaster response, so understanding how they work is vital.

Andy Hogg, Professor and Director of ACCESS-NRI, Australian National University; Aidan Heerdegen, Leader, ACCESS-NRI Model Release Team, Australian National University, and Kelsey Druken, Associate Director (Release Management), ACCESS-NRI, Australian National University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Curious Kids: If you scoop a bucket of water out of the ocean, does it get lower?

If you scoop a bucket of water out of the ocean, does it get lower?

–Ellis, 6 and a half, Hobart

This is a great question Ellis! The short answer is yes, but the change in water level will be extremely tiny. You can actually test this idea at home.

For starters, you’ll need a glass of water and a teaspoon. Fill the glass almost to the top, and take note of the water level. Now, carefully remove a teaspoon of water. Can you see the difference in the water level? Maybe you can, but maybe not.

You could repeat this experiment in the kitchen sink, or a bathtub if you have one. The key point is that the water level does drop, but only by a very small amount. If you scoop a teaspoon of water out of the bathtub, you probably won’t see the difference with the naked eye.

Millions of buckets

So, let’s return to the ocean. It’s truly huge, especially compared to a bucket.

Let’s say that you have a bucket that fits ten litres. Using published estimates of the ocean’s total volume, there are about 137 million, million, million buckets of water in the ocean (that is, all of Earth’s oceans combined).

I crunched the numbers. If you took a bucket of water from the ocean, the water level would drop by around 0.0000000000000277 millimetres. You can see how small a millimetre is on your school ruler. We don’t have anything on Earth that can measure a change this small. For example, this is way, way, way smaller than even a single atom.

So, the more detailed answer to your question is: yes, the water level gets lower, but by such a small amount that we can’t even measure it.
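For readers who want to check the arithmetic, here it is in a few lines of Python. The ocean volume and surface-area figures are rough published estimates (about 1.37 billion cubic kilometres and 361 million square kilometres), not values from the original article:

```python
# Back-of-the-envelope version of the bucket calculation.
OCEAN_VOLUME_L = 1.37e9 * 1e12    # 1.37e9 km^3; 1 km^3 = 10^12 litres
OCEAN_SURFACE_MM2 = 361e6 * 1e12  # 361e6 km^2; 1 km^2 = 10^12 mm^2
BUCKET_L = 10

buckets = OCEAN_VOLUME_L / BUCKET_L              # buckets in the ocean
drop_mm = (BUCKET_L * 1e6) / OCEAN_SURFACE_MM2   # 1 litre = 10^6 mm^3

print(f"{buckets:.2e} buckets")  # 1.37e+20: 137 million, million, million
print(f"{drop_mm:.2e} mm")       # 2.77e-14 mm per bucket removed
```

The resulting drop, a few hundredths of a million-millionth of a millimetre, is millions of times smaller than the width of a single atom.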

But wait, there’s more

Earth is a really interesting place. When you take your bucket of water, all that water is moving through something called the water cycle.

Sea levels are actually constantly changing. Each year, a lot of water evaporates from the ocean. Some of it is even lost to outer space.

However, most of the evaporated water rains back down directly onto the ocean, or onto the ground, with that water making its way to rivers that eventually flow to the ocean. There is also a lot of water stored underground, and some of it makes its way to the ocean, as well.

So, if you poured your bucket of water onto the ground, eventually it would end up back in the ocean via the water cycle!

A few fun facts

There’s a lot to know about water. Some more fun facts (and big numbers):

  1. There are about 1,500,000,000,000,000,000,000 molecules of water (H₂O) in a single drop of water. That’s 1.5 thousand million, million, million.

  2. The oldest water in the world is estimated to have fallen as rain more than 1.6 billion years ago.

  3. Most (about 98%) of the world’s fresh, liquid water is underground – that’s why it’s called groundwater.

Dylan Irvine, Outstanding Future Researcher - Northern Water Futures, Charles Darwin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


What’s the difference between liquid and powder laundry detergent? It’s not just the obvious

When shopping for a laundry detergent, the array of choices is baffling. All of the products will likely get your laundry somewhat cleaner. But what gets the best outcome for your clothes and your budget?

Do you want whiter whites? Do you need enzymes? And what’s the difference between a powder and liquid detergent?

As is often the case, knowing more about the chemistry involved will help you answer those questions.

What is a detergent?

The active ingredients in both laundry powders and liquids are “surfactants”, also known as detergents (hence the product name). These are typically charged or “ionic” molecules that have two distinct parts to their structure. One part interacts well with water and the other interacts with oils.

This useful property allows surfactants to lift grease and grime from fabrics and suspend it in the water. Surfactants can also form bubbles.

Metal salts dissolved in your water can limit the performance of the surfactants. So-called hard water contains lots of dissolved calcium and magnesium salts which can readily form soap scum.

Modern laundry detergents therefore contain phosphates, water softeners and other metal “sequestrants” to stop the formation of soap scum. Phosphates can cause algal blooms in fresh water environments. This is why modern detergent formulations contain smaller amounts of phosphates.

Many products also contain optical brighteners. These chemicals absorb ultraviolet light and release blue light, which provides the “whiter white” or “brighter colour” phenomenon.

Laundry detergents typically contain fragrances. These aren’t essential to the chemistry of cleaning, but give the impression the clothes are fresh.

Lastly, some laundry detergents contain enzymes – more on those later.

What’s in laundry powder?

While detergents and ingredients to avoid soap scum are the most important components, they aren’t the most abundant. The main ingredients in powders are salts (like sodium sulfate) that add bulk and stop the powder from clumping.

Another common salt added to laundry powders is sodium carbonate, also known as washing soda. Washing soda (a chemical cousin of baking soda) helps to chemically modify grease and grime so they dissolve in water.

Laundry powders also frequently contain oxidising agents like sodium percarbonate. This is a stable combination of washing soda and hydrogen peroxide. An additive known as tetraacetylethylenediamine activates the percarbonate to give a mild bleaching effect.

Chemically, powders have an advantage – their components can be formulated and mixed but kept separate in a solid form. (You can usually see different types of granules in your laundry powder.)

What’s in laundry liquid?

The main ingredient of laundry liquid is water. The remaining ingredients have to be carefully considered. They must be stable in the bottle and then work together in the wash.

These include similar ingredients to the powders, such as alkaline salts, metal sequestrants, water softeners and surfactants.

The surfactants in liquid products are often listed as “ionic” (charged) and “non-ionic” (non-charged). Non-ionic surfactants can be liquid by default, which makes them inappropriate for powdered formulations. Non-ionic surfactants are good at suspending oils in water and don’t form soap scum.

Liquid detergents also contain preservatives to prevent the growth of microbes that would spoil the mixture.

There are also microbial implications for inside the washing machine. Liquid products can’t contain the peroxides (mild bleaching agents) found in powdered products. Peroxides kill microbes. The absence of peroxides in liquid detergents makes it more likely for mould biofilms to form in the machine and for bacteria to be transferred between items of clothing.

As an alternative to peroxides, liquids will typically contain only optical brighteners.

Liquids do have one advantage over powders – they can be added directly to stains prior to placing the item in the wash.

A recent “convenience” version of liquid formulas is the highly concentrated detergent pod. Colourful and bearing a resemblance to sweet treats, these products have been found to be dangerous to young children and people with cognitive impairment.

Pods also remove the option to add less detergent if you’re running a smaller load or just want to use less detergent in general.

So, what about enzymes?

Enzymes are naturally evolved proteins included in laundry products to remove specific stains. Chemically, they are catalysts – things that speed up chemical reactions.

Enzymes are named for the molecules they work on, followed by the ending “-ase”. For example, lipase breaks down fats (lipids), protease breaks down protein, while amylase and mannanase break down starches and sugars.

These enzymes are derived from organisms found in cool climate regions, which helps them function at the low temperature of washing water.

Running an excessively hot wash cycle can damage or denature the enzyme structure, stopping them from assisting in your wash. Think of an egg white changing from translucent to white while cooked – that’s protein denaturing.

If your detergent contains enzymes, the washing temperature should be neither too hot nor too cold. As a guide, temperatures of 15–20°C are used in standard laundry tests.

Is powder or liquid better?

We make consumer choices guided by performance, psychology, cost, scent, environmental considerations and convenience.

It’s worth experimenting with different products to find what works best for you and fits your needs, household budget and environmental considerations, such as having recyclable packaging.

Personally, I wash at 20°C with half the recommended dose of a pleasant-smelling laundry powder, packaged in recyclable cardboard, and containing a wide range of enzymes and an activated peroxide source.

Knowing a little chemistry can go a long way to getting your clothes clean.

However, laundry detergent manufacturers don’t always disclose the full list of ingredients on their product packaging.

If you want more information on what’s in your product, you have to look at the product website. You can also dig a little deeper by reading documents called safety data sheets (SDS). Every product containing potentially hazardous chemicals must have an SDS.

Nathan Kilah, Senior Lecturer in Chemistry, University of Tasmania

This article is republished from The Conversation under a Creative Commons license. Read the original article.


The science of happier dogs: 5 tips to help your canine friends live their best life

When you hear about “science focused on how dogs can live their best lives with us” it sounds like an imaginary job made up by a child. However, the field of animal welfare science is real and influential.

As our most popular animal companion and coworker, dogs are very deserving of scientific attention. In recent years we’ve learned more about how dogs are similar to people, but also how they are distinctly themselves.

We often think about how dogs help us – as companions, working as detectors, and keeping us safe and healthy. Dog-centric science helps us think about the world from a four-paw perspective and apply this new knowledge so dogs can enjoy a good life.

Here are five tips to keep the tails in your life wagging happily.

1. Let dogs sniff

Sniffing makes dogs happier. We tend to forget they live in a smell-based world because we’re so visual. Often, taking the dog for a walk is our own daily physical activity, but we should remember it could be our dogs’ only time out of the home environment.

Letting them have a really good sniff of that tree or post is full of satisfying information for them. It’s their nose’s equivalent of us standing at the top of a mountain and enjoying a rich, colour-soaked, sunset view.

2. Give dogs agency

Agency is a hot topic in animal welfare science right now. For people who lived through the frustration of strict lockdowns in the early years of COVID, it’s easy to remember how not being able to go where we wanted, or see who we wanted, when we wanted, impacted our mental health.

We’ve now learned that giving animals choice and control in their lives is important for their mental wellbeing too. We can help our dogs enjoy better welfare by creating more choices and offering them control to exercise their agency.

This might be installing a doggy door so they can go outside or inside when they like. It could be letting them decide which sniffy path to take through your local park. Perhaps it’s choosing which three toys to play with that day from a larger collection that gets rotated around. Maybe it’s putting an old blanket down in a new location where you’ve noticed the sun hits the floor for them to relax on.

Providing choices doesn’t have to be complicated or expensive.

3. Recognise all dogs are individuals

People commonly ascribe certain personality traits to certain dog breeds. But just like us, dogs have their own personalities and preferences. Not all dogs are going to like the same things and a new dog we live with may be completely different to the last one.

One dog might like to go to the dog park and run around with other dogs at high speed for an hour, while another dog would much rather hang out with you chewing on something in the garden.

We can see as much behavioural variation within breeds as we do between them. Being prepared to meet dogs where they are, as individuals, is important to their welfare.

As well as noticing what dogs like to do as individuals, it’s important not to force dogs into situations they don’t enjoy. Pay attention to behaviour that indicates dogs aren’t comfortable, such as looking away, licking their lips or yawning.

4. Respect dogs’ choice to opt out

Even in our homes, we can provide options if our dogs don’t want to share in every activity with us. Having a quiet place that dogs can retreat to is really important in enabling them to opt out if they want to.

If you’re watching television loudly, it may be too much for their sensitive ears. Ensure a door is open to another room so they can retreat. Some dogs might feel overwhelmed when visitors come over; giving them somewhere safe and quiet to go rather than forcing an interaction will help them cope.

Dogs can be terrific role models for children when teaching empathy. We can demonstrate consent by letting dogs approach us for pats and depart when they want. Like seeing exotic animals perform in circuses, dressing up dogs for our own entertainment seems to have had its day. If you asked most dogs, they don’t want to wear costumes or be part of your Halloween adventures.

5. Provide opportunities for off-lead activity, safely

When dogs are allowed to run off-lead, they use space differently. They tend to explore more widely and go faster than they do when walking with us on-lead. This offers them important and fun physical activity to keep them fit and healthy.

Demonstrating how dogs walk differently when on- and off-lead.

A recent exploration of how liveable cities are for dogs mapped all the designated areas for dogs to run off-leash. Doggy density ranged from one dog for every six people to one dog for every 30 people, depending on where you live.

It also considered how access to these areas compared with the annual registration fees for dogs in each local government area, with surprising differences noted across greater Melbourne. We noted fees varied between A$37 and $84, and these didn’t relate to how many off-lead areas you could access.

For dog-loving nations, such as Australia, helping our canine friends live their best life feels good. Science that comes from a four-paw perspective can help us reconsider our everyday interactions with dogs and influence positive changes so we can live well, together.

Mia Cobb, Research Fellow, Animal Welfare Science Centre, The University of Melbourne

This article is republished from The Conversation under a Creative Commons license. Read the original article.
