Blue, green, brown, or something in between – the science of eye colour explained
You’re introduced to someone and your attention catches on their eyes. They might be a rich, earthy brown, a pale blue, or the rare green that shifts with every flicker of light. Eyes have a way of holding us, of sparking recognition or curiosity before a single word is spoken. They are often the first thing we notice about someone, and sometimes the feature we remember most.
Across the world, human eyes span a wide palette. Brown is by far the most common shade, especially in Africa and Asia, while blue is most often seen in northern and eastern Europe. Green is the rarest of all, found in only about 2% of the global population. Hazel eyes add even more diversity, often appearing to shift between green and brown depending on the light.
So, what lies behind these differences?
It’s all in the melanin
The answer rests in the iris, the coloured ring of tissue that surrounds the pupil. Here, a pigment called melanin does most of the work.
Brown eyes contain a high concentration of melanin, which absorbs light and creates their darker appearance. Blue eyes contain very little melanin. Their colour doesn’t come from pigment at all but from the scattering of light within the iris, a physical effect known as the Tyndall effect, a bit like the effect that makes the sky look blue.
In blue eyes, the shorter wavelengths of light (such as blue) are scattered more effectively than longer wavelengths like red or yellow. Due to the low concentration of melanin, less light is absorbed, allowing the scattered blue light to dominate what we perceive. This blue hue results not from pigment but from the way light interacts with the eye’s structure.
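For a rough sense of how strongly this favours blue, here is a small Python sketch. It assumes an idealised Rayleigh-style 1/wavelength⁴ dependence, which is only an approximation of the scattering that actually happens in the iris; the wavelengths are typical values, not measurements.

```python
# Toy illustration only: relative scattering strength versus wavelength,
# assuming an idealised Rayleigh-style 1/wavelength^4 dependence. Scattering
# in a real iris is more complicated than this.

wavelengths_nm = {"blue": 450, "green": 550, "red": 650}  # typical values

reference = wavelengths_nm["red"]
for colour, wavelength in wavelengths_nm.items():
    relative = (reference / wavelength) ** 4  # scattering relative to red light
    print(f"{colour:>5} ({wavelength} nm): scattered ~{relative:.1f}x as strongly as red")
```

On these assumed numbers, blue light is scattered roughly four times as strongly as red, which is why the scattered light we see from a low-melanin iris looks blue.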
Green eyes result from a balance, a moderate amount of melanin layered with light scattering. Hazel eyes are more complex still. Uneven melanin distribution in the iris creates a mosaic of colour that can shift depending on the surrounding ambient light.
What have genes got to do with it?
The genetics of eye colour is just as fascinating.
For a long time, scientists believed a simple “brown beats blue” model, controlled by a single gene. Research now shows the reality is much more complex. Many genes contribute to determining eye colour. This explains why children in the same family can have dramatically different eye colours, and why two blue-eyed parents can sometimes have a child with green or even light brown eyes.
Eye colour also changes over time. Many babies of European ancestry are born with blue or grey eyes because their melanin levels are still low. As pigment gradually builds up over the first few years of life, those blue eyes may shift to green or brown.
In adulthood, eye colour tends to be more stable, though small changes in appearance are common depending on lighting, clothing, or pupil size. For example, blue-grey eyes can appear very blue, very grey or even a little green depending on ambient light. More permanent shifts are rarer but can occur as people age, or in response to certain medical conditions that affect melanin in the iris.
The real curiosities
Then there are the real curiosities.
Heterochromia, where one eye is a different colour from the other, or one iris contains two distinct colours, is rare but striking. It can be genetic, the result of injury, or linked to specific health conditions. Celebrities such as Kate Bosworth and Mila Kunis are well-known examples. Musician David Bowie’s eyes appeared as different colours because of a permanently dilated pupil after an accident, giving the illusion of heterochromia.
In the end, eye colour is more than just a quirk of genetics and physics. It’s a reminder of how biology and beauty intertwine. Each iris is like a tiny universe: rings of pigment, flecks of gold, or pools of deep brown that catch the light differently every time you look.
Eyes don’t just let us see the world, they also connect us to one another. Whether blue, green, brown, or something in-between, every pair tells a story that’s utterly unique, one of heritage, individuality, and the quiet wonder of being human.
Davinia Beaver, Postdoctoral research fellow, Clem Jones Centre for Regenerative Medicine, Bond University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
An AI system has reached human level on a test for ‘general intelligence’. Here’s what that means
A new artificial intelligence (AI) model has just achieved human-level results on a test designed to measure “general intelligence”.
On December 20, OpenAI’s o3 system scored 85% on the ARC-AGI benchmark, well above the previous AI best score of 55% and on par with the average human score. It also scored well on a very difficult mathematics test.
Creating artificial general intelligence, or AGI, is the stated goal of all the major AI research labs. At first glance, OpenAI appears to have at least made a significant step towards this goal.
While scepticism remains, many AI researchers and developers feel something just changed. For many, the prospect of AGI now seems more real, urgent and closer than anticipated. Are they right?
Generalisation and intelligence
To understand what the o3 result means, you need to understand what the ARC-AGI test is all about. In technical terms, it’s a test of an AI system’s “sample efficiency” in adapting to something new – how many examples of a novel situation the system needs to see to figure out how it works.
An AI system like ChatGPT (GPT-4) is not very sample efficient. It was “trained” on millions of examples of human text, constructing probabilistic “rules” about which combinations of words are most likely.
The result is pretty good at common tasks. It is bad at uncommon tasks, because it has less data (fewer samples) about those tasks.
Until AI systems can learn from small numbers of examples and adapt with more sample efficiency, they will only be used for very repetitive jobs and ones where the occasional failure is tolerable.
The ability to accurately solve previously unknown or novel problems from limited samples of data is known as the capacity to generalise. It is widely considered a necessary, even fundamental, element of intelligence.
Grids and patterns
The ARC-AGI benchmark tests for sample-efficient adaptation using small coloured-grid puzzles. Given an input grid and an output grid, the AI needs to figure out the pattern that turns one into the other.
Each question gives three examples to learn from. The AI system then needs to figure out the rules that “generalise” from the three examples to the fourth.
These are a lot like the IQ tests you might remember from school.
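As a very rough illustration (not a real ARC-AGI task), the sketch below captures the flavour of the exercise: read a hidden rule off a few example input/output grids, then apply it to a new grid. The rule here, a simple colour substitution, is hypothetical and far easier than the spatial reasoning the real benchmark demands.

```python
# A toy, hypothetical ARC-style puzzle: each example maps an input grid to an
# output grid via some hidden rule. Here the hidden rule is a simple colour
# substitution, and we "learn" it by reading the mapping off the examples.

examples = [
    ([[1, 0], [0, 1]], [[2, 0], [0, 2]]),
    ([[1, 1], [0, 0]], [[2, 2], [0, 0]]),
    ([[0, 1], [1, 1]], [[0, 2], [2, 2]]),
]

# Infer a cell-by-cell colour mapping that is consistent with every example.
mapping = {}
for grid_in, grid_out in examples:
    for row_in, row_out in zip(grid_in, grid_out):
        for a, b in zip(row_in, row_out):
            assert mapping.setdefault(a, b) == b, "examples are inconsistent"

# Apply the inferred rule to a new test grid.
test = [[1, 0, 1], [0, 1, 0]]
prediction = [[mapping.get(cell, cell) for cell in row] for row in test]
print(prediction)  # [[2, 0, 2], [0, 2, 0]]
```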
Weak rules and adaptation
We don’t know exactly how OpenAI has done it, but the results suggest the o3 model is highly adaptable. From just a few examples, it finds rules that can be generalised.
To figure out a pattern, we shouldn’t make any unnecessary assumptions, or be more specific than we really have to be. In theory, if you can identify the “weakest” rules that do what you want, then you have maximised your ability to adapt to new situations.
What do we mean by the weakest rules? The technical definition is complicated, but weaker rules are usually ones that can be described in simpler statements.
In the example above, a plain English expression of the rule might be something like: “Any shape with a protruding line will move to the end of that line and ‘cover up’ any other shapes it overlaps with.”
Searching chains of thought?
While we don’t know how OpenAI achieved this result just yet, it seems unlikely they deliberately optimised the o3 system to find weak rules. However, to succeed at the ARC-AGI tasks it must be finding them.
We do know that OpenAI started with a general-purpose version of the o3 model (which differs from most other models, because it can spend more time “thinking” about difficult questions) and then trained it specifically for the ARC-AGI test.
French AI researcher Francois Chollet, who designed the benchmark, believes o3 searches through different “chains of thought” describing steps to solve the task. It would then choose the “best” according to some loosely defined rule, or “heuristic”.
This would be “not dissimilar” to how Google’s AlphaGo system searched through different possible sequences of moves to beat the world Go champion.
You can think of these chains of thought like programs that fit the examples. Of course, if it is like the Go-playing AI, then it needs a heuristic, or loose rule, to decide which program is best.
There could be thousands of different seemingly equally valid programs generated. That heuristic could be “choose the weakest” or “choose the simplest”.
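Purely as an illustration of what a “choose the simplest” heuristic might look like (this is not a description of how o3 actually works), you could imagine keeping, out of all the candidate rules that fit the examples, the one with the shortest description:

```python
# Purely illustrative: rank candidate "programs" (here, plain Python functions
# paired with a textual description) by the length of their description and
# keep the shortest one that reproduces every example.

examples = [(1, 2), (3, 6), (5, 10)]  # (input, expected output) pairs

candidates = [
    ("double the input", lambda x: x * 2),
    ("double the input unless it is zero, in which case return one",
     lambda x: 1 if x == 0 else x * 2),
]

def fits_all(program):
    return all(program(x) == y for x, y in examples)

# Among the candidates that fit the examples, prefer the shortest description.
valid = [(desc, prog) for desc, prog in candidates if fits_all(prog)]
best_desc, best_prog = min(valid, key=lambda pair: len(pair[0]))
print(best_desc)     # "double the input"
print(best_prog(7))  # 14
```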
However, if it is like AlphaGo, OpenAI may simply have had an AI create the heuristic. This was the process for AlphaGo: Google trained a model to rate different sequences of moves as better or worse than others.
What we still don’t know
The question then is, is this really closer to AGI? If that is how o3 works, then the underlying model might not be much better than previous models.
The concepts the model learns from language might not be any more suitable for generalisation than before. Instead, we may just be seeing a more generalisable “chain of thought” found through the extra steps of training a heuristic specialised to this test. The proof, as always, will be in the pudding.
Almost everything about o3 remains unknown. OpenAI has limited disclosure to a few media presentations and early testing to a handful of researchers, laboratories and AI safety institutions.
Truly understanding the potential of o3 will require extensive work, including evaluations, an understanding of the distribution of its capacities, how often it fails and how often it succeeds.
When o3 is finally released, we’ll have a much better idea of whether it is approximately as adaptable as an average human.
If so, it could have a huge, revolutionary economic impact, ushering in a new era of self-improving accelerated intelligence. We will require new benchmarks for AGI itself and serious consideration of how it ought to be governed.
If not, then this will still be an impressive result. However, everyday life will remain much the same.
Michael Timothy Bennett, PhD Student, School of Computing, Australian National University and Elija Perrier, Research Fellow, Stanford Center for Responsible Quantum Technology, Stanford University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
The first pig kidney has been transplanted into a living person. But we’re still a long way from solving organ shortages
In a world first, we heard last week that US surgeons had transplanted a kidney from a gene-edited pig into a living human. News reports said the procedure was a breakthrough in xenotransplantation – when an organ, cells or tissues are transplanted from one species to another.
Champions of xenotransplantation regard it as the solution to organ shortages across the world. In December 2023, 1,445 people in Australia were on the waiting list for donor kidneys. In the United States, more than 89,000 are waiting for kidneys.
One biotech CEO says gene-edited pigs promise “an unlimited supply of transplantable organs”.
Not everyone, though, is convinced transplanting animal organs into humans is really the answer to organ shortages, or even that it’s right to use organs from other animals this way.
There are two critical barriers to the procedure’s success: organ rejection and the transmission of animal viruses to recipients.
But in the past decade, a new platform and technique known as CRISPR/Cas9 – often shortened to CRISPR – has promised to mitigate these issues.
What is CRISPR?
CRISPR gene editing takes advantage of a system already found in nature. CRISPR’s “genetic scissors” evolved in bacteria and other microbes to help them fend off viruses. Their cellular machinery allows them to integrate and ultimately destroy viral DNA by cutting it.
In 2012, two teams of scientists discovered how to harness this bacterial immune system. This is made up of repeating arrays of DNA and associated proteins, known as “Cas” (CRISPR-associated) proteins.
When they used a particular Cas protein (Cas9) with a “guide RNA” made up of a single molecule, they found they could program the CRISPR/Cas9 complex to break and repair DNA at precise locations as they desired. The system could even “knock in” new genes at the repair site.
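To make “precise locations” concrete, here is a toy Python scan for sites that the widely used Cas9 from Streptococcus pyogenes could target: a 20-letter stretch of DNA followed by an “NGG” signal (the PAM), with the cut landing about three bases before that signal. The DNA sequence is invented, and real guide-RNA design involves far more than this.

```python
# A toy scan for SpCas9 target sites: a 20-base protospacer immediately
# followed by an "NGG" PAM. The DNA sequence here is made up, and this is a
# simplification of real guide-RNA design.

dna = "ATGCGTACCGGTTAGCTTACGATCGGAGCTTACGGTTAACCGGATCCGTAGG"

def find_cas9_sites(seq):
    sites = []
    for i in range(len(seq) - 23 + 1):
        window = seq[i:i + 23]
        protospacer, pam = window[:20], window[20:]
        if pam[1:] == "GG":           # "NGG" PAM: any base, then two Gs
            cut_index = i + 17        # Cas9 cuts about 3 bases upstream of the PAM
            sites.append((protospacer, pam, cut_index))
    return sites

for protospacer, pam, cut in find_cas9_sites(dna):
    print(f"target {protospacer}  PAM {pam}  cut near index {cut}")
```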
In 2020, the two scientists leading these teams were awarded a Nobel prize for their work.
In the case of the latest xenotransplantation, CRISPR technology was used to edit 69 genes in the donor pig to inactivate viral genes, “humanise” the pig with human genes, and knock out harmful pig genes.
A busy time for gene-edited xenotransplantation
While CRISPR editing has brought new hope to the possibility of xenotransplantation, even recent trials show great caution is still warranted.
In 2022 and 2023, two patients with terminal heart disease, who were ineligible for traditional heart transplants, were granted regulatory permission to receive a gene-edited pig heart. These pig hearts had ten genome edits to make them more suitable for transplanting into humans. However, both patients died within several weeks of the procedures.
Earlier this month, we heard a team of surgeons in China transplanted a gene-edited pig liver into a clinically dead man (with family consent). The liver functioned well up until the ten-day limit of the trial.
How is this latest example different?
The gene-edited pig kidney was transplanted into a relatively young, living, legally competent and consenting adult.
The total number of gene edits made to the donor pig is very high. The researchers report making 69 edits to inactivate viral genes, “humanise” the pig with human genes, and knock out harmful pig genes.
Clearly, the race to transform these organs into viable products for transplantation is ramping up.
From biotech dream to clinical reality
Only a few months ago, CRISPR gene editing made its debut in mainstream medicine.
In November, drug regulators in the United Kingdom and US approved the world’s first CRISPR-based genome-editing therapy for human use – a treatment for life-threatening forms of sickle-cell disease.
The treatment, known as Casgevy, uses CRISPR/Cas9 to edit the patient’s own blood (bone-marrow) stem cells. By disrupting the unhealthy gene that gives red blood cells their “sickle” shape, the aim is to produce red blood cells with a healthy, rounded disc shape.
Although the treatment uses the patient’s own cells, the same underlying principle applies to recent clinical xenotransplants: unsuitable cellular materials may be edited to make them therapeutically beneficial in the patient.
We’ll be talking more about gene-editing
Medicine and gene technology regulators are increasingly asked to approve new experimental trials using gene editing and CRISPR.
However, neither xenotransplantation nor the therapeutic applications of this technology lead to changes to the genome that can be inherited.
For this to occur, CRISPR edits would need to be applied to the cells at the earliest stages of their life, such as to early-stage embryonic cells in vitro (in the lab).
In Australia, intentionally creating heritable alterations to the human genome is a criminal offence carrying 15 years’ imprisonment.
No jurisdiction in the world has laws that expressly permit heritable human genome editing. However, some countries lack specific regulations about the procedure.
Is this the future?
Even without creating inheritable gene changes, however, xenotransplantation using CRISPR is in its infancy.
For all the promise of the headlines, there is not yet one example of a stable xenotransplantation in a living human lasting beyond seven months.
While authorisation for this recent US transplant has been granted under the so-called “compassionate use” exemption, conventional clinical trials of pig-human xenotransplantation have yet to commence.
But such trials would likely require significant improvements in current outcomes to gain regulatory approval in the US or elsewhere.
By the same token, regulatory approval of any “off-the-shelf” xenotransplantation organs, including gene-edited kidneys, would seem some way off.
Christopher Rudge, Law lecturer, University of Sydney
This article is republished from The Conversation under a Creative Commons license. Read the original article.
What is a sonar pulse and how can it injure humans under water?
Christine Erbe, Curtin University
Over the weekend, the Australian government revealed that last Tuesday its navy divers had sustained “minor injuries”, likely due to sonar pulses from a Chinese navy vessel.
The divers had been clearing fishing nets from the propellers of HMAS Toowoomba while in international waters off the coast of Japan. According to a statement from deputy prime minister Richard Marles, despite HMAS Toowoomba communicating with internationally recognised signals, the Chinese vessel approached the Australian ship and turned on its sonar, forcing the Australian divers to exit the water.
The incident prompted a response from the Australian government, which labelled the Chinese vessel’s actions “unsafe and unprofessional”. But what exactly is a sonar pulse, and what kinds of injuries can sonar cause to divers?
What is sonar?
Light doesn’t travel well under water – even in clear waters, you can see perhaps some tens of metres. Sound, however, travels very well and far under water. This is because water is much denser than air, and so can respond faster and better to acoustic pressure waves – sound waves.
Because of these properties, ships use sonar to navigate through the ocean and to “see” under water. The word “sonar” stands for sound navigation and ranging.
Sonar equipment sends out short acoustic (sound) pulses or pings, and then analyses the echoes. Depending on the timing, amplitude, phase and direction of the echoes the equipment receives, you can tell what’s under water – the seafloor, canyon walls, coral, fishes, and of course ships and submarines.
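The “ranging” part comes down to simple arithmetic: sound travels at roughly 1,500 metres per second in seawater, and the echo covers the distance twice (out and back). The short sketch below works through an illustrative example; the numbers are not from the incident described here.

```python
# Estimate the range to an object from a sonar echo's round-trip time.
# Sound travels at roughly 1,500 m/s in seawater (it varies with temperature,
# salinity and depth); the echo covers the distance twice (out and back).

SPEED_OF_SOUND_SEAWATER = 1500.0  # metres per second, approximate

def range_from_echo(round_trip_seconds):
    return SPEED_OF_SOUND_SEAWATER * round_trip_seconds / 2

# An echo returning after 0.4 s implies an object roughly 300 m away.
print(f"{range_from_echo(0.4):.0f} m")  # 300 m
```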
Most vessels – from small, private boats to large commercial tankers – use sonar. However, compared to your off-the-shelf sonar used for finding fish, navy sonars are stronger.
What are the effects of sonar on divers?
This is a difficult topic to study, because you don’t want to deliberately expose humans to harmful levels of sound. There are, however, anecdotes from various navies and accidental exposures. There have also been studies on what humans can hear under water, with or without neoprene suits, hoods, or helmets.
We don’t hear well under water – no surprise, since we’ve evolved to live on land. Having said that, you would hear a sonar sound under water (a mid-to-high pitch noise) and would know you’ve been exposed.
When it comes to naval sonars, human divers have rated the sound as “unpleasant to severe” at levels of roughly 150dB re 1 µPa (decibel relative to a reference pressure of one micropascal, the standard reference for underwater sound). This would be perhaps, very roughly, 10km away from a military sonar. Note that we can’t compare sound exposure under water to what we’d receive through the air, because there are too many physical differences between the two.
Human tolerance limits are roughly 180dB re 1 µPa, which would be around 500m from military sonar. At such levels, humans might experience dizziness, disorientation, temporary memory and concentration impacts, or temporary hearing loss. We don’t have information on what levels the Australian divers were exposed to, but their injuries were described as minor.
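For context, a level quoted in dB re 1 µPa can be converted back to an actual sound pressure as 1 µPa × 10^(level/20). The small sketch below does this for the 150 dB and 180 dB figures mentioned above.

```python
# Convert underwater sound levels (dB re 1 micropascal) back to pressure.
# Level (dB) = 20 * log10(p / p_ref), so p = p_ref * 10 ** (level / 20).

P_REF = 1e-6  # reference pressure of 1 micropascal, in pascals

def pressure_from_db(level_db):
    return P_REF * 10 ** (level_db / 20)

for level in (150, 180):
    print(f"{level} dB re 1 uPa  ->  about {pressure_from_db(level):.0f} Pa")
# 150 dB -> about 32 Pa; 180 dB -> about 1000 Pa
```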
At higher received levels, closer ranges, or longer exposures, you might see more severe physiological or health impacts. In extreme cases, in particular for impulsive, sudden sound (which sonar is not), sound can cause damage to tissues and organs.
What does sonar do to marine animals?
Some of the information on what noise might do to humans under water comes from studies and observations of animals.
While they typically don’t have outer ears (except for sea lions), marine mammals have inner ears that function similarly to ours. They can receive hearing damage from noise, just like we do. This might be temporary, like the ringing ears or reduced sensitivity you might experience after a loud concert, or it can be permanent.
Marine mammals living in a dark ocean rely on sound and hearing to a greater extent than your average human. They use sound to navigate, hunt, communicate with each other and to find mates. Toothed whales and dolphins have evolved a biological echo sounder or biosonar, which sends out series of clicks and listens for echoes. So, interfering with their sounds or impacting their hearing can disrupt critical behaviours.
Finally, sound may also impact non-mammalian fauna, such as fishes, which rely on acoustics rather than vision for many of their life functions.
Christine Erbe, Director, Centre for Marine Science & Technology, Curtin University
This article is republished from The Conversation under a Creative Commons license. Read the original article.