First Human Cornea Transplant Using 3D Printed, Lab-Grown Tissue Restores Sight in a ‘Game Changer’ for Millions Who are Blind

File photo – credit: Maria Maximova

The first successful human implant of a 3D-printed cornea made from human eye cells cultured in a laboratory has restored a patient’s sight.

The North Carolina-based company that developed the cornea described the procedure as a ‘world first’—and a major milestone toward its goal of alleviating the lack of available donor tissue and long wait-times for people seeking transplants.

According to Precise Bio, its robotic bio-fabrication approach could potentially turn a single donated cornea into hundreds of lab-grown grafts, at a time when there’s currently only one available for an estimated 70 patients who need one to see.

“This achievement marks a turning point for regenerative ophthalmology—a moment of real hope for millions living with corneal blindness,” Aryeh Batt, Precise Bio’s co-founder and CEO, said in a statement.

“For the first time, a corneal implant manufactured entirely in the lab from cultured human corneal cells, rather than direct donor tissue, has been successfully implanted in a patient.”

The company said the transplant was performed Oct. 29 in one eye of a patient who was considered legally blind.

“This is a game changer. We’ve witnessed a cornea created in the lab, from living human cells, bring sight back to a human being,” said Dr. Michael Mimouni, director of the cornea unit at Rambam Medical Center in Israel, who performed the procedure.

“It was an unforgettable moment—a glimpse into a future where no one will have to live in darkness because of a shortage of donor tissue.”

Dubbed PB-001, the implant is designed to match the optical clarity, transparency and bio-mechanical properties of a native cornea. Previously tested in animal models, the company said its graft is capable of integrating with a patient’s own tissue.

The cornea—the eye’s clear outer layer covering the iris and pupil—can end up clouding a person’s vision following injuries, infections, scarring and other conditions. PB-001 is currently being tested in a single-arm phase 1 trial in Israel, which aims to enroll between 10 and 15 participants with excess fluid buildup in the cornea due to dysfunction within its inner cell layers.

Precise Bio said it plans to announce top-line results from the study in the second half of 2026, tracking six-month efficacy outcomes.

The corneas are designed to be compatible with current surgical hardware and workflows. Shipped under long-term cryopreservation, each graft is delivered preloaded on standard delivery devices and unrolls during implantation to form a natural corneal shape.

“PB-001 has the potential to offer a new, standardized solution to one of ophthalmology’s most urgent needs—reliable, safe, and effective corneal replacement,” said Anthony Atala, M.D., co-founder of Precise Bio and director of the Wake Forest Institute for Regenerative Medicine.


“The ability to produce patient-ready tissue on demand could lead the way towards reshaping transplant medicine as we know it.” (Edited from original article by Conor Hale)

The science of weight loss – and why your brain is wired to keep you fat

When you lose weight, your body reacts as if it were a threat to survival. pexels/pavel danilyuk, CC BY
Valdemar Brimnes Ingemann Johansen, University of Copenhagen and Christoffer Clemmensen, University of Copenhagen

For decades, we’ve been told that weight loss is a matter of willpower: eat less, move more. But modern science has proven this isn’t actually the case.

More on that in a moment. But first, let’s go back a few hundred thousand years to examine our early human ancestors. We can blame much of the difficulty we have with weight loss today on these distant predecessors – maybe the ultimate case of blaming the parents.

For our early ancestors, body fat was a lifeline: too little could mean starvation, too much could slow you down. Over time, the human body became remarkably good at guarding its energy reserves through complex biological defences wired into the brain. But in a world where food is everywhere and movement is optional, those same systems that once helped us survive uncertainty now make it difficult to lose weight.

When someone loses weight, the body reacts as if it were a threat to survival. Hunger hormones surge, food cravings intensify and energy expenditure drops. These adaptations evolved to optimise energy storage and usage in environments with fluctuating food availability. But today, with our easy access to cheap, calorie-dense junk food and sedentary routines, those same adaptations that once helped us to survive can cause us a few issues.

As we found in our recent research, our brains also have powerful mechanisms for defending body weight – and can sort of “remember” what that weight used to be. For our ancient ancestors, this meant that if weight was lost in hard times, their bodies would be able to “get back” to their usual weight during better times.

But for us modern humans, it means that our brains and bodies remember any excess weight gain as though our survival and lives depend upon it. So in effect, once the body has been heavier, the brain comes to treat that higher weight as the new normal – a level it feels compelled to defend.

The fact that our bodies have this capacity to “remember” our previous heavier weight helps to explain why so many people regain weight after dieting. But as the science shows, this weight regain is not due to a lack of discipline; rather, our biology is doing exactly what it evolved to do: defend against weight loss.

Hacking biology

This is where weight-loss medications such as Wegovy and Mounjaro have offered fresh hope. They work by mimicking gut hormones that tell the brain to curb appetite.

But not everyone responds well to such drugs. For some, the side effects can make them difficult to stick with, and for others, the drugs don’t seem to lead to weight loss at all. It’s also often the case that once treatment stops, biology reasserts itself – and the lost weight returns.

Advances in obesity and metabolism research may mean that it’s possible for future therapies to be able to turn down these signals that drive the body back to its original weight, even beyond the treatment period.

Research is also showing that good health isn’t the same thing as “a good weight”. As in, exercise, good sleep, balanced nutrition, and mental wellbeing can all improve heart and metabolic health, even if the number on the scales barely moves.

A whole society approach

Of course, obesity isn’t just an individual problem – it takes a society-wide approach to truly tackle the root causes. And research suggests that a number of preventative measures might make a difference – things such as investing in healthier school meals, reducing the marketing of junk food to children, designing neighbourhoods where walking and cycling are prioritised over cars, and restaurants having standardised food portions.

Scientists are also paying close attention to key early-life stages – from pregnancy to around the age of seven – when a child’s weight regulation system is particularly malleable.

Indeed, research has found that things like what parents eat, how infants are fed, and early lifestyle habits can all shape how the brain controls appetite and fat storage for years to come.

If you’re looking to lose weight, there are still things you can do – mainly by focusing less on crash diets and more on sustainable habits that support overall wellbeing. Prioritising sleep helps regulate appetite, for example, while regular activity – even walking – can improve your blood sugar levels and heart health.

The bottom line though is that obesity is not a personal failure, but rather a biological condition shaped by our brains, our genes, and the environments we live in. The good news is that advances in neuroscience and pharmacology are offering new opportunities in terms of treatments, while prevention strategies can shift the landscape for future generations.

So if you’ve struggled to lose weight and keep it off, know that you’re not alone, and it’s not your fault. The brain is a formidable opponent. But with science, medicine and smarter policies, we’re beginning to change the rules of the game.


This article was commissioned as part of a partnership collaboration between Videnskab.dk and The Conversation. You can read the Danish version of this article here.

Valdemar Brimnes Ingemann Johansen, PhD Fellow in the Faculty of Health and Medical Sciences, University of Copenhagen and Christoffer Clemmensen, Associate Professor and Group Leader, Novo Nordisk Foundation Center for Basic Metabolic Research, University of Copenhagen

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Australia leads first human trial of one-time gene editing therapy to halve bad cholesterol


IANS Photo

Melbourne, November 10 (IANS): Researchers in Australia have led a first-in-human trial for a breakthrough gene-editing therapy that halves bad cholesterol and triglycerides in people with difficult-to-treat lipid disorders.

The trial tested CTX310, a one-time CRISPR-Cas9 gene-editing therapy that uses fat-based particles to carry CRISPR editing tools into the liver, switching off the ANGPTL3 gene. Turning off this gene lowers LDL (bad) cholesterol and triglycerides, two blood fats linked to heart disease, according to a statement released Monday by Australia's Monash University.

The Victorian Heart Hospital, operated by Monash Health in partnership with Monash University, treated three of the 15 patients aged 18-75 years with difficult-to-treat lipid disorders in phase 1 of the global trial, which was conducted across Australia, New Zealand, and Britain, the statement said, according to a Xinhua news agency report.

At the highest dose, a single-course treatment with CTX310 resulted in a mean reduction in LDL cholesterol of 50 per cent and in triglycerides of 55 per cent, with levels remaining low for at least 60 days after treatment, it said. Across the various doses, reductions in LDL cholesterol and triglycerides of nearly 60 per cent were recorded among participants, with only mild, short-term side effects reported.

Importantly, CTX310 is the first therapy to achieve large reductions in both LDL cholesterol and triglycerides at the same time, marking a potential breakthrough for people with mixed lipid disorders who have elevations in both, according to the trial published in the New England Journal of Medicine.

"The possibility of a single-course treatment with lasting effects could be a major step in how we prevent heart disease," said Stephen Nicholls, Director of the Victorian Heart Hospital, and study lead investigator."It makes treatment easier, reduces ongoing costs, relieves pressure on the health system, all while improving a person's quality of life," Nicholls said, emphasising plans to focus on larger and more diverse patient populations in future trials of CTX310. Australia leads first human trial of one-time gene editing therapy to halve bad cholesterol | MorungExpress | morungexpress.com

Scientists Regrow Retina Cells to Tackle Leading Cause of Blindness Using Nanotechnology


Macular degeneration is the leading cause of blindness in developed countries, but a new treatment has succeeded in regrowing the human cells lost to the condition, taking advantage of advances in nanotechnology.

Regrowing the cells of the human retina on a scaffold of synthetic, tissue-like material showed substantial improvements over previously used materials such as cellulose, and the scientists hope they can move on to testing their method in people who are already blind.

Macular degeneration is increasing in prevalence in the developed world. It’s the leading cause of blindness and is caused by the loss of cells in a key part of the eye called the retina.

Humans have no ability to regrow retinal pigment cells, but scientists have determined how to do it in vitro using pluripotent stem cells. However, as the study authors describe, previous examples of this procedure saw scientists growing the cells on flat surfaces rather than on one resembling the retinal membrane.

This, they state, limits the effectiveness of transplanted cells.

In a study at the UK’s Nottingham Trent University, biomedical scientist Biola Egbowon and colleagues fabricated 3D scaffolds with polymer nanofibers and coated them with a steroid to reduce inflammation.

The method by which the nanofibers were made was pretty darn cool. The team squirted molten polyacrylonitrile and Jeffamine polymers through a high-voltage electric field in a technique known as “electrospinning.” The voltage caused molecular changes in the polymers that saw them solidify into a scaffold of tiny fibers that attracted water yet maintained mechanical strength.

After the scaffolding was made, it was treated with an anti-inflammatory steroid.

This pairing of materials, combined with the electrospinning, created a unique scaffold that kept the retinal pigment cells viable for 150 days outside of any potential human patient, all while showing the phenotype of biomarkers critical for maintaining retinal physiological characteristics.

“While this may indicate the potential of such cellularized scaffolds in regenerative medicine, it does not address the question of biocompatibility with human tissue,” Egbowon and colleagues caution in their paper, urging more research to be conducted, specifically regarding the orientation of the cells and whether they can maintain a good blood supply.

Blue, green, brown, or something in between – the science of eye colour explained

You’re introduced to someone and your attention catches on their eyes. They might be a rich, earthy brown, a pale blue, or the rare green that shifts with every flicker of light. Eyes have a way of holding us, of sparking recognition or curiosity before a single word is spoken. They are often the first thing we notice about someone, and sometimes the feature we remember most.

Across the world, human eyes span a wide palette. Brown is by far the most common shade, especially in Africa and Asia, while blue is most often seen in northern and eastern Europe. Green is the rarest of all, found in only about 2% of the global population. Hazel eyes add even more diversity, often appearing to shift between green and brown depending on the light.

So, what lies behind these differences?

It’s all in the melanin

The answer rests in the iris, the coloured ring of tissue that surrounds the pupil. Here, a pigment called melanin does most of the work.

Brown eyes contain a high concentration of melanin, which absorbs light and creates their darker appearance. Blue eyes contain very little melanin. Their colour doesn’t come from pigment at all but from the scattering of light within the iris, a physical effect known as the Tyndall effect, a bit like the effect that makes the sky look blue.

In blue eyes, the shorter wavelengths of light (such as blue) are scattered more effectively than longer wavelengths like red or yellow. Due to the low concentration of melanin, less light is absorbed, allowing the scattered blue light to dominate what we perceive. This blue hue results not from pigment but from the way light interacts with the eye’s structure.
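
To put rough numbers on that wavelength dependence, here is a minimal Python sketch assuming idealised Rayleigh-type scattering, where intensity scales with 1/wavelength^4. The nominal blue and red wavelengths are my own choices, and scattering in a real iris (a Tyndall effect, involving larger particles) is messier, but the blue-over-red bias works the same way.

```python
# Idealised Rayleigh-type scattering: intensity ~ 1 / wavelength^4.
# Wavelengths below are nominal values for blue and red light (assumption).
blue_nm, red_nm = 450.0, 650.0

ratio = (red_nm / blue_nm) ** 4
print(f"Blue light is scattered roughly {ratio:.1f}x more strongly than red")
# -> roughly 4.3x, which is why the light scattered back out of a
#    low-melanin iris is dominated by blue.
```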

Green eyes result from a balance, a moderate amount of melanin layered with light scattering. Hazel eyes are more complex still. Uneven melanin distribution in the iris creates a mosaic of colour that can shift depending on the surrounding ambient light.

What have genes got to do with it?

The genetics of eye colour is just as fascinating.

For a long time, scientists believed a simple “brown beats blue” model, controlled by a single gene. Research now shows the reality is much more complex. Many genes contribute to determining eye colour. This explains why children in the same family can have dramatically different eye colours, and why two blue-eyed parents can sometimes have a child with green or even light brown eyes.

Eye colour also changes over time. Many babies of European ancestry are born with blue or grey eyes because their melanin levels are still low. As pigment gradually builds up over the first few years of life, those blue eyes may shift to green or brown.

In adulthood, eye colour tends to be more stable, though small changes in appearance are common depending on lighting, clothing, or pupil size. For example, blue-grey eyes can appear very blue, very grey or even a little green depending on ambient light. More permanent shifts are rarer but can occur as people age, or in response to certain medical conditions that affect melanin in the iris.

The real curiosities

Then there are the real curiosities.

Heterochromia, where one eye is a different colour from the other, or one iris contains two distinct colours, is rare but striking. It can be genetic, the result of injury, or linked to specific health conditions. Celebrities such as Kate Bosworth and Mila Kunis are well-known examples. Musician David Bowie’s eyes appeared as different colours because of a permanently dilated pupil after an accident, giving the illusion of heterochromia.

In the end, eye colour is more than just a quirk of genetics and physics. It’s a reminder of how biology and beauty intertwine. Each iris is like a tiny universe, rings of pigment, flecks of gold, or pools of deep brown that catch the light differently every time you look.

Eyes don’t just let us see the world, they also connect us to one another. Whether blue, green, brown, or something in-between, every pair tells a story that’s utterly unique, one of heritage, individuality, and the quiet wonder of being human.

Davinia Beaver, Postdoctoral research fellow, Clem Jones Centre for Regenerative Medicine, Bond University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Scientists Define a Color Never Before Seen by Human Eyes, Called 'Olo'–a Blue-Green of Intense Saturation

Photo by Hamish on Unsplash

An experiment in human photoreceptors allowed scientists to recently define a new color, imperceptible to the unaided human eye, that lies along the blue-green spectrum but is distinct from both.

The team, who experimented on themselves and others, hope their findings could one day help improve tools for studying color blindness or lead to new technologies for creating colors in digital imagery.

“Theoretically, novel colors are possible through bypassing the constraints set by the cone spectral sensitivities…” the authors write in their abstract. “In practice, we confirm a partial expansion of colorspace toward that theoretical ideal.”

The team from University of California, Berkeley and the University of Washington used pioneering laser technology which they called “Oz” to “directly control the human eye’s photoreceptor activity via cell-by-cell light delivery.”

Color is generated in our vision through the transmission of light in cells called photoreceptors. Eye tissue contains a series of cones for this task, and the cones are labeled as L, S, or M cones.

In normal color vision, the authors explain, any light that stimulates an M cone cell must also stimulate its neighboring L and/or S cones because the M cone spectral response function lies between that of the L and S cones.

“However, Oz stimulation can by definition target light to only M cones and not L or S, which in principle would send a color signal to the brain that never occurs in natural vision,” they add.
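
To see why that constraint holds, consider a toy model of the three cone sensitivity curves. The Gaussian shapes, peak wavelengths and shared width below are simplifying assumptions of mine, not the real measured functions, but they reproduce the key fact: no single wavelength of real light can excite M cones without also exciting L and/or S.

```python
import numpy as np

# Toy Gaussian stand-ins for human cone spectral sensitivities.
# Peak wavelengths (nm) are approximately right; the Gaussian shape
# and the shared width are simplifying assumptions.
peaks = {"S": 420.0, "M": 530.0, "L": 560.0}
width = 40.0  # nm (assumed)

wavelengths = np.arange(380.0, 701.0)  # visible range, 1 nm steps

def response(cone):
    return np.exp(-((wavelengths - peaks[cone]) ** 2) / (2 * width**2))

s, m, l = response("S"), response("M"), response("L")

# For light of a single wavelength, what fraction of the total cone
# signal can come from M alone?
purity = m / (s + m + l)
best = wavelengths[np.argmax(purity)]
print(f"Max M-cone 'purity' for real light: {purity.max():.2f} at {best:.0f} nm")
# The maximum stays well below 1.0 because the M curve is sandwiched
# between L and S. Oz sidesteps this by aiming light at individual
# M cones, producing a signal mixture nature never delivers.
```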

Described as a kind of blue-green with “unprecedented saturation,” the new color, which the researchers named “olo,” was confirmed as being beyond the normal blue-green spectrum by each participant who saw it, as they needed to add substantial amounts of white for olo to fit somewhere within that spectrum.

“The Oz system represents a new experimental platform in vision science, aiming to control photoreceptor activation with great precision,” the study says.


Although the authors are confident that olo has never been seen before by humans, the spectrum of blue-green has received international attention before as a field of vision discovery.

A groundbreaking study of the Himba people in Namibia, conducted in 2005 and published in a journal of the American Psychological Association, demonstrated that these traditional landowners seemed to perceive various colors as the same because they used the same word for them. A grouping of colors that Westerners would separate into pink, red, and orange is all serandu to them.

That was only half of the cause for fascination with the study. The other half came from the Himba people’s unbelievable sensitivity to the blue-green spectrum, such that they could reliably pick out the faintest differences in green that Western viewers, by comparison, missed.

This also corresponded with more words for shades of green which Westerners would never bother specifying. In fact, the Himba had a harder time pointing out that a blue square was different from green squares when shown a chart, but could reliably select the square of a slightly different shade of green to the rest.

But then it got even stranger. Further studies in the following years included genetic testing on the Himba, and it showed they possess an increased number of cone cells in their eyes. This higher density of cones enables them to perceive more shades and nuances of color than the average person, according to the lead author of the genetic research.

Discovery of Genetically-Varied Worms in Chernobyl Could Help Human Cancer Research

Worms collected in the Chornobyl Exclusion Zone – SWNS / New York University

The 1986 disaster at the Chernobyl nuclear power plant transformed the surrounding area into the most radioactive landscape on Earth, and now the discovery of a worm that seems to be right at home in the rads is believed to be a boon for human cancer research.

Though humans were evacuated after the meltdown of Reactor 4, many plants and animals continued to live in the region, despite the high levels of radiation that have persisted to our time.

In recent years, researchers have found that some animals living in the Chernobyl Exclusion Zone are physically and genetically different from their counterparts elsewhere, raising questions about the impact of chronic radiation on DNA.

In particular, a new study led by researchers at New York University finds that exposure to chronic radiation from Chernobyl has not damaged the genomes of microscopic worms living there today, and the team suggests the invertebrates have become exceptionally resilient.

The finding could offer clues as to why humans with a genetic predisposition to cancer develop the disease, while others do not.

“Chernobyl was a tragedy of incomprehensible scale, but we still don’t have a great grasp on the effects of the disaster on local populations,” said Sophia Tintori, a postdoctoral associate in the Department of Biology at NYU and the first author of the study, published in the Proceedings of the National Academy of Sciences.

“Did the sudden environmental shift select for species, or even individuals within a species, that are naturally more resistant to ionizing radiation?”

Tintori and her colleagues turned to nematodes, tiny worms with simple genomes and rapid reproduction, which makes them particularly useful for understanding basic biological phenomena.

“These worms live everywhere, and they live quickly, so they go through dozens of generations of evolution while a typical vertebrate is still putting on its shoes,” said Matthew Rockman, a professor of biology at NYU and the study’s senior author.

“I had seen footage of the Exclusion Zone and was surprised by how lush and overgrown it looked—I’d never thought of it as teeming with life,” added Tintori. “If I want to find worms that are particularly tolerant to radiation exposure, this is a landscape that might have already selected for that.”

In collaboration with scientists in Ukraine and U.S. colleagues, including biologist Timothy Mousseau of the University of South Carolina, who studies the effects of radiation from the Chernobyl and Fukushima disasters, Tintori and Rockman visited the Chernobyl Exclusion Zone in 2019 to see if chronic radiation has had a detectable impact on the region’s worms.

With Geiger counters in hand to measure local levels of radiation and personal protective gear to guard against radioactive dust, they gathered worms from samples of soil, rotting fruit, and other organic material.
The ruins of Reactor 4, Chernobyl Exclusion Zone – credit: Matt Shalvatis, CC BY-SA 4.0

Worms were collected from locations throughout the zone with different amounts of radiation, ranging from low levels on par with New York City (negligibly radioactive) to high-radiation sites on par with outer space (dangerous for humans, but unclear whether dangerous to worms).

After collecting samples in the field, the team brought them to Mousseau’s field lab in a former residential home in Chernobyl, where they separated hundreds of nematodes from the soil or fruit. From there, they headed to a Kyiv hotel where, using travel microscopes, they isolated and established cultures from each worm.

Back in the lab at NYU, the researchers continued studying the worms by freezing them.

“We can cryopreserve worms, and then thaw them for study later. That means that we can stop evolution from happening in the lab, something impossible with most other animal models, and very valuable when we want to compare animals that have experienced different evolutionary histories,” said Rockman.

They focused their analyses on 15 worms of a nematode species called Oscheius tipulae, which has been used in genetic and evolutionary studies. They sequenced the genomes of the 15 O. tipulae worms from Chernobyl and compared them with the genomes of five O. tipulae from other parts of the world.

The researchers were surprised to find that, using several different analyses, they could not detect a signature of radiation damage on the genomes of the worms from Chernobyl.

“This doesn’t mean that Chernobyl is safe—it more likely means that nematodes are really resilient animals and can withstand extreme conditions,” noted Tintori. “We also don’t know how long each of the worms we collected was in the Zone, so we can’t be sure exactly what level of exposure each worm and its ancestors received over the past four decades.”

Wondering whether the lack of genetic signature was because the worms living in Chernobyl are unusually effective at protecting or repairing their DNA, the researchers designed a system to compare how quickly populations of worms grow and used it to measure how sensitive the descendants of each of the 20 genetically distinct worms were to different types of DNA damage.

The surprise in this story is that while the lineages of worms differed from each other in how well they tolerated DNA damage, these differences didn’t correspond to the levels of radiation at each collection site. In other words, unlike the origin stories of several superheroes, radiation exposure doesn’t seem to create super worms, just as it can’t turn you or me into Spiderman or the Hulk.

Instead, the team’s findings suggest that worms from Chernobyl are not necessarily more tolerant of radiation, and that the radioactive landscape has not forced them to evolve.

The results give researchers clues into how DNA repair can vary from individual to individual—and despite the genetic simplicity of O. tipulae, could lead to a better understanding of natural variation in humans.

“Now that we know which strains of O. tipulae are more sensitive or more tolerant to DNA damage, we can use these strains to study why different individuals are more likely than others to suffer the effects of carcinogens,” said Tintori.

How different individuals in a species respond to DNA damage is top of mind for cancer researchers seeking to understand why some humans with a genetic predisposition to cancer develop the disease, while others do not.

“Thinking about how individuals respond differently to DNA-damaging agents in the environment is something that will help us have a clear vision of our own risk factors,” added Tintori.

This Common Fungus Found on Human Skin Wipes Out Deadly Superbug Staph Infections


University of Oregon researchers have uncovered a molecule produced by yeast living on human skin that showed potent antimicrobial properties against a pathogen responsible for a half-million hospitalizations annually in the US.

It’s a unique approach to tackling the growing problem of antibiotic-resistant bacteria. With the global threat of drug-resistant infections, fungi inhabiting human skin are an untapped resource for identifying new antibiotics, said Caitlin Kowalski, a postdoctoral researcher at the UO who led the study.

Described in a paper published last month in Current Biology, the common skin fungus Malassezia gobbles up oil and fats on human skin to produce fatty acids that selectively eliminate Staphylococcus aureus.

One out of every three people has Staphylococcus aureus harmlessly dwelling in their nose, but the bacteria are a risk factor for serious infections when given the opportunity: open wounds, abrasions and cuts. They’re the primary cause of skin and soft tissue infections known as staph infections.

Staphylococcus aureus is also a hospital superbug notorious for being resistant to current antibiotics, elevating the pressing need for new medicines.

There are lots of studies that identify new antibiotic structures, Kowalski said, “but what was fun and interesting about ours is that we identified (a compound) that is well-known and that people have studied before.”

The compound is not toxic in normal lab conditions, but it can be potent in conditions that replicate the acidic environment of healthy skin. “I think that’s why in some cases we may have missed these kinds of antimicrobial mechanisms,” Kowalski added, “because the pH in the lab wasn’t low enough. But human skin is really acidic.”

Humans play host to a colossal array of microorganisms, known as the microbiome, but we know little about our resident fungi and their contributions to human health, Kowalski said. The skin microbiome is of special interest to her because while other body parts crowd dozens of different fungi, the skin is dominantly colonized by one kind known as Malassezia.

Malassezia can be associated with cases of dandruff and eczema, but it’s considered relatively harmless and a normal part of skin flora. The yeast has evolved to live on mammalian skin, so much so that it can’t make fatty acids without the lipids—oils and fats—secreted by skin.

Despite the abundance of Malassezia found on us, they remain understudied, Kowalski said.

“The skin is a parallel system to what’s happening in the gut, which is really well-studied,” she said in a media release. “We know that the intestinal microbiome can modify host compounds and make their own unique compounds that have new functions. Skin is lipid-rich, and the skin microbiome processes these lipids to also produce bioactive compounds. So what does this mean for skin health and diseases?”

Looking at human skin samples from healthy donors and experiments done with skin cells in the lab, Kowalski found that the fungal species Malassezia sympodialis transformed host lipids into antibacterial hydroxy fatty acids. Fatty acids have various functions in cells but are notably the building blocks for cell membranes.

The hydroxy fatty acids synthesized by Malassezia sympodialis were detergent-like, destroying the membranes of Staphylococcus aureus and causing its internal contents to leak away. The attack prevented the colonization of Staphylococcus aureus on the skin and ultimately killed the bacteria in as little as 15 minutes, Kowalski said.

But the fungus isn’t a magic bullet. After enough exposure, the staph bacteria eventually became tolerant to the fungus, as they do when clinical antibiotics are overused.

Looking at their genetics, the researchers found that the bacteria evolved a mutation in the Rel gene, which activates the bacterial stress response. Similar mutations have been previously identified in patients with Staphylococcus aureus infections.

The findings show that a bacteria’s host environment and interactions with other microbes can influence its susceptibility to antibiotics.

“There’s growing interest in applying microbes as a therapeutic, such as adding bacteria to prevent the growth of a pathogen,” Kowalski said. “But it can have consequences that we have not yet fully understood. Even though we know antibiotics lead to the evolution of resistance, it hasn’t been considered when we think about the application of microbes as a therapeutic.”

While the discovery adds a layer of complexity for drug discovery, Kowalski said she is excited about the potential of resident fungi as a new source for future antibiotics.

Identifying the antimicrobial fatty acids took three years and a cross-disciplinary effort. Kowalski collaborated with chemical microbiologists at McMaster University to track down the compound.

“It was like finding a needle in a haystack but with molecules you can’t see,” said Kowalski’s adviser, Matthew Barber, an associate professor of biology in the College of Arts and Sciences at the UO.

Kowalski is working on a follow-up study that goes deeper into the genetic mechanisms that led to the antibiotic tolerance. She is also preparing to launch her own lab to further investigate the overlooked role of the skin microbiome, parting from Barber’s lab after bringing fungi into focus.

“Antibiotic-resistant bacterial infections are a major human health threat and one that, in some ways, is getting worse,” Barber said. “We still have a lot of work to do in understanding the microorganisms but also finding new ways that we can possibly treat or prevent those infections.” [Source: By Leila Okahata, University of Oregon]

Scientists use AI to reveal the neural dynamics of human conversation


New York, (IANS): By combining artificial intelligence (AI) with electrical recordings of brain activity, researchers have been able to track the language exchanged during conversations and the corresponding neural activity in different brain regions, according to a new study.

The team from Department of Neurosurgery at Massachusetts General Hospital in the US investigated how our brains process language during real-life conversations.

“Specifically, we wanted to understand which brain regions become active when we're speaking and listening, and how these patterns relate to the specific words and context of the conversation,” said lead author Jing Cai in a paper published in Nature Communications.

They employed AI to take a closer look at how our brains handle the back-and-forth of real conversations. The team combined advanced AI, specifically language models like those behind ChatGPT, with neural recordings using electrodes placed within the brain.

This allowed them to simultaneously track the linguistic features of conversations and the corresponding neural activity in different brain regions.

“By analysing these synchronised data streams, we could map how specific aspects of language–like the words being spoken and the conversational context–were represented in the dynamic patterns of brain activity during conversation,” said Cai.
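
As a rough sketch of what “mapping” language features onto neural activity can look like in code (not the study’s actual pipeline), the snippet below fits a ridge-regression encoding model from placeholder word embeddings to placeholder multi-channel brain recordings, then scores it on held-out words. Every array here is synthetic; the real work used embeddings from GPT-style language models and intracranial recordings.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_words, n_features, n_channels = 500, 64, 16

# Placeholder "word embedding" features, one row per spoken/heard word.
X = rng.normal(size=(n_words, n_features))
# Synthetic "neural activity": a hidden linear map plus noise.
hidden_map = rng.normal(size=(n_features, n_channels))
Y = X @ hidden_map + rng.normal(scale=2.0, size=(n_words, n_channels))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)

# Ridge regression is a common choice for encoding models because the
# feature dimension is large relative to the available data.
model = Ridge(alpha=1.0).fit(X_tr, Y_tr)
print(f"Held-out R^2: {model.score(X_te, Y_te):.2f}")
# Channels (electrodes) whose activity is well predicted by the language
# features are the ones said to "track" the conversation.
```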

They found that both speaking and listening during a conversation engage a widespread network of brain areas in the frontal and temporal lobes.

What's interesting is that these brain activity patterns are highly specific, changing depending on the exact words being used, the context and order of those words.

“We also observed that some brain regions are active during both speaking and listening, suggesting a partially shared neural basis for these processes. Finally, we identified specific shifts in brain activity that occur when people switch from listening to speaking during a conversation,” said the authors.

The findings offer significant insights into how the brain pulls off the seemingly effortless feat of conversation.

An AI system has reached human level on a test for ‘general intelligence’. Here’s what that means

A new artificial intelligence (AI) model has just achieved human-level results on a test designed to measure “general intelligence”.

On December 20, OpenAI’s o3 system scored 85% on the ARC-AGI benchmark, well above the previous AI best score of 55% and on par with the average human score. It also scored well on a very difficult mathematics test.

Creating artificial general intelligence, or AGI, is the stated goal of all the major AI research labs. At first glance, OpenAI appears to have at least made a significant step towards this goal.

While scepticism remains, many AI researchers and developers feel something just changed. For many, the prospect of AGI now seems more real, urgent and closer than anticipated. Are they right?

Generalisation and intelligence

To understand what the o3 result means, you need to understand what the ARC-AGI test is all about. In technical terms, it’s a test of an AI system’s “sample efficiency” in adapting to something new – how many examples of a novel situation the system needs to see to figure out how it works.

An AI system like ChatGPT (GPT-4) is not very sample efficient. It was “trained” on millions of examples of human text, constructing probabilistic “rules” about which combinations of words are most likely.

The result is pretty good at common tasks. It is bad at uncommon tasks, because it has less data (fewer samples) about those tasks.

Until AI systems can learn from small numbers of examples and adapt with more sample efficiency, they will only be used for very repetitive jobs and ones where the occasional failure is tolerable.

The ability to accurately solve previously unknown or novel problems from limited samples of data is known as the capacity to generalise. It is widely considered a necessary, even fundamental, element of intelligence.

Grids and patterns

The ARC-AGI benchmark tests for sample efficient adaptation using little grid square problems like the one below. The AI needs to figure out the pattern that turns the grid on the left into the grid on the right.

Each question gives three examples to learn from. The AI system then needs to figure out the rules that “generalise” from the three examples to the fourth.

These are a lot like the IQ tests you might remember from school.

Weak rules and adaptation

We don’t know exactly how OpenAI has done it, but the results suggest the o3 model is highly adaptable. From just a few examples, it finds rules that can be generalised.

To figure out a pattern, we shouldn’t make any unnecessary assumptions, or be more specific than we really have to be. In theory, if you can identify the “weakest” rules that do what you want, then you have maximised your ability to adapt to new situations.

What do we mean by the weakest rules? The technical definition is complicated, but weaker rules are usually ones that can be described in simpler statements.

In the example above, a plain English expression of the rule might be something like: “Any shape with a protruding line will move to the end of that line and ‘cover up’ any other shapes it overlaps with.”

Searching chains of thought?

While we don’t know how OpenAI achieved this result just yet, it seems unlikely they deliberately optimised the o3 system to find weak rules. However, to succeed at the ARC-AGI tasks it must be finding them.

We do know that OpenAI started with a general-purpose version of the o3 model (which differs from most other models, because it can spend more time “thinking” about difficult questions) and then trained it specifically for the ARC-AGI test.

French AI researcher Francois Chollet, who designed the benchmark, believes o3 searches through different “chains of thought” describing steps to solve the task. It would then choose the “best” according to some loosely defined rule, or “heuristic”.

This would be “not dissimilar” to how Google’s AlphaGo system searched through different possible sequences of moves to beat the world Go champion.

You can think of these chains of thought like programs that fit the examples. Of course, if it is like the Go-playing AI, then it needs a heuristic, or loose rule, to decide which program is best.

There could be thousands of different seemingly equally valid programs generated. That heuristic could be “choose the weakest” or “choose the simplest”.
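
Here is a toy Python sketch of that selection step, choosing the simplest candidate rule that reproduces all the training examples. The grids, the two candidate rules and the crude complexity measure are all invented for illustration; nothing below reflects how o3 actually works internally.

```python
# Toy "program search": keep the candidate rules that fit every training
# pair, then pick the simplest. All grids and rules are invented examples.

train_pairs = [
    ([[1, 0], [0, 0]], [[0, 1], [0, 0]]),
    ([[0, 0], [1, 0]], [[0, 0], [0, 1]]),
]

def mirror(grid):
    """Rule A: flip each row left-to-right."""
    return [row[::-1] for row in grid]

def mirror_and_invert(grid):
    """Rule B: flip each row AND swap 0s and 1s (a stronger assumption)."""
    return [[1 - v for v in row[::-1]] for row in grid]

candidates = [mirror, mirror_and_invert]

def fits(rule):
    return all(rule(x) == y for x, y in train_pairs)

# Crude stand-in for description length: longer rule names count as more
# complex. A real system would score actual program size.
viable = [r for r in candidates if fits(r)]
best = min(viable, key=lambda r: len(r.__name__))
print(best.__name__)  # -> 'mirror', the weakest rule that explains the data
```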

However, if it is like AlphaGo, then the heuristic itself may simply have been created by an AI. That was the process for AlphaGo: Google trained a model to rate different sequences of moves as better or worse than others.

What we still don’t know

The question then is, is this really closer to AGI? If that is how o3 works, then the underlying model might not be much better than previous models.

The concepts the model learns from language might not be any more suitable for generalisation than before. Instead, we may just be seeing a more generalisable “chain of thought” found through the extra steps of training a heuristic specialised to this test. The proof, as always, will be in the pudding.

Almost everything about o3 remains unknown. OpenAI has limited disclosure to a few media presentations and early testing to a handful of researchers, laboratories and AI safety institutions.

Truly understanding the potential of o3 will require extensive work, including evaluations, an understanding of the distribution of its capacities, how often it fails and how often it succeeds.

When o3 is finally released, we’ll have a much better idea of whether it is approximately as adaptable as an average human.

If so, it could have a huge, revolutionary, economic impact, ushering in a new era of self-improving accelerated intelligence. We will require new benchmarks for AGI itself and serious consideration of how it ought to be governed.

If not, then this will still be an impressive result. However, everyday life will remain much the same.

Michael Timothy Bennett, PhD Student, School of Computing, Australian National University and Elija Perrier, Research Fellow, Stanford Center for Responsible Quantum Technology, Stanford University

This article is republished from The Conversation under a Creative Commons license. Read the original article.


Is the bird flu virus inching closer to humans?

New Delhi, April 29 (IANS) While there is no record to date of sustained human-to-human bird flu transmission, the recent virus mutations show it may be inching closer to humans, according to health experts on Monday.

The bird flu or avian influenza A (H5N1) virus outbreak in poultry farms is not a new occurrence. It has periodically been reported all around the world, including poultry farms in parts of India.

Migrating wild birds bring the virus to poultry farms. However, in recent years, this bird flu virus H5N1 has jumped to mammals.

In 2023, the H5N1 virus killed a record number of birds and also spread to otters, sea lions, foxes, dolphins, and seals, among others. More recently it also affected numerous cattle farms across the US. Health officials in the US found fragments of bird virus in pasteurised milk sold in stores, including in about 20 per cent of samples in initial testing across the country.

"This shows that the H5N1 bird flu virus has now adapted for circulating among mammals. It is now able to easily spread from mammal to mammal, rather than having to jump each time from bird to mammal. This shows the virus has made suitable adaptations already. And bird flu virus has moved one step closer to humans," Dr Rajeev Jayadevan, co-chairman of the Indian Medical Association’s National Covid-19 Task Force, told IANS.

Importantly, "there is no record to date of sustained human-to-human transmission. This can only occur if the virus makes more adaptations by mutating. The concern now is the virus has found a new host among cattle, which is always in contact with man," he added.

Can bird flu infect humans?

Bird flu -- a common phenomenon seen in India -- raised infection concerns among humans in Jharkhand's Ranchi last week. Two doctors and six staff members of the Regional Poultry Farm in Hotwar were quarantined for two days. However, their throat swab samples, sent for testing on April 27, were found to be negative.

According to data from the World Health Organisation, from 2003 to 2023, a total of 873 human cases of infection with influenza A (H5N1) and 458 deaths have been reported globally from 21 countries. However, to date, no sustained human-to-human transmission has been detected.

"Human infection due to avian influenza happens only with close contact with infected animals. Although the risk for human infection is rare, such occurrences come with a high mortality rate," biologist Vinod Scaria, told IANS.

The high mortality rate is because "humans have no prior immune memory for this particular type of influenza virus", said Dr Jayadevan.

The WHO believes that available epidemiological and virological evidence does not indicate that current bird flu viruses have acquired the ability of sustained transmission among humans. However, the recent episode of transmission to cattle, where it has reportedly affected one human, has raised fresh concerns.

Genomic analysis suggests that it has silently been spreading among the cattle for months - since December or January.

"Scientists are worried whether the virus will now make further adaptations where it can not only easily infect man, but also spread from man to man, in which case it could become a major catastrophic event. We hope it will not happen," Dr Jayadevan told IANS.

The WHO advises people in close contact with cattle and poultry to regularly wash hands and employ good food safety and food hygiene practices, to pasteurise milk, and to get vaccinated against seasonal human flu, to reduce the risk that H5N1 could recombine with a human influenza virus.

"Appropriate personal protection while handling infected birds, dead birds or excreta is very important, and awareness of this among the public is important," Scaria told IANS.

The first pig kidney has been transplanted into a living person. But we’re still a long way from solving organ shortages

In a world first, we heard last week that US surgeons had transplanted a kidney from a gene-edited pig into a living human. News reports said the procedure was a breakthrough in xenotransplantation – when an organ, cells or tissues are transplanted from one species to another.

Champions of xenotransplantation regard it as the solution to organ shortages across the world. In December 2023, 1,445 people in Australia were on the waiting list for donor kidneys. In the United States, more than 89,000 are waiting for kidneys.

One biotech CEO says gene-edited pigs promise “an unlimited supply of transplantable organs”.

Not everyone, though, is convinced that transplanting animal organs into humans is really the answer to organ shortages, or even that it's right to use organs from other animals this way.

There are two critical barriers to the procedure’s success: organ rejection and the transmission of animal viruses to recipients.

But in the past decade, a new platform and technique known as CRISPR/Cas9 – often shortened to CRISPR – has promised to mitigate these issues.

What is CRISPR?

CRISPR gene editing takes advantage of a system already found in nature. CRISPR’s “genetic scissors” evolved in bacteria and other microbes to help them fend off viruses. Their cellular machinery allows them to integrate and ultimately destroy viral DNA by cutting it.

In 2012, two teams of scientists discovered how to harness this bacterial immune system, which is made up of repeating arrays of DNA and associated “Cas” (CRISPR-associated) proteins.

When they used a particular Cas protein (Cas9) with a “guide RNA” made up of a single molecule, they found they could program the CRISPR/Cas9 complex to break and repair DNA at precise locations as they desired. The system could even “knock in” new genes at the repair site.
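
As a small illustration of how that targeting is handled in practice, guide-design software typically scans DNA for a 20-nucleotide “protospacer” immediately followed by an “NGG” PAM motif, which Cas9 requires next to its cut site. The sketch below implements only that pattern-matching core on a made-up sequence; real tools also score GC content, off-target matches and more.

```python
import re

# A made-up DNA string for illustration only.
dna = "ATGCGTACGTTAGCCGATCGATCGGATCCGGTACGATCGTAGCTAGCGGTTAGC"

def find_guides(seq, guide_len=20):
    """Return (position, guide, PAM) for every NGG PAM on the + strand."""
    guides = []
    # Zero-width lookahead so overlapping PAM sites are all found.
    for m in re.finditer(r"(?=([ACGT]GG))", seq):
        pam_start = m.start(1)
        if pam_start >= guide_len:  # need room for a full 20-nt guide
            guide = seq[pam_start - guide_len:pam_start]
            guides.append((pam_start - guide_len, guide, m.group(1)))
    return guides

for pos, guide, pam in find_guides(dna):
    print(f"pos {pos:2d}  guide {guide}  PAM {pam}")
```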

In 2020, the two scientists leading these teams were awarded a Nobel prize for their work.

In the case of the latest xenotransplantation, CRISPR technology was used to edit 69 genes in the donor pig to inactivate viral genes, “humanise” the pig with human genes, and knock out harmful pig genes.

A busy time for gene-edited xenotransplantation

While CRISPR editing has brought new hope to the possibility of xenotransplantation, even recent trials show great caution is still warranted.

In 2022 and 2023, two patients with terminal heart diseases, who were ineligible for traditional heart transplants, were granted regulatory permission to receive a gene-edited pig heart. These pig hearts had ten genome edits to make them more suitable for transplanting into humans. However, both patients died within several weeks of the procedures.

Earlier this month, we heard a team of surgeons in China transplanted a gene-edited pig liver into a clinically dead man (with family consent). The liver functioned well up until the ten-day limit of the trial.

How is this latest example different?

The gene-edited pig kidney was transplanted into a relatively young, living, legally competent and consenting adult.

The total number of gene edits made to the donor pig is very high. The researchers report making 69 edits to inactivate viral genes, “humanise” the pig with human genes, and knock out harmful pig genes.

Clearly, the race to transform these organs into viable products for transplantation is ramping up.

From biotech dream to clinical reality

Only a few months ago, CRISPR gene editing made its debut in mainstream medicine.

In November, drug regulators in the United Kingdom and US approved the world’s first CRISPR-based genome-editing therapy for human use – a treatment for life-threatening forms of sickle-cell disease.

The treatment, known as Casgevy, uses CRISPR/Cas-9 to edit the patient’s own blood (bone-marrow) stem cells. By disrupting the unhealthy gene that gives red blood cells their “sickle” shape, the aim is to produce red blood cells with a healthy spherical shape.

Although the treatment uses the patient’s own cells, the same underlying principle applies to recent clinical xenotransplants: unsuitable cellular materials may be edited to make them therapeutically beneficial in the patient.

We’ll be talking more about gene-editing

Medicine and gene technology regulators are increasingly asked to approve new experimental trials using gene editing and CRISPR.

However, neither xenotransplantation nor the therapeutic applications of this technology lead to changes to the genome that can be inherited.

For this to occur, CRISPR edits would need to be applied to the cells at the earliest stages of their life, such as to early-stage embryonic cells in vitro (in the lab).

In Australia, intentionally creating heritable alterations to the human genome is a criminal offence carrying 15 years’ imprisonment.

No jurisdiction in the world has laws that expressly permits heritable human genome editing. However, some countries lack specific regulations about the procedure.

Is this the future?

Even without creating inheritable gene changes, however, xenotransplantation using CRISPR is in its infancy.

For all the promise of the headlines, there is not yet one example of a stable xenotransplantation in a living human lasting beyond seven months.

While authorisation for this recent US transplant has been granted under the so-called “compassionate use” exemption, conventional clinical trials of pig-human xenotransplantation have yet to commence.

But the prospect of such trials would likely require significant improvements in current outcomes to gain regulatory approval in the US or elsewhere.

By the same token, regulatory approval of any “off-the-shelf” xenotransplantation organs, including gene-edited kidneys, would seem some way off.

Christopher Rudge, Law lecturer, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.


What is a sonar pulse and how can it injure humans under water?

Christine Erbe, Curtin University

Over the weekend, the Australian government revealed that last Tuesday its navy divers had sustained “minor injuries”, likely due to sonar pulses from a Chinese navy vessel.

The divers had been clearing fishing nets from the propellers of HMAS Toowoomba while in international waters off the coast of Japan. According to a statement from deputy prime minister Richard Marles, despite HMAS Toowoomba communicating with internationally recognised signals, the Chinese vessel approached the Australian ship and turned on its sonar, forcing the Australian divers to exit the water.

The Australian government responded by labelling the incident “unsafe and unprofessional”. But what exactly is a sonar pulse, and what kinds of injuries can sonar cause to divers?

What is sonar?

Light doesn’t travel well under water – even in clear waters, you can see perhaps some tens of metres. Sound, however, travels very well and far under water, because water is much denser than air and transmits acoustic pressure waves – sound waves – far more efficiently.

Because of these properties, ships use sonar to navigate through the ocean and to “see” under water. The word “sonar” stands for sound navigation and ranging.

Sonar equipment sends out short acoustic (sound) pulses or pings, and then analyses the echoes. Depending on the timing, amplitude, phase and direction of the echoes the equipment receives, you can tell what’s under water – the seafloor, canyon walls, coral, fishes, and of course ships and submarines.
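To make the “ranging” part concrete: sound travels through seawater at roughly 1,500 metres per second, so the distance to a target follows directly from how long its echo takes to return. The following is a minimal sketch, not any particular sonar system’s logic; the sound speed and the example timing are assumed, illustrative values:

```python
# Range from a sonar echo: the ping travels out to the target and back,
# so distance = sound speed * two-way travel time / 2.
SOUND_SPEED_SEAWATER = 1500.0  # m/s; varies with temperature, salinity, depth

def range_from_echo(two_way_travel_time_s: float) -> float:
    """Estimated distance to the reflecting object, in metres."""
    return SOUND_SPEED_SEAWATER * two_way_travel_time_s / 2.0

# An echo arriving 0.4 seconds after the ping implies a target ~300 m away.
print(range_from_echo(0.4))  # 300.0
```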

Most vessels – from small, private boats to large commercial tankers – use sonar. However, compared with an off-the-shelf fish-finding sonar, navy sonars are far more powerful.


What are the effects of sonar on divers?

This is a difficult topic to study, because you don’t want to deliberately expose humans to harmful levels of sound. There are, however, anecdotes from various navies and accidental exposures. There have also been studies on what humans can hear under water, with or without neoprene suits, hoods, or helmets.

We don’t hear well under water – no surprise, since we’ve evolved to live on land. Having said that, you would hear a sonar sound under water (a mid-to-high pitch noise) and would know you’ve been exposed.

When it comes to naval sonars, human divers have rated the sound as “unpleasant to severe” at levels of roughly 150dB re 1 µPa (decibels relative to a reference pressure of one micropascal, the standard reference for underwater sound). This would be, very roughly, about 10km from a military sonar. Note that we can’t compare sound exposure under water to what we’d receive through the air, because there are too many physical differences between the two.

Human tolerance limits are roughly 180dB re 1 µPa, which would be around 500m from military sonar. At such levels, humans might experience dizziness, disorientation, temporary memory and concentration impacts, or temporary hearing loss. We don’t have information on what levels the Australian divers were exposed to, but their injuries were described as minor.
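These rough figures hang together under a simple spherical-spreading model, in which the received level falls by 20·log10 of the range. The sketch below assumes a hypothetical source level of 235dB re 1 µPa at 1m; that number is purely illustrative, and real naval sonar levels and propagation conditions vary widely:

```python
import math

# Received level under simple spherical spreading: RL = SL - 20*log10(r).
# Real propagation also involves absorption, refraction and reflections,
# so treat this as a first-order illustration only.
def received_level(source_level_db: float, range_m: float) -> float:
    return source_level_db - 20.0 * math.log10(range_m)

SOURCE_LEVEL = 235.0  # dB re 1 µPa at 1 m; an assumed, illustrative figure

for r in (500, 10_000):
    print(f"{r:>6} m: {received_level(SOURCE_LEVEL, r):.0f} dB re 1 µPa")
# ~181 dB at 500 m and ~155 dB at 10 km, broadly consistent with the
# rough figures quoted above.
```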

At higher received levels, closer ranges, or longer exposures, you might see more severe physiological or health impacts. In extreme cases, in particular for impulsive, sudden sound (which sonar is not), sound can cause damage to tissues and organs.

What does sonar do to marine animals?

Some of the information on what noise might do to humans under water comes from studies and observations of animals.

While they typically don’t have outer ears (except for sea lions), marine mammals have inner ears that function similarly to ours. They can receive hearing damage from noise, just like we do. This might be temporary, like the ringing in your ears or reduced sensitivity you might experience after a loud concert, or it can be permanent.

Marine mammals living in a dark ocean rely on sound and hearing to a greater extent than your average human. They use sound to navigate, hunt, communicate with each other and to find mates. Toothed whales and dolphins have evolved a biological echo sounder, or biosonar, which sends out a series of clicks and listens for echoes. So, interfering with their sounds or impacting their hearing can disrupt critical behaviours.

Finally, sound may also impact non-mammalian fauna, such as fishes, which rely on acoustics rather than vision for many of their life functions.

Christine Erbe, Director, Centre for Marine Science & Technology, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More........

How consciousness may rely on brain cells acting collectively – new psychedelics research on rats

Psychedelics can help uncover consciousness. agsandrew/Shutterstock

Pär Halje, Lund University
Psychedelics are known for inducing altered states of consciousness in humans by fundamentally changing our normal pattern of sensory perception, thought and emotion. Research into the therapeutic potential of psychedelics has increased significantly in the last decade.

While this research is important, I have always been more intrigued by the idea that psychedelics can be used as a tool to study the neural basis of human consciousness in laboratory animals. We ultimately share the same basic neural hardware with other mammals, and possibly some basic aspects of consciousness, too. So by examining what happens in the brain when there’s a psychedelically induced change in conscious experience, we can perhaps glean insights into what consciousness is in the first place.

We still don’t know a lot about how the networks of cells in the brain enable conscious experience. The dominating view is that consciousness somehow emerges as a collective phenomenon when the dispersed information processing of individual neurons (brain cells) is integrated as the cells interact. But the mechanism by which this is supposed to happen remains unclear. Now our study on rats, published in Communications Biology, suggests that psychedelics radically change the way that neurons interact and behave collectively.

Our study compared two different classes of psychedelics in rats: the classic LSD type and the less typical ketamine type (ketamine is an anaesthetic in larger doses). Both classes are known to induce psychedelic experiences in humans, despite acting on different receptors in the brain.

Exploring brain waves

We used electrodes to simultaneously measure electrical activity from 128 separate areas of the brain of nine awake rats while they were given psychedelics. The electrodes could pick up two kinds of signals: electrical brain waves caused by the cumulative activity in thousands of neurons, and smaller transient electrical pulses, called action potentials, from individual neurons.

The classic psychedelics, such as LSD and psilocybin (the active ingredient in magic mushrooms), activate a receptor in the brain (5-HT2A) which normally binds to serotonin, a neurotransmitter that regulates mood and many other things. Ketamine, on the other hand, works by inhibiting another receptor (NMDA), which is normally activated by glutamate, the primary neurotransmitter in the brain for making neurons fire.

We speculated that, despite these differences, the two classes of psychedelics might have similar effects on the activity of brain cells. Indeed, it turned out that both drug classes induced a very similar and distinctive pattern of brain waves in multiple brain regions. The brain waves were unusually fast, oscillating about 150 times per second. They were also surprisingly synchronised between different brain regions. Short bursts of oscillations at a similar frequency are known to occur occasionally under normal conditions in some brain regions.
Brain waves on an electroencephalogram (EEG). Chaikom/Shutterstock
But in this case, it occurred for prolonged durations.

First, we assumed that a single brain structure was generating the wave and that it then spread to other locations. But the data was not consistent with that scenario. Instead, we saw that the waves went up and down almost simultaneously in all parts of the brain where we could detect them – a phenomenon called phase synchronisation. Such tight phase synchronisation over such long distances has, to our knowledge, never been observed before.

We were also able to measure action potentials from individual neurons during the psychedelic state. Action potentials are electrical pulses, no longer than a thousandth of a second, that are generated by the opening and closing of ion channels in the cell membrane. Action potentials are the primary way that neurons influence each other. Consequently, they are considered to be the main carrier of information in the brain.

However, the action potential activity caused by LSD and ketamine differed significantly, so it could not be directly linked to the general psychedelic state. For LSD, neurons were inhibited – meaning they fired fewer action potentials – in all parts of the brain. For ketamine, the effect depended on cell type: certain large neurons were inhibited, while a type of smaller, locally connecting neuron fired more.

Therefore, it is probably the synchronised wave phenomenon – how the neurons behave collectively – that is most strongly linked to the psychedelic state. Mechanistically, this makes some sense. It is likely that this type of increased synchrony has large effects on the integration of information across the neural systems that normal perception and cognition rely on.

I think this possible link between neuron-level system dynamics and consciousness is fascinating. It suggests that consciousness relies on a coupled collective state rather than on the activity of individual neurons – it is greater than the sum of its parts.

That said, the link is still highly speculative at this point, because the phenomenon has not yet been observed in human brains. One should also be cautious when extrapolating human experiences to other animals – it is of course impossible to know exactly what aspects of a trip we share with our rodent relatives. But when it comes to cracking the deep mystery of consciousness, every bit of information is valuable.
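To make “phase synchronisation” concrete: one standard way to quantify it between two recording sites is the phase-locking value (PLV), computed from the instantaneous phases of band-passed signals. The sketch below is not the authors’ analysis pipeline, just a generic illustration using NumPy and SciPy; the 140–160Hz band is an assumed choice to bracket the roughly 150Hz oscillations described above:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(x, y, fs, band=(140.0, 160.0)):
    """Phase-locking value between two signals within a frequency band.

    PLV = |mean(exp(i * (phase_x - phase_y)))|: 1 means the signals keep
    a fixed phase relationship, 0 means no consistent relationship.
    """
    # Band-pass both signals around the oscillation of interest
    nyquist = fs / 2.0
    b, a = butter(4, [band[0] / nyquist, band[1] / nyquist], btype="band")
    xf, yf = filtfilt(b, a, x), filtfilt(b, a, y)
    # Instantaneous phase from the analytic signal (Hilbert transform)
    phase_x = np.angle(hilbert(xf))
    phase_y = np.angle(hilbert(yf))
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

# Two noisy signals sharing a 150 Hz component should give a PLV near 1.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
shared = np.sin(2 * np.pi * 150 * t)
x = shared + 0.5 * np.random.randn(t.size)
y = shared + 0.5 * np.random.randn(t.size)
print(phase_locking_value(x, y, fs))
```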
Pär Halje, Associate Research Fellow of Neurophysiology, Lund University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More........