Australia leads first human trial of one-time gene editing therapy to halve bad cholesterol



Melbourne, November 10 (IANS): Researchers in Australia have led a first-in-human trial for a breakthrough gene-editing therapy that halves bad cholesterol and triglycerides in people with difficult-to-treat lipid disorders.

The trial tested CTX310, a one-time CRISPR-Cas9 gene-editing therapy that uses fat-based particles to carry CRISPR editing tools into the liver, switching off the ANGPTL3 gene. Turning off this gene lowers LDL (bad) cholesterol and triglycerides, two blood fats linked to heart disease, according to a statement released Monday by Australia's Monash University.

The Victorian Heart Hospital, operated by Monash Health in partnership with Monash University, treated three of the 15 patients aged 18-75 with difficult-to-treat lipid disorders in phase 1 of the global trial, conducted across Australia, New Zealand, and Britain, the statement said, as reported by the Xinhua news agency.

At the highest dose, a single course of CTX310 reduced LDL cholesterol by a mean of 50 per cent and triglycerides by 55 per cent, with reductions evident from two weeks after treatment and lasting for at least 60 days, it said. Across participants at the various doses, LDL cholesterol and triglycerides were reduced by nearly 60 per cent, with only mild, short-term side effects reported.

Importantly, CTX310 is the first therapy to achieve large reductions in both LDL cholesterol and triglycerides at the same time, marking a potential breakthrough for people with mixed lipid disorders who have elevations in both, according to the trial published in the New England Journal of Medicine.

"The possibility of a single-course treatment with lasting effects could be a major step in how we prevent heart disease," said Stephen Nicholls, Director of the Victorian Heart Hospital, and study lead investigator."It makes treatment easier, reduces ongoing costs, relieves pressure on the health system, all while improving a person's quality of life," Nicholls said, emphasising plans to focus on larger and more diverse patient populations in future trials of CTX310. Australia leads first human trial of one-time gene editing therapy to halve bad cholesterol | MorungExpress | morungexpress.com
Read More........

Scientists Regrow Retina Cells to Tackle Leading Cause of Blindness Using Nanotechnology


Macular degeneration is the leading cause of blindness in developed countries, but regrowing the human cells lost to this condition was the feature of a new successful treatment that took advantage of advances in nanotechnology.

Regrowing the cells of the human retina on a scaffold of synthetic, tissue-like material showed substantial improvements over previously used materials such as cellulose, and the scientists hope they can move on to testing their method in people who are already blind.

Macular degeneration is increasing in prevalence in the developed world. It’s the leading cause of blindness and is caused by the loss of cells in a key part of the eye called the retina.

Humans have no ability to regrow retinal pigment cells, but scientists have determined how to do it in vitro using pluripotent stem cells. However, as the study authors describe, previous examples of this procedure saw scientists growing the cells on flat surfaces rather than on a surface resembling the retinal membrane.

This, they state, limits the effectiveness of transplanted cells.

In a study at the UK’s Nottingham Trent University, biomedical scientist Biola Egbowon and colleagues fabricated 3D scaffolds with polymer nanofibers and coated them with a steroid to reduce inflammation.

The method by which the nanofibers were made was pretty darn cool. The team squirted molten polyacrylonitrile and Jeffamine polymers through a strong electric field in a technique known as “electrospinning.” The high voltage caused molecular changes in the polymers that saw them solidify again into a scaffold of tiny fibers that attracted water yet maintained mechanical strength.

After the scaffolding was made, it was treated with an anti-inflammatory steroid.

This unique pairing of materials and electrospinning created a scaffold that kept the retinal pigment cells viable for 150 days outside of any potential human patient, all while expressing the biomarkers critical for maintaining retinal physiological characteristics.

“While this may indicate the potential of such cellularized scaffolds in regenerative medicine, it does not address the question of biocompatibility with human tissue,” Egbowon and colleagues caution in their paper, urging more research to be conducted, specifically regarding the orientation of the cells and whether they can maintain good blood supply.
Read More........

Blue, green, brown, or something in between – the science of eye colour explained

You’re introduced to someone and your attention catches on their eyes. They might be a rich, earthy brown, a pale blue, or the rare green that shifts with every flicker of light. Eyes have a way of holding us, of sparking recognition or curiosity before a single word is spoken. They are often the first thing we notice about someone, and sometimes the feature we remember most.

Across the world, human eyes span a wide palette. Brown is by far the most common shade, especially in Africa and Asia, while blue is most often seen in northern and eastern Europe. Green is the rarest of all, found in only about 2% of the global population. Hazel eyes add even more diversity, often appearing to shift between green and brown depending on the light.

So, what lies behind these differences?

It’s all in the melanin

The answer rests in the iris, the coloured ring of tissue that surrounds the pupil. Here, a pigment called melanin does most of the work.

Brown eyes contain a high concentration of melanin, which absorbs light and creates their darker appearance. Blue eyes contain very little melanin. Their colour doesn’t come from pigment at all but from the scattering of light within the iris, a physical effect known as the Tyndall effect, a bit like the effect that makes the sky look blue.

In blue eyes, the shorter wavelengths of light (such as blue) are scattered more effectively than longer wavelengths like red or yellow. Due to the low concentration of melanin, less light is absorbed, allowing the scattered blue light to dominate what we perceive. This blue hue results not from pigment but from the way light interacts with the eye’s structure.
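To see why, it helps to put numbers on the scattering. In the Rayleigh approximation, scattered intensity scales with 1/wavelength^4; the short Python sketch below uses that simplified law with illustrative wavelengths (the real iris effect involves structural scattering, so this is a rough model only):

# Rayleigh approximation: scattered intensity is proportional to 1/wavelength^4.
# Wavelengths in nanometres; values are illustrative, not iris measurements.
def relative_scatter(wavelength_nm, reference_nm=650):
    # Scattering strength relative to red light (650 nm)
    return (reference_nm / wavelength_nm) ** 4

for name, wl in [("blue", 450), ("green", 550), ("red", 650)]:
    print(f"{name} ({wl} nm): {relative_scatter(wl):.1f}x the scattering of red")

# Blue light scatters roughly 4.4 times more strongly than red, which is
# why an iris with little melanin to absorb light appears blue.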

Green eyes result from a balance, a moderate amount of melanin layered with light scattering. Hazel eyes are more complex still. Uneven melanin distribution in the iris creates a mosaic of colour that can shift depending on the surrounding ambient light.

What have genes got to do with it?

The genetics of eye colour is just as fascinating.

For a long time, scientists believed a simple “brown beats blue” model, controlled by a single gene. Research now shows the reality is much more complex. Many genes contribute to determining eye colour. This explains why children in the same family can have dramatically different eye colours, and why two blue-eyed parents can sometimes have a child with green or even light brown eyes.

Eye colour also changes over time. Many babies of European ancestry are born with blue or grey eyes because their melanin levels are still low. As pigment gradually builds up over the first few years of life, those blue eyes may shift to green or brown.

In adulthood, eye colour tends to be more stable, though small changes in appearance are common depending on lighting, clothing, or pupil size. For example, blue-grey eyes can appear very blue, very grey or even a little green depending on ambient light. More permanent shifts are rarer but can occur as people age, or in response to certain medical conditions that affect melanin in the iris.

The real curiosities

Then there are the real curiosities.

Heterochromia, where one eye is a different colour from the other, or one iris contains two distinct colours, is rare but striking. It can be genetic, the result of injury, or linked to specific health conditions. Celebrities such as Kate Bosworth and Mila Kunis are well-known examples. Musician David Bowie’s eyes appeared as different colours because of a permanently dilated pupil after an accident, giving the illusion of heterochromia.

In the end, eye colour is more than just a quirk of genetics and physics. It’s a reminder of how biology and beauty intertwine. Each iris is like a tiny universe, rings of pigment, flecks of gold, or pools of deep brown that catch the light differently every time you look.

Eyes don’t just let us see the world, they also connect us to one another. Whether blue, green, brown, or something in-between, every pair tells a story that’s utterly unique, one of heritage, individuality, and the quiet wonder of being human.

Davinia Beaver, Postdoctoral research fellow, Clem Jones Centre for Regenerative Medicine, Bond University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More........


This Common Fungus Found on Human Skin Wipes Out Deadly Superbug Staph Infections


University of Oregon researchers have uncovered a molecule produced by yeast living on human skin that showed potent antimicrobial properties against a pathogen responsible for a half-million hospitalizations annually in the US.

It’s a unique approach to tackling the growing problem of antibiotic-resistant bacteria. With the global threat of drug-resistant infections, fungi inhabiting human skin are an untapped resource for identifying new antibiotics, said Caitlin Kowalski, a postdoctoral researcher at the UO who led the study.

Described in a paper published last month in Current Biology, the common skin fungus Malassezia gobbles up oil and fats on human skin to produce fatty acids that selectively eliminate Staphylococcus aureus.

One out of every three people has Staphylococcus aureus harmlessly dwelling in their nose, but the bacteria are a risk factor for serious infections when given the opportunity: open wounds, abrasions and cuts. They’re the primary cause of the skin and soft tissue infections known as staph infections.

Staphylococcus aureus is also a hospital superbug notorious for being resistant to current antibiotics, elevating the pressing need for new medicines.

There are lots of studies that identify new antibiotic structures, Kowalski said, “but what was fun and interesting about ours is that we identified (a compound) that is well-known and that people have studied before.”

The compound is not toxic in normal lab conditions, but it can be potent in conditions that replicate the acidic environment of healthy skin. “I think that’s why in some cases we may have missed these kinds of antimicrobial mechanisms,” Kowalski added, “because the pH in the lab wasn’t low enough. But human skin is really acidic.”

Humans play host to a colossal array of microorganisms, known as the microbiome, but we know little about our resident fungi and their contributions to human health, Kowalski said. The skin microbiome is of special interest to her because while other body parts are home to dozens of different fungi, the skin is dominantly colonized by one kind, known as Malassezia.

Malassezia can be associated with cases of dandruff and eczema, but it’s considered relatively harmless and a normal part of skin flora. The yeast has evolved to live on mammalian skin, so much so that it can’t make fatty acids without the lipids—oils and fats—secreted by skin.

Despite the abundance of Malassezia found on us, they remain understudied, Kowalski said.

“The skin is a parallel system to what’s happening in the gut, which is really well-studied,” she said in a media release. “We know that the intestinal microbiome can modify host compounds and make their own unique compounds that have new functions. Skin is lipid-rich, and the skin microbiome processes these lipids to also produce bioactive compounds. So what does this mean for skin health and diseases?”

Looking at human skin samples from healthy donors and experiments done with skin cells in the lab, Kowalski found that the fungal species Malassezia sympodialis transformed host lipids into antibacterial hydroxy fatty acids. Fatty acids have various functions in cells but are notably the building blocks for cell membranes.

The hydroxy fatty acids synthesized by Malassezia sympodialis were detergent-like, destroying the membranes of Staphylococcus aureus and causing its internal contents to leak away. The attack prevented the colonization of Staphylococcus aureus on the skin and ultimately killed the bacteria in as little as 15 minutes, Kowalski said.

But the fungus isn’t a magic bullet. After enough exposure, the staph bacteria eventually became tolerant to the fungus, as they do when clinical antibiotics are overused.

Looking at their genetics, the researchers found that the bacteria evolved a mutation in the Rel gene, which activates the bacterial stress response. Similar mutations have been previously identified in patients with Staphylococcus aureus infections.

The findings show that a bacterium’s host environment and interactions with other microbes can influence its susceptibility to antibiotics.

“There’s growing interest in applying microbes as a therapeutic, such as adding bacteria to prevent the growth of a pathogen,” Kowalski said. “But it can have consequences that we have not yet fully understood. Even though we know antibiotics lead to the evolution of resistance, it hasn’t been considered when we think about the application of microbes as a therapeutic.”

While the discovery adds a layer of complexity for drug discovery, Kowalski said she is excited about the potential of resident fungi as a new source for future antibiotics.

Identifying the antimicrobial fatty acids took three years and a cross-disciplinary effort. Kowalski collaborated with chemical microbiologists at McMaster University to track down the compound.

“It was like finding a needle in a haystack but with molecules you can’t see,” said Kowalski’s adviser, Matthew Barber, an associate professor of biology in the College of Arts and Sciences at the UO.

Kowalski is working on a follow-up study that goes deeper into the genetic mechanisms that led to the antibiotic tolerance. She is also preparing to launch her own lab to further investigate the overlooked role of the skin microbiome, parting from Barber’s lab after bringing fungi into focus.

“Antibiotic-resistant bacterial infections are a major human health threat and one that, in some ways, is getting worse,” Barber said. “We still have a lot of work to do in understanding the microorganisms but also finding new ways that we can possibly treat or prevent those infections.” [Source: Leila Okahata, University of Oregon]
Read More........

Scientists use AI to reveal the neural dynamics of human conversation


New York (IANS): By combining artificial intelligence (AI) with electrical recordings of brain activity, researchers have been able to track the language exchanged during conversations and the corresponding neural activity in different brain regions, according to a new study.

The team from the Department of Neurosurgery at Massachusetts General Hospital in the US investigated how our brains process language during real-life conversations.

“Specifically, we wanted to understand which brain regions become active when we're speaking and listening, and how these patterns relate to the specific words and context of the conversation,” said lead author Jing Cai in a paper published in Nature Communications.

They employed AI to take a closer look at how our brains handle the back-and-forth of real conversations. The team combined advanced AI, specifically language models like those behind ChatGPT, with neural recordings using electrodes placed within the brain.

This allowed them to simultaneously track the linguistic features of conversations and the corresponding neural activity in different brain regions.

“By analysing these synchronised data streams, we could map how specific aspects of language – like the words being spoken and the conversational context – were represented in the dynamic patterns of brain activity during conversation,” said Cai.
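The paper's exact pipeline is not reproduced here, but the general recipe, regressing neural activity onto language-model features and checking whether the fit beats chance, can be sketched in a few lines. In this illustration the data are random stand-ins, and every name, shape and parameter is an assumption rather than the authors' code:

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

# Hypothetical data: one row per word spoken or heard in the conversation.
rng = np.random.default_rng(0)
n_words, n_dims = 500, 768
X = rng.normal(size=(n_words, n_dims))  # stand-in for contextual word embeddings
y = rng.normal(size=n_words)            # stand-in for neural activity at one site

# Encoding-model logic: if embeddings predict activity above chance,
# that recording site carries information about those linguistic features.
scores = cross_val_score(Ridge(alpha=100.0), X, y, cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.3f}")  # ~0 for random data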

They found that both speaking and listening during a conversation engage a widespread network of brain areas in the frontal and temporal lobes.

What's interesting is that these brain activity patterns are highly specific, changing depending on the exact words being used, the context and order of those words.

“We also observed that some brain regions are active during both speaking and listening, suggesting a partially shared neural basis for these processes. Finally, we identified specific shifts in brain activity that occur when people switch from listening to speaking during a conversation,” said the authors.

The findings offer significant insights into how the brain pulls off the seemingly effortless feat of conversation.
Read More........

An AI system has reached human level on a test for ‘general intelligence’. Here’s what that means

A new artificial intelligence (AI) model has just achieved human-level results on a test designed to measure “general intelligence”.

On December 20, OpenAI’s o3 system scored 85% on the ARC-AGI benchmark, well above the previous AI best score of 55% and on par with the average human score. It also scored well on a very difficult mathematics test.

Creating artificial general intelligence, or AGI, is the stated goal of all the major AI research labs. At first glance, OpenAI appears to have at least made a significant step towards this goal.

While scepticism remains, many AI researchers and developers feel something just changed. For many, the prospect of AGI now seems more real, urgent and closer than anticipated. Are they right?

Generalisation and intelligence

To understand what the o3 result means, you need to understand what the ARC-AGI test is all about. In technical terms, it’s a test of an AI system’s “sample efficiency” in adapting to something new – how many examples of a novel situation the system needs to see to figure out how it works.

An AI system like ChatGPT (GPT-4) is not very sample efficient. It was “trained” on millions of examples of human text, constructing probabilistic “rules” about which combinations of words are most likely.

The result is pretty good at common tasks. It is bad at uncommon tasks, because it has less data (fewer samples) about those tasks.

Until AI systems can learn from small numbers of examples and adapt with more sample efficiency, they will only be used for very repetitive jobs and ones where the occasional failure is tolerable.

The ability to accurately solve previously unknown or novel problems from limited samples of data is known as the capacity to generalise. It is widely considered a necessary, even fundamental, element of intelligence.

Grids and patterns

The ARC-AGI benchmark tests for sample efficient adaptation using little grid square problems like the one below. The AI needs to figure out the pattern that turns the grid on the left into the grid on the right.

Each question gives three examples to learn from. The AI system then needs to figure out the rules that “generalise” from the three examples to the fourth.
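For concreteness, the publicly released ARC tasks are distributed as JSON, with a few training input/output grid pairs and a held-out test input. The toy task below follows that layout, though the grids and the hidden rule are invented for illustration:

# An ARC-style task: infer the rule from "train", apply it to "test".
# Grids are small 2D lists of colour codes (0 = black, 1-9 = colours).
task = {
    "train": [
        {"input": [[0, 1], [0, 0]], "output": [[1, 0], [0, 0]]},
        {"input": [[0, 0], [0, 2]], "output": [[0, 0], [2, 0]]},
        {"input": [[3, 0], [0, 0]], "output": [[0, 3], [0, 0]]},
    ],
    "test": [{"input": [[0, 0], [4, 0]]}],
}

# Here the hidden rule is "mirror each row"; a solver has to infer that
# from three examples and produce the answer for the unseen test grid.
def mirror_rows(grid):
    return [list(reversed(row)) for row in grid]

assert all(mirror_rows(ex["input"]) == ex["output"] for ex in task["train"])
print(mirror_rows(task["test"][0]["input"]))  # [[0, 0], [0, 4]]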

These are a lot like the IQ tests you might remember from school.

Weak rules and adaptation

We don’t know exactly how OpenAI has done it, but the results suggest the o3 model is highly adaptable. From just a few examples, it finds rules that can be generalised.

To figure out a pattern, we shouldn’t make any unnecessary assumptions, or be more specific than we really have to be. In theory, if you can identify the “weakest” rules that do what you want, then you have maximised your ability to adapt to new situations.

What do we mean by the weakest rules? The technical definition is complicated, but weaker rules are usually ones that can be described in simpler statements.

In the example above, a plain English expression of the rule might be something like: “Any shape with a protruding line will move to the end of that line and ‘cover up’ any other shapes it overlaps with.”

Searching chains of thought?

While we don’t know how OpenAI achieved this result just yet, it seems unlikely they deliberately optimised the o3 system to find weak rules. However, to succeed at the ARC-AGI tasks it must be finding them.

We do know that OpenAI started with a general-purpose version of the o3 model (which differs from most other models, because it can spend more time “thinking” about difficult questions) and then trained it specifically for the ARC-AGI test.

French AI researcher Francois Chollet, who designed the benchmark, believes o3 searches through different “chains of thought” describing steps to solve the task. It would then choose the “best” according to some loosely defined rule, or “heuristic”.

This would be “not dissimilar” to how Google’s AlphaGo system searched through different possible sequences of moves to beat the world Go champion.

You can think of these chains of thought like programs that fit the examples. Of course, if it is like the Go-playing AI, then it needs a heuristic, or loose rule, to decide which program is best.

There could be thousands of different seemingly equally valid programs generated. That heuristic could be “choose the weakest” or “choose the simplest”.
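One way to picture such a heuristic: enumerate candidate programs, keep those consistent with all the examples, and prefer the shortest description. The sketch below makes the idea concrete with two invented candidate rules; both fit the examples, but the weaker (simpler) one is preferred because it should generalise better:

# Toy "choose the simplest" selection over candidate programs.
# Each candidate pairs a description with a function; a shorter
# description stands in for a "weaker", less specific rule.
candidates = [
    ("mirror each row",
     lambda g: [list(reversed(r)) for r in g]),
    ("mirror each row, then recolour 1 to 5",
     lambda g: [[5 if c == 1 else c for c in reversed(r)] for r in g]),
]

examples = [  # (input grid, output grid) pairs, as in an ARC task
    ([[0, 0], [0, 2]], [[0, 0], [2, 0]]),
    ([[3, 0], [0, 0]], [[0, 3], [0, 0]]),
]

consistent = [(desc, fn) for desc, fn in candidates
              if all(fn(i) == o for i, o in examples)]
best = min(consistent, key=lambda c: len(c[0]))  # minimum-description-length pick
print(best[0])  # both rules fit, but "mirror each row" is simpler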

However, if it is like AlphaGo, then OpenAI may simply have had an AI create a heuristic, as was done for AlphaGo: Google trained a model to rate different sequences of moves as better or worse than others.

What we still don’t know

The question then is, is this really closer to AGI? If that is how o3 works, then the underlying model might not be much better than previous models.

The concepts the model learns from language might not be any more suitable for generalisation than before. Instead, we may just be seeing a more generalisable “chain of thought” found through the extra steps of training a heuristic specialised to this test. The proof, as always, will be in the pudding.

Almost everything about o3 remains unknown. OpenAI has limited disclosure to a few media presentations and early testing to a handful of researchers, laboratories and AI safety institutions.

Truly understanding the potential of o3 will require extensive work, including evaluations, an understanding of the distribution of its capacities, how often it fails and how often it succeeds.

When o3 is finally released, we’ll have a much better idea of whether it is approximately as adaptable as an average human.

If so, it could have a huge, revolutionary economic impact, ushering in a new era of self-improving accelerated intelligence. We will require new benchmarks for AGI itself and serious consideration of how it ought to be governed.

If not, then this will still be an impressive result. However, everyday life will remain much the same.

Michael Timothy Bennett, PhD Student, School of Computing, Australian National University and Elija Perrier, Research Fellow, Stanford Center for Responsible Quantum Technology, Stanford University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More........

Is the bird flu virus inching closer to humans?

New Delhi, April 29 (IANS) While there is no record to date of sustained human-to-human bird flu transmission, the recent virus mutations show it may be inching closer to humans, according to health experts on Monday.

The bird flu or avian influenza A (H5N1) virus outbreak in poultry farms is not a new occurrence. It has periodically been reported all around the world, including poultry farms in parts of India.

Migrating wild birds bring the virus to poultry farms. However, in recent years, this bird flu virus H5N1 has jumped to mammals.

In 2023, the H5N1 virus killed a record number of birds and also spread to otters, sea lions, foxes, dolphins, and seals, among others. More recently it also affected numerous cattle farms across the US. Health officials in the US found fragments of bird virus in pasteurised milk sold in stores, including in about 20 per cent of samples in initial testing across the country.

"This shows that the H5N1 bird flu virus has now adapted for circulating among mammals. It is now able to easily spread from mammal to mammal, rather than having to jump each time from bird to mammal. This shows the virus has made suitable adaptations already. And bird flu virus has moved one step closer to humans," Dr Rajeev Jayadevan, co-chairman of the Indian Medical Association’s National Covid-19 Task Force, told IANS.

Importantly, "there is no record to date of sustained human-to-human transmission. This can only occur if the virus makes more adaptations by mutating. The concern now is the virus has found a new host among cattle, which is always in contact with man," he added.

Can bird flu infect humans?

Bird flu -- a common phenomenon seen in India -- raised infection concerns among humans in Jharkhand’s Ranchi last week. Two doctors and six staff members of the Regional Poultry Farm in Hotwar were quarantined for two days. However, their throat swab samples, sent for testing on April 27, were found to be negative.

According to data from the World Health Organisation, a total of 873 human cases of infection with influenza A (H5N1) and 458 deaths, a fatality rate of more than 50 per cent, were reported globally from 21 countries between 2003 and 2023. However, to date, no sustained human-to-human transmission has been detected.

"Human infection due to avian influenza happens only with close contact with infected animals. Although the risk for human infection is rare, such occurrences come with a high mortality rate," biologist Vinod Scaria, told IANS.

The high mortality rate is because "humans have no prior immune memory for this particular type of influenza virus", said Dr Jayadevan.

The WHO believes that available epidemiological and virological evidence does not indicate that current bird flu viruses have acquired the ability of sustained transmission among humans. However, the recent episode of transmission to cattle, where it has reportedly affected one human, has raised fresh concerns.

Genomic analysis suggests that it has silently been spreading among the cattle for months - since December or January.

"Scientists are worried whether the virus will now make further adaptations where it can not only easily infect man, but also spread from man to man, in which case it could become a major catastrophic event. We hope it will not happen," Dr Jayadevan told IANS.

The WHO advises people in close contact with cattle and poultry to regularly wash their hands, employ good food safety and food hygiene practices, pasteurise milk, and get vaccinated against seasonal human flu to reduce the risk that H5N1 could recombine with a human influenza virus.

"Appropriate personal protection while handling infected birds/dead birds or excreta is very important and awareness of this among the public is important," Scaria told IANS.
Read More........

The first pig kidney has been transplanted into a living person. But we’re still a long way from solving organ shortages

In a world first, we heard last week that US surgeons had transplanted a kidney from a gene-edited pig into a living human. News reports said the procedure was a breakthrough in xenotransplantation – when an organ, cells or tissues are transplanted from one species to another.


Champions of xenotransplantation regard it as the solution to organ shortages across the world. In December 2023, 1,445 people in Australia were on the waiting list for donor kidneys. In the United States, more than 89,000 are waiting for kidneys.

One biotech CEO says gene-edited pigs promise “an unlimited supply of transplantable organs”.

Not everyone, though, is convinced transplanting animal organs into humans is really the answer to organ shortages, or even that it’s right to use organs from other animals this way.

There are two critical barriers to the procedure’s success: organ rejection and the transmission of animal viruses to recipients.

But in the past decade, a new platform and technique known as CRISPR/Cas9 – often shortened to CRISPR – has promised to mitigate these issues.

What is CRISPR?

CRISPR gene editing takes advantage of a system already found in nature. CRISPR’s “genetic scissors” evolved in bacteria and other microbes to help them fend off viruses. Their cellular machinery allows them to integrate and ultimately destroy viral DNA by cutting it.

In 2012, two teams of scientists discovered how to harness this bacterial immune system. This is made up of repeating arrays of DNA and associated proteins, known as “Cas” (CRISPR-associated) proteins.

When they used a particular Cas protein (Cas9) with a “guide RNA” made up of a single molecule, they found they could program the CRISPR/Cas9 complex to break and repair DNA at precise locations as they desired. The system could even “knock in” new genes at the repair site.
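As a loose illustration of what “programmable” means here: Cas9 is steered to a cut site by a roughly 20-letter guide sequence that must sit next to a short “NGG” motif (the PAM) in the target DNA. The toy scan below shows that search logic on an invented sequence; it is a cartoon of target selection, not a lab protocol:

import re

# Invented DNA fragment (not a real gene)
dna = "ATGCGGTACCTTGACGATCGGAAGCTTTACGGATCGTTAGGCTAACGG"

# Cas9 cuts a few bases upstream of an "NGG" PAM; the 20 bases before
# the PAM are the protospacer that the guide RNA is designed to match.
for m in re.finditer(r"(?=([ACGT]{20})([ACGT]GG))", dna):
    protospacer, pam = m.group(1), m.group(2)
    print(f"guide target: {protospacer}  PAM: {pam}")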

In 2020, the two scientists leading these teams were awarded a Nobel prize for their work.

In the case of the latest xenotransplantation, CRISPR technology was used to edit 69 genes in the donor pig to inactivate viral genes, “humanise” the pig with human genes, and knock out harmful pig genes.


A busy time for gene-edited xenotransplantation

While CRISPR editing has brought new hope to the possibility of xenotransplantation, even recent trials show great caution is still warranted.

In 2022 and 2023, two patients with terminal heart diseases, who were ineligible for traditional heart transplants, were granted regulatory permission to receive a gene-edited pig heart. These pig hearts had ten genome edits to make them more suitable for transplanting into humans. However, both patients died within several weeks of the procedures.

Earlier this month, we heard a team of surgeons in China transplanted a gene-edited pig liver into a clinically dead man (with family consent). The liver functioned well up until the ten-day limit of the trial.

How is this latest example different?

The gene-edited pig kidney was transplanted into a relatively young, living, legally competent and consenting adult.

The total number of gene edits made to the donor pig is very high. The researchers report making 69 edits to inactivate viral genes, “humanise” the pig with human genes, and knock out harmful pig genes.

Clearly, the race to transform these organs into viable products for transplantation is ramping up.

From biotech dream to clinical reality

Only a few months ago, CRISPR gene editing made its debut in mainstream medicine.

In November, drug regulators in the United Kingdom and US approved the world’s first CRISPR-based genome-editing therapy for human use – a treatment for life-threatening forms of sickle-cell disease.

The treatment, known as Casgevy, uses CRISPR/Cas-9 to edit the patient’s own blood (bone-marrow) stem cells. By disrupting the unhealthy gene that gives red blood cells their “sickle” shape, the aim is to produce red blood cells with a healthy spherical shape.

Although the treatment uses the patient’s own cells, the same underlying principle applies to recent clinical xenotransplants: unsuitable cellular materials may be edited to make them therapeutically beneficial in the patient.

We’ll be talking more about gene-editing

Medicine and gene technology regulators are increasingly asked to approve new experimental trials using gene editing and CRISPR.

However, neither xenotransplantation nor the therapeutic applications of this technology lead to changes to the genome that can be inherited.

For this to occur, CRISPR edits would need to be applied to the cells at the earliest stages of their life, such as to early-stage embryonic cells in vitro (in the lab).

In Australia, intentionally creating heritable alterations to the human genome is a criminal offence carrying 15 years’ imprisonment.

No jurisdiction in the world has laws that expressly permit heritable human genome editing. However, some countries lack specific regulations about the procedure.

Is this the future?

Even without creating inheritable gene changes, however, xenotransplantation using CRISPR is in its infancy.

For all the promise of the headlines, there is not yet one example of a stable xenotransplantation in a living human lasting beyond seven months.

While authorisation for this recent US transplant has been granted under the so-called “compassionate use” exemption, conventional clinical trials of pig-human xenotransplantation have yet to commence.

But the prospect of such trials would likely require significant improvements in current outcomes to gain regulatory approval in the US or elsewhere.

By the same token, regulatory approval of any “off-the-shelf” xenotransplantation organs, including gene-edited kidneys, would seem some way off.

Christopher Rudge, Law lecturer, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More........

What is a sonar pulse and how can it injure humans under water?

Christine Erbe, Curtin University

Over the weekend, the Australian government revealed that last Tuesday its navy divers had sustained “minor injuries”, likely due to sonar pulses from a Chinese navy vessel.

The divers had been clearing fishing nets from the propellers of HMAS Toowoomba while in international waters off the coast of Japan. According to a statement from deputy prime minister Richard Marles, despite HMAS Toowoomba communicating with internationally recognised signals, the Chinese vessel approached the Australian ship and turned on its sonar, forcing the Australian divers to exit the water.

The incident prompted a response from the Australian government, who labelled the incident “unsafe and unprofessional”. But what exactly is a sonar pulse, and what kinds of injuries can sonar cause to divers?

What is sonar?

Light doesn’t travel well under water – even in clear waters, you can see perhaps some tens of metres. Sound, however, travels very well and far under water. This is because water is much denser than air, and so can respond faster and better to acoustic pressure waves – sound waves.

Because of these properties, ships use sonar to navigate through the ocean and to “see” under water. The word “sonar” stands for sound navigation and ranging.

Sonar equipment sends out short acoustic (sound) pulses or pings, and then analyses the echoes. Depending on the timing, amplitude, phase and direction of the echoes the equipment receives, you can tell what’s under water – the seafloor, canyon walls, coral, fishes, and of course ships and submarines.

Most vessels – from small, private boats to large commercial tankers – use sonar. However, compared to your off-the-shelf sonar used for finding fish, navy sonars are stronger.


What are the effects of sonar on divers?

This is a difficult topic to study, because you don’t want to deliberately expose humans to harmful levels of sound. There are, however, anecdotes from various navies and accidental exposures. There have also been studies on what humans can hear under water, with or without neoprene suits, hoods, or helmets.

We don’t hear well under water – no surprise, since we’ve evolved to live on land. Having said that, you would hear a sonar sound under water (a mid-to-high pitch noise) and would know you’ve been exposed.

When it comes to naval sonars, human divers have rated the sound as “unpleasant to severe” at levels of roughly 150dB re 1 µPa (decibel relative to a reference pressure of one micropascal, the standard reference for underwater sound). This would be perhaps, very roughly, 10km away from a military sonar. Note that we can’t compare sound exposure under water to what we’d receive through the air, because there are too many physical differences between the two.

Human tolerance limits are roughly 180dB re 1 µPa, which would be around 500m from military sonar. At such levels, humans might experience dizziness, disorientation, temporary memory and concentration impacts, or temporary hearing loss. We don’t have information on what levels the Australian divers were exposed to, but their injuries were described as minor.
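Those distance figures line up with simple geometric spreading, under which the received level drops by 20·log10(range in metres). The Python sketch below assumes spherical spreading and an illustrative source level of about 230 dB re 1 µPa at 1 m for a powerful naval sonar; neither number comes from the incident itself:

import math

SOURCE_LEVEL_DB = 230.0  # assumed source level, dB re 1 uPa at 1 m (illustrative)

def received_level(range_m):
    # Spherical spreading: transmission loss = 20 * log10(range in metres)
    return SOURCE_LEVEL_DB - 20 * math.log10(range_m)

for r in (500, 10_000):
    print(f"{r:>6} m: ~{received_level(r):.0f} dB re 1 uPa")

# ~500 m  -> ~176 dB, near the ~180 dB tolerance limit quoted above
# ~10 km  -> ~150 dB, the "unpleasant to severe" level quoted above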

At higher received levels, closer ranges, or longer exposures, you might see more severe physiological or health impacts. In extreme cases, in particular for impulsive, sudden sound (which sonar is not), sound can cause damage to tissues and organs.

What does sonar do to marine animals?

Some of the information on what noise might do to humans under water comes from studies and observations of animals.

While they typically don’t have outer ears (except for sea lions), marine mammals have inner ears that function similarly to ours. They can receive hearing damage from noise, just like we do. This might be temporary, like the ringing ears or reduced sensitivity you might experience after a loud concert, or it can be permanent.

Marine mammals living in a dark ocean rely on sound and hearing to a greater extent than your average human. They use sound to navigate, hunt, communicate with each other and to find mates. Toothed whales and dolphins have evolved a biological echo sounder or biosonar, which sends out series of clicks and listens for echoes. So, interfering with their sounds or impacting their hearing can disrupt critical behaviours.

Finally, sound may also impact non-mammalian fauna, such as fishes, which rely on acoustics rather than vision for many of their life functions.

Christine Erbe, Director, Centre for Marine Science & Technology, Curtin University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More........

How consciousness may rely on brain cells acting collectively – new psychedelics research on rats

Psychedelics are known for inducing altered states of consciousness in humans by fundamentally changing our normal pattern of sensory perception, thought and emotion. Research into the therapeutic potential of psychedelics has increased significantly in the last decade. While this research is important, I have always been more intrigued by the idea that psychedelics can be used as a tool to study the neural basis of human consciousness in laboratory animals.

We ultimately share the same basic neural hardware with other mammals, and possibly some basic aspects of consciousness, too. So by examining what happens in the brain when there’s a psychedelically induced change in conscious experience, we can perhaps glean insights into what consciousness is in the first place.

We still don’t know a lot about how the networks of cells in the brain enable conscious experience. The dominating view is that consciousness somehow emerges as a collective phenomenon when the dispersed information processing of individual neurons (brain cells) is integrated as the cells interact. But the mechanism by which this is supposed to happen remains unclear. Now our study on rats, published in Communications Biology, suggests that psychedelics radically change the way that neurons interact and behave collectively.

Our study compared two different classes of psychedelics in rats: the classic LSD type and the less-typical ketamine type (ketamine is an anaesthetic in larger doses). Both classes are known to induce psychedelic experiences in humans, despite acting on different receptors in the brain.

Exploring brain waves

We used electrodes to simultaneously measure electrical activity from 128 separate areas of the brain of nine awake rats while they were given psychedelics. The electrodes could pick up two kinds of signals: electrical brain waves caused by the cumulative activity in thousands of neurons, and smaller transient electrical pulses, called action potentials, from individual neurons.

The classic psychedelics, such as LSD and psilocybin (the active ingredient in magic mushrooms), activate a receptor in the brain (5-HT2A) which normally binds to serotonin, a neurotransmitter that regulates mood and many other things. Ketamine, on the other hand, works by inhibiting another receptor (NMDA), which is normally activated by glutamate, the primary neurotransmitter in the brain for making neurons fire.

We speculated that, despite these differences, the two classes of psychedelics might have similar effects on the activity of brain cells. Indeed, it turned out that both drug classes induced a very similar and distinctive pattern of brain waves in multiple brain regions. The brain waves were unusually fast, oscillating about 150 times per second. They were also surprisingly synchronised between different brain regions. Short bursts of oscillations at a similar frequency are known to occur occasionally under normal conditions in some brain regions. But in this case, they occurred for prolonged durations.

First, we assumed that a single brain structure was generating the wave and that it then spread to other locations. But the data was not consistent with that scenario. Instead, we saw that the waves went up and down almost simultaneously in all parts of the brain where we could detect them – a phenomenon called phase synchronisation. Such tight phase synchronisation over such long distances has, to our knowledge, never been observed before.

We were also able to measure action potentials from individual neurons during the psychedelic state. Action potentials are electrical pulses, no longer than a thousandth of a second, that are generated by the opening and closing of ion channels in the cell membrane. Action potentials are the primary way that neurons influence each other, and consequently they are considered to be the main carrier of information in the brain.

However, the action potential activity caused by LSD and ketamine differed significantly, so it could not be directly linked to the general psychedelic state. For LSD, neurons were inhibited – meaning they fired fewer action potentials – in all parts of the brain. For ketamine, the effect depended on cell type – certain large neurons were inhibited, while a type of smaller, locally connecting neurons fired more. Therefore, it is probably the synchronised wave phenomenon – how the neurons behave collectively – that is most strongly linked to the psychedelic state.

Mechanistically, this makes some sense. It is likely that this type of increased synchrony has large effects on the integration of information across neural systems that normal perception and cognition rely on. I think this possible link between neuron-level system dynamics and consciousness is fascinating. It suggests that consciousness relies on a coupled collective state rather than the activity of individual neurons – it is greater than the sum of its parts.

That said, this link is still highly speculative at this point. That’s because the phenomenon has not yet been observed in human brains. Also, one should be cautious when extrapolating human experiences to other animals – it is of course impossible to know exactly what aspects of a trip we share with our rodent relatives. But when it comes to cracking the deep mystery of consciousness, every bit of information is valuable.

Pär Halje, Associate Research Fellow of Neurophysiology, Lund University

This article is republished from The Conversation under a Creative Commons license. Read the original article.
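The phase synchronisation described above is commonly quantified with a phase-locking value (PLV), computed from the instantaneous phases that the Hilbert transform assigns to each signal. The sketch below runs that standard calculation on synthetic signals standing in for the recordings, which are not reproduced here (in real data the channels would first be band-passed around the frequency of interest):

import numpy as np
from scipy.signal import hilbert

fs, f = 1000, 150                      # sample rate (Hz) and a ~150 Hz rhythm
t = np.arange(0, 2.0, 1 / fs)

# Two synthetic channels sharing a 150 Hz oscillation plus independent noise
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * f * t) + 0.5 * rng.normal(size=t.size)
y = np.sin(2 * np.pi * f * t + 0.3) + 0.5 * rng.normal(size=t.size)

# Instantaneous phase of each channel via the analytic signal
phase_x = np.angle(hilbert(x))
phase_y = np.angle(hilbert(y))

# Phase-locking value: 1 = perfectly synchronised phases, 0 = unrelated
plv = np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
print(f"PLV: {plv:.2f}")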
Read More........

UFO, Humanity & time travel are the signage of positive index

Concerns about unidentified flying objects (UFOs) relate only to the calculation of their positive ability index. Positive means development and negative means demolishment, and time is the biggest term, moving along with population and its exploration across various fields of endeavour. Productivity is the goal to achieve in order to move along with time, or beyond it; this is what is meant by moving within a time frame, or achieving a rate of time travel. Taking the Earth as a model, humanity is a very big term of time, as it carries the pure value of development with coordination.

Humans have a brain of vision and programming, with feeling and curiosity at its nucleus, so we can say that as humans we are on Earth to manage the Earth's resources, with their proper utilisation and distribution among all, in keeping with nature.

Legality is therefore the biggest term of humanity, as it manages humans in the right direction, reduces the rate of resistance among the various fields of exploration, and aids their proper integration.

It comes with a rate of positive ability index, which defines the rate of development and human movement on Earth as well as across the planetary system.

It is a simple formulation: humanity on Earth moves within a positive frame of time as its population grows, bounded by a legal framework that steers the mass of the people on Earth in positive directions. This enables the exploration of Earth, and its massive integration yields a rate of productivity, which is the sign of time travel, of moving beyond time. The acceptance of legal data and terms thus provides nonstop improvement in the exploration of Earth, a nonstop process that runs in infinite terms.

The positive ability index of humans means variable-less doings by humans, or by any species, across the planetary system; as time progresses, we find the springs of liberty to move far ahead on Earth as well as in outer space, like UFOs (advanced species with a level of advancement that lets them move on Earth or across the planetary system).

Thinking of UFOs (aliens) as demons is therefore just a myth of the imagination: if they were not possessed of a positive ability index, their movement in time and their advancement would not be possible.

The world must therefore save itself from any kind of war or negativity, as humanity reached the current time only after thousands of years of hardship across generations and eras of incarnations; once any strong negativity comes, humanity will run backward in the time frame.

Positivism thus provides exploration in terms of infinity, and its integration produces a real process of time travel applicable to universes, including Earth, as a whole. Positivism is subject to the divine and negativity is subject to the demon; the first travels upward in time, the second downward.
Read More........

Human footprints found in Tabuk are 85,000 years old


Tokyo: Human footprints dating back to about 85,000 years have been discovered on the banks of an ancient lake in the Nefud Desert in Tabuk region, Prince Sultan bin Salman, president of the Saudi Commission for Tourism and National Heritage (SCTH), announced in Tokyo on Thursday.

This amazing and rare discovery points to a new understanding of how our species came out of Africa en route to colonizing the world.

Prince Sultan’s announcement came on the sidelines of his visit on Thursday to the exhibition entitled “Trade routes in the Arabian Peninsula – the magnificent antiquities of the Kingdom of Saudi Arabia throughout the ages.”

The exhibition, organized by the SCTH in the Japanese National Museum in Tokyo, is scheduled to end on Sunday.

A joint Saudi-international team discovered traces of several adults scattered across the muddy ground of an ancient lake — each heading to a different destination — in the northwest of Saudi Arabia, Prince Sultan said, according to a Saudi Press Agency (SPA) report on Friday.

The research team included the Saudi Geological Survey, the SCTH, King Saud University, the Max Planck Institute for the Science of Human History, Oxford University, Cambridge University, the Australian National University, and the University of New South Wales in Australia.

Prince Sultan said the age of the footprints coincides with that of the fossilised finger bone of an adult recently found near the central site in the province of Taima.

The finger, whose discovery was announced last month, is considered to belong to one of the early human migrants to the Arabian Peninsula via the Nefud Desert, which was then a green pasture replete with rivers, lakes, fresh water and abundant animals – a source of food for humans.

Prince Sultan said the SCTH is working side-by-side with archeologists at the Max Planck Institute, which began working with the commission several years ago.

The objective is to study these footprints in detail. The archeological and scientific exploratory work is still going on in international laboratories. Source: ummid.com
Read More........

Humans doomed if they don't find another liveable planet: Hawking

Humanity will not survive another 1,000 years on Earth alone and the human race must find another planet to live on, one of the world's best-known physicists, Stephen Hawking has warned.

''I don't think we will survive another 1,000 years without escaping beyond our fragile planet,'' the celebrated theoretical physicist and cosmologist said.

He painted a grave picture of the future while delivering a lecture on the universe and the origins of human beings at the Oxford Union debating society.

Professor Hawking, 74, reflected on the understanding of the universe garnered from breakthroughs over the past five decades, describing 2016 as a ''glorious time to be alive and doing research into theoretical physics''.

''Our picture of the universe has changed a great deal in the last 50 years and I am happy if I have made a small contribution,'' he was quoted as saying by The Independent.

''The fact that we humans, who are ourselves mere fundamental particles of nature, have been able to come this close to understanding the laws that govern us and the universe is certainly a triumph,'' Hawking said on Monday.

Highlighting ''ambitious'' experiments that will give an even more precise picture of the universe, he said, ''We will map the position of millions of galaxies with the help of (super) computers like Cosmos. We will better understand our place in the universe.

''Perhaps one day we will be able to use gravitational waves to look right back into the heart of the Big Bang. But we must also continue to go into space for the future of humanity,'' he said.

Hawking's predictions for humanity have been bleak in recent months.

In January, he cautioned developments in science and technology are producing ''new ways things can go wrong''.

The scientist also estimated self-sustaining human colonies on Mars would not be constructed for another 100 years, meaning the human race must be ''very careful'' in the time before then.

Hawking advised a deeply polarised Britain that ''just like children, we will have to learn to share as we face perilous times'' in an essay criticising the attitudes towards wealth that precipitated Brexit.

After the stark warning, Hawking finished with a more galvanising message encouraging students to explore the mysteries of the universe not yet solved.

''Remember to look up at the stars and not down at your feet. Try to make sense of what you see, wonder about what makes the universe exist. Be curious. However difficult life may seem, there is always something you can do and succeed at. It matters that you don't just give up,'' he said. Source: domain-b.com
Read More........

The five biggest threats to human existence


Anders Sandberg, University of Oxford: In the daily hubbub of current “crises” facing humanity, we forget about the many generations we hope are yet to come. Not those who will live 200 years from now, but 1,000 or 10,000 years from now. I use the word “hope” because we face risks, called existential risks, that threaten to wipe out humanity. These risks are not just for big disasters, but for the disasters that could end history.

Not everyone has ignored the long future though. Mystics like Nostradamus have regularly tried to calculate the end of the world. HG Wells tried to develop a science of forecasting and famously depicted the far future of humanity in his book The Time Machine. Other writers built other long-term futures to warn, amuse or speculate. 

But had these pioneers or futurologists not thought about humanity’s future, it would not have changed the outcome. There wasn’t much that human beings in their place could have done to save us from an existential crisis or even cause one.

We are in a more privileged position today. Human activity has been steadily shaping the future of our planet. And even though we are far from controlling natural disasters, we are developing technologies that may help mitigate, or at least, deal with them.

Future imperfect:

Yet, these risks remain understudied. There is a sense of powerlessness and fatalism about them. People have been talking about apocalypses for millennia, but few have tried to prevent them. Humans are also bad at doing anything about problems that have not occurred yet (partially because of the availability heuristic – the tendency to overestimate the probability of events we know examples of, and underestimate events we cannot readily recall).

If humanity becomes extinct, at the very least the loss is equivalent to the loss of all living individuals and the frustration of their goals. But the loss would probably be far greater than that. Human extinction means the loss of meaning generated by past generations, the lives of all future generations (and there could be an astronomical number of future lives) and all the value they might have been able to create. If consciousness or intelligence are lost, it might mean that value itself becomes absent from the universe. This is a huge moral reason to work hard to prevent existential threats from becoming reality. And we must not fail even once in this pursuit.

With that in mind, I have selected what I consider the five biggest threats to humanity’s existence. But there are caveats that must be kept in mind, for this list is not final. 

Over the past century we have discovered or created new existential risks – supervolcanoes were discovered in the early 1970s, and before the Manhattan project nuclear war was impossible – so we should expect others to appear. Also, some risks that look serious today might disappear as we learn more. The probabilities also change over time – sometimes because we are concerned about the risks and fix them. 

Finally, just because something is possible and potentially hazardous doesn't mean it is worth worrying about. There are some risks we can do nothing at all about, such as gamma-ray bursts resulting from the explosions of massive stars. But if we learn we can do something, the priorities change. For instance, with sanitation, vaccines and antibiotics, pestilence went from an act of God to bad public health.

1. Nuclear war:

While only two nuclear weapons have been used in war so far – at Hiroshima and Nagasaki in World War II – and nuclear stockpiles are down from the peak they reached in the Cold War, it is a mistake to think that nuclear war is impossible. In fact, it might not be improbable.

The Cuban Missile Crisis came very close to turning nuclear. If we assume one such event every 69 years and a one in three chance that it escalates all the way to nuclear war, the chance of such a catastrophe is about one in 200 per year.
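That figure is just the product of the two assumed rates; the sketch below reproduces the arithmetic. Note that the 69-year interval and the one-in-three escalation odds are the article's working assumptions, not measured data.

```python
# Back-of-envelope estimate from the text: one Cuban-Missile-Crisis-scale
# close call per 69 years, with a 1-in-3 chance of escalating to war.
close_call_rate = 1 / 69          # close calls per year (assumed)
p_escalation = 1 / 3              # escalation probability (assumed)

p_war_per_year = close_call_rate * p_escalation
print(f"annual chance of nuclear war: 1 in {1 / p_war_per_year:.0f}")
# -> 1 in 207, i.e. roughly the "one in 200 per year" quoted above
```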

Worse still, the Cuban Missile Crisis was only the best-known case. The history of Soviet–US nuclear deterrence is full of close calls and dangerous mistakes. The actual probability has changed with international tensions, but it seems implausible that the chances would be much lower than one in 1,000 per year.

A full-scale nuclear war between major powers would kill hundreds of millions of people directly or through the near aftermath – an unimaginable disaster. But that is not enough to make it an existential risk. 

Similarly, the hazards of fallout are often exaggerated – potentially deadly locally, but globally a relatively limited problem. Cobalt bombs were proposed as a hypothetical doomsday weapon that would kill everybody with fallout, but in practice they are hard and expensive to build – and only just physically possible.

The real threat is nuclear winter – soot lofted into the stratosphere causing a multi-year cooling and drying of the world. Modern climate simulations show that it could preclude agriculture across much of the world for years. If this scenario occurs, billions would starve, leaving only scattered survivors who might be picked off by other threats such as disease. The main uncertainty is how the soot would behave: depending on the kind of soot, the outcomes may be very different, and we currently have no good way of estimating this.

2. Bioengineered pandemic:

Natural pandemics have killed more people than wars. However, natural pandemics are unlikely to be existential threats: there are usually some people resistant to the pathogen, and the offspring of survivors would be more resistant. Evolution also does not favour parasites that wipe out their hosts, which is why syphilis went from a virulent killer to a chronic disease as it spread in Europe.

Unfortunately we can now make diseases nastier. One of the more famous examples is how the introduction of an extra gene in mousepox – the mouse version of smallpox – made it far more lethal and able to infect vaccinated individuals. Recent work on bird flu has demonstrated that the contagiousness of a disease can be deliberately boosted.

Right now the risk of somebody deliberately releasing something devastating is low. But as biotechnology gets better and cheaper, more groups will be able to make diseases worse.

Most work on bioweapons has been done by governments looking for something controllable, because wiping out humanity is not militarily useful. But there are always some people who might want to do things simply because they can. Others have higher purposes: the Aum Shinrikyo cult, for instance, tried to hasten the apocalypse using bioweapons alongside its more successful nerve gas attack. Some people think the Earth would be better off without humans, and so on.

The number of fatalities from bioweapon attacks and epidemic outbreaks looks like it follows a power-law distribution – most attacks have few victims, but a few kill many. Given current numbers, the risk of a global pandemic from bioterrorism seems very small. But this is just bioterrorism: governments have killed far more people than terrorists with bioweapons (up to 400,000 may have died from the WWII Japanese biowar programme). And as technology grows more powerful, nastier pathogens become easier to design.
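To make the heavy-tail claim concrete, here is a minimal simulation sketch of power-law-distributed casualty counts. The Pareto exponent is purely illustrative, not fitted to any outbreak data.

```python
# Illustrative only: sample event sizes from a Pareto (power-law)
# distribution and show that a handful of events dominate the total.
import random

def pareto_sample(alpha=1.5, x_min=1.0):
    # Inverse-transform sampling: F(x) = 1 - (x_min / x)**alpha
    u = 1.0 - random.random()          # uniform in (0, 1]
    return x_min / u ** (1.0 / alpha)

events = sorted(pareto_sample() for _ in range(100_000))
median = events[len(events) // 2]
top_share = sum(events[-100:]) / sum(events)
print(f"median event: {median:.1f} victims")
print(f"share of victims from the worst 0.1% of events: {top_share:.0%}")
# Most events are tiny, yet the rare extremes carry much of the toll.
```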

3. Superintelligence:

Intelligence is very powerful. A tiny increment in problem-solving ability and group coordination is why we left the other apes in the dust. Now their continued existence depends on human decisions, not what they do. Being smart is a real advantage for people and organisations, so there is much effort in figuring out ways of improving our individual and collective intelligence: from cognition-enhancing drugs to artificial-intelligence software.

The problem is that intelligent entities are good at achieving their goals, but if the goals are badly set they can use their power to cleverly achieve disastrous ends. There is no reason to think that intelligence by itself will make something behave nicely and morally. In fact, it is possible to prove that certain types of superintelligent systems would not obey moral rules even if they were true.

Even more worrying is that, in trying to explain things to an artificial intelligence, we run into profound practical and philosophical problems. Human values are diffuse, complex things that we are not good at expressing, and even if we could express them we might not understand all the implications of what we wish for.

Software-based intelligence may very quickly go from below human to frighteningly powerful. The reason is that it may scale in different ways from biological intelligence: it can run faster on faster computers, parts can be distributed on more computers, different versions tested and updated on the fly, new algorithms incorporated that give a jump in performance. 

It has been proposed that an “intelligence explosion” is possible when software becomes good enough at making better software. Should such a jump occur there would be a large difference in potential power between the smart system (or the people telling it what to do) and the rest of the world. This has clear potential for disaster if the goals are badly set. 

The unusual thing about superintelligence is that we do not know if rapid and powerful intelligence explosions are possible: maybe our current civilisation as a whole is improving itself at the fastest possible rate. But there are good reasons to think that some technologies may speed things up far faster than current societies can handle. Similarly we do not have a good grip on just how dangerous different forms of superintelligence would be, or what mitigation strategies would actually work. It is very hard to reason about future technology we do not yet have, or intelligences greater than ourselves. Of the risks on this list, this is the one most likely to either be massive or just a mirage.

This is a surprisingly under-researched area. Even in the 1950s and 60s, when people were extremely confident that superintelligence could be achieved “within a generation”, they did not look much into safety issues. Maybe they did not take their predictions seriously, but more likely they simply saw it as a remote future problem.

4. Nanotechnology:

Nanotechnology is the control over matter with atomic or molecular precision. That is in itself not dangerous – instead, it would be very good news for most applications. The problem is that, like biotechnology, increasing power also increases the potential for abuses that are hard to defend against.

The big problem is not the infamous “grey goo” of self-replicating nanomachines eating everything – that would require clever design for that very purpose. It is tough to make a machine replicate: biology is much better at it by default. Maybe some maniac would eventually succeed, but there is plenty of lower-hanging fruit on the destructive-technology tree.

The most obvious risk is that atomically precise manufacturing looks ideal for the rapid, cheap production of things like weapons. In a world where any government could “print” large amounts of autonomous or semi-autonomous weapons (including the facilities to make even more), arms races could become very fast – and hence unstable, since launching a first strike before the enemy gains too large an advantage might be tempting.

Weapons can also be small, precise things: a “smart poison” that acts like a nerve gas but seeks out its victims, or ubiquitous “gnatbot” surveillance systems for keeping populations obedient, seem entirely possible. There might also be ways of putting nuclear proliferation and climate engineering into the hands of anybody who wants them.

We cannot judge the likelihood of existential risk from future nanotechnology, but it looks like it could be potentially disruptive just because it can give us whatever we wish for.

5. Unknown unknowns:

The most unsettling possibility is that there is something out there that is very deadly, and we have no clue about it.

The silence in the sky might be evidence for this. Is the absence of aliens because life or intelligence is extremely rare, or because intelligent life tends to get wiped out? If there is a future Great Filter, it must have been noticed by other civilisations too, and even that didn't help.

Whatever the threat is, it would have to be something that is nearly unavoidable even when you know it is there, no matter who and what you are. We do not know about any such threats (none of the others on this list work like this), but they might exist.

Note that just because something is unknown doesn't mean we cannot reason about it. In a remarkable paper, Max Tegmark and Nick Bostrom show that a certain set of risks must be less than one chance in a billion per year, based on the relative age of Earth: had such catastrophes been at all common, a planet surviving as long as ours would be extraordinarily unlikely.

You might wonder why climate change or meteor impacts have been left off this list. Climate change, no matter how scary, is unlikely to make the entire planet uninhabitable (though it could compound other threats if our defences against it break down). Meteors could certainly wipe us out, but we would have to be very unlucky. The average mammalian species survives for about a million years, so the background natural extinction rate is roughly one in a million per year. This is much lower than the nuclear-war risk, which after 70 years is still the biggest threat to our continued existence.
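Putting the two rates quoted in this article side by side makes the gap concrete. The sketch below is a minimal comparison, taking the one-in-1,000-per-year floor from the nuclear war section as a working assumption rather than a measured value.

```python
# Compare the two annual rates quoted in the article (assumptions,
# not measurements): natural background extinction vs nuclear war.
background_rate = 1 / 1_000_000    # ~1 in a million per year
nuclear_rate = 1 / 1_000           # plausible floor from the text

print(f"nuclear-war risk is ~{nuclear_rate / background_rate:,.0f}x "
      f"the natural background extinction rate")

# Chance of getting through a century without nuclear war at that rate:
p_century = (1 - nuclear_rate) ** 100
print(f"chance of a war-free century: {p_century:.0%}")   # ~90%
```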

The availability heuristic makes us overestimate risks that are often in the media, and discount unprecedented risks. If we want to be around in a million years we need to correct that.

Anders Sandberg, James Martin Research Fellow, University of Oxford

This article was originally published on The Conversation. Read the original article.