ICRISAT develops portable technology for testing crops' nutrition level



The International Crops Research Institute for the Semi-Arid Tropics (ICRISAT) on Thursday announced that its researchers are leading a transformation in crop testing, combining AI-driven models and pocket-size near-infrared spectroscopy (NIRS) devices.

These portable sensors allow for quick evaluation of nutrition levels in indigenous food grains right at the farmer's gate or in research fields.

ICRISAT Director General, Dr Jacqueline d'Arros Hughes, championed the integration of this disruptive technology into breeding pipelines and key points of relevant value chains.

Aligned with the UN Food and Agriculture Organisation (FAO) strategy, she foresees the tool as a catalyst for the production of nutrient-dense crops, both in breeding programmes and in farmers' fields, a crucial element in the global fight against malnutrition.

"This technology is poised to expedite the breeding of nutrient-dense crops while facilitating their integration into the value chain. Our goal with this intervention is to provide quality assurance for the distribution of nutritionally fortified crops, so that they reach those who need them most," she added.

Traditionally, assessing the nutritional quality of grains and feedstock could take a number of weeks, involving manual or partially automated processes and laboratory instruments.

In contrast, mobile NIRS devices are more cost-effective and can assess over 150 samples per day per person, ICRISAT said.

These non-destructive and robust grain quality measuring devices provide timely information on grain composition and can be used to promote quality-based payments in the market—benefiting food producers, grain processing industries, and farmers alike.

"We see the adoption of portable technology for assessing grain quality as an important step in decentralising and democratising market systems, essential to promote the consumption of nutri-cereals. This transition can facilitate quality-driven payments for farmers, while providing quality assurance to health-conscious households moving forward," noted Dr Sean Mayes, Global Research Director of the Accelerated Crop Improvement Program at ICRISAT.

In Anantapur in Andhra Pradesh, ICRISAT recommends its Girnar 4 groundnut variety to ensure premium prices for farmers and to differentiate the crop from lower-value varieties. ICRISAT's Girnar 4 and Girnar 5 groundnut varieties boast oleic acid levels of 75-80 per cent, far surpassing those of standard varieties at 40-50 per cent.

Oleic acid is a heart-healthy monounsaturated fatty acid, which holds considerable importance for the groundnut market, as it provides new end-uses for the crop. Growing consumer awareness of its advantages spurred market demand for high oleic acid content in oils and related products.

This pioneering approach, initially applied in peanut breeding, could be replicated across other crops, offering efficient and cost-effective solutions to address poor nutrition.

ICRISAT's Facility for Exploratory Research on Nutrition (FERN) laboratory is expanding its prediction models to encompass various traits and crops beyond groundnuts.

"We are currently focusing on developing methods to assess oil, oleic acid, linoleic acid, carotenoids, starch, moisture, and phosphorus in various cereals and legumes, such as finger millet, foxtail millet, pearl millet, sorghum, maize, wheat, chickpea, mungbean, common bean, pigeon pea, cowpea, soybean, groundnut, and mustard," said Dr Jana Kholova, Cluster Leader, Crop Physiology and Modelling, ICRISAT.

Cement Supercapacitors Could Turn the Concrete Around Us into Massive Energy Storage Systems

credit – MIT Sustainable Concrete Lab

Scientists from MIT have created a conductive “nanonetwork” inside a unique concrete mixture that could enable everyday structures like walls, sidewalks, and bridges to store and release electrical energy.

It’s perhaps the most ubiquitous man-made material on Earth by weight, but every square foot of it could, with the addition of some extra materials, power the world that it has grown to cover.

Known as electron-conductive carbon concrete, or ec3, the material is made by adding an ultra-fine paracrystalline form of carbon known as carbon black, along with electrolytes, to the concrete mix.

The technology itself is not new: MIT reported in 2023 that 45 cubic meters of ec3, roughly the amount of concrete used in a typical basement, could power a whole home. Since then, advances in materials science and manufacturing processes have improved its efficiency by orders of magnitude.

Now, just 5 cubic meters can do the job thanks to an improved electrolyte.

“A key to the sustainability of concrete is the development of ‘multifunctional concrete,’ which integrates functionalities like this energy storage, self-healing, and carbon sequestration,” said Admir Masic, lead author of the new study and associate professor of civil and environmental engineering at MIT.

“Concrete is already the world’s most-used construction material, so why not take advantage of that scale to create other benefits?”

The improved energy density was made possible by a deeper understanding of how the nanocarbon black network inside ec3 functions and interacts with electrolytes.

Using focused ion beams to sequentially remove thin layers of the ec3 material, followed by high-resolution imaging of each slice with a scanning electron microscope, the team across the EC³ Hub and MIT Concrete Sustainability Hub was able to reconstruct the conductive nanonetwork at the highest resolution yet. This revealed that the network is essentially a fractal-like “web” surrounding the ec3 pores, which is what allows the electrolyte to infiltrate the material and current to flow through the system.

“Understanding how these materials ‘assemble’ themselves at the nanoscale is key to achieving these new functionalities,” adds Masic.

Equipped with their new understanding of the nanonetwork, the team experimented with different electrolytes and their concentrations to see how they impacted energy storage density. As Damian Stefaniuk, first author and EC³ Hub research scientist, highlights, “we found that there is a wide range of electrolytes that could be viable candidates for ec3. This even includes seawater, which could make this a good material for use in coastal and marine applications, perhaps as support structures for offshore wind farms.”

At the same time, the team streamlined the way they added electrolytes to the mix. Rather than curing ec3 electrodes and then soaking them in electrolyte, they added the electrolyte directly into the mixing water. Since electrolyte penetration was no longer a limitation, the team could cast thicker electrodes that stored more energy.

The team achieved the greatest performance when they switched to organic electrolytes, especially those that combined quaternary ammonium salts — found in everyday products like disinfectants — with acetonitrile, a clear, conductive liquid often used in industry. A cubic meter of this version of ec3—about the size of a refrigerator—can store over 2 kilowatt-hours of energy. That’s about enough to power an actual refrigerator for a day.
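The closing comparison can be sanity-checked with a quick sketch; the refrigerator's daily draw of about 1.5 kWh is an illustrative assumption, not a figure from the article:

```python
# Check the reported ec3 figures: roughly 2 kWh stored per cubic meter,
# against a refrigerator assumed to draw about 1.5 kWh per day.
def days_of_fridge_power(volume_m3, density_kwh_per_m3=2.0, fridge_kwh_per_day=1.5):
    """Days a block of ec3 of the given volume could run the refrigerator."""
    stored_kwh = volume_m3 * density_kwh_per_m3
    return stored_kwh / fridge_kwh_per_day

# One cubic meter (about the size of a refrigerator) stores ~2 kWh,
# which is indeed "about enough to power an actual refrigerator for a day":
print(days_of_fridge_power(1.0))  # ~1.3 days
```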

While batteries maintain a higher energy density, ec3 can in principle be incorporated directly into a wide range of architectural elements—from slabs and walls to domes and vaults—and last as long as the structure itself.

“The Ancient Romans made great advances in concrete construction. Massive structures like the Pantheon stand to this day without reinforcement. If we keep up their spirit of combining material science with architectural vision, we could be at the brink of a new architectural revolution with multifunctional concretes like ec3,” proposes Masic.

Taking inspiration from Roman architecture, the team built a miniature ec3 arch to show how structural form and energy storage can work together. Operating at 9 volts, the arch supported its own weight and additional load while powering an LED light.

The latest developments in ec³ technology bring it a step closer to real-world scalability. It’s already been used to heat sidewalk slabs in Sapporo, Japan, due to its thermally conductive properties, representing a potential alternative to salting.

“What excites us most is that we’ve taken a material as ancient as concrete and shown that it can do something entirely new,” says James Weaver, a co-author on the paper who is an associate professor of design technology and materials science and engineering at Cornell University, as well as a former EC³ Hub researcher. “By combining modern nanoscience with an ancient building block of civilization, we’re opening a door to infrastructure that doesn’t just support our lives, it powers them.”

TinyML: The Small Technology Tackling the Biggest Climate Challenge

Image by Gerd Altmann from Pixabay | For Representational Purpose Only

Tanveer Singh: As the planet struggles under the weight of more than 40 billion metric tons of CO₂ emissions in 2024 alone, and ever-rising energy demand, the search for smarter, leaner solutions has never been more urgent. Enter TinyML, where the power of AI meets ultra-low-energy computing to drive sustainability at scale.

It may be surprising, but as you read this, billions of sensors are tracking the planet’s health, from the air we breathe to the energy we consume. More than 14 billion IoT devices are already in use to monitor climate change, a number projected to reach 30 billion by 2030. The concerning part is that these devices consume around 200 terawatt-hours of electricity annually, roughly the entire energy consumption of a country like Thailand. Much of that electricity is still generated by burning fossil fuels, adding millions of tonnes of carbon emissions every year, just to monitor climate change. And therein lies the irony.

Furthermore, the constant transmission of data from these sensors costs millions of dollars in deployment and maintenance. In a large-scale smart city the size of New York, IoT networks can cost $10–15 million per year to operate. This is exactly where TinyML comes in, offering a path that lets IoT devices process data locally, cutting energy consumption by up to 90% and significantly lowering costs.

TinyML bridges the gap between artificial intelligence and embedded systems, allowing machine learning to run even in sensors as small as a grain of sand. The idea is to build machine learning models for low-power devices such as microcontrollers, enabling a device to process data instantly, anywhere, without depending on the cloud to compute it. One clear example is Alexa, which uses TinyML models to handle simple requests on the device itself rather than sending everything through the cloud, which would take longer.

Additionally, TinyML improves privacy and data security by running locally, and reduces overall operational cost by 50-60% compared with large ML models that rely on cloud storage. Take the example of Google's on-device TinyML image classification, which keeps images private while cutting storage and cloud costs by over 50%. TinyML is best understood as a mini robot in your pocket that can solve problems instantly, instead of always asking a big computer far away for help. It is faster, saves energy, and keeps your information private. Applied to the climate, this efficiency becomes a distinguishing factor.

Besides being cost-effective and efficient, TinyML also supports the fight against climate change, from tracking air quality to predicting natural disasters. TinyML sensors enable quick detection of forest fires through heat or smoke detection, and aid in local air and water quality checks, eliminating dependence on cloud computing. For instance, Arduino-based air quality sensors measure air quality and provide data on an area's temperature and humidity. Similar models can monitor the performance of solar cells and wind turbines through their energy output, helping to increase the efficiency of solar and wind farms. Google's DeepMind AI, for example, was successfully used with wind farms in the US to predict wind power output 36 hours in advance, boosting the value of wind energy by around 20%.

Because of their small size and low power draw, these sensors can also monitor the calls of birds, whales, and other animals to track migration patterns and population health, giving researchers valuable data on ecosystems without disturbing the wildlife. In smart grids, TinyML sensors improve energy utilisation by constantly monitoring and managing the flow of electricity so that energy is not wasted. They can also measure water pressure, tidal patterns, and ground movement, data that can be used to detect disasters earlier. In Japan, for instance, TinyML sensors placed along coastlines measure tidal waves and ground vibration in real time, helping authorities issue faster tsunami and earthquake warnings.

However, while these applications highlight the transformative potential of TinyML in tackling climate-related problems, its integration also brings several challenges that must be addressed to ensure reliability and scalability. First and foremost is the hardware limitation: storage is measured in kilobytes, or at best a few megabytes, compared with traditional models that have gigabytes or terabytes of memory. As a result, TinyML models tend to be less precise than traditional models, which can be a serious challenge for applications that depend on reliability, such as disaster management. Furthermore, harsh conditions such as weather or wildlife can damage the devices, leading to malfunctions and higher maintenance costs.
Additionally, even though these devices are cost-effective, deploying billions of devices will still require huge funding, which can limit their production and scalability.
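To make the storage constraint concrete, here is a back-of-the-envelope sketch. The parameter count and the 1 MB flash budget are illustrative assumptions; the fourfold saving comes from quantising 32-bit floating-point weights down to 8-bit integers, a standard TinyML technique:

```python
# Illustrative sketch of the TinyML memory constraint: a model's weights
# must fit in a microcontroller's flash (assumed here to be 1 MB),
# not the gigabytes available to cloud-hosted models.
def model_size_bytes(num_params, bytes_per_param):
    """Raw weight storage: parameter count times bytes per parameter."""
    return num_params * bytes_per_param

PARAMS = 500_000  # hypothetical small model

float32_size = model_size_bytes(PARAMS, 4)  # 2,000,000 bytes (~2 MB)
int8_size = model_size_bytes(PARAMS, 1)     # 500,000 bytes (~0.5 MB)

BUDGET = 1_000_000  # assumed 1 MB of flash
print(float32_size <= BUDGET, int8_size <= BUDGET)  # False True
```

Quantisation trades a little precision for a 4x size reduction, which is exactly the accuracy-versus-footprint tension the paragraph above describes.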

Despite these challenges, the future of TinyML is being shaped by the integration of emerging technologies, large-scale adoption, and the expanding AI market. Combining TinyML with 5G networks, which offer up to 100 times the speed of 4G and the ability to connect over one million devices per square kilometre, could enable massive, interconnected sensor networks across cities that deliver faster and more reliable data. Integrating it with federated learning, an ML technique that lets multiple devices train a model together without sharing the raw data, can help ensure data privacy while increasing model accuracy. Governments and research institutes are also likely to adopt TinyML for tasks where it provides a scalable and cost-effective solution, especially in environments with limited resources. For instance, the US National Aeronautics and Space Administration (NASA) has explored TinyML to process sensor data directly on satellites, reducing the need for constant communication with Earth.
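The federated learning idea mentioned above can be sketched in a few lines. This is a minimal illustration of federated averaging with made-up weight vectors, not a production implementation:

```python
# Minimal sketch of federated averaging (FedAvg): each device trains
# locally and shares only its model weights; a server averages them,
# so raw sensor data never leaves the device.
def federated_average(device_weights):
    """Element-wise average of the weight vectors reported by each device."""
    n_devices = len(device_weights)
    n_weights = len(device_weights[0])
    return [sum(w[i] for w in device_weights) / n_devices
            for i in range(n_weights)]

# Three hypothetical sensors report locally trained weights:
updates = [[0.1, 0.4], [0.3, 0.2], [0.2, 0.3]]
print(federated_average(updates))  # approximately [0.2, 0.3]
```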

It is no exaggeration to say that TinyML models have the potential to shape the future of the world. By offering scalable, energy-efficient solutions, TinyML stands out as a strong alternative for tackling climate change. From reducing CO₂ emissions to processing data faster and strengthening privacy and accuracy, TinyML can be a catalyst for change not only in the fight against climate change but in other fields too. Undoubtedly, TinyML paves the way for a future where artificial intelligence works in harmony with the planet.

Tanveer Singh, a first-year student at Plaksha University, has been passionate about writing articles and poems since high school. From raising public awareness of new technologies to highlighting environmental and societal issues, he has explored a wide range of themes through his work and aspires to continue making an impact in this space for the long run.

Nanotechnology breakthrough may boost treatment for aggressive breast cancer: Study

IANS Photo

Sydney, (IANS): Researchers in Australia are developing next-generation nanoparticles to supercharge current treatments for triple-negative breast cancer (TNBC) -- one of the most aggressive and deadly forms of the disease.

The researchers are designing innovative iron-based nanoparticles, or "nano-adjuvants," small enough to fit thousands on a single strand of hair, to strengthen the body's immune response against TNBC, according to the University of Queensland's Australian Institute for Bioengineering and Nanotechnology (AIBN) on Monday, Xinhua news agency reported.

Unlike other breast cancers, TNBC lacks the proteins targeted by some of the conventional treatments used against other cancers, making effective therapy a significant challenge, according to Prof. Yu Chengzhong from the AIBN.

"Despite the promise of immunotherapy, its effectiveness against triple-negative breast cancer is extremely limited, which is leaving too many women without options -- and that's what our research is trying to change," Yu said.

The nanoparticles are designed to enhance the activity of T-cells, the white blood cells used by the immune system to fight disease, within the tumour microenvironment, improving the immune system's ability to recognise and attack cancer cells, according to Yu.

Supported by a 3 million Australian dollar ($1.89 million) National Health and Medical Research Council grant, the five-year research project aims to bridge a critical treatment gap, and could pave the way for clinical applications, not only for TNBC but also for other hard-to-treat cancers like ovarian cancer.

With over two decades of experience in nanotechnology and nanomedicine, Yu hopes this breakthrough will transform cancer treatment by making immunotherapy more effective for patients with aggressive solid tumours.

"This research will push the boundaries of science to find innovative treatments that change the way we fight this cancer, offering hope for women facing devastating outcomes," said AIBN Director Alan Rowan.

Qualcomm drives digital future with AI, 6G and 'Make in India' initiatives

IANS Photo

New Delhi, (IANS): Qualcomm India is taking a leading role in shaping India’s digital future, emphasising its commitment to inclusive, sustainable, and globally competitive technology solutions, the tech giant said on Thursday.

At the India Mobile Congress (IMC) 2025, the company showcased a wide range of innovations, from Edge AI and 6G to smart homes, connected devices, and advanced compute platforms -- highlighting how its technologies are driving India’s digital transformation.

The company presented its vision for an intelligent and connected India through three pillars -- Personal AI, Physical AI, and Industrial AI -- reflecting Qualcomm’s focus on providing scalable, secure, and India-first solutions across consumer, enterprise, and infrastructure domains.

Qualcomm has been a long-time partner in India’s technology journey, supporting the country from 3G to 5G, while actively preparing for 6G through early-stage research, strategic partnerships, and local R&D investments.

At IMC 2025, Qualcomm highlighted the power of Edge AI combined with 5G as the twin pillars of India’s digital future.

Its platforms are enabling real-time, low-latency intelligence across industries, including automotive, industrial IoT, mobile devices, and compute solutions.

Demonstrations included on-device generative AI for smartphones and industrial devices, AI-powered surveillance, intelligent wearables like smartwatches and earbuds, and connected vehicles, all delivering seamless, multimodal experiences.

Savi Soin, Senior Vice President and President of Qualcomm India, said, “IMC 2025 reflects India’s strong digital momentum. Qualcomm is proud to lead with technologies that are cutting-edge and India-first, from Edge AI and 6G to smart homes and secure video solutions.”

The company also announced key collaborations with Indian partners to expand its ecosystem.

To nurture the next generation of AI talent in India, Qualcomm launched the Qualcomm AI Upskilling Programme: Technical Foundation, aimed at students, developers, and professionals. The program covers AI and ML fundamentals, Edge AI, generative AI, and practical experience with Qualcomm’s AI Hub, helping participants build on-device AI applications.

Through these initiatives, Qualcomm India is reinforcing its role as a digital transformation partner for the nation. By supporting Make in India, advancing 6G, enabling AI upskilling, and working closely with partners and policymakers, Qualcomm is contributing to an inclusive, innovative, and globally competitive digital future for India.

A Combination Implant and Augmented Reality Glasses Restores Reading Vision to Blind Eyes

Study participant Sheila Irvine training with the device – credit Moorfields Eye Hospital

A “new era” has begun in the development of artificial vision after a combination of an electronic eye implant and augmented reality glasses restored vision to blind eyes in patients with untreatable macular degeneration.

Those treated with the device could read, on average, five lines of a vision chart, even though some could not even see the chart before their surgery.

The results of the European clinical trial, which involved 38 patients in 17 hospitals across 5 countries, were published in The New England Journal of Medicine. They showed that 84% of participants were able to read letters, numbers, and words using prosthetic vision through an eye that had previously lost its sight to the untreatable, progressive condition geographic atrophy in dry age-related macular degeneration (GA in dry AMD).

The now-proven device, called PRIMA, consists of an ultra-thin microchip implanted in the eye that receives infrared projections of the outside world from a video camera installed in a pair of augmented reality glasses.

A pocket computer fixed to a small control panel worn on the waistband then runs artificial intelligence algorithms to process the information contained in the infrared projection, converting it into an electrical signal. This signal passes through the retinal and optic nerve cells to the brain, where it is interpreted as vision.

The patient uses their glasses to focus and scan across the main object in the projected image from the video camera, using the zoom feature to enlarge the text. Each patient goes through an intensive rehabilitation program over several months to learn to interpret these signals and start reading again.

“In the history of artificial vision, this represents a new era,” said Mr. Mahi Muqit, associate professor at University College London’s Institute of Ophthalmology and consultant at Moorfields Eye Hospital, where the UK arm of the trial was conducted.

“Blind patients are actually able to have meaningful central vision restoration, which has never been done before.”

“Getting back the ability to read is a major improvement in their quality of life, lifts their mood and helps to restore their confidence and independence. The PRIMA chip operation can safely be performed by any trained vitreoretinal surgeon in under two hours—that is key for allowing all blind patients to have access to this new medical therapy for GA in dry AMD.”

Dry AMD is a slow deterioration of the cells of the macula over many years, as the light-sensitive retinal cells die off. Most people with dry AMD experience only a slight loss of central vision.

Through a process known as geographic atrophy (GA), it can progress to full vision loss in the eye, as the cells die and the central macula melts away. There is currently no treatment for GA, which affects 5 million people globally. All participants in this trial had lost the central sight of the eye being tested, leaving only limited peripheral vision.

Scans of the implant in a patient’s eye – credit Science Corporation

The procedure to install the implant involves a vitrectomy, in which the eye’s vitreous jelly is removed from between the lens and the retina; the surgeon then inserts the ultra-thin microchip, which is shaped like a SIM card and measures just 2mm x 2mm.

The PRIMA System device used in this operation is being developed by Science Corporation, which develops brain-computer interfaces and neural engineering. No significant decline in existing peripheral vision was observed in trial participants, and these findings pave the way for seeking approval to market the device.

UCL spoke with one of the patients who received the implant for the college’s news outlet.

“I wanted to take part in research to help future generations, and my optician suggested I get in touch with Moorfields,” began Sheila Irvine, one of Moorfields’ patients on the trial. “Before receiving the implant, it was like having two black discs in my eyes, with the outside distorted.

“I was an avid bookworm, and I wanted that back. I was nervous, excited, all those things. There was no pain during the operation, but you’re still aware of what’s happening. It’s a new way of looking through your eyes, and it was dead exciting when I began seeing a letter. It’s not simple, learning to read again, but the more hours I put in, the more I pick up.”

“The team at Moorfields has given me challenges, like ‘Look at your prescription,’ which is always tiny. I like stretching myself, trying to look at the little writing on tins, doing crosswords.”

The global trial was led by Dr. Frank Holz of the University of Bonn, with participants from the UK, France, Italy, and the Netherlands.

Mr. Muqit said the results left him feeling that a door had been opened for medical devices in this area, because no treatment is currently licensed for dry AMD. “I think it’s something that, in future, could be used to treat multiple eye conditions.”

First Light Fusion presents novel approach to fusion

(Image: First Light Fusion)

British inertial fusion energy developer First Light Fusion has presented what it describes as the first commercially viable, reactor-compatible path to 'high gain' fusion, which it says would drastically reduce the cost of a potentially limitless clean energy source.

In its white paper published today, First Light Fusion (FLF) outlines a novel and scientifically grounded approach to fusion energy called FLARE – Fusion via Low-power Assembly and Rapid Excitation. While the conventional inertial fusion energy (IFE) approach is to compress and heat the fuel at the same time to achieve ignition, FLARE splits this process into two: first compressing the fuel in a controlled and highly efficient manner and then using a separate process to ignite the compressed fuel, generating a massive surplus of energy, a concept known as 'fast ignition'.

FLARE leverages over 14 years of First Light's inertial fusion experience and its unique controlled-amplification technology, creating a system capable of reaching the high gain levels needed for cost competitive energy production. This new approach "would underpin the design for commercial reactors that can be based on much lower power systems that already exist today, opening up an opportunity for partners to build those systems, using FLF's technology as the fuel, and to roll it out worldwide," according to the company.

Gain, the ratio of energy output to energy input in a fusion reaction, is the critical metric determining commercial viability. The current record gain stands at 4, achieved at the US Department of Energy's National Ignition Facility (NIF) in May of this year.

"The FLARE concept, as detailed in today's white paper, could produce an energy gain of up to 1000. FLF's economic modelling suggests that a gain of at least 200 is needed for fusion energy to be commercially competitive, while a gain of 1000 would enable very low-cost power," the company said.
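The figures above are straightforward to relate in a quick sketch. The megajoule values below are illustrative assumptions; the gain-4 record and the 200 and 1000 thresholds come from the article:

```python
# Gain, as the article defines it, is energy output divided by energy input.
def fusion_gain(energy_out, energy_in):
    """Ratio of fusion energy released to energy delivered to the fuel."""
    return energy_out / energy_in

def commercially_competitive(gain, threshold=200):
    """Per FLF's economic modelling, competitiveness requires gain >= 200."""
    return gain >= threshold

# Illustrative shot: 8 units out for 2 units in gives today's record gain of 4,
# still well short of the 200 needed for competitiveness or FLARE's target of 1000.
print(fusion_gain(8.0, 2.0))             # 4.0
print(commercially_competitive(4.0))     # False
print(commercially_competitive(1000.0))  # True
```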

According to FLF, an experimental gain scale facility is expected to cost one-twentieth that of NIF and could be built using existing, proven technologies. Due to the lower energy and power requirements provided by the FLARE technology, future commercial power plants would have significantly lower capital costs than other plausible IFE schemes, with lower complexity and core components such as the energy delivery system costing one-tenth of the capital cost of previous fast ignition schemes.

"By building on existing technology, First Light's approach takes the brakes off inertial fusion deployment as it has the potential to leverage existing supply chains, significantly reduce capital expenditure, speed up planning approvals and reduce regulatory hurdles in the deployment of commercial fusion plants," it said.

"This is a pivotal moment not just for First Light, but for the future of energy," said First Light Fusion CEO Mark Thomas. "With the FLARE approach, we've laid out the world's first commercially viable, reactor-compatible pathway to high gain inertial fusion - and it's grounded in real science, proven technologies, and practical engineering.

"A pathway to a gain of 1000 puts us well beyond the threshold where fusion becomes economically transformative. Through our approach, we're opening the door to a new industrial sector - and we want to bring others with us."

First Light Fusion was founded by Yiannis Ventikos of the Mechanical Engineering Department at University College London, and Nicholas Hawker, formerly an engineering lecturer at Lady Margaret Hall, Oxford. The company was spun out from the University of Oxford in July 2011, with seed capital from IP Group plc, Parkwalk Advisors Ltd and private investors. Invesco and OSI provided follow-on capital.

In February, Oxfordshire-based First Light Fusion announced it would focus on commercial partnerships with other fusion companies that want to use its amplifier technology, as well as on non-fusion applications, such as work with NASA seeking to replicate potential high-velocity impacts in space. By dropping its plans for a fusion power plant and instead targeting commercial partnerships with others, it aims to "capitalise on the huge inertial fusion energy market opportunities enabling earlier revenues and lowering the long-term funding requirement".

GLE completes landmark laser technology demonstration

LEF facility (Image: GLE)

The large-scale enrichment technology testing campaign at Global Laser Enrichment's Test Loop facility in Wilmington, North Carolina, has demonstrated the commercial viability of laser enrichment.

Global Laser Enrichment (GLE) began the large-scale demonstration testing of the SILEX laser enrichment process in May. The extensive performance data it has collected provides confidence that the process can be commercially deployed, the company said. The demonstration programme will now continue through the rest of 2025, producing hundreds of kilograms of low-enriched uranium (LEU), while the company builds a domestic manufacturing base and supply chain to support deployment of US domestic enrichment capacity.

"We believe the enrichment activities conducted over the past five months position GLE to be the next American uranium enrichment solution," GLE CEO Stephen Long said, adding that, with 20% of US electricity supply coming from nuclear energy, this will "allow America to end its dangerous dependency on a fragile, foreign government-owned uranium fuel supply chain."

GLE is a joint venture of Australian company Silex Systems (51%) and Cameco Corporation (49%), and is the exclusive global licensee of the SILEX laser-based uranium enrichment technology invented by Silex Systems. Earlier this year, it completed the submission of an application to the US Nuclear Regulatory Commission for the Paducah Laser Enrichment Facility (PLEF) in Kentucky, where it plans to deploy the technology commercially, re-enriching depleted uranium tails from legacy Department of Energy gaseous diffusion plant operations.

The project is underpinned by a long-term agreement signed in 2016 for the sale to GLE of some 200,000 tonnes from the US Department of Energy's depleted uranium hexafluoride inventory, from which PLEF is expected to produce up to 6 million separative work units of LEU annually, delivering a domestic, single-site solution for uranium, conversion and enrichment.
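Separative work units (SWU) measure the effort an enrichment plant performs. As a rough, hypothetical illustration of how such figures are computed, with assumed assays rather than GLE's actual operating parameters, the standard value-function formula gives the SWU needed per kilogram of product:

```python
import math

def value(x):
    """Standard separative-work value function V(x) = (2x - 1) * ln(x / (1 - x))."""
    return (2 * x - 1) * math.log(x / (1 - x))

def swu_per_kg_product(xp, xf, xt):
    """SWU per kg of product, for product/feed/tails assays xp, xf, xt (mass fractions)."""
    feed = (xp - xt) / (xf - xt)   # kg of feed per kg of product (mass balance)
    tails = feed - 1.0             # kg of tails per kg of product
    return value(xp) + tails * value(xt) - feed * value(xf)

# Illustrative assays only: re-enriching depleted tails (assumed 0.35% U-235)
# to 4.95% LEU, discarding secondary tails at 0.25%.
print(round(swu_per_kg_product(0.0495, 0.0035, 0.0025), 1))
```

With these assumed assays, re-enrichment takes roughly 13 SWU per kilogram of LEU, illustrating why tails re-enrichment is a comparatively SWU-intensive route to fuel.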
Read More........

Scientists Regrow Retina Cells to Tackle Leading Cause of Blindness Using Nanotechnology


Macular degeneration is the leading cause of blindness in developed countries, but a new treatment has successfully regrown the human cells lost to this condition, taking advantage of advances in nanotechnology.

Regrowing the cells of the human retina on a scaffold of synthetic, tissue-like material showed substantial improvements over previously used materials such as cellulose, and the scientists hope they can move on to testing their method in the already blind.

Macular degeneration is increasing in prevalence in the developed world. It’s the leading cause of blindness and is caused by the loss of cells in a key part of the eye called the retina.

Humans have no ability to regrow retinal pigment cells, but scientists have determined how to do it in vitro using pluripotent stem cells. However, as the study authors describe, previous examples of this procedure saw scientists growing the cells on flat surfaces rather than on one resembling the retinal membrane.

This, they state, limits the effectiveness of transplanted cells.

In a study at the UK’s Nottingham Trent University, biomedical scientist Biola Egbowon and colleagues fabricated 3D scaffolds with polymer nanofibers and coated them with a steroid to reduce inflammation.

The method by which the nanofibers were made was pretty darn cool. The team squirted polyacrylonitrile and Jeffamine polymers in molten form through a high-voltage electric field in a technique known as “electrospinning.” The high voltage caused molecular changes that saw the polymers solidify into a scaffold of tiny fibers that attracted water yet maintained mechanical strength.

After the scaffolding was made, it was treated with an anti-inflammatory steroid.

This unique pairing of materials, combined with the electrospinning, created a scaffold that kept the retinal pigment cells viable for 150 days outside of any potential human patient, all while showing the phenotype of biomarkers critical for maintaining retinal physiological characteristics.

“While this may indicate the potential of such cellularized scaffolds in regenerative medicine, it does not address the question of biocompatibility with human tissue,” Egbowon and colleagues caution in their paper, urging more research to be conducted, specifically regarding the orientation of the cells and whether they can maintain good blood supply.
Read More........

Scientists Develop Biodegradable Smart Textile–A Big Leap Forward for Eco-Friendly Wearable Technology

Flexible inkjet printed E-textile – Credit: Marzia Dulal

Wearable electronic textiles can be both sustainable and biodegradable, shows a new study.

A research team led by the University of Southampton and UWE Bristol in the UK tested a new sustainable approach for fully inkjet-printed, eco-friendly e-textiles.

Named SWEET—for Smart, Wearable, and Eco-friendly Electronic Textiles—the new ‘fabric’ was described in findings published in the journal Energy and Environmental Materials.


E-textiles are those with embedded electrical components, such as sensors, batteries or lights. They might be used in fashion, for performance sportswear, or for medical purposes as garments that monitor people’s vital signs.

Such textiles need to be durable, safe to wear and comfortable, but also, in an industry which is increasingly concerned with clothing waste, they need to be kind to the environment when no longer required.

“Integrating electrical components into conventional textiles complicates the recycling of the material because it often contains metals, such as silver, that don’t easily biodegrade,” explained Professor Nazmul Karim at the University of Southampton.


“Our eco-friendly approach for selecting sustainable materials and manufacturing overcomes this, enabling the fabric to decompose when it is disposed of.”

The team’s design has three layers: a sensing layer, a layer to interface with the sensors, and a base fabric. It uses a textile called Tencel for the base, which is made from renewable wood and is biodegradable.

The active electronics in the design are made from graphene, along with a polymer called PEDOT:PSS. These conductive materials are precision inkjet-printed onto the fabric.

The research team, which included members from the universities of Exeter, Cambridge, Leeds, and Bath, tested samples of the material for continuous monitoring of heart rates. Five volunteers wore gloves fitted with the e-textile sensors, which were connected to monitoring equipment. Results confirmed the material can effectively and reliably measure both heart rate and temperature at the industry-standard level.
Gloves with e-textile sensors monitoring heart rate – Credit: Marzia Dulal
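As an illustration of the kind of signal processing such monitoring involves (the study's actual pipeline is not described here, and the function names below are hypothetical), heart rate can be estimated from a pulse-like sensor signal by simple peak counting:

```python
import numpy as np

def estimate_bpm(signal, fs):
    """Estimate heart rate by counting upward threshold crossings.

    A generic sketch: a real e-textile pipeline would add filtering
    and motion-artifact rejection on top of this.
    """
    threshold = signal.mean() + 0.5 * signal.std()
    above = signal > threshold
    # A beat starts where the signal crosses the threshold upward.
    crossings = np.flatnonzero(above[1:] & ~above[:-1])
    duration_min = len(signal) / fs / 60.0
    return len(crossings) / duration_min

# Synthetic 10-second pulse train at 72 beats per minute (1.2 Hz), sampled at 100 Hz.
fs = 100
t = np.arange(0, 10, 1 / fs)
pulse = np.sin(2 * np.pi * 1.2 * t) ** 21   # one narrow positive peak per cycle
print(round(estimate_bpm(pulse, fs)))
```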

“Achieving reliable, industry-standard monitoring with eco-friendly materials is a significant milestone,” said Dr. Shaila Afroj, an Associate Professor of Sustainable Materials from the University of Exeter and a co-author of the study. “It demonstrates that sustainability doesn’t have to come at the cost of functionality, especially in critical applications like healthcare.”


The project team then buried the e-textiles in soil to measure their biodegradability.

After four months, the fabric had lost 48 percent of its weight and 98 percent of its strength, suggesting relatively rapid and effective decomposition.

Furthermore, a life cycle assessment revealed the graphene-based electrodes had up to 40 times less impact on the environment than standard electrodes.

Four strips in a variety of decomposed states, during four months of decomposition – Credit: Marzia Dulal

Marzia Dulal from UWE Bristol, the first author of the study, highlighted the environmental impact: “Our life cycle analysis shows that graphene-based e-textiles have a fraction of the environmental footprint compared to traditional electronics. This makes them a more responsible choice for industries looking to reduce their ecological impact.”

The ink-jet printing process is also a more sustainable approach for e-textile fabrication, depositing exact amounts of functional materials on textiles as needed, with almost no material waste and less use of water and energy than conventional screen printing.

“These materials will become increasingly more important in our lives,” concluded Prof. Karim, who hopes to move forward with the team to design wearable garments made from SWEET, particularly in the area of early detection and prevention of heart diseases.
Read More........

Worldwide spending on AI is expected to be nearly $1.5 trillion in 2025: Report

IANS Photo

New Delhi, (IANS): Worldwide spending on artificial intelligence (AI) is expected to reach nearly $1.5 trillion in 2025, up nearly 50 per cent from about $988 billion in 2024, a report said on Monday.
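Gartner publishes its spending forecasts in millions of US dollars, so the reported 2024 figure of 987,904 corresponds to roughly $988 billion. A quick check of the headline growth rate:

```python
# Gartner reports these figures in millions of US dollars, so the 2024
# total of 987,904 corresponds to roughly $987.9 billion.
spend_2024_musd = 987_904
spend_2025_musd = 1_500_000   # "nearly $1.5 trillion"

growth_pct = (spend_2025_musd - spend_2024_musd) / spend_2024_musd * 100
print(f"{spend_2024_musd / 1e6:.2f} trillion -> {spend_2025_musd / 1e6:.1f} trillion "
      f"({growth_pct:.0f}% growth)")
```

The result, about 52%, matches the article's "nearly 50 per cent" characterisation.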

Further, overall global AI spending is likely to top $2 trillion in 2026, led by AI integration into products such as smartphones and PCs, as well as infrastructure, according to a report from business and technology insights company Gartner, Inc.

Mirroring last year's spending pattern, generative AI integration in smartphones would lead spending at $298.2 billion this year as well, followed by AI services ($282.6 billion), AI-optimised servers ($267.5 billion), AI processing semiconductors ($209.2 billion), AI application software ($172.0 billion) and AI infrastructure software ($126.2 billion).

"The forecast assumes continued investment in AI infrastructure expansion, as major hyperscalers continue to increase investments in data centres with AI-optimised hardware and GPUs to scale their services," said John-David Lovelock, Distinguished VP Analyst at Gartner.

"The AI investment landscape is also expanding beyond traditional U.S. tech giants, including Chinese companies and new AI cloud providers. Furthermore, venture capital investment in AI providers is providing additional tailwinds for AI spending," he added.

According to the report, the AI spending would reach $2.02 trillion in 2026 following a similar growth trajectory.

In 2026, spending on generative AI integration in smartphones is likely to reach $393.3 billion, while spending on AI services would reach $324.7 billion and on AI-optimised servers around $329.5 billion.

Similarly, AI processing semiconductors ($267.9 billion), AI application software ($269.7 billion) and AI infrastructure software ($229.9 billion) will also contribute significantly to AI spending.

The other segments attracting AI spending would be AI PCs (ARM and x86), AI-optimised IaaS, and GenAI models.
Read More........

Blue, green, brown, or something in between – the science of eye colour explained

You’re introduced to someone and your attention catches on their eyes. They might be a rich, earthy brown, a pale blue, or the rare green that shifts with every flicker of light. Eyes have a way of holding us, of sparking recognition or curiosity before a single word is spoken. They are often the first thing we notice about someone, and sometimes the feature we remember most.

Across the world, human eyes span a wide palette. Brown is by far the most common shade, especially in Africa and Asia, while blue is most often seen in northern and eastern Europe. Green is the rarest of all, found in only about 2% of the global population. Hazel eyes add even more diversity, often appearing to shift between green and brown depending on the light.

So, what lies behind these differences?

It’s all in the melanin

The answer rests in the iris, the coloured ring of tissue that surrounds the pupil. Here, a pigment called melanin does most of the work.

Brown eyes contain a high concentration of melanin, which absorbs light and creates their darker appearance. Blue eyes contain very little melanin. Their colour doesn’t come from pigment at all but from the scattering of light within the iris, a physical effect known as the Tyndall effect, a bit like the effect that makes the sky look blue.

In blue eyes, the shorter wavelengths of light (such as blue) are scattered more effectively than longer wavelengths like red or yellow. Due to the low concentration of melanin, less light is absorbed, allowing the scattered blue light to dominate what we perceive. This blue hue results not from pigment but from the way light interacts with the eye’s structure.
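The wavelength dependence can be made concrete with the Rayleigh approximation, in which scattered intensity scales as 1/wavelength⁴. The Tyndall effect in the iris involves larger particles, so treat this as an order-of-magnitude sketch rather than an exact model of the eye:

```python
# Rayleigh-style scattering scales as 1/wavelength^4, so short (blue)
# wavelengths scatter far more strongly than long (red) ones.
# Iris (Tyndall) scattering involves larger particles, so this is
# only an order-of-magnitude illustration.
blue_nm, red_nm = 450, 650

relative_scatter = (red_nm / blue_nm) ** 4
print(f"Blue light (~{blue_nm} nm) scatters roughly {relative_scatter:.1f}x "
      f"more strongly than red (~{red_nm} nm)")
```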

Green eyes result from a balance, a moderate amount of melanin layered with light scattering. Hazel eyes are more complex still. Uneven melanin distribution in the iris creates a mosaic of colour that can shift depending on the surrounding ambient light.

What have genes got to do with it?

The genetics of eye colour is just as fascinating.

For a long time, scientists believed a simple “brown beats blue” model, controlled by a single gene. Research now shows the reality is much more complex. Many genes contribute to determining eye colour. This explains why children in the same family can have dramatically different eye colours, and why two blue-eyed parents can sometimes have a child with green or even light brown eyes.

Eye colour also changes over time. Many babies of European ancestry are born with blue or grey eyes because their melanin levels are still low. As pigment gradually builds up over the first few years of life, those blue eyes may shift to green or brown.

In adulthood, eye colour tends to be more stable, though small changes in appearance are common depending on lighting, clothing, or pupil size. For example, blue-grey eyes can appear very blue, very grey or even a little green depending on ambient light. More permanent shifts are rarer but can occur as people age, or in response to certain medical conditions that affect melanin in the iris.

The real curiosities

Then there are the real curiosities.

Heterochromia, where one eye is a different colour from the other, or one iris contains two distinct colours, is rare but striking. It can be genetic, the result of injury, or linked to specific health conditions. Celebrities such as Kate Bosworth and Mila Kunis are well-known examples. Musician David Bowie’s eyes appeared as different colours because of a permanently dilated pupil after an accident, giving the illusion of heterochromia.

In the end, eye colour is more than just a quirk of genetics and physics. It’s a reminder of how biology and beauty intertwine. Each iris is like a tiny universe, rings of pigment, flecks of gold, or pools of deep brown that catch the light differently every time you look.

Eyes don’t just let us see the world, they also connect us to one another. Whether blue, green, brown, or something in-between, every pair tells a story that’s utterly unique, one of heritage, individuality, and the quiet wonder of being human.

Davinia Beaver, Postdoctoral research fellow, Clem Jones Centre for Regenerative Medicine, Bond University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More........

The science behind a freediver’s 29-minute breath hold world record

Croatian freediver Vitomir Maričić. Facebook.com @molchanovs, Instagram.com @maverick2go, Facebook.com @Vitomir Maričić, CC BY 

Most of us can hold our breath for between 30 and 90 seconds.

A few minutes without oxygen can be fatal, so we have an involuntary reflex to breathe.

But freediver Vitomir Maričić recently held his breath for a new world record of 29 minutes and three seconds, lying on the bottom of a 3-metre-deep pool in Croatia.

Vitomir Maričić set a new Guinness World Record for “the longest breath held voluntarily under water using oxygen”.

This is about five minutes longer than the previous world record set in 2021 by another Croatian freediver, Budimir Šobat.

Interestingly, all world records for breath holds are by freedivers, who are essentially professional breath-holders.
They do extensive physical and mental training to hold their breath under water for long periods of time.

So how do freedivers delay a basic human survival response and how was Maričić able to hold his breath about 60 times longer than most people?

Increased lung volumes and oxygen storage

Freedivers do cardiovascular training – physical activity that increases your heart rate, breathing and overall blood flow for a sustained period – and breathwork to increase how much air (and therefore oxygen) they can store in their lungs.

This includes exercise such as swimming, jogging or cycling, and training their diaphragm, the main muscle of breathing.

Diaphragmatic breathing and cardiovascular exercise train the lungs to expand to a larger volume and hold more air.

This means the lungs can store more oxygen and sustain a longer breath hold.

Freedivers can also control their diaphragm and throat muscles to move the stored oxygen from their lungs to their airways. This maximises oxygen uptake into the blood to travel to other parts of the body.

To increase the oxygen in his lungs even more before his world record breath-hold, Maričić inhaled pure (100%) oxygen for ten minutes.

This gave Maričić a larger store of oxygen than if he breathed normal air, which is only about 21% oxygen.

This is classified as an oxygen-assisted breath-hold by Guinness World Records.

Even without extra pure oxygen, Maričić can hold his breath for 10 minutes and 8 seconds.

Resisting the reflex to take another breath

Oxygen is essential for all our cells to function and survive. But it is high carbon dioxide, not low oxygen, that causes the involuntary reflex to breathe.

When cells use oxygen, they produce carbon dioxide, a damaging waste product.

Carbon dioxide can only be removed from our body by breathing it out.

When we hold our breath, the brain senses the build-up in carbon dioxide and triggers us to breathe again.

Freedivers practice holding their breath to desensitise their brains to high carbon dioxide and eventually low oxygen. This delays the involuntary reflex to breathe again.

When someone holds their breath beyond this, they reach a “physiological break-point”. This is when their diaphragm involuntarily contracts to force a breath.

This is physically challenging and only elite freedivers who have learnt to control their diaphragm can continue to hold their breath past this point.

Indeed, Maričić said that holding his breath longer:

got worse and worse physically, especially for my diaphragm, because of the contractions. But mentally I knew I wasn’t going to give up.

Mental focus and control is essential

Those who freedive believe it is not only physical but also a mental discipline.

Freedivers train to manage fear and anxiety and maintain a calm mental state. They practice relaxation techniques such as meditation, breath awareness and mindfulness.

Interestingly, Maričić said:

after the 20-minute mark, everything became easier, at least mentally.

Reduced mental and physical activity, reflected in a very low heart rate, reduces how much oxygen is needed. This makes the stored oxygen last longer.

That is why Maričić achieved this record lying still on the bottom of a pool.
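A back-of-envelope estimate shows how the stored oxygen can stretch to roughly the recorded durations. The figures below are assumed round values (a packed lung volume of about 10 litres and resting oxygen consumption of about 0.3 L/min), not measurements from Maričić:

```python
# Back-of-envelope estimate with assumed round numbers (not measured
# values for Maričić): a trained freediver's packed lung volume of ~10 L,
# filled with ~100% oxygen versus ordinary air at ~21% oxygen.
lung_volume_l = 10.0
o2_store_pure = lung_volume_l * 1.00    # ~10 L of oxygen
o2_store_air = lung_volume_l * 0.21     # ~2.1 L of oxygen

# Resting oxygen consumption is roughly 0.25-0.30 L/min; deep relaxation
# and a very low heart rate push it lower still.
consumption_l_per_min = 0.30

print(f"pure O2: ~{o2_store_pure / consumption_l_per_min:.0f} min")
print(f"air:     ~{o2_store_air / consumption_l_per_min:.0f} min")
```

The estimate lands near 33 minutes on pure oxygen and about 7 minutes on air, which is in the right ballpark for both the 29-minute assisted record and Maričić's roughly 10-minute unassisted breath-hold.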

Don’t try this at home

Beyond competitive breath-hold sports, many other people train to hold their breath for recreational hunting and gathering.

For example, ama divers who collect pearls in Japan, and Haenyeo divers from South Korea who harvest seafood.

But there are risks of breath holding.

Maričić described his world record as:

a very advanced stunt done after years of professional training and should not be attempted without proper guidance and safety.

Indeed, both high carbon dioxide and a lack of oxygen can quickly lead to loss of consciousness.

Breathing in pure oxygen can cause acute oxygen toxicity due to free radicals, which are highly reactive chemicals that can damage cells.

Unless you’re trained in breath holding, it’s best to leave this to the professionals.

Theresa Larkin, Associate Professor of Medical Sciences, University of Wollongong and Gregory Peoples, Senior Lecturer - Physiology, University of Wollongong

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Read More........

The Third Eye: Moving from Information Age to ‘Age of Intelligence’


New Delhi, (IANS): The success of the Information Technology revolution caused the world's transition from the Industrial Age to the Age of Information, but the advent of Artificial Intelligence (AI) is expediting another transformational shift, from the Information Age to the Age of Intelligence, propelled by the basic fact that ‘all intelligence is information but all information is not intelligence’.

This shift is compelled by the reality that there was no competitive gain from having information that everybody else also had and that it is the ownership of ‘exclusive knowledge’ called Intelligence that gave one advantage over the others.

AI applications are becoming a means of generating and accessing such knowledge, largely through data analytics. Any information of intelligence value has to be ‘reliable’ but also ‘futuristic’, in the sense that it indicates the ‘opportunities’ and ‘risks’ lying ahead and thus opens the pathway to gainful action. To the extent that a system of algorithms can produce ‘insights’ during the analysis of data, it comes closer to bridging the gap between ‘Artificial’ and ‘Human’ intelligence. Fundamentally, however, AI is an ‘assistant’ for, not a substitute for, human intelligence.

Someone rightly said that Artificial Intelligence backed by Large Language Models (LLMs) can become the ultimate repository of human knowledge, but even if it might be able to decide what is ‘factually true or false’, it cannot take on the stewardship of determining what is ‘right or wrong’. This can only be done by the human mind, which is equipped with ‘intuition’ rooted in conscience, piety and a capacity to think for the future.

The power of logic -another singular feature of the human mind- derives from a combination of past experience, the capacity to observe and analyse information and the ability to see things in a ‘cause and effect’ mode. To a limited extent ‘logic’ can be built into the ‘machine learning’ but only in a borrowed way.

Moreover, human conduct is often conditioned by the ‘system of moral values’ followed at the personal level- biases and wishful thinking are often built into any system of morality- and this is yet another area where Artificial Intelligence would not be able to substitute the human mind.

AI essentially works on data in the memory and the Language Models enhance its outreach to demographies and customs bringing it somewhat closer to human behaviour but what stands out in all of this is the fact that AI cannot be freed from the ‘input-output’ principle.

Albert Einstein famously said that ‘imagination is more important than knowledge’. He was not referring to the trait of wildly imagining things that some people might have, but was defining the human capacity to see beyond the data in front and perceive what lies ahead. In a way, he was alluding to the ability of the human mind not to ‘miss the wood for the trees’.

Imagination and human feedback are great assets in both business and personal life, and they mark out human intelligence from machine-led operation. Both are of great help in Customer Relations Management, since they make it possible to personalise that relationship, as well as in Risk Assessment, which no successful enterprise can do without.

It is important to know the difference between ‘intelligence’ that tests the reach of the human mind and ‘machine learning’ that has its own boundaries.

Intelligence by definition is information that gives you an indication of ‘what lies ahead’; Artificial Intelligence is therefore going to derive its importance from its capacity to produce ‘predictive’ readings.

AI has the limitation of being able only to read ‘patterns’ in the data it examines. If the data is about the footprints left behind in the public domain by the ‘adversary’ or the ‘competitor’, data analytics could throw light at least on the ‘modus operandi’ of the opponent and indicate how the latter would possibly move next. There is a partial application of ‘logic’ here, though not of ‘imagination’, which is an exclusive trait of the human mind.

If AI cannot be a substitute for human intelligence the best use of it is in making it an ‘assistant’ for the latter and this is precisely what explains the phenomenal advancement of AI in professional and business fields. A ‘symbiotic relationship’ between the two guarantees a bright future for humanity at large. Data analytics can aim at bringing out trends relating to the business environment, the study of the competitors and the organisation’s internal situation. It can focus on the examination of the specific requirements of a particular business, organisational entity and profession, in a bid to seek a legitimate competitive advantage.

AI is strengthening the ‘knowledge economy’ by helping to evolve new services and products, by making things more efficient through cost-cutting and optimal utilisation of the available workforce, and by generally improving the ‘quality of life’ by encouraging innovation. The constant change of the business scene, because of the shifting paradigms of knowledge, establishes that any AI application will not be a one-time event and will further advance the cause of research and development. The determination of the ‘direction’ of an AI operation, however, will remain with the human mind, and this places a fundamental limitation on AI.

As the field of AI expands, two things are emerging as major concerns: the challenge of establishing the reliability of the data banks used, and the likely use of AI for unethical and criminal objectives. In the age of fake news and misinformation on social media, only verified information must be used for AI applications. Confirming the reliability of data is by itself a task for AI that would create value for business.

India as a matter of policy favours international oversight of AI research in the interest of transparency for safeguarding the general good. The US thinks of AI development purely as an economic instrument and wants to preserve ownership rights in research and innovation.

At the strategic level, AI has the potential to provide new tools of security and intelligence and, in the process, can become a source of threat to the geopolitical stability of the world itself. India has rightly taken the lead in demanding the ethical advancement of AI for the good of humanity and called for a collective approach to minimising the ‘perils’ of AI while promoting its progress for universal causes.

It is instructive to note that the recent joint winners of the Nobel Prize for Physics, John J Hopfield of Princeton University and Geoffrey E Hinton of the University of Toronto, are pioneers in the field of modern ‘machine learning’ research, and both have warned that AI has the potential to cause an ‘apocalypse’ for humanity.
Read More........

AI can help detect early larynx cancer from sound of voice: Study



New Delhi, (IANS): A team of US scientists showed that Artificial Intelligence (AI) can help detect early larynx or voice box cancer from the sound of the patient’s voice.

Cancer of the voice box is an important public health burden. In 2021, there were an estimated 1.1 million cases of laryngeal cancer worldwide, and approximately 100,000 people died from it.

Risk factors include smoking, alcohol abuse, and infection with human papillomavirus.

The prognosis for laryngeal cancer ranges from 35 per cent to 78 per cent survival over five years when treated, depending on the tumour’s stage and its location within the voice box.

Now, researchers from the Oregon Health & Science University showed that abnormalities of the vocal folds can be detected from the sound of the voice using AI.

Such ‘vocal fold lesions’ can be benign, like nodules or polyps, but may also represent the early stages of laryngeal cancer.

These proof-of-principle results open the door for a new application of AI: namely, to recognise the early warning stages of laryngeal cancer from voice recordings, said the team in the paper published in the journal Frontiers in Digital Health.

“Here we show that with this dataset we could use vocal biomarkers to distinguish voices from patients with vocal fold lesions from those without such lesions,” said Dr Phillip Jenkins, postdoctoral fellow in clinical informatics at Oregon.

In the study, Jenkins and team analysed variations in tone, pitch, volume, and clarity in 12,523 voice recordings from 306 participants across North America.

A minority were from patients with known laryngeal cancer, benign vocal fold lesions, or two other conditions of the voice box: spasmodic dysphonia and unilateral vocal fold paralysis.

The researchers focused on differences in a number of acoustic features of the voice: for example, the mean fundamental frequency (pitch); jitter, variation in pitch within speech; shimmer, variation of the amplitude; and the harmonic-to-noise ratio, a measure of the relation between harmonic and noise components of speech.
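These features can be sketched in code. The simplified definitions below (cycle-to-cycle variation as a percentage of the mean, with made-up illustrative measurements) are a sketch only; clinical voice-analysis tools use more refined variants:

```python
import numpy as np

def jitter_pct(periods):
    """Local jitter: mean absolute difference between consecutive cycle
    periods, as a percentage of the mean period (simplified definition)."""
    periods = np.asarray(periods, dtype=float)
    return np.mean(np.abs(np.diff(periods))) / np.mean(periods) * 100

def shimmer_pct(amplitudes):
    """Local shimmer: the same idea applied to per-cycle peak amplitudes."""
    amps = np.asarray(amplitudes, dtype=float)
    return np.mean(np.abs(np.diff(amps))) / np.mean(amps) * 100

# Illustrative cycle-by-cycle measurements (seconds, and arbitrary
# amplitude units) for a roughly 100 Hz voice:
periods = [0.0100, 0.0101, 0.0099, 0.0102, 0.0100]
amps = [0.80, 0.82, 0.79, 0.81, 0.80]
print(f"jitter:  {jitter_pct(periods):.2f}%")
print(f"shimmer: {shimmer_pct(amps):.2f}%")
```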

They found marked differences in the harmonic-to-noise ratio and fundamental frequency between men without any voice disorder, men with benign vocal fold lesions, and men with laryngeal cancer.

They didn’t find any informative acoustic features among women, but it is possible that a larger dataset would reveal such differences.

Variation in the harmonic-to-noise ratio can be helpful to monitor the clinical evolution of vocal fold lesions, and to detect laryngeal cancer at an early stage, at least in men, the researchers said.
Read More........

Scientists Regrow Retina Cells to Tackle Leading Cause of Blindness Using Nanotechnology


Macular degeneration is the leading cause of blindness in developed countries, but regrowing the human cells lost to this condition was the feature of a new successful treatment that took advantage of advances in nanotechnology.

Regrowing the cells of the human retina on a scaffold of synthetic, tissue-like material showed substantial improvements over previously used materials such as cellulose, and the scientists hope they can move on to testing their method in the already blind.

Macular degeneration is increasing in prevalence in the developed world. It’s the leading cause of blindness and is caused by the loss of cells in a key part of the eye called the retina.

Humans have no ability to regrow retinal pigment cells, but scientists have determined how to do it in vitro using pluripotent stem cells. However as the study authors describe, previous examples of this procedure saw scientists growing the cells on flat surfaces rather than one resembling the retinal membrane.

This, they state, limits the effectiveness of transplanted cells.

In a study at the UK’s Nottingham Trent University, biomedical scientist Biola Egbowon and colleagues fabricated 3D scaffolds with polymer nanofibers and coated them with a steroid to reduce inflammation.

The method by which the nanofibers were made was pretty darn cool. The team would squirt polyacrylonitrile and Jeffamine polymers in molten form through an electrical current in a technique known as “electrospinning.” The high voltage caused molecular changes in the polymers that saw them become solid again, resembling a scaffold of tiny fibers that attracted water yet maintained mechanical strength.

After the scaffolding was made, it was treated with an anti-inflammatory steroid.

This pairing of materials, combined with the electrospinning, created a unique scaffold that kept the retinal pigment cells viable for 150 days outside of any potential human patient, all while expressing the biomarkers critical for maintaining retinal physiological characteristics.

“While this may indicate the potential of such cellularized scaffolds in regenerative medicine, it does not address the question of biocompatibility with human tissue,” Egbowon and colleagues caution in their paper, urging more research to be conducted, specifically regarding the orientation of the cells and whether they can maintain a good blood supply.
Read More........

Researchers Test Use of Nuclear Technology to Curb Rhino Poaching in South Africa


In South Africa, scientists and conservationists have developed a novel way of disincentivizing poaching that will allow rhinos to keep hold of their horns.

Previously it was widespread practice to capture and de-horn rhinos to deter poachers from killing them, but the lack of a horn deeply interfered with the animals’ social structures.


Instead, rhinos at a nursery in the northern province of Limpopo have had radioactive isotopes embedded in their horns. The idea is that the radiation given off by these isotopes will trigger detectors at border crossings, flagging anyone who has handled a rhino horn.

It’s a superior form of tracking because even if a conventional tracking device is removed, the radiation remains in the horn, as well as on anything that touches it.

Nuclear researchers at the University of the Witwatersrand’s Radiation and Health Physics Unit in South Africa injected 20 live rhinos with these isotopes.

“We are doing this because it makes it significantly easier to intercept these horns as they are being trafficked over international borders because there is a global network of radiation monitors that have been designed to prevent nuclear terrorism,” Professor James Larkin who heads the project told Africa News. “And we’re piggybacking on the back of that.”

Larkin adds that innovation in poaching prevention is urgently needed, as all existing methods have limitations, and South Africa still loses tens of rhinos every year.

Professor Nithaya Chetty, dean of the science faculty at Witwatersrand, said the dosage of radioactivity is very low and that its potential negative impact on the animals was tested extensively.

While elephants are poached for ivory, a unique material for sculpture and craft, rhino horn is trafficked to criminal groups in Asia who sell it on the false belief that it has therapeutic properties.
Read More........

16-year-old Wins $75,000 for Her Award-Winning Discovery That Could Help Revolutionize Biomedical Implants

Grace Sun, credit – Society for Science

First prize in the USA’s largest and most prestigious science fair has gone to a 16-year-old girl who found new ways to optimize the components of biomedical implants, promising a future of safer, faster, and longer-lasting versions of these critical devices.

It’s not the work of science fiction; bioelectronic implants like the pacemaker have been around for decades, but they suffer from compatibility issues when interfacing with the human body.

On Friday, Grace Sun from Lexington, Kentucky, pocketed $75,000 and was recognized among nearly 2,000 of the nation’s and the world’s top STEM students as having produced the “number one project.”

The award was given through the Society for Science’s Regeneron International Science and Engineering Fair, one of the largest and most prestigious in the world.

Sun’s work focused on improving the capabilities of organic electrochemical transistors, or OECTs, which, unlike conventional silicon devices, are soft and flexible and present the possibility of more complex implants for use in the brain or the heart.

“They have performance issues right now,” Sun told Business Insider of the devices. “They have instability in the body. You don’t want some sort of implanted bioelectronic to degrade in your body.”

Sensitive OECTs could detect proteins or nucleic acids in sweat, blood, or other bodily fluids that correspond to diseases in their earliest stages. They could replace more invasive implants like the aforementioned pacemaker, and offer unprecedented ways to track biomarkers such as blood glucose, circulating white blood cell count, or blood-alcohol content, which could be useful for people with autoimmune conditions, epilepsy, or diabetes.

“This was our number one project, without a shadow of a doubt,” Ian Jandrell, a judging co-chair for the materials science category at ISEF, told Business Insider about Sun’s research.

“It was crystal clear that that room was convinced that this was a significant project and worthy of consideration for a very top award because of the contribution that was made.”

Sun says she is looking to develop the OECTs further, hoping to start a business in the not-too-distant future as a means of getting them out into the world and impacting real people as fast as possible.
Read More........