This Bracelet from Meta Translates Hand Movements into Computer Actions

Meta’s Neuromotor Interface – credit, Reality Labs, via Springer Press

Engineers at Facebook's parent company have introduced a very sci-fi invention that translates hand gestures into computer actions.

This includes fine motor movements like dotting a lowercase i, and the interface is particularly good at translating handwriting into computer text.

Designed inside Meta’s Reality Labs, it’s one of the first major offerings from the in-house moonshot department since the collapse of the company’s “Metaverse” concept, which was once expected to “define the future of social connection,” according to CEO Mark Zuckerberg, who renamed his company in its honor.

The Metaverse ended up being less of a future-defining technology and more like a damp squib, with the Reality Labs division of Meta losing $14 billion in 2022 and $15 billion in 2023.

Reality Labs was on the chopping block during Meta’s “Year of Efficiency,” with perhaps as many as 10,000 layoffs preceding a shift toward what almost anyone would admit is a more exciting and marketable line of business: stuff that looks like it’s from Star Trek.


The device can translate the electrical signals generated by muscle movements at the wrist into computer commands without the need for personalized calibration or invasive procedures. The bracelet slips on and off as easily as, well, a bracelet.

Patrick Kaifosh and Thomas Reardon, the engineers who oversaw its development, then used deep learning to create generic decoding models that accurately interpret muscle movements across different people without needing individual calibration. The more participants who used it, the more accurate the deep learning decoding model became.

However, accuracy and performance were then further increased with personalization, offering a recipe for building high-performance biosignal decoders for many applications.
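To make that recipe concrete, here is a minimal sketch in Python (using scikit-learn) of training a generic decoder on pooled data from many users and then personalizing it with a little data from a new one. Everything in it is invented for illustration: the simulated sEMG features, the tiny network and the two-gesture labels. Meta's actual decoders are far larger deep networks trained on recordings from thousands of participants.

```python
# Illustrative only: a generic gesture decoder trained on pooled (simulated)
# sEMG features, then personalized on a new user. All shapes/values invented.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def fake_emg_windows(n, n_channels=16, user_shift=0.0):
    """Simulate featurised sEMG windows for two gestures (labels 0/1)."""
    y = rng.integers(0, 2, n)
    X = y[:, None] * 0.5 + user_shift + rng.normal(size=(n, n_channels))
    return X, y

# 1. Generic model: pool data across many simulated users.
X_pool, y_pool = [], []
for shift in rng.normal(scale=0.3, size=20):          # 20 "users"
    X, y = fake_emg_windows(200, user_shift=shift)
    X_pool.append(X)
    y_pool.append(y)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=300, random_state=0)
clf.fit(np.vstack(X_pool), np.concatenate(y_pool))

# 2. Personalization: continue training on a little data from a new user.
X_new, y_new = fake_emg_windows(50, user_shift=0.8)
for _ in range(20):
    clf.partial_fit(X_new, y_new)

# 3. Evaluate the personalized model on more data from that same user.
X_test, y_test = fake_emg_windows(500, user_shift=0.8)
print("accuracy after personalization:", clf.score(X_test, y_test))
```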

The bracelet works over a Bluetooth connection, and among the various tasks it proved capable of carrying out, it translated human handwriting movements into text at a speed of 20.9 words per minute, around 16 fewer than the average mobile phone user’s typing speed.

As to exactly who benefits most from the device, a variety of disabilities and paralysis situations immediately come to mind, as well as the obvious benefits for below-the-elbow amputees, or someone using multiple computers and/or monitors at the same time.

Australian researchers use a quantum computer to simulate how real molecules behave

When a molecule absorbs light, it undergoes a whirlwind of quantum-mechanical transformations. Electrons jump between energy levels, atoms vibrate, and chemical bonds shift — all within millionths of a billionth of a second.

These processes underpin everything from photosynthesis in plants and DNA damage from sunlight, to the operation of solar cells and light-powered cancer therapies.

Yet despite their importance, chemical processes driven by light are difficult to simulate accurately. Traditional computers struggle, because it takes vast computational power to simulate this quantum behaviour.

Quantum computers, by contrast, are themselves quantum systems — so quantum behaviour comes naturally. This makes quantum computers natural candidates for simulating chemistry.

Until now, quantum devices have only been able to calculate unchanging things, such as the energies of molecules. Our study, published this week in the Journal of the American Chemical Society, demonstrates we can also model how those molecules change over time.

We experimentally simulated how specific real molecules behave after absorbing light.

Simulating reality with a single ion

We used what is called a trapped-ion quantum computer. This works by manipulating individual atoms in a vacuum chamber, held in place with electromagnetic fields.

Normally, quantum computers store information using quantum bits, or qubits. However, to simulate the behaviour of the molecules, we also used vibrations of the atoms in the computer called “bosonic modes”.

This technique is called mixed qudit-boson simulation. It dramatically reduces how big a quantum computer you need to simulate a molecule.

We simulated the behaviour of three molecules absorbing light: allene, butatriene, and pyrazine. Each molecule features complex electronic and vibrational interactions after absorbing light, making them ideal test cases.

Our simulation, which used a laser and a single atom in the quantum computer, slowed these processes down by a factor of 100 billion. In the real world, the interactions take femtoseconds, but our simulation of them played out in milliseconds – slow enough for us to see what happened.

A million times more efficient

What makes our experiment particularly significant is the size of the quantum computer we used.

Performing the same simulation with a traditional quantum computer (without using bosonic modes) would require 11 qubits and roughly 300,000 error-free “entangling” operations. This is well beyond the reach of current technology.

By contrast, our approach accomplished the task by zapping a single trapped ion with a single laser pulse. We estimate our method is at least a million times more resource-efficient than standard quantum approaches.

We also simulated “open-system” dynamics, where the molecule interacts with its environment. This is typically a much harder problem for classical computers.

By injecting controlled noise into the ion’s environment, we replicated how real molecules lose energy. This showed environmental complexity can also be captured by quantum simulation.
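For readers who want to poke at the underlying physics on a classical machine, here is a small sketch using the QuTiP library: a two-level "electronic" system coupled to one vibrational mode (a crude stand-in for the vibronic models simulated in the paper), evolved once as a closed system and once with a dissipative term playing the role of the environment. All parameters are invented; this is a toy illustration of the physics, not the trapped-ion experiment itself.

```python
# Toy vibronic model: one "electronic" qubit coupled to one vibrational mode.
# Closed evolution vs. open (Lindblad damping on the vibration). Illustrative
# parameters only; requires numpy and qutip.
import numpy as np
import qutip as qt

N = 10                                   # Fock-space cutoff for the vibration
sz = qt.tensor(qt.sigmaz(), qt.qeye(N))
sx = qt.tensor(qt.sigmax(), qt.qeye(N))
a  = qt.tensor(qt.qeye(2), qt.destroy(N))

gap, w_vib, coupling = 0.5, 1.0, 0.3     # electronic gap, mode freq, vibronic coupling
H = gap * sz + w_vib * a.dag() * a + coupling * sx * (a + a.dag())

psi0 = qt.tensor(qt.basis(2, 0), qt.basis(N, 0))   # state just after absorbing light
times = np.linspace(0.0, 50.0, 500)

closed = qt.sesolve(H, psi0, times, e_ops=[sz])
damped = qt.mesolve(H, psi0, times, c_ops=[np.sqrt(0.05) * a], e_ops=[sz])
print(closed.expect[0][-1], damped.expect[0][-1])  # electronic population proxies
```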

What’s next?

This work is an important step forward for quantum chemistry. Even though current quantum computers are still limited in scale, our methods show that small, well-designed experiments can already tackle problems of real scientific interest.

Simulating the real-world behaviour of atoms and molecules is a key goal of quantum chemistry. It will make it easier to understand the properties of different materials, and may accelerate breakthroughs in medicine, materials and energy.

We believe that with a modest increase in scale — to perhaps 20 or 30 ions — quantum simulations could tackle chemical systems too complex for any classical supercomputer. That would open the door to rapid advances in drug development, clean energy, and our fundamental understanding of chemical processes that drive life itself. The Conversation

Ivan Kassal, Professor of Chemical Physics, University of Sydney and Tingrei Tan, Research Fellow, Quantum Control Laboratory, University of Sydney

This article is republished from The Conversation under a Creative Commons license. Read the original article.


2025 will see huge advances in quantum computing. So what is a quantum chip and how does it work?

In recent years, the field of quantum computing has been experiencing fast growth, with technological advances and large-scale investments regularly making the news.

The United Nations has designated 2025 as the International Year of Quantum Science and Technology.

The stakes are high – having quantum computers would mean access to tremendous data processing power compared to what we have today. They won’t replace your normal computer, but having this kind of awesome computing power will provide advances in medicine, chemistry, materials science and other fields.

So it’s no surprise that quantum computing is rapidly becoming a global race, and private industry and governments around the world are rushing to build the world’s first full-scale quantum computer. To achieve this, first we need to have stable and scalable quantum processors, or chips.

What is a quantum chip?

Everyday computers – like your laptop – are classical computers. They store and process information in the form of binary numbers or bits. A single bit can represent either 0 or 1.

By contrast, the basic unit of a quantum chip is a qubit. A quantum chip is made up of many qubits. These are typically subatomic particles such as electrons or photons, controlled and manipulated by specially designed electric and magnetic fields (known as control signals).

Unlike a bit, a qubit can be placed in a state of 0, 1, or a combination of both, also known as a “superposition state”. This distinct property allows quantum processors to store and process extremely large data sets exponentially faster than even the most powerful classical computer.
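That "combination of both" has a simple numerical picture: a qubit's state is a pair of complex amplitudes, and the squared magnitude of each amplitude is the probability of measuring 0 or 1. A few lines of Python make it concrete:

```python
# A single-qubit state is a 2-vector of complex amplitudes; measurement
# probabilities are the squared magnitudes of those amplitudes.
import numpy as np

zero = np.array([1, 0], dtype=complex)        # |0>
one  = np.array([0, 1], dtype=complex)        # |1>
plus = (zero + one) / np.sqrt(2)              # equal superposition of 0 and 1

probs = np.abs(plus) ** 2
print(probs)    # [0.5 0.5]: measuring yields 0 or 1 with equal probability
```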

There are different ways to make qubits – one can use superconducting devices, semiconductors, photonics (light) or other approaches. Each method has its advantages and drawbacks.

Companies like IBM, Google and QuEra all have roadmaps to drastically scale up quantum processors by 2030.

Industry players using semiconductors include Intel and Australian companies such as Diraq and SQC. Key photonic quantum computer developers include PsiQuantum and Xanadu.

Qubits: quality versus quantity

How many qubits a quantum chip has is actually less important than the quality of the qubits.

A quantum chip made up of thousands of low-quality qubits will be unable to perform any useful computational task.

So, what makes for a quality qubit?

Qubits are very sensitive to unwanted disturbances, also known as errors or noise. This noise can come from many sources, including imperfections in the manufacturing process, control signal issues, changes in temperature, or even just an interaction with the qubit’s environment.

Being prone to errors reduces a qubit’s reliability, a property known as fidelity. For a quantum chip to stay stable long enough to perform complex computational tasks, it needs high-fidelity qubits.

When researchers compare the performance of different quantum chips, qubit fidelity is one of the crucial parameters they use.

How do we correct the errors?

Fortunately, we don’t have to build perfect qubits.

Over the last 30 years, researchers have designed theoretical techniques which use many imperfect or low-fidelity qubits to encode an abstract “logical qubit”. A logical qubit is protected from errors and, therefore, has very high fidelity. A useful quantum processor will be based on many logical qubits.
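The redundancy idea is easy to demonstrate with a classical toy. The sketch below encodes one logical bit into three noisy physical bits and decodes by majority vote, driving the logical error rate from p down to roughly 3p². Real quantum codes are much subtler (they must also protect phase information without directly reading the data), but the principle of many unreliable parts making one reliable whole is the same:

```python
# Classical repetition code: a toy analogue of quantum error correction.
import random

def encode(bit):
    return [bit, bit, bit]                 # one logical bit -> three physical bits

def add_noise(bits, p):
    return [b ^ (random.random() < p) for b in bits]   # flip each bit with prob p

def decode(bits):
    return int(sum(bits) >= 2)             # majority vote

p, trials = 0.05, 100_000
failures = sum(decode(add_noise(encode(0), p)) != 0 for _ in range(trials))
print("physical error rate:", p)
print("logical error rate: ", failures / trials)   # ~3*p**2 + O(p**3), far below p
```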

Nearly all major quantum chip developers are now putting these theories into practice, shifting their focus from qubits to logical qubits.

In 2024, many quantum computing researchers and companies made great progress on quantum error correction, including Google, QuEra, IBM and CSIRO.

Quantum chips consisting of over 100 qubits are already available. Many researchers around the world are using them to evaluate how good the current generation of quantum computers is and how it can be improved in future generations.

For now, developers have only made single logical qubits. It will likely take a few years to figure out how to put several logical qubits together into a quantum chip that can work coherently and solve complex real-world problems.

What will quantum computers be useful for?

A fully functional quantum processor would be able to solve extremely complex problems. This could lead to revolutionary impact in many areas of research, technology and economy.

Quantum computers could help us discover new medicines and advance medical research by finding new connections in clinical trial data or genetics that current computers don’t have enough processing power for.

They could also greatly improve the safety of various systems that use artificial intelligence algorithms, such as banking, military targeting and autonomous vehicles, to name a few.

To achieve all this, we first need to reach a milestone known as quantum supremacy – where a quantum processor solves a problem that would take a classical computer an impractical amount of time to do.

Late last year, Google’s quantum chip Willow finally demonstrated quantum supremacy for a contrived task – a computational problem designed to be hard for classical supercomputers but easy for quantum processors due to their distinct way of working.

Although it didn’t solve a useful real-world problem, it’s still a remarkable achievement and an important step in the right direction that’s taken years of research and development. After all, to run, one must first learn to walk.

What’s on the horizon for 2025 and beyond?

In the next few years, quantum chips will continue to scale up. Importantly, the next generation of quantum processors will be underpinned by logical qubits, able to tackle increasingly useful tasks.

While quantum hardware (that is, processors) has been progressing at a rapid pace, we also can’t overlook an enormous amount of research and development in the field of quantum software and algorithms.

Using quantum simulations on normal computers, researchers have been developing and testing various quantum algorithms. This will make quantum computing ready for useful applications when the quantum hardware catches up.

Building a full-scale quantum computer is a daunting task. It will require simultaneous advancements on many fronts, such as scaling up the number of qubits on a chip, improving the fidelity of the qubits, better error correction, quantum software, quantum algorithms, and several other sub-fields of quantum computing.

After years of remarkable foundational work, we can expect 2025 to bring new breakthroughs in all of the above. The Conversation

Muhammad Usman, Head of Quantum Systems and Principal Research Scientist, CSIRO

This article is republished from The Conversation under a Creative Commons license. Read the original article.


DRDO’s young scientists complete testing of 6-qubit quantum processor


New Delhi, (IANS): Scientists from DRDO's Young Scientists Laboratory for Quantum Technologies (DYSL-QT) have completed end-to-end testing of a 6-qubit quantum processor, the Ministry of Defence said.

“The project executed at TIFR Mumbai’s Colaba campus is a three-way collaboration between DYSL-QT, TIFR and Tata Consultancy Services (TCS). The DYSL-QT scientists put together the control and measurement apparatus using a combination of commercial off-the-shelf electronics and custom-programmed development boards,” the ministry said.

It added that the qubits were designed and fabricated at TIFR, and that the quantum processor architecture is based on a novel ring-resonator design invented at TIFR. The cloud-based interface to the quantum hardware was developed by TCS.

“The scientists are now working on optimising various aspects of the system performance before it becomes ready for operation,” the ministry said.

The ministry added that plans are underway to provide wider access to this system for education and research, and eventually as a test bed for evaluating superconducting quantum devices.

“The next development target is to scale up the number of qubits and assess the scaling trends to technology challenges, development effort/time and monetary resources required for development, operations and commercialisation of various sizes of quantum computers,” it added. The ministry said that this will involve a holistic view from quantum theory to engineering to business feasibility.

Musk plans largest-ever supercomputer for xAI startup: report

CALIFORNIA - Billionaire tech mogul Elon Musk has told investors he plans to build a supercomputer dubbed "gigafactory of compute" to support the development of his artificial intelligence startup xAI, an industry news outlet reported on Saturday.

Musk wants the supercomputer -- which will string together 100,000 Nvidia chips -- operational by fall 2025, and "will hold himself personally responsible for delivering it on time," The Information said.

The planned supercomputer would be "at least four times the size of the biggest GPU clusters that exist today," such as those used by Meta to train its AI models, Musk was quoted as saying during a presentation to investors this month.


Since OpenAI's generative AI tool ChatGPT exploded on the scene in 2022, the technology has been an area of fierce competition between tech giants Microsoft and Google, as well as Meta and start-ups like Anthropic and Stability AI.

Musk is one of the world's few investors with deep enough pockets to compete with OpenAI, Google or Meta on AI.

xAI is developing a chatbot named Grok, which can access social media platform X, the former Twitter which is also owned by Musk, in real time.

Musk cofounded OpenAI in 2015 but left in 2018, later saying he was uncomfortable with the profit-driven direction the company was taking under the stewardship of CEO Sam Altman.

He filed a lawsuit against the company in March, accusing it of breaking its original non-profit mission to make AI research available to all. OpenAI argues that Musk's lawsuit, as well as his embrace of open-source development, is little more than a case of sour grapes after leaving the company.

UK’s fastest supercomputer switched on


The computer is officially the most sustainable supercomputer in the UK

The UK’s fastest and most powerful supercomputer, known as Isambard-AI, has been switched on at the University of Bristol this week.

The Isambard-AI supercomputer was first announced by the government last March alongside a £225 million investment. The facility has been built by Hewlett Packard Enterprise and contains over 5,000 NVIDIA superchips, allowing it to complete 200 quadrillion calculations per second.

Officially named the AI Research Resource (AIRR), the facility is ten times more powerful than the previous leading supercomputer in the UK. Researchers will use the facility to support critical work on the development of AI technology, working closely alongside the UK’s AI Safety Institute.

The facility includes thousands of graphics processing units (GPUs) and will be used to train large language models. Additional focus areas include climate research and accelerating automated drug discovery.

“The Isambard-AI cluster will be one of the most powerful supercomputers in Europe, and will help industry experts and researchers harness the game-changing potential of AI, including through the mission-critical work of our Frontier AI Taskforce,” said Science, Innovation and Technology Secretary Michelle Donelan in a press release.

“This will equip the UK with the means to drive the next wave of scientific breakthroughs and positions Bristol as a vital cog in global technological discovery that will improve people’s lives,” added University of Bristol Vice-Chancellor and President Professor Evelyn Welch.

The UK is on a mission to become a global AI superpower. Last October, the government announced that taxpayer spending on AI chips and supercomputers is set to increase to £400 million. Additionally, in this year’s Spring Budget, chancellor Jeremy Hunt pledged £100 million in funding to The Alan Turing Institute, the UK’s national institute for data science and AI.

IBM developing world's smallest computer

Credit: IBM Research
Most people are familiar with Moore's Law, but few have heard of Bell's Law – a related phenomenon coined by U.S. engineer Gordon Bell. This describes how a new class of computing devices tends to emerge about every decade or so, each 100 times smaller than the last. The shrinking volume of machines becomes obvious when you look back at the history of technology.

The 1960s, for example, were characterised by large mainframes that often filled entire rooms. The 1970s saw the adoption of "minicomputers" that were cheaper and smaller. Personal computing emerged in the early 1980s and laptops became popular in the 1990s. This was followed by mobile phones from the 2000s onwards, which themselves became ever thinner and more compact with each passing year, along with tablets and e-readers. More recently there has been rapid growth in wireless sensor networks that is giving birth to the Internet of Things (IoT).

The new computer announced by IBM is just 1mm x 1mm across, making it the smallest machine of its kind to ever be developed. It will feature as many as a million transistors, a solar cell and communications module. The company predicts these devices will be in widespread use within five years, embedded in all manner of everyday objects. So-called "cryptographic anchors" and blockchain technology will ensure a product's authenticity – from its point of origin to the hands of the customer. These high-tech, miniature watermarks will (for example) verify that products have originated from the factory the distributor claims they are from, and are not counterfeits mixed in with genuine items.

In some countries, nearly 70 percent of certain life-saving pharmaceuticals are counterfeit and the overall cost of fraud to the global economy is more than $600bn every year. This new generation of tiny computers will monitor, analyse, communicate and even act on data.

"These [crypto-anchor] technologies pave the way for new solutions that tackle food safety, authenticity of manufactured components, genetically modified products, identification of counterfeit objects and provenance of luxury goods," says IBM research chief, Arvind Krishna.

Looking further into the future – if Bell's Law continues – devices are likely to be small enough to fit inside blood cells within a few decades. The potential applications then will become like science fiction: could we see a merger between humans and machines?

Source: https://www.futuretimeline.net/

Computer program learns to replicate human handwriting

Researchers at University College London have devised a software algorithm able to scan and replicate almost anyone's handwriting
In a world increasingly dominated by the QWERTY keyboard, computer scientists at University College London (UCL) have developed software which may spark the comeback of the handwritten word, by analysing the handwriting of any individual and accurately replicating it.

The scientists have created "My Text in Your Handwriting" – a programme which semi-automatically examines a sample of a person's handwriting that can be as little as one paragraph, and generates new text saying whatever the user wishes, as if the author had handwritten it themselves.

"Our software has lots of valuable applications," says lead author, Dr Tom Haines. "Stroke victims, for example, may be able to formulate letters without the concern of illegibility, or someone sending flowers as a gift could include a handwritten note without even going into the florist. It could also be used in comic books where a piece of handwritten text can be translated into different languages without losing the author's original style."

Published in ACM Transactions on Graphics, the machine learning algorithm is built around glyphs – a specific instance of a character. Authors produce different glyphs to represent the same element of writing – the way one individual writes an "a" will usually be different to the way others write an "a". Although an individual's writing has slight variations, every author has a recognisable style that manifests in their glyphs and spacing. The software learns what is consistent across an individual's style and reproduces this.

To generate an individual's handwriting, the software analyses and replicates the author's specific character choices, pen-line texture, colour and the inter-character ligatures (the joining-up between letters), as well as vertical and horizontal spacing.

Co-author Dr Oisin Mac Aodha (UCL Computer Science) said: "Up until now, the only way to produce computer-generated text that resembles a specific person's handwriting would be to use a relevant font. The problem with such fonts is that it is often clear that the text has not been penned by hand, which loses the character and personal touch of a handwritten piece of text. What we've developed removes this problem and so could be used in a wide variety of commercial and personal circumstances."

The system is flexible enough that samples from historical documents can be used with little extra effort. Thus far, the scientists have analysed and replicated the handwriting of such figures as Abraham Lincoln, Frida Kahlo and Arthur Conan Doyle. Infamously, Conan Doyle never actually wrote Sherlock Holmes as saying, "Elementary my dear Watson", but the team have produced evidence to make you think otherwise.

To test the effectiveness of their software, the research team asked people to distinguish between handwritten envelopes and ones created by their automatic software. People were tricked by the computer-generated writing up to 40% of the time. Given how convincing it can be, some may believe this method could help in forging documents – but the team explained it works both ways and could actually help in detecting forgeries.

"Forgery and forensic handwriting analysis are still almost entirely manual processes – but by taking the novel approach of viewing handwriting as texture-synthesis, we can use our software to characterise handwriting to quantify the odds that something was forged," explained Dr Gabriel Brostow, senior author. "For example, we could calculate what ratio of people start their 'o's' at the bottom versus the top and this kind of detailed analysis could reduce the forensics service's reliance on heuristics."
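The glyph idea described above can be caricatured in a few lines. The sketch below, with entirely invented data, shows only the selection step (picking one observed variant per character); the actual system also models ligatures, spacing, pen-line texture and colour, as the article describes:

```python
# Toy glyph selection: each character maps to several observed variants,
# and synthesis samples one per character. Data here is invented.
import random

glyphs = {
    "a": ["a-variant-1", "a-variant-2", "a-variant-3"],  # stand-ins for stroke data
    "c": ["c-variant-1", "c-variant-2"],
    "t": ["t-variant-1", "t-variant-2", "t-variant-3"],
}

def synthesize(text):
    return [random.choice(glyphs[ch]) for ch in text if ch in glyphs]

print(synthesize("cat"))   # e.g. ['c-variant-2', 'a-variant-1', 't-variant-3']
```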

A New Reality Materializing: Humans Can Be the New Supercomputer

Illustration: Colourbox
Today, people of all backgrounds can contribute to solving serious scientific problems by playing computer games. A Danish research group has extended the limits of quantum physics calculations and simultaneously blurred the boundaries between man and machine. The Danish research team, CODER, has found that the human brain can beat the calculating powers of a computer when it comes to solving quantum problems.

The saying of philosopher René Descartes about what makes humans unique is beginning to sound hollow. 'I think -- therefore soon I am obsolete' seems more appropriate. When a computer routinely beats us at chess and we can barely navigate without the help of a GPS, have we outlived our place in the world? Not quite. Welcome to the front line of research in cognitive skills, quantum computers and gaming.

Today there is an on-going battle between man and machine. While genuine machine consciousness is still years into the future, we are beginning to see computers make choices that previously demanded a human's input. Recently, the world held its breath as Google's algorithm AlphaGo beat a professional player in the game Go -- an achievement demonstrating the explosive speed of development in machine capabilities.

A screenshot of one of the many games that are available. In this case the task is to shoot spiders in the "Quantum-Shooter", but there are many other kinds of games.
Credit: CODER/AU

But we are not beaten yet -- human skills are still superior in some areas. This is one of the conclusions of a recent study by Danish physicist Jacob Sherson, published in the prestigious science journal Nature. "It may sound dramatic, but we are currently in a race with technology -- and steadily being overtaken in many areas. Features that used to be uniquely human are fully captured by contemporary algorithms. Our results are here to demonstrate that there is still a difference between the abilities of a man and a machine," explains Jacob Sherson.

What are quantum computers, and how does playing games help physicists in cutting-edge research? Get a few answers in this video about ScienceAtHome.

At the interface between quantum physics and computer games, Sherson and his research group at Aarhus University have identified one of the abilities that still makes us unique compared to a computer's enormous processing power: our skill in approaching problems heuristically and solving them intuitively. The discovery was made at the AU Ideas Centre CODER, where an interdisciplinary team of researchers work to transfer some human traits to the way computer algorithms work.

Quantum physics holds the promise of immense technological advances in areas ranging from computing to high-precision measurements. However, the problems that need to be solved to get there are so complex that even the most powerful supercomputers struggle with them. This is where the core idea behind CODER -- combining the processing power of computers with human ingenuity -- becomes clear.

Our common intuition: Like Columbus in QuantumLand, the CODER research group mapped out how the human brain is able to make decisions based on intuition and accumulated experience. This is done using the online game "Quantum Moves". Over 10,000 people have played the game, which allows everyone to contribute to basic research in quantum physics. "The map we created gives us insight into the strategies formed by the human brain. We behave intuitively when we need to solve an unknown problem, whereas for a computer this is incomprehensible. A computer churns through enormous amounts of information, but we can choose not to do this by basing our decision on experience or intuition. It is these intuitive insights that we discovered by analysing the Quantum Moves player solutions," explains Jacob Sherson.

This is how the "Mind Atlas" looks. Based on 500,000 completed games, the group has been able to visualize our ability to solve problems. Each peak on the 'map' represents a good idea, and the areas with the most peaks - marked by red rings - are where human intuition has hit a solution. A computer can then learn to focus on these areas, and in that way 'learn' about the cognitive functions of a human.
Credit: CODER/AU

The laws of quantum physics dictate an upper speed limit for data manipulation, which in turn sets the ultimate limit to the processing power of quantum computers -- the Quantum Speed Limit. Until now a computer algorithm has been used to identify this limit. It turns out that with human input researchers can find much better solutions than the algorithm. "The players solve a very complex problem by creating simple strategies. Where a computer goes through all available options, players automatically search for a solution that intuitively feels right. Through our analysis we found that there are common features in the players' solutions, providing a glimpse into the shared intuition of humanity. If we can teach computers to recognise these good solutions, calculations will be much faster. In a sense we are downloading our common intuition to the computer," says Jacob Sherson.

And it works. The group has shown that we can break the Quantum Speed Limit by combining the cerebral cortex and computer chips. This is the new powerful tool in the development of quantum computers and other quantum technologies.

We are the new supercomputer: Science is often perceived as something distant and exclusive, conducted behind closed doors. To enter you have to go through years of education, and preferably have a doctorate or two. Now a completely different reality is materializing. In recent years, a new phenomenon has appeared -- citizen science breaks down the walls of the laboratory and invites in everyone who wants to contribute. The team at Aarhus University uses games to engage people in voluntary science research. Every week people around the world spend 3 billion hours playing games. Games are entering almost all areas of our daily life and have the potential to become an invaluable resource for science. "Who needs a supercomputer if we can access even a small fraction of this computing power? By turning science into games, anyone can do research in quantum physics. We have shown that games break down the barriers between quantum physicists and people of all backgrounds, providing phenomenal insights into state-of-the-art research. Our project combines the best of both worlds and helps challenge established paradigms in computational research," explains Jacob Sherson.

The difference between the machine and us, figuratively speaking, is that we intuitively reach for the needle in a haystack without knowing exactly where it is. We 'guess' based on experience and thereby skip a whole series of bad options. For Quantum Moves, intuitive human actions have been shown to be compatible with the best computer solutions. In the future it will be exciting to explore many other problems with the aid of human intuition. "We are at the borderline of what we as humans can understand when faced with the problems of quantum physics. With the problem underlying Quantum Moves we give the computer every chance to beat us. Yet, over and over again we see that players are more efficient than machines at solving the problem. While Hollywood blockbusters on artificial intelligence are starting to seem increasingly realistic, our results demonstrate that the comparison between man and machine still sometimes favours us. We are very far from computers with human-type cognition," says Jacob Sherson, and continues: "Our work is first and foremost a big step towards the understanding of quantum physical challenges. We do not know if this can be transferred to other challenging problems, but it is definitely something that we will work hard to resolve in the coming years."
  • Contacts and sources: Jacob Sherson, Aarhus University
  • Citation: "Exploring the quantum speed limit with computer games". Authors: Jens Jakob W. H. Sørensen, Mads Kock Pedersen, Michael Munch, Pinja Haikka, Jesper Halkjær Jensen, Tilo Planke, Morten Ginnerup Andreasen, Miroslav Gajdacz, Klaus Mølmer, Andreas Lieberoth & Jacob F. Sherson. Nature 532, 210–213 (14 April 2016). doi:10.1038/nature17620, http://dx.doi.org/10.1038/nature17620. Source: http://www.ineffableisland.com/

Fastest ever brain-computer interface for spelling

Researchers in China have achieved high-speed spelling with a noninvasive brain-computer interface.
Brain–computer interfaces (BCI) are a relatively new and emerging technology allowing direct communication between the brain and an external device. They are used for assisting, augmenting, or repairing cognitive or sensory-motor functions. Research on BCIs began in the 1970s and the first neuroprosthetic devices implanted in humans appeared in the mid-1990s. The past 20 years have seen major progress in BCIs. However, they are still limited by low communication rates, caused by interference from spontaneous electroencephalography (EEG) signals.

Now, a team of researchers from Tsinghua University in China, the State Key Laboratory of Integrated Optoelectronics, the Institute of Semiconductors (IOS), and the Chinese Academy of Sciences have developed a greatly improved system. Their EEG-based BCI speller can achieve information transfer rates (ITRs) of 60 characters (∼12 words) per minute, by far the highest ever reported in BCI spellers for either noninvasive or invasive methods. In some of the tests, they reached up to 5.32 bits per second. For comparison, most other systems in recent years have managed only around 1 or 2 bits per second.

According to the researchers, they achieved this via an extremely high consistency of frequency and phase between the visual flickering signals and the elicited single-trial steady-state visual evoked potentials. Specifically, they developed a new joint frequency-phase modulation (JFPM) method to tag 40 characters with 0.5-seconds-long flickering signals, and created a user-specific target identification algorithm using individual calibration data, as the sketch below illustrates.
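Here is a rough sketch of that tagging-and-matching idea in Python. Each of 40 targets gets its own flicker frequency and phase, and a noisy simulated response is identified by correlating it against the 40 templates. The frequency range is in the ballpark reported for SSVEP spellers, but every number here is illustrative, and the sketch omits the real system's EEG filtering and user-specific calibration:

```python
# Toy frequency-phase tagging and decoding for a 40-target SSVEP speller.
# All parameters are illustrative.
import numpy as np

fs, duration = 250, 0.5                       # sample rate (Hz), 0.5 s flicker
t = np.arange(int(fs * duration)) / fs
freqs  = 8.0 + 0.2 * np.arange(40)            # 8.0 .. 15.8 Hz in 0.2 Hz steps
phases = (0.5 * np.pi * np.arange(40)) % (2 * np.pi)
templates = np.sin(2 * np.pi * freqs[:, None] * t + phases[:, None])

rng = np.random.default_rng(1)
eeg = templates[17] + 0.8 * rng.normal(size=t.size)   # noisy response to target 17

scores = templates @ eeg                      # match by inner product
print("decoded target:", int(np.argmax(scores)))      # recovers 17
```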
A paper describing this breakthrough appears in the 3rd November edition of the journal Proceedings of the National Academy of Sciences (PNAS). In the not-too-distant future, this kind of technology could be applied to other uses besides medicine. For example, it could be incorporated into smartphones and other consumer electronics to allow texting, typing or other on-screen actions by thought power alone. A partnership between the Japanese government and private sector aims to achieve this by 2020. With continued progress in the speed of BCIs, a new form of "virtual telepathy" could emerge. Source: Article

Research helping build computers from DNA

Scientists have found a way to "switch" the structure of DNA using copper salts and EDTA (Ethylenediaminetetraacetic acid) -- an agent commonly found in shampoo and other household products. IMAGE: Credit: University of East Anglia
New research from the University of East Anglia could one day help build computers from DNA.

Scientists have found a way to 'switch' the structure of DNA using copper salts and EDTA (Ethylenediaminetetraacetic acid) - an agent commonly found in shampoo and other household products. It was previously known that the structure of a piece of DNA could be changed using acid, which causes it to fold up into what is known as an 'i-motif'. But new research published on Tuesday 18 August in the journal Chemical Communications reveals that the structure can be switched a second time into a hair-pin structure using positively-charged copper (copper cations). This change can also be reversed using EDTA.

The applications for this discovery include nanotechnology - where DNA is used to make tiny machines, and DNA-based computing - where computers are built from DNA rather than silicon. It could also be used for detecting the presence of copper cations, which are highly toxic to fish and other aquatic organisms, in water.

Lead researcher Dr Zoë Waller, from UEA's school of Pharmacy, said: "Our research shows how the structure of our genetic material - DNA - can be changed and used in a way we didn't realise. A single switch was possible before - but we show for the first time how the structure can be switched twice.

"A potential application of this finding could be to create logic gates for DNA based computing. Logic gates are an elementary building block of digital circuits - used in computers and other electronic equipment. They are traditionally made using diodes or transistors which act as electronic switches. This research expands how DNA could be used as a switching mechanism for a logic gate in DNA-based computing or in nano-technology." Source: Article
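Viewed as a switching device, the chemistry above maps onto a tiny state machine. The Python toy below encodes just the transitions reported in the article (acid folds DNA into an i-motif, copper cations switch it on to a hairpin, EDTA switches it back); it is a conceptual illustration, not a chemical model:

```python
# The double switch as a state machine: states are DNA conformations,
# inputs are the reagents described in the article.
transitions = {
    ("unfolded", "acid"): "i-motif",
    ("i-motif", "Cu2+"):  "hairpin",
    ("hairpin", "EDTA"):  "i-motif",
}

state = "unfolded"
for reagent in ["acid", "Cu2+", "EDTA", "Cu2+"]:
    state = transitions.get((state, reagent), state)   # unknown combos: no change
    print(f"{reagent:>4} -> {state}")
```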

Windows 10 may be selling well, so why is Microsoft still not satisfied?

Windows 10
On the surface, demand for Windows 10 looks strong, though the market will deliver the final verdict. That demand, however, is less a sign of the system's true value than the result of Microsoft correcting its earlier mistakes in the traditional PC market and compromising on price. Meanwhile, the market's indifference to its mobile features suggests that Windows 10 will not perform much better in the mobile market than Windows 8 did.

Microsoft has positioned the upcoming Windows 10 as the heavyweight product that will reverse its decline in the mobile market and carry its transition. Whether Windows 10 is well received will shape Microsoft's prospects in mobile and determine whether the "mobile first, cloud first" strategy proposed by CEO Satya Nadella can be implemented.

Recently, research firm Spiceworks released the results of a survey of IT professionals showing unusually strong potential demand for Windows 10: 73% of users plan to install it within two years, and about 40% plan to install it within a year. By that measure, demand is indeed strong, at least compared with its predecessor, Windows 8. Such high demand should, in theory, delight Microsoft. But reading the report carefully, the main drivers behind upgrading or installing Windows 10 are not all good news, and may even be cause for worry.

In the Spiceworks report, the most popular aspect of Windows 10 is the return of the traditional Start button: 64% of respondents named it one of the system's most attractive features. Second, the free upgrade Microsoft is offering to Windows 7 and Windows 8 users was endorsed by 55% of respondents. Third, 51% of respondents cited improved security as the system's third most popular property.

What do these three drivers tell us? First, the most celebrated "innovation" in Windows 10 is simply the correction of a Windows 8 mistake: removing the traditional Start button, against ingrained PC user habits, drew heavy criticism and became one of the main reasons for Windows 8's poor market performance. Second is the free-upgrade factor. Because smartphones and tablets have eroded the traditional PC market, users have lost the motivation to pay for Windows upgrades since Windows 8, so Microsoft had to adopt a free-upgrade strategy to stimulate adoption. That 55% of respondents approve of the free offer can be read, from another angle, as a negative verdict on the standalone value of Windows 10. Finally, security: since most enterprise IT systems (including PCs) run Windows, security improvements are simply expected of any new Windows release.

Taken together, the strong demand behind Windows 10 is not a positive force driven by its own innovations, but a market rebound produced by correcting earlier Windows mistakes and by depreciating its value from paid to free. This is quite different from users upgrading because of a system's innovative value. It is worth noting that the innovative features Microsoft deliberately emphasized at the Windows 10 launch, such as Cortana, the Edge browser and Continuum, did not attract the respondents' attention and did not become drivers of upgrades or installations.

It is also unclear, perhaps because of how the survey was framed, why Windows 10's support for iOS and Android applications did not appear among the reasons users welcomed and selected it, or what proportion of respondents valued it. Perhaps users simply do not care. That raises a question, because support for iOS and Android applications is supposed to be Windows 10's biggest selling point in the mobile market, the arena in which Microsoft, gradually marginalized since the PC era, hopes to win smartphone share and ultimately realize its cross-platform, cross-device, unified-experience "mobile first" strategy.

Industry observers have quietly criticized Windows 10's support for iOS and Android applications. First, software migration suffers from a "lowest common denominator" problem: the developers most eager to port software cheaply to other platforms are the least likely to deliver the most polished user experience on each platform. The implication is that Windows compatibility with iOS and Android will not attract the best software from those platforms. In addition, if Windows 10 runs Android and iOS versions of software, mobile developers have even less reason to build dedicated Windows versions, which only makes the already fragile Windows Phone ecosystem more vulnerable. Moreover, if the market and users prefer applications on Android or iOS, why experience them through a Windows platform at all? The apparent indifference of the market and users to this compatibility, then, is not unfounded, and it is bound to affect the value and performance of Windows 10 in the mobile market.

In short, the strong surface demand for Windows 10 (which the market will ultimately test) is more the result of error correction in the traditional PC market and a compromise on price than a manifestation of true value. Microsoft may be happy to see upgrades and installations exceed those of Windows 8, but judged against the overall value of Windows and the cross-platform, cross-device, unified-experience "mobile first" strategy, Windows 10's position in the PC market is likely to be a worry.

Because, as what may be the last major Windows version, these strategies will be difficult to achieve within its life cycle. Source: Article

Nearly half of US jobs could be at risk of computerisation within 20 years

A study by the Oxford Martin School shows that nearly half of US jobs could be at risk of computerisation within 20 years. Transport, logistics and office roles are most likely to come under threat.

The new study, a collaboration between Dr Carl Benedikt Frey (Oxford Martin School) and Dr Michael A. Osborne (Department of Engineering Science, University of Oxford), found that jobs in transportation, logistics, as well as office and administrative support, are at "high risk" of automation. More surprisingly, occupations within the service industry are also highly susceptible, despite recent job growth in this sector.

"We identified several key bottlenecks currently preventing occupations being automated," says Dr. Osborne. "As big data helps to overcome these obstacles, a great number of jobs will be put at risk."

The study examined over 700 detailed occupation types, noting the types of tasks workers perform and the skills required. By weighting these factors, as well as the engineering obstacles currently preventing computerisation, the researchers assessed the degree to which these occupations may be automated in the coming decades.

"Our findings imply that as technology races ahead, low-skilled workers will move to tasks that are not susceptible to computerisation – i.e., tasks that require creative and social intelligence," the paper states. "For workers to win the race, however, they will have to acquire creative and social skills."

"While computerisation has been historically confined to routine tasks involving explicit rule-based activities, algorithms for big data are now rapidly entering domains reliant upon pattern recognition and can readily substitute for labour in a wide range of non-routine cognitive tasks. In addition, advanced robots are gaining enhanced senses and dexterity, allowing them to perform a broader scope of manual tasks. This is likely to change the nature of work across industries and occupations."

The low susceptibility of engineering and science occupations to computerisation, on the other hand, is largely due to the high degree of creative intelligence they require. However, even these occupations could be taken over by computers in the longer term.

Dr Frey said the United Kingdom is expected to face a similar challenge to the US. "While our analysis was based on detailed datasets relating to US occupations, the implications are likely to extend to employment in the UK and other developed countries," he said.

Full version of the paper: http://www.futuretech.ox.ac.uk/files/The_Future_of_Employment_OMS_Working_Paper_1.pdf
Source: Article

Mind Reading Computer Could Communicate With Coma Patients

"Volunteer Duty" Psychology Testing Western researchers have used neuroimaging to read human thought via brain activity when they are conveying specific ‘yes’ or ‘no’ answers. Their findings were published in The Journal of Neuroscience in a study titled, The Brain's Silent Messenger: Using Selective Attention to Decode Human Thought for Brain-Based Communication. According to lead researcher Lorina Naci, the interpretation of human thought from brain activity – without depending on speech or action – is one of the most provoking and challenging frontiers of modern neuroscience. Specifically, patients who are fully conscious and awake, yet, due to brain damage, are unable to show any behavioral responsivity, expose the limits of the neuromuscular system and the necessity for alternate forms of communication. Participants were asked to concentrate on a ‘yes’ or ‘no’ response to questions like “Are you married?” or “Do you have brothers and sisters?” and only think their response, not speak it. “This novel method allowed healthy individuals to answers questions asked in the scanner, simply by paying attention to the word they wanted to convey. By looking at their brain activity we were able to correctly decode the correct answers for each individual,” said Naci, a postdoctoral fellow at Western's Brain and Mind Institute. “The majority of volunteers conveyed their answers within three minutes of scanning, a time window that is well-suited for communication with brain-computer interfaces.” Naci and her Western colleagues Rhodri Cusack, Vivian Z. Jia and Adrian Owen are now utilizing this method to communicate with behaviorally non-responsive patients, who may be misdiagnosed as being in a vegetative state. “The strengths of this technique, especially its ease of use, robustness, and rapid detection, may maximize the chances that any such patient will be able to achieve brain-based communication,” Naci said. Contacts and sources: University of Western Ontario, Source: Nanopatents And InnovationsReference-Image: flickr.com

Projectors to double brightness with polarisation breakthrough

Liquid crystal (LC) based projectors could become almost twice as energy efficient and much cheaper as researchers from North Carolina State University in the USA and ImagineOptix Corporation reveal a revolutionary polarising technology. The breakthrough means projectors that rely on batteries will be able to run for almost twice as long and all LC projectors can be made twice as bright.
All LC projectors utilise polarised light. However, researchers say that efficient light sources, such as LEDs, produce unpolarised light. As a result, the light generated by LEDs has to be converted into polarised light before it can be used. According to NC State researchers, the most common method of polarising light involves passing the unpolarised light through a polarising filter. They claim that this process wastes more than 50% of the originally generated light, with the bulk of the "lost" light being turned into heat. This is a major reason that projectors get hot and have noisy cooling fans.

The new technology, demonstrated in a small pico projector and developed at NC State, allows approximately 90 percent of the unpolarised light to be polarised and, therefore, used by the projector.

"This technology, which we call a polarisation grating-polarisation conversion system (PGPCS), will significantly improve the energy efficiency of LC projectors," said Dr. Michael Escuti, co-author of a paper describing the research and an associate professor of electrical and computer engineering at NC State. "The commercial implications are broad reaching. Projectors that rely on batteries will be able to run for almost twice as long. And LC projectors of all kinds can be made twice as bright but use the same amount of power that they do now."

Because only approximately 10 percent of the unpolarised light is converted into heat – as opposed to the more than 50 percent light loss that stems from using conventional polarisation filters – the new technology will also reduce the need for loud cooling fans and enable more compact designs.

The technology is a small single-unit assembly composed of four immobile parts. A beam of unpolarised light first passes through an array of lenses, which focus the light into a grid of spots. The light then passes through a polarisation grating, which consists of a thin layer of liquid crystal material on a glass plate. The polarisation grating separates the spots of light into pairs, which have opposite polarisations. The light then passes through a louvered wave plate, which is a collection of clear, patterned plates that gives the beams of light the same polarisation. Finally, a second array of lenses focuses the spots of light back into a single, uniform beam of light.

The paper, "Efficient and monolithic polarization conversion system based on a polarization grating," was published July 10 in Applied Optics. The paper was co-authored by Drs. Jihwan Kim and Ravi Komanduri, postdoctoral researchers at NC State; Kristopher Lawler, a research associate at NC State; Jason Kekas, of ImagineOptix Corp.; and Escuti. The research was funded by ImagineOptix, a start-up company co-founded by Escuti and Kekas. Source: InAVate
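The "twice as bright" claim above follows directly from the article's own numbers, as this back-of-the-envelope check shows (the 1,000-lumen source is an arbitrary stand-in):

```python
# Rough arithmetic from the figures quoted above.
lumens_in = 1000                       # arbitrary unpolarised source output
conventional = lumens_in * 0.5         # filter: more than 50% of light is lost
pgpcs = lumens_in * 0.9                # new system: ~90% usefully polarised
print(f"brightness ratio: {pgpcs / conventional:.1f}x")   # ~1.8x, same power
```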

Tech startup 3Gear doubles up on cameras for mm-sensitive gesture control

3Gear, a technology startup in San Francisco, has developed an SDK (software development kit) which uses two 3D cameras taking top-down images to accurately track the movements of a user’s hand, allowing for much more precise gesture-control applications. The current demo system uses a pair of Kinect cameras mounted above a work station on a metal structure, but it is anticipated that in future they could be integrated into a computer display or mounted on it.
The SDK is being released in the hope that third-party developers will extend the capabilities of the company’s hand-tracking algorithms, which are an essential part of accurate gesture control. 3Gear's system uses two depth cameras (the same type used with Kinect) that capture 30 frames per second. The positions of a user's hands and fingers are matched to a database of 30,000 potential hand and finger configurations. The process of identifying and matching to the database—a well-known approach in the gesture-recognition field—occurs within 33 milliseconds, co-founder Robert Wang says, so it feels like the computer can see and respond to even a millimetre finger movement almost instantly. Source: InAVate
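The matching step Wang describes is essentially a nearest-neighbour search over stored poses, which the sketch below caricatures with brute force. The 63-dimensional pose vector and the random database are invented stand-ins; the real pipeline extracts features from the depth images and uses a far cleverer index, though even brute force over 30,000 entries fits easily inside a 33 ms frame budget on modern hardware:

```python
# Brute-force nearest-neighbour lookup over a (simulated) hand-pose database.
import numpy as np

rng = np.random.default_rng(0)
database = rng.normal(size=(30_000, 63))   # invented: 21 joints x 3 coordinates
frame = rng.normal(size=63)                # invented: pose estimate for one frame

distances = np.linalg.norm(database - frame, axis=1)
print("best match index:", int(np.argmin(distances)))
```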

When a PC Knows Its Owner


Perceptual computing describes a computer's ability to understand voice, hand and facial gestures. Anil Nanduri, director of perceptual computing solutions and products at Intel, describes how computers can get to know and respond to voice and gestures. Image: Screen Shot On Video

"LlibreOffice" Free And Delight-Full Alternative To MS-office

It's a very capable free package for home and business users, offering all the strengths you could demand of a contemporary office suite, and often more. Give it half an hour to get familiar with its everyday functions and you will be ready to go: LibreOffice is intuitive and easy to use. Numerous people who work with it all day have never looked at a manual, including migrating Microsoft Office users, who normally discover that all the features they need are in logical, easy-to-guess places. Plus, when you do need to check something, the built-in help is a valuable aid that answers most of your inquiries. But for many people, nothing replaces a good user guide that teaches you step-by-step how to do what you need to do. Complete LibreOffice user guides are available at the following links: LibreOffice "Documentation", as well as this page on the wiki. If you want to take advantage of this free package, click the following link: Download, or visit http://www.libreoffice.org/download/ for your choice of downloads. Image: flickr.com

By 2018, computers 'will have 5 senses'



Some day soon, you'll be able to order a wedding dress on your tablet and feel the fabric and the veil just by touching the screen. When you feel an object, your brain registers the series of vibrations on your skin as being smooth, rough, sharp, etc. Computer sensors are becoming sophisticated enough to do that too. Within the next five years, vibrators within smartphones will be precise enough that they could be designed to mimic the vibrations experienced when your fingers touch a particular surface. Even though you'll just be touching glass, it will feel like you're touching whatever object is displayed on the screen. Source: The Coming Crisis

US Titan Supercomputer Clocked as World's Fastest

The fastest supercomputer, Titan, was sixth on the list when it was compiled in June
TENNESSEE, USA – The top two spots on the list of the world's most powerful supercomputers have both been captured by the US. The last time the country was in a similar position was three years ago. The fastest machine - Titan, at Oak Ridge National Laboratory in Tennessee - is an upgrade of Jaguar, the system which held the top spot in 2009. The supercomputer will be used to help develop more energy-efficient engines for vehicles, model climate change and research biofuels. It can also be rented to third parties, and is operated as part of the US Department of Energy's network of research labs. The Top 500 list of supercomputers was published by Hans Meuer, professor of computer science at Mannheim, who has been keeping track of developments since 1986. It was released at the SC12 supercomputing conference in Salt Lake City, Utah.

Mixed processors

Titan leapfrogged the previous champion, IBM's Sequoia - which is used to carry out simulations to help extend the life of nuclear weapons - thanks to its mix of central processing unit (CPU) and graphics processing unit (GPU) technologies. According to the Linpack benchmark it operates at 17.59 petaflop/sec - the equivalent of 17,590 trillion calculations per second. The benchmark measures real-world performance - but in theory the machine can boost that to a "peak performance" of more than 20 petaflop/sec. To achieve this the device has been fitted with 18,688 Tesla K20X GPU modules made by Nvidia to work alongside its pre-existing CPUs.

Traditionally, supercomputers relied only on CPUs. CPU cores are designed to carry out a single set of instructions at a time, making them well suited for tasks in which the answer to one calculation is used to work out the next. GPU cores are typically slower at carrying out individual calculations, but make up for this by being able to carry out many at the same time. This makes them best suited for "parallelisable jobs" - processes that can be broken down into several parts that are then run simultaneously. Mixing CPUs and GPUs together allows the most appropriate core to carry out each process. Nvidia said that in most instances its GPUs now carried out about 90% of Titan's workload. "Basing Titan on Tesla GPUs allows Oak Ridge to run phenomenally complex applications at scale, and validates the use of 'accelerated computing' to address our most pressing scientific problems," said Steve Scott, chief technology officer of the GPU accelerated computing business at Nvidia.

The other top systems included: (1) Fujitsu's K computer at the Riken Advanced Institute for Computational Science in Kobe, Japan, which was in third spot. (2) IBM's BlueGene/Q Mira computer at Argonne National Laboratory, near Chicago in the US, which came fourth. (3) Another IBM BlueGene/Q system, called Juqueen, at the Forschungszentrum Juelich in Germany - Europe's fastest - which came fifth. Out of the top 500 computers, 62 used a mix of CPU and GPU processors. Six months ago the figure was 58. Source: Korea Times