Next-generation drone designs inspired by nature

Inspired by birds, bats, insects and even flying snakes, researchers from 14 teams have come up with new designs for next-generation drones and flying robots. These robots could perform a range of tasks, from military surveillance to search and rescue, News Tonight reports.
Olga Yazhgunovich: These robots may look similar to many things that nature has given us in abundance: some of the flying robots will resemble insects and butterflies, Design and Trend says. A report in EurekAlert says that scientists are working on different types of drones modelled on different insects and animals, and that they have already created the smallest drone of all, measuring barely a millimeter across. The journal Bioinspiration and Biomimetics has published fascinating details of how the look and shape of future robotic drones are going to develop. These drones come with exquisite flight control and can overcome many of the problems drones face when navigating urban terrain.

Flying drones will clearly be of immense use in many fields in the coming years. The success of a flying robot depends, obviously, on the exactitude of its flight control, and nothing has more meticulous flight control than the creatures born with the gift of flight. Experts are very optimistic about the design and success of such flying robots. Dr. David Lentink of Stanford University says, “Flying animals can be found everywhere in our cities…From scavenging pigeons to alcohol-sniffing fruit flies that make precision landings on our wine glasses, these animals have quickly learnt how to control their flight through urban environments to exploit our resources.”

One of the most interesting of these robotic drones is under development in Hungary and mimics the flocking of birds, using an algorithm that allows drones to huddle together while flying through the air. By studying how tiny insects stabilize themselves in turbulent air, researchers have also shaped the design of many future drones.
One team of researchers from the University of Maryland engineered sensors for their experimental drone based on insects' eyes, to mimic insects' amazing capability for flight in clutter. These eyes act as cameras that record the drone's actual position, which is then monitored by engineers via an on-board computer. Other researchers have designed a raptor-like appendage for a drone that can grasp objects at high speed by swooping in like a bird of prey. And a team led by Prof. Kenny Breuer at Brown University has designed an eerily accurate robotic copy of a bat wing with a high range of movement, tolerance and flexibility. Prof. Lentink added that membrane-based bat wings adapt better to airflow and are very hard to break.

A few issues will have to be sorted out for such robots to succeed. According to the report, one of the biggest challenges facing robotic drones is surviving the elements: extreme heat, bitter cold and especially strong winds. To address this, a team of researchers studied hawk moths as they battled different whirlwind conditions in a vortex chamber, in order to understand their superior flight-control mechanisms. Another report in Bioinspiration and Biomimetics says more than a dozen teams are creating flying robots that look like insects and butterflies and that fly in unconventional ways, allowing them to move freely through dense jungles where other drones cannot. Source: http://sputniknews.com/
Read More........

Researchers Teach Machines To Learn Like Humans


A team of scientists has developed an algorithm that captures human learning abilities, enabling computers to recognize and draw simple visual concepts that are mostly indistinguishable from those created by humans. The work, which appears in the latest issue of the journal Science, marks a significant advance in the field -- one that dramatically shortens the time it takes computers to 'learn' new concepts and broadens their application to more creative tasks. "Our results show that by reverse engineering how people think about a problem, we can develop better algorithms," explains Brenden Lake, a Moore-Sloan Data Science Fellow at New York University and the paper's lead author. "Moreover, this work points to promising methods to narrow the gap for other machine learning tasks." The paper's other authors were Ruslan Salakhutdinov, an assistant professor of Computer Science at the University of Toronto, and Joshua Tenenbaum, a professor at MIT in the Department of Brain and Cognitive Sciences and the Center for Brains, Minds and Machines.

When humans are exposed to a new concept -- such as a new piece of kitchen equipment, a new dance move, or a new letter in an unfamiliar alphabet -- they often need only a few examples to understand its make-up and recognize new instances. While machines can now replicate some pattern-recognition tasks previously done only by humans -- ATMs reading the numbers written on a check, for instance -- they typically need to be given hundreds or thousands of examples to perform with similar accuracy. "It has been very difficult to build machines that require as little data as humans when learning a new concept," observes Salakhutdinov.
"Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science." Salakhutdinov helped to launch recent interest in learning with 'deep neural networks,' in a paper published in Science almost 10 years ago with his doctoral advisor Geoffrey Hinton. Their algorithm learned the structure of 10 handwritten character concepts -- the digits 0-9 -- from 6,000 examples each, or a total of 60,000 training examples.

In the work appearing in Science this week, the researchers sought to shorten the learning process and make it more akin to the way humans acquire and apply new knowledge -- i.e., learning from a small number of examples and performing a range of tasks, such as generating new examples of a concept or generating whole new concepts. To do so, they developed a 'Bayesian Program Learning' (BPL) framework, where concepts are represented as simple computer programs. For instance, the letter 'A' is represented by computer code -- resembling the work of a computer programmer -- that generates examples of that letter when the code is run. Yet no programmer is required during the learning process: the algorithm programs itself by constructing code to produce the letter it sees. Also, unlike standard computer programs that produce the same output every time they run, these probabilistic programs produce different outputs at each execution. This allows them to capture the way instances of a concept vary, such as the differences between how two people draw the letter 'A.'

While standard pattern recognition algorithms represent concepts as configurations of pixels or collections of features, the BPL approach learns "generative models" of processes in the world, making learning a matter of 'model building' or 'explaining' the data provided to the algorithm.
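The idea of a probabilistic program that produces different outputs at each execution can be illustrated with a toy generator. This is a hypothetical sketch, not the authors' BPL model: a letter is stored as a list of strokes, and each run perturbs the stroke endpoints with Gaussian noise, so every execution yields a distinct but recognisable instance.

```python
import random

# Idealised strokes for the letter 'A', as (start, end) point pairs.
A_STROKES = [((0.0, 0.0), (0.5, 1.0)),    # left diagonal
             ((0.5, 1.0), (1.0, 0.0)),    # right diagonal
             ((0.25, 0.5), (0.75, 0.5))]  # crossbar

def draw_letter(strokes, jitter=0.05, rng=random):
    """Run the 'program' for a letter: emit its strokes with small
    random perturbations, so every execution differs slightly."""
    out = []
    for (x0, y0), (x1, y1) in strokes:
        out.append(((x0 + rng.gauss(0, jitter), y0 + rng.gauss(0, jitter)),
                    (x1 + rng.gauss(0, jitter), y1 + rng.gauss(0, jitter))))
    return out
```

Two calls to `draw_letter(A_STROKES)` return two different "handwritten" variants of the same underlying concept, which is the property BPL exploits to model how two people draw the same letter differently.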
In the case of writing and recognizing letters, BPL is designed to capture both the causal and compositional properties of real-world processes, allowing the algorithm to use data more efficiently. The model also "learns to learn" by using knowledge from previous concepts to speed learning on new concepts -- e.g., using knowledge of the Latin alphabet to learn letters in the Greek alphabet. The authors applied their model to over 1,600 types of handwritten characters in 50 of the world's writing systems, including Sanskrit, Tibetan, Gujarati, and Glagolitic -- and even invented characters such as those from the television series Futurama.

In addition to testing the algorithm's ability to recognize new instances of a concept, the authors asked both humans and computers to reproduce a series of handwritten characters after being shown a single example of each character, or in some cases, to create new characters in the style of those they had been shown. The scientists then compared the outputs from both humans and machines through 'visual Turing tests': human judges were given paired examples of human and machine output, along with the original prompt, and asked to identify which of the symbols were produced by the computer. While judges' correct responses varied across characters, for each visual Turing test, fewer than 25 percent of judges performed significantly better than chance in assessing whether a machine or a human produced a given set of symbols.

"Before they get to kindergarten, children learn to recognize new concepts from just a single example, and can even imagine new examples they haven't seen," notes Tenenbaum. "I've wanted to build models of these remarkable abilities since my own doctoral work in the late nineties.
We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts -- even simple visual concepts such as handwritten characters -- in ways that are hard to tell apart from humans." Contacts and sources: James Devitt, New York University. Source: http://www.ineffableisland.com/ Image: https://pixabay.com/, under Creative Commons CC0
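The "significantly better than chance" criterion used in the visual Turing tests above can be made concrete with a one-sided binomial test: given n paired trials and k correct identifications, it computes how likely a pure guesser (p = 0.5) would be to score at least that well. The trial counts in the example are hypothetical, not taken from the study:

```python
from math import comb

def binom_p_value(k, n, p=0.5):
    """One-sided P(X >= k) for X ~ Binomial(n, p): the chance that a
    pure guesser would identify at least k of n machine-drawn symbols."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
```

For instance, 16 correct out of 20 gives p ≈ 0.006, comfortably better than chance, while 14 of 20 gives p ≈ 0.058 and would not clear a 0.05 significance threshold.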
Read More........

'Robot scientist' Eve can discover new drugs faster

London: An artificially intelligent "robot scientist" could make drug discovery faster and much cheaper, say researchers from the University of Cambridge. The robot scientist, called Eve, discovered that a compound shown to have anti-cancer properties might also be used in the fight against malaria. Eve exploits artificial intelligence to learn from early successes in her screens and select compounds that have a high probability of being active against the chosen drug target. A smart screening system, based on genetically engineered yeast, is used. "This allows Eve to exclude compounds that are toxic to cells and select those that block the action of the parasite protein while leaving any equivalent human protein unscathed," explained Professor Steve Oliver from the Cambridge Systems Biology Centre and the Department of Biochemistry. This reduces the costs, uncertainty and time involved in drug screening, and has the potential to improve the lives of millions of people worldwide. "Neglected tropical diseases are a scourge of humanity, infecting hundreds of millions of people, and killing millions of people every year," Oliver added.

Eve is designed to automate early-stage drug design. First, she systematically tests each member of a large set of compounds in the standard brute-force way of conventional mass screening. The compounds are then screened against assays (tests) that are designed to be automatically engineered, and so can be generated much faster and more cheaply than the bespoke assays that are currently standard. "This enables more types of assay to be applied, more efficient use of screening facilities to be made, and thereby increases the probability of a discovery within a given budget," Oliver noted. Eve's robotic system is capable of screening over 10,000 compounds per day, according to the paper, which appeared in the Royal Society journal Interface. Source: ummid.com
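The "learn from early successes" strategy described above is a form of active learning. The sketch below is a heavily simplified, hypothetical illustration (invented compound library and assay, not Eve's actual system): a small random batch is screened brute-force, then the remaining compounds are ranked by similarity to the hits found so far.

```python
import random

random.seed(42)

# Hypothetical compound library: each compound is a vector of five
# descriptor values, and (unknown to the screener) activity depends
# on the first descriptor exceeding a threshold.
library = [[random.random() for _ in range(5)] for _ in range(500)]

def assay(compound):
    """Simulated wet-lab assay: returns True for an 'active' compound."""
    return compound[0] > 0.8

def smart_screen(library, assay, n_random=50, n_guided=50):
    """Screen a random batch brute-force, then preferentially test the
    untested compounds most similar to the hits found so far."""
    pool = list(range(len(library)))
    random.shuffle(pool)
    tested, hits = [], []
    for i in pool[:n_random]:          # phase 1: conventional mass screening
        tested.append(i)
        if assay(library[i]):
            hits.append(i)
    rest = pool[n_random:]
    if hits:                           # phase 2: learn from early successes
        centroid = [sum(library[h][j] for h in hits) / len(hits)
                    for j in range(len(library[0]))]
        rest = sorted(rest, key=lambda i: sum(
            (library[i][j] - centroid[j]) ** 2 for j in range(len(centroid))))
    for i in rest[:n_guided]:
        tested.append(i)
        if assay(library[i]):
            hits.append(i)
    return tested, hits
```

Because later picks concentrate near known hits, the guided phase typically finds actives at a much higher rate than random screening, which is the budget saving the article describes.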
Read More........

NASA Curiosity rover moves to new location on Mars

Washington, August 20: NASA's Mars Curiosity rover is driving towards the southwest after departing a region where, for several weeks, it investigated a geological contact zone and rocks that are unexpectedly high in silica and hydrogen. The hydrogen indicates water bound to minerals in the ground, NASA said. In the 'Marias Pass' region, Curiosity successfully used its drill to sample a rock target called 'Buckskin' and then used the camera on its robotic arm to take multiple images that were stitched into a self-portrait at the drilling site. The rover finished activities in Marias Pass on August 12 and headed onward up Mount Sharp, the layered mountain it reached in September 2014. In drives on August 12, 13, 14 and 18, it progressed 433 feet (132 meters), bringing Curiosity's total odometry since its August 2012 landing to 11.1 kilometers.

Curiosity is carrying with it some of the sample powder drilled from Buckskin, and the rover's internal laboratories are analysing the material. The mission's science team seeks to understand why this area bears rocks with significantly higher levels of silica and hydrogen than other areas the rover has traversed. Silica, monitored with Curiosity's laser-firing Chemistry and Camera (ChemCam) instrument, is a rock-forming chemical containing silicon and oxygen, commonly found on Earth as quartz. Hydrogen in the ground beneath the rover is monitored by the Dynamic Albedo of Neutrons (DAN) instrument. It has been detected at low levels everywhere Curiosity has driven and is interpreted as hydrogen in water molecules or hydroxyl ions bound within, or adsorbed onto, minerals in the rocks and soil. "The ground about 1 meter beneath the rover in this area holds three or four times as much water as the ground anywhere else Curiosity has driven during its three years on Mars," said DAN Principal Investigator Igor Mitrofanov of the Space Research Institute, Moscow.
DAN first detected the unexpectedly high level of hydrogen using its passive mode. Later, the rover drove back over the area using DAN in active mode, in which the instrument shoots neutrons into the ground and detects those that bounce back from the subsurface; because the neutrons interact preferentially with hydrogen, the returns reveal water-bearing material. The measurements confirmed hydrated material covered by a thin layer of drier material. Curiosity initially noted the area with high silica and hydrogen on May 21 while climbing to a site where two types of sedimentary bedrock lie in contact with each other. Such contact zones can hold clues about ancient changes in environment, from conditions that produced the older rock type to conditions that produced the younger one. — PTI
Read More........

Tech men fear ‘killer robots’

BUENOS AIRES—It sounds like a science-fiction nightmare, but “killer robots” have the likes of British scientist Stephen Hawking and Apple co-founder Steve Wozniak fretting, warning they could fuel ethnic cleansing and an arms race. Autonomous weapons, which use artificial intelligence to select targets without human intervention, were described as “the third revolution in warfare, after gunpowder and nuclear arms” by around 1,000 technology leaders in an open letter. Unlike drones, which require a human hand in their action, this kind of robot would have some autonomous decision-making ability and capacity to act.

“The key question for humanity today is whether to start a global AI [artificial intelligence] arms race or to prevent it from starting,” they wrote. “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable,” said the letter, released at the opening of the 2015 International Joint Conference on Artificial Intelligence in Buenos Aires. The idea of an automated killing machine—made famous by Arnold Schwarzenegger’s “Terminator”—is moving swiftly from science fiction to reality, according to the scientists. “The deployment of such systems is—practically if not legally—feasible within years, not decades,” the letter said.

The scientists painted the doomsday scenario of autonomous weapons falling into the hands of terrorists, dictators or warlords hoping to carry out ethnic cleansing. “There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people,” the letter said. In addition, the scientists noted, the development of such weapons, while potentially reducing battlefield casualties, might also lower the threshold for going to war.
The group concluded with an appeal for a “ban on offensive autonomous weapons beyond meaningful human control.” Elon Musk, the billionaire co-founder of PayPal and head of SpaceX, a private space-travel technology venture, also urged the public to join the campaign. “If you’re against a military AI arms race, please sign this open letter,” tweeted the tech boss.

Sounding a touch more moderate, however, was Australia’s Toby Walsh. The artificial intelligence professor at NICTA and the University of New South Wales noted that all technologies can be used for good and evil ends. Ricardo Rodriguez, an AI researcher at the University of Buenos Aires, also said the worries could be overstated. “Hawking believes that we are closing in on the Apocalypse with robots, and that in the end, AI will be competing with human intelligence,” he said. “But the fact is that we are far from making killer military robots.”

Authorities are gradually waking up to the risk of robot wars. Last May, for the first time, governments began talks on so-called “lethal autonomous weapons systems.” In 2012, Washington imposed a 10-year human-control requirement on automated weapons, a step welcomed by campaigners even though they said it should go further. There have been examples of weapons being stopped in their infancy: after UN-backed talks, blinding laser weapons were banned in 1998, before they ever hit the battlefield. Source: Article. Image: flickr.com
Read More........