Walking robot tested in Finnish repository

The ANYmal robot walks through Onkalo's underground tunnels (Image: Tapani Karjanlahti / Posiva)
A four-legged robot designed for autonomous operation in challenging environments has been put through its paces at a depth of more than 400 metres in the tunnels of the Onkalo underground used nuclear fuel repository near Olkiluoto, Finland.

A research team led by the Swiss robotics company ANYbotics visited Olkiluoto in June to test the functionality of its ANYmal robot in underground facilities. The test was organised by Euratom - the European Atomic Energy Community - together with Finnish radioactive waste management company Posiva Oy.

The ANYmal robot has been under development for many years. The roots of ANYbotics go back to the Swiss Federal Institute of Technology in Zurich (ETH Zurich), where a group of researchers built the first four-legged robot back in 2009; ANYbotics was founded in 2016 to commercialise the technology.

The ANYmal robot uses laser sensors and cameras to observe the environment and can locate its own position very precisely. By combining observation data with location data - such as a map or area scan data - it can plan its navigation route independently when necessary.
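The article does not describe ANYbotics' actual navigation stack, but the idea it outlines - combining live scan data with a known map to plan a route - can be illustrated with a minimal occupancy-grid planner. Everything below (the grid, the function name, the coordinates) is a hypothetical sketch, not ANYbotics code:

```python
from collections import deque

def plan_route(grid, start, goal):
    """Breadth-first search over an occupancy grid: 0 = free, 1 = blocked.
    In a real robot, fresh laser returns would be merged into `grid`
    before each replan."""
    rows, cols = len(grid), len(grid[0])
    frontier, came_from = deque([start]), {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == goal:
            break
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0 \
                    and (nr, nc) not in came_from:
                came_from[(nr, nc)] = cell
                frontier.append((nr, nc))
    if goal not in came_from:
        return None  # no traversable route
    path, cell = [], goal
    while cell is not None:  # walk back from goal to start
        path.append(cell)
        cell = came_from[cell]
    return path[::-1]

# A toy tunnel map with one blocked passage; the planner routes around it.
tunnel = [[0, 0, 0],
          [1, 1, 0],
          [0, 0, 0]]
route = plan_route(tunnel, (0, 0), (2, 0))
```

A real system would replan continuously as new obstacles appear in the scan data, which is the behaviour the Onkalo test was probing.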

Posiva said Onkalo offered a unique setting for the robot to operate in, noting that while there are tunnels in other parts of the world, no other underground disposal facility has yet been built.

During the test, the robot - measuring 93cm in length, 53cm in width and 89cm in height, and weighing about 50kg - travelled through the tunnels of Onkalo for about 1.5 hours. With a fully charged battery, the robot can operate for up to 2 hours. The purpose was to test how far the robot could travel in Onkalo conditions on a single charge, and whether there were any terrains in the tunnel where it would not be able to advance.

For the test, the robot first "walked" the planned route under remote control and scanned the map into its internal system. In the test itself, the robot moved along the scanned route autonomously, although it remained within the research team's line of sight at all times. It could also be taken over by remote control at any moment, for example in case of danger. Various safety functions were programmed into the robot: for example, it went around obstacles on the route at a set safety distance and stopped when anything entered its safety area.
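The safety behaviour described above - keeping a set distance from obstacles and halting when something enters the safety area - amounts to a threshold check over range readings. The sketch below illustrates the idea only; the threshold values and function name are assumptions, not Posiva's or ANYbotics' actual parameters:

```python
SAFETY_STOP_M = 0.5     # halt if anything enters this radius (assumed value)
SAFETY_DETOUR_M = 1.5   # steer around obstacles inside this radius (assumed)

def safety_action(obstacle_distances_m):
    """Map the closest laser/camera range reading to a behaviour."""
    if not obstacle_distances_m:
        return "proceed"
    nearest = min(obstacle_distances_m)
    if nearest < SAFETY_STOP_M:
        return "stop"       # something entered the safety area
    if nearest < SAFETY_DETOUR_M:
        return "detour"     # go around at a safe distance
    return "proceed"
```

In practice such a check runs on every sensor cycle, so the robot reacts within a fraction of a second of an obstacle appearing.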

Authorities are interested in the use of robots because a robot can reach places that are inaccessible to humans, for example in nuclear safeguards inspection work. Carrying out nuclear safeguards with the help of a robot is also of interest to Posiva, the company said. Robots can also be used in rescue operations and in industry, and can be equipped with different devices for different tasks, such as optical and thermal cameras, microphones, and gas or radiation detectors.

Researched and written by World Nuclear News. Source: World Nuclear News
Read More........

Boston Dynamics robots dance to show off their agility

Boston Dynamics has released a new video of its entire range of robots, including Atlas, Spot, and Handle, dancing in unison to show off their versatility, and perhaps to celebrate Hyundai's newfound interest in the MIT spinoff.
Massachusetts-based Boston Dynamics has shown off an impressive and slightly unnerving range of robots of varying shapes dancing to the Motown classic "Do You Love Me".
The whole gang hits the dance floor: the humanoid Atlas first shakes a leg, then is joined by Spot, the quadruped mobile robot designed for sensing, inspection and remote operation, and Handle, the mobile manipulation robot for moving boxes in the warehouse, equipped with Pick, the deep-learning vision system for building and depalletising mixed-SKU pallets - all moving to the tune in an electrifying routine.
The world’s most dynamic humanoid robot, Atlas is a research platform designed to push the limits of whole-body mobility. Atlas’s advanced control system and state-of-the-art hardware give the robot the power and balance to demonstrate human-level agility.
Atlas has one of the world’s most compact mobile hydraulic systems. Custom motors, valves, and a compact hydraulic power unit enable Atlas to deliver high power to any of its 28 hydraulic joints for impressive feats of mobility.
Boston Dynamics' dance video is meant to tell the world that its humanoids and quadrupeds are as agile as living creatures and capable of things that are still widely seen as impossible.
Robots already operate in places where ordinary humans find it difficult to function: the bots are deployed to sniff out bombs, patrol oil rigs, and monitor Covid-19 patients.
The moves are a bit janky at times, but the mobility and coordination of the routine is impressively fluid for lumps of metal and plastic. Boston Dynamics said it got the gang together to celebrate the start of what it hopes will be a happier year, though it has not disclosed the power behind the robotic dance show. Source: https://www.domain-b.com
Read More........

Realistic masks made in Japan

Super-realistic face masks made by a tiny company in rural Japan are in demand from the domestic tech and entertainment industries and from countries as far away as Saudi Arabia.

The 300,000-yen ($2,650) masks, made of resin and plastic by five employees at REAL-f Co., attempt to accurately duplicate an individual’s face down to fine wrinkles and skin texture.

Company founder Osamu Kitagawa came up with the idea while working at a printing machine manufacturer.

But it took him two years of experimentation before he found a way to use three-dimensional facial data from high quality photographs to make the masks, and started selling them in 2011.

The company, based in the western prefecture of Shiga, receives about 100 orders every year from entertainment, automobile, technology and security companies, mainly in Japan.

For example, a Japanese car company ordered a mask of a sleeping face to improve its facial recognition technology to detect if a driver had dozed off, Kitagawa said.

“I am proud that my product is helping further development of facial recognition technology,” he added.

“I hope that the developers would enhance face identification accuracy using these realistic masks.”

Kitagawa, 60, said he had also received orders from organizations linked to the Saudi government to create masks of the king and princes.

“I was told the masks were for portraits to be displayed in public areas,” he said.

Kitagawa said he works with clients carefully to ensure his products will not be used for illicit purposes and cause security risks, but added he could not rule out such threats.

He said his goal was to create 100 percent realistic masks, and he hoped to use softer materials, such as silicone, in future.

“I would like these masks to be used for medical purposes, which is possible once they can be made using soft materials,” he said. “And as humanoid robots are being developed, I hope this will help developers to create (more realistic robots) at a low cost.” Source: https://www.daily-bangladesh.com
Read More........

Warehouse robot kills 90% of viruses

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), in collaboration with Ava Robotics and the Greater Boston Food Bank (GBFB), have designed a new robotic system that kills microorganisms in its proximity, using ultraviolet light.

During tests at GBFB, the robot drove past pallets and storage aisles at 0.22 miles per hour. At this speed, it could cover a 4,000-square-foot warehouse space in just half an hour. Ultraviolet C (UV-C) is a short-wavelength (100-280 nm), germicidal subtype of ultraviolet light: it kills or inactivates microorganisms by destroying nucleic acids and disrupting their DNA or RNA. The dosage emitted by the robot neutralised 90% of coronaviruses (and other organisms) on the warehouse surfaces. The results are encouraging enough that the approach could be useful for autonomous UV disinfection in other environments - such as airplanes, factories, restaurants, schools, and supermarkets - according to the researchers. Since UV-C is dangerous to all living organisms, however, the robot can only operate when nobody is around.

MIT designed the UV-C light fixture, which was then integrated with Ava Robotics' mobile robot base. The complete system can map a space and navigate between waypoints and other pre-specified areas. While most effective in the direct line of sight, the machine can reach nooks and crannies as the light bounces off surfaces.

"Our 10-year-old warehouse is a relatively new food distribution facility with AIB-certified, state-of-the-art cleanliness and food safety standards," explained Catherine D'Amato, President of the Greater Boston Food Bank. "COVID-19 is a new pathogen that GBFB, and the rest of the world, was not designed to handle. We are pleased to have this opportunity to work with MIT CSAIL and Ava Robotics to innovate and advance our sanitation techniques to defeat this menace."

Food banks are facing particular demand due to the stress of COVID-19. The United Nations estimates that, because of the virus, the number of people facing severe food insecurity worldwide could double to 265 million. In the U.S. alone, the five-week total of job losses has risen to 26 million, potentially pushing millions more into food insecurity. "Food banks provide an essential service to our communities, so it is critical to help keep these operations running," said Alyssa Pierson, CSAIL research scientist and technical lead of the UV-C lamp assembly. "Here, there was a unique opportunity to provide additional disinfecting power to their current workflow and help reduce the risks of COVID-19 exposure."

A shipping area can change overnight, so the team is now researching how to use onboard sensors to adapt to new environments - teaching the robot to differentiate between occupied and unoccupied aisles, for example, so it can switch its path accordingly, and altering its speed to ensure the optimal UV dosage is applied to different objects and surfaces. Source: https://www.futuretimeline.net
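The trade-off the team is tuning - driving speed versus delivered UV dose - follows from the basic relation dose = irradiance × exposure time. The sketch below uses the reported 0.22 mph speed, but the lamp irradiance and beam footprint are hypothetical placeholders, since the article gives neither figure:

```python
def uv_dose_mj_per_cm2(irradiance_uw_per_cm2, speed_m_s, footprint_m):
    """UV-C dose delivered to a surface as the robot drives past:
    dose = irradiance x exposure time. Inputs other than speed are
    assumed values, not measurements from the MIT/Ava system."""
    exposure_s = footprint_m / speed_m_s       # time the beam dwells on the surface
    return irradiance_uw_per_cm2 * exposure_s / 1000.0  # uW*s/cm^2 -> mJ/cm^2

MPH_TO_M_S = 0.44704
speed = 0.22 * MPH_TO_M_S                      # the reported 0.22 mph, ~0.098 m/s
dose = uv_dose_mj_per_cm2(irradiance_uw_per_cm2=500.0,  # hypothetical lamp output
                          speed_m_s=speed,
                          footprint_m=1.0)              # hypothetical beam footprint
```

Halving the speed doubles the exposure time, and therefore the dose, which is why the team is investigating per-surface speed adjustment.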
Read More........

Robots likely to be used in classrooms as learning tools, not teachers

Omar Mubin, Western Sydney University and Muneeb Imtiaz Ahmad, Western Sydney University

Robots are increasingly being used to teach students in the classroom in a number of subjects across science, maths and language. But our research shows that while students enjoy learning with robots, teachers are slightly reluctant to use them in the classroom.

In our study, which saw staff and students interact with the Nao humanoid robot, teachers said they were more sceptical of robots being integrated into the classroom. 
 
In our study, students enjoyed the human-like interaction with the Nao humanoid robot. from www.shutterstock.com 

They preferred the robot not to have full autonomy and instead take on restricted roles in the classroom. The teachers also wanted full control over the robot. We observed that the teachers were generally unfamiliar with robots, and this technological bias coloured their opinions.

They said they did not trust the technical capabilities of the robot and wanted the robot to function and behave as a learning “buddy” of children and not as a teacher. We think this reluctance may have occurred primarily due to an uncertainty of how best to incorporate robots in the class, and a lingering concern that robots may eventually replace teachers.

This is despite research showing that robots are much more likely to be used as learning tools than as teachers in a classroom. 

The students, on the other hand, were much more enthusiastic about a robot in their classroom, enjoying the human-like interaction. 

However, they wanted the robot to adapt its behaviour to their feelings and display a wide range of emotions and expressions. Such fully autonomous behaviour will require further research and development in robotics.

For example, some of the children felt the robot’s voice was unnatural and did not adapt to situations by changing tone or pitch.

The children preferred the robot's behaviour to be as natural as possible, to the extent that they were untroubled by the robot making mistakes, such as forgetting things. It was clear the children were imagining the robot in the role of their teacher.

How robots are currently used in the classroom:

Numerous types of robots are being incorporated into education. They range from simple "microprocessor on wheels" robots (such as the Boe-Bot), to advanced toolkits (such as LEGO Mindstorms), to humanoids (robots that resemble humans).

The choice of the robot is usually dictated by the area of study and the age group of the student. 

Smaller robots or toolkits are particularly used to teach robotics or computer science. These toolkits can be physically manipulated, allowing students to learn a variety of disciplines across engineering. However, the human-like shape of humanoids makes them easier to interact with, and for this reason they are often used for language lessons.

Humanoids have the ability to provide real-time feedback, and their physical shape increases engagement. This often leads to a personal connection with the student, which research shows can help resolve issues related to shyness, reluctance, confidence and frustration that may arise in dealing with a human teacher. For example, a robot will not get tired no matter how many mistakes a child makes.

Humanoid robots are being widely utilised in classrooms in many countries, including Japan and South Korea.


Nao, Pepper, Tiro, IROBI, and Robovie, for example, are primarily used to teach English. 

Telepresence – where a teacher can remotely connect to the classroom through the robot – is also being used as a way to teach students English. The teacher can participate in the classroom by being virtually present through a display mechanism. In some instances, the display is embedded in the robot’s torso.

Western countries have been much more hesitant to integrate robots into classrooms, with privacy, developmental hindrances, rising unemployment and technical deficiencies cited as the major drawbacks.

Robots as learning tools, not teachers: 

Humanoid robots are still a fair way away from being autonomously situated in schools due mainly to technological limitations such as inaccurate speech or emotion recognition.

However, the intention of most researchers in robotics is not for robots to replace teachers. Rather, most robots are designed to function as classroom aids, enhancing the value they can bring as a stimulating and engaging educational tool.

In order to facilitate the integration of robots in the classroom, we need to be able to provide appropriate interfacing mechanisms (software, hardware or even mobile apps), allowing the human teacher to control the robot with minimal training.

Omar Mubin, Lecturer in human-centred computing & human-computer interaction, Western Sydney University and Muneeb Imtiaz Ahmad, PhD Candidate in Social Robotics, Western Sydney University

This article was originally published on The Conversation. Read the original article.
The Conversation
Read More........

Forget drones, self-flying RoboBees may soon take flight

Washington DC, June 8

Our skies are about to get a lot more high-tech, as a team of researchers is developing robotic bees that can fly themselves. Harvard University's National Science Foundation (NSF)-supported RoboBees project aims to create autonomous robotic insects capable of sustained, independent flight. Such robots could one day assist in reconnaissance, aid in remote communication or even act as artificial pollinators.

Led by principal investigator Robert Wood, the researchers have designed increasingly sophisticated and tiny robots with a range of features that will one day enable autonomous flying. To do so, the team needed to advance basic research in a number of areas where they saw obstacles to realizing their vision: from micro-manufacturing methods and materials for actuation, to small-scale energy storage and algorithms to effectively control individuals and coordinated swarms of robots.

The group's research led to breakthroughs in each of these areas. Highlights include new methods for manufacturing millimeter-scale devices based on lamination and folding; new sensors applicable to low-power and mobile computing applications; architectures for ultra-low-power computing; and coordination algorithms for collections of hundreds or even thousands of robots to work together.

The team was inspired by nature, specifically the incredible ability of small insects to self-launch, navigate and perform agile actions despite their small bodies. "Bees and other social insects provide a fascinating model for engineered systems that can maneuver in unstructured environments, sense their surroundings, communicate and perform complex tasks as a collective full of relatively simple individuals," Wood said. "The RoboBees project grew out of this inspiration and has developed solutions to numerous fundamental challenges -- challenges that are motivated by the small scale of the individual and large scale of the collective."
Today's RoboBee weighs only 84 mg - about the size of a real bee, and even lighter - and represents a model of successful interdisciplinary collaboration. Wood estimates it will take another five to 10 years before the RoboBee might be ready for use in the real world. - ANI. Source: http://www.tribuneindia.com/
Read More........

Bionic kangaroo produces its own energy, human gestures control every move

© Photo: Voice of Russia
A bionic kangaroo has been built that recovers and reuses its own energy. Festo, a German automation firm, has made the robotic animal respond to human gestures in order to move about. The robotic creature stores up and reuses the kinetic energy from its own movements.
Weighing in at 15.4 pounds and standing about three feet, three inches tall, the robot does not move as swiftly as its natural-born counterpart. Inventors took more than two years to develop the tech animal, with its body parts consisting of elastic tendons, pneumatics and servos to create enough energy for its signature move: hopping. Inside each leg is a pneumatic cylinder paired with an elastic tendon. When the robotic creature is switched on, the tendons are already pre-tensed and the robot leans forward, shifting its center of gravity. The pneumatic cylinders release the tendons only once the ideal angle and velocity are reached - then the robot jumps into the air. Servos move the legs forward and lift the tail. On landing, the robot converts the kinetic energy of the jump back into stored energy.

In addition to recovering its own energy, the robotic kangaroo uses lithium polymer rechargeable batteries, which can be taken out of the kangaroo to be charged. The robot is gesture-controlled: the team fitted it with a Bluetooth link so an operator can command it using a special bracelet, according to an article on cnet.com.au. The BionicKangaroo will not be made available for commercial use. The creation is rather a proof of concept showing how pneumatic and electric drive technology can come together to recover and store energy in a robot. Festo has created a short video for viewers to see the robotic kangaroo in action. The clip, which lasts about three minutes, shows just how far the technology has come, and can be watched on Festo.com or YouTube. Source: http://sputniknews.com/
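Festo has not published the robot's energy figures, but the recover-and-reuse principle can be sketched as a simple energy balance: the kinetic energy available at touchdown is banked in the elastic tendons and returned, minus losses, on the next hop. The mass below matches the reported 15.4 lb (~7 kg); the hop height and recovery efficiency are assumed values for illustration:

```python
G = 9.81  # gravitational acceleration, m/s^2

def landing_energy_j(mass_kg, drop_height_m):
    """Kinetic energy available at touchdown after a hop of the given height."""
    return mass_kg * G * drop_height_m

def next_jump_height_m(mass_kg, stored_j, recovery_efficiency):
    """Height reachable if a fraction of the stored energy is returned
    by the pre-tensed elastic tendons (efficiency is an assumed value)."""
    return recovery_efficiency * stored_j / (mass_kg * G)

mass = 7.0     # kg, ~15.4 lb as reported
hop = 0.40     # m, a plausible hop height (assumed)
stored = landing_energy_j(mass, hop)
reuse = next_jump_height_m(mass, stored, recovery_efficiency=0.7)
```

Note that the reusable height is simply efficiency × previous height: the mass cancels out, so the recovery fraction is the figure that matters, and the batteries only have to top up the losses.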
Read More........

'Next generation' humanoid robot revealed


Boston Dynamics has revealed the latest version of its Atlas humanoid robot, featuring eerily lifelike movements and reactions. This new generation of the Atlas robot - designed to operate both outdoors and inside - is specialized for mobile manipulation. Electrically powered and hydraulically actuated, it uses sensors in its body and legs to balance, with LIDAR and stereo sensors in its head to avoid obstacles, assess the terrain, help with navigation and manipulate objects. The machine is 5' 9" tall (about a head shorter than the DRC Atlas) and weighs 180 lb (81 kg), much lighter than its 330 lb (150 kg) predecessor from a couple of years ago. Source: http://www.futuretimeline.net/
Read More........

Robots predicted to mate to produce superior offspring


© Flickr.com/ cgfaulkner/cc-by-nc-sa 3.0
British scientists believe it will take about 30 years for robots to start mating with one another to produce offspring that will be even more advanced than their robotic parents intellectually, technologically and functionally. But just how far-fetched is this picture of our not-so-distant future? Russian engineers and futurologists have answered this question for the Voice of Russia.
Alexander Pelt: Russia has grown skeptical of the endless findings that British scientists seem to be constantly coming up with. And indeed it is getting increasingly harder to believe in all those fantastic - bordering on absurd - discoveries that make headlines in the UK, the most recent being math-doing plants and homely women that can be bad for your liver. Now a British artificial intelligence engineer and novelist, George Zarkadakis, has suggested that in the near future androids could have sex, though not for pleasure, but essentially to reproduce and create "super" offspring.

This may indeed happen, Russian futurologist Maxim Kalashnikov says, although the process will probably be very different from human procreation. "Robots don't need to have sex as we know it to self-replicate. Robots are sexless as they are, so there's probably no need to make them imitate human reproductive behavior. A robot with an artificial intelligence will be able to switch on production lines and build more of its kind." Hence, we can't really expect robot sex and birth-giving to resemble those typical of humankind. One possible way for robots to self-replicate could be to have them swap software and print out their "children" on a 3D printer, which means the entire cycle from robotic "conception" to birth would take minutes.

George Zarkadakis also says that sex could protect robots from viruses and make them more robust. The British visionary even predicted that robots might breed with humans, creating powerful hybrid species that could potentially have amazing new capabilities. Does this mean that Darwin's theory of evolution can hold true for machines as well as for mankind? Eduard Proidakov, an analyst with the Russian technological NGO Modernization, says the idea is too far-fetched, because the human race itself has not been properly studied yet. "A human as a machine is a biological system that requires in-depth study. Thirty years is a short span. I suspect we are in for tremendous breakthroughs on how the human organism works. Perhaps we will be able to print out organs in 10-15 years. As for these hybrids, I don't think it's a viable thought. They would be sort of cyborgs. I don't think the human race will go that way."

In this sense, it's worth remembering how the ideas of Jules Verne were viewed in his time. His novels about travelling to the Moon and the underwater exploits of the boat Nautilus were merely smiled at by 19th-century scientists. So don't be too surprised if you meet a hulking man in the street who tells you in a metallic voice: "I need your clothes, your boots and your motorcycle." That'd just mean that British scientists have guessed something right for once and robots got their hands on 3D printers. Source: http://sputniknews.com/
Read More........

Russia Developing Terrorist-Killer Robots

Russian experts are developing robots designed to minimize casualties in terrorist attacks and neutralize terrorists, Deputy Prime Minister Dmitry Rogozin said on Friday.
Robots could also help evacuate injured servicemen and civilians from the scene of a terrorist attack, said Rogozin, who oversees the defense industry. Other antiterror equipment Russia is developing includes systems that can see terrorists through obstacles and effectively engage them in standoff mode at long distance without injuring their hostages, he said. Rogozin did not say when the equipment might be deployed by Russia's security and intelligence services.

Human Rights Watch has criticized fully autonomous weapons, known as "killer robots", which would be able to select and engage targets without human intervention, and has called for a preemptive prohibition on such weapons. "Fully autonomous weapons do not exist yet, but they are being developed by several countries, and precursors to fully autonomous weapons have already been deployed by high-tech militaries," HRW said in a statement on its website. "Some experts predict that fully autonomous weapons could be operational in 20 to 30 years," the human rights watchdog said. Voice of Russia, RIA. Source: http://sputniknews.com/
Read More........

Next generation drones design inspired by nature

© Photo: East News
After taking inspiration from birds, bats, insects and even flying snakes, researchers from 14 teams have come up with new designs for next-generation drones and flying robots. These robots could perform tasks ranging from military surveillance to search and rescue, News Tonight reports.
Olga Yazhgunovich: These robots may look similar to many things that nature has given us in abundance, as the flying robots will resemble insects and butterflies, Design and Trend says. A report in EurekAlert says that scientists are working on different types of drones that look like different insects and animals, and that they have successfully created the smallest drone of all, measuring merely a millimeter in size. The journal Bioinspiration and Biomimetics has published fascinating details of how the look and shape of robotic drones are going to develop in the future. These drones come with exquisite flight control and can overcome many of the problems drones face when navigating urban terrain.

There is no denying that flying drones are going to be of immense use in different fields in the coming days. The success of a flying robot depends, obviously, on the exactitude of its flight control, and nothing has more meticulous flight control than the creatures born with the gift of flight. Experts are very optimistic about the design and success of such flying robots. Dr. David Lentink of Stanford University says, "Flying animals can be found everywhere in our cities... From scavenging pigeons to alcohol-sniffing fruit flies that make precision landings on our wine glasses, these animals have quickly learnt how to control their flight through urban environments to exploit our resources."

One of the most interesting robotic drones, under development in Hungary, mimics the flocking of birds. It does this with an algorithm that allows drones to huddle together while flying through the air. By studying how tiny insects stabilize themselves in turbulent air, researchers have also informed the design of many future drones.
One team of researchers, from the University of Maryland, engineered sensors for their experimental drone based on insects' eyes, to mimic their amazing capability of flight in clutter. These "eyes" act as cameras recording the drone's actual position, which is monitored by engineers via an on-board computer. Other researchers have designed a raptor-like appendage for a drone that can grasp objects at high speed by swooping in like a bird of prey. And a team led by Prof. Kenny Breuer at Brown University has designed an eerily accurate robotic copy of a bat wing with a high range of movement, tolerance and flexibility. Prof. Lentink added that membrane-based bat wings adapt better to airflow and are hard to break.

A few issues will have to be sorted out for such robots to succeed. According to the report, one of the biggest challenges facing robotic drones is the ability to survive the elements, such as extreme heat, bitter cold and especially strong winds. To address this, a team of researchers studied hawk moths as they battled different whirlwind conditions in a vortex chamber, in order to harness their superior flight control mechanisms. Another report in Bioinspiration and Biomimetics says more than a dozen teams are creating flying robots that look like insects and butterflies, and that fly in unconventional as well as conventional ways, allowing them to fly freely in dense jungles where other drones cannot. Source: http://sputniknews.com/
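The article does not detail the Hungarian team's flocking algorithm, but the classic boids-style rules such work builds on - steer toward the group while keeping a minimum separation - can be sketched in a few lines. All parameter values below are illustrative, not taken from any published drone system:

```python
def flock_step(positions, velocities, dt=0.1,
               cohesion=0.05, separation=0.2, min_dist=1.0):
    """One update of a minimal 2-D boids-style rule set (cohesion +
    separation only; real drone-flocking algorithms add alignment and
    communication-delay handling on top of this)."""
    n = len(positions)
    cx = sum(p[0] for p in positions) / n   # flock centre
    cy = sum(p[1] for p in positions) / n
    new_pos, new_vel = [], []
    for i, (x, y) in enumerate(positions):
        vx, vy = velocities[i]
        vx += cohesion * (cx - x)           # steer toward the flock centre
        vy += cohesion * (cy - y)
        for j, (ox, oy) in enumerate(positions):
            if i != j:
                dx, dy = x - ox, y - oy
                d2 = dx * dx + dy * dy
                if 0 < d2 < min_dist ** 2:  # too close: push apart
                    vx += separation * dx
                    vy += separation * dy
        new_vel.append((vx, vy))
        new_pos.append((x + vx * dt, y + vy * dt))
    return new_pos, new_vel

# Two drones far apart drift toward one another under the cohesion rule.
pos, vel = flock_step([(0.0, 0.0), (10.0, 0.0)], [(0.0, 0.0), (0.0, 0.0)])
```

Run repeatedly, the same two rules produce the "huddling" behaviour described above without any drone knowing the whole flight plan.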
Read More........

Researchers Teach Machines To Learn Like Humans


A team of scientists has developed an algorithm that captures our learning abilities, enabling computers to recognize and draw simple visual concepts that are mostly indistinguishable from those created by humans. The work, which appears in the latest issue of the journal Science, marks a significant advance in the field -- one that dramatically shortens the time it takes computers to 'learn' new concepts and broadens their application to more creative tasks. A team of scientists has developed an algorithm that captures our learning abilities, enabling computers to recognize and draw simple visual concepts that are mostly indistinguishable from those created by humans. "Our results show that by reverse engineering how people think about a problem, we can develop better algorithms," explains Brenden Lake, a Moore-Sloan Data Science Fellow at New York University and the paper's lead author. "Moreover, this work points to promising methods to narrow the gap for other machine learning tasks." The paper's other authors were Ruslan Salakhutdinov, an assistant professor of Computer Science at the University of Toronto, and Joshua Tenenbaum, a professor at MIT in the Department of Brain and Cognitive Sciences and the Center for Brains, Minds and Machines. When humans are exposed to a new concept -- such as new piece of kitchen equipment, a new dance move, or a new letter in an unfamiliar alphabet -- they often need only a few examples to understand its make-up and recognize new instances. While machines can now replicate some pattern-recognition tasks previously done only by humans -- ATMs reading the numbers written on a check, for instance -- machines typically need to be given hundreds or thousands of examples to perform with similar accuracy. "It has been very difficult to build machines that require as little data as humans when learning a new concept," observes Salakhutdinov. 
"Replicating these abilities is an exciting area of research connecting machine learning, statistics, computer vision, and cognitive science." Salakhutdinov helped to launch recent interest in learning with 'deep neural networks,' in a paper published in Science almost 10 years ago with his doctoral advisor Geoffrey Hinton. Their algorithm learned the structure of 10 handwritten character concepts -- the digits 0-9 -- from 6,000 examples each, or a total of 60,000 training examples. In the work appearing in Science this week, the researchers sought to shorten the learning process and make it more akin to the way humans acquire and apply new knowledge -- i.e., learning from a small number of examples and performing a range of tasks, such as generating new examples of a concept or generating whole new concepts. To do so, they developed a 'Bayesian Program Learning' (BPL) framework, where concepts are represented as simple computer programs. For instance, the letter 'A' is represented by computer code -- resembling the work of a computer programmer -- that generates examples of that letter when the code is run. Yet no programmer is required during the learning process: the algorithm programs itself by constructing code to produce the letter it sees. Also, unlike standard computer programs that produce the same output every time they run, these probabilistic programs produce different outputs at each execution. This allows them to capture the way instances of a concept vary, such as the differences between how two people draw the letter 'A.' While standard pattern recognition algorithms represent concepts as configurations of pixels or collections of features, the BPL approach learns "generative models" of processes in the world, making learning a matter of 'model building' or 'explaining' the data provided to the algorithm. 
In the case of writing and recognizing letters, BPL is designed to capture both the causal and compositional properties of real-world processes, allowing the algorithm to use data more efficiently. The model also "learns to learn" by using knowledge from previous concepts to speed learning on new concepts -- e.g., using knowledge of the Latin alphabet to learn letters in the Greek alphabet. The authors applied their model to over 1,600 types of handwritten characters in 50 of the world's writing systems, including Sanskrit, Tibetan, Gujarati, Glagolitic -- and even invented characters such as those from the television series Futurama. In addition to testing the algorithm's ability to recognize new instances of a concept, the authors asked both humans and computers to reproduce a series of handwritten characters after being shown a single example of each character, or in some cases, to create new characters in the style of those it had been shown. The scientists then compared the outputs from both humans and machines through 'visual Turing tests.' Here, human judges were given paired examples of both the human and machine output, along with the original prompt, and asked to identify which of the symbols were produced by the computer. While judges' correct responses varied across characters, for each visual Turing test, fewer than 25 percent of judges performed significantly better than chance in assessing whether a machine or a human produced a given set of symbols. "Before they get to kindergarten, children learn to recognize new concepts from just a single example, and can even imagine new examples they haven't seen," notes Tenenbaum. "I've wanted to build models of these remarkable abilities since my own doctoral work in the late nineties. 
We are still far from building machines as smart as a human child, but this is the first time we have had a machine able to learn and use a large class of real-world concepts -- even simple visual concepts such as handwritten characters -- in ways that are hard to tell apart from humans." Contacts and sources: James Devitt, New York University. Source: http://www.ineffableisland.com/ Image: https://pixabay.com/, under Creative Commons CC0
Read More........

'Robot scientist' Eve can discover new drugs faster

London: An artificially-intelligent "robot scientist" could make drug discovery faster and much cheaper, say researchers from the University of Cambridge. The "robot scientist", called Eve, discovered that a compound shown to have anti-cancer properties might also be used in the fight against malaria. Eve exploits its artificial intelligence to learn from early successes in her screens and select compounds that have a high probability of being active against the chosen drug target. A smart screening system, based on genetically engineered yeast, is used. "This allows Eve to exclude compounds that are toxic to cells and select those that block the action of the parasite protein while leaving any equivalent human protein unscathed," explained Professor Steve Oliver from the Cambridge Systems Biology Centre and the Department of Biochemistry. This reduces the costs, uncertainty, and time involved in drug screening, and has the potential to improve the lives of millions of people worldwide. "Neglected tropical diseases are a scourge of humanity, infecting hundreds of millions of people, and killing millions of people every year," Oliver added. Eve is designed to automate early-stage drug design. First, she systematically tests each member from a large set of compounds in the standard brute-force way of conventional mass screening. The compounds are then screened against assays (tests) that are designed to be automatically engineered, and can be generated much faster and more cheaply than the bespoke assays that are currently standard. "This enables more types of assay to be applied, more efficient use of screening facilities to be made, and thereby increases the probability of a discovery within a given budget," Oliver noted. Eve's robotic system is capable of screening over 10,000 compounds per day, concluded the paper, which appeared in the Royal Society journal Interface. Source: ummid.com
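Eve's strategy of learning from early hits and prioritising promising compounds is, in machine-learning terms, an active-learning loop. The sketch below is a generic illustration of that idea, not Eve's actual software: the function names, the batch size, and the toy demo (in which a compound's descriptor directly predicts its activity) are all invented.

```python
def active_screen(compounds, assay, predict, update, budget, batch=10):
    """Generic active-learning screening loop (illustrative only):
    rank untested compounds by predicted activity, assay the top batch,
    feed the results back into the model, repeat until the budget runs out."""
    untested, hits, model = list(compounds), [], None
    while untested and budget > 0:
        untested.sort(key=lambda c: predict(model, c), reverse=True)
        chosen, untested = untested[:batch], untested[batch:]
        results = [(c, assay(c)) for c in chosen]   # the wet-lab step
        hits += [c for c, active in results if active]
        model = update(model, results)
        budget -= len(chosen)
    return hits

# Toy demo: compounds are scalar descriptors; the true actives are those
# above 0.9, and the (static) model simply trusts the descriptor.
hits = active_screen(
    compounds=[i / 100 for i in range(100)],
    assay=lambda c: c > 0.9,            # stand-in for the yeast assay
    predict=lambda model, c: c,         # stand-in for the learned model
    update=lambda model, results: model,
    budget=20,
)
print(len(hits))  # 9: all nine actives found within the first two batches
```

Because the predicted-best compounds are assayed first, most actives are found early, which is what lets a fixed screening budget go further than brute-force enumeration.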
Read More........

NASA Curiosity rover moves to new location on Mars

Washington, August 20: NASA's Mars Curiosity rover is driving towards the southwest after departing a region where for several weeks it investigated a geological contact zone and rocks that are unexpectedly high in silica and hydrogen. The hydrogen indicates water bound to minerals in the ground, NASA said. In the 'Marias Pass' region, Curiosity successfully used its drill to sample a rock target called 'Buckskin' and then used the camera on its robotic arm for multiple images to be stitched into a self-portrait at the drilling site. The rover finished activities in Marias Pass on August 12 and headed onward up Mount Sharp, the layered mountain it reached in September 2014. In drives on August 12, 13, 14 and 18, it progressed 433 feet (132 meters), bringing Curiosity's total odometry since its August 2012 landing to 11.1 kilometres. Curiosity is carrying with it some of the sample powder drilled from Buckskin. The rover's internal laboratories are analysing the material. The mission's science team members seek to understand why this area bears rocks with significantly higher levels of silica and hydrogen than other areas the rover has traversed. Silica, monitored with Curiosity's laser-firing Chemistry and Camera (ChemCam) instrument, is a rock-forming chemical containing silicon and oxygen, commonly found on Earth as quartz. Hydrogen in the ground beneath the rover is monitored by the rover's Dynamic Albedo of Neutrons (DAN) instrument. It has been detected at low levels everywhere Curiosity has driven and is interpreted as the hydrogen in water molecules or hydroxyl ions bound within or adsorbed onto minerals in the rocks and soil. "The ground about 1 meter beneath the rover in this area holds three or four times as much water as the ground anywhere else Curiosity has driven during its three years on Mars," said DAN Principal Investigator Igor Mitrofanov of Space Research Institute, Moscow.
DAN first detected the unexpectedly high level of hydrogen using its passive mode. Later, the rover drove back over the area using DAN in active mode, in which the instrument shoots neutrons into the ground and detects those that bounce back from the subsurface, interacting preferentially with hydrogen. The measurements confirmed hydrated material covered by a thin layer of drier material. Curiosity initially noted the area with high silica and hydrogen on May 21 while climbing to a site where two types of sedimentary bedrock lie in contact with each other. Such contact zones can hold clues about ancient changes in environment, from conditions that produced the older rock type to conditions that produced the younger one. — PTI. Source: Article
Read More........

Tech men fear ‘killer robots’

DSC_0126
BUENOS AIRES—It sounds like a science-fiction nightmare. But “killer robots” have the likes of British scientist Stephen Hawking and Apple co-founder Steve Wozniak fretting, and warning they could fuel ethnic cleansing and an arms race. Autonomous weapons, which use artificial intelligence to select targets without human intervention, have been described as “the third revolution in warfare, after gunpowder and nuclear arms” in an open letter signed by around 1,000 technology chiefs. Unlike drones, which require a human hand in their action, this kind of robot would have some autonomous decision-making ability and capacity to act. “The key question for humanity today is whether to start a global AI [artificial intelligence] arms race or to prevent it from starting,” they wrote. “If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable,” said the letter released at the opening of the 2015 International Joint Conference on Artificial Intelligence in Buenos Aires. The idea of an automated killing machine—made famous by Arnold Schwarzenegger’s “Terminator”—is moving swiftly from science fiction to reality, according to the scientists. “The deployment of such systems is—practically if not legally—feasible within years, not decades,” the letter said. The scientists painted the doomsday scenario of autonomous weapons falling into the hands of terrorists, dictators or warlords hoping to carry out ethnic cleansing. “There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people,” the letter said. In addition, the development of such weapons, while potentially reducing the extent of battlefield casualties, might also lower the threshold for going to battle, noted the scientists.
The group concluded with an appeal for a “ban on offensive autonomous weapons beyond meaningful human control.” Elon Musk, the billionaire co-founder of PayPal and head of SpaceX, a private space-travel technology venture, also urged the public to join the campaign. “If you’re against a military AI arms race, please sign this open letter,” tweeted the tech boss. Sounding a touch more moderate, however, was Australia’s Toby Walsh. The artificial intelligence professor at NICTA and the University of New South Wales noted that all technologies have potential for being used for good and evil ends. Ricardo Rodriguez, an AI researcher at the University of Buenos Aires, also said worries could be overstated. “Hawking believes that we are closing in on the Apocalypse with robots, and that in the end, AI will be competing with human intelligence,” he said. “But the fact is that we are far from making killer military robots.” Authorities are gradually waking up to the risk of robot wars. Last May, for the first time, governments began talks on the so-called “lethal autonomous weapons systems.” In 2012, Washington imposed a 10-year human control requirement on automated weapons, which was welcomed by campaigners even though they said it should go further. There have been examples of weapons being stopped in their infancy. After UN-backed talks, blinding laser weapons were banned in 1998 before they ever hit the battlefield. Source: Article. Image: flickr.com
Read More........

History Making Operation Gives Man Two Prosthetic Arms And Hands He Can Control

Baugh completes a task showcasing his control of the MPL. Image Credit: Johns Hopkins University Applied Physics Laboratory
A Colorado man made history at the Johns Hopkins University Applied Physics Laboratory (APL) this summer when he became the first bilateral shoulder-level amputee to wear and simultaneously control two of the Laboratory’s Modular Prosthetic Limbs. Most importantly, Les Baugh, who lost both arms in an electrical accident 40 years ago, was able to operate the system by simply thinking about moving his limbs, performing a variety of tasks during a short training period. Baugh was in town for two weeks in June as part of an APL-funded research effort to further assess the usability of the MPL, developed over the past decade as part of the Revolutionizing Prosthetics Program. Before putting the limb system through the paces, Baugh had to undergo a surgery at Johns Hopkins Hospital known as targeted muscle
reinnervation. “It’s a relatively new surgical procedure that reassigns nerves that once controlled the arm and the hand,” explained Johns Hopkins Trauma Surgeon Albert Chi, M.D. “By reassigning existing nerves, we can make it possible for people who have had upper-arm amputations to control their prosthetic devices by merely thinking about the action they want to perform.” After recovery, Baugh visited the Laboratory for training on the use of the MPLs. First, he worked with researchers on the pattern recognition system. “We use pattern recognition algorithms to identify individual muscles that are contracting, how well they communicate with each other, and their amplitude and frequency,” Chi explained. “We take that information and translate that into actual movements within a prosthetic.”
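Chi's description, identifying which muscles contract and mapping their amplitude and frequency to movements, is the classic myoelectric pattern-recognition pipeline. The sketch below is a deliberately simplified stand-in, not APL's algorithm: it extracts two textbook surface-EMG features from a signal window and decodes a movement with a nearest-centroid rule, with all signal values and class names invented.

```python
from math import sqrt

def emg_features(window):
    """Two classic surface-EMG features for one channel window:
    mean absolute value (amplitude) and zero-crossing count
    (a crude proxy for frequency content)."""
    mav = sum(abs(x) for x in window) / len(window)
    zc = sum(1 for a, b in zip(window, window[1:]) if a * b < 0)
    return (mav, zc)

def classify(window, centroids):
    """Nearest-centroid decoder: map the window's feature vector to the
    movement whose training centroid is closest in feature space."""
    f = emg_features(window)
    return min(centroids,
               key=lambda m: sqrt(sum((a - b) ** 2
                                      for a, b in zip(f, centroids[m]))))

# Invented training data: a quiet 'rest' signal and a strong 'grip' signal.
rest = [0.01, -0.01] * 50
grip = [0.9, -0.8, 0.7, -0.9] * 25
centroids = {"rest": emg_features(rest), "grip": emg_features(grip)}
```

In a real system the centroids (or a richer classifier) are fit per patient during the training sessions the article describes, and the decoded label drives the prosthetic's motion controller.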
Then Baugh was fitted for a custom socket for his torso and shoulders that supports the prosthetic limbs and also makes the neurological connections with the reinnervated nerves. While the socket got its finishing touches, the team had him work with the limb system through a Virtual Integration Environment (VIE), a virtual-reality version of the MPL. The VIE is completely interchangeable with the prosthetic limbs and through APL’s licensing process currently provides 19 groups in the research community with a low-cost means of testing brain–computer interfaces. It’s being used to test novel neural interface methods and study phantom limb pain, and serves as a portable training system.
Bobby Armiger observes Baugh handing Albert Chi, M.D., a ball. Image Credit: Johns Hopkins University Applied Physics Laboratory
By the time the socket was finished, Baugh said he was more than ready to get started. When he was fitted with the socket, and the prosthetic limbs were attached, he said “I just went into a whole different world.” He moved several objects, including an empty cup from a counter-shelf height to a higher shelf, a task that required him to coordinate the control of eight separate motions to complete. “This task simulated activities that may commonly be faced in a day-to-day environment at home,” said APL’s Courtney Moran, a prosthetist working with Baugh. “This was significant because this is not possible with currently available prostheses. He was able to do this with only 10 days of training, which demonstrates the intuitive nature of the control.”
APL prosthetist Courtney Moran looks on as Les Baugh tests out the Modular Prosthetic Limbs. Image Credit: Johns Hopkins University Applied Physics Laboratory
Moran said the research team was floored by what Baugh was able to accomplish. “We expected him to exceed performance compared to what he might achieve with conventional systems, but the speed with which he learned motions and the number of motions he was able to control in such a short period of time was far beyond expectation,” she said. “What really was amazing, and was another major milestone with MPL control, was his ability to control a combination of motions across both arms at the same time. This was a first for simultaneous bimanual control.” RP Principal Investigator Michael McLoughlin said “I think we are just getting started. It’s like the early days of the Internet. There is just a tremendous amount of potential ahead of us, and we’ve just started down this road. And I think the next five to 10 years are going to bring phenomenal advancement.” The next step, McLoughlin said, is to send Baugh home with a pair of limb systems so that he can see how they integrate with his everyday life. Baugh is looking forward to that day. “Maybe for once I’ll be able to put change in the pop machine and get pop out of it,” he said. He’s looking forward to doing “simple things that most people don’t think of. And it’s re-available to me.” Contacts and sources: Paulette Campbell, The Applied Physics Laboratory, The Johns Hopkins University. Source: Article
Read More........

Robots to replace teachers at school in Abu Dhabi


Robots will replace teachers at a school in Abu Dhabi, after the Merryland International School made an investment of 500,000 Dirhams (£80,000) in 30 robots. The Humanoid AISOY Raspberry Pi robot will teach basic maths while Nao, a 57-cm tall Evolution Humanoid robot, will teach children with special needs.
Students from grade three to 12 will benefit from the new robotic lab. Commenting on the announcement, Susheela George, the founder of the private school, told Gulf News: "I have sourced out some of the best and most advanced robots including humanoids, quadrupeds, hexapods, flying robots and pet robots from all over the world. Our aim is to mould future scientists, designers, engineers and leaders." Nao can walk, talk and recognise human emotions. The programmable robot can also be used to explore topics in robotics, computer science, and social sciences. The school has also invested in Genibo the robot dog which falls asleep if the students are not paying attention. "The course will start in full swing next year after the teachers have completed their training from Pittsburgh University in the US," said George. George Fernandes, head of department, ICT, said the robotics lab is just the beginning: "We plan to go a long way in making robotics science a subject of study for our students. Children can learn from scratch the science of assembling a robot. At an advanced level, high school students will be taught how to program robots with built-in intelligence," said Fernandes. The robotic lab was inaugurated on August 22 by Dr. John Netting, director general of European Business Assembly (EBA), Oxford and Dr. Martin Moore-Ede and Dr. Donna Martin from Harvard University. Source: Inavateonthenet
Read More........

Robots will become part of the community in the year 2029

With the steady development of robotics and artificial intelligence, it seems that robots will soon become part of our everyday lives. At least, this is one of the prevailing theories of the moment, and it is championed by one of the pioneers of the field, Ray Kurzweil. According to Kurzweil, who leads work on robotics and artificial intelligence at Google, by the year 2029 robots will be able to interact with humans in a manner similar to human interaction. They will no longer be mere agents for carrying out difficult tasks; they may become part of the community, enter into emotional relationships with humans and respond in kind. This brings us to the android Pepper, which is able to read emotions and interact with them. It seems that the evolution of the robot will be reflected in communities in general, not just at the economic, technological and industrial levels but at the social level as well. However much we may disagree with Kurzweil, we must not forget his standing in the field, and that he speaks from experience. Source: Article
Read More........

Ethics: Robots, androids, and cyborgs

There may come a time when robots, androids, and cyborgs will be more than science fiction and develop "intelligence" and with intelligence comes decision-making, freedom, responsibility--ETHICS.

One of the local television stations last night dug into its vaults and aired "Westworld" [MGM-1973], written and directed by Michael Crichton and starring Yul Brynner, Richard Benjamin, and James Brolin. An inexpensive film shot on studio back lots, desert, and Harold Lloyd's estate, the film exploits dreams of a perfect fantasy vacation [at $1,000 a day] at an amusement facility called Delos, where the paying adventurer can choose from Roman World, Medieval World, and Western World. Sophisticated androids are the counterparts of the human visitors and bend to the will of human interaction with NO harm to the humans. Well, maybe. Minor glitches happen, which are expected in the complicated computer setup...normal malfunction parameters as expressed by a review board. It isn't long before the "glitches" become more complicated and numerous, until finally an android revolt ensues--utter chaos. Humans are dying. Not a good thing for the investors of Delos...paid realism with deathly results. Yul Brynner [the gunslinger from the "Western World"] runs amok, the scientists/programmers are sealed in their room with locked doors and perish from asphyxiation, James Brolin dies for real in a shootout with Yul Brynner, and the rest of the film is a quest by the gunslinger to get Richard Benjamin at all costs. Human ingenuity and reason finally foil the gunslinger, but the whole film, beyond its entertainment value, poses the question of the rise of mechanical machines driven by computer programs and the establishment of ethical values. The film never revealed why the androids changed [substandard, untested components?] nor why they took an "evil" and "destructive" stance. Why not a stance of superior intelligence? That would have produced a film of little interest, for sure. But the question remains as to the nature of the relationship of androids and the fostering of ethical principles.
Is the first stage of a society's ethical norms a function of a pool of negativity and antisocial behavior--such that, given time, the androids would have evolved into positive, functioning members of their own "species" and interacted well with other species? Where do ethical norms originate?

How about the ultimate android with an attitude problem and an unshakeable pessimistic disposition--"Marvin, the Paranoid Android," equipped with "GPP" [Genuine People Personalities], from the very popular British TV series and the film "Hitchhiker's Guide To The Galaxy". The story line is somewhat complex in this episodic tale, but here is a good summation by Joseph DeMarco:

"Narrowly escaping the destruction of the earth to make way for an intergalatic freeway, hitchhikers Arthur Dent (Earthling Idiot) and Ford Prefect (Writer for the Guide) go on a crazy journey across time and space. They are read bad poetry which is considered terrible torture, and they are almost sucked out an air lock into space. After almost being killed many times, and narrowly escaping at the end of each chapter, they join forces with Zaphod Beeblebrox (A two-headed cocky alien), Trillian (another worthless earthling) and Marvin (the depressed robot) to search for the answer to the meaning of life, which may have been hidden on the recently demolished earth."

When you are contemplating this topic consider the character "Data" from "Star Trek: Next Generation", and recall the 1990 episode of "Star Trek: Next Generation" [#64] called "The Offspring", whereby Data has created a daughter called "Lal". Lal is capable of perception and feeling and is given Data's "software" of ethics by "neural transfers". But Lal has some problems with citizens of the star ship. Befriended by Guinan, she is introduced to the inhabitants of "Ten Forward" to broaden her social intercourse. Data and Captain Picard are embroiled in a discussion regarding Lal's removal from the star ship when they are interrupted by an emergency message from Counselor Troi. Lal is dying...her functions broke down after experiencing an extraordinary gamut of feelings in the counselor's presence. All attempts to save Lal fail and she succumbs to what we humans all must face--DEATH. Curiously, Lal's demise may be attributed to a more advanced stage of sensitivity: she was unable to interface the new feelings with the supplied software. Consider Data's inability to experience the grief and emotion the crew feels at Lal's loss; he must be content to have only memories of Lal. Data may well be equipped with a sense of ethics when dealing with human issues of loyalty, responsibility, self-sacrifice, etc., but he, and all androids of his caliber, may never fully integrate the full range of human emotions--well beyond the ethics.

Remember Isaac Asimov's "Three Laws of Robotics", which I assume would apply to androids too? All is fine until something breaks down or a truly unique circumstance arises that confounds even the best minds of mankind.

1. A robot may not injure humans nor, through inaction, allow them to come to harm.

2. A robot must obey human orders except where such orders conflict with the First Law.

3. A robot must protect its own existence insofar as such protection does not conflict with the First or Second law.
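The strict precedence among the three laws can be made concrete as a priority filter: a higher law always overrides a lower one. The sketch below is a toy arbiter with invented predicates (`harms_human`, `ordered_by_human`, `self_destructive`), not a serious safety architecture.

```python
def permitted(action):
    """Toy arbiter: vet a proposed action against Asimov's laws in
    strict priority order. A higher law always overrides a lower one."""
    if action.get("harms_human"):        # First Law: never harm a human
        return False
    if action.get("ordered_by_human"):   # Second Law: obey, unless it violates the First
        return True
    if action.get("self_destructive"):   # Third Law: self-preserve, unless overridden above
        return False
    return True

# An order to harm a human is refused: the First Law outranks the Second.
print(permitted({"ordered_by_human": True, "harms_human": True}))  # False
```

Even this trivial encoding shows where the trouble starts: the predicates themselves ("does this harm a human?") are exactly the judgments that break down in the unique circumstances the essay worries about.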

Roger Clarke has written this detailed essay on Asimov's "laws of Robotics".

As I suspect...there will BE those unique events where Asimov's robot imperatives or any additional instructions will fail: "The freedom of fiction enabled Asimov to project the laws into many future scenarios; in so doing, he uncovered issues that will probably arise someday in real-world situations. Many aspects of the laws discussed in this article are likely to be weaknesses in any robotic code of conduct."

I suppose some wonder about definitions here. For example, is there a clear-cut distinction between robots and androids and another version--cyborgs? Maybe not, and all is a matter of semantics. And it may well be a futile effort to make such a distinction beyond what common sense provides. Cyborgs and androids clearly take on the mantle of sentient beings, whereas not all robots rise above being merely drones of a task-oriented character, such as Robbie the Robot [Forbidden Planet, "Lost In Space"] or "Marvin" ["Hitchhiker's Guide To The Galaxy"]. And consider too that the discussion here really exists within the realm of science fiction and is certainly not correlated to any real-life antecedents [yet], but it is still worthy of discussion and analysis. The advent of sophisticated computers, biotechnology, genetics, etc. forces us to become aware of the possibility of artificial devices becoming human-like and subject to the same issues that humans face--those pesky ethical dilemmas. The development and integration of these new forms may just well be part of the whole picture of evolution, as one writer suggested. Seeing the forest is impossible for us, and thus humans may not realize that "humans" aren't the only form of life in a complex evolutionary scheme; a carbon-based sentient being is neither the end product of evolution nor the only species to embrace ethical issues.

Steve Mizrach offers this essay on cyborg ethics.

Now consider the notion that species ethics are non-transferable; that a species' ethics is a category of one and implicitly forbids overlapping with another category. In such a case the attribution [transfer] of ethical principles [involving servitude and safety of the primary or transferring species, i.e. Asimov's "Three Laws"] would be impossible for an android. This suggests the following: each species is unique in its own ethics, and only chance would afford similarity. Earth residents have one set of ethics while residents of some very distant planet would have theirs--a unique set for each species that, except by chance, could well exhibit diametrically opposed ethics. A learning bridge for the sharing of ethics just may not exist--or the simple transfer of ethics that ensures a species' safety is impossible. Divergent species just may not have common grounds for mutual acceptance of ethics.

The notion of sound ethics stemming from religion/theology is not new and does carry some significance. [Unfortunately, on the whole, the implementation of such sound ethics has historically fallen short of worldwide demonstration.] Now, whether an android community would adopt ethical norms [be they their own constructs, or implanted, or borrowed from other beings] to ensure the safety and perpetuation of their species is another matter. It would be arrogant, despite what may appear beneficial, to assume that mankind's ethical resolutions are the best for all species of intelligence or even the only set of ethics in the universe. Androids may discover that the "self" is the most beneficial status, and one wonders just how long such a stance would last. Androids may have no community sense of ethics as we would understand it. Ethics could become twisted and inverted in substantial meaning relative to what we experience. It is quite correct that survival of the individual and perpetuation of the species are what establish a set of ethics. Most humans are guided by good ethics and do have a conscience. But you have to wonder just how far those great ethics are really understood and believed. There is no particular pleasure in killing another, but faced with a situation where food for one's survival is at issue, one would consider killing the intruder to sustain one's own existence. Maybe androids would have similar compunctions, or maybe they would have a different set of ethics that enables them to defuse the life-and-death ethical situation.

"From the far horizons of the unknown come transcribed tales of new dimensions in time and space. These are stories of the future, adventures in which you'll live in a million could-be years on a thousand may-be worlds. The National Broadcasting Company in cooperation with Street & Smith, publishers of Astounding Science Fiction Magazine present: "X Minus One".

For you old time radio fans of yesteryear, X Minus One offered many episodes of robots, androids, humanoids, and the like, but one of the most delightful was an episode called "How To" [Episode #45, which aired April 3rd, 1956]. The story was by Clifford D. Simak, the radio transcription was by William Welch, and it starred Alan Bunce, Ann Seymour, Les Demon, Joe Bell, Jane Bruce, Santos Ortega, and Ben Grauer. As the plot indicates: "A man orders a do-it-yourself robotic dog kit and is accidentally sent a kit for an experimental robot humanoid. The mechanical man is both a blessing and a curse." [Troy Holaday]. This has just about everything: benevolent robots, counterfeiters, tax men, lawyers.

Let's suppose for the sake of the following argument that androids exist and that they have a set of ethics akin to man: A right to life [murder prohibited], mercy, altruism, etc.--including jurisprudence. Jurisprudence for androids?--yes. If they mirror human ethics of conduct, then they must also abide by the laws of human society and be subject to all of the ramifications.

Robert A. Freitas Jr. offers this essay on jurisprudence.

And finally..."Jennifer, an emotionally troubled whiz kid with obsessive-compulsive disorder, is desperate to find her birth mother in China. But she's also petrified to leave her house. So she uses her technological genius to build Jenny Chow, a surrogate devoid of dysfunction, to take the journey in her place."--New York Academy of Sciences.


A new play called "The Intelligent Design of Jenny Chow," by Rolin Jones, has opened; it concerns a lonely young woman [Jennifer Marcus], a genius with acute agoraphobia who builds a companion--an android called Jenny Chow. While reviewer Charles Isherwood is dismayed at the overall tone of the play, especially the whimsical demeanor of the android, it nevertheless illustrates the value of, shall we say, an alternate personality--far more complex and interactive than the standard doll or teddy bear of childhood. For those individuals who find it difficult or impossible to relate to "real" people or the "real" world, such an android is not without merit, for it will offer comfort, lessen loneliness, offer interaction on that person's level of communication, and may possess therapeutic value. Chemical therapy may not be needed--just someone to talk to would be far more beneficial. Source: Article
Read More........

Scientists develop robot with 'feelings'

A 'friendly robot' has been developed to help scientists understand how long-term relationships may be forged between humans and androids. The robot called ERWIN (Emotional Robot with Intelligent Network) is the brainchild of Dr John Murray, from the School of Computer Science, University of Lincoln, UK. It is now being used as part of a study to find out how some of the human-like thought biases in robot characteristics affect the human-robot relationship. It is hoped the research will not only help scientists to understand and develop better, more realistic relationships between humans and 'companion' robots, but that it could also help to inform how relationships are formed by children with autism, Asperger syndrome or attachment disorder. "Cognitive biases make humans what they are, fashioning characteristics and personality, complete with errors and imperfections. Therefore, introducing cognitive biases in a robot's characteristics makes the robot imperfect by nature, but also more human-like," said PhD student Mriganka Biswas. "Based on human interactions and relationships, we will introduce 'characteristics' and 'personalities' to the robot. If we can explain how human-to-human long-term relationships begin and develop, then it would be easier to plan the human-robot relationship," said Biswas. When two people interact for the first time, if the two different personalities attract each other, a relationship forms. But, in the case of conventional human-robot interaction, after gathering information about the robot, the robot's lack of identifiable characteristics and personality prevents any relationship bond developing.  ERWIN has the ability to express five basic emotions while interacting with a human. "Robots are increasingly being used in different fields, such as rescuing people from debris, in medical surgeries, elderly support and as an aid for people who have autism," Biswas said.  
For the latter two especially, robots need to be friendly and relatively more sympathetic and emotive to their users. A companion robot needs to be friendly and have the ability to recognise users' emotions and needs, and to act accordingly. Scientists will be collating data from the robot's interactions with humans, while also employing a 3D-printed humanoid robot and Keepon - a small yellow robot designed to study social development by interacting with children. Its simple appearance and behaviour are intended to help children, particularly those with developmental disorders such as autism, to understand its attentive and emotive actions. Source: SAM Daily Times
Read More........