Robots likely to be used in classrooms as learning tools, not teachers

 Smaller robots or modular kits are used to teach robotics in classrooms. from www.shutterstock.com 
Omar Mubin, Western Sydney University and Muneeb Imtiaz Ahmad, Western Sydney University

Robots are increasingly being used to teach students in the classroom across a number of subjects, including science, maths and language. But our research shows that while students enjoy learning with robots, teachers are slightly reluctant to use them in the classroom.

In our study, which saw staff and students interact with the Nao humanoid robot, teachers said they were more sceptical than the students about robots being integrated into the classroom.
 
In our study, students enjoyed the human-like interaction with the Nao humanoid robot. from www.shutterstock.com 

The teachers preferred the robot not to have full autonomy, instead taking on restricted roles in the classroom, and they wanted full control over it. We observed that the teachers were generally unfamiliar with robots, and this lack of exposure appeared to bias their opinions of the technology.

They said they did not trust the technical capabilities of the robot, and wanted it to function and behave as a learning “buddy” for the children, not as a teacher. We think this reluctance stems primarily from uncertainty about how best to incorporate robots in the class, and from a lingering concern that robots may eventually replace teachers.

This is despite research showing that robots are much more likely to be used as learning tools than as teachers in a classroom. 

The students, on the other hand, were much more enthusiastic about a robot in their classroom, enjoying the human-like interaction. 

However, they wanted the robot to adapt its behaviour to their feelings and display a wide range of emotions and expressions. Such fully autonomous behaviour will require further research and development in robotics.

For example, some of the children felt the robot’s voice was unnatural and did not adapt to situations by changing tone or pitch.

The children preferred the robot’s behaviour to be as natural as possible, to the extent that they were untroubled by the robot making mistakes, such as forgetting. It was clear the children were imagining the robot in the role of their teacher.

How robots are currently used in the classroom:

Numerous types of robots are being incorporated into education. They range from simple “microprocessor on wheels” robots (such as the Boe-Bot), to advanced toolkits (such as Lego Mindstorms), to humanoids (robots that resemble humans).

The choice of the robot is usually dictated by the area of study and the age group of the student. 

Smaller robots or toolkits are typically used to teach robotics or computer science. These toolkits can be physically manipulated, allowing students to learn a variety of engineering disciplines hands-on. However, the human-like shape of humanoids makes them easier to interact with, and for this reason they are often used for language lessons.

 
IROBI robot complete with inbuilt tablet computer. Thomas Hawk/flickr, CC BY
Humanoids have the ability to provide real-time feedback, and their physical shape increases engagement. This often leads to a personal connection with the student, which research shows can help resolve issues related to shyness, reluctance, confidence and frustration that may arise in dealing with a human teacher. For example, a robot will not get tired no matter how many mistakes a child makes.

Humanoid robots are being widely used in classrooms in many countries, including Japan and South Korea.

 
Pepper the robot from Softbank Robotics in Japan. Amber Case/flickr, CC BY

Nao, Pepper, Tiro, IROBI, and Robovie, for example, are primarily used to teach English. 

Telepresence – where a teacher can remotely connect to the classroom through the robot – is also being used as a way to teach students English. The teacher can participate in the classroom by being virtually present through a display mechanism. In some instances, the display is embedded in the robot’s torso.

Western countries have been much more hesitant to integrate robots into classrooms, with privacy, developmental concerns, potential job losses and technical deficiencies cited as the major drawbacks.

Robots as learning tools, not teachers: 

Humanoid robots are still a fair way away from being autonomously situated in schools due mainly to technological limitations such as inaccurate speech or emotion recognition.

However, the intention of most researchers in robotics is not for robots to replace teachers. Rather, most robots are designed to function as aids in the classroom, adding value as stimulating and engaging educational tools.

In order to facilitate the integration of robots in the classroom, we need to provide appropriate interfacing mechanisms (software, hardware or even mobile apps) that allow the human teacher to control the robot with minimal training, as the sketch below illustrates.
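As a rough illustration of what the software side of such an interface could look like, here is a minimal sketch using the NAOqi Python SDK that ships with the Nao robot used in our study. The network address and the spoken phrases are hypothetical placeholders, and a classroom-ready tool would hide calls like these behind a simple app rather than expose code to teachers.

# Minimal teacher-control sketch for a Nao robot via the NAOqi Python SDK.
# The IP address and all phrases below are hypothetical placeholders.
from naoqi import ALProxy

ROBOT_IP = "192.168.1.10"  # assumed address of the robot on the school network
PORT = 9559                # default NAOqi port

tts = ALProxy("ALTextToSpeech", ROBOT_IP, PORT)
posture = ALProxy("ALRobotPosture", ROBOT_IP, PORT)

def greet_class():
    """One 'button' a teacher might press: stand up and greet the class."""
    posture.goToPosture("Stand", 0.5)
    tts.say("Good morning everyone. Shall we practise some new words?")

def praise(student_name):
    """Scripted feedback, keeping the teacher in control of what is said."""
    tts.say("Well done, {}!".format(student_name))

if __name__ == "__main__":
    greet_class()
    praise("Alex")

Because every utterance is triggered and scripted by the teacher, a wrapper like this keeps the robot in the restricted, non-autonomous role the teachers in our study asked for.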

Omar Mubin, Lecturer in human-centred computing & human-computer interaction, Western Sydney University and Muneeb Imtiaz Ahmad, PhD Candidate in Social Robotics, Western Sydney University

This article was originally published on The Conversation. Read the original article.

The five biggest threats to human existence



Other ways humanity could end are more subtle. United States Department of Energy, CC BY

Anders Sandberg, University of Oxford: In the daily hubbub of current “crises” facing humanity, we forget about the many generations we hope are yet to come. Not those who will live 200 years from now, but 1,000 or 10,000 years from now. I use the word “hope” because we face risks, called existential risks, that threaten to wipe out humanity. These are not just risks of big disasters, but of disasters that could end history.

Not everyone has ignored the long future though. Mystics like Nostradamus have regularly tried to calculate the end of the world. HG Wells tried to develop a science of forecasting and famously depicted the far future of humanity in his book The Time Machine. Other writers built other long-term futures to warn, amuse or speculate. 

But had these pioneers or futurologists not thought about humanity’s future, it would not have changed the outcome. There wasn’t much that human beings in their place could have done to save us from an existential crisis or even cause one.

We are in a more privileged position today. Human activity has been steadily shaping the future of our planet. And even though we are far from controlling natural disasters, we are developing technologies that may help mitigate, or at least, deal with them.

Future imperfect:

Yet, these risks remain understudied. There is a sense of powerlessness and fatalism about them. People have been talking about apocalypses for millennia, but few have tried to prevent them. Humans are also bad at doing anything about problems that have not occurred yet (partially because of the availability heuristic – the tendency to overestimate the probability of events we know examples of, and to underestimate events we cannot readily recall).

If humanity becomes extinct, at the very least the loss is equivalent to the loss of all living individuals and the frustration of their goals. But the loss would probably be far greater than that. Human extinction means the loss of meaning generated by past generations, the lives of all future generations (and there could be an astronomical number of future lives) and all the value they might have been able to create. If consciousness or intelligence are lost, it might mean that value itself becomes absent from the universe. This is a huge moral reason to work hard to prevent existential threats from becoming reality. And we must not fail even once in this pursuit.

With that in mind, I have selected what I consider the five biggest threats to humanity’s existence. But there are caveats that must be kept in mind, for this list is not final. 

Over the past century we have discovered or created new existential risks – supervolcanoes were discovered in the early 1970s, and before the Manhattan Project nuclear war was impossible – so we should expect others to appear. Also, some risks that look serious today might disappear as we learn more. The probabilities also change over time – sometimes because we are concerned about the risks and fix them.

Finally, just because something is possible and potentially hazardous doesn’t mean it is worth worrying about. There are some risks we cannot do anything at all about, such as gamma ray bursts that result from explosions in distant galaxies. But if we learn we can do something, the priorities change. For instance, with sanitation, vaccines and antibiotics, pestilence went from an act of God to bad public health.

1. Nuclear war:

While only two nuclear weapons have been used in war so far – at Hiroshima and Nagasaki in World War II – and nuclear stockpiles are down from the peak they reached in the Cold War, it is a mistake to think that nuclear war is impossible. In fact, it might not be improbable.

The Cuban missile crisis came very close to turning nuclear. If we assume one such event every 69 years, and a one in three chance that it might go all the way to nuclear war, the chance of such a catastrophe is about one in 200 per year.
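For transparency, the arithmetic behind that figure is just the assumed event rate multiplied by the assumed escalation probability (the 69 years presumably being the time elapsed between 1945 and when this piece was written):

$$ \frac{1}{69\ \text{years}} \times \frac{1}{3} \approx \frac{1}{207\ \text{years}} \approx \frac{1}{200}\ \text{per year}. $$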

Worse still, the Cuban missile crisis was only the best-known case. The history of Soviet–US nuclear deterrence is full of close calls and dangerous mistakes. The actual probability has changed depending on international tensions, but it seems implausible that the chances would be much lower than one in 1000 per year.

A full-scale nuclear war between major powers would kill hundreds of millions of people directly or through the near aftermath – an unimaginable disaster. But that is not enough to make it an existential risk. 

Similarly the hazards of fallout are often exaggerated – potentially deadly locally, but globally a relatively limited problem. Cobalt bombs were proposed as a hypothetical doomsday weapon that would kill everybody with fallout, but are in practice hard and expensive to build. And they are physically just barely possible. 

The real threat is nuclear winter – that is, soot lofted into the stratosphere causing a multi-year cooling and drying of the world. Modern climate simulations show that it could preclude agriculture across much of the world for years. If this scenario occurs, billions would starve, leaving only scattered survivors that might be picked off by other threats such as disease. The main uncertainty is how the soot would behave: depending on the kind of soot the outcomes may be very different, and we currently have no good ways of estimating this.

2. Bioengineered pandemic:

Natural pandemics have killed more people than wars. However, natural pandemics are unlikely to be existential threats: there are usually some people resistant to the pathogen, and the offspring of survivors would be more resistant. Evolution also does not favour parasites that wipe out their hosts, which is why syphilis went from a virulent killer to a chronic disease as it spread in Europe.

Unfortunately we can now make diseases nastier. One of the more famous examples is how the introduction of an extra gene in mousepox – the mouse version of smallpox – made it far more lethal and able to infect vaccinated individuals. Recent work on bird flu has demonstrated that the contagiousness of a disease can be deliberately boosted.


Right now the risk of somebody deliberately releasing something devastating is low. But as biotechnology gets better and cheaper, more groups will be able to make diseases worse.

Most work on bioweapons has been done by governments looking for something controllable, because wiping out humanity is not militarily useful. But there are always some people who might want to do things because they can. Others have higher purposes. For instance, the Aum Shinrikyo cult tried to hasten the apocalypse using bioweapons, besides their more successful nerve gas attack. Some people think the Earth would be better off without humans, and so on.

The number of fatalities from bioweapon attacks and epidemic outbreaks looks like it follows a power-law distribution – most attacks have few victims, but a few kill many. Given current numbers, the risk of a global pandemic from bioterrorism seems very small. But this is just bioterrorism: governments have killed far more people than terrorists with bioweapons (up to 400,000 may have died from the WWII Japanese biowar program). And as technology gets more powerful, nastier pathogens will become easier to design.
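For readers unfamiliar with the term, a power-law distribution here means the probability of an attack killing at least n people falls off polynomially rather than exponentially; schematically, for some exponent α > 0,

$$ P(N \ge n) \propto n^{-\alpha}. $$

Unlike a bell curve, such a heavy tail means the handful of largest events can dominate the total death toll, which is why the worst cases matter more than the typical attack. The exponent here is purely illustrative, not an estimate from the attack data.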

3. Superintelligence:

Intelligence is very powerful. A tiny increment in problem-solving ability and group coordination is why we left the other apes in the dust. Now their continued existence depends on human decisions, not what they do. Being smart is a real advantage for people and organisations, so there is much effort in figuring out ways of improving our individual and collective intelligence: from cognition-enhancing drugs to artificial-intelligence software.

The problem is that intelligent entities are good at achieving their goals, but if the goals are badly set they can use their power to cleverly achieve disastrous ends. There is no reason to think that intelligence itself will make something behave nicely and morally. In fact, it is possible to prove that certain types of superintelligent systems would not obey moral rules even if they were true.

Even more worrying is that in trying to explain things to an artificial intelligence we run into profound practical and philosophical problems. Human values are diffuse, complex things that we are not good at expressing, and even if we could do that we might not understand all the implications of what we wish for. 


Software-based intelligence may very quickly go from below human to frighteningly powerful. The reason is that it may scale in different ways from biological intelligence: it can run faster on faster computers, parts can be distributed on more computers, different versions tested and updated on the fly, new algorithms incorporated that give a jump in performance. 

It has been proposed that an “intelligence explosion” is possible when software becomes good enough at making better software. Should such a jump occur there would be a large difference in potential power between the smart system (or the people telling it what to do) and the rest of the world. This has clear potential for disaster if the goals are badly set. 

The unusual thing about superintelligence is that we do not know if rapid and powerful intelligence explosions are possible: maybe our current civilisation as a whole is improving itself at the fastest possible rate. But there are good reasons to think that some technologies may speed things up far faster than current societies can handle. Similarly we do not have a good grip on just how dangerous different forms of superintelligence would be, or what mitigation strategies would actually work. It is very hard to reason about future technology we do not yet have, or intelligences greater than ourselves. Of the risks on this list, this is the one most likely to either be massive or just a mirage.

This is a surprisingly under-researched area. Even in the 1950s and 60s, when people were extremely confident that superintelligence could be achieved “within a generation”, they did not look much into safety issues. Maybe they did not take their predictions seriously, but more likely they just saw it as a remote future problem.

4. Nanotechnology:

Nanotechnology is the control over matter with atomic or molecular precision. That is in itself not dangerous – instead, it would be very good news for most applications. The problem is that, like biotechnology, increasing power also increases the potential for abuses that are hard to defend against.

The big problem is not the infamous “grey goo” of self-replicating nanomachines eating everything. That would require clever design for this very purpose. It is tough to make a machine replicate: biology is much better at it, by default. Maybe some maniac would eventually succeed, but there is plenty of lower-hanging fruit on the destructive technology tree.


The most obvious risk is that atomically precise manufacturing looks ideal for rapid, cheap production of things like weapons. In a world where any government could “print” large amounts of autonomous or semi-autonomous weapons (including facilities to make even more), arms races could become very fast – and hence unstable, since striking first before the enemy gains too large an advantage might be tempting.

Weapons can also be small, precision things: a “smart poison” that acts like a nerve gas but seeks out victims, or ubiquitous “gnatbot” surveillance systems for keeping populations obedient, seem entirely possible. Also, there might be ways of getting nuclear proliferation and climate engineering into the hands of anybody who wants them.

We cannot judge the likelihood of existential risk from future nanotechnology, but it looks like it could be potentially disruptive just because it can give us whatever we wish for.

5. Unknown unknowns:

The most unsettling possibility is that there is something out there that is very deadly, and we have no clue about it.

The silence in the sky might be evidence for this. Is the absence of aliens because life or intelligence is extremely rare, or because intelligent life tends to get wiped out? If there is a future Great Filter, it must have been noticed by other civilisations too, and even that didn’t help.


Whatever the threat is, it would have to be something that is nearly unavoidable even when you know it is there, no matter who and what you are. We do not know about any such threats (none of the others on this list work like this), but they might exist.

Note that just because something is unknown, it doesn’t mean we cannot reason about it. In a remarkable paper, Max Tegmark and Nick Bostrom show that a certain set of risks must be less than one chance in a billion per year, based on the relative age of the Earth: roughly, if such cosmic catastrophes were at all common, Earth would be unlikely to have formed when it did and survived undisturbed for as long as it has.

You might wonder why climate change or meteor impacts have been left off this list. Climate change, no matter how scary, is unlikely to make the entire planet uninhabitable (but it could compound other threats if our defences break down). Meteors could certainly wipe us out, but we would have to be very unlucky. The average mammalian species survives for about a million years; hence, the background natural extinction rate is roughly one in a million per year. This is much lower than the nuclear-war risk, which after 70 years is still the biggest threat to our continued existence.
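Putting the two per-year figures quoted in this piece side by side makes the gap explicit:

$$ p_{\text{natural}} \approx \frac{1}{1{,}000{,}000} \quad \text{versus} \quad p_{\text{nuclear}} \approx \frac{1}{1000}\ \text{to}\ \frac{1}{200}, $$

so even the conservative lower bound on the nuclear risk is roughly a thousand times the rate at which nature retires a typical mammalian species.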

The availability heuristic makes us overestimate risks that are often in the media, and discount unprecedented risks. If we want to be around in a million years we need to correct that.

Anders Sandberg, James Martin Research Fellow, University of Oxford

This article was originally published on The Conversation. Read the original article.

Birds behave like human musicians

New York: The tuneful behaviour of some songbirds is similar to that of human musicians, who play around with their tunes, balancing repetition and variation just like jazz artists, a new study has found.

Researchers studied the pied butcherbird, a very musical species, which provided a wealth of intriguing data for analysis.

"Since pied butcherbird songs share so many commonalities with human music, this species could possibly revolutionise the way we think about the core values of music," said Hollis Taylor of Macquarie University in Australia.

In the past, claims that musical principles are integral to birdsong were largely met with scepticism and dismissed as wishful thinking.

However, the extensive statistical and objective analysis in the new research shows that the more complex a bird's repertoire, the better it is at singing in time, rhythmically interacting with other birds much more skillfully than birds that know fewer songs.

The butcherbirds "balance their performance to keep it in a sweet spot between boredom and confusion," said Ofer Tchernichovski, professor at City University of New York (CUNY).

"Pied butcherbirds, not unlike jazz musicians, play around with their tunes, balancing repetition and variation," said Constance Scharff, who directs the animal behaviour laboratory at the Free University of Berlin.

Researchers, including those from New Jersey Institute of Technology in the US, suggest that such musical virtuosity may signify more than just the evolution of a way for birds to establish territorial dominance and facilitate mating.

It may also provide evidence that musical ability in birds was a precursor to the evolution of the many dimensions of musical ability in humans.

The study was published in the journal Royal Society Open Science. — PTI
Source: http://www.tribuneindia.com/

How Did Early Earth Stay Warm?

An artist’s depiction of an ice-covered planet in a distant solar system resembles what the early Earth might have looked like if a mysterious mix of greenhouse gases had not warmed the climate. Credit: ESA
A UC Riverside-led astrobiology team discovered that methane, a potent greenhouse gas, was not the climate savior once imagined for the mysterious middle chapter of Earth history.

For at least a billion years of the distant past, planet Earth should have been frozen over but wasn’t. Scientists thought they knew why, but a new modeling study from the Alternative Earths team of the NASA Astrobiology Institute has fired the lead actor in that long-accepted scenario.

Humans worry about greenhouse gases, but between 1.8 billion and 800 million years ago, microscopic ocean dwellers really needed them. The sun was 10 to 15 percent dimmer than it is today – too weak to warm the planet on its own. Earth required a potent mix of heat-trapping gases to keep the oceans liquid and livable.

For decades, atmospheric scientists cast methane in the leading role. The thinking was that methane, with 34 times the heat-trapping capacity of carbon dioxide, could have reigned supreme for most of the first 3.5 billion years of Earth history, when oxygen was absent initially and little more than a whiff later on. (Nowadays oxygen is one-fifth of the air we breathe, and it destroys methane in a matter of years.)

“A proper accounting of biogeochemical cycles in the oceans reveals that methane has a much more powerful foe than oxygen,” said Stephanie Olson, a graduate student at the University of California, Riverside, a member of the Alternative Earths team and lead author of the new study published September 26 in the Proceedings of the National Academy of Sciences. “You can’t get significant methane out of the ocean once there is sulfate.”

Sulfate wasn’t a factor until oxygen appeared in the atmosphere and triggered oxidative weathering of rocks on land. The breakdown of minerals such as pyrite produces sulfate, which then flows down rivers to the oceans. Less oxygen means less sulfate, but even 1 percent of the modern abundance is sufficient to kill methane, Olson said.

Stephanie Olson and Tim Lyons next to an image of visualizations of sulfate concentrations (top) and methane destruction (bottom) from their biogeochemical model of Earth’s ocean and atmosphere roughly one billion years ago. Credit: UC Riverside
Olson and her Alternative Earths coauthors, Chris Reinhard, an assistant professor of earth and atmospheric sciences at the Georgia Institute of Technology, and Timothy Lyons, a distinguished professor of biogeochemistry at UC Riverside, assert that during the billion years they assessed, sulfate in the ocean limited atmospheric methane to only 1 to 10 parts per million – a tiny fraction of the copious 300 parts per million touted by some previous models.

The fatal flaw of those past climate models and their predictions for atmospheric composition, Olson said, is that they ignore what happens in the oceans, where most methane originates as specialized bacteria decompose organic matter.

Seawater sulfate is a problem for methane in two ways. Sulfate destroys methane directly, which limits how much of the gas can escape the oceans and accumulate in the atmosphere. Sulfate also limits the production of methane: life can extract more energy by reducing sulfate than it can by making methane, so sulfate consumption dominates over methane production in nearly all marine environments.

The numerical model used in this study calculated sulfate reduction, methane production, and a broad array of other biogeochemical cycles in the ocean for the billion years between 1.8 billion and 800 million years ago. This model, which divides the ocean into nearly 15,000 three-dimensional regions and calculates the cycles for each region, is by far the highest-resolution model ever applied to the ancient Earth. By comparison, other biogeochemical models divide the entire ocean into a two-dimensional grid of no more than five regions.

“There really aren’t any comparable models,” says Reinhard, who was lead author on a related paper in the Proceedings of the National Academy of Sciences that described the fate of oxygen during the same model runs that revealed sulfate’s deadly relationship with methane.

Reinhard notes that oxygen dealt methane an additional blow, based on independent evidence published recently by the Alternative Earths team in the journals Science and Geology. These papers describe geochemical signatures in the rock record that track extremely low oxygen levels in the atmosphere, perhaps much less than 1 percent of modern values, up until about 800 million years ago, when they spiked dramatically. Less oxygen seems like a good thing for methane, since they are incompatible gases, but with oxygen at such extremely low levels, another problem arises.

“Free oxygen [O2] in the atmosphere is required to form a protective layer of ozone [O3], which can shield methane from photochemical destruction,” Reinhard said. When the researchers ran their model with the lower oxygen estimates, the ozone shield never formed, leaving the modest puffs of methane that escaped the oceans at the mercy of destructive photochemistry.

With methane demoted, scientists face a serious new challenge: determining the greenhouse cocktail that explains our planet’s climate and life story, including a billion years devoid of glaciers, Lyons said. Knowing the right combination of other warming agents, such as water vapor, nitrous oxide, and carbon dioxide, will also help us assess the habitability of the hundreds of billions of other Earth-like planets estimated to reside in our galaxy.

“If we detect methane on an exoplanet, it is one of our best candidates as a biosignature, and methane dominates many conversations in the search for life on Mars,” Lyons said.
“Yet methane almost certainly would not have been detected by an alien civilization looking at our planet a billion years ago—despite the likelihood of its biological production over most of Earth history.” 
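The core competition described above can be illustrated with a deliberately crude toy calculation; this is our own sketch, not the Alternative Earths model (which resolves nearly 15,000 ocean regions), and every rate constant below is a made-up placeholder rather than a value from the study.

# Toy box model: sulfate suppresses marine methane in two ways.
# It diverts organic matter away from methanogens (production term)
# and fuels anaerobic methane oxidation (destruction term).
# All parameters are illustrative placeholders, not values from the study.

def step(ch4, sulfate, dt=0.1):
    """Advance the methane concentration one time step (arbitrary units)."""
    organic_supply = 1.0   # organic matter available to microbes
    k_production = 0.5     # methanogenesis rate constant
    k_oxidation = 2.0      # sulfate-driven methane oxidation constant

    # Sulfate reducers out-compete methanogens for organic matter,
    # so more sulfate leaves a smaller share for methane production.
    production = k_production * organic_supply / (1.0 + sulfate)

    # Anaerobic oxidation of methane consumes CH4 in proportion to both pools.
    destruction = k_oxidation * ch4 * sulfate

    return ch4 + dt * (production - destruction)

# Compare a sulfate-poor ocean with a sulfate-rich one after a long spin-up.
for sulfate in (0.01, 1.0):
    ch4 = 0.0
    for _ in range(10000):
        ch4 = step(ch4, sulfate)
    print("sulfate = %.2f  ->  methane = %.3f" % (sulfate, ch4))

Even in this cartoon version, raising sulfate by two orders of magnitude collapses the standing methane pool by a comparable factor, which is the qualitative behaviour the paper reports for the real ocean.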

For Ants, 'Elite' Individuals Are Not Always More Effective, But Make a Show

Individually paint-marked Temnothorax albipennis workers. Credit: Nigel R. Franks
We all know that social insects, such as ants, often work together to achieve effective responses to environmental challenges. However, research by the University of Bristol, published in the Journal of Experimental Biology, has now uncovered that the contributions of different individuals within such groups vary. In many cases, certain 'keystone' individuals conduct much more work than others, and are assumed to be disproportionately important to the collective response.

Researchers examined this phenomenon in 'tandem running' during colony emigrations in the ant species Temnothorax albipennis. Tandem running allows ants to lead their nest mates to a new home before the colony decides to emigrate, with more tandem runs being conducted the further away that new home is.

Thomas O'Shea-Wheller from the School of Biological Sciences said: "In our experiments, we individually paint-marked all of the ants in colonies, allowing us to determine the exact contribution that each worker made to the tandem running process.

"We found that certain individuals were indeed highly active in tandem running, attempting significantly more tandem runs per ant.

"However, surprisingly, these active individuals were in fact no more successful at completing the task, and so their overall contribution was questionable.

"Instead, it seemed that when more tandem runs were needed, colonies relied on greater numbers of ants being involved in the process.

"Hence, our study shows that in some cases, even when apparently 'key' individuals exist within groups of animals, their relative contribution to task performance may be far from decisive."