DRDO Celebrates 63rd Foundation Day; Chairman Reddy Calls For Focus On Cyber Security, Space, AI

New Delhi: The Defence Research & Development Organisation (DRDO) observed its 63rd Foundation Day on Friday, 1st January 2021. G. Satheesh Reddy, Secretary DDR&D and Chairman DRDO, met Defence Minister Rajnath Singh and presented him with a model of the Akash Missile System, which was recently cleared for export. On the occasion, the Chairman of DRDO, along with DGs and Directors of DRDO HQ, paid floral tributes to former President APJ Abdul Kalam at DRDO Bhawan, according to a press statement from the Ministry of Defence.

The Ministry stated that DRDO was established in 1958 with just 10 laboratories to enhance research work in the defence sector and was tasked with designing and developing cutting-edge defence technologies for the Indian Armed Forces. Today, the Ministry said, DRDO works in multiple cutting-edge military technology areas, including aeronautics, armaments, combat vehicles, electronics, instrumentation, engineering systems, missiles, materials, naval systems, advanced computing, simulation, cyber, life sciences, and other technologies for defence.

Addressing the DRDO fraternity, Chairman Reddy extended warm wishes to DRDO employees and their families. An eventful year has passed and a new one is about to begin, he said, asking scientists to innovate and create for the nation. The efforts of DRDO have given a quantum jump to India’s self-reliance in defence, contributing towards Atmanirbhar Bharat, he said. He declared ‘Export’ as the theme of DRDO for 2021 and mentioned that many products based on DRDO technologies have already been exported by DPSUs and industry.

In 2020, he said, DRDO achieved many milestones, such as the maiden landing of LCA Navy onboard INS Vikramaditya, demonstration of the Hypersonic Technology Demonstration Vehicle (HSTDV), Quantum Key Distribution (QKD) and QRNG developments in the area of quantum technology, the Laser-Guided Anti-Tank Guided Missile (ATGM), the Supersonic Missile Assisted Release of Torpedo (SMART), the Next Generation Anti-Radiation Missile (NGARM), an enhanced version of the PINAKA rocket system, the Quick Reaction Surface to Air Missile (QRSAM), the maiden launch of MRSAM, and the 5.56 x 30 mm Joint Venture Protective Carbine (JVPC), among many others.

He highlighted the contributions of DRDO during the Covid-19 pandemic: nearly 40 DRDO laboratories developed more than 50 technologies and over 100 products on a war footing for combating the novel coronavirus in India. These included PPE kits, sanitizers, masks, UV-based disinfection systems, Germi Klean, and critical parts of ventilators, leading to ventilator manufacturing in the country in a very short span of time. He further said that DRDO established three dedicated Covid hospitals at Delhi, Patna, and Muzaffarpur in record time to strengthen medical infrastructure. In addition, Mobile Virology Research and Diagnostics Laboratories (MVRDL) were developed at various locations to speed up Covid-19 screening and R&D activities and to strengthen Covid testing capabilities.

The DRDO Chairman mentioned that new policies and procedures were launched to increase efficiency and the ease of engagement with various stakeholders in development. DRDO has also taken major steps to further strengthen its base for taking up technological challenges in defence systems development, he said, and will continue to strive for the best in defence technology and to ensure system development in the shortest time.
While congratulating DRDO scientists and all other personnel who worked in close coordination with the armed forces on user trials, he set many targets for them. He spoke about the flagship programmes of DRDO, such as the Hypersonic Cruise Missile, Advanced Medium Combat Aircraft (AMCA), New Generation MBT, Unmanned Combat Aerial Vehicle, Enhanced AEW&CS, LCA Mk II, and many other systems. In his speech, Chairman Reddy called upon DRDO scientists to focus on next-generation needs, including cyber security, space, and artificial intelligence. The immense potential available in DRDO has been a catalyst for the development of industries in the defence manufacturing sector, he said, as per the Defence Ministry statement.

The Chairman highlighted that academic institutes, R&D organisations, and industry need to work together on advanced and futuristic technologies to make India self-reliant in the defence sector. A number of SMEs and MSMEs, nurtured by DRDO, are supplying small components and subsystems for all DRDO projects and have now become partners in all new developments. He stated that DRDO conducted a competition called ‘Dare to Dream’ for startups, to which a very enthusiastic response was received, and added that at least 30 startups should be supported every year to develop innovative products for the armed forces.

He said that DRDO should make efforts to strengthen long-term ties with academia, aiming to leverage the academic expertise available in the country and increase synergy with it, and remarked that DRDO should concentrate on applied and translational research and then build prototypes from that research. Industry, he said, should be in a position to adopt these technologies, have the necessary infrastructure, and scale them up to market with sustained quality. He underlined the need to focus on documentation and production for faster induction, and said that DRDO will take many new initiatives towards enabling industry and empowering youth for defence R&D.

DRDO Chairman Reddy also launched an Online Industry Partner Registration Module to simplify the process of vendor registration. He released the DRDO Monograph on ‘Issues on Development of Communication Technology using Orbiting Satellites’, as well as the Environmental Safety Manual and Guidelines for Disposal of Life Expired Chemicals and Gases at DRDO Laboratories. Source: https://indusdictum.com/


The five biggest threats to human existence



Other ways humanity could end are more subtle. Image: United States Department of Energy, CC BY

Anders Sandberg, University of Oxford: In the daily hubbub of current “crises” facing humanity, we forget about the many generations we hope are yet to come. Not those who will live 200 years from now, but 1,000 or 10,000 years from now. I use the word “hope” because we face risks, called existential risks, that threaten to wipe out humanity. These are risks not just of big disasters, but of disasters that could end history.

Not everyone has ignored the long future though. Mystics like Nostradamus have regularly tried to calculate the end of the world. HG Wells tried to develop a science of forecasting and famously depicted the far future of humanity in his book The Time Machine. Other writers built other long-term futures to warn, amuse or speculate. 

But had these pioneers or futurologists not thought about humanity’s future, it would not have changed the outcome. There was not much that people in their position could have done to save us from an existential crisis, or even to cause one.

We are in a more privileged position today. Human activity has been steadily shaping the future of our planet. And even though we are far from controlling natural disasters, we are developing technologies that may help mitigate them, or at least deal with them.

Future imperfect:

Yet, these risks remain understudied. There is a sense of powerlessness and fatalism about them. People have been talking about apocalypses for millennia, but few have tried to prevent them. Humans are also bad at doing anything about problems that have not occurred yet (partially because of the availability heuristic – the tendency to overestimate the probability of events we know examples of, and underestimate events we cannot readily recall).

If humanity becomes extinct, at the very least the loss is equivalent to the loss of all living individuals and the frustration of their goals. But the loss would probably be far greater than that. Human extinction means the loss of meaning generated by past generations, the lives of all future generations (and there could be an astronomical number of future lives) and all the value they might have been able to create. If consciousness or intelligence is lost, it might mean that value itself becomes absent from the universe. This is a huge moral reason to work hard to prevent existential threats from becoming reality. And we must not fail even once in this pursuit.

With that in mind, I have selected what I consider the five biggest threats to humanity’s existence. But there are caveats that must be kept in mind, for this list is not final. 

Over the past century we have discovered or created new existential risks – supervolcanoes were discovered in the early 1970s, and before the Manhattan Project nuclear war was impossible – so we should expect others to appear. Also, some risks that look serious today might disappear as we learn more. The probabilities also change over time – sometimes because we are concerned about the risks and fix them.

Finally, just because something is possible and potentially hazardous doesn’t mean it is worth worrying about. There are some risks we cannot do anything at all about, such as gamma ray bursts that result from explosions in distant galaxies. But if we learn we can do something, the priorities change. For instance, with sanitation, vaccines and antibiotics, pestilence went from an act of God to bad public health.

1. Nuclear war:

While only two nuclear weapons have been used in war so far – at Hiroshima and Nagasaki in World War II – and nuclear stockpiles are down from the peak they reached in the Cold War, it is a mistake to think that nuclear war is impossible. In fact, it might not be improbable. 

The Cuban Missile Crisis was very close to turning nuclear. If we assume one such event every 69 years and a one in three chance that it might go all the way to being nuclear war, the chance of such a catastrophe works out to about one in 200 per year. 
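Spelled out, that arithmetic is simply (1/69) × (1/3) = 1/207, or roughly one in 200 per year – bearing in mind that both input numbers are illustrative assumptions rather than measured rates.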

Worse still, the Cuban Missile Crisis was only the best-known case. The history of Soviet-US nuclear deterrence is full of close calls and dangerous mistakes. The actual probability has changed depending on international tensions, but it seems implausible that the chances would be much lower than one in 1000 per year.

A full-scale nuclear war between major powers would kill hundreds of millions of people directly or through the near aftermath – an unimaginable disaster. But that is not enough to make it an existential risk. 

Similarly, the hazards of fallout are often exaggerated – potentially deadly locally, but globally a relatively limited problem. Cobalt bombs were proposed as a hypothetical doomsday weapon that would kill everybody with fallout, but they are in practice hard and expensive to build, and only just barely physically possible. 

The real threat is nuclear winter – that is, soot lofted into the stratosphere causing a multi-year cooling and drying of the world. Modern climate simulations show that it could preclude agriculture across much of the world for years. If this scenario occurs, billions would starve, leaving only scattered survivors who might be picked off by other threats such as disease. The main uncertainty is how the soot would behave: depending on the kind of soot, the outcomes may be very different, and we currently have no good ways of estimating this. 

2. Bioengineered pandemic:

Natural pandemics have killed more people than wars. However, natural pandemics are unlikely to be existential threats: there are usually some people resistant to the pathogen, and the offspring of survivors would be more resistant. Evolution also does not favor parasites that wipe out their hosts, which is why syphilis went from a virulent killer to a chronic disease as it spread in Europe.

Unfortunately we can now make diseases nastier. One of the more famous examples is how the introduction of an extra gene in mousepox – the mouse version of smallpox – made it far more lethal and able to infect vaccinated individuals. Recent work on bird flu has demonstrated that the contagiousness of a disease can be deliberately boosted.


Right now the risk of somebody deliberately releasing something devastating is low. But as biotechnology gets better and cheaper, more groups will be able to make diseases worse.

Most work on bioweapons has been done by governments looking for something controllable, because wiping out humanity is not militarily useful. But there are always some people who might want to do things because they can. Others have higher purposes. For instance, the Aum Shinrikyo cult tried to hasten the apocalypse using bioweapons, besides their more successful nerve gas attack. Some people think the Earth would be better off without humans, and so on. 

The number of fatalities from bioweapon attacks and epidemic outbreaks looks like it has a power-law distribution – most attacks have few victims, but a few kill many. Given current numbers, the risk of a global pandemic from bioterrorism seems very small. But this is just bioterrorism: governments have killed far more people than terrorists with bioweapons (up to 400,000 may have died from the WWII Japanese biowar program). And as technology gets more powerful in the future, nastier pathogens become easier to design.
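To get a feel for what a power-law distribution of death tolls implies, here is a minimal simulation sketch; the Pareto shape parameter is an arbitrary illustrative choice, not a fit to real attack or outbreak data:

```python
import numpy as np

# Illustrative only: sample event death tolls from a heavy-tailed Pareto
# distribution. The shape parameter a=1.5 is an assumption chosen for
# demonstration, not an estimate from real bioweapon or epidemic data.
rng = np.random.default_rng(seed=42)
tolls = rng.pareto(a=1.5, size=100_000) + 1.0  # classical Pareto, minimum 1

worst_1pct = tolls >= np.percentile(tolls, 99)
print(f"median event size: {np.median(tolls):.1f}")
print(f"largest event:     {tolls.max():,.0f}")
print(f"share of all deaths from the worst 1% of events: "
      f"{tolls[worst_1pct].sum() / tolls.sum():.0%}")
```

The signature of such a distribution is that a handful of extreme events dominates the total, which is why a low typical toll says little about worst-case risk.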

3. Superintelligence:

Intelligence is very powerful. A tiny increment in problem-solving ability and group coordination is why we left the other apes in the dust. Now their continued existence depends on human decisions, not what they do. Being smart is a real advantage for people and organisations, so there is much effort in figuring out ways of improving our individual and collective intelligence: from cognition-enhancing drugs to artificial-intelligence software.

The problem is that intelligent entities are good at achieving their goals, but if the goals are badly set they can use their power to cleverly achieve disastrous ends. There is no reason to think that intelligence itself will make something behave nicely and morally. In fact, it is possible to prove that certain types of superintelligent systems would not obey moral rules even if they were true.

Even more worrying is that in trying to explain things to an artificial intelligence we run into profound practical and philosophical problems. Human values are diffuse, complex things that we are not good at expressing, and even if we could do that we might not understand all the implications of what we wish for. 


Software-based intelligence may very quickly go from below human to frighteningly powerful. The reason is that it may scale in different ways from biological intelligence: it can run faster on faster computers, parts can be distributed on more computers, different versions tested and updated on the fly, new algorithms incorporated that give a jump in performance. 

It has been proposed that an “intelligence explosion” is possible when software becomes good enough at making better software. Should such a jump occur there would be a large difference in potential power between the smart system (or the people telling it what to do) and the rest of the world. This has clear potential for disaster if the goals are badly set. 

The unusual thing about superintelligence is that we do not know if rapid and powerful intelligence explosions are possible: maybe our current civilisation as a whole is improving itself at the fastest possible rate. But there are good reasons to think that some technologies may speed things up far faster than current societies can handle. Similarly we do not have a good grip on just how dangerous different forms of superintelligence would be, or what mitigation strategies would actually work. It is very hard to reason about future technology we do not yet have, or intelligences greater than ourselves. Of the risks on this list, this is the one most likely to either be massive or just a mirage.

This is a surprisingly under-researched area. Even in the 1950s and 60s, when people were extremely confident that superintelligence could be achieved “within a generation”, they did not look much into safety issues. Maybe they did not take their predictions seriously, but more likely they just saw it as a remote future problem. 

4. Nanotechnology:

Nanotechnology is the control over matter with atomic or molecular precision. That is in itself not dangerous – instead, it would be very good news for most applications. The problem is that, like biotechnology, increasing power also increases the potential for abuses that are hard to defend against.

The big problem is not the infamous “grey goo” of self-replicating nanomachines eating everything. That would require clever design for this very purpose. It is tough to make a machine replicate: biology is much better at it by default. Maybe some maniac would eventually succeed, but there is plenty of lower-hanging fruit on the destructive technology tree. 


The most obvious risk is that atomically precise manufacturing looks ideal for the rapid, cheap manufacture of things like weapons. In a world where any government could “print” large amounts of autonomous or semi-autonomous weapons (including facilities to make even more), arms races could become very fast – and hence unstable, since launching a first strike before the enemy gains too large an advantage might be tempting. 

Weapons can also be small, precise things: a “smart poison” that acts like a nerve gas but seeks out victims, or ubiquitous “gnatbot” surveillance systems for keeping populations obedient, seem entirely possible. Also, there might be ways of getting nuclear proliferation and climate engineering into the hands of anybody who wants them.

We cannot judge the likelihood of existential risk from future nanotechnology, but it looks like it could be potentially disruptive just because it can give us whatever we wish for.

5. Unknown unknowns:

The most unsettling possibility is that there is something out there that is very deadly, and we have no clue about it.

The silence in the sky might be evidence for this. Is the absence of aliens due to life or intelligence being extremely rare, or to intelligent life tending to get wiped out? If there is a future Great Filter, it must have been noticed by other civilisations too, and even that didn’t help. 


Whatever the threat is, it would have to be something that is nearly unavoidable even when you know it is there, no matter who and what you are. We do not know about any such threats (none of the others on this list work like this), but they might exist.

Note that just because something is unknown, it doesn’t mean we cannot reason about it. In a remarkable paper, Max Tegmark and Nick Bostrom show that a certain set of risks must be less than one chance in a billion per year, based on the relative age of the Earth: roughly, if such cosmic catastrophes were at all frequent, a planet that formed as late as ours and survived undisturbed as long would be very unlikely. 

You might wonder why climate change or meteor impacts have been left off this list. Climate change, no matter how scary, is unlikely to make the entire planet uninhabitable (but it could compound other threats if our defences against it break down). Meteors could certainly wipe us out, but we would have to be very unlucky. The average mammalian species survives for about a million years. Hence, the background natural extinction rate is roughly one in a million per year. This is much lower than the nuclear-war risk, which after 70 years is still the biggest threat to our continued existence.
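Putting the article’s own numbers side by side: a nuclear-war risk of at least about one in 1,000 per year (the lower bound suggested above) against a natural background extinction rate of about one in 1,000,000 per year makes the man-made risk roughly a thousand times larger than the natural baseline.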

The availability heuristic makes us overestimate risks that are often in the media, and discount unprecedented risks. If we want to be around in a million years we need to correct that.

Anders Sandberg, James Martin Research Fellow, University of Oxford

This article was originally published on The Conversation. Read the original article.

Brain implant will connect a million neurons with superfast bandwidth

A neural interface being created by the United States military aims to greatly improve the resolution and connection speed between biological and non-biological matter.
The Defense Advanced Research Projects Agency (DARPA) – a research arm of the U.S. military – has announced a new research and development program known as Neural Engineering System Design (NESD). It aims to create a fully implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world. The interface would serve as a translator, converting between the electrochemical language used by neurons in the brain and the ones and zeros that constitute the language of information technology. A communications link would be achieved in a biocompatible device no larger than a cubic centimetre. This could lead to breakthrough treatments for a number of brain-related illnesses, as well as providing new insights into possible future upgrades for aspiring transhumanists.

“Today’s best brain-computer interface systems are like two supercomputers trying to talk to each other using an old 300-baud modem,” says Phillip Alvelda, program manager. “Imagine what will become possible when we upgrade our tools to really open the channel between the human brain and modern electronics.”

Among NESD’s potential applications are devices that could help restore sight or hearing by feeding digital auditory or visual information into the brain at a resolution and experiential quality far higher than is possible with current technology. Neural interfaces currently approved for human use squeeze a tremendous amount of information through just 100 channels, with each channel aggregating signals from tens of thousands of neurons at a time. The result is noisy and imprecise. In contrast, the NESD program aims to develop systems that communicate clearly and individually with any of up to one million neurons in a given region of the brain.
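A rough calculation suggests why that goal is so demanding; the sampling rate and bit depth below are illustrative assumptions, not figures from DARPA’s announcement:

```python
# Back-of-the-envelope estimate of the raw data rate from reading one
# million neurons individually. Sampling rate and bit depth here are
# illustrative assumptions, not figures from DARPA's announcement.
neurons = 1_000_000
samples_per_second = 1_000   # ~1 kHz, a common ballpark for neural signals
bits_per_sample = 10         # assumed ADC resolution

raw_rate_bits = neurons * samples_per_second * bits_per_sample
print(f"raw data rate: {raw_rate_bits / 1e9:.0f} Gbit/s")  # -> 10 Gbit/s
```

Even under these modest assumptions the raw stream is on the order of 10 Gbit/s, which helps explain the program’s emphasis, described below, on transcoding and compressing the data with minimal loss.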
To achieve these ambitious goals and ensure the technology is practical outside of a research setting, DARPA will integrate and work in parallel with numerous areas of science and technology – including neuroscience, synthetic biology, low-power electronics, photonics, medical device packaging and manufacturing, systems engineering, and clinical testing. In addition to the program’s hardware challenges, NESD researchers will be required to develop advanced mathematical and neuro-computation techniques to transcode high-definition sensory information between electronic and cortical neuron representations, and then compress and represent the data with minimal loss.

The NESD program aims to recruit a diverse roster of leading industry stakeholders willing to offer state-of-the-art prototyping, manufacturing services and intellectual property. In later phases of the program, these partners could help transition the resulting technologies into commercial applications. DARPA will invest up to $60 million in the NESD program between now and 2020. Source: http://www.futuretimeline.net/

Russia Developing Terrorist-Killer Robots

Russian experts are developing robots designed to minimize casualties in terrorist attacks and neutralize terrorists, Deputy Prime Minister Dmitry Rogozin said on Friday.
Robots could also help evacuate injured servicemen and civilians from the scene of a terrorist attack, said Rogozin, who oversees the defense industry. Other antiterror equipment Russia is developing includes systems that can see terrorists through obstacles and effectively engage them in standoff mode at long distance without injuring their hostages, he said. Rogozin did not say when the equipment might be deployed by Russia's security and intelligence services.

Human Rights Watch has criticized fully autonomous weapons, known as "killer robots", which would be able to select and engage targets without human intervention, and has called for a preemptive prohibition on such weapons. "Fully autonomous weapons do not exist yet, but they are being developed by several countries and precursors to fully autonomous weapons have already been deployed by high-tech militaries," HRW said in a statement on its website. "Some experts predict that fully autonomous weapons could be operational in 20 to 30 years," the human rights watchdog said. Voice of Russia, RIA. Source: http://sputniknews.com/

Next-generation drone designs inspired by nature

© Photo: East News
Inspired by birds, bats, insects and even flying snakes, researchers from 14 teams have come up with new designs for next-generation drones and flying robots. These robots would have the potential to perform multiple tasks, from military surveillance to search and rescue, News Tonight reports.
Olga Yazhgunovich: These robots may look similar to many things that nature has given us in abundance, as the flying robots will look like insects and butterflies, Design and Trend says. A report in EurekAlert says that scientists are working on different types of drones that look like different insects and animals, and that they have successfully created the smallest drone of all – merely a millimetre in size. The journal Bioinspiration and Biomimetics has come out with fascinating details as to how the look and shape of robotic drones are going to develop in the future. These drones come with exquisite flight control and can overcome many of the problems drones face when navigating urban terrain.

There is no denying that flying drones are going to be of immense use in different fields in the coming days. The success of a flying robot depends, obviously, on the exactitude of its flight control, and nothing has more meticulous flight control than the creatures born with the gift of flight. Experts are very optimistic about the design and success of such flying robots. Dr. David Lentink of Stanford University says, “Flying animals can be found everywhere in our cities…From scavenging pigeons to alcohol-sniffing fruit flies that make precision landings on our wine glasses, these animals have quickly learnt how to control their flight through urban environments to exploit our resources.”

One of the most interesting of these robotic drones is one under development in Hungary that mimics the flocking of birds. It does this with an algorithm that allows drones to huddle together while flying through the air (a minimal sketch of this style of flocking algorithm appears after this article). By understanding how tiny insects stabilise themselves in turbulent air, researchers have designed many future drones. One researcher from the University of Maryland engineered sensors for an experimental drone based on insects' eyes, to mimic their amazing capability of flight in clutter. These eyes act as cameras recording the actual position of the drone, which is then monitored by engineers connected to an on-board computer. Other researchers have designed a raptor-like appendage for a drone that can grasp objects at high speed by swooping in like a bird of prey. Also, a team of researchers led by Prof. Kenny Breuer at Brown University has designed an eerily accurate robotic copy of a bat wing with a high range of movement, tolerance and flexibility. Prof. Lentink added that membrane-based bat wings have better adaptability to airflow and are unbreakable.

A few issues will have to be sorted out for the success of such robots. According to the report, one of the biggest challenges facing robotic drones is the ability to survive the elements, such as extreme heat, bitter cold and especially strong winds. To overcome this issue, a team of researchers studied hawk moths as they battled different whirlwind conditions in a vortex chamber, in order to harness their superior flight control mechanisms. Another report in Bioinspiration and Biomimetics says more than a dozen teams are creating flying robots that look like insects, butterflies and other creatures, and that fly not just in conventional but also in unconventional ways, so they are able to fly freely in dense jungles where other drones cannot be expected to fly. Source: http://sputniknews.com/
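As referenced above, the classic starting point for this kind of drone flocking is Craig Reynolds’ “boids” rules – cohesion, alignment and separation. The sketch below is a minimal illustration with arbitrary weights; the Hungarian team’s actual algorithm is not described in the article.

```python
import numpy as np

# Minimal boids-style flocking sketch (Craig Reynolds' classic rules:
# cohesion, alignment, separation), with arbitrary illustrative weights.
# Offered only to show the flavour of a "huddle together" algorithm; the
# Hungarian team's actual algorithm is not detailed in the article.
N, STEPS, DT = 20, 200, 0.05
rng = np.random.default_rng(1)
pos = rng.uniform(-5.0, 5.0, (N, 2))   # drone positions in the plane
vel = rng.uniform(-1.0, 1.0, (N, 2))   # drone velocities

for _ in range(STEPS):
    cohesion = pos.mean(axis=0) - pos            # steer toward flock centre
    alignment = vel.mean(axis=0) - vel           # match the average heading
    offsets = pos[:, None, :] - pos[None, :, :]  # pairwise position offsets
    dist2 = (offsets ** 2).sum(axis=-1) + 1e-9   # squared distances
    separation = (offsets / dist2[..., None]).sum(axis=1)  # push apart
    vel += DT * (0.5 * cohesion + 0.3 * alignment + 0.8 * separation)
    pos += DT * vel

spread = np.linalg.norm(pos - pos.mean(axis=0), axis=1).mean()
print(f"mean distance from flock centre after {STEPS} steps: {spread:.2f}")
```

Practical drone flocks layer limited sensing range, communication delays and hard collision constraints on top of these three basic rules.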