Wednesday, February 13, 2013

Scientists Explain How the World Might End


CALIFORNIA - Huw Price, an Australian philosopher known for his work in cosmology, has co-founded the Centre for the Study of Existential Risk (CSER) to help ensure that the human species has a long-term future.

The centre will also examine the possibility that humanity could sow the seeds of its own destruction. "One of the problems we have to deal with is finding a way to cope with our own optimism," Price said, as quoted by Stuff on Tuesday (12/02/2013).
Price is partnering with two other thinkers. One is Martin Rees, a Cambridge astrophysicist who serves as the British Astronomer Royal.
Rees is the author of the book Our Final Century, which examines the deadly combination of natural and man-made disasters.
According to him, human civilization has only a 50 percent chance of surviving through 2100. He has also predicted that by 2020, bioterrorism (bioterror) or biological error (bioerror) will cause one million casualties in a single event.
The founders of CSER aim to get scientists and technology developers to think more about the long-term consequences of their work. Jaan Tallinn, a technology theorist and one of the key developers of Skype, has said that he is more likely to be killed by an accident involving artificial intelligence than by cancer or heart disease.
"We are trying to get the people developing future technologies to think about the risks within their own development teams, and to raise public awareness about the potential risks of these technologies," said Price.
Some risks, such as nuclear war, have long been discussed by such thinkers. "The threat of nuclear annihilation is only temporarily on hold," said Rees.
Rees explains that we were very fortunate to get through the Cold War without a disaster. According to him, although the risk posed by tens of thousands of bombs has been reduced, it cannot be denied that shifts over the next 50 years could bring new conflicts.
The philosophers are also considering current developments in robotics. Progress in robotics has become frightening in its own right, due to the possible emergence of hyper-intelligent machines.
"We are talking about life on a planet whose environment we would no longer control, just as other species no longer control their environment today," said Tallinn.
The motivation behind Tallinn's work is to move discussion of such scenarios out of the realm of science fiction, and to encourage those working in technology to respond seriously to the risks that can arise from these advances.
"I do not advocate refraining from technological development. However, the technology becomes more powerful, then we need to consider all the consequences, both positive and negative," he concluded.
Source: http://techno.okezone.com/read/2013/02/12/56/760803/
