Updated by Kunihiro Maeda on May 24, 2019

The Control Problem of Artificial Superintelligence

Nick Bostrom: Can We Reshape Humanity’s Deep Future?

Dr. Nick Bostrom spends much of his time calculating the possible rewards and dangers of rapid technological advances, and how such advances will likely alter the course of human evolution and life as we know it. One useful concept in untangling this puzzle is existential risk: the risk that an adverse outcome would end intelligent human life or drastically curtail what we, in the infancy of the twenty-first century, would consider a viable future. Figuring out how to reduce existential risk even slightly brings into play an array of thought-provoking issues.

"Building AI Is Like Launching a Rocket": An Interview with Skype Co-Founder Jaan Tallinn, an Entrepreneur Researching the Risks of Artificial Intelligence

Several scientists and prominent entrepreneurs have warned that advances in new technologies such as artificial intelligence (AI) could bring about an existential crisis for humanity. This article introduces Jaan Tallinn, co-founder of Skype and Kazaa, and his work promoting research into the potential risks of AI development.

Future of Humanity Institute

The Future of Humanity Institute is a multidisciplinary research institute at the University of Oxford.

Team - FLI - Future of Life Institute

The Future of Life Institute's mission: to catalyze and support research and initiatives for safeguarding life and developing optimistic visions of the future, including positive ways for humanity to steer its own course considering new technologies and challenges.

CBRN National Action Plans: Rising to the Challenges of International Security and the Emergence of Artificial Intelligence

Stuxnet infected Russian nuclear plant

Jumped the air gap, Kaspersky boss says.
Stuxnet had 'badly infected' the internal network of a Russian nuclear plant after the sophisticated malware caused chaos at Iran's uranium-enrichment facility in Natanz.

Read more: http://www.itnews.com.au/news/stuxnet-infected-russian-nuclear-plant-363578

Prof. Max Tegmark and Nick Bostrom Speak to the UN About the Threat of AI

Rising to the Challenges of International Security and the Emergence of Artificial Intelligence, 7 October 2015, United Nations Headquarters, New York

Army Tests Fighting ‘Bot

"THE U.S. ARMY will soon test a six-wheeled, 20-foot robot to see whether it can traverse rugged terrain, fire machine guns and carry
1,900 pounds of gear without soldiers remotely controlling its every move," Defense News reports.

Robotic Warfare

More than 40 countries — including the United States, Great Britain, Russia and China — are developing a new generation of robotic weapons that can be programmed to seek out and destroy enemy targets without direct human control. The push for autonomous machines has raised a host of legal and ethical questions and sparked concerns that the Geneva Conventions — international rules of war that date back to the 1860s — may not be adequate to control robotic warfare. Military experts say autonomous weapons could save lives by keeping soldiers out of harm's way and by using pinpoint accuracy to avoid civilian deaths and other collateral damage. But opponents fear the emerging technology might trigger a new arms race and encourage leaders to use force rather than diplomacy. Meanwhile, the U.S. military is developing revolutionary ways to supply and protect soldiers, including Kevlar underwear, invisible camouflage and customizable 3D-printed food.

The proposed ban on offensive autonomous weapons is unrealistic and dangerous | KurzweilAI

The open letter from the Future of Life Institute (FLI) calling for a “ban on offensive autonomous weapons” is as unrealistic as the broad relinquishment of nuclear weapons would have been at the height of the Cold War.

A treaty or international agreement banning the development of artificially intelligent robotic drones for military use would not be effective. It would be impossible to completely stop nations from secretly working on these technologies out of fear that other nations and non-state entities are doing the same.

It is also not rational to assume that terrorists or a mentally ill lone-wolf attacker would respect such an agreement.