· Annualized Rate of Return for 10 Years [2011–2020] = ~215%

· Annual lows grow at ~160% per year; annual highs grow at ~200% per year (see the sketch below)

· If Bitcoin keeps bitcoining, you can support my work with a few Satoshis at https://www.whatismybitcoinaddress.com

Bitcoin ($BTC) Annual Returns
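For readers who want to check figures like these, the annualized rate of return is just the compound annual growth rate (CAGR). Here is a minimal sketch in Python, using hypothetical placeholder prices rather than actual BTC data:

```python
# Minimal sketch: annualized rate of return (CAGR).
# The prices below are hypothetical placeholders, not actual BTC data.

def annualized_return(start_price: float, end_price: float, years: float) -> float:
    """Compound annual growth rate, as a percentage."""
    return ((end_price / start_price) ** (1 / years) - 1) * 100

# Example: a price that grows 10,000x over 10 years
# annualizes to roughly 151% per year.
print(f"{annualized_return(1.0, 10_000.0, 10):.0f}%")  # -> 151%
```

Running the formula in reverse, a ~215% annualized return over 10 years corresponds to roughly a 96,000x total gain, since (1 + 2.15)^10 ≈ 96,000.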


This Top 10 ranking is produced by Dr. Roman V. Yampolskiy and is based solely on his biased expert opinion. To a certain degree the ranking is also based on perceived reputation, Google Scholar listings, quality and quantity of papers, Google search rankings, impact of publications, and number of important contributions. As the most humble person in the world, Dr. Yampolskiy decided not to include himself on the list. *Note: this is for AGI Safety, not AI Safety.

1) Nick Bostrom https://www.nickbostrom.com/

2) Eliezer Yudkowsky https://yudkowsky.net/

3) Stuart Russell http://people.eecs.berkeley.edu/~russell/

4) Paul Christiano https://paulfchristiano.com/

5) Stuart Armstrong https://www.fhi.ox.ac.uk/team/stuart-armstrong/

6) Max Tegmark https://space.mit.edu/home/tegmark/

7) Victoria Krakovna https://vkrakovna.wordpress.com/

8) Steve Omohundro https://steveomohundro.com/

9) Hugo de Garis https://profhugodegaris.wordpress.com/

10) Nadisha-Marie Aliman https://nadishamarie.jimdo.com/


This Top 10 ranking is produced by Dr. Roman V. Yampolskiy (University of Louisville) and is based solely on his biased opinion. (To reduce bias, the University of Louisville is not ranked.) To a certain degree the ranking is also based on perceived reputation, Google Scholar listings under AI Safety, quality and quantity of papers, Google search rankings, impact of publications, and number of scholars working in the area full time. Many other universities do work on AI Safety but are not ranked this year. By definition, the list excludes all industry labs.

1. Oxford University (UK)

2. University of California…


Abstract

Explainability and comprehensibility of AI are important requirements for intelligent systems deployed in real-world domains. Users want, and frequently need, to understand how decisions impacting them are made. Similarly, it is important to understand how an intelligent system functions for safety and security reasons. In this paper, we describe two complementary impossibility results (Unexplainability and Incomprehensibility), essentially showing that advanced AIs would not be able to accurately explain some of their decisions, and that, for the decisions they could explain, people would not understand some of those explanations.

Keywords: AI Safety, Black Box, Comprehensible, Explainable AI, Impossibility, Intelligible, Interpretability, Transparency…
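To give a flavor of why faithful explanations can be impossible, here is my own toy sketch (not the paper's proof; the `decide` function and the lookup-table setup are illustrative assumptions): a model whose decision function is an essentially random truth table admits no description much shorter than the table itself, so any short, human-digestible "explanation" must be unfaithful somewhere.

```python
# Illustrative sketch only: a decision function that resists short explanation.
# Assumption: an "explanation" is any description from which every decision
# can be exactly reproduced.
import random

N_BITS = 20  # input size; the table below has 2**20 (~1M) entries

random.seed(0)
# A classifier that is literally a random truth table: by a counting argument,
# almost all such tables have no description much shorter than the table itself.
table = [random.randint(0, 1) for _ in range(2 ** N_BITS)]

def decide(x: int) -> int:
    """The model's decision on an N_BITS-bit input x."""
    return table[x]

# Any 100%-faithful explanation of `decide` must pin down all 2**20 bits;
# a short rule ("it fires when feature 3 is high") can only approximate it.
print(decide(0b1010), "-", len(table), "bits needed to explain faithfully")
```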


Roman V. Yampolskiy

roman.yampolskiy@louisville.edu, @romanyam

Abstract

The young field of AI Safety is still in the process of identifying its challenges and limitations. In this paper, we formally describe one such impossibility result, namely Unpredictability of AI. We prove that it is impossible to precisely and consistently predict what specific actions a smarter-than-human intelligent system will take to achieve its objectives, even if we know the terminal goals of the system. In conclusion, the impact of Unpredictability on AI Safety is discussed.

Keywords: AI Safety, Impossibility, Uncontainability, Unpredictability, Unknowability.

1. Introduction to Unpredictability

With the increase in capabilities of artificial intelligence, over the…
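One classic toy that conveys the flavor of unpredictability results is diagonalization: an agent with access to a predictor's output can always falsify it. This is my illustrative sketch, not the construction in the paper; `contrarian_agent` and `some_predictor` are made-up names.

```python
# Illustrative sketch only: no predictor can be right about an agent
# that can read the prediction first (a diagonalization argument).
from typing import Callable

Action = str  # the agent picks "A" or "B"

def contrarian_agent(predictor: Callable[[], Action]) -> Action:
    """An agent that consults the predictor, then does the opposite."""
    predicted = predictor()
    return "B" if predicted == "A" else "A"

def some_predictor() -> Action:
    # Stands in for any fixed prediction procedure, however smart.
    return "A"

prediction = some_predictor()
actual = contrarian_agent(some_predictor)
print(f"predicted {prediction}, agent did {actual}")  # they always disagree
assert prediction != actual
```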


As a security expert, I enjoy a good social engineering attack. I got an email from my “boss” asking for some urgent help. He needed me to buy some gift cards for “him”.

Read it from the end!!!

You got lost? You don’t know where you are? I can come and get you. Do you need a ride home?

From: Adel Elmaghraby <adel.louisville.edu@gmail.com>
Sent: Wednesday, April 24, 2019 9:06 PM
To: Yampolskiy,Roman V <roman.yampolskiy@louisville.edu>
Subject: Re: Urgent request

Get lost

On Thu, Apr 25, 2019 at 2:05 AM Yampolskiy,Roman V <roman.yampolskiy@louisville.edu> wrote:

That is some very high level meeting, they…


Roman V. Yampolskiy

Department of Computer Engineering and Computer Science

University of Louisville

roman.yampolskiy@louisville.edu

Abstract

Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence/robotics communities. We will argue that attempts to allow machines to make ethical decisions or to have rights are misguided. Instead, we propose a new science of safety engineering for intelligent artificial agents. In particular, we issue a challenge to the scientific community to develop intelligent systems capable of proving that they are in fact safe even under recursive self-improvement.

Keywords: AI Confinement, Machine Ethics, Robot Rights.

Ethics and Intelligent Systems

The…


Attribution of Output to a Particular Algorithm

Roman V. Yampolskiy

Computer Engineering and Computer Science

University of Louisville

roman.yampolskiy@louisville.edu

Abstract

With unprecedented advances in genetic engineering, we are starting to see progressively more original examples of synthetic life. As such organisms become more common, it is desirable to be able to distinguish between natural and artificial life forms. In this paper, we present this challenge as a generalized version of Darwin’s original problem, which he so brilliantly addressed in On the Origin of Species. After formalizing the problem of determining the origin of samples, we demonstrate that the problem is in…


“I used to brag about talks I gave; now I brag about talks I turned down.”

I always had a hard time saying NO. Every time I did, it felt like a missed opportunity. Also, saying no is a somewhat rude act of rejecting an offer to engage, collaborate, or help, and it always left me feeling guilty and regretful. I wish I had infinite time so I could say yes to all the cool opportunities I get, but my time is most definitely not unlimited. So, I found a way to perceive my “NOs” as accomplishments. I started to write…


On February 11th, 2019, the President of the USA signed an executive order on Maintaining American Leadership in Artificial Intelligence[1]. In it, the President particularly emphasized that “… relevant personnel shall identify any barriers to, or requirements associated with, increased access to and use of such data and models, including … safety and security concerns …”. Additionally, in March, the White House announced AI.gov, an initiative presenting efforts from multiple federal agencies, all geared towards creating “AI for the American People”[2]. Once again, robust and safe AI was emphasized: “The complexity of many AI systems creates important safety and…

Roman V. Yampolskiy
