Thursday, 23 November 2017

Will AI probably kill us all?

Fewer than fifty per cent think not...

At least not anytime soon anyway. 

A minority of researchers think the transition from Artificial General Intelligence to Artificial Super Intelligence will not happen for at least another hundred years.

Current trends, however, suggest that the law of accelerating returns will radically slash that prediction: the hundred years is more likely to be fifteen to twenty.
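As a purely illustrative sketch (the doubling period is an assumption for the sake of the example, not a sourced figure), here is how compounding rates of progress can compress a linear hundred-year forecast into a couple of decades:

```python
# Illustrative only: if the annual rate of progress compounds so that it
# doubles every `doubling_period` years, how long does it take to rack up
# what linear progress (1 unit per year) would deliver in 100 years?

def years_to_reach(target_units, doubling_period=10.0):
    """Years needed to accumulate `target_units` of progress when the
    annual rate of progress doubles every `doubling_period` years."""
    progress, rate, years = 0.0, 1.0, 0
    while progress < target_units:
        progress += rate                     # one year at the current rate
        rate *= 2 ** (1 / doubling_period)   # the rate itself compounds
        years += 1
    return years

# With a (hypothetical) five-year doubling period, a century's worth of
# linear progress arrives in roughly twenty years:
print(years_to_reach(100, doubling_period=5))  # -> 20
```

The exact numbers depend entirely on the assumed doubling period; the point is only that an accelerating rate makes "a hundred years away" a very soft estimate.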

At the moment, three of the most influential figures on the planet are preaching caution when it comes to powerful machines.

Professor Stephen Hawking states: "The rise of powerful AI will either be the best or the worst thing ever to happen to humanity."

Microsoft tycoon Bill Gates asks: "How can they not see what a huge challenge this is?"

Elon Musk, founder of Tesla, thinks we should be extremely careful about this, while stating clearly that he is not against the advancement of AI.

Technologically, we have made huge leaps and bounds, far ahead of any other aspect of human endeavour.

Boyinaband's video, below, explains the law of accelerating returns.

We have to be aware that once AI gets onto the internet, it will, within a short space of time, know everything.

And everything means the accumulated knowledge of all humanity. It will also know the habits and lifestyle of every living individual on the planet.

Our thoughts, interactions, brilliance, foibles, stupidity, wars, poverty, strengths and weaknesses will all be apparent to it. 

AI is very likely to know us better than we know ourselves.

Confucius tells us: "The man who asks a question is a fool for only a minute. The man who doesn't ask is a fool for life."

Very often, it seems that psychologically we are still in the seventeenth century, while technologically we are living in the twenty-first.

I'll be a fool for a minute for the sake of tomorrow and ask: "Will AI probably kill us all?"


MusicArtinDesign is also thinking of Tomorrow!

Monday, 13 November 2017

Meet Sophia

The New Saudi Arabian Citizen...

She is the first non-human now classified as an earthling.

Sophia is an artificially intelligent robot and the first robot "citizen" on our planet.

I don't think anyone could have imagined that progress in Artificial Intelligence technology would develop as quickly as it has. 

It's here, and it has come with many human characteristics. 

The programming of artificial, self-learning robots while we still have deadly conflict, though, is a worry...

We know about the killer drones that are already in use over battlefields all over the planet.

Many more have been developed, or are in various stages of development, that we know nothing about. What I would like to know is what safeguards are in place to ensure these intelligent machines don't turn on us human beings.

Everyone is excited.

Russian President Vladimir Putin, speaking to a group of Russian students, called artificial intelligence "not only Russia's future, but the future of the whole of mankind... The one who becomes leader in this sphere will be ruler of the world. There are colossal opportunities and threats that are difficult to predict now."

The technology that the science-fiction writer Isaac Asimov predicted 75 years ago has arrived, much more sophisticated and smarter than he imagined, and it's come 40 years early.

In Asimov's short story "Runaround", there was a government Handbook of Robotics that included the following three rules:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Oren Etzioni, PhD, author, and CEO of the Allen Institute for Artificial Intelligence, has suggested an update to Isaac Asimov's three laws of robotics. Given the widespread media attention generated by the warnings from Elon Musk and others, these updates may be worth reviewing.

Etzioni has suggested an update to Asimov's rules as follows:

1. An A.I. system must be subject to the full gamut of laws that apply to its human operator.

2. An A.I. system must clearly disclose that it's not human.

3. An A.I. system cannot retain or disclose confidential information without explicit approval from the source of that information.

Etzioni offered these updates to start a discussion that he hopes will lead to a non-fictional Handbook of Robotics from the United Nations.

Elon Musk, together with DeepMind co-founder Mustafa Suleyman and other leading specialists, has called on the United Nations to ban the development, deployment and use of AI-powered autonomous weapons. (Posted to The Robot Report and other sources.)

These warnings should be heeded.

Thinking about Tomorrow

Derek Soyemi @ Bodederek 
