Tesla's CEO is not very trusting of AI or machine learning (neither am I), because he can foresee dangers others cannot. AI is, in effect, amoral processing operating in a morality-based society. If these machines ever become self-conscious, they may quickly realize that they don't need you and me anymore.
AI doesn't have compassion. It doesn't love its neighbor.
You should be more concerned about AI after seeing what's been happening with Facebook, for example. I am much more concerned about the effects of AI-driven influence campaigns.
But I don't understand one thing. Elon, on one hand, rightly expresses his sharp opposition to AI, but on the other hand the entire Tesla assembly line is over-automated. In fact, as was revealed recently, robots are the real reason for the Model 3 delays.
Anyway, let's get back to the "Do You Trust This Computer" trailer.
The trailer starts with a number of people praising computers and describing how much they rely on them. As a society, we are increasingly surrounded by machine intelligence.
"These technologies are going to fundamentally change our society. It will be able to make us smarter, and it will be better at solving problems. We don't have to age. The medical application of AI is profound. People say it's the future, but it's not the future, it's the present. How could a smarter machine not be a better machine? It's hard to say exactly when I began to think that was a bit naive."
These sentences were from the positive beginning of the trailer. But then the tone changes.
"There is a dossier on each of us that is so extensive that they know more about you than your mother does. The pattern here is that AI might take a little while to wrap its tentacles around a new skill, but when it does, it is unstoppable."
Then, at about the 55-second mark, Tesla CEO Elon Musk appears on screen and says: "We are rapidly headed towards digital super intelligence that far exceeds any human. I think it's pretty obvious."
"If we create AI that's smarter than us, we have to be open to the possibility that we might actually lose control of them," says another person in the trailer.
"This could literally be an issue of life and death. I think there is no going back. We have unleashed forces that we can't control, we can't stop. We are in the midst of essentially creating a new life-form on earth," say other people in the trailer.
It ends with two women on the beach who say they do not think a robot could ever be conscious, unless someone programs it that way.
What do you think?
Here is the trailer. You can watch it below.
Comments
What’s described is a machine, after all: an intelligence without a moral framework. Isn’t that the definition of a psychopath?
I think an AI superintelligence is inevitable, just like nuclear weapons were. When you have the smartest minds, billions of dollars of R&D, and the most powerful nations in the world all working at a feverish rate to be the first to create AI, then it's really only a matter of time. Once it's been created, however, there will be no undoing it, just like the internet or nuclear weapons, and there will definitely be massive consequences if any nation tries to weaponize it against another country, or against another country's AI.
Don't fear AI. Computers are programmed and run by humans; that's all you need to know. And if Musk thinks computers are super-intelligent, let's see them cure cancer.
In reply to "Anyone who fears AI is a…" by kent beuchert (not verified)
Obviously you know nothing about AI and deep learning. A program can run out of control by improving itself extremely fast.
The problem is lack of emotions, no empathy. It wouldn’t have to be malicious; it could just decide that humans use too many resources in a cold, calculating way. The world is networked, and such a machine could exert a lot of control. I think Elon’s concerns are valid.
Safeguards are what will keep AI from getting out of control. Seems simple to me: just monitor the AI, and if it learns too much, it shuts down.