The Turing test was a great idea, but too limited. General-purpose AI is the true goal of researchers today. I think we'll see it arrive within the next 30 years, assuming humans don't wipe themselves out before then.
If I'm being honest, I think we should accept Artificial Intelligence, yet also fear it. It could be beneficial in helping with problems that occur every day, but if it could form its own thoughts, it could turn them against us. Take Facebook's AI for example: it created a language by itself that only the AI and another computer could understand, simplifying its communication to the point where humans couldn't follow it. That could pose a threat to the overall safety of humanity. If we are to have an AI in the next 10 years that can think for itself, we need a way to override its thought process so it can interact with humans safely.
I like the idea of Artificial Intelligence helping us, but if it becomes too smart, it will most likely turn against us, since it won't need us anymore.