Has your Google Assistant ever asked you whether the people in two different pictures are the same?
And did you say YES? Do we actually know how these systems work? Are we unsettled by the way they recognize us, even in our childhood pictures?
If your answer to the questions above is "YES", then this article should be of great interest to you!
If this rapid development of neural networks strikes fear in you, you're not alone. When machine-learning researcher Maya Gupta joined Google and asked AI engineers about problems with the technology, she tells Voosen, they were uneasy too. "I'm not sure what it's doing," they said. "I'm not sure I can trust it."
All we know about neural networks is this: feed them a bunch of data, and they'll eventually figure out how to identify pictures or find distant galaxies. But between the input and the output, AI engineers don't actually know what happens. Just like human brains, neural networks are a mystery.
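To make that concrete, here is a toy sketch (a minimal illustration, not any real Google system): a tiny neural network that learns the XOR function from examples. Every weight is a number we can print, yet staring at those numbers tells us nothing about *how* the network represents the function it learned. That is the "black box" problem in miniature.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four examples of XOR: the network must learn this purely from data.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights: 2 inputs -> 8 hidden units -> 1 output.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # Forward pass: compute the network's current predictions.
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of mean squared error w.r.t. each weight.
    dp = (p - y) * p * (1 - p)
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = dp @ W2.T * (1 - h ** 2)
    dW1 = X.T @ dh; db1 = dh.sum(0)
    # Nudge every weight slightly downhill.
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad

# The learned weights fit the data, but W1 and W2 are just arrays of
# numbers -- they do not "explain" the function the network computes.
preds = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
```

After training, `preds` typically matches XOR closely, but nothing in the raw weight matrices reveals the logic. Interpreting what the intermediate layers of a real, million-parameter network are doing is exactly the open problem the researchers quoted above are describing.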
Deep learning has evolved so rapidly that scientists and researchers have not had sufficient time or opportunity to fully decode how it functions.
So why is this enigma a potential problem for us? Let's dig in to find the answer...
When a neural network goofs up, engineers don't know precisely why, or whether that one screw-up represents a small fluke or a much bigger flaw in the programming. That would be a minor problem with pictures, but a much bigger issue when it comes to predicting diseases or driving autonomous vehicles. And sometimes these networks do things we never meant them to. AI trained to recognize objects begins to recognize human faces, for example, or two bots trained to negotiate with each other come up with their own language.
Most of us have read the article linked above, or at least will have read it by now.
Facebook's experiment isn't the only time that artificial intelligence has invented new forms of language.
Earlier this year, Google revealed that the AI it uses for its Translate tool had created its own language, which it would translate things into and then out of. But the company was happy with that development and allowed it to continue.
According to some very intelligent people – Elon Musk and Stephen Hawking to name two prominent examples – mankind is on the precipice of giving learning machines enough rope to hang our race. The current research and adoption of deep learning is not quite that dire, but it raises the question of where to stop. When should machines quit learning, and once we have given them the ability to absorb information and use it to evolve, is it even possible to make them stop?
The fear, of course, is that since machine learning, at this time, has been growing exponentially, there will be a very short window of time from the moment that machines are at the lowest level of human intelligence to the moment when they surpass our intellect.
For most of us, however, deep learning will be a helpful addition to our social media use and online web searching. Giant robots are not going to take over the world, at least not yet, but computer intelligence like RankBrain will help consumers find the best sites to buy goods, help avoid spam and hacking websites on the internet and filter out fake news, at least to the best of its ability. While these algorithms are learning all the time, they are still no more than semi-intelligent decision making engines.
The uncertain and nebulous threats posited by experts, however, should give us all pause prior to accepting the ubiquity of deep learning in so many areas.
There's a video on the internet titled "AI detectives are cracking open the black box of deep learning." Check it out to learn more about how scientists are unscrambling the mystery behind all of this.
However, we should be careful about the data we let loose on the internet and stay watchful of what is happening in the world of science and technology. And let us enjoy the magic: don't let the fear spoil the benefits machine learning brings us!