Every useful program has three parts: input, an algorithm, and the output the algorithm produces from that input.
Classic programming means that you write the algorithm that converts your input into the output you want. You are the master, and the code itself cannot think, because it is written for this one use case only.
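To make the contrast concrete, here is a trivial sketch of classic programming (the function and numbers are mine, purely for illustration): the entire rule is spelled out by a human.

```python
# Classic programming: a human writes the exact rule that turns
# input into output, and it works for this one use case only.
def fahrenheit_to_celsius(f: float) -> float:
    return (f - 32) * 5 / 9  # the whole algorithm, fully spelled out

print(fahrenheit_to_celsius(212.0))  # 100.0
```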
All artificial intelligence right now works the other way around: a generalized algorithm creates the algorithm for a specific use case by learning from matched inputs and outputs. The problem is that the result will never be perfect. You need a very good set of training data, and even if you have an awesome set, the learned algorithm will still produce errors.
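Here is a minimal sketch of that idea, assuming NumPy and scikit-learn are installed: we hand the generalized algorithm input-output pairs for something as trivial as addition, and the algorithm it learns is only approximate.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # assumed available

# Training data: pairs (a, b) -> a + b, the "matched inputs and outputs".
rng = np.random.default_rng(0)
X = rng.uniform(-10, 10, size=(1000, 2))
y = X[:, 0] + X[:, 1]

# The generalized algorithm (a small neural network) learns
# a specific algorithm for addition from the examples.
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(X, y)

# Even on this trivial task the result is approximate, not exact:
print(model.predict([[2.0, 3.0]]))  # close to 5.0, rarely exactly 5.0
```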
Let’s take, for example, a simple calculator. You can put all the corner cases into the training set, but that doesn’t mean the final algorithm will calculate those cases correctly. Every input-output pair in every training iteration changes the final algorithm. Also, the algorithm is not altered so that it passes the current example exactly; it is changed a little bit to match the overall average.
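You can see that “changed a little bit” behavior in a single gradient-descent step; the numbers below are invented just to show the mechanics.

```python
# One training step for the rule y = 2 * x, with squared-error loss.
w = 0.5                          # current (wrong) parameter
x, target = 3.0, 6.0             # one input-output pair
lr = 0.01                        # small learning rate

pred = w * x                     # model's guess: 1.5
grad = 2 * (pred - target) * x   # gradient of (pred - target)**2 w.r.t. w
w -= lr * grad                   # w becomes 0.77: nudged toward 2.0,
                                 # not jumped there to pass this example
```

After the step the model is still wrong about this very example; it has only moved slightly in the right direction.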
To make it perfectly clear: a generalized AI algorithm just looks for patterns and tries to find an algorithm that covers them. But every corner case (like division by zero, for example) has to be handled by hand. That’s why you don’t see an AI calculator yet. AI is not precise; it suits problems where you have a lot of data but it is hard to write an algorithm by hand, like image analysis, voice recognition, autonomous cars, and so on.
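In practice, handling a corner case by hand means wrapping classic code around the learned part. A sketch, reusing a hypothetical learned `model` like the one above:

```python
# The learned model approximates division; the corner case is classic code.
def ai_divide(model, a: float, b: float) -> float:
    if b == 0:
        # Handled by hand: no amount of training data makes this exact.
        raise ZeroDivisionError("b must be non-zero")
    return float(model.predict([[a, b]])[0])  # approximate at best
```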
Actually, many people say they are doing AI, but in fact, it’s just data mining. Data mining is still classic programming. You have data that you analyze manually with the help of some tools, but in the end, you are still coding the algorithm that uses that data. That’s important to understand.
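A sketch of what that looks like, assuming pandas and an invented `orders` dataset: the tool helps you inspect the data, but the rule at the end is written by a human.

```python
import pandas as pd  # assumed available

# Invented example data.
orders = pd.DataFrame({"amount": [12, 250, 40, 900, 33]})

# "Data mining": analyzing the data manually with the help of a tool...
print(orders["amount"].describe())

# ...but the algorithm that uses that data is still classic programming:
orders["suspicious"] = orders["amount"] > 500
print(orders[orders["suspicious"]])
```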
Now I can explain why I’m not afraid. There has to be a lot of training data for everything. Nowadays a computer has problems just analyzing what is in an image. It’s getting better and better, but it’s still optimized for that one problem, and there are still a lot of fuck-ups that no human would make. You can be impressed by a Go-playing program, but however complex the game is, the world is much more complex, and that program is still optimized for this one game only. Try to play chess with this program, and you win. Maybe you could argue with IBM Watson. Well, it’s very impressive, but it still makes childish mistakes. And by the way, have you heard about any significant improvement lately?
So, I’m not afraid. AI is about looking for patterns. We are a very long way from an algorithm really thinking by itself.
But… I don’t trust recommendation systems.
By the way, maybe now you understand why all the big companies like Google, Apple, Amazon, and so on want every bit of data possible, and why you still see advertisements, recommendations, or assistant help you don’t like or need.