I'm not afraid of AI

Every useful program has three parts: input, an algorithm, and the output the algorithm produces.

Classic programming means that you write the algorithm which converts your input into the output you want. You are the master, and the code by itself cannot think because it's written for this one use case only.
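To illustrate (a made-up minimal example): in classic programming every rule, including every corner case, is spelled out by hand.

```python
def calculator(op, a, b):
    # Classic programming: the whole algorithm, including the
    # corner case, is written by hand for this one use case.
    if op == "+":
        return a + b
    if op == "/":
        if b == 0:
            raise ValueError("division by zero handled explicitly")
        return a / b
    raise ValueError("unknown operation")

print(calculator("+", 2, 3))   # 5
print(calculator("/", 10, 4))  # 2.5
```

The code does exactly what you told it to, nothing more.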

All artificial intelligence right now works on a generalized algorithm which creates an algorithm for a specific use case by learning from matching inputs and outputs. The problem is that it will never be perfect. You need a very good set of learning data, and even if you have an awesome set, it will still produce errors.

Let's take a simple calculator as an example. You can put all the corner cases into the learning set, but that doesn't mean the final algorithm will calculate those cases correctly. Every new input-output pair in every learning iteration changes the final algorithm. Also, the algorithm is not altered to pass just the current example; it is changed a little bit to match the overall average.
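That averaging can be shown in a toy sketch (not a real ML setup, just least squares with a straight line): "learn" division from a few input-output pairs and watch the learned model match only the overall average, not any single example.

```python
# Toy illustration: "learn" f(x) = 1/x from examples with a linear
# model fitted by least squares. The fit minimizes the overall
# average error, so no single training example is matched exactly.
xs = [1, 2, 3, 4, 5]
ys = [1 / x for x in xs]          # the true input-output pairs

mx = sum(xs) / len(xs)
my = sum(ys) / len(ys)
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
intercept = my - slope * mx

def learned(x):
    return slope * x + intercept

print(learned(2))   # roughly 0.64, but the true answer is 0.5
print(learned(0))   # happily returns a number, although 1/0 is undefined
```

Even on its own training data the model is off, and on the corner case (zero) it confidently outputs nonsense.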

To make it perfectly clear: a generalized AI algorithm just looks for patterns and tries to find an algorithm which can cover them. But every corner case (like division by zero, for example) has to be handled by hand. That's why you don't see an AI calculator yet. AI is not precise; it is good where you have a lot of data but it's hard to write the algorithm by hand, like image analysis, voice recognition, autonomous cars, and so on.

Actually, many people say they are doing AI when in fact it's just data mining. Data mining is still classic programming: you have data which you analyze manually with the help of some tools, but in the end you are still coding the algorithm which uses that data. That's important to understand.

Now I can explain why I'm not afraid. You need a lot of learning data for everything. Nowadays a computer has problems just analyzing what is in an image. It's getting better and better, but it's still optimized for this one problem, and there are still a lot of fuck-ups no human would make. You can be impressed by the Go program, but however complex the game is, the world is much more complex, and that program is still optimized for this one game only. Try to play chess with it and you will win. Maybe you could argue with IBM Watson. Well, it's very impressive, but it still makes childish mistakes. And by the way, have you heard about any big improvement lately?

So, I'm not afraid. AI is about looking for patterns. We are a very long way from an algorithm really thinking by itself.

But… I don't trust recommendation systems.

By the way, maybe now you understand why all the big companies like Google, Apple, Amazon and so on want all the data they can get, and why you still see advertisements, recommendations, or help from assistants you don't like or need.

Trust in recommendation systems

There is so much information on the Internet that no one can consume it all. That's why almost every bigger page uses some type of recommendation system: to find what the user likes and filter out undesirable content. Look at Facebook, for example. You would need at least half a day to read everything just your friends share. Google does it as well with Google Play Music, YouTube, or even Maps and the search engine! The last one is actually called personalized search. Twitter has also been playing for some time with "best Tweets" and "in case you missed it", which totally change your timeline. Not even mentioning e-shops like Amazon or ad platforms.

The trend is clear. We are all just bunches of numbers in different databases, and when we come to some web page, it does some crazy math behind the scenes and shows the content we probably want. The question is: can we trust those numbers representing us?
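The "crazy math" is often, at its core, something as simple as a weighted match between your profile and each item. Here is a purely hypothetical sketch (the categories and numbers are made up; real systems are far more involved):

```python
# Hypothetical sketch: a user profile and items described by the same
# made-up interest categories. The score is a simple dot product;
# the page then shows the highest-scoring items first.
user = {"sport": 0.9, "politics": 0.1, "music": 0.5}

items = {
    "football recap":  {"sport": 1.0, "politics": 0.0, "music": 0.0},
    "election debate": {"sport": 0.0, "politics": 1.0, "music": 0.0},
    "concert review":  {"sport": 0.0, "politics": 0.1, "music": 0.9},
}

def score(profile, item):
    return sum(profile[k] * item.get(k, 0.0) for k in profile)

ranked = sorted(items, key=lambda name: score(user, items[name]), reverse=True)
print(ranked)  # ['football recap', 'concert review', 'election debate']
```

You never see the "election debate" item at the top, simply because the numbers say you are not that kind of person.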

I don't know.

I currently work on a recommendation system at Seznam.cz, and I'm also building my own "general Internet reader which will automatically sort content by my preferences". I'm saying that only to mention that I know a little bit about these systems, and it somehow shapes my point of view on trust.

And my point of view is: I don't trust them.

I'm a little bit scared of what some pages can do with those systems. They can basically change our thinking. They can keep us in some kind of social bubble. I know that's the worst-case scenario, but there are also everyday scenarios: when I mark that I don't like something and something else is similar to it, there is a chance I will not see it! Or worse, I will not be able to find it easily.
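That everyday scenario can be sketched in a few lines (hypothetical, with made-up tags and thresholds): one "dislike" silently hides everything that shares enough tags with the disliked item.

```python
# Hypothetical sketch: after one "dislike", anything sharing enough
# tags with the disliked item quietly disappears from the feed.
disliked = {"cats", "funny", "video"}

feed = {
    "cute cat compilation": {"cats", "funny", "video"},
    "kitten care guide":    {"cats", "video"},
    "tax deadline news":    {"news", "finance"},
}

def similarity(a, b):
    # Jaccard similarity: shared tags divided by all tags
    return len(a & b) / len(a | b)

visible = [title for title, tags in feed.items()
           if similarity(tags, disliked) < 0.5]
print(visible)  # only 'tax deadline news' survives
```

Note that the "kitten care guide" disappears too, even though I never said anything about it. That is exactly the part that scares me.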

And that's why I block 3rd-party cookies. I block not just ads but everything that can track me. I don't follow anything on Facebook. I don't use YouTube recommendations. I don't care about Twitter's best Tweets for me. Actually, I moved my Twitter feed into my own app. I don't dislike songs on Google Music. I am careful about clicking on anything which could fire some kind of signal about me, some kind of number into the equations which will try their best to recommend me something else.

The craziest thing about it is probably that I do this even with the systems I'm programming. At work I don't use the system at all, and at home I know the exact meaning of all the signals, so I am very cautious about what I click on.

Actually, my little project helps me to understand this new era and see all its problems. It's very hard to do well, and I think it's similar to security: when security is messed up, users get hurt. The same applies to recommendation systems. But there is a bigger problem: everyone tries to build the best possible security, but the best recommendation system does not necessarily mean better revenue. And that's probably why I generally don't trust them and try to avoid them.

There are only two systems I trust now: Google personalized search, because I have never had a problem with it (yet), and my own reader, because I know every bit of it.