My Dad and I had a bit of a back-and-forth email exchange about AI and Google filtering search results a week or so ago.
For me, the end result was that we both seemed to have valid points, but a lack of solid information about what was really going on made it hard for either of us to drive our respective points home.
This article won't help.
For many people, online reviews are the first port of call when looking for a restaurant or hotel.
As such, they have become the lifeblood of many businesses — a permanent record of the quality of their services and products. And these businesses are constantly on the lookout for unfair or fake reviews, planted by disgruntled rivals or angry customers.
But there will soon be a major new threat to the world of online reviews: fake reviews written automatically by artificial intelligence (AI).
Allowed to rise unchecked, they could irreparably tarnish the credibility of review sites — and the tech could have far broader (and more worrying) implications for society, trust, and fake news.
It's a bit of a read, but the bottom line is that they have taught an AI how to write reviews that read like a human wrote them. Indeed, I failed the little ‘test’ at the bottom of the article.
The point they make is scary. Reviews are one thing, but there is nothing stopping the AI from writing news. Or blogs. Or… yeah, you get the idea.
One of the points I was making with my Dad is that while software engineers have written the AI, I am not so sure they are still in control of it.
In this case, the AI reads reviews to learn how to write reviews. What happens when there are more AI reviews than real human reviews, and the AI is reading its own reviews? Who wrote the code for that? (And are they even still working for the company?)
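To make that feedback loop concrete, here is a toy sketch in Python. It is my own illustration, not the technique from the article: a trivial word-level Markov model learns from a handful of reviews, and each fake review it produces is fed straight back into its training data. The sample corpus and all the function names are made up for the demo.

```python
import random
from collections import defaultdict

def train(corpus, order=2):
    """Build a word-level Markov model: map each pair of words
    to the list of words observed to follow that pair."""
    model = defaultdict(list)
    for review in corpus:
        words = review.split()
        for i in range(len(words) - order):
            model[tuple(words[i:i + order])].append(words[i + order])
    return model

def generate(model, length=20):
    """Produce a 'review' by walking the chain from a random start
    (assumes order=2, matching train's default)."""
    out = list(random.choice(list(model.keys())))
    for _ in range(length):
        followers = model.get(tuple(out[-2:]))
        if not followers:
            break
        out.append(random.choice(followers))
    return " ".join(out)

# Start with human-written reviews...
corpus = [
    "the food was great and the service was friendly",
    "the service was slow but the food was worth the wait",
    "great food and friendly service would visit again",
]

# ...then let each generation train on a corpus that includes
# the previous generation's machine-written output.
for gen in range(3):
    model = train(corpus)
    fake = generate(model)
    print(f"generation {gen}: {fake}")
    corpus.append(fake)  # the model now learns from itself
```

The toy version mostly remixes its own fragments, but it shows the shape of the problem: once machine-written text lands in the training data, nobody wrote the code for what comes next.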
Don’t get me wrong, I am all for AI and ML, but while I rely (perhaps a little too much) on reviews from real people for the products I buy, the idea of a computer writing a review of a product so it will rank higher in my search results leaves me a little cold.