The past few weeks have generated a firestorm of news regarding the control and ownership of information, whether the US Congressional vote to allow ISPs to sell browser data or, most recently, Google’s plans to filter and flag search results based on perceived fact checking.
It’s not a new idea: Facebook rolled out changes to which links are “favored” by its algorithms, effectively censoring sites it declares to be fake news. And while that may be appropriate on a social media platform, it seems out of place inside Google search results.
Google handles somewhere around 59,000 searches every second, and close to 1.17 billion people use Google on a regular basis. Since its humble inception at Stanford University in California, the company has swelled into what is effectively the Internet catalog. If it’s on the Internet, Google can find it and tell you more.
And that’s exactly why fact-checking search results seems out of place here. Google is not supposed to be a reflection of actual truth. Its goal has always been to reflect exactly what is on the Internet, and while those two kinds of truth may have diverged of late, it still seems like the responsibility of Google engineers to ensure that users get what they’re searching for, not what Google wants them to see.
The move to “fact-check” news articles in Google search ultimately changes nothing. People are going to visit the links they want to visit and read what they want to read.
It’s just unsettling that Google feels the need to tell people what to think.