This content originally appeared on The Keyword and was authored by Dan Pritchett
For many of us, Google Maps is the place we go for information about the world around us. We search for businesses, seek directions, check photos and read reviews.
One way Maps is kept accurate and reliable is through updates that everyday people add. Since we started accepting contributed content in 2010, more than 970 million people have contributed updates to Google Maps in the form of reviews, photos, ratings and factual information like addresses and business hours. These contributions allow Google Maps to keep up with the world constantly changing around us, and they help people make more informed decisions.
Just as Google Maps is a reflection of the real world, so are the people who contribute to it. The same neighbor who lends a hand could also be writing witty reviews of local restaurants. Unfortunately, the opposite is also true. Just as there are bad actors in the real world, there are those who try to game Google Maps with inappropriate content — the vast majority of which is removed before you see it.
While much of our work to prevent inappropriate content is done behind the scenes, we wanted to share some details about our investments and progress in keeping Google Maps reliable and trustworthy.
How we single out the bad actors
Bad actors try to mislead people through a variety of techniques, from fake reviews that attack a business to inauthentic ratings that boost a place’s reputation. Fighting this unhelpful content is a complex, ceaseless battle — one that we rarely detail publicly so as not to tip off scammers to our ever-changing techniques.
One of the best tools we have to fight back is an understanding of what normal, authentic Google Maps usage looks like. For example, we know that the average person is likely to use Google Maps while navigating a commute or road trip, and while searching for nearby restaurants or services. They’ll leave reviews at places they’ve been, and usually add ratings or photos in location-specific clusters.
Observations like these inform our machine learning algorithms, which scan millions of daily contributions. These algorithms detect and remove policy-violating content across a variety of languages, and also scan for signals of abnormal user activity. For instance, they can detect if a new Google Maps account in, say, Bangkok suddenly leaves bad car dealership reviews in Mexico City and 1-star restaurant ratings in Chicago. The policy-violating content is either removed by our automated models or flagged for further review, along with the user account.
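To make that kind of anomaly signal concrete, here is a minimal sketch in Python of one check the example above hints at: flagging a brand-new account whose consecutive contributions imply impossible travel between distant cities. This is purely illustrative and not Google’s actual system; the `Contribution` type, the thresholds and the haversine helper are all assumptions made for the example.

```python
# Illustrative only: flag a new account whose consecutive contributions imply
# impossible travel (e.g. Bangkok -> Mexico City -> Chicago in one afternoon).
# The data model and thresholds are hypothetical, not Google's real pipeline.
from dataclasses import dataclass
from datetime import datetime, timedelta
from math import radians, sin, cos, asin, sqrt
from typing import List


@dataclass
class Contribution:
    lat: float           # latitude of the reviewed place
    lon: float           # longitude of the reviewed place
    timestamp: datetime  # when the review/rating was posted


def haversine_km(a: Contribution, b: Contribution) -> float:
    """Great-circle distance between two contribution locations, in km."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    h = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))


def looks_geographically_abnormal(
    contributions: List[Contribution],
    account_age_days: int,
    max_km_per_hour: float = 900.0,  # roughly faster than a commercial flight
    new_account_days: int = 30,
) -> bool:
    """Return True if a new account's contributions imply impossible travel."""
    if account_age_days > new_account_days or len(contributions) < 2:
        return False
    ordered = sorted(contributions, key=lambda c: c.timestamp)
    for prev, curr in zip(ordered, ordered[1:]):
        hours = max((curr.timestamp - prev.timestamp) / timedelta(hours=1), 1e-6)
        if haversine_km(prev, curr) / hours > max_km_per_hour:
            return True
    return False
```

In practice, a heuristic like this would be just one of many features feeding the machine learning models described above, not a standalone rule.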
We also deploy thousands of trained operators and analysts who help with content evaluations that might be difficult for algorithms, such as understanding reviews with local slang.
Who are the bad actors and how do we stop them?
Our teams and protections are built to fight two main types of bad actors: content fraudsters and content vandals.
Fraudsters, who are ultimately motivated by money, try to trick people with scams like fake reviews to attract customers or fake listings to generate business leads. To deter them, we preemptively remove opportunities for them to profit off of fake content.
For example, we have focused efforts on detecting content coming from click farms, where fake reviews and ratings are generated in bulk. Through better detection of click farm activity, we are making it harder to post fake content cheaply, which ultimately makes it harder for a click farm to sell reviews and make money. And to catch fake business profiles before they appear on Maps, we've strengthened our Google My Business verification processes with new machine learning models that help identify fraudulent engagement. By fighting large-scale efforts to create fake business profiles, we’ve stymied millions of attempts from fraudsters aiming to steal customers from legitimate businesses by crowding them out of search results.
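As a rough illustration of the click-farm pattern described above, the sketch below flags businesses that receive a burst of reviews from many distinct accounts sharing the same device or network fingerprint within a short window. The `Review` fields, the fingerprint notion and the thresholds are hypothetical; this is not Google’s detection logic.

```python
# Hedged sketch: surface businesses whose reviews arrive in suspicious bursts
# from many accounts on one shared fingerprint. Field names and thresholds are
# illustrative assumptions, not a real detection system.
from collections import defaultdict
from datetime import datetime, timedelta
from typing import Dict, List, NamedTuple, Set, Tuple


class Review(NamedTuple):
    business_id: str
    account_id: str
    device_fingerprint: str  # hypothetical shared-device/network identifier
    posted_at: datetime


def suspected_click_farm_bursts(
    reviews: List[Review],
    window: timedelta = timedelta(hours=24),
    min_accounts: int = 10,
) -> Dict[str, Set[str]]:
    """Map business_id -> accounts that reviewed it from one fingerprint in a burst."""
    by_key: Dict[Tuple[str, str], List[Review]] = defaultdict(list)
    for r in reviews:
        by_key[(r.business_id, r.device_fingerprint)].append(r)

    flagged: Dict[str, Set[str]] = defaultdict(set)
    for (business_id, _), group in by_key.items():
        group.sort(key=lambda r: r.posted_at)
        for i, start in enumerate(group):
            accounts = {
                r.account_id
                for r in group[i:]
                if r.posted_at - start.posted_at <= window
            }
            if len(accounts) >= min_accounts:
                flagged[business_id].update(accounts)
    return dict(flagged)
```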
Then there are content vandals, who may be motivated by social and political events or simply want to leave their mark online. For example, they post fake reviews or edit the names of places to send a message, and they add off-topic photos as pranks.
Content vandalism can be more difficult to tackle because it’s often random. For instance, a teenager might post an off-topic photo to their high school’s listing on Maps as a joke, or someone might leave profanity in a nonsensical review.
Impeding content vandals comes down to anticipation and quick reaction. As places become more prone to vandalism, we adjust our defenses. For instance, last year we quickly modified our algorithms to preemptively block racist reviews when we observed anti-Chinese xenophobia associated with COVID-19. To avoid the spread of election-related misinformation, we limited the ability for people to edit the phone numbers, addresses and other information for places like voting sites. And we restricted reviews for certain places where we saw higher rates of policy-violating content, like schools in the U.S.
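The restrictions mentioned above can be thought of as category-level contribution gates. The sketch below is a simplified, hypothetical version of that idea, using the examples from the text (locking factual edits for voting sites, restricting reviews for U.S. schools); the category names and rules are illustrative only, not our actual configuration.

```python
# Hypothetical category-based contribution gate: certain place types are
# temporarily blocked from receiving certain contribution types.
from typing import Dict, Set

# Contribution types a place category is currently blocked from receiving.
RESTRICTED: Dict[str, Set[str]] = {
    "voting_site": {"edit_phone", "edit_address", "edit_hours"},
    "school_us": {"review"},
}


def contribution_allowed(place_category: str, contribution_type: str) -> bool:
    """Return False when a contribution type is locked for this place category."""
    return contribution_type not in RESTRICTED.get(place_category, set())


assert contribution_allowed("restaurant", "review")
assert not contribution_allowed("school_us", "review")
```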
Our progress in fighting unwelcome content
With the help of people and technology that closely monitor Maps 24/7, we’re able to take swift action against scammers, ranging from content removal and account suspension to litigation. In 2020 alone, we took the following actions to ensure the content you see in Google Maps is reliable:
- We blocked or removed 55 million policy-violating reviews and nearly 3 million fake Business Profiles. That’s 20 million fewer reviews than we removed in 2019, as the overall number of reviews dropped with fewer people out and about during COVID-19.
- We took down more than 960,000 reviews and more than 300,000 Business Profiles that were reported to us by Google Maps users. This is an increase over 2019, largely due to increased use of automated moderation, which complements the manual review of flagged content performed by our operators and analysts.
- We reviewed and removed more than 160 million photos and 3.5 million videos that either violated our policies or were of low quality. For example, thanks to advancements in our automated systems, we’ve significantly improved our detection of extremely blurry photos. This has led to major improvements in the quality of photos on Maps, both for newly added photos and for ones shared in years past. And as we targeted bad actors more aggressively overall, removing an account could also remove all of the content it had contributed, in some cases thousands of photos.
- Our technologies and teams disabled more than 610,000 user accounts after detecting and investigating suspicious or policy-violating behavior.
- We stopped more than 3 million attempts by bad actors to verify Business Profiles on Google that didn’t belong to them.
Content contributed by our users is an important part of how we continue to make Google Maps more helpful and accurate for everyone. As more people share their local knowledge on Google Maps, we’ll continue to invest in the policies, technologies and resources needed to make sure information is reliable.