How Google moderates Google Maps reviews
Fake or abusive reviews have long been a problem for local businesses. Google recently shared insights into how it handles the reviews users leave for local businesses on Google Maps.
According to the explainer Google published, moderating Google Maps reviews is a multi-step process. Here are the five steps Google takes to ensure that the reviews businesses receive are authentic, credible, and helpful.
Step #1 — Alignment with Google’s strict content policies
To ensure the authenticity and usefulness of reviews, Google has well-defined content policies. For instance, one of the key pillars of that policy is that “contributions must be based on real experiences and information.”
According to Google, “We’ve created strict content policies to make sure reviews are based on real-world experiences and to keep irrelevant and offensive comments off of Google Business Profiles.”
Review content that violates these content policies — such as personal attacks or defamatory language, inaccurate content, stolen photos, etc. — is immediately flagged.
Step #2 — Marrying content policies with Google’s algorithm
According to Google, once the content policies are created and finalized, they are integrated into Google’s algorithm.
In addition to algorithms, these policies also become a part of the training for human moderators.
“Once a policy is written, it’s turned into training material — both for our operators and machine learning algorithms — to help our teams catch policy-violating content and ultimately keep Google reviews helpful and authentic.”
Step #3 — Immediate moderation by Google
Because Google uses a combination of human and machine review systems, it can flag and moderate reviews immediately.
“As soon as someone posts a review, we send it to our moderation system to make sure the review doesn’t violate any of our policies.
Given the volume of reviews we regularly receive, we’ve found that we need both the nuanced understanding that humans offer and the scale that machines provide to help us moderate contributed content.”
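The hybrid flow Google describes could be sketched roughly as follows. This is purely an illustration, not Google's actual implementation: the scoring function, thresholds, and outcome labels are all invented. The idea is that a machine classifier handles clear-cut cases at scale, while borderline reviews are escalated to human moderators for nuanced judgment.

```python
def machine_score(review: str) -> float:
    """Toy stand-in for an ML policy-violation score in [0, 1]."""
    banned_terms = {"stolen", "scam-link"}  # placeholder policy signals
    hits = sum(term in review.lower() for term in banned_terms)
    return min(1.0, hits / 2)

def moderate(review: str) -> str:
    score = machine_score(review)
    if score >= 0.9:
        return "removed"       # confident violation: remove automatically
    if score >= 0.4:
        return "human-review"  # borderline: escalate to a human moderator
    return "published"         # confident pass: publish immediately

print(moderate("Great food and friendly staff"))  # published
```

The split captures the trade-off Google cites: machines provide scale for the obvious cases, and humans provide nuance for everything in between.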
Step #4 — Encouraging community moderation
Google also does not rely solely on its own resources. It taps into the broader community by encouraging community moderation: the public is allowed and encouraged to flag fake reviews and submit reports via a Report-a-Post function.
This step works as a backstop in case both the human moderators and the machine-review mechanisms fail to flag a policy-violating review.
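The backstop pattern can be sketched in a few lines. Again, this is a hypothetical illustration, not Google's system: the report threshold and state labels are assumptions. User reports accumulate against a published review, and once enough arrive, the review is pushed back into the moderation queue for a second look.

```python
from collections import defaultdict

REPORT_THRESHOLD = 3  # assumed number of reports before re-review

reports: dict[str, int] = defaultdict(int)

def report_review(review_id: str) -> str:
    """Record a user report; escalate once enough reports accumulate."""
    reports[review_id] += 1
    if reports[review_id] >= REPORT_THRESHOLD:
        return "queued-for-rereview"  # community signal overrides earlier pass
    return "report-recorded"

report_review("review-42")         # report-recorded
report_review("review-42")         # report-recorded
print(report_review("review-42"))  # queued-for-rereview
```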
Step #5 — Proactively anticipating fake reviews
Last but not least, proactively anticipating fake reviews is also a key part of Google’s review mechanisms.
Google mentioned that it has become particularly proactive about anticipating events that could attract fake reviews, including special monitoring of businesses in the vicinity of those events.
This helps ensure that any spike in reviews is carefully monitored and reviewed, minimizing the chances of fake reviews getting through and letting only useful and authentic reviews into the system.
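Spike monitoring of this kind could be sketched as a simple anomaly check; this is an illustration only, with a made-up baseline window and threshold, not anything Google has described in detail. A business's review count for the day is compared against its recent baseline, and unusual surges are flagged for closer moderation.

```python
from statistics import mean, stdev

def is_spike(daily_counts: list[int], today: int, threshold: float = 3.0) -> bool:
    """Flag today's count if it exceeds the baseline mean by `threshold` stdevs."""
    baseline_mean = mean(daily_counts)
    baseline_sd = stdev(daily_counts) or 1.0  # guard against a flat baseline
    return (today - baseline_mean) / baseline_sd > threshold

history = [4, 6, 5, 7, 5, 6, 4]     # typical week of review counts
print(is_spike(history, today=40))  # True: surge warrants extra scrutiny
print(is_spike(history, today=6))   # False: normal day
```

A flagged spike would not mean the reviews are fake, only that they warrant the elevated scrutiny Google describes applying around high-profile events.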
Google explained this in the following words:
“For instance, when there’s an upcoming event with a significant following — such as an election — we implement elevated protections to the places associated with the event and other nearby businesses that people might look for on Maps.”
Google has been effective at identifying and moderating fake reviews and business profiles thanks to a mix of human moderators and machine learning review systems, built on well-defined and strict content policies.
In addition, community moderation plays a key role in helping Google distinguish authentic reviews from fake or unreliable ones.
You can read more about how reviews on Google Maps work here.