Filtering Bot Traffic

The free plan of our IP geolocation API is limited to 1,000 requests per day, and the pricing of the paid plans is based on daily request volumes. One simple way to reduce your request volume and squeeze some extra value out of our API is to avoid doing lookups for known bots, such as the Google and Bing search bots, which crawl websites and can generate a lot of additional requests.

Filtering out bot traffic is easily done by looking at the User-Agent header. Here are the user agents for some common bots:

Search Bot      User Agent
Google Bot      Googlebot/2.1 (+http://www.google.com/bot.html)
Bing Bot        Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)
Yandex Bot      Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)
Baidu Spider    Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)

Notice that they all contain the term "bot" or "spider". Here's a little JavaScript snippet that uses a regular expression to look for these terms, and performs a different action based on what it finds:

if (navigator.userAgent.match(/bot|spider/i)) {
    // It's a bot!
} else {
    // It's not a bot!
}
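To see the regular expression in action, here's a quick sketch that wraps the same check in a small helper function (the `isBot` name is just for illustration) so it can be tried against sample user agent strings outside the browser:

```javascript
// Hypothetical helper wrapping the same case-insensitive regex check.
function isBot(userAgent) {
  return /bot|spider/i.test(userAgent);
}

// Bot user agents like the ones in the table above match:
console.log(isBot("Googlebot/2.1 (+http://www.google.com/bot.html)"));            // true
console.log(isBot("Mozilla/5.0 (compatible; Baiduspider/2.0; ...)"));             // true

// A typical desktop browser user agent does not:
console.log(isBot("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36")); // false
```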

That check can then be used to avoid hitting the API for bots by doing something like this:

if (navigator.userAgent.match(/bot|spider/i)) {
    // It is a bot. We might want to set some defaults here, or do nothing.
} else {
    // It's not a bot! Hit the API
    $.get("", function (response) {
        // Log the response
        console.log(response);
    });
}
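The snippet above assumes jQuery's `$.get` is available. If you'd rather not depend on jQuery, the same idea can be sketched with the standard `fetch` API. The `lookupIfHuman` helper name and its parameters are just for illustration, and the API endpoint is left as a placeholder for your own URL:

```javascript
// Sketch of the same guard without jQuery. Skips the lookup for bots,
// saving a request against the daily quota; otherwise fetches and parses JSON.
function lookupIfHuman(userAgent, apiUrl, fetchFn) {
  if (/bot|spider/i.test(userAgent)) {
    // It's a bot: do nothing and resolve with null.
    return Promise.resolve(null);
  }
  // Not a bot: hit the API and parse the JSON response.
  return fetchFn(apiUrl).then((response) => response.json());
}

// In the browser this would be called along the lines of:
//   lookupIfHuman(navigator.userAgent, "<your API endpoint>", fetch)
//     .then((data) => { if (data) console.log(data); });
```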