If you've spent any time looking at IP geolocation providers, you've probably seen accuracy percentages everywhere: "99.8% accurate" or "97%+ at the city level."
These numbers look scientific and feel reassuring. But do they actually mean anything? Dig deeper and one thing becomes very clear: there's no shared definition, no shared dataset, and no shared methodology behind them.
Every provider measures accuracy differently, and most don't explain how. So when two companies both claim "99% accuracy," they may not be talking about the same thing at all. IP data accuracy is a hard problem for several distinct reasons. We'll take a closer look at why, and explain how we're tackling it at IPinfo.
To validate a global accuracy percentage like "99.8%," a provider would need:
That dataset does not exist. So global accuracy claims aren't scientific conclusions; they're simplified headlines that obscure more than they reveal while trying to solve the wrong problem.
Different parts of the internet behave differently:
Bundling all of this into a single global accuracy number makes things appear more precise than they are. One provider might count an IP as "accurate" if it lands in the right country; another might count "within 50 miles" as correct; a third might use a 50 km threshold. All three could claim "97% accuracy," and each would mean something different.
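To make the threshold problem concrete, here is a minimal sketch showing how the same set of geolocation results yields different "accuracy" scores depending on the distance criterion. The sample coordinates are hypothetical, invented purely for illustration:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical test set: (predicted lat/lon, true lat/lon) per IP.
samples = [
    ((40.71, -74.01), (40.73, -73.99)),    # ~3 km off
    ((34.05, -118.24), (33.68, -117.83)),  # ~56 km off
    ((51.51, -0.13), (52.48, -1.90)),      # ~160 km off
]

def accuracy(samples, threshold_km):
    """Fraction of predictions within threshold_km of the true location."""
    hits = sum(
        1 for (plat, plon), (tlat, tlon) in samples
        if haversine_km(plat, plon, tlat, tlon) <= threshold_km
    )
    return hits / len(samples)

print(accuracy(samples, 50))             # "within 50 km" criterion
print(accuracy(samples, 50 * 1.609344))  # "within 50 miles" criterion
```

With these three samples, the 50 km criterion scores one hit while the 50-mile criterion scores two: identical data, two different "accuracy" numbers.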
Most accuracy claims in the industry share common challenges:
Testing limitations:
Vague success criteria:
Missing context:
Without transparency about how these numbers are derived, a "99% accuracy" claim becomes impossible to evaluate meaningfully.
Here's what some of the players publicly claim today:
It looks impressive, and if you're looking to buy IP data it can be reassuring, as long as you don't look closely at the fine print: the accuracy percentages are published, but they're explicitly not guaranteed.
MaxMind offers an interactive, per-country accuracy comparison tool with an important disclaimer: "Due to the nature of geolocation technology and other factors beyond our control, we cannot guarantee any specific future accuracy level." IP2Location also publishes accuracy percentages by country and adds some methodology details: sample sizes of "hundreds to thousands" per country, a correctness threshold of under 50 miles, and a warning that outdated data degrades by 1-5% per month.
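That degradation warning is worth pausing on, because monthly decay compounds. Here is a rough sketch of one simple reading of it (a constant fractional loss each month, which is an assumption, not IP2Location's stated model):

```python
def remaining_accuracy(initial, monthly_decay, months):
    """Accuracy left after `months`, assuming a constant fractional
    decay of the still-correct records each month (a simplification)."""
    return initial * (1 - monthly_decay) ** months

# A dataset that starts at 97% accuracy and is never refreshed:
for decay in (0.01, 0.03, 0.05):
    after_year = remaining_accuracy(0.97, decay, 12)
    print(f"{decay:.0%}/month -> {after_year:.1%} after a year")
```

Under this reading, even the gentle 1%/month rate erodes a 97% dataset to roughly 86% within a year, and 5%/month cuts it roughly in half. Whatever the exact model, the point stands: an accuracy figure without a freshness date is incomplete.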
If these accuracy claims are so problematic, why do providers keep making them? The answer reveals how the problem perpetuates itself:
Buyers ask for the numbers. Companies that don't provide a number get eliminated from consideration, even when a number would be meaningless.
Competitive pressure creates an arms race. Once one provider claims "99% accurate," competitors feel pressure to claim "99.5%," even though the underlying methodologies can't be compared. The percentages keep climbing while becoming less meaningful.
Information asymmetry works in the provider's favor. Most buyers aren't IP geolocation experts, and the complexity behind IP data accuracy is hard to explain; few buyers can evaluate methodology, sample sizes, or testing approaches. Providers know a confident "99.8%" claim looks more impressive than an explanation of why a single number is misleading.
It's become industry standard. Once everyone publishes accuracy percentages, not publishing one makes you look like you're hiding something, even when the opposite is true.
The result? A cycle where:
So what's the alternative? At IPinfo, we are replacing the single percentage with something more useful: a verification system anyone can inspect.
We avoid promoting a global accuracy percentage, not because we don't measure accuracy (we do, constantly; we're obsessed with it), but because a single number can't capture the size, complexity, and diversity of the internet.
We treat accuracy as a living system that underpins everything we do, guiding the decisions and investments we make as a company:
ProbeNet internet measurement platform. Real-world internet signals (routing, latency, movement, service behavior) from thousands of points of presence.
Multisource data ingestion. Routing tables, RIR records, hosting footprints, infrastructure signals, residential/mobile indicators, device data, dozens of independent sources.
Behavioral analysis. Understanding how IPs change over time: ASN shifts, stability, movement patterns, rotation, proxy/CGNAT behavior.
Continuous anomaly detection. When the signals disagree, we investigate and correct.
Daily data refresh. Updates flow continuously, and the published data is refreshed every day.
Transparent validation tools. Tools like our Pingable IP Finder, used together with a public ping service such as ping.sx/ping, let anyone test IP geolocation. The examples on our /accuracy page let anyone inspect real ProbeNet measurements in cases where we disagree with other IP data providers, we display traceroute measurements on our ASN pages, and we're planning new tools and ways to help you validate our data.
A dedicated research program. Focused on understanding how the internet behaves and innovating around the hardest accuracy challenges.
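The validation tools above work because latency carries physical information. An RTT measured from a probe puts a hard ceiling on how far away the target can be, since packets in fiber travel at roughly two-thirds the speed of light. Here is a minimal sketch of that plausibility check; the Frankfurt/Singapore numbers are hypothetical, chosen only to illustrate the idea (this is not IPinfo's actual ProbeNet logic):

```python
# Light in fiber propagates at roughly 2/3 of c, i.e. ~200 km per ms.
C_FIBER_KM_PER_MS = 299_792.458 / 1000 * (2 / 3)

def max_distance_km(rtt_ms):
    """Hard upper bound on probe-to-host distance implied by an RTT.
    Real paths add routing and queuing delay, so the true distance is
    usually far smaller than this ceiling."""
    return (rtt_ms / 2) * C_FIBER_KM_PER_MS

def location_plausible(rtt_ms, claimed_distance_km):
    """A claimed location is physically impossible if it lies farther
    from the probe than the RTT permits."""
    return claimed_distance_km <= max_distance_km(rtt_ms)

# Hypothetical check: a probe near Frankfurt measures a 5 ms RTT to an
# IP that some database places in Singapore (~10,000 km away).
print(max_distance_km(5))             # ceiling of roughly 500 km
print(location_plausible(5, 10_000))  # the Singapore placement fails
```

One probe can only rule locations out, not pin them down; combining many probes narrows the feasible region, which is why measurements from thousands of points of presence matter.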
In short: Accuracy isn't a static number for us, it's an ongoing commitment.
When evaluating any accuracy claim, ask these questions:
If a provider can't answer these questions clearly, their accuracy percentage can't be understood, let alone trusted.
The internet is too complex, too dynamic, and too diverse for any provider to credibly claim global, uniform accuracy. Real accuracy isn't something you announce once; it's something you prove continuously. At IPinfo, our focus is simple: we do the hard work, provide the evidence, and let anyone verify it. That's not just what accuracy should look like; it's what accountability looks like.

As the product marketing manager, Fernanda helps customers better understand how IPinfo products can serve their needs.