I had fun making it, but please note that the current implementation is just a demo and far from a proper production tool.
If you really want to use it, then for the best possible results you need at least 500 probes per phase.
It could be optimized fairly easily, but not without going over the anon user limit, which I tried to avoid.
though with some key differences that address the limitations mentioned in the thread.

The main issues with pure ping-based geolocation are:

- IPs are already geolocated in databases (as you note)
- Routing asymmetries break the distance model
- Anycast/CDNs make single IPs appear in multiple locations
- ICMP can be blocked or deprioritized

My approach used HTTP(S) latency measurements (not ping) with an ML model (SVR) trained on ~39k datapoints to handle the non-linearity of internet routing, then performed trilateration via optimization. Accuracy was ~600 km for targets behind CloudFront - not precise, but enough to narrow attribution from "anywhere" to "probably Europe" for C2 servers.

The real value isn't precision but rather:

- Detecting sandboxes via physically impossible latency patterns
- Enabling geo-fenced malware
- Providing any location signal when traditional IP geolocation fails

Talk: https://youtu.be/_iAffzWxexA
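To make that concrete, here is a minimal sketch of the general idea (toy data, not the actual model, features, or probes from the talk): learn an RTT-to-distance mapping with an SVR, then trilaterate by minimizing the mismatch between predicted distances and great-circle distances from each probe.

```python
# Sketch: RTT -> distance via SVR, then trilateration via optimization.
# All training data, probe coordinates, and RTT values are toy placeholders.
import numpy as np
from sklearn.svm import SVR
from scipy.optimize import minimize

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in km."""
    p1, p2 = np.radians([lat1, lon1]), np.radians([lat2, lon2])
    dlat, dlon = p2[0] - p1[0], p2[1] - p1[1]
    a = np.sin(dlat / 2) ** 2 + np.cos(p1[0]) * np.cos(p2[0]) * np.sin(dlon / 2) ** 2
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

# 1) Train an RTT -> distance regressor on known (probe, landmark) pairs.
rtts_ms = np.array([[5], [20], [80], [150], [250]], dtype=float)      # toy data
true_dists_km = np.array([200, 900, 3500, 7000, 11000], dtype=float)  # toy data
model = SVR(kernel="rbf", C=100.0, epsilon=50.0).fit(rtts_ms, true_dists_km)

# 2) Measure RTT from each probe to the unknown target, predict distances.
probes = [(52.52, 13.40), (48.85, 2.35), (40.71, -74.01)]  # (lat, lon) of probes
measured_rtts = np.array([[12.0], [18.0], [95.0]])
pred_dists = model.predict(measured_rtts)

# 3) Trilaterate: find the point whose probe distances best match the predictions.
def residual(x):
    lat, lon = x
    return sum((haversine_km(lat, lon, plat, plon) - d) ** 2
               for (plat, plon), d in zip(probes, pred_dists))

best = minimize(residual, x0=np.array([50.0, 10.0]), method="Nelder-Mead")
print("estimated location (lat, lon):", best.x)
```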
---
Our research scientist, Calvin, will be giving a talk at NANOG96 on Monday that delves into active measurement-based IP geolocation.
IEEE 802.11mc > Wi-Fi Round Trip Time (RTT) https://en.wikipedia.org/wiki/IEEE_802.11mc#Wi-Fi_Round_Trip...
/? fine time measurement FTM: https://www.google.com/search?q=fine+time+measurement+FTM
1. Trilateration mostly doesn't work with internet routing, unlike GPS. Other commenters have covered this in more detail. So the approach described here - to take the closest single measurement - is often the best you can do without prior data. This means you need an extremely dense distribution of nodes across cities to get useful data at scale. We run our own servers and also sponsor Globalping and use RIPE Atlas for some measurements (I work for a geo data provider), yet even with thousands of available probes, we can only accurately infer latency-based location for IPs very close to those probes (first sketch below).
2. As such, latency/traceroute measurements are most useful for verifying existing location data. That means for the vast majority of IP space, we rely on having something to compare against (second sketch below).
3. Traceroute hops are useful; the caveat is that you're geolocating a router, not the target. RIPE IPmap already locates most public routers with good precision (third sketch below).
4. Overall these techniques work quite well for infrastructure and server IP addresses but less so for eyeball networks.
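To illustrate point 1, a rough sketch of the closest-probe heuristic: take the lowest-RTT probe as the best guess, and use the RTT itself as a hard physical bound on distance (roughly 100 km per one-way millisecond in fiber). Probe names, coordinates, and RTTs are invented for the example.

```python
# Point 1: the probe with the lowest RTT is the best location guess, and the
# RTT bounds how far the target can possibly be (fiber ~ 2/3 c, ~100 km/ms one way).
# Probe names, coordinates, and RTTs are made up for this example.

KM_PER_ONE_WAY_MS = 100.0

probes = [
    # (name, lat, lon, measured RTT to the target in ms)
    ("fra", 50.11, 8.68, 4.1),
    ("ams", 52.37, 4.90, 7.9),
    ("nyc", 40.71, -74.01, 88.5),
]

name, lat, lon, rtt = min(probes, key=lambda p: p[3])
max_distance_km = (rtt / 2.0) * KM_PER_ONE_WAY_MS  # one-way time * propagation speed

print(f"closest probe: {name} at ({lat}, {lon})")
print(f"target is at most ~{max_distance_km:.0f} km from it")
```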
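For point 2, a sketch of using latency purely as a verification signal: if the measured RTT is below the physical minimum implied by the claimed location, the database entry cannot be right. The coordinates and RTT value are placeholders.

```python
# Point 2: verify an existing location claim instead of deriving one.
# If the measured RTT beats the speed of light in fiber for the claimed
# distance, the claim is impossible. Coordinates and RTT are illustrative.
import math

KM_PER_ONE_WAY_MS = 100.0  # ~2/3 c in fiber

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

probe = (52.37, 4.90)                # Amsterdam probe
claimed_location = (-33.87, 151.21)  # database claims Sydney
measured_rtt_ms = 25.0

dist_km = haversine_km(*probe, *claimed_location)
min_possible_rtt_ms = 2 * dist_km / KM_PER_ONE_WAY_MS  # round trip at fiber speed

if measured_rtt_ms < min_possible_rtt_ms:
    print(f"claim impossible: {measured_rtt_ms} ms RTT, but >= {min_possible_rtt_ms:.0f} ms required")
else:
    print("claim is at least physically plausible")
```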
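And for point 3, a rough sketch of geolocating traceroute hops by looking them up in RIPE IPmap. The IPmap endpoint URL and response handling here are assumptions about the public API, so check the current documentation before relying on them.

```python
# Point 3: geolocate the routers on the path by looking up traceroute hop IPs
# in RIPE IPmap. The IPmap URL and JSON shape below are assumptions, not
# verified against the current API documentation.
import re
import subprocess

import requests

def traceroute_hops(target: str) -> list[str]:
    """Run the system traceroute (numeric mode) and collect responding hop IPs."""
    out = subprocess.run(["traceroute", "-n", target],
                         capture_output=True, text=True, timeout=120).stdout
    hop_lines = out.splitlines()[1:]  # skip the header line, which repeats the target IP
    return re.findall(r"\b\d{1,3}(?:\.\d{1,3}){3}\b", "\n".join(hop_lines))

def ipmap_locate(ip: str):
    """Ask RIPE IPmap for its best-guess location of a router IP (assumed endpoint)."""
    resp = requests.get(f"https://ipmap-api.ripe.net/v1/locate/{ip}/best", timeout=10)
    return resp.json() if resp.ok else None

for hop_ip in traceroute_hops("example.com"):
    print(hop_ip, ipmap_locate(hop_ip))
```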
https://ping.sx is also a nice comparison tool
Seems the tool relies on ICMP results from various probes. So wouldn't this project become useless if the target device disables ICMP?
I wonder if you can "fake" results by having your gateway/device send back fake ICMP responses.
How's this different from RIPE Atlas?
Sometimes the residential ISP that hosts a probe may have bad routing due to many factors; how does the algorithm take that into account?