About BotForensics
BotForensics is a specialised security research platform that helps organisations improve their headless browsers and automated tools. We identify the weaknesses that allow websites to detect and block bots, whether they are used for threat intelligence, web scraping, or other automated browsing purposes.
Our Mission
Our mission is to help organisations make their bots and automated tools indistinguishable from real human browsers. This enables threat intelligence companies to bypass cloaking and see actual malicious content, helps web scrapers avoid blocking, and assists in understanding how AI-powered browsers like Comet, Atlas, and Dia are detected.
What We Do
We operate an instrumented honeypot site that:
- Analyses client request headers and HTTP traffic
- Delivers JavaScript code that inspects the runtime environment (a sketch of this kind of probe appears after this list)
- Probes TLS certificate validation, same-origin policy enforcement, IPv6 support, WebRTC, Tor, web workers, fonts, and more
- Uses over 200 unique signals to classify sessions as headless browsers or real human users
- Identifies bot fingerprints and attributes them to specific companies
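To give a flavour of the runtime inspection mentioned above, here is a minimal, hypothetical sketch (in TypeScript, compiled for the browser) of the kind of checks a delivered script can make. The signal names, the specific checks, and the /collect endpoint are illustrative assumptions for this example, not our actual probe, which uses over 200 signals.

```typescript
// Hypothetical illustration: a handful of the runtime checks a delivered
// script can perform. Real probes cover far more signals than this.
interface Signal {
  name: string;
  suspicious: boolean;
}

function collectSignals(): Signal[] {
  const signals: Signal[] = [];

  // Headless automation frameworks commonly set navigator.webdriver to true.
  signals.push({ name: "webdriver-flag", suspicious: navigator.webdriver === true });

  // A user agent string that admits to being headless.
  signals.push({ name: "headless-ua", suspicious: /HeadlessChrome/.test(navigator.userAgent) });

  // Real Chrome exposes window.chrome; some headless setups do not.
  signals.push({
    name: "missing-window-chrome",
    suspicious: /Chrome/.test(navigator.userAgent) && !(window as any).chrome,
  });

  // Automated environments often report zero installed plugins.
  signals.push({ name: "no-plugins", suspicious: navigator.plugins.length === 0 });

  // WebRTC availability is one of the capabilities worth probing.
  signals.push({ name: "no-webrtc", suspicious: typeof RTCPeerConnection === "undefined" });

  return signals;
}

// Report the observations back to a collection endpoint (hypothetical URL).
navigator.sendBeacon("/collect", JSON.stringify(collectSignals()));
```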
Our Methodology
We seed phishing-report submission forms with unique honeypot URLs and observe which bots fetch our site (a simplified sketch of this seeding step follows below). This allows us to identify each bot's URL sources (VirusTotal, CT logs, urlscan.io, etc.) and analyse their behaviour patterns.
Everything in our reports is based on real behaviour we have observed in the wild on our honeypot site. Nothing is speculative unless explicitly noted.
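As an illustration of why unique URLs make attribution possible, here is a minimal, hypothetical sketch in TypeScript (Node). The domain, channel names, and in-memory map are assumptions for the example, not our production setup.

```typescript
// Hypothetical illustration of per-channel honeypot URL seeding.
import { randomUUID } from "node:crypto";

const HONEYPOT_ORIGIN = "https://honeypot.example.com"; // placeholder domain

// Map from unique token to the channel it was seeded into.
const seededUrls = new Map<string, string>();

// Mint a URL that only one submission channel ever sees.
function mintHoneypotUrl(channel: string): string {
  const token = randomUUID();
  seededUrls.set(token, channel);
  return `${HONEYPOT_ORIGIN}/${token}`;
}

// When the honeypot later logs a request for /<token>, the token alone
// tells us which channel the fetching bot obtained the URL from.
function attributeFetch(path: string): string | undefined {
  const token = path.replace(/^\//, "");
  return seededUrls.get(token);
}

// Example: seed two different reporting channels with distinct URLs.
const urlForVirusTotal = mintHoneypotUrl("virustotal");
const urlForUrlscan = mintHoneypotUrl("urlscan.io");

// A request for the VirusTotal-only path attributes the visitor to that source.
console.log(attributeFetch(new URL(urlForVirusTotal).pathname)); // "virustotal"
console.log(attributeFetch(new URL(urlForUrlscan).pathname));    // "urlscan.io"
```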
Team
BotForensics is developed by James Stanley and Martin Falkus.