Comparing malware-blocking DNS providers using URLhaus and PhishTank

Canadian Shield, Cloudflare, DNSFilter, NextDNS, OpenDNS, Quad9

  1. Malware-blocking test result
  2. Phishing-blocking test result
  3. Discussion
  4. Google Safe Browsing

Quad9, a DNS provider that blocks malicious domains by default, recently announced (in this post) that it has >90% effectiveness in filtering malware websites, according to an independent test. Reading through the forum thread, a security researcher from DNSFilter (one of the DNS providers tested) raised a concern about the quality of the dataset: the DShield Suspicious Domain List was sourced from outdated lists. I noticed the lists he mentioned are different from the ones I saw on the DShield website. The more recent version (which I saw) cites URLhaus and PhishTank, whereas the previous version cited Malware Domain List, Malwaredomains blocklist and others. The researcher's concern appears justified: another DNS-filtering test that also used the previous version of the DShield list noted that only 137 out of 2,288 (~6%) of the domains were live during the test. This is exactly why DShield switched out its sources.

Using outdated datasets casts doubt on the accuracy of that DNS-filtering test. In light of this discovery, I decided to run my own test using the URLhaus and PhishTank datasets. Instead of using the raw datasets, I sourced domain lists from the blocklists urlhaus-filter and phishing-filter (which I maintain). Specifically, I used urlhaus-filter-hosts-online.txt and phishing-filter-hosts, which already filter out IP addresses and popular domains. I only needed to remove the comments and the "" prefix:

# Remove the header comments, then strip the "" prefix
cat urlhaus-filter-hosts-online.txt | \
  sed "/^#/d" | \
  sed "s/^0\.0\.0\.0 //g" > urlhaus.txt

The files were generated on 10 July 2020 00:05 UTC (± 5 minutes) using URLhaus and PhishTank datasets downloaded around that time. The test is conducted using a script modified from other tests ([1], [2]). I ran the test on 10 July 2020 07:00 UTC (estimated). I tested the following DNS providers:

  • Canadian Shield
  • Cloudflare
  • DNSFilter
    • Configured to block the Botnet, Cryptomining, Malware, New Domains and Phishing & Deception categories.
  • NextDNS
    • Configured to block Newly Registered Domains, in addition to the default security filtering.
  • OpenDNS
    • Default security filtering includes Malware/Botnet and Phishing Protection.
  • Quad9

I used Google DNS to determine the liveness of domains; domains that did not return an IP address are excluded from the results.

Malware-blocking test result §

[Table: percentage of malware domains blocked by each DNS provider tested, on 10 July 2020 and 13 July 2020; full results are in the CSV and spreadsheet.]
(Warning: Do not visit any of the links in the CSV and spreadsheet)

Phishing-blocking test result §

[Table: percentage of phishing domains blocked by each DNS provider tested, on 10 July 2020 and 13 July 2020; full results are in the CSV and spreadsheet.]

(Warning: Do not visit any of the links in the CSV and spreadsheet)

Discussion §

The results skew towards DNS providers, like NextDNS, that utilise URLhaus and PhishTank. This is what happens when there are only two sample sources. Quad9 noted that the independent test skewed towards it because its network providers also utilise the same data sources (i.e. the previous version of the DShield list), and admitted that “this type of testing is tricky to do”. What makes it tricky is not just the limited samples, but also the fact that even if a DNS provider uses the same dataset(s), it may decide not to use all of the domains in a dataset.

PhishTank is a notable example of this kind of discrepancy. Despite being operated by OpenDNS, the DNS provider only blocked half of the phishing domains. OpenDNS explains that PhishTank is just one source and it also looks at other sources to determine whether a website is really a phish. This means it doesn’t fully trust any single source, which also explains why none of the providers tested achieved a 100% score.

URLhaus and PhishTank alone cannot accurately determine the effectiveness of malware-blocking DNS providers. I believe there are many malicious links out there that are not covered by those datasets. While I do think they are high quality and every DNS provider should consider utilising them, they are not representative samples. So, take any DNS-filtering test with limited samples with a grain of salt.

(Edit: 14 Jul 2020) I was curious whether the result was due to the samples being too fresh (7 hours); DNS providers may not update their sources in real time and perhaps only update once or twice a day. I ran the tests again on 13 July 2020 using the same samples (which I downloaded on 10 July 2020), a 3-day delay. The results show no significant change, though.

Google Safe Browsing §

(Edit: 3 Sep 2020) Recently, I was curious how well Safe Browsing blocks the domains/IP addresses listed in urlhaus-filter and phishing-filter. I used the datasets generated on 3 Sep 2020 00:06:23 UTC and ran the test (see below) at roughly 05:00 UTC. I used the “safe-browse-url-lookup” library to simplify the test; it queries all types of threats by default.

Category    Domains marked as unsafe    Percentage
Malware     102 / 3259                  3.13 %
Phishing    2533 / 6832                 37.08 %

While the result doesn’t look encouraging, I believe the Safe Browsing API is more suitable for looking up full URLs, as opposed to the domains and IP addresses listed in the blocklists. My approach to creating those blocklists is based on the assumption that if a URL is hosting malware, probably due to a compromised web server, then there may be other malicious links on that domain. While Google’s approach can minimise false positives, I believe my paranoid approach to creating those blocklists can reduce false negatives.

const { readFile, writeFile } = require('fs').promises
const { checkMulti: lookup } = require('safe-browse-url-lookup')({ apiKey: '<your-api-key>' })
const { delay } = require('bluebird')

const fn = async () => {
  try {
    const input = await readFile('urlhaus.txt')
    const threats = input.toString('utf-8')
      // remove comment
      .replace(/^#.+/gm, '')
      // split into domains, dropping empty lines
      .split('\n')
      .filter(str => str.length)
      // 'https://' will yield the same result
      .map(str => `http://${str}`)

    // Max 500 URLs per query
    const multiple = Math.ceil(threats.length / 500)
    let result = {}
    for (let i = 0; i < multiple; i++) {
      console.log('Run: ' + String(i + 1))
      await delay(5000)
      const min = i * 500
      const max = (i + 1) * 500
      const urlMap = await lookup(threats.slice(min, max))
      result = { ...result, ...urlMap }
    }

    await writeFile('result-phishing.json', JSON.stringify(result, null, 2))

    const positive = []
    const negative = []
    for (const ele in result) {
      if (result[ele] === true) positive.push(ele)
      else negative.push(ele)
    }
    console.log(`Unsafe: ${positive.length} / ${positive.length + negative.length}`)
  } catch (err) {
    throw new Error(err)
  }
}

fn()
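The domain-versus-URL distinction can be made concrete with a small hypothetical example (the URL below is made up for illustration): Safe Browsing is typically queried with a full URL, while the blocklists keep only the host part and therefore cover everything served from that machine.

```shell
# Hypothetical reported URL; the domain is invented for illustration.
url="http://compromised.example/uploads/payload.exe"
# The blocklists record only the host, blocking the whole (possibly
# compromised) server rather than the single reported path.
host=$(printf '%s\n' "$url" | sed -E 's|^[a-z]+://||; s|/.*||')
echo "$host"   # compromised.example
```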