fabriziosalmi / blacklists

Hourly updated domains blacklist 🚫

Home Page: https://github.com/fabriziosalmi/blacklists/releases/download/latest/blacklist.txt

License: GNU General Public License v3.0

Shell 36.51% Python 21.46% HTML 12.98% JavaScript 26.43% Dockerfile 2.62%
blacklisting blacklists dns-blacklist dns-blocking dns-blocklists domain-blocklist adguard-blocklist dns-filtering pihole-blocklists web-application-firewall

blacklists's Introduction

Domains Blacklist

Hourly updated domains blacklist 🚫

"Building a service on top of a regularly updated blacklist can provide immense value, not only for individual internet users but also for businesses and cybersecurity professionals. Whatever service you choose to build, ensure it's user-friendly, reliable, and secure."

✅ Downloads

https://github.com/fabriziosalmi/blacklists/releases/download/latest/blacklist.txt

📖 DNS filtering for dummies


Features

  • Hourly Updates: Stay protected against emerging threats
  • Comprehensive Coverage: Aggregated from the most frequently updated blacklists (more info)
  • Broad Compatibility: Works across browsers, firewalls, proxies, and more
  • Robust Security: Protect against phishing, spam, scams, ads, trackers, bad websites and more
  • Whitelist Capability: Submit one or more domains for whitelisting
  • Local Mirror: Set up easily using the Docker image

๐Ÿ‘จโ€๐Ÿ’ป Contribute

  • Propose additions or removals to the blacklist
  • Enhance blacklist or whitelist processing
  • Dive into statistics and data analytics

๐Ÿ… Credits

This project owes its existence to numerous dedicated blacklist creators such as:

T145/BlackMirror - Fabrice Prigent (UT1 mirror) - 1hosts - PolishFiltersTeam - ShadowWhisperer - StevenBlack - bigdargon - developerdan - firebog - hagezi - malware-filter - phishfort - phishing.army - quidsup - DandelionSprout - RPiList - What-Zit-Tooya - azet12 - cert.pl - mitchellkrogza - o0.pages.dev - pgl.yoyo.org - lightswitch05 - frogeye.fr - fruxlabs - durablenapkin - digitalside.it - malwareworld.com

and many more.

For a full list, check the complete list of blacklist URLs.

Code improvements by xRuffKez.

๐Ÿ‘จโ€๐Ÿ’ป Fixing..

  • Wiki update
  • Improve implementation docs
  • Worst domains hunting

๐Ÿ‘จโ€๐Ÿ’ป Testing

  • Machine learning to predict bad domains and rank all domains
  • Firefox extension site checker
  • Resolving IP addresses to FQDNs (IP blacklists, CrowdSec, and more) to create custom lists

๐Ÿ—“๏ธ Roadmap

2024

  • Improve blacklist
  • Improve whitelist
  • Domain ranking service
  • Improve websites


Supported by donors

blacklists's People

Contributors

actions-user, dependabot[bot], fabriziosalmi, xruffkez


blacklists's Issues

generate v4 (python)

import os
import subprocess
import uuid
import requests
from pathlib import Path

def command_exists(cmd):
    try:
        subprocess.check_call(['which', cmd], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
        return True
    except (subprocess.CalledProcessError, FileNotFoundError):
        return False

def run_command(cmd, exit_on_fail=True):
    result = subprocess.run(cmd, capture_output=True)
    if result.returncode != 0 and exit_on_fail:
        print(f"Error executing: {' '.join(cmd)}")
        print(result.stderr.decode())
        exit(1)
    return result

def install_package(pkg):
    if PACKAGE_MANAGER == "apt-get":
        run_command(['sudo', 'apt-get', 'install', '-y', pkg])
    elif PACKAGE_MANAGER == "apk":
        run_command(['sudo', 'apk', 'add', '--no-cache', pkg])

def download_url(url, filename):
    response = requests.get(url)
    if response.status_code == 200:
        with open(filename, 'wb') as file:
            file.write(response.content)
    else:
        print(f"Failed to download: {url}")
        exit(1)

PACKAGE_MANAGER = ""

if command_exists("apt-get"):
    PACKAGE_MANAGER = "apt-get"
elif command_exists("apk"):
    PACKAGE_MANAGER = "apk"
else:
    print("Unsupported package manager. Exiting.")
    exit(1)

# Update and install prerequisites
if PACKAGE_MANAGER == "apt-get":
    run_command(['sudo', 'apt-get', 'update'])
    install_package('python3')
    # Link python3 to python for Ubuntu, if not already linked
    if not Path("/usr/bin/python").exists():
        run_command(['sudo', 'ln', '-s', '/usr/bin/python3', '/usr/bin/python'])
elif PACKAGE_MANAGER == "apk":
    run_command(['sudo', 'apk', 'update'])

run_command(['pip3', 'install', '--no-cache-dir', '--upgrade', 'pip', 'setuptools', 'tldextract', 'tqdm'])

for pkg in ['pv', 'ncftp']:
    install_package(pkg)

LISTS = "blacklists.fqdn.urls"
TMP_DIR = "/tmp/blacklist_processing"

Path(TMP_DIR).mkdir(parents=True, exist_ok=True)
os.chdir(TMP_DIR)

print("Download blacklists")
with open(LISTS, 'r') as file:
    for url in file:
        url = url.strip()
        filename = f"{TMP_DIR}/{uuid.uuid4().hex}.fqdn.list"
        download_url(url, filename)

# Aggregating files
print("Aggregate blacklists")
with open("aggregated.fqdn.list", 'w') as outfile:
    for file in Path(TMP_DIR).glob("*.fqdn.list"):
        with open(file, 'r') as infile:
            outfile.write(infile.read())

# Sanitization (assuming sanitize.py takes input.txt and produces output.txt)
print("Sanitize blacklists")
os.rename("aggregated.fqdn.list", "input.txt")
run_command(['python', 'sanitize.py'])
os.rename("output.txt", "all.fqdn.blacklist")

# Remove entries present in whitelist.txt (assuming whitelist.py takes blacklist.txt and produces filtered_blacklist.txt)
os.rename("all.fqdn.blacklist", "blacklist.txt")
run_command(['python', 'whitelist.py'])
os.rename("filtered_blacklist.txt", "all.fqdn.blacklist")

# Tar the result
run_command(['tar', '-czf', 'all.fqdn.blacklist.tar.gz', 'all.fqdn.blacklist'])

print(f"Total domains: {sum(1 for line in open('all.fqdn.blacklist'))}")

os.chdir("..")
subprocess.run(['rm', '-rf', TMP_DIR])

print("Script completed successfully.")

whitelist.py

import os
from pathlib import Path
import argparse
from tqdm import tqdm

def read_fqdn_from_file(file_path: Path) -> set:
    """Read the file and return a set of FQDNs."""
    fqdns = set()
    with file_path.open('r') as file:
        for line in tqdm(file, desc=f"Reading {file_path}", unit="lines", leave=False):
            fqdn = line.strip()
            if fqdn:
                fqdns.add(fqdn)
    return fqdns

def write_fqdn_to_file(file_path: Path, content: set) -> None:
    """Write a set of FQDNs to the specified file."""
    with file_path.open('w') as file:
        file.write('\n'.join(content))

def ensure_file_exists(file_path: Path) -> None:
    """Check if a file exists or exit the program."""
    if not file_path.is_file():
        print(f"File '{file_path}' not found.")
        exit(1)

def main(blacklist_path: Path, whitelist_path: Path, output_path: Path) -> None:
    """Main function to process blacklist and whitelist files."""
    
    # Check if files exist
    ensure_file_exists(blacklist_path)
    ensure_file_exists(whitelist_path)

    blacklist_fqdns = read_fqdn_from_file(blacklist_path)
    whitelist_fqdns = read_fqdn_from_file(whitelist_path)

    # Filter out whitelisted FQDNs from the blacklist
    filtered_fqdns = blacklist_fqdns - whitelist_fqdns

    write_fqdn_to_file(output_path, filtered_fqdns)

    print(f"{len(blacklist_fqdns)} FQDNs in the blacklist.")
    print(f"{len(whitelist_fqdns)} FQDNs in the whitelist.")
    print(f"{len(filtered_fqdns)} FQDNs after filtering.")

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description="Process blacklist and whitelist files.")
    parser.add_argument('--blacklist', default='blacklist.txt', type=Path, help='Path to blacklist file')
    parser.add_argument('--whitelist', default='whitelist.txt', type=Path, help='Path to whitelist file')
    parser.add_argument('--output', default='filtered_blacklist.txt', type=Path, help='Path to output file')
    args = parser.parse_args()
    
    main(args.blacklist, args.whitelist, args.output)

CrowdSec

Tested ip2fqdn reverse lookup (a bash script using dig) against the output of cscli decisions list -a (20K IP addresses):

  • DNS server(s) used: 1
  • Duration: ~2 hours
  • Reversed entries: not yet fully counted; partial results look good

Expected duration using 10 DNS servers: ~12 minutes
Expected max duration using 100 DNS servers: ~2 minutes
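A sketch of what that fan-out could look like in Python (dnspython, the server list, and the worker count are assumptions; the tested version was a bash script using dig). Each PTR query is sent to a randomly chosen upstream resolver, so throughput scales roughly with the number of DNS servers:

import random
from concurrent.futures import ThreadPoolExecutor

import dns.resolver
import dns.reversename

# Assumption: a pool of upstream DNS servers to spread PTR queries across
DNS_SERVERS = ["1.1.1.1", "8.8.8.8", "9.9.9.9"]

def make_resolver(server):
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    resolver.lifetime = 3  # seconds before giving up on one query
    return resolver

RESOLVERS = [make_resolver(s) for s in DNS_SERVERS]

def reverse_lookup(ip):
    """Reverse-resolve one IP via a randomly chosen resolver."""
    resolver = random.choice(RESOLVERS)
    try:
        answer = resolver.resolve(dns.reversename.from_address(ip), "PTR")
        return ip, str(answer[0]).rstrip(".")
    except Exception:
        return ip, None

def reverse_all(ips, workers=50):
    """Run PTR lookups in parallel; return only the IPs that resolved."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return [(ip, fqdn) for ip, fqdn in pool.map(reverse_lookup, ips) if fqdn]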

Firefox extension

Creating a Firefox extension to check if a domain is blacklisted requires a different approach, but it's definitely feasible. Below is a basic outline of how you can achieve this:

1. Extension Setup

  1. Manifest file (manifest.json):

The manifest file contains metadata about the extension, like its name and the permissions it requires. Note that Manifest V3 uses the "action" key in place of Manifest V2's "browser_action", and Firefox declares background scripts via "scripts".

{
  "manifest_version": 3,
  "name": "Blacklist Checker",
  "version": "1.0",
  "description": "Checks if a domain is blacklisted.",
  "permissions": ["activeTab", "tabs", "storage"],
  "background": {
    "scripts": ["background.js"]
  },
  "action": {
    "default_popup": "popup.html",
    "default_icon": {
      "48": "icon.png"
    }
  }
}
  2. Background Script (background.js):

This script runs in the background; it receives the popup's request and queries your server to check whether the current domain is blacklisted.

// With a default_popup configured, action.onClicked never fires, so the
// background script instead answers the "checkDomain" message sent by popup.js.
browser.runtime.onMessage.addListener(async (message) => {
    if (message.action !== "checkDomain") return;
    const [tab] = await browser.tabs.query({ active: true, currentWindow: true });
    const currentDomain = new URL(tab.url).hostname;
    // Send the domain to your server to check if it's blacklisted
    const response = await fetch('http://your_server_endpoint/', {
        method: 'POST',
        body: JSON.stringify({ domain: currentDomain }),
        headers: {
            'Content-Type': 'application/json'
        }
    });
    const data = await response.json();
    // Returning a value from the async listener resolves the popup's sendMessage promise
    return { domain: currentDomain, blacklisted: data.blacklisted };
});
  3. Popup (popup.html):

This is the UI that shows when you click on the extension icon. For simplicity, we'll just display a button the user can press to check the current domain.

<!DOCTYPE html>
<html>
<head>
    <title>Blacklist Checker</title>
</head>
<body>
    <button id="checkButton">Check if Blacklisted</button>

    <script src="popup.js"></script>
</body>
</html>
  4. Popup Script (popup.js):

This script runs when the popup is opened; it asks the background script to check the current domain and displays the verdict.

document.getElementById("checkButton").addEventListener("click", async () => {
    const result = await browser.runtime.sendMessage({ action: "checkDomain" });
    document.body.append(`${result.domain} is ${result.blacklisted ? "" : "not "}blacklisted.`);
});

2. Backend Server:

Set up a server (e.g., Flask, Express.js) to receive requests from the extension and check if a domain is blacklisted.

For brevity, I won't include the backend code here, but it would essentially accept a domain as input, query your blacklist database, and then respond with whether or not the domain is blacklisted.
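For a rough idea, here is a minimal Flask sketch of such an endpoint. The file name all.fqdn.blacklist and the in-memory set are assumptions; a real deployment would more likely query a database:

from flask import Flask, request, jsonify

app = Flask(__name__)

# Assumption: the aggregated blacklist is a local one-FQDN-per-line file
with open("all.fqdn.blacklist") as f:
    BLACKLIST = {line.strip() for line in f if line.strip()}

@app.route("/", methods=["POST"])
def check_domain():
    domain = (request.get_json() or {}).get("domain", "").lower().strip()
    return jsonify({"domain": domain, "blacklisted": domain in BLACKLIST})

if __name__ == "__main__":
    app.run(port=8000)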

3. Package and Test the Extension:

  1. Place all your extension files (manifest.json, background.js, popup.html, popup.js, and any icons) in a folder.
  2. Load the folder into Firefox by navigating to about:debugging, clicking "This Firefox", then "Load Temporary Add-on", and select any file in your extension's directory.
  3. You should now see your extension's icon in the Firefox toolbar. Click on it and test the functionality.

4. Distribution:

Once you're satisfied with the extension's functionality, you can submit it to the Firefox Add-ons website (AMO) for others to use.

Keep in mind that you'll have to ensure the security and privacy of your users, especially if you're sending the domains they visit to a remote server for checking.

Sanitize it!

To clean up the file as per your requirements, you can use the following bash script. This script will:

  1. Remove comments (lines starting with #).
  2. Remove any extra content after the FQDN in each line.
  3. Remove any lines that don't start with a valid FQDN.
#!/bin/bash

# Function to remove comments, extra content, and invalid lines
clean_fqdns() {
    local input_file="$1"

    # Remove comments (lines starting with #)
    sed -i '/^\s*#/d' "$input_file"

    # Remove extra content after FQDN in each line
    sed -i 's/\([^[:space:]]*\).*/\1/' "$input_file"

    # Remove lines not starting with a valid FQDN
    grep -E '^[a-zA-Z0-9.-]+$' "$input_file" > "${input_file}.cleaned"
}

# Test the function with your file "input_file.txt"
clean_fqdns "input_file.txt"

echo "File cleaned and saved as 'input_file.txt.cleaned'"

Save the script in a file (e.g., clean_fqdns.sh) and make it executable with chmod +x clean_fqdns.sh. Place the script in the same directory as your input file (e.g., input_file.txt), and then run the script with ./clean_fqdns.sh.

The script will clean up the file and create a new file called input_file.txt.cleaned, which will contain only one FQDN per row without any comments or extra content.

IP to FQDN API

Certainly! A robust solution should:

  1. Provide meaningful error messages.
  2. Handle multiple concurrent requests without straining resources.
  3. Display progress if requested.
  4. Use a production-ready server like gunicorn.
  5. Work behind a reverse proxy (for security and scalability).

For the purpose of this demonstration, I'll focus on points 1-3. Implementing a solution behind a reverse proxy using gunicorn can be done but is out of scope for this concise guide. If you'd like that elaborated, let me know after this step.

1. Enhanced Flask API with Progression and Error Handling:

from flask import Flask, request, jsonify
from flask_restful import Resource, Api
import subprocess

app = Flask(__name__)
api = Api(app)

DNS_SERVERS = []

# Load DNS servers from file
with open('dns_servers.txt', 'r') as f:
    DNS_SERVERS = [line.strip() for line in f if line.strip()]

class ResolveIP(Resource):
    def get(self, ip_address):
        progress = {}
        for index, dns_server in enumerate(DNS_SERVERS, 1):
            try:
                result = subprocess.check_output(['dig', '+short', '-x', ip_address, f'@{dns_server}'], stderr=subprocess.STDOUT).decode('utf-8').strip()
                progress[dns_server] = result if result else "No record"
                
                if result:
                    # Return as soon as we have a valid FQDN
                    return jsonify({"ip": ip_address, "fqdn": result, "progress": progress})

            except subprocess.CalledProcessError:
                progress[dns_server] = "Error during lookup"
        
        return jsonify({"ip": ip_address, "error": "Unable to resolve", "progress": progress}), 404

api.add_resource(ResolveIP, '/resolve/<string:ip_address>')

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=5000)

Enhancements:

  • Progress: The progress of the resolution is now maintained. For each DNS server queried, there is a status (either the resolved FQDN, "No record", or an error message). The progress is returned in the response.

  • Error Handling: More detailed error messages provide context on where the failure occurred, and each DNS server's status is returned.

2. Deploying with gunicorn:

  1. Install gunicorn:

    pip install gunicorn
  2. Run the app using gunicorn:

    gunicorn your_api_file_name:app -w 4 -b 0.0.0.0:5000

    The -w 4 argument means you are using 4 worker processes. Depending on your server's resources, you may increase or decrease this number.

For complete robustness, you'd also put this setup behind a reverse proxy like nginx to handle SSL, throttling, and other web server features. If you want to learn how to set it up with nginx, let me know!

CrowdSec

Got it. You want to resolve the IPs to their corresponding Fully Qualified Domain Names (FQDNs), and then save those to a text file, one FQDN per line.

This process involves reverse DNS lookups. The Python socket library can be used for this purpose. Here's how you can do it:

  1. Fetching Blocked IPs

    As before, fetch the list of banned IPs using CrowdSec's cscli:

    cscli bans list -o json > bans.json
  2. Convert IPs to FQDNs and Save to a File

    Here's a Python script that reads the bans.json file, performs a reverse DNS lookup for each IP, and saves the result to resolved_domains.txt:

    import json
    import socket
    
    # Load the banned IPs
    with open("bans.json", "r") as f:
        data = json.load(f)
    
    # Open the output file
    with open("resolved_domains.txt", "w") as out:
        for item in data:
            ip = item.get("Ip", None)
            if ip:
                try:
                    # Reverse DNS lookup
                    fqdn = socket.gethostbyaddr(ip)[0]
                    out.write(f"{fqdn}\n")
                except socket.herror:
                    # If no resolution is available, just continue to the next IP
                    continue
  3. Run the Script

    To execute the script, simply run:

    python3 convert_to_fqdn.py

Keep in mind that this process might be slow if there are many IPs to resolve since each resolution is a network request. If you have a very long list, consider using asynchronous methods or parallelizing the requests for faster processing. Additionally, not all IPs will have an associated FQDN.
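For instance, the loop above can be parallelized with a thread pool, since each lookup is network-bound rather than CPU-bound (a sketch; the 32-worker count is arbitrary):

import json
import socket
from concurrent.futures import ThreadPoolExecutor

def resolve(ip):
    """Reverse-resolve one IP; return None when no PTR record exists."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return None

with open("bans.json") as f:
    ips = [item["Ip"] for item in json.load(f) if item.get("Ip")]

# Threads overlap the network waits of many lookups at once
with ThreadPoolExecutor(max_workers=32) as pool:
    fqdns = [fqdn for fqdn in pool.map(resolve, ips) if fqdn]

with open("resolved_domains.txt", "w") as out:
    out.write("\n".join(fqdns) + "\n")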

Squid implementations

If you're using Squid as an outgoing proxy and want to block direct IP requests (both HTTP and HTTPS) while only allowing client requests with host headers, you can achieve this by adding specific access control lists (ACLs) and http_access rules in your Squid configuration.

Here are the steps to configure Squid to achieve this:

  1. Edit the Squid Configuration File:

Open the Squid configuration file (squid.conf) in a text editor:

sudo nano /etc/squid/squid.conf
  2. Define ACLs for Requests with Host Headers:

Define an ACL for requests that have host headers:

acl with_host_header dstdomain . # Matches requests with a domain name
acl ip_request dstdom_regex ^\d+\.\d+\.\d+\.\d+$ # Matches requests with IP addresses
  3. Block Direct IP Requests:

Now, allow requests with host headers while denying those with direct IP addresses:

http_access deny ip_request
http_access allow with_host_header
  4. Other Required Access Controls:

You'll probably have other http_access lines in your configuration for various rules. Make sure that the order of these rules does not conflict with the rules you just added. In Squid, the first matching rule wins, so more specific rules should come before more general ones.

  5. Save and Restart Squid:

After making these changes, save the configuration file and restart Squid to apply the changes:

sudo systemctl restart squid

With these changes, Squid will deny requests made directly to IP addresses and will only allow requests with host headers. Ensure you test the configuration after applying the changes to make sure it works as intended and to identify if there are any other conflicting rules.

Identify threats domains

If you're looking to identify threat domains using just Python and a server, here's a simple yet effective approach to get started:

  1. Data Collection:

    • Store each new version of the blacklist in a timestamped format. This allows you to analyze trends over time. You can use SQLite for lightweight data storage.
  2. Frequent Offenders:

    • Analyze which domains appear on the list most frequently over a given time. Repeated appearances might indicate particularly malicious actors.
  3. Domain Analysis:

    • Use libraries like tldextract to break down the domain into parts (subdomain, domain, TLD). Often, malicious actors use similar domain names but with different TLDs or slight variations.
  4. Age of Domain:

    • Newer domains are more often used for phishing and other malicious activities, as they're disposable. You can use the whois library in Python to fetch the registration date of domains.
  5. External Threat Intelligence:

    • Query popular threat intelligence platforms or databases via their APIs to get more information on the reputation of a domain. Many have free tiers or public lists. Examples include VirusTotal, URLhaus, etc.
  6. Lexical Analysis:

    • Malicious domains often have gibberish or algorithmically generated names. By analyzing the structure and naming pattern, you can potentially flag suspicious domains. This can be done using simple string metrics or more advanced methods like machine learning (a small sketch follows this list).
  7. Domain Resolution & IP Analysis:

    • Use Python's socket library to resolve the domain to an IP address. Track if multiple malicious domains resolve to the same IP or IP range, indicating a potentially bad actor or hosting provider.
  8. Active Analysis (Caution: Legal & Ethical considerations):

    • Fetch the content of the domain using libraries like requests. Analyzing the content can give insights into phishing sites, especially if they're replicating another well-known site. However, actively probing or scraping a website can be legally and ethically problematic, so always ensure you have the right to do so.
  9. Monitoring & Alerts:

    • Regularly monitor the data and set up alert mechanisms (using, e.g., Python's smtplib for sending emails) for when certain criteria are met, like a domain showing up frequently.
  10. Visualization:

    • Use libraries like matplotlib or seaborn for visualization. If you're storing your data in SQLite, you can easily fetch and visualize patterns.
  11. Regular Updates & Maintenance:

    • Schedule your Python scripts using cron jobs (on Linux) or Task Scheduler (on Windows) to periodically check for updates and conduct analysis.
  12. Backups & Security:

    • Regularly back up your data. Ensure that your server is secure, with up-to-date software and proper firewall settings, especially if you're making it accessible over the internet.

This is a basic setup. As you get more data and understand the patterns better, you can refine your threat intelligence system to be more sophisticated, possibly incorporating machine learning or more advanced heuristics.
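As a concrete example of the lexical analysis in point 6, here is a minimal sketch that flags high-entropy (gibberish-looking) registrable labels. The 3.5-bit threshold and 8-character minimum are illustrative values, not tuned ones:

import math
from collections import Counter

import tldextract

def shannon_entropy(label: str) -> float:
    """Bits per character of a domain label; random strings score higher."""
    counts = Counter(label)
    total = len(label)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_generated(domain: str, threshold: float = 3.5) -> bool:
    """Flag domains whose registrable label has unusually high entropy."""
    label = tldextract.extract(domain).domain
    return len(label) >= 8 and shannon_entropy(label) > threshold

print(looks_generated("google.com"))        # False
print(looks_generated("xkq7vz2hp9wl.com"))  # True (DGA-style label)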

sanitize.py

import re
import tldextract
from tqdm import tqdm

# Pre-compiled regex pattern for FQDN validation
fqdn_pattern = re.compile('^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$')

def is_valid_fqdn(s):
    """Check if the string is a valid FQDN."""
    if '*' in s:
        return False
    extracted = tldextract.extract(s)
    if not all([extracted.domain, extracted.suffix]):
        return False
    return all(fqdn_pattern.match(x) for x in s.split('.'))

def remove_prefix(line, prefix):
    """General function to remove specified prefix from a line."""
    return line[len(prefix):] if line.startswith(prefix) else line

def sanitize_line(line, rules):
    """Apply all sanitization rules to a line."""
    line = line.strip()
    for rule in rules:
        line = rule(line)
        if line is None:
            return None
    return line if is_valid_fqdn(line) else None

def process_large_file(input_file_path, output_file_path):
    """Process large files line by line."""

    sanitization_rules = [
        lambda line: None if line.startswith("#") else line,
        lambda line: remove_prefix(line, "127.0.0.1 "),
        lambda line: remove_prefix(line, "0.0.0.0 "),
        lambda line: remove_prefix(line, "||"),
        lambda line: remove_prefix(line, "http://"),
        lambda line: remove_prefix(line, "https://")
    ]

    unique_domains = set()
    with open(input_file_path, 'r') as count_file:
        total_lines = sum(1 for _ in count_file)

    with open(input_file_path, 'r') as infile:
        for line in tqdm(infile, total=total_lines, desc="Processing"):
            sanitized_line = sanitize_line(line, sanitization_rules)
            if sanitized_line is not None:
                unique_domains.add(sanitized_line)

    # Sort the unique domain names in alphabetical order
    sorted_unique_domains = sorted(unique_domains)

    # Write the sorted unique domain names to the output file
    with open(output_file_path, 'w') as outfile:
        for domain in tqdm(sorted_unique_domains, desc="Writing"):
            outfile.write(domain + '\n')

# Use this function to process your large file
process_large_file('input.txt', 'output.txt')

Telegram Bot

Creating a Telegram bot to check if a domain is blacklisted is a practical idea. Here's a step-by-step guide to creating such a bot using Python:

1. Set Up Your Telegram Bot:

  1. Start a chat with the BotFather on Telegram.
  2. Use the /newbot command to create a new bot.
  3. Follow the prompts to name your bot and get your HTTP API token.
  4. Make a note of the token. You'll need it to interact with the Telegram Bot API.

2. Develop the Bot Using Python:

You can use the python-telegram-bot library, which provides a convenient wrapper for the Telegram Bot API.

  1. Install the necessary packages:

    pip install python-telegram-bot
  2. Bot Script:

Here's a simple script for a bot that checks if a domain is blacklisted:

from telegram import Bot, Update
from telegram.ext import Updater, CommandHandler, MessageHandler, Filters, CallbackContext
import sqlite3  # Assuming you're using SQLite to store your blacklisted domains

TOKEN = "YOUR_TELEGRAM_TOKEN"
DB_PATH = "path_to_your_blacklist_database.db"

def start(update: Update, context: CallbackContext) -> None:
    update.message.reply_text("Send me a domain, and I'll check if it's blacklisted!")

def check_domain(update: Update, context: CallbackContext) -> None:
    domain = update.message.text.lower().strip()

    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM blacklist WHERE domain=?", (domain,))
    
    if cursor.fetchone():
        update.message.reply_text(f"{domain} is blacklisted!")
    else:
        update.message.reply_text(f"{domain} is not blacklisted.")
    conn.close()

def main():
    updater = Updater(token=TOKEN)

    # For handling commands
    updater.dispatcher.add_handler(CommandHandler('start', start))
    
    # For handling messages
    updater.dispatcher.add_handler(MessageHandler(Filters.text & ~Filters.command, check_domain))

    updater.start_polling()
    updater.idle()

if __name__ == '__main__':
    main()

Replace YOUR_TELEGRAM_TOKEN with the token you received from BotFather. Also, adjust the database path and query as needed, based on how you store your blacklist.

3. Run Your Bot:

Execute the Python script to run your bot. Engage with the bot on Telegram, and it should be able to check if a domain is blacklisted.

Remember, always ensure that the bot token remains confidential. Don't expose it or push it to public repositories.

Services

If you have an updated blacklist of domains, there are numerous services you can build on top of it to cater to different needs. Here are some potential ideas:

  1. Real-time Threat Intelligence API:

    • Offer an API for businesses and individuals to query the reputation of a domain in real-time, helping in decision-making processes for firewall rules, web access controls, or more.
  2. Alerting and Notification Service:

    • Users can register their domain or IP, and you can notify them if they ever appear on a blacklist, helping website administrators or business owners maintain their reputation.
  3. Browser Extension:

    • Develop a browser extension that warns users when they're about to visit a blacklisted domain.
  4. Integration with Network Security Appliances:

    • Offer integration modules or plugins for popular firewall or security appliances, which can ingest your blacklist to block malicious domains.
  5. Historical Analysis and Reporting:

    • Provide reports on the historical behavior of domains, showing when they were added or removed from blacklists, frequency, and more.
  6. Threat Intelligence Dashboard:

    • Offer a web dashboard where users can view recent additions to the blacklist, trends, top malicious domains, and other insights.
  7. Domain Analysis Toolkit:

    • A toolkit where users can get detailed reports about a domain, like its past malicious activities, associated IPs, geolocations, and more.
  8. Whitelisting Service:

    • Businesses and domain owners can request reviews to be whitelisted if they believe they were added to the blacklist erroneously. You can offer manual reviews or automated checks to validate such claims.
  9. Email Security Filter:

    • Offer an email filtering service that checks email content for links pointing to blacklisted domains, marking them as spam or malicious.
  10. IoT Security Service:

    • With IoT devices often being a weak security link, you can develop a service for these devices to query and block any communication with blacklisted domains.
  11. Dynamic DNS Feeds:

    • Provide feeds that can be ingested by SIEM (Security Information and Event Management) systems, threat intelligence platforms, or security operations centers.
  12. Affiliate Network Filtering:

    • Ad and affiliate networks can use your service to filter out affiliates that use blacklisted domains, ensuring that their network remains clean.
  13. SSL/TLS Certificate Analysis:

    • Monitor SSL/TLS certificates of blacklisted domains and provide data on invalid, self-signed, or suspicious certificates.
  14. E-commerce Plugin:

    • Develop plugins for popular e-commerce platforms that check for blacklisted domains when users are entering website URLs (e.g., for dropshipping or affiliate linking) to ensure no links point to malicious sites.
  15. Educational & Research Portal:

    • Offer data to researchers, students, or cybersecurity professionals who want to study domain blacklists, trends, and patterns.

Building a service on top of a regularly updated blacklist can provide immense value, not only for individual internet users but also for businesses and cybersecurity professionals. Whatever service you choose to build, ensure it's user-friendly, reliable, and secure.

Search API

To improve the performance of your HTTP API for checking the presence of FQDNs in the list of domains and subdomains, you can implement several optimizations:

  1. Data Storage: Consider using a more efficient data structure to store your domain and subdomain list. A Trie data structure can be very efficient for string searches and would reduce the search time complexity significantly.

  2. Caching: Implement a caching mechanism to store frequently queried FQDNs and their results. This can reduce the need to repeatedly search the data structure for the same FQDNs.

  3. Bloom Filters: You could use a Bloom filter to pre-filter FQDNs that are definitely not in your list. This can help in quickly rejecting non-existent FQDNs and save computational effort.

  4. Distributed System: If the list of FQDNs is too large to fit in memory of a single server, you could consider distributing the data across multiple servers or using a distributed key-value store like Redis.

  5. Indexing: If using a database, create appropriate indexes to speed up lookups. For example, if you're using a relational database, indexing the FQDN column would help.

  6. Multithreading or Asynchronous Handling: Handle incoming requests in parallel using multithreading or asynchronous processing. This can improve response time by allowing the server to handle multiple requests concurrently.

  7. Load Balancing: If your application sees a significant amount of traffic, consider using load balancing techniques to distribute incoming requests across multiple server instances.

  8. Use Compiled Languages: If Python's performance becomes a bottleneck, consider using a compiled language like Go or Rust for the core search algorithm.

  9. Profiling and Optimization: Regularly profile your code to identify performance bottlenecks. Use tools like cProfile for Python to find areas where optimization is needed.

  10. Horizontal Scaling: If your application experiences high traffic, you can scale horizontally by adding more servers. This can help distribute the load and improve response times.

Remember that each optimization should be carefully tested to ensure that it's providing the desired performance improvement without introducing new issues. Performance tuning is an ongoing process, so monitor your application's performance and make adjustments as needed.
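To make points 1-3 concrete, here is a minimal sketch of a layered lookup: a Bloom filter built on stdlib hashing cheaply rejects most non-members, an exact set confirms hits, and an LRU cache absorbs repeated queries. The bit-array size and hash count are illustrative:

import hashlib
from functools import lru_cache

class BloomFilter:
    """Minimal Bloom filter: no false negatives, tunable false-positive rate."""

    def __init__(self, size_bits: int = 10_000_000, num_hashes: int = 5):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, item: str):
        # Derive several bit positions from salted SHA-256 digests
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, item: str) -> bool:
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(item))

with open("all.fqdn.blacklist") as f:
    DOMAINS = {line.strip() for line in f if line.strip()}

bloom = BloomFilter()
for domain in DOMAINS:
    bloom.add(domain)

@lru_cache(maxsize=100_000)
def is_blacklisted(fqdn: str) -> bool:
    # The Bloom filter rejects most non-members cheaply; the set confirms hits
    return bloom.might_contain(fqdn) and fqdn in DOMAINS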

http cache tips

Certainly! After reviewing the linked README, I see you want to efficiently check if a remote file has changed to decide whether to fetch it or not. Here's how you can improve the existing approach:

Using ETag and Last-Modified Headers

Many web servers use ETag and Last-Modified headers to signal when content has changed. When you fetch a resource, the server often sends these headers in its response. By saving these headers' values and sending them in subsequent requests, the server can tell you whether the content has changed.

Here's an example of how you might integrate this approach into your script:

  1. Save the ETag and Last-Modified headers (if they exist) after fetching a file.
  2. On the next run, send a request with these headers' values to see if the file has changed.

Here's a sample Python script that demonstrates this:

import requests
import os

# File URL
URL = "https://get.domainsblacklists.com/blacklist.txt"

# Headers file
HEADERS_FILE = "headers.txt"

def get_saved_headers():
    if os.path.exists(HEADERS_FILE):
        with open(HEADERS_FILE, 'r') as f:
            headers = {
                "If-None-Match": f.readline().strip(),
                "If-Modified-Since": f.readline().strip()
            }
            return headers
    return {}

def save_headers(response_headers):
    with open(HEADERS_FILE, 'w') as f:
        f.write(response_headers.get('ETag', '') + "\n")
        f.write(response_headers.get('Last-Modified', '') + "\n")

def fetch_blacklist_txt():
    headers = get_saved_headers()
    response = requests.get(URL, headers=headers)
    
    # If status is 304 Not Modified, there's no need to download
    if response.status_code == 304:
        print("File hasn't changed.")
        return
    
    # Otherwise, save the new file and update headers
    with open("blacklist.txt", "w") as file:
        file.write(response.text)
    
    save_headers(response.headers)

# Rest of your script...

if __name__ == "__main__":
    fetch_blacklist_txt()
    # ... other tasks ...

This script will efficiently check if the remote file has changed by taking advantage of HTTP caching headers. The benefits are:

  • Bandwidth is saved since you're not downloading the entire file if it hasn't changed.
  • The remote server appreciates this too, as it doesn't have to send data unnecessarily.
  • Your script will run faster in cases where the file hasn't changed.

This approach is commonly used for optimizing requests and is considered a best practice.

Graph action

To create a simple image graph using a GitHub Action, you would typically:

  1. Fetch the remote txt file.
  2. Count the number of entries in the file.
  3. Store that count (usually in a file in the repository).
  4. Create an image/graph to represent that count.
  5. Commit the image back to the repo.

For the graph generation, Python has a popular library named matplotlib which can be used to create the image.

Here's a high-level overview:

Steps:

  1. Setup Repository:

    • Create a directory .github/scripts/ and store your Python scripts there.
    • You might have a data file (let's say data.json) which stores the hourly count of entries.
  2. Python Script (graph_generator.py):

This script fetches the remote txt file, counts its entries, updates data.json, and creates an image graph.

import requests
import json
import matplotlib.pyplot as plt
import datetime

# Fetch remote file and count entries
url = 'URL_TO_REMOTE_TXT_FILE'
response = requests.get(url)
entries = response.text.split("\n")
entry_count = len(entries)

# Update data.json with new entry count
try:
    with open('data.json', 'r') as f:
        data = json.load(f)
except FileNotFoundError:
    data = []

current_hour = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
data.append({"hour": current_hour, "count": entry_count})

with open('data.json', 'w') as f:
    json.dump(data, f)

# Generate graph and save as image
hours = [entry['hour'] for entry in data]
counts = [entry['count'] for entry in data]

plt.figure(figsize=(10, 5))
plt.plot(hours, counts, marker='o')
plt.xticks(rotation=45)
plt.tight_layout()
plt.savefig('graph.png')
  3. GitHub Action:

Set up the GitHub Action to run the script, then commit the updated data file and image.

.github/workflows/graph_generator.yml:

name: Graph Generator

on:
  schedule:
    - cron: '0 * * * *'  # Runs every hour

jobs:
  generate_graph:
    runs-on: ubuntu-latest

    steps:
    - name: Checkout repository
      uses: actions/checkout@v2

    - name: Set up Python
      uses: actions/setup-python@v2
      with:
        python-version: 3.8

    - name: Install dependencies
      run: |
        python -m pip install --upgrade pip
        pip install requests matplotlib

    - name: Run graph generator
      run: python .github/scripts/graph_generator.py

    - name: Commit and push updates
      run: |
        git add data.json graph.png
        git config --local user.email "[email protected]"
        git config --local user.name "GitHub Action"
        git commit -m "Update data and graph" -a || echo "No changes to commit"
        git push

Now, every hour, the GitHub Action will execute, fetch the remote txt file, count the entries, update your data file, generate a graph, and push the graph and data back to your repo.

get.domainsblacklists.com improvements

<html>
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    
    <!-- SEO Meta Tags -->
    <title>Domains Blacklist</title>
    <meta name="description" content="Home of the Domains Blacklist. Get the URL for apps like Pi-hole, AdGuard Home, Squid, and more.">
    <link rel="canonical" href="https://get.domainsblacklists.com/"> 
    
    <!-- Open Graph Protocol -->
    <meta property="og:title" content="Domains Blacklist">
    <meta property="og:description" content="Home of the Domains Blacklist. Get the URL for apps like Pi-hole, AdGuard Home, Squid, and more.">
    <meta property="og:image" content="https://get.domainsblacklists.com/path-to-image.jpg"> <!-- Replace with a link to an image representing your content. -->
    <meta property="og:url" content="https://get.domainsblacklists.com/">
    <meta property="og:type" content="website">
    
    <!-- Twitter Cards -->
    <meta name="twitter:card" content="summary">
    <meta name="twitter:title" content="Domains Blacklist">
    <meta name="twitter:description" content="Home of the Domains Blacklist. Get the URL for apps like Pi-hole, AdGuard Home, Squid, and more.">
    <meta name="twitter:image" content="https://get.domainsblacklists.com/path-to-image.jpg"> <!-- Replace with a link to an image representing your content. -->
    <link href="https://cdn.jsdelivr.net/npm/[email protected]/dist/css/bootstrap.min.css" rel="stylesheet">
    <link rel="stylesheet" href="styles.css">
</head>
<body>
    <div class="container">
        <div class="row justify-content-center">
            <div class="col-md-10">
                <div class="main-section text-center">
                    <h1>Get the Blacklist URL</h1>
		    <p class="mb-4">Use the URL below for apps like Pi-hole, AdGuard Home, Squid or any Web Application Firewall. More info on <a href="https://github.com/fabriziosalmi/blacklists/" target="_blank">GitHub</a>.</p>
                    <div class="input-group mb-3 d-flex justify-content-center">
                        <input type="text" id="blacklist-url" class="form-control" style="background-color: #f2f2f2;" value="https://get.domainsblacklists.com/blacklist.txt" readonly>
                    </div>
                    <button class="btn btn-dark" onclick="copyToClipboard()">Copy Link</button>
                    <span id="confirmation-text" class="text-success"></span>
                </div>
            </div>
        </div>
    </div>
    <script src="script.js"></script>
</body>
</html>

Grafana

To track the total count of blacklisted entries across time using Grafana, you'll need to do the following:

  1. Make sure you have InfluxDB set up as a data source in Grafana.
  2. Modify the script to record counts rather than individual entries.
  3. Set up a Grafana dashboard to visualize the data.

1. Set up InfluxDB as a Data Source in Grafana:

  1. Open Grafana and go to the configuration (gear icon) on the left panel.
  2. Click on "Data Sources" and then "Add data source".
  3. Choose "InfluxDB" from the list.
  4. Fill in the details for your InfluxDB instance, including the URL, authentication token, and organization.
  5. Click "Save & Test".

2. Modify Script to Send Counts:

To record the number of blacklisted entries every hour, modify the send_to_influxdb function in the script:

def send_to_influxdb(domains_count):
    client = InfluxDBClient(url=INFLUXDB_URL, token=INFLUXDB_TOKEN, org=INFLUXDB_ORG)
    write_api = client.write_api(write_options=SYNCHRONOUS)

    point = Point("blacklist_stats").field("count", domains_count)
    write_api.write(INFLUXDB_BUCKET, INFLUXDB_ORG, point)

And update the if __name__ == '__main__': block:

if __name__ == '__main__':
    domains = fetch_blacklisted_domains()
    send_to_influxdb(len(domains))
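For completeness, a minimal sketch of the fetch_blacklisted_domains helper that the snippet above assumes (the URL is the one published in the download section of this page):

import requests

BLACKLIST_URL = "https://get.domainsblacklists.com/blacklist.txt"

def fetch_blacklisted_domains():
    """Download the current blacklist and return it as a list of FQDNs."""
    response = requests.get(BLACKLIST_URL, timeout=60)
    response.raise_for_status()
    return [line.strip() for line in response.text.splitlines() if line.strip()]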

3. Set Up Grafana Dashboard:

  1. In Grafana, click on the "+" icon on the left panel and choose "Dashboard".
  2. Click "Add New Panel".
  3. Choose the InfluxDB data source you added earlier.
  4. In the query editor:
    • From: "blacklist_stats".
    • Select: field(value) -> "count".
    • Click on "Apply".
  5. Under the "Visualization" tab, you can choose how you want to display the data. A graph or bar gauge would be appropriate for visualizing the count across time.
  6. You can also adjust the time range (for example, Last 7 days) and the refresh interval (every 1 hour) to fit your needs.
  7. Save the dashboard.

Now, every hour, as your script sends the count of blacklisted domains to InfluxDB, Grafana will visualize the changes, giving you an overview of how the blacklist count evolves over time.

Telegram whitelist request bot

Creating a Telegram bot to handle this workflow requires several steps:

  1. Create a Telegram Bot.
  2. Write the bot code using Python (using the python-telegram-bot library).
  3. Use the GitHub API to create branches and submit pull requests.

Here's a step-by-step guide:

1. Create a Telegram Bot:

  1. Start a chat with the BotFather on Telegram.
  2. Use the /newbot command to create a new bot.
  3. Follow the instructions and get your HTTP API token.

2. Set Up Python Environment:

First, install the required libraries:

pip install python-telegram-bot PyGithub

3. Write the Bot Code:

You'll need your GitHub token for authentication. Make sure you have the permissions to create branches and submit pull requests.

import os
from telegram import Bot, Update
from telegram.ext import Updater, CommandHandler, MessageHandler, Filters, CallbackContext
from github import Github

# GitHub configurations
GITHUB_TOKEN = 'YOUR_GITHUB_TOKEN'
REPO_NAME = 'username/repo'  # Change to your repository name

# Create an instance of the GitHub client
gh = Github(GITHUB_TOKEN)
repo = gh.get_repo(REPO_NAME)

def start(update: Update, _: CallbackContext) -> None:
    update.message.reply_text('Send me a URL or domain, and I will create a merge request for you.')

def handle_message(update: Update, _: CallbackContext) -> None:
    domain = update.message.text
    branch_name = domain.replace('.', '-')

    # Create a new branch
    base_sha = repo.get_branch("main").commit.sha
    repo.create_git_ref(ref=f"refs/heads/{branch_name}", sha=base_sha)

    # Read the whitelist.txt and append the domain
    contents = repo.get_contents("whitelist.txt", ref="main")
    updated_content = contents.decoded_content.decode('utf-8') + '\n' + domain

    # Update the whitelist.txt in the new branch
    repo.update_file(path="whitelist.txt", message=f"Add {domain} to whitelist", content=updated_content, sha=contents.sha, branch=branch_name)

    # Create a pull request
    repo.create_pull(title=f"Add {domain} to whitelist", body=f"Proposing to add {domain} to the whitelist.", head=branch_name, base="main")

    update.message.reply_text(f"Merge request created for {domain}!")

def main():
    # Initialize the Telegram bot
    updater = Updater(token='YOUR_TELEGRAM_TOKEN')

    # Register handlers
    dp = updater.dispatcher
    dp.add_handler(CommandHandler("start", start))
    dp.add_handler(MessageHandler(Filters.text & ~Filters.command, handle_message))

    # Start the bot
    updater.start_polling()
    updater.idle()

if __name__ == '__main__':
    main()

Replace YOUR_TELEGRAM_TOKEN with the token you got from BotFather and YOUR_GITHUB_TOKEN with your GitHub token.

4. Run the Bot:

Run the Python script. Interact with your bot on Telegram, and when you send a domain, it will create a branch named after the domain (with . replaced by -) and submit a merge request to add the domain to the whitelist.txt file in the specified GitHub repository.

Note: Always be cautious when automating interactions with GitHub or any other service to prevent abuse, unintended spamming, or going beyond rate limits. It might be beneficial to add error handling and checks to ensure smooth operations.

Intelligence

Using a time series database (TSDB) to track domain blacklisting over time can be a good idea. TSDBs are optimized for handling time-stamped data. However, the use case you mentioned might be better handled by a combination of TSDB and a relational database or a specialized solution like an ELK (Elasticsearch, Logstash, Kibana) stack.

Here's a general approach:

  1. Choosing a Time Series Database:

    • InfluxDB is a popular open-source time series database.
    • TimescaleDB is another option, which is built on PostgreSQL. This gives you the advantage of a relational database combined with the capabilities of a TSDB.
  2. Storing Data:

    • When you receive a new blacklist file, timestamp the data and insert it into your chosen TSDB.
    • For each domain, record its FQDN and associated IP address(es), and the timestamp it was blacklisted.
  3. Analysis:

    • To find domains or IP addresses that are repeatedly blacklisted, you can run periodic queries on the TSDB.
    • If using TimescaleDB, for instance, SQL queries can help identify repeating patterns.
  4. Using ELK Stack:

    • Elasticsearch can index and search large amounts of log or event data.
    • Logstash can be used to ingest and process the blacklist files, transforming and loading the data into Elasticsearch.
    • Kibana can then visualize this data.

    For your use case, every time you receive a new blacklist file:

    • Use Logstash to process and send the data to Elasticsearch.
    • In Elasticsearch, each document will have the domain, associated IP, and timestamp.
    • Use Kibana to create visualizations and dashboards to identify trends, such as which IPs are frequently blacklisted.
  5. Actionable Insights:

    • Set up alerts or triggers. For instance, if an IP appears on the blacklist more than a certain number of times in a specified period, you can be alerted.
  6. Maintenance:

    • Regularly back up your database.
    • Periodically prune old data if you do not need to retain all historical data indefinitely.

In conclusion, while a TSDB is an excellent tool for tracking time-series data, the nature of your use case suggests that you might benefit more from a combination of databases or using tools like the ELK stack. This way, you'll have both the time-based tracking and the relational analysis capabilities you're seeking.
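That said, the repeat-offender analysis in point 3 can be prototyped before committing to a TSDB or the ELK stack; here is a sketch using the SQLite approach suggested earlier on this page (schema and thresholds are illustrative):

import sqlite3
import time

conn = sqlite3.connect("blacklist_history.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS sightings (
        fqdn TEXT NOT NULL,
        seen_at INTEGER NOT NULL  -- unix timestamp of the snapshot
    )
""")

def record_snapshot(domains):
    """Insert one timestamped row per domain in the current blacklist."""
    now = int(time.time())
    conn.executemany(
        "INSERT INTO sightings (fqdn, seen_at) VALUES (?, ?)",
        [(domain, now) for domain in domains],
    )
    conn.commit()

def repeat_offenders(days=30, min_snapshots=5):
    """Domains seen in at least min_snapshots snapshots over the window."""
    cutoff = int(time.time()) - days * 86400
    return conn.execute(
        """SELECT fqdn, COUNT(*) FROM sightings
           WHERE seen_at >= ?
           GROUP BY fqdn
           HAVING COUNT(*) >= ?
           ORDER BY COUNT(*) DESC""",
        (cutoff, min_snapshots),
    ).fetchall()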

Download tar.gz https://get.domainsblacklists.com/blacklist.txt if updated

Certainly! Here's an improved version of the script that:

  1. Shows progress using the tqdm library. This will show a progress bar while downloading the file.
  2. Extracts the content of the downloaded file (assuming it's a tar.gz file) to blacklist.txt.

Firstly, you'll need to install tqdm:

pip install tqdm

Now, here's the improved script:

import requests
import os
from tqdm import tqdm
import tarfile

# Constants
TOKEN = 'YOUR_GITHUB_PERSONAL_ACCESS_TOKEN'
REPO_OWNER = 'owner_of_repository'
REPO_NAME = 'repository_name'
FILE_PATH = 'path_to_file_in_repo.txt'
LAST_CHECKED_TIMESTAMP = 'last_checked_timestamp.txt'
DOWNLOAD_FILE_NAME = 'downloaded_file.tar.gz'
EXTRACTED_FILE_NAME = 'blacklist.txt'
HEADERS = {
    'Authorization': f'token {TOKEN}',
    'Accept': 'application/vnd.github.v3+json'
}

def get_last_checked_timestamp():
    if os.path.exists(LAST_CHECKED_TIMESTAMP):
        with open(LAST_CHECKED_TIMESTAMP, 'r') as f:
            return f.read().strip()
    return None

def set_last_checked_timestamp(timestamp):
    with open(LAST_CHECKED_TIMESTAMP, 'w') as f:
        f.write(timestamp)

def download_with_progressbar(url, filename):
    response = requests.get(url, stream=True)
    total_size = int(response.headers.get('content-length', 0))
    block_size = 1024
    t = tqdm(total=total_size, unit='B', unit_scale=True, desc=filename)
    with open(filename, 'wb') as f:
        for data in response.iter_content(block_size):
            t.update(len(data))
            f.write(data)
    t.close()

def main():
    # Get file details from GitHub
    url = f'https://api.github.com/repos/{REPO_OWNER}/{REPO_NAME}/contents/{FILE_PATH}'
    response = requests.get(url, headers=HEADERS)
    file_data = response.json()

    # Check if updated. The contents API does not return an 'updated_at'
    # field, so compare the file's content hash ('sha') instead.
    last_checked = get_last_checked_timestamp()
    if not last_checked or file_data['sha'] != last_checked:
        print("File was updated. Downloading...")

        # Download the file with progress bar
        download_url = file_data['download_url']
        download_with_progressbar(download_url, DOWNLOAD_FILE_NAME)
        
        # Extract the tar.gz file to blacklist.txt
        with tarfile.open(DOWNLOAD_FILE_NAME, 'r:gz') as tar:
            tar.extractall()
            os.rename('all.fqdn.blacklist', EXTRACTED_FILE_NAME)  # Assuming the file inside tar.gz is named all.fqdn.blacklist

        # Update the stored content hash
        set_last_checked_timestamp(file_data['sha'])
    else:
        print("File has not been updated since last check.")

if __name__ == '__main__':
    main()

In this script:

  • The download_with_progressbar function uses the tqdm library to show a progress bar when downloading the file.
  • The tarfile library is used to extract the content of the downloaded tar.gz file to blacklist.txt.

Make sure to adjust the filename inside the tar.gz archive if it's different from all.fqdn.blacklist.

Automatically generated release notes

Automatically generated release notes provide an automated alternative to manually writing release notes for your GitHub releases. With automatically generated release notes, you can quickly generate an overview of the contents of a release. Automatically generated release notes include a list of merged pull requests, a list of contributors to the release, and a link to a full changelog.

You can also customize your automated release notes, using labels to create custom categories to organize pull requests you want to include, and exclude certain labels and users from appearing in the output.
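The same notes can also be produced programmatically; here is a minimal sketch against GitHub's generate-notes REST endpoint (the token and tag name are placeholders):

import requests

TOKEN = "YOUR_GITHUB_TOKEN"  # placeholder: a token with access to the repo
REPO = "fabriziosalmi/blacklists"

response = requests.post(
    f"https://api.github.com/repos/{REPO}/releases/generate-notes",
    headers={
        "Authorization": f"token {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    json={"tag_name": "latest"},  # the tag this repo publishes releases under
    timeout=30,
)
response.raise_for_status()
# 'body' holds the markdown: merged PRs, contributors, full-changelog link
print(response.json()["body"])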

bl checker

DNS Resolver                            Status
AdGuard Family (176.103.130.132)        Blocked
CleanBrowsing Adult (185.228.168.10)    Blocked
CleanBrowsing Family (185.228.168.168)  Blocked
CloudFlare Family (1.1.1.3)             Blocked
Neustar Family (156.154.70.3)           Blocked
OpenDNS Family (208.67.222.123)         Blocked
Yandex Family (77.88.8.7)               Blocked
AdGuard (176.103.130.130)               Pointing to 66.254.114.41 (loaded in 31 msec)
CleanBrowsing Security (185.228.168.9)  Pointing to 66.254.114.41 (loaded in 33 msec)
CloudFlare (1.1.1.1)                    Pointing to 66.254.114.41 (loaded in 0 msec)
Comodo Secure (8.26.56.26)              Pointing to 66.254.114.41 (loaded in 1 msec)
Google DNS (8.8.8.8)                    Pointing to 66.254.114.41 (loaded in 1 msec)
Neustar Protection (156.154.70.2)       Pointing to 66.254.114.41 (loaded in 16 msec)
Norton Family (199.85.126.20)           Pointing to 66.254.114.41 (loaded in 11 msec)
OpenDNS (208.67.222.222)                Pointing to 66.254.114.41 (loaded in 0 msec)
Quad9 (9.9.9.9)                         Pointing to 66.254.114.41 (loaded in 17 msec)
Yandex Safe (77.88.8.88)                Pointing to 66.254.114.41 (loaded in 139 msec)

check dns records against dns filtering services

import dns.resolver

def check_domain(domain):
    dns_servers = {
        'Google': '8.8.8.8',
        'Cloudflare': '1.1.1.3',
        'Quad9': '9.9.9.9',
        'CleanBrowsing': '185.228.168.9',
        'AdGuard DNS': '176.103.130.130',
        'Yandex.DNS': '77.88.8.8',
    }

    # Example blacklisted IPs for A records
    blacklist_ips = {
        'Google': '0.0.0.0',
        'Cloudflare': '0.0.0.0',
        'Quad9': '0.0.0.0',
        'CleanBrowsing': '185.228.168.10',
        'AdGuard DNS': '176.103.130.131',
        'Yandex.DNS': '77.88.8.1',
    }

    for provider, server in dns_servers.items():
        resolver = dns.resolver.Resolver()
        resolver.nameservers = [server]

        try:
            answers = resolver.resolve(domain, 'A')
            for rdata in answers:
                if rdata.address == blacklist_ips[provider]:
                    return 1  # Blacklisted by at least one server
        except Exception:
            # Treat resolution failures as "not blocked" for this provider
            pass

    return 0  # Not blacklisted by any server

if __name__ == "__main__":
    domain_to_check = input("Enter domain to check: ").strip()
    result = check_domain(domain_to_check)
    print(result)

Teams Bot

Microsoft Teams allows for incoming webhooks as part of their app integrations. Incoming webhooks are a simple way to share information from external sources with your Teams channel.

Here's how you can set up a domain checking bot for Microsoft Teams using incoming webhooks:

1. Set Up Incoming Webhook in Microsoft Teams:

  1. Add the Webhook to Teams:

    • Go to the channel where you want to add the webhook.
    • Click on the ... (More options) -> Connectors.
    • Search for Incoming Webhook and click Configure.
    • Provide a name, and you can also upload an image for your webhook here.
    • Click Create.
    • Copy the webhook URL provided (this will be used to post back messages to this channel).
  2. Send a Message to the Webhook:
    Test out the webhook using curl or any REST client:

    curl -H "Content-Type: application/json" -d "{\"text\": \"Hello World\"}" <WEBHOOK_URL>

2. Develop the Bot using Python:

  1. Bot Script:

Here's a simple script using Flask:

from flask import Flask, request, jsonify
import sqlite3
import requests

app = Flask(__name__)
DB_PATH = "path_to_your_blacklist_database.db"
TEAMS_WEBHOOK_URL = "YOUR_TEAMS_WEBHOOK_URL"

@app.route('/teams-bot', methods=['POST'])
def teams_bot():
    # Get the incoming message
    incoming_msg = request.json.get('text', '').lower().strip()

    # Connect to the database and check if domain is blacklisted
    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()
    cursor.execute("SELECT * FROM blacklist WHERE domain=?", (incoming_msg,))
    
    if cursor.fetchone():
        msg = f"{incoming_msg} is blacklisted!"
    else:
        msg = f"{incoming_msg} is not blacklisted."
    conn.close()

    # Send response to Teams
    headers = {'Content-Type': 'application/json'}
    payload = {'text': msg}
    requests.post(TEAMS_WEBHOOK_URL, headers=headers, json=payload)

    return jsonify(success=True)

if __name__ == '__main__':
    app.run(port=5000)

Replace YOUR_TEAMS_WEBHOOK_URL with the webhook URL you got from Microsoft Teams.

  2. Set Up Ngrok (if running locally):
    Use ngrok to expose your local server to the internet:
    • After installing ngrok, run ngrok http 5000.
    • This provides an externally accessible URL you can use as your bot endpoint.

3. Connect the Service:

  • Use the ngrok URL (or wherever your service is hosted) as the endpoint that Microsoft Teams will send user messages to.
  • When a user sends a message in Teams, the bot should check the domain and respond whether it's blacklisted.

Remember, as always, ensure any integration is secure, especially when interacting with databases.
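Before wiring the endpoint into Teams, you can sanity-check it locally by posting the kind of JSON payload the route expects (a minimal sketch; it assumes the Flask app above is running on port 5000, and example.com is just a sample domain):

import requests

# Simulate the message payload the /teams-bot route expects
resp = requests.post(
    "http://localhost:5000/teams-bot",
    json={"text": "example.com"},
)
print(resp.status_code, resp.json())

Note that if TEAMS_WEBHOOK_URL is still a placeholder, the outbound post inside the route will fail, so set it to a real webhook URL (or temporarily comment that call out) while testing.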

Turning your project into a sustainable business

Turning your project into a sustainable business can be an exciting venture. You've already done significant groundwork by creating a dynamic blacklist system and have automated parts of it. Here's a strategic approach you can consider to take it further:

  1. Refinement and Reliability:

    • Test Your Solution: Ensure that your system works consistently. Errors in blacklists (like blocking legitimate sites) can erode trust quickly.
    • Scalability: Ensure that your infrastructure can handle larger datasets and higher query rates if your user base grows.
  2. Documentation:

    • Usage Guidelines: Provide clear instructions on how users or businesses can integrate with your blacklist service.
    • Updates and Changelog: Regularly update your users about changes, additions, or removals in the list.
  3. Service Differentiation:

    • Real-time Updates: Offer real-time or faster update intervals as a premium feature.
    • Custom Blacklists: Allow premium users to curate their own blacklists or integrate multiple sources.
    • API Access: Consider offering API-based access to your service for developers and businesses.
  4. Monetization Strategies:

    • Subscription Model: Offer the basic version for free, and have tiered subscriptions for advanced features.
    • Affiliate Programs: Partner with cybersecurity firms or related businesses.
    • Donations: Allow for a donation system, especially if you're offering the service for free. It gives those who find value in your service an avenue to support you.
  5. Community and Collaboration:

    • Engage with Source Creators: Collaborate and share revenue with those who provide high-quality blacklists. This can motivate them to provide timely and accurate updates.
    • Public Discussions: Forums or chat groups where users can discuss false positives, request additions, or share insights.
  6. Marketing and Outreach:

    • Blog: Write about cybersecurity, the importance of blacklists, case studies, etc. This not only builds trust but also helps in organic SEO.
    • Partnerships: Partner with cybersecurity bloggers, YouTubers, or influencers to promote your service.
    • Social Media: Regular updates, tips, and engaging content related to internet security can be shared.
  7. Legal and Ethical Considerations:

    • Transparency: Be transparent about your sources and methods.
    • Privacy Policy and T&Cs: Ensure you have a well-drafted privacy policy and terms of service.
  8. Feedback Loop:

    • User Feedback: Create mechanisms for users to give feedback. This can help you iterate on your product and make necessary improvements.
    • Analytics: Monitor how many users are accessing your service, peak times, any downtimes, etc. Use this data to refine and improve.
  9. Diversification:

    • Whitelisting Service: Since you already have a whitelist mechanism, think about creating a separate but related whitelisting service.
    • Related Tools: Create tools or plugins for popular platforms that use your blacklist for filtering or security.
  10. Plan for Growth:

  • Hire or Collaborate: As your service grows, consider bringing in experts to help, whether in marketing, cybersecurity, or tech.
  • Continuous Learning: The world of cybersecurity is always evolving. Keep updated with the latest trends and threats.

Remember, any business is a continuous journey of learning, adapting, and growing. Listen to your users, keep refining your product, and stay passionate about the value you're providing. Good luck!

Integrations

A list of software and solutions that can integrate and enforce domain-based (FQDN) blacklists like blacklist.txt includes the following; a conversion sketch for two of them appears after the list:

  1. DNS Servers/Resolvers:

    • BIND9: As discussed earlier, you can use RPZ (Response Policy Zones) to enforce domain-based blacklisting.
    • PowerDNS: Supports RPZ and can enforce domain-based blacklisting.
    • Unbound: Can use domain blacklists directly to block unwanted domains.
    • Pi-hole: An open-source DNS sinkhole that can be fed with domain-based blacklists to block ads and trackers at the DNS level.
  2. Web Proxies:

    • Squid: This caching and forwarding web proxy can integrate domain-based blacklists to block access to specific sites.
    • Privoxy: A non-caching web proxy that can be combined with Tor. It can be fed with domain blacklists.
  3. Web Content Filters:

    • DansGuardian (now E2Guardian): Filters web access and can be combined with Squid. It supports domain-based blacklists.
    • URLFilterDB: Can integrate with the Squid web proxy and uses domain blacklists.
  4. Firewalls & Unified Threat Management (UTM) Solutions:

    • pfSense: An open-source firewall and router. With packages like pfBlockerNG, it can integrate domain-based blacklists.
    • OPNsense: A fork of pfSense. It can also integrate domain-based blacklists with certain plugins.
    • Endian Firewall: A UTM solution that can use domain blacklists to block access to specific sites.
    • Sophos UTM: Commercial solution with web filtering capabilities that can potentially integrate custom domain blacklists.
  5. Next-gen Firewalls & Security Solutions:

    • Palo Alto Networks: Their NGFWs can integrate custom domain blacklists.
    • Fortinet: Their FortiGate NGFWs can be fed with custom domain blacklists for web filtering.
    • Cisco Firepower/ASA with FirePOWER: Can use domain-based blacklists for web filtering.
  6. SIEM Solutions:

    • Splunk: With its security plugins, you can integrate domain-based blacklists and get alerted on matches.
    • ELK Stack (Elasticsearch, Logstash, Kibana): Can integrate domain-based blacklists and visualize/access logs based on hits.
  7. Threat Intelligence Platforms:

    • MISP (Malware Information Sharing Platform & Threat Sharing): Can integrate domain-based indicators of compromise.
  8. Endpoint Protection:

    • Next-Gen Antivirus solutions: Many modern antivirus solutions allow for the integration of custom blacklists, including domain names.

[...]
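Most of these tools consume the same one-domain-per-line blacklist.txt, just wrapped in their own syntax. As a minimal sketch of such a conversion (file names are illustrative, and the input is assumed to be one FQDN per line), here's how you might generate an Unbound local-zone file and a Squid dstdomain file:

# convert_blacklist.py: turn blacklist.txt into tool-specific formats
def load_domains(path="blacklist.txt"):
    with open(path) as f:
        return [line.strip() for line in f if line.strip() and not line.startswith("#")]

def write_unbound(domains, path="blacklist.unbound.conf"):
    # Unbound: one local-zone directive per domain, answering NXDOMAIN
    with open(path, "w") as f:
        for d in domains:
            f.write(f'local-zone: "{d}" always_nxdomain\n')

def write_squid(domains, path="blacklist.squid.acl"):
    # Squid dstdomain: a leading dot makes the entry match subdomains too
    with open(path, "w") as f:
        for d in domains:
            f.write(f".{d}\n")

if __name__ == "__main__":
    domains = load_domains()
    write_unbound(domains)
    write_squid(domains)
    print(f"Converted {len(domains)} domains.")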


DNS servers

Here is a list of some popular public DNS servers you can use. Remember, always make sure you're not abusing these services or violating any terms of service.

  1. Google

    • 8.8.8.8
    • 8.8.4.4
  2. Cloudflare

    • 1.1.1.1
    • 1.0.0.1
  3. OpenDNS (Owned by Cisco)

    • 208.67.222.222
    • 208.67.220.220
  4. Quad9

    • 9.9.9.9
    • 149.112.112.112
  5. DNS.WATCH

    • 84.200.69.80
    • 84.200.70.40
  6. Comodo Secure DNS

    • 8.26.56.26
    • 8.20.247.20
  7. Verisign

    • 64.6.64.6
    • 64.6.65.6
  8. Alternate DNS

    • 198.101.242.72
    • 23.253.163.53
  9. Yandex.DNS

    • 77.88.8.8
    • 77.88.8.1
  10. CleanBrowsing

    • 185.228.168.9
    • 185.228.169.9

Please keep in mind:

  1. These are popular public DNS servers, and repeated or aggressive querying might be flagged as abusive.
  2. Always use them with respect and ensure you're not causing any disruption.

If you're testing or probing, spread your queries out, stay within published rate limits, and make sure you're not violating any terms of use. If you're building a large-scale application or doing intensive testing, consider running your own DNS server or using a paid service.
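As a minimal sketch of what "spreading queries out" can look like with dnspython (the resolver subset and one-second delay are illustrative):

import time
import dns.resolver  # pip install dnspython

RESOLVERS = {
    "Google": "8.8.8.8",
    "Cloudflare": "1.1.1.1",
    "Quad9": "9.9.9.9",
}

def query_politely(domain, delay_seconds=1.0):
    """Query one domain against several resolvers, pausing between requests."""
    results = {}
    for name, server in RESOLVERS.items():
        resolver = dns.resolver.Resolver()
        resolver.nameservers = [server]
        resolver.lifetime = 5  # overall timeout per query, in seconds
        try:
            answers = resolver.resolve(domain, "A")
            results[name] = [rdata.address for rdata in answers]
        except Exception as exc:
            results[name] = f"error: {exc}"
        time.sleep(delay_seconds)  # be polite: spread queries out
    return results

if __name__ == "__main__":
    print(query_politely("example.com"))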

Bouncer

If you're looking to build a bouncer for CrowdSec in Go, here's a more detailed outline with some Go snippets to get you started.

  1. Initialization:
    • On startup, your Go application should fetch the initial blacklist.
    • You should then load these domains into a Go map for O(1) lookup time.
package main

import (
	"io"
	"log"
	"net/http"
	"strings"
	"time"
)

var blacklistedDomains map[string]bool

func fetchBlacklist(url string) {
	resp, err := http.Get(url)
	if err != nil {
		log.Printf("failed to fetch blacklist: %v", err)
		return
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Printf("failed to read blacklist: %v", err)
		return
	}

	// Build a fresh map so entries removed upstream disappear on refresh
	fresh := make(map[string]bool)
	for _, line := range strings.Split(string(body), "\n") {
		domain := strings.TrimSpace(line)
		if domain != "" {
			fresh[domain] = true
		}
	}
	blacklistedDomains = fresh
}
  2. Blocking Logic:
    • For each incoming request, check if the requested domain is blacklisted.
    • Block or alert based on the result.
func isBlacklisted(domain string) bool {
	return blacklistedDomains[domain]
}

func handleRequest(domain string) {
	if isBlacklisted(domain) {
		// Block or alert
	}
}
  3. Periodic Blacklist Update:
    • Use Go's time package to periodically fetch the updated blacklist.
func main() {
	blacklistedDomains = make(map[string]bool)
	fetchBlacklist("https://get.domainsblacklists.com/blacklist.txt")

	// Refresh every hour; ranging over the ticker also keeps main alive
	ticker := time.NewTicker(1 * time.Hour)
	defer ticker.Stop()
	for range ticker.C {
		fetchBlacklist("https://get.domainsblacklists.com/blacklist.txt")
	}
}
  4. Integration with CrowdSec:

    • This largely depends on how you plan to deploy this bouncer and what exactly you need to interact with in CrowdSec. Typically, you would make API calls to CrowdSec to fetch decisions and then use the domain blacklist in conjunction with these decisions to make block/allow decisions.
  5. Configuration & Customization:

    • Consider using a configuration file or environment variables to allow users to specify settings like the blacklist URL.
  6. Logging & Error Handling:

    • Go has great logging libraries. Consider using the standard log package or third-party packages like logrus for more advanced features.
  7. Compile and Run:

    • Once your Go bouncer is ready, you can compile it using go build and then deploy the resulting binary.

This is a high-level outline to get you started. A production-ready bouncer would involve more complexities such as error handling, performance optimizations, integration with other systems, and so forth.

Remember that Go is statically typed, and error handling is explicit, so make sure you handle all possible error cases, especially when making network calls or IO operations.

generate v3

#!/bin/bash

set -e # Stop the script if any command fails
set -u # Stop the script if an uninitialized variable is used

echo "Setup script"

# Detect package manager
PACKAGE_MANAGER=""
UPDATE_CMD=""
INSTALL_CMD=""

if command -v apt-get &>/dev/null; then
    PACKAGE_MANAGER="apt-get"
    UPDATE_CMD="sudo apt-get update"
    INSTALL_CMD="sudo apt-get install -y"
elif command -v apk &>/dev/null; then
    PACKAGE_MANAGER="apk"
    UPDATE_CMD="sudo apk update"
    INSTALL_CMD="sudo apk add --no-cache"
else
    echo "Unsupported package manager. Exiting."
    exit 1
fi

# Safe function to install a package
install_package() {
    local package=$1
    if ! $INSTALL_CMD $package; then
        echo "Failed to install '$package' using $PACKAGE_MANAGER."
        exit 1
    fi
}

# Update and install prerequisites
$UPDATE_CMD
install_package "python3"

# Link python3 to python (for Ubuntu, since Alpine doesn't have python2 by default)
if [ "$PACKAGE_MANAGER" == "apt-get" ]; then
    [[ ! -e /usr/bin/python ]] && sudo ln -s /usr/bin/python3 /usr/bin/python
fi

python3 -m ensurepip --upgrade
pip3 install --no-cache-dir --upgrade pip setuptools tldextract tqdm

# Install pv and ncftp based on the detected package manager
for pkg in pv ncftp; do
    install_package $pkg
done

LISTS="blacklists.fqdn.urls"
TMP_DIR="/tmp/blacklist_processing"

mkdir -p "$TMP_DIR"
cd "$TMP_DIR"

# Function to download a URL
download_url() {
    local url="$1"
    local filename="$TMP_DIR/$(uuidgen | tr -dc '[:alnum:]').fqdn.list"

    echo "Blacklist: $url"
    if ! wget -q -O "$filename" "$url"; then
        echo "Failed to download: $url"
        exit 1
    fi
}

echo "Download blacklists"
while IFS= read -r url || [[ -n "$url" ]]; do
    download_url "$url"
done < "$LISTS"

echo "Aggregate blacklists"
cat *.fqdn.list | sort -u > aggregated.fqdn.list

# Cleanup
rm *.fqdn.list

# Sanitization: strip hosts-file prefixes and drop lines that are bare IP addresses
ipv4_pattern='^[0-9]{1,3}(\.[0-9]{1,3}){3}$'
ipv6_pattern='^([0-9a-fA-F]{0,4}:){1,7}[0-9a-fA-F]{0,4}$'

echo "Sanitize blacklists"
sed -Ei -e "s/^0\.0\.0\.0 ?//" \
    -e "s/^127\.0\.0\.1 ?//" \
    -e "/$ipv4_pattern/d" \
    -e "/$ipv6_pattern/d" aggregated.fqdn.list

# Additional sanitization using sanitize.py
mv aggregated.fqdn.list input.txt
if ! python sanitize.py; then
    echo "Error during sanitize.py"
    exit 1
fi
mv output.txt all.fqdn.blacklist

# Remove entries that are present in whitelist.txt
mv all.fqdn.blacklist blacklist.txt
if ! python whitelist.py; then
    echo "Error during whitelist.py"
    exit 1
fi
mv filtered_blacklist.txt all.fqdn.blacklist

# Tar the file
if ! tar -czf all.fqdn.blacklist.tar.gz "all.fqdn.blacklist"; then
    echo "Error: Failed to create the tar.gz file."
    exit 1
fi

echo "Total domains: $(wc -l < all.fqdn.blacklist)."

# Clean up
cd ..
rm -rf "$TMP_DIR"

echo "Script completed successfully."

InfluxDB

Sending blacklist entries every hour to InfluxDB involves a combination of:

  1. Extracting the blacklist entries from your source.
  2. Formatting these entries in a manner compatible with InfluxDB.
  3. Sending this data to InfluxDB.
  4. Automating the above steps to run every hour.

Below, I'll guide you through these steps.

1. Setup:

First, ensure you have the necessary Python libraries:

pip install influxdb-client

2. Script to Send Blacklist Entries to InfluxDB:

Here's a simple Python script (send_to_influxdb.py) that will send blacklist entries to InfluxDB:

from influxdb_client import InfluxDBClient, Point
from influxdb_client.client.write_api import SYNCHRONOUS
import sqlite3

# SQLite database path
DB_PATH = "path_to_your_blacklist_database.db"

# InfluxDB configurations
INFLUXDB_URL = 'http://localhost:8086'  # Modify as needed
INFLUXDB_TOKEN = 'YOUR_INFLUXDB_TOKEN'
INFLUXDB_ORG = 'YOUR_INFLUXDB_ORGANIZATION'
INFLUXDB_BUCKET = 'YOUR_INFLUXDB_BUCKET'

# Connect to SQLite database and fetch blacklisted domains
def fetch_blacklisted_domains():
    conn = sqlite3.connect(DB_PATH)
    cursor = conn.cursor()
    cursor.execute("SELECT domain FROM blacklist")
    domains = [row[0] for row in cursor.fetchall()]
    conn.close()
    return domains

def send_to_influxdb(domains):
    client = InfluxDBClient(url=INFLUXDB_URL, token=INFLUXDB_TOKEN, org=INFLUXDB_ORG)
    write_api = client.write_api(write_options=SYNCHRONOUS)

    points = []
    for domain in domains:
        point = Point("blacklist").tag("source", "your_source_name").field("domain", domain)
        points.append(point)

    write_api.write(INFLUXDB_BUCKET, INFLUXDB_ORG, points)

if __name__ == '__main__':
    domains = fetch_blacklisted_domains()
    send_to_influxdb(domains)

Replace placeholders (YOUR_INFLUXDB_TOKEN, etc.) with appropriate values.

3. Automate the Process:

To send the blacklist entries every hour, you can schedule the script to run as a cron job (on Linux) or a scheduled task (on Windows).

For Linux (using cron):

Edit the crontab:

crontab -e

Add the following line to schedule the script to run every hour:

0 * * * * /path/to/python3 /path/to/send_to_influxdb.py

Replace /path/to/python3 with the path to your Python interpreter (you can get this with which python3) and /path/to/send_to_influxdb.py with the full path to the script.

For Windows (using Task Scheduler):

  1. Open Task Scheduler.
  2. Create a new basic task.
  3. Set the trigger to repeat every hour.
  4. For the action, choose "Start a program" and point it to your Python interpreter and add the path to the script in the "Add arguments" section.

With these steps, your blacklist entries should be sent to InfluxDB every hour. Ensure you handle any potential exceptions or errors, especially when dealing with external systems.

generate_fqdn.sh

#!/bin/bash

setup_environment() {
    echo "Setup script"

    # Detect package manager
    if command -v apt-get &>/dev/null; then
        PACKAGE_MANAGER="apt-get"
        UPDATE_CMD="sudo apt-get update"
        INSTALL_CMD="sudo apt-get install -y"
    else
        echo "Unsupported package manager. Exiting."
        exit 1
    fi

    # Update and install prerequisites
    $UPDATE_CMD
    $INSTALL_CMD python3

    # Link python3 to python (for Ubuntu); skip if the link already exists
    [[ ! -e /usr/bin/python ]] && sudo ln -s /usr/bin/python3 /usr/bin/python

    python3 -m ensurepip --upgrade
    pip3 install --no-cache-dir --upgrade pip setuptools tldextract tqdm

    # Install other necessary packages
    for package in pv ncftp; do
        if ! $INSTALL_CMD $package; then
            echo "Failed to install '$package' using $PACKAGE_MANAGER."
            exit 1
        fi
    done
}

download_blacklists() {
    echo "Download blacklists"

    while IFS= read -r url || [[ -n "$url" ]]; do
        local random_filename=$(uuidgen | tr -dc '[:alnum:]')
        if ! wget -q --progress=bar:force -O "$random_filename.fqdn.list" "$url"; then
            echo "Failed to download: $url"
        fi
    done < "blacklists.fqdn.urls"
}

aggregate_blacklists() {
    echo "Aggregate blacklists"

    cat ./*.fqdn.list >> aggregated.fqdn.list

    sort -u aggregated.fqdn.list > all.fqdn.blacklist
    rm ./*.fqdn.list
}

sanitize_blacklists() {
    echo "Sanitize blacklists"

    mv all.fqdn.blacklist input.txt
    python sanitize.py
    mv output.txt all.fqdn.blacklist

    echo "Remove whitelisted domains"
    mv all.fqdn.blacklist blacklist.txt
    python whitelist.py
    mv filtered_blacklist.txt all.fqdn.blacklist
}

create_compressed_file() {
    echo "Create compressed file"
    if ! tar -czf all.fqdn.blacklist.tar.gz "all.fqdn.blacklist"; then
        echo "Error: Failed to create the tar.gz file."
        exit 1
    fi

    total_lines_new=$(wc -l < all.fqdn.blacklist)
    echo "Total domains: $total_lines_new."
}

# Execute functions
setup_environment
download_blacklists
aggregate_blacklists
sanitize_blacklists
create_compressed_file

Services > Browser Extension

Let's adapt the Chrome extension for Firefox. Firefox extensions use the WebExtensions API, which is largely compatible with the Chrome extension API, but there are some differences to be aware of.

Firefox Extension Structure:

The folder structure remains almost the same:

/MyBlacklistExtensionFirefox
    /icons
        icon16.png
        icon48.png
        icon128.png
    popup.html
    popup.js
    manifest.json

1. Update the manifest.json:

We need to modify the manifest.json to ensure compatibility with Firefox:

{
  "manifest_version": 2,

  "name": "Blacklist Warning Extension for Firefox",
  "version": "1.0",
  "description": "Warns users about blacklisted domains.",

  "browser_action": {
    "default_icon": {
      "16": "icons/icon16.png",
      "48": "icons/icon48.png",
      "128": "icons/icon128.png"
    },
    "default_popup": "popup.html"
  },

  "permissions": [
    "activeTab",
    "<all_urls>"
  ]
}

The primary change here is the permissions. Firefox prefers "<all_urls>" over the wildcarded HTTP and HTTPS formats.

2. Load Your Extension into Firefox:

  • Open Firefox.
  • Enter about:debugging in the address bar.
  • Click "This Firefox" on the left sidebar.
  • Click "Load Temporary Add-on...".
  • Select any file (e.g., manifest.json) in your extension's directory.

Your extension should now appear in the list of temporary extensions and the icon in the Firefox toolbar.

3. Submitting to Firefox Add-ons:

When you're ready to share your extension with Firefox users:

  1. Visit the Firefox Add-ons Developer Hub.
  2. Click "Submit a New Add-on".
  3. Provide the necessary details and upload your extension. It will be packaged as a .zip or .xpi file.

Remember, Firefox is stricter with reviews than Chrome. Ensure your code is clean, doesn't have any security vulnerabilities, and doesn't violate any terms of service. Once approved, your extension will be available for all Firefox users to install.

Notes:

  • Debugging: The about:debugging page also lets you inspect your extension's popup and background processes.

  • Cross-browser compatibility: While Firefox is more aligned with the WebExtensions API now, there may still be some quirks when porting between browsers. Always test in each targeted browser.

  • Icon Sizes: Firefox might display your icon in different places, so make sure you provide a full set of icon sizes in the manifest.json to ensure crisp display everywhere.

Services > Alerting & Notifications

Creating an alerting and notification service involves several steps:

  1. User Registration: Collect user information, including domain or IP and their contact details.
  2. Database: Store the user data.
  3. Blacklist Checking Service: Periodically check the stored domains/IPs against blacklists.
  4. Notification System: Notify users if their domain or IP appears on a blacklist.

Given the requirements, I'll outline a high-level solution using Python, SQLite as a database, and the SMTP library to send email alerts:

  1. Setting Up the Database:

    First, let's create a script to set up a SQLite database:

    # setup_db.py
    import sqlite3
    
    con = sqlite3.connect('domains.db')
    cursor = con.cursor()
    
    # Create the table
    cursor.execute('''
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        domain TEXT NOT NULL,
        email TEXT NOT NULL
    )
    ''')
    
    con.commit()
    con.close()

    Run this script to create the SQLite database:

    python setup_db.py
  2. User Registration Script:

    Register domains and their respective user's email.

    # register_domain.py
    import sqlite3
    
    def register(domain, email):
        con = sqlite3.connect('domains.db')
        cursor = con.cursor()
    
        cursor.execute("INSERT INTO users (domain, email) VALUES (?, ?)", (domain, email))
    
        con.commit()
        con.close()
    
    if __name__ == "__main__":
        domain = input("Enter the domain/IP to monitor: ")
        email = input("Enter your email: ")
        register(domain, email)
        print(f"Monitoring {domain} for {email}.")
  3. Blacklist Checking and Notification:

    This script will check the blacklists and send an email notification:

    # monitor_blacklist.py
    import sqlite3
    import requests
    import smtplib
    from email.message import EmailMessage
    
    BLACKLIST_URL = "https://get.domainsblacklists.com/blacklist.txt"
    
    def get_blacklist():
        response = requests.get(BLACKLIST_URL)
        if response.status_code == 200:
            return set(response.text.splitlines())
        return set()
    
    def send_email(recipient, domain):
        msg = EmailMessage()
        msg.set_content(f'Your domain {domain} is on the blacklist!')
        msg['Subject'] = 'Blacklist Alert!'
        msg['From'] = 'your_email@gmail.com'
        msg['To'] = recipient
    
        # Authenticate with your SMTP server and send email
        server = smtplib.SMTP_SSL('smtp.gmail.com', 465)
        server.login('your_email@gmail.com', 'your_password')  # Use a more secure method for production
        server.send_message(msg)
        server.quit()
    
    if __name__ == "__main__":
        con = sqlite3.connect('domains.db')
        cursor = con.cursor()
        cursor.execute("SELECT domain, email FROM users")
        users = cursor.fetchall()
    
        blacklist = get_blacklist()
    
        for domain, email in users:
            if domain in blacklist:
                print(f"Alert! {domain} found in blacklist. Notifying {email}.")
                send_email(email, domain)

    Note: For the email service, we're using Gmail's SMTP server. Remember to enable "Less secure app access" for your Gmail account to allow sending emails from the script. In a production environment, consider using dedicated email services and not hardcoding credentials.

Now, to run the service:

  1. Set up the database: python setup_db.py
  2. Register domains: python register_domain.py
  3. Periodically run the monitoring script: python monitor_blacklist.py

For a real-world service, you'd want to incorporate more features such as error handling, user verification, a web-based interface, better security for password handling, and scheduling tools to automate the blacklist monitoring.

Documentation

We can break this down into multiple steps:

  1. Fetching the Blacklist: This will involve setting up a scheduled task to download the blacklist every hour.
  2. Integration with Squid Proxy: Configuring Squid to use this blacklist.
  3. Integration with Pi-hole/AdGuard Home: Getting the blacklist into a format and location that Pi-hole/AdGuard Home can utilize.

1. Fetching the Blacklist

You can use a simple script with wget or curl to fetch the list:

#!/bin/bash
BLACKLIST_URL="https://get.domainsblacklists.com/blacklist.txt"
BLACKLIST_PATH="/path/to/save/blacklist.txt"
wget -O "$BLACKLIST_PATH" "$BLACKLIST_URL"

To make this run every hour, use cron. Edit the crontab with:

crontab -e

And add the following line:

0 * * * * /path/to/script.sh

2. Integration with Squid Proxy

Squid can use ACLs (Access Control Lists) to block domains:

Edit your squid.conf:

sudo nano /etc/squid/squid.conf

Add the following:

acl blacklisted_domains dstdomain "/path/to/save/blacklist.txt"
http_access deny blacklisted_domains

Reload Squid:

sudo systemctl reload squid

For direct IPs, you would have to handle those separately, possibly at a firewall level or another level of your network configuration.

3. Integration with Pi-hole/AdGuard Home

  • Pi-hole:

    1. Go to the Pi-hole admin interface.
    2. Click on the 'Settings' tab.
    3. Go to the 'Blocklists' tab.
    4. Add the URL https://get.domainsblacklists.com/blacklist.txt to the list.
    5. Click 'Save and Update'.
  • AdGuard Home:

    1. Go to the AdGuard Home admin panel.
    2. Click on the 'Filters' tab.
    3. Click on 'Add Filter'.
    4. Insert the URL https://get.domainsblacklists.com/blacklist.txt.
    5. Click on 'Check' to verify, and then add it.

Remember to periodically update Pi-hole/AdGuard blocklists.

Note: Before applying any blocklist, always review the list to make sure legitimate domains that you want/need to access aren't being blocked. Also, ensure that your tools can handle the size of the blocklist, especially if it's a long one. Always test in a controlled environment first.
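One quick way to do that review before deploying is to download the list and check it against domains you know you must keep reachable (a minimal sketch; the critical-domains set is illustrative):

import requests

BLACKLIST_URL = "https://get.domainsblacklists.com/blacklist.txt"
# Domains you never want blocked (example values)
CRITICAL_DOMAINS = {"example.com", "yourcompany.com"}

blacklist = set(requests.get(BLACKLIST_URL, timeout=60).text.splitlines())
print(f"Blacklist size: {len(blacklist)} entries")

collisions = CRITICAL_DOMAINS & blacklist
if collisions:
    print("WARNING: these domains would be blocked:", ", ".join(sorted(collisions)))
else:
    print("No critical domains found in the blacklist.")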
