Mark McBride

Selectively Disabling HTTP/1.0 and HTTP/1.1

In January 2026, I decided to enable the HTTP/3 protocol for this site. After a few config tweaks to nginx and modifications to my firewall to allow UDP traffic, I was up and running. While reviewing the access and error logs to ensure things were working as expected, two things stood out:

The Approach

I decided to experiment a bit. I would turn off HTTP/1.X access to my site unless I explicitly allowed it and see what happened. Then I’d allow it unless explicitly denied and see what happened. My approach is simple:

The Configuration

Here are the relevant changes to my nginx configuration files. They make use of the nginx map directive to create global variables that can be used in my server definitions to decide whether to allow or block traffic.

Approach 1: Include Only Known Good Agents

The first of two approaches aims to allow only agents we know. The obvious downside to this approach is that you can’t possibly know all the good actors. But if you prefer to be ultra selective, this is the option you want.

nginx.conf (Include Option)
http {

    ...

    # Check for text-based browsers and known good bots
    map $http_user_agent $is_text_browser {
        default 0;

        # Text-Based Browsers (not exhaustive)
        "~*^w3m" 1;
        "~*^Links" 1;
        "~*^ELinks" 1;
        "~*^lynx" 1;

        # Bots (not exhaustive)
        "~*Googlebot" 1;
        "~*bingbot" 1;
        "~*Yahoo! Slurp" 1;
        "~*DuckDuckBot" 1;
        "~*YandexBot" 1;
        "~*Kagibot" 1;
    }

    # Check if request is HTTP/1.X
    map $server_protocol $is_http1 {
        default 0;
        "HTTP/1.0" 1;
        "HTTP/1.1" 1;
    }

    # If the request is not from a known text-based browser or bot,
    # and is HTTP/1.X, set the http1_and_unknown variable
    # to 1, which is equivalent to "true"
    map "$is_http1:$is_text_browser" $http1_and_unknown {
        default 0;
        "1:0" 1;
    }

    ...

}

Approach 2: Exclude Assumed Bad Agents

The alternative is to deny only agents that seem to be no good. For example, if it’s HTTP/1.X and the user agent is blank, assume it’s bad. Or if it claims to be a desktop browser over HTTP/1.X, assume it’s lying.

nginx.conf (Exclude Option)
http {

    ...

    # Check for questionable user agents
    map $http_user_agent $is_questionable_agent {
        default 0;
        # Agents that exhibit questionable behavior in conjunction
        # with HTTP/1.1 requests (not exhaustive)
        "~*^Mozilla/5.0" 1;
        "" 1;
    }

    # Check if request is HTTP/1.X
    map $server_protocol $is_http1 {
        default 0;
        "HTTP/1.0" 1;
        "HTTP/1.1" 1;
    }

    # If is_questionable_agent, and 1.X, set the client_needs_to_upgrade_http variable
    map "$is_http1:$is_questionable_agent" $client_needs_to_upgrade_http {
        default 0;
        "1:1" 1;
    }

    ...

}

With one of these flag variables set globally ($http1_and_unknown from approach 1, or $client_needs_to_upgrade_http from approach 2), we can do a few things. Most importantly, we can use a conditional if statement to return a 426 (Upgrade Required) status code when the value is 1. I also found it useful to put all 426 requests into a separate log file so I could occasionally look through it and see if I was denying access to something I’d rather allow.

markmcb.conf (repeat for any server block)
server {
    ...

    server_name   markmcb.com;

    # Handle HTTP/1.0 and HTTP/1.1 requests we flagged with a 426 status
    if ($http1_and_unknown) {
        return 426;
    }

    # Set the error page for 426 to a named location @upgrade_required
    error_page 426 @upgrade_required;

    # Define named location @upgrade_required that allows us to set the 
    # Upgrade and Connection headers and log. Note: ONLY set these headers
    # on HTTP/1.X requests. They are invalid in HTTP/2 and higher,
    # and some browsers will reject the connection if they're set.
    location @upgrade_required {
        internal;
        access_log /var/log/nginx/access_markmcb_426.log;
        add_header Upgrade "HTTP/2" always;
        add_header Connection 'Upgrade' always;
        return 426 'Upgrade required';
    }

    # Handle other requests
    location / {
        access_log /var/log/nginx/access_markmcb.log;
        index index.html;
    }

    ...
}

A quick test with curl should look something like this.

Testing responses with curl
curl --http1.1 --user-agent "" -I https://markmcb.com/

HTTP/1.1 426
Server: nginx
Date: Thu, 22 Jan 2026 17:06:26 GMT
Content-Type: application/octet-stream
Content-Length: 16
Connection: keep-alive
Upgrade: HTTP/2
Connection: Upgrade

curl --http2 -I https://markmcb.com/

HTTP/2 200
server: nginx
date: Thu, 22 Jan 2026 17:06:31 GMT
content-type: text/html
content-length: 11194
last-modified: Thu, 22 Jan 2026 17:05:30 GMT
etag: "697258da-2bba"
alt-svc: h3=":443"; ma=86400
accept-ranges: bytes

The Result

For about two days I had approach 1 in place. It seemed to work well. I could see legit browser traffic flowing through, and I got a high degree of satisfaction seeing the incredible volume of noise just disappear. Instead of my primary log file being polluted with bogus requests, my new 426 log was full of stuff like this:

Bad actors in access_markmcb_426.log
"GET /wp-content/uploads/admin.php HTTP/1.1" 426
"GET /wp-fclass.php HTTP/1.1" 426
"GET /wp-includes/ID3/ HTTP/1.1" 426
"GET /wp-includes/PHPMailer/ HTTP/1.1" 426
"GET /wp-includes/Requests/about.php HTTP/1.1" 426
"GET /wp-includes/Requests/alfa-rex.php HTTP/1.1" 426
"GET /wp-includes/Requests/src/Cookie/ HTTP/1.1" 426
"GET /wp-includes/Requests/src/Response/about.php HTTP/1.1" 426
"GET /wp-includes/Text/Diff/Renderer/ HTTP/1.1" 426
"GET /wp-includes/Text/index.php HTTP/1.1" 426
"GET /wp-includes/Text/xwx1.php HTTP/1.1" 426
"GET /wp-includes/assets/about.php HTTP/1.1" 426
"GET /wp-includes/block-patterns/ HTTP/1.1" 426
"GET /wp-includes/blocks/ HTTP/1.1" 426
"GET /wp-includes/images/media/ HTTP/1.1" 426
"GET /wp-includes/images/smilies/about.php HTTP/1.1" 426
"GET /wp-includes/images/wp-login.php HTTP/1.1" 426
"GET /wp-includes/style-engine/ HTTP/1.1" 426
"GET /wp-themes.php HTTP/1.1" 426

The good news on the app front is that many apps already leverage HTTP/2 and HTTP/3. For example, if I paste the link to this article into iOS Messages, it generates a preview using HTTP/2.

But there were quite a few non-bogus apps making HTTP/1.1 requests too. At first, I just picked them out one by one and allowed them. This seemed to work well. My first realization that approach 1 is probably too aggressive came when I posted this article on Mastodon. Every Mastodon instance, it seems, uses HTTP/1.1 to read the Open Graph metadata for link previews. When I first posted, there were a dozen or so. But as the article got shared, there were literally hundreds of them making the same OG requests. In this specific case, they mostly all used the same user agent starting with “Mastodon”, so it was easy to allow them. But it got me thinking that this approach probably results in collateral damage that you can’t know about until it happens. And the only way to mitigate that is to spend more time than I’m willing to spend routinely monitoring and resolving issues.
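
For reference, allowing them under approach 1 is just one more entry in the include map from earlier. This is a sketch rather than my exact config; match whatever user agent string actually shows up in your own logs:

    # Same $is_text_browser map as in the Include Option above,
    # with one new entry for Mastodon's link-preview fetcher
    map $http_user_agent $is_text_browser {
        default 0;

        # ... existing browser and bot entries ...

        # Mastodon instances fetching Open Graph previews
        "~*^Mastodon" 1;
    }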

So I switched to approach 2. The combination of an empty user agent and HTTP/1.1 didn’t seem to result in anything obviously good getting blocked. The more aggressive line that blocks user agents starting with “Mozilla” is risky. It clearly stops bad bots trying to pass as a known good desktop browser, but I noticed that a lot of otherwise legitimate bots start their user agent with the same string. I ended up removing that portion of the match.
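
With that line removed, the exclude map from above reduces to just the empty user agent check:

    # Check for questionable user agents (Mozilla line removed)
    map $http_user_agent $is_questionable_agent {
        default 0;
        # A blank user agent on an HTTP/1.X request is almost always automation
        "" 1;
    }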

Regardless of the fine-tuning though, this definitely confirmed my feeling that most bad traffic comes over HTTP/1.X. To give you a feel for this, compare the before and after proportions. The first chart is 14 days of data, which is 12 days with HTTP/1.X and 2 days without. The second chart is only those 2 days of data with most HTTP/1.X blocked. As you can see, the shift in the proportion of errors is drastic.

Before selectively blocking HTTP/1.X: 50/50 split between 400s and 200s
After selectively blocking HTTP/1.X: Very few 400s

To Include or Exclude?

The downside to approach 1 (include) is that A LOT of bots use HTTP/1.1. So if you want all the feed readers, social media helper bots, AI crawlers, and search engines you’ve never heard of to access your site, then approach 2 (exclude) is probably the better choice.

I’ll probably stick with approach 2, in combination with nginx IP rate limits on HTTP/1.X requests and some pattern-based 444s for obvious recurring bad URIs (e.g., /admin.php). This way I’ll only exclude the odd-looking requests and be more rate-restrictive on clients that flood my server with requests.
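
Here’s a rough sketch of the rate-limit piece. It relies on the fact that nginx doesn’t count requests whose limit_req_zone key is empty, so HTTP/2 and HTTP/3 clients are exempt. The zone name, size, and rate below are placeholders, not the values I actually use:

http {

    ...

    # Key is empty for HTTP/2+ (not counted); HTTP/1.X is keyed by client IP
    map $server_protocol $http1_limit_key {
        default      "";
        "HTTP/1.0"   $binary_remote_addr;
        "HTTP/1.1"   $binary_remote_addr;
    }

    # Placeholder: 30 requests per minute per IP for HTTP/1.X clients
    limit_req_zone $http1_limit_key zone=http1_per_ip:10m rate=30r/m;

    ...

}

server {
    ...

    location / {
        limit_req zone=http1_per_ip burst=10 nodelay;
        ...
    }

    ...
}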

At some point, when I feel confident I’m not blocking anything important, I’ll make more use of 444 responses rather than the dozens of other 300, 400, and 500 codes for the known bad actors. If a legit user goes to a bad URL, I want them to get a 404. If a bad actor does, I want to give it a 444. I see a lot of people saying they 301/redirect to law enforcement sites and the like. While it’s funny in concept to send a bad actor to the police, in reality the bots aren’t following 301 redirects. A 444 is the better option as it gives the bad actor nothing to work with. When computers talk, it’s a back-and-forth process. A 444 tells nginx to close the connection without sending any response at all, so the client gets no status code, no headers, and no confirmation that anything is listening. For a single request it’s not a big impact, but for a flood of requests it saves your resources while the other side gets nothing useful back. (Note: 444 is not a standard HTTP status code. It’s unique to nginx. If you’re using something else, check your web server’s docs for the equivalent.)
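
In practice, the pattern-based 444s can be as simple as a regex location. The URI patterns here are illustrative, pulled from the kind of noise in my 426 log, and not meant as a complete blocklist:

server {
    ...

    # Close the connection with no response at all for URIs that only
    # scanners ever request (patterns are examples, not exhaustive)
    location ~* (admin\.php|wp-login\.php)$ {
        return 444;
    }

    ...
}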

Is This Best Practice?

As with most things, it depends.

HTTP/1.0 is obsolete. You can feel good about avoiding it. Mostly. Browsers like w3m still use it.

HTTP/1.1 is still a valid standard. You’ll find many opinions online calling for its death. The most common case against it is security. It’s stable and simple, but it also lacks many of the safeguards incorporated into the newer protocols.

It ultimately comes down to what’s acceptable security for you and how you want to serve humans and bots. A few cases to consider:

So if you block the HTTP/1.X protocols, you will block some good humans and bots, but you will certainly reduce the high-volume bad actors. You can either accept the consequences of blocking a few good actors, or you can let most HTTP/1.X traffic through and exclude the troublemakers as you find them. I started off with the former, but after thinking about it more, the latter is where I’ve landed.

A Final Note

The volume of bad HTTP/1.X traffic triggered this experiment. It’s worth noting that there are many other ways to filter out bad traffic, and what I’ve described in this article should simply be one consideration.

If you’re not sure what you need, log exploration is a good place to start. Spend some time understanding the types of traffic you’re getting and let it inform your strategy. If you don’t have a favorite log browsing tool, I really like and recommend lnav. It makes digging through millions of lines of logs quite easy.
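
If you want a quick starting point with lnav, something like this works (the log path is mine; substitute your own):

lnav /var/log/nginx/*.log

Once it’s open, lnav detects the access log format automatically, and you can narrow the view to HTTP/1.X requests with a filter command such as :filter-in HTTP/1\.[01] to see what you’d be affecting before you block anything.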