Mark McBride

Selectively Disabling HTTP/1.0 and HTTP/1.1

In January 2026, I decided to enable the HTTP/3 protocol for this site. After a few config tweaks to nginx and modifications to my firewall to allow UDP traffic, I was up and running. While reviewing the access and error logs to ensure things were working as expected, two things stood out: traffic over the old HTTP/1.X protocols hadn't slowed down at all, and very little of it looked like it was coming from real people.
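
For reference, the nginx side of that change is only a few lines. What follows is a sketch rather than my exact config (certificate and other TLS directives are omitted, and it assumes nginx 1.25 or newer built with HTTP/3 support); the firewall change is simply allowing UDP on port 443.

nginx.conf (HTTP/3 listener, sketch)
server {

    ...

    # Keep the normal TCP listener for HTTP/1.X and HTTP/2
    listen 443 ssl;
    http2 on;

    # Add a QUIC listener; HTTP/3 runs over UDP
    listen 443 quic reuseport;

    # Advertise HTTP/3 to clients that connected over TCP
    add_header Alt-Svc 'h3=":443"; ma=86400' always;

    ...
}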

The Approach

I decided to turn off HTTP/1.X access to my site unless I explicitly allowed it. My approach is simple: treat HTTP/1.0 and HTTP/1.1 requests as suspect by default, keep a short list of user agents allowed (or denied) over those protocols, answer everything else with a 426 Upgrade Required, and review the resulting log to adjust the list over time.

The Configuration

Here are the relevant changes to my nginx configuration files. They make use of the nginx map directive to create variables in the http context that my server definitions can then use to decide whether to allow or block traffic.

Approach 1: Include Only Known Good Agents

The first of two approaches allows only agents we know. The obvious downside is that you can’t possibly know all the good guys. But if being ultra-selective is preferred, this is the option you want.

nginx.conf (Include Option)
http {

    ...

    # Check for text-based browsers
    map $http_user_agent $is_text_browser {
        default 0;

        # Text-Based Browsers (not exhaustive)
        "~*^w3m" 1;
        "~*^Links" 1;
        "~*^ELinks" 1;
        "~*^lynx" 1;

        # Bots (not exhaustive)
        "~*Googlebot" 1;
        "~*bingbot" 1;
        "~*Yahoo! Slurp" 1;
        "~*DuckDuckBot" 1;
        "~*YandexBot" 1;
        "~*Kagibot" 1;
    }

    # Check if request is HTTP/1.X
    map $server_protocol $is_http1 {
        default 0;
        "HTTP/1.0" 1;
        "HTTP/1.1" 1;
    }

    # If the request is not from a text-based browser or known bot,
    # and is HTTP/1.X, set the http1_and_unknown variable
    # to 1, which is equivalent to "true"
    map "$is_http1:$is_text_browser" $http1_and_unknown {
        default 0;
        "1:0" 1;
    }

    ...

}

Approach 2: Exclude Assumed Bad Agents

The alternative is to only deny agents that seem to be no good. For example, if it’s HTTP/1.X and the user agent is blank, assume it’s bad. Or even if it claims to be a desktop browser, assume it’s lying.

nginx.conf (Exclude Option)
http {

    ...

    # Check for questionable user agents
    map $http_user_agent $is_questionable_agent {
        default 0;
        # Agents that exhibit questionable behavior in conjunction
        # with HTTP/1.1 requests (not exhaustive)
        "~*^Mozilla/5.0" 1;

        # Requests with no User-Agent header at all
        "" 1;
    }

    # Check if request is HTTP/1.X
    map $server_protocol $is_http1 {
        default 0;
        "HTTP/1.0" 1;
        "HTTP/1.1" 1;
    }

    # If the agent is questionable and the request is HTTP/1.X,
    # set the client_needs_to_upgrade_http variable to 1
    map "$is_http1:$is_questionable_agent" $client_needs_to_upgrade_http {
        default 0;
        "1:1" 1;
    }

    ...

}
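
Whichever map you go with, it can be worth auditing before enforcing. One option (a sketch, not part of my original setup) is to log the protocol, user agent, and the computed flag for a few days and review what would have been blocked. You would then reference the format from an access_log directive in the relevant server block; the log name below is just a placeholder.

nginx.conf (optional audit logging, sketch)
http {

    ...

    # Log what the maps would block before actually blocking anything.
    # $http1_and_unknown is the Approach 1 variable; use
    # $client_needs_to_upgrade_http instead for Approach 2.
    log_format http1_audit '$remote_addr "$request" $status '
                           '$server_protocol "$http_user_agent" '
                           'flagged=$http1_and_unknown';

    ...

}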

With the $http1_and_unknown (Approach 1) or $client_needs_to_upgrade_http (Approach 2) variable in hand, we can do a few things. Most importantly, we can add a conditional if statement and return a 426 status code when the value is 1. I also found it useful to put all 426 responses into a separate log file so I could occasionally look through it and see if I was denying access to something I’d rather allow. The server block below uses the Approach 1 variable; swap in the Approach 2 variable if that’s the route you took.

markmcb.conf (repeat for any server block)
server {
    ...

    server_name   markmcb.com;

    # Handle HTTP/1.0 and HTTP/1.1 requests we flagged with a 426 status
    if ($http1_and_unknown) {
        return 426;
    }

    # Set the error page for 426 to a named location @upgrade_required
    error_page 426 @upgrade_required;

    # Define named location @upgrade_required that allows us to set the 
    # Upgrade and Connection headers and log. Note: ONLY set these headers
    # on HTTP/1.X requests. It is invalid in HTTP/2 and higher
    # and some browsers will reject the connection if they're set.
    location @upgrade_required {
        internal;
        access_log /var/log/nginx/access_markmcb_426.log;
        add_header Upgrade "HTTP/2" always;
        add_header Connection 'Upgrade' always;
        return 426 'Upgrade required';
    }

    # Handle other requests
    location / {
        access_log /var/log/nginx/access_markmcb.log;
        index index.html;
    }

    ...
}

A quick test with curl should look something like this.

Testing responses with curl
curl --http1.1 --user-agent "" -I https://markmcb.com/
HTTP/1.1 426
Server: nginx
Date: Thu, 22 Jan 2026 17:06:26 GMT
Content-Type: application/octet-stream
Content-Length: 16
Connection: keep-alive
Upgrade: HTTP/2
Connection: Upgrade
curl --http2 -I https://markmcb.com/
HTTP/2 200
server: nginx
date: Thu, 22 Jan 2026 17:06:31 GMT
content-type: text/html
content-length: 11194
last-modified: Thu, 22 Jan 2026 17:05:30 GMT
etag: "697258da-2bba"
alt-svc: h3=":443"; ma=86400
accept-ranges: bytes
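
If your curl build includes HTTP/3 support (many distribution packages still don’t), you can confirm the QUIC listener the same way; the first line of the response should read HTTP/3 200.

Testing HTTP/3 with curl
curl --http3 -I https://markmcb.com/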

The Result

So far this has worked well with no negative consequences I can detect. I did add a few agents to my list after browsing the 426 log. For example, I noticed when I shared the link to this article on Mastodon, a few dozen bots using HTTP/1.1 immediately tried to read the Open Graph metadata for link previews, so I allowed them. The good news is many apps already leverage HTTP/2 and HTTP/3. For example, if I paste the link to this article into iOS Messages it generates a preview using HTTP/2.
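
If you went the Approach 1 route, allowing those preview fetchers just means adding entries to the $is_text_browser map. The patterns below are examples of what I mean rather than a vetted list; check your own 426 log for the exact agent strings before relying on them.

nginx.conf (additional allowed agents, example)
    map $http_user_agent $is_text_browser {

        ...

        # Fediverse link-preview fetchers (verify the exact strings
        # against your own 426 log)
        "~*Mastodon" 1;
        "~*Pleroma" 1;
    }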

What’s truly satisfying are all the bogus requests now shoved aside. My 426 log is full of stuff like this:

Bad actors in access_markmcb_426.log
"GET /wp-content/uploads/admin.php HTTP/1.1" 426
"GET /wp-fclass.php HTTP/1.1" 426
"GET /wp-includes/ID3/ HTTP/1.1" 426
"GET /wp-includes/PHPMailer/ HTTP/1.1" 426
"GET /wp-includes/Requests/about.php HTTP/1.1" 426
"GET /wp-includes/Requests/alfa-rex.php HTTP/1.1" 426
"GET /wp-includes/Requests/src/Cookie/ HTTP/1.1" 426
"GET /wp-includes/Requests/src/Response/about.php HTTP/1.1" 426
"GET /wp-includes/Text/Diff/Renderer/ HTTP/1.1" 426
"GET /wp-includes/Text/index.php HTTP/1.1" 426
"GET /wp-includes/Text/xwx1.php HTTP/1.1" 426
"GET /wp-includes/assets/about.php HTTP/1.1" 426
"GET /wp-includes/block-patterns/ HTTP/1.1" 426
"GET /wp-includes/blocks/ HTTP/1.1" 426
"GET /wp-includes/images/media/ HTTP/1.1" 426
"GET /wp-includes/images/smilies/about.php HTTP/1.1" 426
"GET /wp-includes/images/wp-login.php HTTP/1.1" 426
"GET /wp-includes/style-engine/ HTTP/1.1" 426
"GET /wp-themes.php HTTP/1.1" 426

To give you a feel for how many of those bad requests there were, compare the before and after proportions. The first chart covers 14 days of data: 12 days with HTTP/1.X allowed and 2 days with it mostly blocked. The second chart covers only those 2 days. As you can see, the shift in the proportion of errors is drastic.

Before selectively blocking HTTP/1.X: 50/50 split between 400s and 200s
After selectively blocking HTTP/1.X: Very few 400s
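
If you want a quick way to eyeball the same proportions from a log file, a one-liner like this works, assuming the default combined log format where the status code is the ninth field:

Tallying status codes
awk '{print $9}' /var/log/nginx/access_markmcb.log | sort | uniq -c | sort -rn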

To Include or Exclude?

The downside to Approach 1 (include) is that a LOT of bots use HTTP/1.1. So if you want all the feed readers, social media helper bots, AI crawlers, and search engines you’ve never heard of to access your site, then Approach 2 (exclude) is probably the better choice.

I’ll probably start more open and stick with Approach 2. At some point, when I feel confident I’m not blocking anything important, I’ll change those 426 responses to 444 and stop logging them. With a 426, the server responds and tells the client it needs to upgrade. Despite that signal, it’s highly unlikely the client will do anything with the 426, which makes it wasted effort. With a 444, nginx simply doesn’t respond (this non-standard status code is unique to nginx, so other web servers may require something different to achieve the same result).
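
When that day comes, the change in the server block is tiny. A sketch using the Approach 2 variable:

markmcb.conf (444 variant, sketch)
    # Drop flagged HTTP/1.X requests without sending any response
    if ($client_needs_to_upgrade_http) {
        return 444;
    }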

Is This Best Practice?

As with most things, it depends.

HTTP/1.0 is obsolete. You can feel good about avoiding it. Mostly. Browsers like w3m still use it.

HTTP/1.1 is still a valid standard. You’ll find many opinions online calling for its death. The most common case against it is security: it’s stable and simple, but it lacks many of the safeguards built into the newer protocols.

It ultimately comes down to how you serve humans and bots. A few cases to consider: humans on text-based browsers like w3m and Lynx, feed readers and link-preview bots, search engine crawlers you may or may not care about, and the endless stream of exploit scanners.

So if you block the HTTP/1.X protocols, you block some humans and bots, but mostly bad actors. You can either accept the consequences of blocking a few good actors, or you can let most HTTP/1.X traffic through and exclude the trouble-makers as you find them. I started off with the former, but after thinking about it more the latter is where I’ve landed.