
2 posts tagged with "nginx"


HTTP/2 and HTTP/3: Faster, but Are They Worth Enabling? Pros, Cons, and Configuration

· 3 min read
Customer Care Engineer


Modern HTTP/2 and HTTP/3 protocols can significantly speed up site loading, improve user experience and increase search engine rankings. But not everything is so simple: they have both advantages and disadvantages. Let's understand what these protocols are, their pros and cons, and how to enable them on your server.


What are HTTP/2 and HTTP/3?

HTTP/2 is an updated version of the HTTP/1.1 protocol that allows multiple website resources to be loaded in parallel rather than one by one. This speeds up response times and reduces server load.

HTTP/3 is an even more advanced version that uses the QUIC protocol on top of UDP. It creates more stable connections, especially in poor network conditions.


Advantages

  1. HTTP/2
  • Parallel (multiplexed) loading of site resources.
  • Reduced latency through header compression.
  • Traffic savings.
  2. HTTP/3
  • Quick connection establishment with minimal delay.
  • Resilience to packet loss (especially important for mobile internet).
  • Excellent performance on unstable networks.

By enabling these protocols, you will speed up your site, make it more user-friendly, and gain an SEO advantage.


Disadvantages

  1. Compatibility
  • HTTP/2 and HTTP/3 are not supported by older browsers and devices. For example, certain Internet Explorer versions and older Android devices cannot take advantage of these protocols.
  • HTTP/3 depends on UDP, which can be blocked by some firewalls and network filters.
  2. Configuration complexity
  • Incorrect configuration of HTTP/2 can worsen performance (for example, if stream prioritization is not used).
  • HTTP/3 requires an up-to-date version of Nginx, OpenSSL, and QUIC support, which can be challenging on older servers.
  3. Resource consumption
  • HTTP/3 is more demanding on server resources, particularly with a large number of connections.
  4. Dependence on HTTPS
  • In practice, browsers support HTTP/2 only over HTTPS, which adds the complexity and cost of obtaining and maintaining a certificate.

  5. Coexistence with HTTP/1.1

  • Enabling HTTP/2 and HTTP/3 does not remove support for HTTP/1.1. Keeping it available may cost a little performance, but it causes no critical issues, since HTTP/1.1 is used only by clients that do not support the newer protocols (a quick way to check this is shown below).
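If you later want to confirm that older clients are still served after enabling the newer protocols, you can force an HTTP/1.1 request with curl (replace example.com with your domain):

curl -sI --http1.1 https://example.com -o /dev/null -w 'negotiated HTTP version: %{http_version}\n'

The output should report 1.1, showing that the fallback keeps working alongside HTTP/2 and HTTP/3.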

How to Enable HTTP/2 and HTTP/3 in Nginx

info

If you are using a control panel, for example FASTPANEL, you can enable HTTP/2 and HTTP/3 for your site in the site settings without manually editing its configuration file.

  1. Check compatibility

Connect to your server via SSH.

Check the current Nginx version:

sudo nginx -v

For HTTP/3, version 1.25.0 or higher is required.

Check the current OpenSSL version:

openssl version

To work with HTTP/3, you need OpenSSL version 3.0.0 or higher, as earlier versions do not support QUIC.
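You can also check whether your Nginx build includes the HTTP/3 module at all, since a package compiled without it will reject the quic listen parameter even if the version number is high enough:

sudo nginx -V 2>&1 | grep -o with-http_v3_module

If the command prints with-http_v3_module, QUIC support is compiled in; if it prints nothing, you will need a build compiled with the --with-http_v3_module configure option.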

Additionally, before making changes to the nginx configuration, make sure there are no errors:

nginx -t

If everything is fine (you can ignore “warn” messages), you will see:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful

  2. Configure HTTP/2

Open your site’s configuration file in a text editor:

sudo nano /etc/nginx/sites-available/your-site.conf

Add the http2 parameter to the listen 443 ssl line and/or the http2 on; directive inside the server block (in Nginx 1.25.1 and later, http2 on; is the preferred form and the listen parameter is deprecated), so the configuration looks something like this:

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;

    http2 on;

    # ...rest of your config file
}
warning

Note that a valid SSL certificate is required for HTTPS and HTTP/2 to function.

Restart the web server to apply the changes:

sudo systemctl restart nginx
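To verify that HTTP/2 is now being negotiated, you can make a test request with curl (replace example.com with your domain; the --http2 option and the %{http_version} output variable have been available in curl for many years, but very old builds may lack them):

curl -sI --http2 https://example.com -o /dev/null -w '%{http_version}\n'

If everything is configured correctly, the command prints 2; a value of 1.1 means the server is still responding over HTTP/1.1.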
  3. Configure HTTP/3

Similarly to the previous step, open your site’s configuration file and modify it to look like this:

server {
    listen 443 ssl http2;
    listen 443 quic reuseport;
    server_name example.com;

    ssl_certificate /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;

    http2 on;

    ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;

    add_header Alt-Svc 'h3=":443"; ma=86400';

    # ...rest of your config file
}

Here:

  • listen 443 quic reuseport; — enables HTTP/3 (QUIC) over UDP on port 443; reuseport improves performance under a large number of connections.
  • ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3; — specifies which TLS versions may be used for encryption. For better security, it’s recommended to allow only TLSv1.2 and TLSv1.3 (QUIC itself always uses TLSv1.3).
  • add_header Alt-Svc 'h3=":443"; ma=86400'; — this header tells browsers that the server supports HTTP/3 on port 443 and lets them cache that information for 24 hours.
warning

The reuseport parameter may be specified only once for a given address and port in the Nginx configuration. Repeating it in other listen directives for the same address and port leads to a configuration error and improper server operation.
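Also note that QUIC runs over UDP, so UDP port 443 has to be allowed through your firewall; otherwise browsers will quietly fall back to HTTP/2. As a rough example, depending on which firewall your server uses:

sudo ufw allow 443/udp

or, on systems with firewalld:

sudo firewall-cmd --permanent --add-port=443/udp
sudo firewall-cmd --reload

Adjust these commands to match the firewall or external filtering actually in front of your server.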

Then run nginx -t again; it checks both that your Nginx version accepts these directives and that the syntax is valid:

nginx -t

If everything is fine (you can ignore “warn” messages), you will see:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful

Restart Nginx to apply the changes:

sudo systemctl restart nginx
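To check that HTTP/3 is actually being served, you can use a curl build with HTTP/3 support (many distribution packages are compiled without it, so this may require a separate build or an online HTTP/3 checker):

curl -sI --http3 https://example.com -o /dev/null -w '%{http_version}\n'

A result of 3 means the request was made over QUIC. Note that browsers usually need a first visit over HTTP/2 before they switch, because they learn about HTTP/3 from the Alt-Svc header.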

Conclusion

HTTP/2 and HTTP/3 are a step into the future: they speed up site load times, improve SEO, and make your site more pleasant to use. However, it is important to consider compatibility, resource consumption, and configuration complexity.

If most of your users are on modern browsers, start by enabling HTTP/2. Then move on to HTTP/3 if you’re ready to update your server software and are confident in your infrastructure’s compatibility.

Don’t waste your server resources: block unwanted bots using Nginx

· 4 min read
Customer Care Engineer


Search engine bots (crawlers) are special programs that scan websites on the Internet. Search engines need them to find, index and display pages in search results. But not all bots are useful!

Sometimes your site may be visited by unwanted bots that:

  • Collect data without your permission.
  • Consume server resources, slowing it down.
  • Are used to look for vulnerabilities.

If you want to protect your site from such bots, it’s time to configure Nginx! In this article, we’ll show you how to easily and quickly block them using a special configuration file.


Why Nginx configuration instead of robots.txt?

The robots.txt file is a tool for managing search bots’ behavior. It tells them which parts of the site should not be crawled. It’s very easy to use this file: simply create one in the site’s root directory, for example:

User-agent: BadBot
Disallow: /

However, there is a problem: the instructions in robots.txt are recommendations rather than enforced rules. Well-behaved bots follow them, but many unwanted bots simply ignore the file.

By contrast, an Nginx configuration enforces the block at the server level: matching requests are rejected whether or not the bot chooses to respect robots.txt.


How Nginx blocks unwanted bots: using response 444

Unlike robots.txt, which only gives bots recommendations, Nginx blocks their access at the server level. One way to do this is with the special response code 444.

In Nginx, the response code 444 is a non-standard, internal code that tells Nginx to close the connection without sending any response to the client. This is an efficient way to ignore unwanted requests and minimize server load.


Setting up the blocking

Step 1: How to identify unwanted bots?

Unwanted bots can be identified by their User-Agent, an HTTP header that clients send with each request to your site. For example, some User-Agents might look like this:

  • AhrefsBot
  • SemrushBot
  • MJ12bot

You can find suspicious User-Agent values in the Nginx access log (if your site uses PHP-FPM):

sudo grep -i bot /var/log/nginx/access.log

Or in the Apache access log (if your site uses the Apache module or FastCGI as a PHP handler):

  • For Ubuntu/Debian:
sudo grep -i bot /var/log/apache2/access.log
  • For CentOS/AlmaLinux/RockyLinux:
sudo grep -i bot /var/log/httpd/access.log

If you’re using a control panel such as FASTPANEL, each site will have its own separate log file. You can analyze them individually or all at once using a command like:

  • If your site uses the Apache module or FastCGI as the PHP handler:
sudo cat /var/www/*/data/logs/*-backend.access.log |  grep -i bot | tail -500
  • If your site uses PHP-FPM:
sudo cat /var/www/*/data/logs/*-frontend.access.log |  grep -i bot | tail -500

This command will display the last 500 requests made to all your sites where the User-Agent parameter contains the word “bot.” An example of one line (one request to your site) might look like this:

IP - [03/Nov/2022:10:25:52 +0300] "GET link HTTP/1.0" 301 811 "-" "Mozilla/5.0 (compatible; DotBot/1.2; +https://opensiteexplorer.org/dotbot; [email protected])"

or

IP - [24/Oct/2022:17:32:37 +0300] "GET link HTTP/1.0" 404 469 "-" "Mozilla/5.0 (compatible; BLEXBot/1.0; +http://webmeup-crawler.com/)"

The bot’s name appears inside the parentheses at the end of the request line, between “compatible;” and its version number. So in the examples above, the User-Agents are DotBot and BLEXBot.

Analyze the information you gather and note the User-Agent strings of the most active bots for the next step of configuring the block. 
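If the raw log lines are hard to scan, you can summarize the most frequent bot User-Agents first. The one-liner below is a rough sketch that assumes the default combined log format, where the User-Agent is the sixth double-quote-delimited field; adjust the log path to match your setup:

sudo grep -i bot /var/log/nginx/access.log | awk -F'"' '{print $6}' | sort | uniq -c | sort -rn | head -20

The output lists each matching User-Agent together with the number of requests it made, which makes it easier to decide which ones are worth blocking.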

Step 2: Create a File to Block Bots

  1. Connect to your server via SSH.
  2. Before making changes, ensure that your current Nginx configuration has no errors:
nginx -t

If everything is fine, you’ll see:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful

If there are any errors, review them and fix them in the file indicated by the error messages.

  3. Create a separate file listing the bots to block:
sudo nano /etc/nginx/conf.d/block_bots.conf

Add the following code to the file:

    map $http_user_agent $block_bot {
        default 0;
        ~*AhrefsBot 1;
        ~*SemrushBot 1;
        ~*MJ12bot 1;
    }

Here the map block sets the variable $block_bot to 1 whenever the request’s User-Agent matches one of the listed patterns (the ~* prefix makes the match case-insensitive). Following this pattern, list the User-Agent of each bot you want to block on a new line and place a semicolon ; at the end of each line as a delimiter.

After you finish building your list, press "Ctrl + O" on your keyboard to save the file, then "Ctrl + X" to exit the nano editor.

The check itself has to run inside the server block that actually serves your site, so open the configuration file of each site you want to protect (for example, /etc/nginx/sites-available/your-site.conf) and add the following snippet to its server block:

    server {
        # ...rest of your site configuration

        if ($block_bot) {
            return 444;
        }
    }

Save each file after editing it.

Step 3: Apply the Changes

After making your changes, always test the Nginx configuration for correctness to ensure there are no syntax errors:

sudo nginx -t

If everything is fine, you’ll see:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful

If there are errors, review the output to identify and correct them in the file specified.

Then reload the Nginx configuration to apply the changes:

sudo systemctl reload nginx

In the future, if you need to add more bots to the block_bots.conf file, you should repeat this step each time. 
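You can also test that the block works by sending a request with one of the blocked User-Agents (replace the domain with one of your sites):

curl -I -A "AhrefsBot" https://your-site.com

Because response 444 closes the connection without sending anything back, curl should report an empty reply from the server (typically curl: (52) Empty reply from server), while requests with a normal browser User-Agent continue to work.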


Conclusion

Now you know how to easily block unwanted search bots on your server using Nginx! Keep an eye on your logs and add new lines to the block_bots.conf configuration file as needed.

Make sure you only block malicious bots so that you don't prevent useful search engines like Google or Bing from indexing your site.