
DNS_PROBE_FINISHED_NXDOMAIN Error: Causes and ways to resolve it

· 3 min read
Customer Care Engineer


If your browser reports DNS_PROBE_FINISHED_NXDOMAIN, it means that it cannot determine the IP address of the requested site. This can happen for a variety of reasons:

  • The domain name is not present in DNS servers or its registration has expired.
  • The server responsible for the domain zone is unavailable.
  • DNS is configured incorrectly on the device.
  • Interference from a VPN, antivirus, or firewall.
  • Issues with the internet service provider.

The accompanying error message may look slightly different in different browsers:

  • Google Chrome: “This site can’t be reached”.
  • Mozilla Firefox: “Hmm. We’re having trouble finding that site”.
  • Microsoft Edge: “Hmm… can’t reach this page”.
  • Safari: “Safari Can’t Find the Server”.

How to identify the cause of the error?

1. Check the domain status

First, make sure the entered address is correct. If it is, check the domain registration using ICANN Lookup: enter the domain name and see whether the domain is active.

2. Check availability via proxy

Try accessing the site using a proxy, VPN, or another network (for example, your mobile provider). If the site opens in this scenario, then the issue is most likely related to the settings on your device or network.

How to fix DNS_PROBE_FINISHED_NXDOMAIN

Clearing the DNS cache

Sometimes the browser or system saves outdated DNS records. Clearing the cache helps refresh them.

  • Windows:
  1. Open Command Prompt as administrator: Start → type cmd in the search bar and press Enter.
  2. Run the command:
ipconfig /flushdns
  3. Restart your browser.
  • macOS:
  1. Open Terminal: on the keyboard, press cmd + space, type Terminal, and press Enter.
  2. Enter:
sudo dscacheutil -flushcache; sudo killall -HUP mDNSResponder
  3. Press Enter.
  • Google Chrome:
  1. In the browser’s address bar, enter:

chrome://net-internals/#dns

  2. Click Clear host cache.

Updating the IP address

If clearing the cache did not help, try obtaining a new IP address.

  • Windows:
Run the following commands one by one in Command Prompt (as administrator):
ipconfig /release
ipconfig /renew
netsh int ip set dns
netsh winsock reset

Then restart your system.

  • macOS:
  1. Go to System Preferences → Network.
  2. Open the connection → Advanced → TCP/IP.
  3. Click Renew DHCP Lease.

Using alternative DNS servers

The issue might be related to your provider’s DNS servers. Try using Google DNS (8.8.8.8, 8.8.4.4) or Cloudflare DNS (1.1.1.1, 1.0.0.1).

  • Windows:
  1. Open Control Panel → Network and Internet → Network and Sharing Center.
  2. Select the active connection → Properties.
  3. In the Internet Protocol Version 4 (TCP/IPv4) section, specify:
  • Preferred DNS: 8.8.8.8
  • Alternate DNS: 8.8.4.4
  • macOS:
  1. Open System Preferences.
  2. Go to Network.
  3. Select the active connection (for example, Wi-Fi or Ethernet) in the left column.
  4. Click the Advanced button.
  5. Go to the DNS tab.
  6. In the DNS Servers section, click the + button and add the following DNS servers:
  • 8.8.8.8 and 8.8.4.4 (Google DNS)

 or

  • 1.1.1.1 and 1.0.0.1 (Cloudflare DNS)
  7. Click OK, then Apply.

Restarting the DNS Client Service (Windows)

  1. Open Command Prompt as administrator.
  2. Type:
net stop dnscache
net start dnscache

Checking the hosts file

The hosts file may contain incorrect entries that block access to the site.

  • Windows:
  1. Open Notepad as administrator.
  2. Open the file (File → Open):

C:\Windows\System32\drivers\etc\hosts

  3. Delete the lines that contain the problematic domain.
  • macOS:
  1. Open the hosts file in a text editor:
sudo nano /etc/hosts
  2. Delete the lines that contain the problematic domain.
  3. Save the file with Ctrl + O, then exit the editor with Ctrl + X.
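If you prefer the command line, the cleanup can be scripted. A minimal sketch that runs against a temporary copy rather than the real hosts file, with example.com standing in for the problematic domain:

```shell
# Work on a temporary copy; the real file is /etc/hosts
# (or C:\Windows\System32\drivers\etc\hosts on Windows).
hosts=$(mktemp)
printf '127.0.0.1 localhost\n127.0.0.1 example.com\n::1 localhost\n' > "$hosts"

# Drop every line mentioning the problematic domain (example.com is hypothetical);
# sed -i.bak keeps a backup copy next to the file.
sed -i.bak '/example\.com/d' "$hosts"

cat "$hosts"
```

Point the same expression at /etc/hosts (with sudo) only after verifying that the pattern matches exactly the lines you intend to remove.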

Resetting Chrome flags

Hidden browser settings might have changed.

  1. Enter in the address bar:

chrome://flags/

  2. Click Reset all to default.

Disabling antivirus and VPN

Some antivirus programs or VPN services may block DNS queries. Temporarily disable them and check if the site is now accessible.

Checking CDN settings

If the site uses Cloudflare or another CDN, try temporarily disabling proxying for that domain in your CDN control panel.

Restarting the router

Sometimes the issue is related to the router. Try the following:

  1. Turn it off for 5 minutes.
  2. Turn it on and check the connection.

Conclusion

The DNS_PROBE_FINISHED_NXDOMAIN error is related to DNS issues. You can resolve it by clearing the cache, changing DNS servers, checking the domain, or adjusting system settings. If nothing helps, contact your internet service provider.

What Is a PTR record and why can’t I set it up on my own?

· 2 min read
Customer Care Engineer


Introduction

If you have ever configured a mail server or encountered reverse DNS checks for other reasons, you have likely heard about PTR records. But what exactly are they? Why can you often not set up a PTR record yourself? Let’s figure it out!

What is a PTR record?

A PTR (Pointer) record is a type of DNS record used for reverse mapping of IP addresses to domain names. Unlike standard A records (which map a domain to an IP), PTR records let you determine which domain a particular IP address belongs to.

How does a PTR record work?

When a server receives an incoming connection, it can request a reverse DNS (rDNS) lookup for the sender’s IP address. If a PTR record is configured, it will return the corresponding domain name. This is important for:

  • Setting up mail servers (SMTP servers often require PTR records for proper email delivery and to avoid spam issues);
  • Identifying IP addresses in logs and enhancing security;
  • Ensuring correct operation of certain services that depend on rDNS.

Why can’t I set up a PTR record on my own?

Many users with access to manage DNS records expect they can create a PTR record just like an A or CNAME record. However, here’s the main issue: PTR records are not configured in your DNS hosting; they are set up by the IP address provider (ISP, data center, or hosting provider).

Key reasons:

  1. Control of IP addresses – PTR records belong to the owner of the IP pool. If you have a dedicated server or VPS, your hosting provider owns the IP address and must configure the record.
  2. Lack of rDNS management – Even if you have DNS management access, the reverse DNS zone (in-addr.arpa) is controlled by the owner of the IP address block.
  3. Provider requirements – Some hosting providers only allow you to configure PTR through support tickets, not via a control panel.
  4. Dynamic IP addresses – If your IP address is dynamic (for example, with a home internet connection), your ISP will not let you set a personalized PTR record.
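To illustrate why the in-addr.arpa zone sits with the IP owner: the reverse zone name for an IPv4 address is formed by reversing its octets and appending in-addr.arpa. A quick sketch using a documentation address (198.51.100.7 is an example, not a real host):

```shell
ip="198.51.100.7"
# Reverse the four octets and append the reverse-DNS suffix
rev=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}')
echo "$rev"   # 7.100.51.198.in-addr.arpa
```

That name lives inside the zone delegated to the owner of the 198.51.100.0/24 block, which is why your own DNS hosting cannot publish the PTR record.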

How to configure a PTR record?

1. Contact your provider

To create or change a PTR record, you need to contact the hosting provider or ISP that allocated your IP address. This is usually done by opening a support ticket.

2. Specify the required domain

Typically, the provider will require the PTR record to point to a real domain, which is already set up and resolvable via an A record.

3. Verify the configuration

After changing the PTR record, it’s worth checking its functionality using the following commands:

Windows:

nslookup 123.123.123.123

Linux and macOS:

dig -x 123.123.123.123
note

The above IP addresses are examples. To verify, use the real IP address for which the PTR record was changed.

Conclusion

A PTR record is an important part of DNS, especially for mail servers. However, you cannot set up this record without the involvement of the IP address owner. If you need to create a PTR record, contact your hosting provider to discuss the possibility of configuring it. Doing so will help you avoid email delivery problems and increase trust in your server.

301 Redirect: a simple guide to setting it up with .htaccess or Nginx

· 2 min read
Customer Care Engineer


Want to redirect users and search engines to a new website address? A 301 redirect is your best friend! It helps you maintain SEO rankings and avoid 404 errors. In this article, we will show you how to set up a 301 redirect in .htaccess and Nginx quickly and easily.


What is a 301 redirect and why do you need it?

A 301 redirect is a permanent redirect from one URL to another. It is used to:

  • Preserve a site’s search engine rankings after changing its address.
  • Combine multiple URLs into one.
  • Avoid traffic loss and 404 errors.

How to set up a 301 redirect in .htaccess (Apache)

  1. Find or create the .htaccess file

The .htaccess file is located in the root (primary working) directory of your site. If it doesn’t exist, create a new one.

  2. Add the following code for redirection
  • For a single URL:
Redirect 301 /old-page https://yoursite.com/new-page
  • To redirect an entire website:
RewriteEngine On
RewriteCond %{HTTP_HOST} ^oldsite\.com$ [NC]
RewriteRule ^(.*)$ https://newsite.com/$1 [L,R=301]

Replace oldsite.com and newsite.com with your site’s old and new domains respectively. 

  3. Save the file

The changes will take effect immediately.


How to set up a 301 redirect in Nginx

  1. Open the nginx configuration file for your site

Connect to your server via SSH and open the necessary file in the nano text editor:

sudo nano /etc/nginx/sites-available/your-site.com.conf

Replace your-site.com with your site’s domain. 

If you can’t find such a file, you can locate the configuration file with the following command (substituting your site’s domain):

sudo grep -irl your-site.com /etc/nginx
  2. Add redirect rules to the server block
  • For a single URL:
server {
    listen 80;
    server_name oldsite.com;
    return 301 https://newsite.com/new-page;
}
  • To redirect an entire site:
server {
    listen 80;
    server_name oldsite.com;
    return 301 https://newsite.com$request_uri;
}
  3. Save and apply the changes

Save the file using the shortcut "Ctrl + O" and exit nano with "Ctrl + X". Then apply the changes with:

sudo systemctl reload nginx

How to check if the redirect is working

After configuring, make sure your 301 redirect is active:

  • Go to the old URL in your browser and make sure you are redirected to the new address.

info

It is best to perform this check in a private browser window (incognito) to avoid caching the results.
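You can also check from the command line with curl -I by inspecting the status line and the Location header. The sketch below parses a simulated response (the domains are placeholders from this article), since the exact output depends on your site:

```shell
# Simulated output of: curl -sI http://oldsite.com/old-page
response='HTTP/1.1 301 Moved Permanently
Location: https://newsite.com/new-page'

# A working 301 redirect shows status 301 plus a Location header
status=$(printf '%s\n' "$response" | awk 'NR==1 {print $2}')
target=$(printf '%s\n' "$response" | awk '/^Location:/ {print $2}')
echo "$status -> $target"
```

Run the real curl command against your old URL and look for the same two pieces of information.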

HTTP/2 and HTTP/3: faster, but is it worth enabling them? Pros, cons, and configuration

· 4 min read
Customer Care Engineer


Modern HTTP/2 and HTTP/3 protocols can significantly speed up site loading, improve user experience and increase search engine rankings. But not everything is so simple: they have both advantages and disadvantages. Let's understand what these protocols are, their pros and cons, and how to enable them on your server.


What are HTTP/2 and HTTP/3?

HTTP/2 is an updated version of the HTTP/1.1 protocol that allows multiple website resources to be loaded in parallel rather than one by one. This speeds up response times and reduces server load.

HTTP/3 is an even more advanced version that uses the QUIC protocol on top of UDP. It creates more stable connections, especially in poor network conditions.


Advantages

  1. HTTP/2
  • Parallel (multiplexed) loading of site resources.
  • Reduced latency through header compression.
  • Traffic savings.
  2. HTTP/3
  • Quick connection establishment with minimal delay.
  • Resilience to packet loss (especially important for mobile internet).
  • Excellent performance on unstable networks.

By enabling these protocols, you will speed up your site, make it more user-friendly, and gain an SEO advantage.


Disadvantages

  1. Compatibility
  • HTTP/2 and HTTP/3 are not supported by older browsers and devices. For example, certain Internet Explorer versions and older Android devices cannot take advantage of these protocols.
  • HTTP/3 depends on UDP, which can be blocked by some firewalls and network filters.
  2. Configuration complexity
  • Incorrect configuration of HTTP/2 can worsen performance (for example, if stream prioritization is not used).
  • HTTP/3 requires an up-to-date version of Nginx, OpenSSL, and QUIC support, which can be challenging on older servers.
  3. Resource consumption
  • HTTP/3 is more demanding on server resources, particularly with a large number of connections.
  4. Dependence on HTTPS
  • In browsers, HTTP/2 only works over HTTPS, which increases the complexity and cost of certificate setup and maintenance.
  5. HTTP/1.1 and performance with HTTP/2/3
  • HTTP/2 and HTTP/3 do not exclude support for HTTP/1.1. This may slightly reduce performance, but it does not cause critical issues, since HTTP/1.1 is used only for clients that do not support more modern protocols.

How to Enable HTTP/2 and HTTP/3 in Nginx

info

If you are using a control panel, for example FASTPANEL, you can enable HTTP/2 and HTTP/3 for your site in the site settings without manually editing its configuration file.

  1. Checking compatibility

Connect to your server via SSH.

Check the current Nginx version:

sudo nginx -v

For HTTP/3, version 1.25.0 or higher is required.

Check the current OpenSSL version:

openssl version

To work with HTTP/3, you need OpenSSL version 3.0.0 or higher, as earlier versions do not support QUIC.

Additionally, before making changes to the nginx configuration, make sure there are no errors:

nginx -t

If everything is fine (you can ignore “warn” messages), you will see:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful
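As a rough sketch of the version check, sort -V (GNU coreutils) can compare the reported version against the minimum. The version strings below are made-up examples; nginx -v prints to stderr, so in a real pipeline you would capture it with 2>&1:

```shell
ver="1.24.0"        # pretend output parsed from: nginx -v 2>&1
required="1.25.0"   # minimum for HTTP/3 per the text above

# sort -V orders version strings numerically; if the required version
# sorts first, the installed version is new enough.
lowest=$(printf '%s\n' "$required" "$ver" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
    msg="HTTP/3 capable"
else
    msg="upgrade needed (have $ver, need >= $required)"
fi
echo "$msg"
```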

2. Configure HTTP/2

Open your site’s configuration file in a text editor:

sudo nano /etc/nginx/sites-available/your-site.conf

Enable HTTP/2 inside the server block. On Nginx 1.25.1 and newer, add the directive http2 on; on older versions, append http2 to the listen 443 ssl line instead. The result looks something like this:

server {
    listen 443 ssl;
    server_name example.com;

    ssl_certificate /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;

    http2 on;

    # rest of your config file
}
warning

Note that a valid SSL certificate is required for HTTPS and HTTP/2 to function.

Restart the web server to apply the changes:

sudo systemctl restart nginx

3. Configure HTTP/3

Similarly to the previous step, open your site’s configuration file and modify it to look like this:

server {
    listen 443 ssl;
    listen 443 quic reuseport;
    server_name example.com;

    ssl_certificate /path/to/fullchain.pem;
    ssl_certificate_key /path/to/privkey.pem;

    http2 on;

    ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
    add_header Alt-Svc 'h3=":443"; ma=86400';

    # rest of your config file
}

Here:

  • listen 443 quic reuseport; — enables HTTP/3 (QUIC) on port 443 and improves performance under high connection loads. 
  • ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3; — specifies TLS versions for encryption. For better security, it’s recommended to use only TLSv1.2 and TLSv1.3.
  • add_header Alt-Svc 'h3=":443"; ma=86400'; — this header tells browsers that the server supports HTTP/3 and stores this information for 24 hours. 
warning

The parameter reuseport can only be used once in the Nginx server configuration. Attempting to specify it multiple times for different listen directives will cause conflicts and improper server operation.

Then run an additional compatibility check for your nginx version with these directives, as well as a syntax check:

nginx -t

If everything is fine (you can ignore “warn” messages), you will see:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful

Restart Nginx to apply the changes:

sudo systemctl restart nginx

Conclusion

HTTP/2 and HTTP/3 are a step into the future, speeding up site load times, improving SEO and making your resource more usable. However, it is important to consider compatibility, resource consumption and configuration complexity.

If most of your users are on modern browsers, start by enabling HTTP/2. Then move on to HTTP/3 if you’re ready to update your server software and are confident in your infrastructure’s compatibility.

If you prefer not to configure these protocols manually, you can choose a server with the free FASTPANEL, where enabling HTTP/2 and HTTP/3 for your site is simple and convenient.

HDD, SSD, or NVMe: how to choose a storage type when renting a server

· 2 min read
Customer Care Engineer


When renting a server, the choice of storage system directly affects the performance of your projects, storage reliability, and rental cost. It is important to understand the difference between HDD, SSD and NVMe to make the best choice for your needs.

HDD: durability and stability

Hard disk drives (HDD) are traditional storage devices that have served data centers for years, storing large volumes of data. They aren’t as fast as SSDs, but they provide long service life under moderate load.

HDDs typically have a lifespan of around 20,000–25,000 hours. In practice, many HDDs in data centers operate for about 3–5 years, depending on usage intensity.

HDDs are highly sensitive to power outages because they use moving parts (e.g., read/write heads), which can lead to data damage. In the event of an abrupt shutdown, the risk of data loss is higher compared to SSDs.

Advantages of HDD:

  • Durability: They operate for a long time under moderate load.
  • Cost: Cheaper than SSDs and NVMe, especially when calculated per TB of data.
  • Large storage capacity: Ideal for storing huge volumes of data at lower access speeds.

SSD: faster, but with limited lifespan

Solid-state drives (SSDs) are fast and reliable devices for servers where speed is critical. However, SSDs have a more limited write-cycle lifespan. A SATA SSD typically endures about 300–500 full write cycles, which, under moderate usage, could theoretically last up to five years. Yet if your workloads involve a lot of write operations, which is common for many websites, the lifetime of an SSD can be significantly reduced.

SSDs are more resistant to sudden power loss because they have no moving parts. However, intensive writes can quickly consume the drive’s write endurance, particularly in cheaper models.
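To make the endurance figure concrete, here is a back-of-the-envelope estimate; all numbers are illustrative assumptions, not specs of any particular drive:

```shell
capacity_gb=500       # hypothetical SATA SSD capacity
write_cycles=400      # mid-range of the 300–500 full-cycle figure above
daily_writes_gb=100   # assumed average writes per day

# Total writable data = capacity × endurance cycles; divide by daily writes
days=$(( capacity_gb * write_cycles / daily_writes_gb ))
echo "roughly $(( days / 365 )) years"
```

At this write rate the estimate lands around five years, matching the figure above; double the daily writes and the expected lifetime halves.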

Advantages of SSD:

  • High speed: Excellent for servers where performance is crucial.
  • Resistance to power outages: More resilient to hardware damage during abrupt shutdowns.

NVMe: maximum speed, but shorter lifespan

NVMe (Non-Volatile Memory Express) drives are a modern alternative to SATA SSDs, offering even higher performance. They provide significantly faster read and write speeds, which is ideal for servers handling large amounts of data or performing computationally intensive tasks.

However, NVMe drives tend to have a shorter lifespan than SATA SSDs. Due to their high write speeds, these drives can wear out faster under constant load.

Like SSD, NVMe drives are less prone to damage during abrupt shutdowns. However, they are still not as long-lasting as HDD due to the intense operational loads.

Advantages of NVMe:

  • Maximum speed: Ideal for servers that process large data volumes. 
  • High performance: Suitable for tasks with heavy workloads.

Which storage type should you choose?

  • If durability and cost matter more to you, and you don’t plan on heavy write operations, HDD is a great choice. It’s cheaper and will provide stable operation for years.
  • If you need fast data processing under moderate load, go for an SSD. It offers good speed and wears out less quickly compared to NVMe.
  • NVMe is suitable for servers with extremely high speed requirements, but keep in mind its shorter lifespan and higher price. 

Your choice of storage depends on your specific tasks: if longevity and affordability are the priority, choose HDD. If you need high performance, SSD or NVMe will be the optimal solution.

Additionally, we offer servers tailored to your needs and budget, providing the perfect fit for any requirement.

SSL certificates: what’s the difference between paid and free, and which should you choose?

· 3 min read
Customer Care Engineer


An SSL certificate is a must for any modern website. It ensures secure data transfer between the server and the user. There are several types of certificates, including free ones (most often Let's Encrypt and ZeroSSL) and paid ones. Let's find out how they differ and when you should choose a paid certificate.

What is a free SSL certificate from Let’s Encrypt or ZeroSSL?

Let's Encrypt is a free and automated service that provides SSL certificates for websites. It’s ideal for most simple projects, whether it’s a blog or a small online store.

ZeroSSL is a similar tool that also offers free certificates but comes with some additional features.

Advantages of Free Certificates:

  1. No cost: The main advantage. Let’s Encrypt and ZeroSSL provide SSL certificates completely free of charge, which is perfect for most users who do not require an additional level of trust.
  2. Support in modern browsers: Certificates from Let’s Encrypt and ZeroSSL are accepted by all current browsers, so users won’t see any security warnings when visiting your site.
  3. Wildcard certificates: Both Let’s Encrypt and ZeroSSL support wildcard certificates, allowing you to protect all subdomains of a given domain.

Drawbacks:

  1. Limited support: If you encounter problems with your certificate, you’ll need to resolve them yourself, as free certificates do not come with support.
  2. Short-term validity: Let’s Encrypt and ZeroSSL certificates are only valid for 90 days. Although there are ways to set up automatic renewal, in most cases this requires command-line skills and a basic understanding of how a web server works.
  3. Level of trust and reliability: Unlike paid certificates, Let’s Encrypt and ZeroSSL do not offer Extended Validation (EV), which may limit the level of trust some users and search engines have in your site. 

Differences between ZeroSSL and Let’s Encrypt:

  • ZeroSSL offers a more user-friendly interface and paid certificate options with additional features (for example, extending validity up to one year).
  • Let's Encrypt is completely free but requires configuring automated renewals.
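For reference, automatic renewal usually boils down to one scheduled command. A hypothetical crontab entry, assuming the widely used certbot client for Let's Encrypt (ZeroSSL works similarly through other ACME clients):

```
# Run twice a day; certbot only renews certificates that are close to expiry
0 3,15 * * * certbot renew --quiet
```

Many distributions install an equivalent systemd timer together with certbot, so check with systemctl list-timers before adding your own entry.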

What are paid SSL certificates?

Paid SSL certificates are offered by many providers, such as DigiCert, GlobalSign, Comodo, and others. They include additional benefits and features that may be valuable for more complex projects handling sensitive personal data.

Advantages of paid certificates:

  1. Long-term certificates: Paid certificates typically last from 1 to 3 years. This is convenient if you don’t want to renew your certificate frequently and prefer a longer-term solution.
  2. Extended Validation (EV SSL): Paid certificates often include EV, which involves a more thorough vetting of the purchasing company. This increases the level of trust users have in your site.
  3. Technical support and warranties: Paid certificates usually come with support and insurance against any issues related to the certificate’s installation and operation. In cases where it’s proven that your clients’ data was stolen due to a certificate issue, you would be compensated under the insurance policy. 
  4. Improved search indexing: Many search engines give preference to secure websites in search results. Paid certificates can help boost SEO, as they signal greater reliability for your site.

When should you choose a paid SSL certificate?

  1. If your site handles sensitive information or payments: Paid certificates with EV are especially valuable for sites that process personal data or handle financial transactions. They help increase user trust.
  2. For multi-site projects: Paid certificates can protect multiple sites or subdomains, making them ideal for corporate or large commercial websites.
  3. If you need additional support: With paid certificates, you can get help from support services—important for businesses that don’t want to handle technical problems on their own.
  4. For improving SEO: Paid certificates can boost your rankings in search engines.
  5. For long-term use: Paid certificates have a longer validity period and don’t require frequent renewal, which is convenient for large sites and projects.

Conclusion

Free certificates from Let’s Encrypt or ZeroSSL are an excellent solution for most small websites and blogs. They provide basic security and are suitable for sites that don’t need extended validation or extra features.

If your site requires additional features, such as protection of multiple domains or extended support, a paid certificate would be a better choice.

Basic work with journald

· 2 min read
Customer Care Engineer


Journald is a logging system used in modern Linux-based operating systems to record system events. It collects information about the operation of various services, applications, and system processes to help administrators monitor system health and diagnose errors.

Unlike standard text logs, journald stores data in a binary format. This allows logs to be stored more compactly and managed more efficiently, but at the same time, you cannot simply open these logs in a text editor. Special tools are required to view and analyze them.

In this article, we will look at how to view the records maintained by journald and how to clear them to save disk space.


How to view journal logs

To read logs, use the journalctl command:

  • All logs:
sudo journalctl
  • Logs since the last reboot:
sudo journalctl -b
  • Logs for a specific service:
sudo journalctl -u nginx
  • Logs for a specific day:
sudo journalctl --since "2024-11-01" --until "2024-11-02"
  • View the last n entries (for example, the last 100):
sudo journalctl -n 100
  • Filter by priority level (for example, for errors):
sudo journalctl -p err
  • View journal entries in reverse order, starting from the newest (useful when you need to see the latest log entries quickly):
sudo journalctl -r
  • View journal entries in real time (similar to tail -f):
sudo journalctl -f

You can combine these options. For example, to display all errors from the nginx service on November 10, 2024, showing only the last 10 entries:

sudo journalctl -u nginx --since "2024-11-10" --until "2024-11-10 23:59:59" -n 10

How to clear the Journal

If logs occupy too much space, you can use the following commands to clear them:

  • Clear old logs (e.g., older than 7 days):
sudo journalctl --vacuum-time=7d
  • Clear logs exceeding a specified size (e.g., 1 GB):
sudo journalctl --vacuum-size=1G
  • Completely clear all logs (rotate the active journal first, then remove the archives):
sudo journalctl --rotate
sudo journalctl --vacuum-time=1s

How to reduce the Journal size

By default, journald can occupy a lot of disk space if logs are not limited. To set a maximum size for logs, open the journald.conf configuration file:

sudo nano /etc/systemd/journald.conf

In this file, you can configure the following parameters:

  • SystemMaxUse — the maximum size for all journals:
SystemMaxUse=1G
  • RuntimeMaxUse — the maximum size for temporary journals:
RuntimeMaxUse=500M
  • MaxRetentionSec — the maximum time to retain logs:
MaxRetentionSec=1month
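Note that these parameters belong under the [Journal] section of the file; a minimal sketch of what the edited fragment might look like (the values are examples to adapt):

```
[Journal]
SystemMaxUse=1G
RuntimeMaxUse=500M
MaxRetentionSec=1month
```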

Set values suitable for your system and needs, then save the file using Ctrl + O, and exit the editor using Ctrl + X. 

To apply the changes, restart the journald service:

sudo systemctl restart systemd-journald

You can also enable logging to RAM or even disable it entirely. Neither option is recommended in a production environment, as the journal contains important diagnostic information. Its accuracy and relevance are crucial for proper diagnostics of processes on your server.

If you still want to activate storing the journal in RAM, set the following value in /etc/systemd/journald.conf:

Storage=volatile

To completely disable logging, specify:

Storage=none

Don’t waste your server resources: block unwanted bots using Nginx

· 4 min read
Customer Care Engineer


Search engine bots (crawlers) are special programs that scan websites on the Internet. Search engines need them to find, index and display pages in search results. But not all bots are useful!

Sometimes your site may be visited by unwanted bots that:

  • Collect data without your permission.
  • Consume server resources, slowing it down.
  • Are used to look for vulnerabilities.

If you want to protect your site from such bots, it’s time to configure Nginx! In this article, we’ll show you how to easily and quickly block them using a special configuration file.


Why Nginx configuration instead of robots.txt?

The robots.txt file is a tool for managing search bots’ behavior. It tells them which parts of the site should not be crawled. It’s very easy to use this file: simply create one in the site’s root directory, for example:

User-agent: BadBot  

Disallow: /  

However, there is a problem: instructions in robots.txt are a recommendation rather than an enforced rule. Conscientious bots do follow this file’s instructions, but most bots simply ignore it.

By contrast, configuring Nginx allows you to physically block access for unwanted bots, guaranteeing a 100% effective result.


How Nginx blocks unwanted bots: using response 444

Unlike robots.txt, which only provides recommendations to bots, Nginx physically blocks their access. One way to achieve this is by using a special server response with the code 444. 

In Nginx, the response code 444 is an internal method of terminating the connection with the client without sending any response. This is an efficient approach to ignore unwanted requests and minimize server load.


Setting up the blocking

Step 1: How to identify unwanted bots?

Unwanted bots can be identified by their User-Agent, which is a parameter sent by all clients when visiting your site. For example, some User-Agents might look like this:

  • AhrefsBot
  • SemrushBot
  • MJ12bot

You can find suspicious User-Agent values in the Nginx access log (if your site uses PHP-FPM):

sudo grep -i bot /var/log/nginx/access.log

Or in the Apache access log (if your site uses the Apache module or FastCGI as a PHP handler):

  • For Ubuntu/Debian:
sudo grep -i bot /var/log/apache2/access.log
  • For CentOS/AlmaLinux/RockyLinux:
sudo grep -i bot /var/log/httpd/access.log

If you’re using a control panel such as FASTPANEL, each site will have its own separate log file. You can analyze them individually or all at once using a command like:

  • If your site uses the Apache module or FastCGI as the PHP handler:
sudo cat /var/www/*/data/logs/*-backend.access.log |  grep -i bot | tail -500
  • If your site uses PHP-FPM:
sudo cat /var/www/*/data/logs/*-frontend.access.log |  grep -i bot | tail -500

This command will display the last 500 requests made to all your sites where the User-Agent parameter contains the word “bot.” An example of one line (one request to your site) might look like this:

IP - [03/Nov/2022:10:25:52 +0300] "GET link HTTP/1.0" 301 811 "-" "Mozilla/5.0 (compatible; DotBot/1.2; +https://opensiteexplorer.org/dotbot; [email protected])"

or

IP - [24/Oct/2022:17:32:37 +0300] "GET link HTTP/1.0" 404 469 "-" "Mozilla/5.0 (compatible; BLEXBot/1.0; +http://webmeup-crawler.com/)"

The bot’s User-Agent is located between “compatible; ” and the “/version.number” segment near the end of the request line, inside the parentheses. So in the examples above, the User-Agents are DotBot and BLEXBot.

Analyze the information you gather and note the User-Agent strings of the most active bots for the next step of configuring the block. 
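The extraction described above can also be automated. A small sketch that pulls the bot names out of two sample log lines with sed; the pattern assumes the common "compatible; Name/version" form, so real logs still deserve a manual look:

```shell
# Two sample access-log lines in the format shown above
log='IP - [03/Nov/2022:10:25:52 +0300] "GET link HTTP/1.0" 301 811 "-" "Mozilla/5.0 (compatible; DotBot/1.2; +https://opensiteexplorer.org/dotbot)"
IP - [24/Oct/2022:17:32:37 +0300] "GET link HTTP/1.0" 404 469 "-" "Mozilla/5.0 (compatible; BLEXBot/1.0; +http://webmeup-crawler.com/)"'

# Capture the token between "compatible; " and the "/" that starts the version
bots=$(printf '%s\n' "$log" | sed -n 's/.*compatible; \([^/;]*\)\/.*/\1/p')
echo "$bots"
```

On a real server, pipe the grep output from your access log through the same sed expression and deduplicate with sort -u.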

Step 2: Create a File to Block Bots

  1. Connect to your server via SSH.
  2. Before making changes, ensure that your current Nginx configuration has no errors:
nginx -t

If everything is fine, you’ll see:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful

If there are any errors, review them and fix them in the file indicated by the error messages.

  3. Create a separate file listing the bots to block:
sudo nano /etc/nginx/conf.d/block_bots.conf

Add the following code to the file:

    map $http_user_agent $block_bot {
        default 0;
        ~*AhrefsBot 1;
        ~*SemrushBot 1;
        ~*MJ12bot 1;
    }

    server {
        if ($block_bot) {
            return 444;
        }
    }

Here the map directive compares the User-Agent header against the listed patterns (the ~* prefix makes the match case-insensitive) and sets $block_bot to 1 when one matches; the if block then answers those requests with code 444. Note that map must live at the http level, which files included from conf.d do, while the if check only takes effect in the server block that handles the request — if your sites are defined in their own server blocks, add the check there as well.

Following this pattern, list the User-Agent strings of the bots you want to block, one per line, ending each line with a semicolon ; as a delimiter.

After you finish building your list, press Ctrl + O and then Enter to save the file, then Ctrl + X to exit the nano editor.

Step 3: Apply the Changes

After making your changes, always test the Nginx configuration for correctness to ensure there are no syntax errors:

sudo nginx -t

If everything is fine, you’ll see:

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful

If there are errors, review the output to identify and correct them in the file specified.

Then reload the Nginx configuration to apply the changes:

sudo systemctl reload nginx

Whenever you add more bots to the block_bots.conf file in the future, repeat this step to apply the changes.
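From the client's side, code 444 looks like a dropped connection rather than an HTTP error, which is what you should expect when testing your block with curl -A "SomeBot" against your own site. The sketch below simulates that behavior with a tiny Python listener standing in for Nginx (it assumes python3 and curl are available, and port 8444 is free):

```shell
# Start a listener that accepts one connection and closes it without replying,
# which is what "return 444" does in Nginx
python3 - <<'EOF' &
import socket
s = socket.socket()
s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
s.bind(("127.0.0.1", 8444))
s.listen(1)
conn, _ = s.accept()
conn.close()
s.close()
EOF
sleep 1  # give the listener time to start

# curl exits with code 52 ("Empty reply from server") on a 444-style close
curl -s -I http://127.0.0.1:8444/
echo "curl exit code: $?"
```

A blocked bot gets no status line, no headers, nothing — the connection is simply gone.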


Conclusion

Now you know how to easily block unwanted search bots on your server using Nginx! Keep an eye on your logs and add new lines to the block_bots.conf configuration file as needed.

Make sure you only block malicious bots so that you don't prevent useful search engines like Google or Bing from indexing your site.

How to set up logrotate for automatic log archiving and saving server space

· 2 min read
Customer Care Engineer

Log management is a crucial part of any server administrator's job. Logs that are not rotated can quickly occupy all available disk space, slow down the server, and cause unpredictable errors. In this article, we’ll explain how to configure and use logrotate for automatic log cleanup and rotation on a server. 


What is logrotate and why is it important to use?

Logrotate is a tool designed for automatic log management. It helps to:

  • Clear old logs — automatically deletes or archives old log files.
  • Save disk space — compresses and removes unnecessary logs.

Log rotation prevents logs from accumulating and causing disk overflow, which could result in crashes and data loss. Logrotate automatically archives old logs and makes room for new data.


How does logrotate work?

When logrotate is active, it automatically performs the following steps:

  1. Log rotation — old logs are renamed and stored, while new files are created in their place.
  2. Compression — old logs can be compressed into .gz format to save space.
  3. Deletion — outdated logs can be deleted if they are no longer needed.

Example: A log file named access.log can be transformed into access.log.1, then compressed into access.log.1.gz, and eventually deleted after a specified retention period.


How to configure logrotate

1. Installing logrotate

On most Linux systems, logrotate is pre-installed. To check if logrotate is installed, run the command:

sudo logrotate --version

 If logrotate is not installed, it can be installed via a package manager. 

  • For Debian/Ubuntu:
sudo apt update && sudo apt install logrotate
  • For CentOS/RockyLinux/AlmaLinux:
sudo yum install logrotate

2. Configuring logrotate

Logrotate configuration is usually stored in /etc/logrotate.conf. This file contains general parameters for all logs on the server. To configure the rotation of individual logs, you can create separate configuration files for different services in the /etc/logrotate.d/ directory.

Example of a standard Nginx configuration:

/var/log/nginx/*.log {
    daily
    missingok
    rotate 7
    compress
    delaycompress
    notifempty
    create 0640 www-data adm
}

3. Key configuration parameters

  • daily/weekly/monthly — defines how often the log file will be rotated (daily, weekly, or monthly).
  • rotate [N] — specifies the number of archived logs to retain.
  • compress — enables log file compression (typically into .gz).
  • delaycompress — postpones compression of the newest archive until the next rotation, so the service can finish writing to it.
  • missingok — prevents errors if a log file is missing.
  • notifempty — skips rotation for empty files.
  • create — creates new logs with the specified permissions and ownership.

4. Running logrotate

Logrotate usually runs automatically via cron. However, you can run it manually if you need to check the configuration or perform a rotation immediately:

sudo logrotate -f /etc/logrotate.conf
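To experiment without touching the system-wide setup, you can point logrotate at a throwaway config and state file. A sketch using only temporary paths (the directives mirror the Nginx example above):

```shell
TMP=$(mktemp -d)
echo "some log data" > "$TMP/app.log"

# Minimal standalone config for the test log
cat > "$TMP/app.conf" <<EOF
$TMP/app.log {
    rotate 3
    missingok
    notifempty
}
EOF

# -d is a dry run: it prints what would happen without changing anything;
# -s keeps the state file out of the system default location
logrotate -d -s "$TMP/state" "$TMP/app.conf"

# -f forces a real rotation; app.log becomes app.log.1
logrotate -f -s "$TMP/state" "$TMP/app.conf"
ls "$TMP"

rm -rf "$TMP"
```

This is a convenient way to verify a new rule in /etc/logrotate.d/ before letting cron run it for real.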

5. Verifying logrotate's operation

To ensure that logrotate is working correctly, you can check the latest entries in its service log:

sudo journalctl -u logrotate -n 10
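Logrotate also keeps a state file recording when each log was last rotated, which helps when the journal output is sparse. The path differs between distributions (both locations below are common defaults, but verify them on your system):

```shell
# Print the state file from whichever default location exists
for f in /var/lib/logrotate/status /var/lib/logrotate/logrotate.status; do
    if [ -f "$f" ]; then
        sudo cat "$f"
        break
    fi
done
```

Each line pairs a log path with the date of its last rotation, so a stale date points you straight at a rule that is not firing.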

Logs are taking up too much space on your server. How to fix it?

· 2 min read
Customer Care Engineer
info

Most log files are stored in the /var/log directory, but they are not limited to it. The principles described in this section apply to all *.log files in any directory on your server.

Logs are files that store information about server events: application and operating system activity, various errors, user requests to websites, and more. Over time, logs can take up a significant amount of disk space, especially under heavy load or if there are software errors.

One critical aspect of log files is that, in most cases, deleting them can cause issues for the program generating them — whether it’s a web server or even the operating system itself.

Additionally, logs often contain valuable diagnostic information that can help identify software issues on your server and prevent larger problems. Therefore, it’s important to handle them properly and carefully.


How to identify logs that can be cleaned

Use ncdu to locate large logs on the server. If a log file is unusually large, check its latest entries:

sudo tail /path/to/log

If there are no anomalies, check the beginning of the file to determine whether the log grew large simply due to age (pay attention to the date of the earliest entries):

sudo head /path/to/log

After this, you can proceed with cleaning the file.
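If ncdu is not installed, plain du and sort produce a similar ranking of the largest items (a sketch; the -h sort flag assumes GNU coreutils):

```shell
# List the 20 largest files and directories under /var/log
sudo du -ah /var/log 2>/dev/null | sort -rh | head -n 20
```

The biggest offenders float to the top, so you can run the tail and head checks above on each of them in turn.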

info

If you’re unsure why the log file has grown so large, it’s better to save it and contact your hosting provider’s support team for clarification.


How to safely clean logs

The truncate command clears the contents of a file without deleting it:

sudo truncate -s 0 /var/log/nginx/error.log
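The difference from rm is worth seeing in action: the file stays in place at the same path, so the service writing to it keeps logging without a restart. A quick sketch on a throwaway file:

```shell
TMP=$(mktemp)
echo "old log data" > "$TMP"

# Empty the file without deleting it
truncate -s 0 "$TMP"

ls -l "$TMP"   # the file still exists, now 0 bytes
rm -f "$TMP"
```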

Note also the following files, which are logs despite lacking the *.log extension:

  • /var/log/btmp
  • /var/log/syslog
  • /var/log/messages
  • /var/log/secure
  • /var/log/maillog

These files can also be safely cleaned using the truncate command.

A special case is the log stored in the /var/log/journal directory. You can find more details about working with it in a separate article.


How to prevent logs from growing too large

While analyzing logs, you may notice some of them have names like:

  • syslog.1
  • yoursite.access.log.1

These appear when log rotation is applied, for example, using the logrotate program. Old files can be deleted or compressed during rotation, saving disk space.

You can read more about configuring this mechanism in a separate article.