Troubleshooting Common Network Issues in Linux

Network connectivity forms the backbone of modern computing infrastructure, and when things go wrong, productivity grinds to a halt. Whether you're managing servers in a data center, troubleshooting remote workstations, or simply trying to understand why your Linux machine won't connect to the internet, network issues can be frustrating and time-consuming. The ability to quickly diagnose and resolve these problems is an essential skill for anyone working with Linux systems, from system administrators to developers and power users.

Network troubleshooting in Linux involves understanding how different layers of networking interact, from physical connections to application-level protocols. This comprehensive guide explores the most common network problems you'll encounter in Linux environments, providing practical solutions and diagnostic techniques that work across distributions. We'll examine everything from basic connectivity issues to complex DNS problems, offering multiple perspectives and approaches to help you resolve issues efficiently.

Throughout this guide, you'll discover systematic troubleshooting methodologies, essential command-line tools, configuration file locations, and real-world solutions to networking challenges. You'll learn how to identify whether problems stem from hardware, configuration errors, or service failures, and gain the confidence to tackle network issues methodically rather than through trial and error.

Understanding Network Connectivity Fundamentals

Before diving into specific troubleshooting scenarios, establishing a solid understanding of how Linux handles network connectivity is crucial. Linux networking operates through multiple layers, each with its own configuration files, services, and potential failure points. The kernel manages low-level network interfaces, while user-space tools and services handle higher-level functions like DNS resolution, routing, and firewall rules.

Modern Linux distributions use various network management systems, including NetworkManager, systemd-networkd, and traditional ifupdown scripts. Understanding which system your distribution uses is the first step in effective troubleshooting. NetworkManager is common on desktop systems and provides both graphical and command-line interfaces, while systemd-networkd is increasingly popular on servers for its integration with systemd and predictable behavior.

"The most common mistake in network troubleshooting is jumping to complex solutions before verifying the basics. Always start with physical connectivity and work your way up the stack."

Network interfaces in Linux are named according to various schemes. Older systems used simple kernel-assigned names like eth0 and wlan0, while modern systems employ predictable network interface names such as enp3s0 or wlp2s0. These names encode information about the device's location on the system bus, making them consistent across reboots but potentially confusing for newcomers. Knowing your interface names is essential before attempting any configuration changes.
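
To see how a predictable name is derived for a given device, udevadm can print the naming properties (the interface path below is an example; substitute your own):

udevadm test-builtin net_id /sys/class/net/enp3s0
# Output includes properties such as:
# ID_NET_NAME_MAC=enx001a2b3c4d5e
# ID_NET_NAME_PATH=enp3s0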

Network Management System | Common Distributions | Primary Use Case | Configuration Location
NetworkManager | Ubuntu Desktop, Fedora, RHEL/CentOS (desktop) | Desktop systems with dynamic networking needs | /etc/NetworkManager/
systemd-networkd | Arch Linux, Ubuntu Server (optional), CoreOS | Servers and containers requiring minimal dependencies | /etc/systemd/network/
ifupdown | Debian, older Ubuntu versions | Traditional static configurations | /etc/network/interfaces
netplan | Ubuntu Server 18.04+ | Abstraction layer over NetworkManager or systemd-networkd | /etc/netplan/

Diagnosing Basic Connectivity Problems

When faced with network connectivity issues, a systematic approach saves time and prevents unnecessary configuration changes. The troubleshooting process should follow the network stack from bottom to top: physical layer, data link layer, network layer, and finally application layer. This methodical approach helps isolate the problem quickly and prevents the common mistake of changing multiple settings simultaneously, which obscures the actual cause.

Physical connectivity issues are surprisingly common and often overlooked. A loose cable, disabled network port, or faulty hardware can manifest as complex-seeming problems. The first diagnostic step should always be verifying that your network interface is recognized by the system and has a physical connection.

The ip link show command displays all network interfaces and their current states. Look for the interface status indicators: UP means the interface is enabled, while LOWER_UP indicates a physical connection is detected. If you see an interface without LOWER_UP, you likely have a physical connectivity problem.

ip link show
# Example output:
# 2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
#     link/ether 00:1a:2b:3c:4d:5e brd ff:ff:ff:ff:ff:ff

For wireless connections, additional diagnostics are necessary. The iw command provides detailed information about wireless interfaces, including signal strength, connected access points, and supported frequencies. Poor signal strength often causes intermittent connectivity problems that are difficult to diagnose without proper metrics.
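
For example (wlp2s0 is a placeholder; substitute your wireless interface name):

iw dev
iw dev wlp2s0 link
# Example output when associated:
# Connected to aa:bb:cc:dd:ee:ff (on wlp2s0)
#         SSID: HomeNetwork
#         signal: -58 dBm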

Driver issues can also cause connectivity problems. The dmesg command shows kernel messages, including network driver initialization and errors. Filtering these messages with grep helps identify driver-related problems:

dmesg | grep -i network
dmesg | grep -i eth
dmesg | grep -i firmware

Verifying IP Address Configuration

After confirming physical connectivity, the next step is verifying IP address configuration. An interface might be physically connected but lack a valid IP address due to DHCP failures, misconfiguration, or network policy restrictions. The ip addr show command displays all configured IP addresses on your interfaces.

Look for an IPv4 address in the expected subnet range. If you see only a link-local address (169.254.x.x), your system failed to obtain an address via DHCP. If you see no IP address at all, the interface hasn't been configured. Static IP configurations should match your network's addressing scheme, while DHCP-configured interfaces should show addresses consistent with your DHCP server's pool.
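
For example (the interface name and addresses below are illustrative):

ip addr show enp3s0
# Example output for a healthy DHCP-configured interface:
# 2: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 state UP
#     inet 192.168.1.100/24 brd 192.168.1.255 scope global dynamic enp3s0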

"DHCP problems account for nearly thirty percent of network connectivity issues in enterprise environments. Understanding DHCP client behavior is essential for efficient troubleshooting."

When DHCP fails, examining the DHCP client logs provides valuable information. Different distributions use different DHCP clients: dhclient, dhcpcd, or NetworkManager's internal client. Check the system journal for DHCP-related messages:

journalctl -u NetworkManager | grep -i dhcp
journalctl -u systemd-networkd | grep -i dhcp
journalctl -u dhcpcd
journalctl | grep -i dhclient   # dhclient usually logs to the main journal, not its own unit

Common DHCP problems include network switches with DHCP snooping enabled, VLAN misconfigurations, or firewall rules blocking DHCP traffic (UDP ports 67 and 68). If DHCP consistently fails, temporarily assigning a static IP address can help determine whether the problem is DHCP-specific or more fundamental.
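
A minimal static fallback might look like this (the addresses are examples; match them to your network's scheme, and note the change does not persist across reboots):

sudo ip addr add 192.168.1.100/24 dev enp3s0
sudo ip route add default via 192.168.1.1 dev enp3s0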

Resolving DNS Resolution Failures

DNS problems are among the most common and confusing network issues users encounter. The symptoms are deceptive: you can ping IP addresses successfully, but domain names fail to resolve. This disconnect between IP connectivity and name resolution leads many users to incorrectly diagnose the problem as a complete network failure when only DNS is affected.

Linux DNS resolution involves multiple components working together. The /etc/resolv.conf file contains DNS server addresses, but many systems now generate this file dynamically through NetworkManager, systemd-resolved, or other services. Manually editing resolv.conf often results in changes being overwritten on the next network restart, causing confusion and frustration.

Diagnosing DNS Problems

The nslookup and dig commands are essential DNS troubleshooting tools. While ping tests general connectivity, these tools specifically test DNS resolution. Using dig provides more detailed information about the resolution process, including query time, server used, and any errors encountered:

dig example.com
nslookup example.com
host example.com

If DNS queries fail, check your configured DNS servers in /etc/resolv.conf. This file should contain nameserver entries pointing to valid DNS servers. Public DNS servers like 8.8.8.8 (Google) or 1.1.1.1 (Cloudflare) can serve as temporary alternatives to test whether the problem is with your configured DNS servers or something more fundamental.
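
dig can query a specific server directly, which distinguishes a broken configured resolver from DNS being blocked entirely:

dig @8.8.8.8 example.com
dig @1.1.1.1 example.com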

Many modern distributions use systemd-resolved, which adds a caching layer and provides DNS-over-TLS capabilities. This service creates a stub resolver at 127.0.0.53, and /etc/resolv.conf often contains only this address. To see the actual upstream DNS servers being used:

resolvectl status
systemd-resolve --status

"DNS caching can be both a blessing and a curse. Stale cache entries cause mysterious failures that disappear after clearing the cache, leading to reports of 'intermittent' problems that are actually quite consistent."

Fixing DNS Configuration Issues

Fixing DNS problems depends on your network management system. For NetworkManager-based systems, DNS servers should be configured through NetworkManager rather than directly editing resolv.conf. The nmcli command provides command-line access to NetworkManager settings:

nmcli connection show
nmcli connection modify "connection-name" ipv4.dns "8.8.8.8 8.8.4.4"
nmcli connection up "connection-name"

For systemd-networkd systems, DNS configuration belongs in the network unit files located in /etc/systemd/network/. These files use an INI-style format with [Network] sections containing DNS directives:

[Network]
DNS=8.8.8.8
DNS=8.8.4.4
Domains=~.

The Domains=~. directive tells systemd-resolved to use these DNS servers for all domains, not just specific ones. After modifying network unit files, restart systemd-networkd and systemd-resolved:

systemctl restart systemd-networkd
systemctl restart systemd-resolved

DNS caching problems require clearing the cache. For systemd-resolved, use the resolvectl command. For dnsmasq (often used by NetworkManager), restart the service. For nscd (name service cache daemon), clear the cache with nscd commands:

resolvectl flush-caches
systemctl restart dnsmasq
nscd -i hosts

Troubleshooting Routing and Gateway Issues

Routing determines how network packets travel from your system to their destination. Even with a properly configured IP address and working DNS, incorrect routing prevents communication beyond your local network. Gateway configuration is particularly critical, as the default gateway provides the path to destinations outside your subnet.

The ip route show command displays your system's routing table. Look for a default route, typically shown as "default via" followed by the gateway IP address. Without a default route, your system can only communicate with devices on the same subnet:

ip route show
# Expected output includes:
# default via 192.168.1.1 dev enp3s0 proto dhcp metric 100
# 192.168.1.0/24 dev enp3s0 proto kernel scope link src 192.168.1.100 metric 100

Gateway Connectivity Testing

Before troubleshooting complex routing issues, verify basic gateway connectivity. The gateway must be reachable for any external communication to work. Use ping to test gateway reachability:

ping -c 4 192.168.1.1

If the gateway responds, routing problems likely involve routes beyond your local network. If the gateway doesn't respond, the problem could be incorrect gateway configuration, ARP issues, or problems with the gateway device itself. The ip neigh command (or the legacy arp command) shows the MAC address resolution for your gateway:

ip neigh show
arp -a

An incomplete or failed ARP entry for your gateway indicates layer 2 connectivity problems. This might result from VLAN misconfigurations, switch port security, or MAC address filtering on the network.

"Routing problems often manifest as partial connectivity, where some destinations work while others fail. This pattern typically indicates missing or incorrect specific routes rather than gateway failures."

Adding and Modifying Routes

Temporary route changes help diagnose problems and provide immediate fixes while you implement permanent solutions. The ip route command allows adding, deleting, and modifying routes. To add a default gateway temporarily:

sudo ip route add default via 192.168.1.1 dev enp3s0

For permanent route changes, the method depends on your network management system. NetworkManager stores routes in connection profiles, accessible through nmcli or the graphical interface. For systemd-networkd, routes are defined in network unit files using [Route] sections:

[Route]
Gateway=192.168.1.1
Destination=0.0.0.0/0

Complex routing scenarios might require multiple routes for different destinations. Policy-based routing allows routing decisions based on source address, interface, or other criteria. These advanced configurations typically use ip rule commands and multiple routing tables, configured in /etc/iproute2/rt_tables.
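
As a sketch of the pattern (the table name, subnet, gateway, and interface are placeholders):

# Register a custom routing table named "custom" with ID 100
echo "100 custom" | sudo tee -a /etc/iproute2/rt_tables

# Route traffic sourced from 192.168.2.0/24 through its own gateway
sudo ip rule add from 192.168.2.0/24 table custom
sudo ip route add default via 192.168.2.1 dev enp4s0 table custom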

Firewall Configuration and Port Accessibility

Firewall rules frequently cause connectivity problems, especially after system updates or security hardening. Linux firewalls operate at the kernel level through netfilter, with user-space tools like iptables, nftables, or firewalld providing management interfaces. Understanding which firewall system your distribution uses is essential for effective troubleshooting.

The most common symptom of firewall-related problems is selective connectivity: some services work while others fail, or connections work from certain sources but not others. This behavior distinguishes firewall issues from general connectivity problems, which affect all traffic equally.

Firewall Tool | Backend | Common Distributions | Check Status Command
iptables | netfilter (legacy) | Older Debian, Ubuntu, CentOS 6 | sudo iptables -L -n -v
nftables | netfilter (modern) | Debian 10+, newer distributions | sudo nft list ruleset
firewalld | nftables or iptables | RHEL, CentOS, Fedora | sudo firewall-cmd --list-all
ufw | iptables | Ubuntu, Linux Mint | sudo ufw status verbose

Testing whether the firewall causes connectivity problems involves temporarily disabling it or adding permissive rules. For troubleshooting purposes only, you can disable common firewall systems:

# UFW (Ubuntu)
sudo ufw disable

# firewalld (RHEL/CentOS/Fedora)
sudo systemctl stop firewalld

# iptables (manual management)
sudo iptables -P INPUT ACCEPT
sudo iptables -P FORWARD ACCEPT
sudo iptables -P OUTPUT ACCEPT
sudo iptables -F

Important: Disabling the firewall should only be done for testing in secure environments. Always re-enable the firewall after identifying the problem. A better approach involves checking current rules and adding specific exceptions rather than wholesale disabling security measures.
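
Re-enabling afterwards is quick (the iptables-restore path assumes the iptables-persistent layout; adjust for your distribution):

sudo ufw enable
sudo systemctl start firewalld
# If you manage iptables by hand and save rules with iptables-persistent:
sudo iptables-restore < /etc/iptables/rules.v4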

Port accessibility testing uses tools like telnet, nc (netcat), or nmap. These tools attempt connections to specific ports, helping identify whether firewall rules block the traffic:

nc -zv example.com 80
telnet example.com 443
nmap -p 22,80,443 example.com

"The principle of least privilege applies to firewall rules. Opening all ports for convenience creates security vulnerabilities. Instead, identify exactly which ports your services need and create specific rules for those ports."

Configuring Firewall Rules Properly

Proper firewall configuration balances security with functionality. For UFW, allowing specific services or ports is straightforward:

sudo ufw allow 22/tcp
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

Firewalld uses zones to group interfaces and apply different rule sets. The public zone is typically used for internet-facing interfaces, while internal zones have more permissive rules:

sudo firewall-cmd --zone=public --add-service=http --permanent
sudo firewall-cmd --zone=public --add-service=https --permanent
sudo firewall-cmd --zone=public --add-port=8080/tcp --permanent
sudo firewall-cmd --reload

For iptables, rules must specify the chain, protocol, port, and action. INPUT chain rules control incoming connections, while OUTPUT controls outgoing traffic:

sudo iptables -A INPUT -p tcp --dport 22 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 80 -j ACCEPT
sudo iptables -A INPUT -p tcp --dport 443 -j ACCEPT

Connection tracking states improve firewall efficiency and security. The ESTABLISHED and RELATED states allow return traffic for connections initiated by your system, eliminating the need for separate outbound rules:

sudo iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

Network Service Problems and Socket Issues

Application-level network services depend on all lower layers functioning correctly. Even with perfect network connectivity, misconfigured services fail to accept connections or communicate properly. Service troubleshooting requires understanding how applications bind to network sockets and listen for connections.

The ss command (socket statistics) has largely replaced netstat for examining network sockets. It shows which services are listening on which ports, the state of connections, and which processes own the sockets:

ss -tulpn
# Options: -t (TCP), -u (UDP), -l (listening), -p (process), -n (numeric)

Common service problems include services not running, listening on wrong interfaces or ports, or permission issues preventing binding to privileged ports (below 1024). The systemctl command manages services on systemd-based distributions:

systemctl status nginx
systemctl status apache2
systemctl status sshd

Resolving Service Binding Issues

Services must bind to network interfaces and ports to accept connections. Configuration errors often cause services to bind to localhost (127.0.0.1) only, making them inaccessible from other machines. Service configuration files specify binding addresses, typically using directives like "listen", "bind", or "address".

For example, SSH configuration in /etc/ssh/sshd_config uses the ListenAddress directive. If set to 127.0.0.1, SSH only accepts local connections. Setting it to 0.0.0.0 (all IPv4 interfaces) or :: (all IPv6 interfaces) makes the service accessible from the network:

ListenAddress 0.0.0.0
Port 22

Web servers like Apache and Nginx have similar configuration requirements. Apache's Listen directive and Nginx's listen directive in server blocks control binding behavior. Always verify that services listen on the expected interfaces:

sudo ss -tlnp | grep :80
sudo ss -tlnp | grep :443

"Port conflicts cause service startup failures that appear as network problems. Always check whether another service already uses the port before troubleshooting network connectivity."

Testing Service Connectivity

Testing service connectivity from different perspectives helps isolate problems. Local testing (from the same machine) uses localhost or 127.0.0.1, while remote testing uses the machine's network IP address. If local connections work but remote connections fail, the problem likely involves firewall rules or service binding configuration.

The curl command tests HTTP/HTTPS services, providing detailed information about connection establishment, TLS handshakes, and response times:

curl -v http://localhost:80
curl -v http://192.168.1.100:80
curl -I https://example.com

For non-HTTP services, netcat provides generic TCP/UDP connectivity testing. It can both connect to services and create simple test servers:

# Test connection to a service
nc -zv 192.168.1.100 22

# Create a simple test listener
nc -l 8080

Service logs provide crucial troubleshooting information. Most services log to systemd's journal, accessible through journalctl, or to files in /var/log/. Common log locations include /var/log/apache2/, /var/log/nginx/, and /var/log/syslog. Always check logs when services fail to start or accept connections:

journalctl -u nginx -n 50
journalctl -u apache2 --since "10 minutes ago"
tail -f /var/log/nginx/error.log

Performance Problems and Network Latency

Network performance issues differ from connectivity failures. Connections work, but they're slow, unstable, or exhibit high latency. These problems often have more subtle causes: congestion, incorrect MTU settings, driver issues, or hardware problems. Diagnosing performance issues requires measurement tools and understanding of network metrics.

Latency, bandwidth, and packet loss are the three primary network performance metrics. High latency causes delays in interactive applications, low bandwidth limits data transfer rates, and packet loss requires retransmissions that further degrade performance. Each problem has different causes and solutions.

Measuring Network Performance

The ping command measures latency and packet loss. Extended ping tests reveal patterns: consistent high latency indicates congestion or routing inefficiency, while variable latency suggests interference or resource contention:

ping -c 100 example.com
ping -i 0.2 -c 500 192.168.1.1

The mtr (my traceroute) command combines ping and traceroute functionality, showing latency statistics for each hop in the path to a destination. This helps identify where performance problems occur:

mtr --report --report-cycles 100 example.com

Bandwidth testing requires tools like iperf3, which measures maximum achievable throughput between two systems. One system runs as a server, while the other connects and performs bandwidth tests:

# On server:
iperf3 -s

# On client:
iperf3 -c server-ip-address -t 30

"Network performance problems often occur at unexpected locations. The bottleneck might be your ISP, a misconfigured switch, wireless interference, or even a failing network cable. Systematic testing identifies the actual cause."

Resolving MTU and Fragmentation Issues

Maximum Transmission Unit (MTU) size affects network performance significantly. The MTU determines the largest packet size that can be transmitted without fragmentation. Most Ethernet networks use an MTU of 1500 bytes, but VPNs, tunnels, and some ISPs require smaller values. Incorrect MTU settings cause performance problems or complete connection failures for certain traffic types.

Path MTU Discovery (PMTUD) automatically determines the correct MTU for a path, but firewall rules blocking ICMP can break this mechanism. Testing the path MTU manually uses ping with the "don't fragment" flag:

ping -M do -s 1472 example.com
# 1472 bytes + 28 bytes (IP + ICMP headers) = 1500 bytes total
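
A short shell loop automates the search (a sketch; the target host and probe sizes are arbitrary):

for size in 1472 1464 1436 1400 1372; do
    ping -M do -s $size -c 1 -W 2 example.com > /dev/null 2>&1 && echo "OK at $size bytes"
done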

If packets are dropped, reduce the size until they succeed. The working size plus 28 bytes is your path MTU. Configure this value on your interface:

sudo ip link set dev enp3s0 mtu 1450

For permanent MTU configuration, the method depends on your network management system. NetworkManager accepts MTU settings in connection profiles, while systemd-networkd uses the MTUBytes directive in network unit files:

[Link]
MTUBytes=1450

Wireless Network Specific Issues

Wireless networks introduce additional complexity and failure modes compared to wired connections. Signal strength, interference, authentication methods, and regulatory domain settings all affect wireless connectivity. Wireless problems often manifest as intermittent failures, slow speeds, or inability to connect to access points.

The iw and iwconfig commands provide wireless-specific information. Modern systems prefer iw, which supports newer wireless standards and features:

iw dev wlp2s0 info
iw dev wlp2s0 scan | grep -E 'SSID|signal|freq'
iw dev wlp2s0 link

Wireless Signal and Interference Problems

Signal strength directly affects connection quality and speed. The signal level shown by iw is in dBm (decibels relative to one milliwatt), where values closer to 0 are stronger. Typically, -50 dBm is excellent, -60 dBm is good, -70 dBm is fair, and below -80 dBm is poor.

Interference from other wireless networks, microwave ovens, Bluetooth devices, or physical obstacles degrades performance. The 2.4 GHz band is particularly crowded, with only three non-overlapping channels (1, 6, and 11). Using 5 GHz networks when available reduces interference:

iw dev wlp2s0 scan | grep -E 'freq|SSID|signal' | less

Driver and firmware issues cause many wireless problems. Wireless chipsets require firmware files loaded by the kernel. Missing or outdated firmware prevents wireless functionality entirely. Check dmesg for firmware-related errors:

dmesg | grep -i firmware
dmesg | grep -i wireless

Installing distribution-specific firmware packages resolves most firmware issues. For example, Debian-based systems use firmware-iwlwifi for Intel wireless cards, while other manufacturers have their own packages.
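
On Debian or Ubuntu, that typically looks like the following (the package shown matches Intel chipsets; non-free/restricted repositories may need to be enabled first):

sudo apt install firmware-iwlwifi
# Reload the driver so the new firmware is picked up (or reboot):
sudo modprobe -r iwlwifi && sudo modprobe iwlwifi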

Wireless Authentication and Security

WPA/WPA2/WPA3 authentication problems prevent connection establishment. The wpa_supplicant service handles wireless authentication on most Linux systems. NetworkManager typically manages wpa_supplicant automatically, but manual configuration might be necessary for troubleshooting.

Authentication failures appear in system logs with specific error messages indicating the problem type: wrong password, unsupported authentication method, or timeout waiting for responses:

journalctl -u wpa_supplicant -n 50
journalctl -u NetworkManager | grep -i wpa

Enterprise wireless networks using 802.1X authentication (WPA Enterprise) require additional configuration including certificates, authentication methods (PEAP, TTLS, TLS), and identity information. These configurations are more complex and prone to errors. Testing with a simpler authentication method (like WPA2 Personal) helps determine whether the problem is authentication-specific or more fundamental.
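
For reference, a minimal wpa_supplicant network block for PEAP with MSCHAPv2 might look like this sketch (the SSID, identity, and password are placeholders; your network's requirements may differ):

network={
    ssid="CorpWiFi"
    key_mgmt=WPA-EAP
    eap=PEAP
    identity="user@example.com"
    password="changeme"
    phase2="auth=MSCHAPV2"
}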

Advanced Diagnostic Techniques

Complex network problems require advanced diagnostic tools and techniques. Packet capture and analysis reveal exactly what's happening on the network, while traffic monitoring helps identify patterns and anomalies. These techniques are essential for problems that resist simpler troubleshooting methods.

Packet Capture with tcpdump

The tcpdump command captures network packets for analysis. It shows actual network traffic, revealing problems invisible to higher-level tools. Basic tcpdump usage captures all traffic on an interface:

sudo tcpdump -i enp3s0
sudo tcpdump -i enp3s0 -n
sudo tcpdump -i enp3s0 -nn -v

Filters limit capture to relevant traffic, reducing noise and making analysis easier. Berkeley Packet Filter (BPF) syntax allows filtering by protocol, port, host, or combination of criteria:

# Capture only HTTP traffic
sudo tcpdump -i enp3s0 port 80

# Capture traffic to/from specific host
sudo tcpdump -i enp3s0 host 192.168.1.100

# Capture DNS queries
sudo tcpdump -i enp3s0 port 53

# Complex filter: HTTP traffic to/from specific subnet
sudo tcpdump -i enp3s0 'port 80 and net 192.168.1.0/24'

Saving captures to files allows analysis with Wireshark or other tools. The -w option writes packets to a file in pcap format:

sudo tcpdump -i enp3s0 -w capture.pcap
sudo tcpdump -i enp3s0 -w capture.pcap -C 100 -W 5

"Packet captures reveal the truth about network behavior. When application logs and service status checks provide conflicting information, packet analysis shows exactly what's happening on the wire."

Network Traffic Analysis and Monitoring

Continuous monitoring helps identify intermittent problems and usage patterns. Tools like iftop, nethogs, and nload provide real-time network traffic visualization:

sudo iftop -i enp3s0
sudo nethogs enp3s0
nload enp3s0

These tools show which connections consume bandwidth, helping identify unexpected traffic, bandwidth-intensive applications, or potential security issues. Nethogs specifically shows per-process network usage, making it easy to identify which application causes high traffic.

For long-term monitoring and trend analysis, vnstat records interface statistics over time with minimal overhead, reading kernel counters through a lightweight daemon rather than capturing traffic:

vnstat -i enp3s0
vnstat -i enp3s0 -d  # Daily statistics
vnstat -i enp3s0 -m  # Monthly statistics

Configuration File Management and Best Practices

Network configuration files control how your system behaves on the network. Understanding these files' locations, formats, and precedence rules is essential for maintaining stable network configurations. Different distributions and network management systems use different configuration approaches, but certain principles apply universally.

Always backup configuration files before making changes. A simple copy with a timestamp suffix provides an easy rollback path:

sudo cp /etc/network/interfaces /etc/network/interfaces.backup-$(date +%Y%m%d)
sudo cp /etc/NetworkManager/system-connections/connection.nmconnection /etc/NetworkManager/system-connections/connection.nmconnection.backup

Key Configuration File Locations

Several critical files control network behavior across most Linux distributions. Understanding their purposes and interactions prevents configuration conflicts:

  • /etc/resolv.conf - DNS resolver configuration, often auto-generated
  • /etc/hosts - Static hostname to IP address mappings, checked before DNS
  • /etc/nsswitch.conf - Name service switch configuration, controls resolution order
  • /etc/hostname - System hostname
  • /etc/network/interfaces - Network interface configuration (Debian/Ubuntu ifupdown)
  • /etc/sysconfig/network-scripts/ - Network interface configuration (RHEL/CentOS traditional)
  • /etc/NetworkManager/system-connections/ - NetworkManager connection profiles
  • /etc/systemd/network/ - systemd-networkd configuration files
  • /etc/netplan/ - Netplan configuration files (Ubuntu Server)

The /etc/nsswitch.conf file deserves special attention. It controls the order in which name resolution methods are tried. The hosts line typically reads:

hosts: files dns

This configuration checks /etc/hosts before querying DNS. Adding "myhostname" enables systemd hostname resolution, while "mdns" enables multicast DNS for .local domains. Understanding this order explains why DNS changes might not take effect if /etc/hosts contains conflicting entries.
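
A fuller hosts line on a desktop system often looks something like this (the exact modules and their order vary by distribution):

hosts: files myhostname mdns4_minimal [NOTFOUND=return] dns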

Version Control for Network Configurations

Maintaining network configuration in version control provides change tracking, rollback capabilities, and documentation. Git works well for this purpose, even on single systems:

cd /etc
sudo git init
sudo git add network/ NetworkManager/ systemd/network/ netplan/
sudo git commit -m "Initial network configuration"

After configuration changes, commit the changes with descriptive messages:

sudo git add -A
sudo git commit -m "Changed DNS servers to use Cloudflare"

This approach provides a complete history of network configuration changes, making it easy to identify when problems started and what changed.
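
When a problem appears, the history (viewed from within /etc) shows what changed and when:

sudo git log --oneline
sudo git diff HEAD~1 -- systemd/network/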

Common Error Messages and Their Solutions

Specific error messages provide clues about network problems. Recognizing common messages and understanding their causes accelerates troubleshooting. These errors appear in various logs, command output, and application messages.

Connection Refused

"Connection refused" indicates that the remote system actively rejected the connection attempt. This means network connectivity exists, but no service is listening on the target port. Common causes include:

  • Service not running on the target system
  • Service listening on a different port than expected
  • Service configured to listen only on localhost
  • Firewall rules blocking the connection

Verify the service status and listening ports on the target system using ss or netstat. Check firewall rules on both client and server systems.

Network Unreachable

"Network unreachable" indicates routing problems. The system cannot find a route to the destination network. This typically means:

  • No default gateway configured
  • Missing specific route to the destination network
  • Interface not properly configured or down

Check routing table with "ip route show" and verify gateway configuration. Ensure the network interface has an IP address and is in the UP state.

Temporary Failure in Name Resolution

This error indicates DNS problems. The system cannot resolve hostnames to IP addresses. Common causes include:

  • DNS servers unreachable or not configured
  • /etc/resolv.conf missing or empty
  • DNS service (systemd-resolved) not running
  • Firewall blocking DNS traffic (UDP/TCP port 53)

Verify DNS configuration in /etc/resolv.conf, test DNS servers with dig or nslookup, and check systemd-resolved status if applicable.

No Route to Host

"No route to host" suggests layer 2 (data link) problems or that the destination host is down. Unlike "network unreachable," this error means a route exists, but the destination cannot be reached:

  • Destination host is down or unreachable
  • ARP resolution fails for the destination
  • Firewall on destination host blocks all traffic
  • VLAN or switch configuration problems

Check ARP table with "ip neigh show", verify the destination host is running, and test connectivity from other systems on the same network.

Operation Not Permitted

Permission errors in networking contexts usually relate to firewall rules, SELinux policies, or attempting privileged operations without root access:

  • SELinux blocking network operations
  • Firewall rules explicitly denying traffic
  • Attempting to bind to privileged ports (<1024) without root
  • AppArmor or other security modules restricting network access

Check SELinux status with "getenforce" and review audit logs. Verify firewall rules and ensure services run with appropriate permissions.
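
On SELinux systems, recent denials can be pulled from the audit log (requires the audit tools):

getenforce
sudo ausearch -m avc -ts recent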

Frequently Asked Questions

Why can I ping IP addresses but not domain names?

This indicates a DNS resolution problem. Your network connectivity works fine, but DNS servers are unreachable, misconfigured, or not responding. Check /etc/resolv.conf for valid DNS server entries, test DNS servers directly with dig or nslookup, and verify that firewall rules allow DNS traffic on UDP/TCP port 53. If using systemd-resolved, check its status with "resolvectl status".

How do I determine which network management system my distribution uses?

Check which services are active: "systemctl status NetworkManager", "systemctl status systemd-networkd", or look for configuration files in /etc/NetworkManager/, /etc/systemd/network/, or /etc/network/interfaces. Ubuntu Desktop typically uses NetworkManager, while Ubuntu Server 18.04+ uses netplan with systemd-networkd as the backend. RHEL/CentOS systems use NetworkManager by default.

What should I do when network changes don't take effect after editing configuration files?

Many network configuration files are auto-generated and get overwritten. Instead of editing them directly, use the appropriate management tool for your system: nmcli for NetworkManager, networkctl for systemd-networkd, or netplan apply for netplan-based systems. After making changes through the proper interface, restart the network service or bring the connection down and up again.

Why does my network connection drop intermittently?

Intermittent connection drops have various causes: wireless signal interference, failing hardware, driver issues, power management settings, or DHCP lease renewal problems. Check dmesg and system logs for error messages, monitor signal strength for wireless connections, and test with a different cable for wired connections. For wireless interfaces, disable power saving with "iw dev interface set power_save off"; note that "ethtool -s interface wol d" disables Wake-on-LAN, which helps only when spurious wake events are involved. For wireless, also try changing channels or switching to 5 GHz if available.

How can I test network performance between two Linux systems?

Use iperf3, which provides accurate bandwidth measurements. Install iperf3 on both systems, run "iperf3 -s" on one system (server), and "iperf3 -c server-ip" on the other (client). The test shows throughput, packet loss, and jitter. For latency testing, use ping or mtr. For real-world application performance, use curl with timing information or scp with time measurement for large file transfers.

What's the difference between "ip" and "ifconfig" commands?

The ip command is the modern replacement for ifconfig, providing more features and better support for advanced networking. While ifconfig still works on many systems, it's deprecated and doesn't support newer features like multiple addresses per interface, policy routing, or VRFs. Use "ip addr" instead of "ifconfig", "ip route" instead of "route", and "ip link" for interface management. The ip command also provides more detailed and accurate information.

How do I make network configuration changes permanent?

Permanence depends on your network management system. For NetworkManager, use nmcli to modify connection profiles, which persist automatically. For systemd-networkd, edit files in /etc/systemd/network/ and restart the service. For traditional ifupdown (Debian), edit /etc/network/interfaces. For netplan (Ubuntu Server), edit YAML files in /etc/netplan/ and run "netplan apply". Avoid using ip commands directly for permanent changes, as they only affect the running configuration.

Why does my firewall keep blocking connections I've explicitly allowed?

Multiple firewall layers might be active simultaneously. Check all potential firewall systems: iptables, nftables, firewalld, and ufw. Docker and libvirt create their own firewall rules that might interfere. SELinux or AppArmor might block connections even when firewall rules allow them. Use "iptables -L -n -v", "nft list ruleset", or "firewall-cmd --list-all" to view all active rules. Check the order of rules, as the first matching rule determines the action.