What Does the df Command Do?
Displays filesystem disk space usage, reporting total, used, and available blocks plus the mount point for each mounted filesystem; supports options such as -h for human-readable sizes and -T to show filesystem types.
Understanding the df Command
Managing disk space effectively stands as one of the fundamental responsibilities in system administration and everyday computing. Running out of storage can bring systems to a grinding halt, corrupt data, and cause applications to fail unexpectedly. Whether you're maintaining enterprise servers or managing your personal workstation, understanding how much space remains available and where it's being consumed becomes critical for preventing disruptions and maintaining optimal system performance.
The df command is a disk space reporting utility found in Unix-like operating systems. It gives administrators and users immediate visibility into mounted file systems, showing how much space has been used, how much remains available, and where each file system is mounted in the directory structure. The command delivers multiple perspectives on storage utilization, from basic capacity reporting to detailed inode usage analysis.
Throughout this comprehensive exploration, you'll discover the complete functionality of the df command, including its various options and practical applications. You'll learn how to interpret its output effectively, understand the difference between various measurement units, and apply advanced filtering techniques to focus on specific file systems. Additionally, you'll gain insights into common scenarios where df proves invaluable, troubleshooting approaches when dealing with disk space issues, and best practices for integrating this command into your regular system monitoring routines.
Understanding Basic Functionality and Core Purpose
The df command derives its name from "disk free" or "disk filesystem," serving as the primary tool for reporting file system disk space usage across Unix, Linux, BSD, and macOS operating systems. When executed without any arguments, it displays information about all currently mounted file systems, presenting data in a tabular format that includes the file system name, total size, used space, available space, usage percentage, and mount point.
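As a quick illustration, a bare invocation might print something like the following; the /dev/sda1 figures here are the same example values interpreted in the table later in this section:

```bash
$ df
Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/sda1      102400000 61440000  35840000  63% /home
```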
At its core, this utility reads information from the kernel's file system statistics, providing a snapshot of storage utilization at the moment of execution. Unlike commands that scan directories and calculate sizes recursively, df operates at the file system level, making it extremely fast regardless of how many files exist on the system. This efficiency makes it ideal for quick checks and automated monitoring scripts that need to run frequently without imposing significant system overhead.
"The difference between knowing your disk usage and guessing it can mean the difference between proactive maintenance and emergency firefighting when systems fail."
The command operates by querying the statfs or statvfs system calls, which return information about mounted file systems directly from kernel data structures. This approach ensures accuracy and speed, as the kernel maintains these statistics continuously as part of its normal file system operations. The information displayed reflects the actual state of the file system as understood by the operating system, accounting for reserved blocks, file system overhead, and any space set aside for privileged users.
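If you want to see those kernel-level statistics directly, GNU coreutils' stat can report file system status for any path. This is a sketch for Linux; BSD and macOS stat use different flags:

```bash
# display filesystem status (not file status) for the filesystem holding /home
stat -f /home
# reported fields include the block size, total/free/available blocks, and inode totals
```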
Standard Output Format and Column Interpretation
When you execute df without options, the output typically contains six columns of information. The first column identifies the file system device or resource, which might be a physical disk partition like /dev/sda1, a network file system like nfs-server:/export, or a virtual file system like tmpfs. Understanding this column helps identify the actual storage backing each mount point.
The second through fifth columns present numerical data about space utilization. The total size indicates the complete capacity of the file system, while the used column shows how much space currently contains data. The available column reveals space that can still be written to, and the capacity percentage provides a quick visual indicator of how full the file system has become. These values typically appear in 1K blocks by default on most systems, though this can vary by implementation.
| Column Name | Description | Example Value | Interpretation |
|---|---|---|---|
| Filesystem | Device or resource name | /dev/sda1 | Physical partition on first SATA drive |
| 1K-blocks | Total size in kilobytes | 102400000 | Approximately 100GB total capacity |
| Used | Space currently occupied | 61440000 | Approximately 60GB in use |
| Available | Space remaining for use | 35840000 | Approximately 35GB free |
| Use% | Percentage of capacity used | 63% | File system is 63% full |
| Mounted on | Directory path where accessible | /home | Accessible under /home directory |
The final column shows the mount point, which represents the directory path where the file system has been attached to the directory tree. This location determines where users and applications access the storage. A file system mounted at /home contains all user home directories, while one mounted at /var typically holds variable data like logs and temporary files. Multiple file systems can exist on a single physical disk, each mounted at different points in the directory hierarchy.
Essential Command Options and Variations
The df command provides numerous options that modify its behavior and output format to suit different needs and preferences. These options allow you to control measurement units, filter displayed file systems, and adjust the level of detail presented. Mastering these variations enables more efficient workflows and clearer communication when sharing disk space information with colleagues or in documentation.
Human-Readable Output Formatting
The most frequently used option is -h or --human-readable, which transforms the default 1K-block output into sizes expressed with unit suffixes like K, M, G, and T. This formatting makes the information immediately comprehensible without mental arithmetic to convert kilobytes into more familiar units. When you execute df -h, a value of 63G is instantly more meaningful than 66060288.
A related option, -H or --si, uses powers of 1000 instead of 1024 for unit calculations, aligning with SI (International System of Units) standards. This distinction matters when comparing disk space reports with manufacturer specifications, as hard drive vendors typically advertise capacities using decimal (base-10) calculations. The difference between these two options can result in noticeable discrepancies for larger storage volumes.
For situations requiring specific units regardless of size, the -k, -m, and --block-size options provide precise control. Using -k forces output in kilobytes, -m in megabytes, while --block-size=SIZE allows arbitrary block sizes like --block-size=1G for gigabyte reporting. These options prove valuable in scripts that need consistent formatting for parsing or when generating reports that must match specific formatting requirements.
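The sketch below shows these unit options side by side on a single mount point (the path is an example):

```bash
df -h /home               # human-readable, base-1024 units, e.g. 98G
df -H /home               # SI units, base-1000, e.g. 105G
df -k /home               # kilobytes (1K blocks)
df -m /home               # megabytes
df --block-size=1G /home  # report in whole gigabyte blocks
```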
Filtering and Selective Display Options
Real systems often mount numerous file systems, including many virtual or special-purpose file systems that clutter output without providing useful information about actual storage. The -t or --type option filters output to show only file systems of a specified type. For example, df -t ext4 displays only ext4 file systems, while df -t nfs shows only network file systems. This filtering helps focus attention on relevant storage when troubleshooting or monitoring specific subsystems.
"Filtering out noise from your disk space reports transforms a wall of data into actionable intelligence that drives better decisions."
Conversely, the -x or --exclude-type option removes specified file system types from the output. The command df -x tmpfs -x devtmpfs eliminates temporary and device file systems, showing only persistent storage. This approach often proves more practical than including specific types, as it removes the clutter of pseudo file systems that exist only in memory and don't represent actual disk usage.
The -l or --local option restricts output to local file systems only, excluding network-mounted storage like NFS or CIFS shares. This filtering becomes particularly useful on systems with numerous network mounts, where you want to focus exclusively on locally attached storage. Similarly, the --total option adds a final row summarizing the total space across all displayed file systems, providing an aggregate view of storage utilization.
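Put together, these filters look like this in practice:

```bash
# show only persistent storage: drop in-memory pseudo filesystems
df -h -x tmpfs -x devtmpfs
# local filesystems only, skipping NFS and CIFS mounts
df -hl
# add an aggregate summary row across everything displayed
df -h --total
```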
Inode Information and Advanced Reporting
Beyond space usage, file systems also have limits on the number of files they can contain, tracked through data structures called inodes. The -i or --inodes option switches df's output from displaying space usage to showing inode usage. This view reveals how many inodes exist, how many are used, how many remain available, and the usage percentage. Running out of inodes prevents creating new files even when space remains available, making this information critical for systems with many small files.
The output format when using -i mirrors the standard display but substitutes inode counts for block counts. The IUsed column shows inodes currently allocated to files and directories, while IFree indicates remaining inodes available for new file system objects. The IUse% column provides the percentage of inodes consumed, and monitoring this metric proves essential for systems hosting applications that generate numerous small files, such as mail servers or web caches.
Additional options include -a or --all, which shows all file systems including those with zero blocks, and -P or --portability, which uses POSIX-compliant output format. The portable format ensures consistent output across different Unix implementations, making it valuable in heterogeneous environments or when writing portable scripts. The -T or --print-type option adds a column showing the file system type, providing useful context without requiring separate commands.
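These views combine freely with the formatting options covered above (shown here for GNU df):

```bash
df -i    # inode counts instead of block counts
df -ih   # the same, with human-readable counts
df -hT   # human-readable sizes plus a filesystem-type column
df -P    # POSIX-stable layout for scripts
```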
Practical Applications and Real-World Scenarios
Understanding the df command's capabilities becomes truly valuable when applied to actual system administration tasks and troubleshooting scenarios. These practical applications demonstrate how the command integrates into daily workflows, automated monitoring, and problem resolution processes. From routine health checks to emergency response, df serves as a first-line tool for storage management.
Routine System Monitoring and Health Checks
Regular monitoring of disk space prevents unexpected outages and performance degradation. System administrators typically incorporate df into daily or hourly monitoring routines, checking for file systems approaching capacity thresholds. A common practice involves running df -h at the start of each day to establish a baseline understanding of storage consumption patterns and identify any unusual growth that might indicate problems.
Automated monitoring scripts frequently combine df with threshold checking to generate alerts when file systems exceed predefined capacity limits. These scripts might execute df -P for consistent parsing, extract the usage percentage for each mount point, and trigger notifications when values exceed 80% or 90%. This proactive approach allows intervention before space exhaustion causes service disruptions or data loss.
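A minimal sketch of such a check follows; the 85% limit is an example value, and the parsing assumes mount points without embedded spaces:

```bash
#!/bin/sh
# flag any mount point whose usage exceeds the limit
df -P | awk -v limit=85 '
    NR > 1 {
        use = $5
        sub(/%/, "", use)                 # strip the trailing % sign
        if (use + 0 > limit) printf "ALERT: %s at %s%%\n", $6, use
    }'
```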
"Monitoring disk space isn't about preventing the inevitable growth of data, but about ensuring that growth happens predictably and within managed boundaries."
Troubleshooting Space Exhaustion Issues
When systems report "No space left on device" errors, df provides the starting point for investigation. Running df -h immediately reveals which file system has filled up, directing attention to the appropriate mount point. However, situations arise where df shows available space but operations still fail, often indicating inode exhaustion rather than space exhaustion. In these cases, df -i reveals the true problem, showing 100% inode usage even with available blocks.
Another common scenario involves discrepancies between df output and the sum of file sizes within a file system. This situation typically occurs when processes hold open file handles to deleted files, preventing the space from being reclaimed. The operating system still reserves the space until the process closes the file or terminates. Identifying these situations requires combining df with tools like lsof to find processes with deleted files still open, then deciding whether to restart services or wait for natural process termination.
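Two common lsof idioms for this investigation (lsof must be installed and usually needs root):

```bash
# list open files whose link count has dropped below one, i.e. deleted files
sudo lsof +L1
# on Linux, grep for the kernel's "(deleted)" marker in the NAME column
sudo lsof -nP | grep '(deleted)'
```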
Capacity Planning and Growth Projection
Organizations use df output collected over time to analyze storage growth trends and plan capacity expansions. By capturing df results daily or weekly, administrators can calculate growth rates and project when file systems will reach capacity. This historical data enables informed decisions about when to add storage, which systems require attention first, and how much capacity to provision.
The command proves particularly valuable when combined with other tools in capacity planning workflows. Scripts might execute df, store results in a database, and generate graphs showing utilization trends over months or years. These visualizations help justify budget requests for storage infrastructure and demonstrate the effectiveness of space reclamation initiatives like log rotation or archive management.
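A one-line collector is enough to start building that history; the log path here is an example:

```bash
# append a dated usage snapshot for every mounted filesystem
df -P | awk -v ts="$(date +%F)" 'NR > 1 { print ts, $6, $5 }' >> /var/log/df-history.log
```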
Script Integration and Automation
The df command's consistent output format and reliable behavior make it ideal for integration into shell scripts and automation frameworks. Scripts commonly parse df output to make decisions about backups, cleanup operations, or resource allocation. Using df -P ensures POSIX-compliant output that remains consistent across different systems, reducing the complexity of parsing logic.
For example, a backup script might check available space before starting a backup operation, using df to verify sufficient room exists for the backup data. If space falls below a threshold, the script might clean up old backups, send alerts to administrators, or skip the backup entirely to prevent partial or corrupted backup files. This defensive programming prevents cascading failures and ensures reliable backup operations.
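A sketch of that defensive check, assuming a backup target at /backup and a 10 GiB floor (both are example values):

```bash
#!/bin/sh
NEEDED_KB=$((10 * 1024 * 1024))   # 10 GiB expressed in 1K blocks
AVAIL_KB=$(df -P /backup | awk 'NR == 2 { print $4 }')
if [ "$AVAIL_KB" -lt "$NEEDED_KB" ]; then
    echo "insufficient space on /backup; skipping backup" >&2
    exit 1
fi
# ... proceed with the backup ...
```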
| Use Case | Recommended Command | Purpose | Typical Threshold |
|---|---|---|---|
| Daily health check | df -h | Quick visual inspection of space usage | Alert at 80% usage |
| Automated monitoring | df -P | Consistent parsing in scripts | Alert at 85% usage |
| Inode monitoring | df -i | Check file count limits | Alert at 90% usage |
| Local storage only | df -hl | Exclude network mounts | Alert at 80% usage |
| Specific filesystem type | df -t ext4 | Focus on particular storage type | Alert at 85% usage |
| Total capacity summary | df -h --total | Aggregate storage overview | Alert at 75% total usage |
Network Storage and Remote File Systems
In environments with network-attached storage, df reports on NFS, CIFS, and other remote file systems just as it does for local storage. However, network file systems introduce additional considerations. The reported space reflects the remote server's capacity and usage, meaning multiple clients might see the same space statistics. This shared nature requires coordination when monitoring and managing capacity.
Network connectivity issues can cause df to hang when querying remote file systems, as the command waits for responses from unresponsive servers. Using the -l option excludes these network mounts, allowing df to complete quickly even when network storage is unavailable. For specific monitoring of network storage, targeting those file systems explicitly with -t nfs ensures you're checking the resources you care about without interference from local file systems.
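One defensive pattern is to bound the query with coreutils' timeout so an unresponsive server cannot stall the whole check:

```bash
# give the NFS query five seconds before giving up
timeout 5 df -h -t nfs || echo "NFS query timed out or failed" >&2
# or skip network mounts entirely
df -hl
```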
Understanding Output Discrepancies and Common Confusion Points
Users frequently encounter situations where df's output seems inconsistent with expectations or contradicts information from other commands. These apparent discrepancies usually stem from fundamental differences in how various tools measure and report storage usage. Understanding these differences prevents misinterpretation and helps identify genuine issues versus normal system behavior.
Differences Between df and du Commands
The most common source of confusion arises when comparing df output with results from the du (disk usage) command. While df reports file system level statistics maintained by the kernel, du recursively scans directories and sums file sizes. These approaches can yield different results for several reasons, all related to how file systems actually work beneath their simple hierarchical appearance.
"The space reported by df represents truth from the file system's perspective, while du shows truth from the directory tree's perspective, and these truths don't always align."
Deleted files with open file handles represent a primary cause of discrepancies. When a process opens a file and the file gets deleted, the directory entry disappears but the file system continues reserving the space until the process closes the file handle. The du command won't count this space because it can't see the deleted file in the directory structure, but df reports it as used because the file system hasn't reclaimed the blocks. This situation commonly occurs with log files that get deleted while applications still write to them.
Reserved blocks for the root user also create apparent discrepancies. Most Linux file systems reserve 5% of space for the root user, ensuring system processes can continue operating even when regular users fill the file system. The df command shows this reserved space as used, but regular users running du can't access these blocks and won't count them. This protection mechanism prevents complete system lockup when file systems fill, maintaining enough space for critical operations.
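On ext-family file systems, the reservation is visible and adjustable with tune2fs; the device name below is an example, and changing the reservation on a live system should be done with care:

```bash
# inspect the current root reservation
sudo tune2fs -l /dev/sda1 | grep -i 'reserved block count'
# lower it to 1% on a data-only volume, if appropriate
sudo tune2fs -m 1 /dev/sda1
```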
Sparse Files and Actual Disk Usage
Sparse files contain large regions of zeros that the file system doesn't actually store on disk, instead recording only the non-zero portions. These files appear to consume their full size when examined with ls or du, but df shows only the actual blocks allocated on disk. Virtual machine disk images and database files frequently use sparse allocation to conserve space while presenting a consistent interface to applications.
This optimization means a file might report as 100GB in size while consuming only 20GB of actual disk space. The df command reflects this reality, showing the 20GB as used space, while naive size calculations might suggest 100GB consumption. Understanding sparse files prevents panic when file sizes appear to exceed available space and helps explain why copying sparse files to systems without sparse file support suddenly requires dramatically more space.
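You can reproduce the effect safely in a scratch directory:

```bash
# create a 1 GiB sparse file with no data blocks allocated
truncate -s 1G sparse.img
ls -lh sparse.img   # apparent size: 1.0G
du -h sparse.img    # allocated blocks: typically 0
```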
File System Overhead and Metadata
File systems require space for their own operational structures, including inodes, block bitmaps, journal data, and directory structures. This overhead means the total space reported by df as available for data will always be less than the raw partition size. The amount of overhead varies by file system type, with journaling file systems like ext4 and XFS requiring more space for their transaction logs than simpler file systems.
Additionally, file systems don't pack data perfectly efficiently. Files must align to block boundaries, meaning a 1-byte file consumes an entire block (typically 4KB). This internal fragmentation accumulates across thousands or millions of files, creating apparent space usage that doesn't correspond directly to the sum of file sizes. Systems with many small files experience more significant overhead from this effect than those storing fewer large files.
Advanced Techniques and Power User Approaches
Beyond basic usage, combining df with other commands and employing advanced filtering techniques unlocks powerful capabilities for system analysis and automation. These approaches transform df from a simple reporting tool into a component of sophisticated monitoring and management systems.
Combining df with grep and awk for Targeted Analysis
Piping df output through text processing tools enables precise extraction of specific information. The command df -h | grep '^/dev/' filters output to show only actual disk partitions, excluding pseudo file systems. This filtering provides a cleaner view when you care only about physical storage, removing clutter from tmpfs, devtmpfs, and other virtual file systems that don't represent persistent storage.
More sophisticated filtering uses awk to extract specific columns or perform calculations. The command df -P | awk '$5+0 > 80 {print $6, $5}' identifies file systems exceeding 80% capacity and prints their mount points with usage percentages. This approach enables rapid identification of problematic file systems in automated scripts, where the output can trigger alerts or remediation actions without human intervention.
Creating Alert Systems with df
Building alert systems around df involves periodically executing the command, parsing its output, and triggering notifications when thresholds are exceeded. A simple shell script might run df -P, extract usage percentages, compare them against defined limits, and send email or log messages when violations occur. More sophisticated implementations integrate with monitoring platforms like Nagios, Zabbix, or Prometheus, providing historical tracking and trend analysis.
These monitoring systems typically implement multiple threshold levels, distinguishing between warning conditions (perhaps 80% usage) and critical conditions (90% or 95% usage). The response to each level varies, with warnings generating email notifications while critical conditions might trigger automated cleanup procedures, disable non-essential services, or page on-call personnel. This tiered approach balances awareness with urgency, preventing alert fatigue while ensuring critical issues receive immediate attention.
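A tiered sketch along those lines, with the 80% and 90% thresholds as illustrative values:

```bash
#!/bin/sh
df -P | awk '
    NR > 1 {
        use = $5
        sub(/%/, "", use)
        if (use + 0 >= 90)      printf "CRITICAL: %s at %s%%\n", $6, use
        else if (use + 0 >= 80) printf "WARNING:  %s at %s%%\n", $6, use
    }'
```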
Historical Tracking and Trend Analysis
Capturing df output over time enables trend analysis that reveals storage consumption patterns and predicts future needs. Scripts might execute df daily, append results to a log file with timestamps, and periodically analyze the accumulated data to calculate growth rates. This historical perspective distinguishes between normal seasonal variations and unusual growth that might indicate problems.
For example, a web server might show predictable weekly patterns with higher usage during business hours and lower usage on weekends. Recognizing these patterns allows setting appropriate alert thresholds that account for expected variation without generating false alarms. Conversely, sudden deviations from established patterns immediately signal potential issues requiring investigation, such as log rotation failures or unexpected data accumulation.
"Historical data transforms df from a snapshot tool into a predictive instrument that enables proactive management rather than reactive firefighting."
Targeting Specific Mount Points
Rather than displaying all file systems, df accepts specific mount points or device names as arguments, reporting only on those targets. The command df -h /home shows information exclusively for the file system mounted at /home, eliminating irrelevant information. This targeted approach proves valuable in scripts that need to check specific locations or when troubleshooting issues known to affect particular file systems.
Multiple targets can be specified in a single command, as in df -h /home /var /tmp, which reports on all three mount points. This capability enables focused monitoring of critical file systems without the distraction of less important mounts. Scripts can maintain lists of important mount points and check them specifically, ensuring that monitoring focuses on business-critical storage rather than treating all file systems equally.
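A short loop covers a curated list of mounts; the list here is illustrative:

```bash
# report usage for business-critical mount points only
for mp in /home /var /tmp; do
    df -hP "$mp" | awk 'NR == 2 { printf "%-8s %s used, %s free\n", $6, $5, $4 }'
done
```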
Platform-Specific Variations and Considerations
While the df command maintains remarkable consistency across Unix-like systems, subtle differences exist between implementations on Linux, BSD variants, macOS, and commercial Unix systems. Understanding these variations prevents confusion when moving between platforms and ensures scripts remain portable or are appropriately adapted for their target environments.
Linux Implementation Details
Linux systems typically use GNU coreutils' implementation of df, which offers the most extensive option set and follows GNU conventions. This version supports long option names like --human-readable and --exclude-type, making commands more self-documenting. The default output format uses 1K blocks unless modified by environment variables or options, and the -h option produces base-1024 units (KiB, MiB, GiB) rather than base-1000 (KB, MB, GB).
Linux df also recognizes a wide variety of file system types, including modern systems like Btrfs and ZFS (when supported), network file systems like NFS and CIFS, and numerous pseudo file systems used by the kernel for various purposes. The command correctly handles file systems with features like snapshots and deduplication, though reported space usage might seem counterintuitive when these features are active.
BSD and macOS Differences
BSD variants, including FreeBSD, OpenBSD, and macOS, use BSD-derived implementations of df that differ in some option names and default behaviors. These versions typically use short options exclusively, lacking the long option names available in GNU df. The output format may vary slightly, and some filtering options might not exist or work differently.
On macOS specifically, df shows additional file system types related to Apple's ecosystem, such as APFS (Apple File System) and HFS+. The command integrates with macOS's unique storage features like Time Machine and APFS snapshots, which can affect reported space usage in ways that might surprise users familiar only with traditional file systems. Understanding these platform-specific features prevents misinterpretation of df output on Apple systems.
Commercial Unix Systems
Commercial Unix systems like Solaris, HP-UX, and AIX each have their own df implementations with varying feature sets and output formats. These versions might lack some options available in Linux or BSD implementations, and the default output format may differ. Scripts intended for cross-platform use must account for these variations, either by using only the most portable options or by detecting the platform and adjusting behavior accordingly.
The POSIX standard defines a minimal set of df functionality that should work consistently across compliant systems. Using the -P option ensures POSIX-compliant output format, and sticking to standard options like -k for kilobyte output maximizes portability. However, advanced features like human-readable output or type filtering may require platform-specific adaptations in portable scripts.
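The most portable form therefore looks like this:

```bash
# POSIX output format, kilobyte blocks, one line per filesystem
df -P -k /var | awk 'NR == 2 { print $4 }'   # available 1K blocks on /var
```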
Best Practices and Recommendations
Effective use of the df command extends beyond simply executing it and reading the output. Adopting best practices ensures accurate interpretation, prevents common mistakes, and integrates disk space monitoring seamlessly into broader system management workflows. These recommendations reflect lessons learned from decades of Unix system administration across diverse environments.
Regular Monitoring Schedules
Establishing consistent monitoring schedules prevents surprises and enables early detection of storage issues. Daily manual checks combined with automated hourly monitoring provide comprehensive coverage without excessive overhead. Manual checks during morning routines help administrators maintain situational awareness, while automated monitoring catches rapid changes that might occur between manual checks.
The frequency of monitoring should match the rate of storage consumption in your environment. Systems with rapid data growth, such as database servers or log collectors, benefit from more frequent checks than relatively static systems like application servers. Adjusting monitoring frequency based on observed growth patterns optimizes the balance between vigilance and resource consumption.
Setting Appropriate Thresholds
Defining when to alert on disk space usage requires balancing sensitivity against false alarm rates. Setting thresholds too low generates excessive alerts that desensitize administrators to warnings, while setting them too high risks missing critical situations until it's too late. A common approach uses 80% as a warning threshold and 90% as critical, providing adequate notice without constant interruptions.
"Thresholds should reflect your ability to respond, not arbitrary percentages; an alert that arrives when you can't act serves only to create stress without solving problems."
However, these percentages should adjust based on file system size and growth rate. A 10TB file system at 80% still has 2TB available, potentially representing weeks or months of capacity. Conversely, a 10GB file system at 80% has only 2GB remaining, which might fill in hours. Consider both percentage and absolute remaining space when setting thresholds, ensuring alerts provide actionable lead time for response.
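A sketch that honors both conditions before alerting (80% and 5 GiB are example thresholds):

```bash
#!/bin/sh
MIN_FREE_KB=$((5 * 1024 * 1024))   # 5 GiB in 1K blocks
df -P | awk -v min="$MIN_FREE_KB" '
    NR > 1 {
        use = $5
        sub(/%/, "", use)
        if (use + 0 > 80 && $4 + 0 < min)
            printf "ALERT: %s at %s%% with %.1f GiB free\n", $6, use, $4 / 1048576
    }'
```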
Documentation and Runbooks
Documenting normal disk space patterns and response procedures ensures consistent handling of space issues, particularly in team environments or during incidents when stress levels are high. Runbooks should detail which file systems are critical, what typical usage looks like, where temporary files accumulate, and approved cleanup procedures. This documentation prevents hasty decisions that might delete important data or disrupt services.
Include df command examples in documentation with explanations of what the output means in your specific environment. New team members benefit from understanding which file systems matter most, why certain mounts show high usage normally, and what constitutes an actual problem versus expected behavior. This knowledge transfer accelerates onboarding and reduces the risk of misinterpreting normal conditions as emergencies.
Integration with Broader Monitoring
While df provides essential disk space information, it should integrate with comprehensive monitoring systems that track CPU, memory, network, and application metrics. Correlating disk space trends with other system metrics often reveals root causes of issues. For example, increasing disk usage might correlate with memory pressure causing excessive swapping, or network issues preventing log rotation from transferring files to remote storage.
Modern monitoring platforms can collect df output automatically, store it in time-series databases, and generate dashboards showing historical trends and current status. These systems enable sophisticated alerting rules that consider multiple factors, such as alerting only when disk usage increases rapidly rather than simply exceeding a threshold. This context-aware monitoring reduces false positives and focuses attention on genuine issues.
Combining with Other Tools
Using df in conjunction with complementary tools provides deeper insights into storage issues. When df shows high usage, follow up with du to identify which directories consume the most space. Use tools like ncdu or baobab for interactive exploration of directory sizes, making it easier to identify cleanup opportunities. The lsof command helps find processes with deleted files still open, explaining discrepancies between df and du output.
For systems with many small files, checking inode usage with df -i alongside space usage with df -h provides complete visibility into capacity constraints. Some file systems might have space available but no inodes remaining, or vice versa. Monitoring both dimensions prevents situations where space exists but files can't be created, or where inodes remain but no space is available for file data.
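Checking both dimensions for the same file system takes two quick commands (/var is an example):

```bash
df -h /var   # block/space view
df -i /var   # inode view; watch IUse% even when space looks healthy
```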
Frequently Asked Questions
Why does df show different space usage than du for the same directory?
The discrepancy between df and du typically occurs because df reports file system level statistics while du scans directory trees. Deleted files with open file handles still consume space reported by df but won't appear in du output. Additionally, file system reserved blocks for root and metadata overhead appear in df but not du. Sparse files also contribute to differences, as du might show apparent size while df shows actual disk allocation.
What does it mean when df shows space available but I still get "No space left on device" errors?
This situation usually indicates inode exhaustion rather than space exhaustion. Run df -i to check inode usage. If the IUse% column shows 100%, the file system has run out of inodes even though space remains. This commonly occurs on systems with many small files. The solution involves deleting unnecessary files to free inodes or recreating the file system with more inodes allocated, though the latter requires backup and restore.
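To locate the directories consuming the most inodes, GNU du (version 8.22 or later) can count file system objects instead of blocks; the starting path is an example:

```bash
# list the 20 directories holding the most filesystem objects
du --inodes -x /var 2>/dev/null | sort -n | tail -n 20
```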
How can I exclude certain file system types from df output?
Use the -x or --exclude-type option followed by the file system type to exclude. For example, df -x tmpfs excludes tmpfs file systems. You can specify multiple -x options to exclude several types: df -x tmpfs -x devtmpfs -x squashfs. This filtering removes clutter from pseudo file systems that don't represent actual disk storage, making output more relevant for capacity monitoring.
What is the difference between df -h and df -H?
The -h option displays sizes in human-readable format using base-1024 units (KiB, MiB, GiB), where 1K equals 1024 bytes. The -H option uses base-1000 units (KB, MB, GB) following SI standards, where 1K equals 1000 bytes. This difference becomes significant for larger sizes; a 1TB drive shows as approximately 931GiB with -h but 1.0TB with -H. Use -H when comparing with manufacturer specifications, which typically use decimal units.
Can df be used to monitor remote file systems, and are there any special considerations?
Yes, df reports on network file systems like NFS and CIFS just as it does local storage, querying the remote server for statistics. However, network issues can cause df to hang while waiting for unresponsive servers. Use the -l option to exclude network mounts when you need quick results. For network storage monitoring, target specific mounts explicitly or use -t nfs to check only network file systems, understanding that reported space reflects the remote server's capacity.
Why does the sum of Used and Available not equal the total size shown by df?
File systems reserve space for various purposes that don't appear in the simple Used plus Available equation. Most Linux file systems reserve 5% for the root user, ensuring system processes can continue when regular users fill the file system. Additionally, file system metadata, journals, and internal structures consume space. The formula is more accurately: Total = Used + Available + Reserved + Overhead. This accounting explains why you can't write files even when Available shows remaining space if you're not the root user.