How to Defragment Linux Systems

There is a common myth that Linux disks never need defragmentation at all. In most cases this is true, thanks mostly to the excellent journaling filesystems Linux uses (ext3, ext4, Btrfs, XFS, etc.). However, in some specific cases fragmentation can still occur. If that happens to you, the solution is fortunately very simple.

Fragmentation occurs when a filesystem updates files in small chunks, but those chunks do not form a contiguous whole and are instead scattered around the disk. This is particularly true for FAT and FAT32 filesystems. It was somewhat mitigated in NTFS, and it almost never happens in Linux (extX). Here is why.

In filesystems such as FAT and FAT32, files are written right next to each other on the disk. There is no room left for file growth or updates:

[Image: defragment-linux-fragmented]

NTFS leaves somewhat more room between files, so there is room to grow. However, as the space between chunks is limited, fragmentation will still occur over time.

[Image: defragment-linux-ntfs]

Linux’s journaling filesystems take a different approach. Instead of placing files right beside each other, they spread files across the disk, leaving generous amounts of free space between them. There is sufficient room for files to be updated and to grow, so fragmentation rarely occurs.

[Image: defragment-linux-journal]

Additionally, if fragmentation does happen, most Linux filesystems will attempt to shuffle files and chunks around to make them contiguous again.

Disk fragmentation seldom occurs in Linux unless you have a small hard drive, or it is running out of space. Some possible fragmentation cases include:

  • if you edit large video files or raw image files, and disk space is limited
  • if you use older hardware like an old laptop, and you have a small hard drive
  • if your hard drives start filling up (above 85% used)
  • if you have many small partitions cluttering your home folder

The best solution is to buy a larger hard drive. If that is not possible, this is where defragmentation becomes useful.

The fsck command can tell you how fragmented a partition is – that is, if you have an opportunity to run it from a live CD, with all affected partitions unmounted.

This is very important: RUNNING FSCK ON A MOUNTED PARTITION CAN AND WILL SEVERELY DAMAGE YOUR DATA AND YOUR DISK.

You have been warned. Before proceeding, make a full system backup.

Disclaimer: The author of this article and Make Tech Easier take no responsibility for any damage to your files, data, system, or any other damage, caused by your actions after following this advice. You may proceed at your own risk. If you do proceed, you accept and acknowledge this.

You should just boot into a live session (like an installer disk, system rescue CD, etc.) and run fsck on your UNMOUNTED partitions. To check for any problems, run it with root permission.
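
Something along these lines should work (a sketch: replace the bracketed placeholder with your actual partition, for example /dev/sda1; the -f switch forces a full check, and -n keeps it strictly read-only):

    sudo fsck -fn [/path/to/your/partition]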

You can check what [/path/to/your/partition] actually is by listing your block devices.
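
For example, either of these standard tools will list your partitions and their device names (lsblk does not need root permission, fdisk -l does):

    lsblk
    sudo fdisk -l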

There is a way to run fsck (relatively) safely on a mounted partition – that is, by using the -n switch. This results in a read-only filesystem check without touching anything. Of course, there is no guarantee of safety here, and you should only proceed after creating a backup.
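
On an ext2 filesystem, a read-only check of the mounted partition might look like this (again only a sketch; substitute your own partition for the placeholder):

    sudo fsck -fn [/path/to/your/partition]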

This results in plenty of output – most of it error messages caused by the fact that the partition is mounted. At the end, it gives you fragmentation-related information, including the percentage of non-contiguous files.

[Image: defragment-linux-fsck]

If your fragmentation is above 20%, you should proceed to defragment your system.

All you need to do is back up ALL your files and data to another drive (by manually copying them over), format the partition, and copy your files back (don’t use a backup program for this). The journaling filesystem will handle them as new files and place them neatly on the disk without fragmentation.

To back up your files, copy them manually to a folder on another drive.
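
A sketch of such a copy (the bracketed paths are placeholders: the source is the mount point of the fragmented partition, the destination a folder on another drive; -a preserves attributes, -f overwrites without asking, -v shows each file as it is copied):

    sudo cp -afv [/path/to/source/partition]/* [/path/to/destination/folder]

Note that a bare * does not match hidden “dot” files; copy those separately if your data includes any.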

Mind the asterisk (*); it is important.

Note: It is often said that the dd command is best for copying large files or large amounts of data. However, it is a very low-level operation that copies everything “as is”, including the empty space and even leftover junk. This is not what we want here, so it is better to use cp.

Now you only need to remove all the original files.
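
For example (be careful: this permanently deletes everything under the given path, so double-check it before pressing Enter):

    sudo rm -rf [/path/to/source/partition]/*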

Optional: you can fill the empty space with zeros. Formatting the partition would achieve this as well, but if, for example, you only copied away the large files (which are the most likely to cause fragmentation) rather than the whole partition, formatting might not be an option.
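
One common way to do this is to write a single temporary file full of zeros until the partition runs out of space (the file name temp-zeros is arbitrary; dd stopping with a “No space left on device” message is expected here):

    sudo dd if=/dev/zero of=[/path/to/source/partition]/temp-zeros bs=1M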

Wait for it to finish. You could also monitor the progress with pv.
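
If pv is installed, a variant like this shows throughput while the zeros are being written (using the same arbitrary file name as above):

    dd if=/dev/zero bs=1M | pv | sudo dd of=[/path/to/source/partition]/temp-zeros bs=1M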

[Image: defragment-linux-dd]

When it is done, just delete the temporary file.
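
Using the example file name from above:

    sudo rm [/path/to/source/partition]/temp-zeros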

After you have zeroed out the empty space (or just skipped that step entirely), copy your files back by reversing the first cp command.
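
With the placeholders used above, that reversal would look like:

    sudo cp -afv [/path/to/destination/folder]/* [/path/to/source/partition]/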

If you prefer a simpler approach, install the e2fsprogs package, which includes the e4defrag tool.
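
On a Debian- or Ubuntu-based system, for example, the install would look like this (other distributions ship the package under the same name, and it is often already installed):

    sudo apt-get install e2fsprogs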

Then run e4defrag as root on the affected partition. If you don’t want to or cannot unmount the partition, you can use its mount point instead of its device path. You can also defragment your whole system in one go.
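
Sketches of typical invocations (the bracketed placeholder stands for the partition’s device path or mount point; the -c switch only reports the fragmentation score without changing anything, which is a useful first step):

    sudo e4defrag -c [/path/to/your/partition]
    sudo e4defrag [/path/to/your/partition]
    sudo e4defrag /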

It is not guaranteed to succeed while mounted (you should also stop using your system while it is running), but it is much easier than copying all files away and back.

Fragmentation should rarely be an issue on a Linux system thanks to the journaling filesystems’ efficient data handling. If you do run into fragmentation for any reason, there are simple ways to fix it, such as copying all files away and back or using e4defrag. It is important, however, to keep your data safe, so before attempting any operation that affects all or most of your files, make a backup just to be on the safe side.

18 comments

  1. Am I right to assume you can run e4defrag from a live USB and defrag your main install ???

    • You are. You can also just run it on selected files or directories while in your main installation. If fragmentation were to happen (which is extremely unlikely), it would be on those multi-gigabyte files you constantly edit, and e4defrag can be targeted to care only about these. (Unfortunately the scope of the article was too limited to go into details.)

    • Hey Che,

      Interesting addition. I have never used XFS in a live environment, although one fine day I might just give it a shot. As far as I understand, it is great for huge filesystems, like databases, etc. I have not got many of those. :) (Also, I’m worried about the apparent data-loss issue on system crashes; my hardware is less reliable these days.) Would you suggest XFS has any place in a desktop environment yet?

  2. Unmounting and running an fsck should be e4defrag’s job, not the user’s job. It is an unnecessary complication.

    As for the “solution” of just creating another partition and copying the contents: first, it will take ages, and second, it will break in the face of extended attributes like immutable files.

    • Not necessarily another partition, but another drive, even an external one. It would be faster. Yes, it still takes ages, but it should only be necessary in specific circumstances, under which one might even take the time. It’s not like you need to defrag your Linux system every week, is it? :)

      As for immutable files… I don’t see widespread use of them on files that are often edited or changed in size; in fact, it would be most “unwise” to mark a file immutable if one needs to edit it, wouldn’t it? I’d go as far as to say it makes no sense. But fragmentation will usually occur on files that grow, as a result of, for example, editing. See the flaw of logic? Some shortcomings only apply if we take them out of context… (In the very unlikely case of files being marked immutable after they’ve become fragmented, well, tough cookie. People must be more careful when they play with advanced features they don’t fully understand. Check fragmentation before marking a file immutable, and afterwards it will not be a problem. Simple as that.)

      As for what is the user’s job and what isn’t… While Vindose and Pear (or some other fruit) users like it that way (that is, being told to use one way and no other), Linux is about enabling you to choose your own way. What if the user wants complete control? Let people decide what they prefer, shall we?

      • I taught myself Unix 24 years ago, in no small part by the “Read the source!” method, meaning I have given some proofs of, how to say it, “Computing Madhood”?
        :-)

        defrag not being an end-to-end solution is bad because the unmounting/fscking of the partition is a _requirement_, so it should be done automatically, because the operation is in fact atomic. By forcing the user to do those three stages (note: if you want flexibility you could either prompt the user or have a no-fsck option), first of all you take some of his time, and the sum of all the time lost by users will certainly be greater than the time spent implementing an end-to-end solution. Second: if a mistake by the user can cause an entire filesystem to be lost, then the designer must do his utmost to make the operation as safe and foolproof as possible. That is why guns have more safeties than pocket knives, and nukes more than guns. And that is why defrag should not rely on infallible users who will never forget to do an fsck.

        About immutable files: I use them. I have a collection of files representing many, many, many hours of work that, once created, are not supposed to change. I have backups, of course. And I also made them immutable in order to protect them from an error while being root. Only paranoids survive.

        • Well, if a user screws it up, that’ll teach ’em, won’t it? :D

          I mean yea, you are right, but I like to take up my readers’ time. It will make me look more important once I’ve made them work so long. (Also, have you not learned about the system by putting in long hours? The first merit is patience. A 1.5 TB copy operation would teach patience aplenty.) :D

          I like the bit about paranoids surviving, but I must add paranoids and whackos (you know, those who could not care less if the PC goes up in smoke; in fact, they enjoy it and dance around yelling). That’s why I’m still here…

  3. It’s a separate but related topic, but running Mint (Ubuntu) from a Live USB eventually clogs the drive to the point where a disk “clean up” will improve performance and avoid fragmentation. The very configurable utility BleachBit has done that for me for five or six years. It can also replace cleaned files with zeros or fill free space with them. There’s a lot of conflicting commentary about BleachBit online, but my experience on Ubuntu spin-offs over several years has been great, and I run it immediately after rebooting.

    • BleachBit is useful and good, but I’d use it sparingly. Can I ask, why are you running Mint from a Live USB instead of making it permanent?

  4. I put this to the test some time ago with another one of these “defrag ext4” posts. I have 4x 250GB drives in a RAID 0, running the ext4 filesystem as a Debian / Ubuntu / Raspbian repository. I have roughly 700,000+ files on there at a total size of 900+ GB. Every day at 5 AM the repositories sync the new data. Apt-mirror updates new files and removes old files, thus creating a great scenario for fragmentation to occur, especially as the sync is around 3GB+ and over 1000+ files. I did the analysis, and the fragmentation was less than 0.1% of the entire drive.

    • That proves one of the points, I believe (that it is highly unlikely on an ext4 FS). Although, depending on what the sync does, it might not create a fragmentation-prone environment at all; quite the opposite, in fact. Does it apply incremental updates to large files, or does it swap small files around? (I mean copy up new ones.) Since, as you say, it removes old files, that works in favour of non-fragmentation. Moving whole files will make the ext4 “logic” shuffle the fragmented bits and rearrange them in a consistent manner.

      From what you wrote it sounds like you have plenty of small files (like what, 1.5MB per file on average or less?). Fragmentation rarely affects small files on ext4. On Linux you would most likely encounter fragmentation if you had your 900+GB spread across, say, 100 files, some of which got quite regular incremental updates that caused them to grow, while others shrank in size at times, all this with less than 25% available space, but nothing really getting deleted or moved. Then even ext would get fragmented over time. :)

      On a more user-like scale, this would probably happen with several large video files being regularly edited, for example, but as I mentioned above (and in the article too), it is highly unlikely.

      Fact is, with that much shuffling of files that your server does, yours would get even less fragmented than an average system. See the screenshot above: on my Ubuntu VM, which I only use for test-driving stuff for MTE, I got 0.3%, which is still low, yet it’s still 300% of what you have. :)

      • Attila,
        You make a good point with regards to the small files being shuffled.
        I have a 3TB drive full of movies and series. There, new items are downloaded and added daily, on average about 30GB+, and after some time the older ones are removed, so in a month 900GB+ are shuffled. I’ll check that fragmentation level and revert.

        Regards

        • Yea, in that case ext4 would sort itself out. When files are moved, it actually handles fragmentation on the FS level, just one of the perks of using a good journalling FS. :)

  5. “Additionally, if fragmentation does happen, most Linux filesystems would attempt to shuffle files and chunks around to make them contiguous again.”

    What!? Linux file systems defragment themselves automatically? I’ve not heard that one before.
