An interesting topic came up in the forums: does Linux need defragmenting? The most insightful answer, from irne.barnard, is below:
I'd actually strongly suggest not defragging ... the reason behind this? Even on Windows most defraggers have two options: 1. Defragment, 2. Compact ... sometimes called something different, but the end result's the same.
The "defragment" is supposed to make all files into contiguous blocks. The "compact" is supposed to defragment and make the free space into a contiguous block. Now while this sounds nice, the reality is quite different:
Fragmentation does cause a real performance loss: the drive head has to seek to every fragment, so reading a single file can turn into hundreds of scattered HDD reads / writes instead of one sequential pass. The file system makes all the difference:
1. FAT-based file systems pack files directly one after another, with no gap between them. If you later edit or append to a file, the added portion has to be saved somewhere else, creating one or more fragments. A new file is written starting at the first free spot, even if that spot is too small to hold the whole file (see the first-fit sketch after this list).
2. NTFS is a bit better in theory: it leaves some free space around each file. If it notices that a write would fragment the file, it "attempts" to save the entire file in a new location. The caveat is "efficiency": if rewriting the whole file wouldn't take too much time it does so, otherwise it just creates a new fragment. Just how it decides what counts as "too much time" is up for grabs ... and like most M$ ideas, also a secret!
3. Ext3/4 also leaves free space around each file. Where it differs significantly from NTFS is that it only fragments a file when keeping it in one piece is impossible ... and that (usually) only happens when the disk gets too full. You can check this yourself with filefrag (second sketch below).
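
To make item 1 concrete, here is a toy Python model of first-fit block allocation, the FAT-style strategy described above. It's a minimal sketch: the block list, file IDs, and sizes are all invented for illustration, and a real FAT driver tracks clusters in an on-disk allocation table, not a Python list.

    # Toy first-fit allocator: a "disk" is a list of blocks, each either
    # free (None) or owned by a file ID. Illustration only.

    def first_fit(disk, size, file_id):
        """FAT-style: claim the first free blocks found, in order,
        splitting the file across gaps when a gap is too small."""
        placed = 0
        for i, block in enumerate(disk):
            if block is None:
                disk[i] = file_id
                placed += 1
                if placed == size:
                    return
        raise RuntimeError("disk full")

    def count_fragments(disk, file_id):
        """A fragment is a maximal run of contiguous blocks of one file."""
        frags, prev = 0, False
        for block in disk:
            cur = (block == file_id)
            if cur and not prev:
                frags += 1
            prev = cur
        return frags

    disk = [None] * 32
    first_fit(disk, 6, "A")    # A fills blocks 0-5
    first_fit(disk, 6, "B")    # B fills blocks 6-11, right behind A
    first_fit(disk, 4, "A")    # append to A: first free spot is block 12
    print(count_fragments(disk, "A"))   # -> 2: A is now fragmented

    # Delete B, then write a 10-block file C: the freed gap (blocks 6-11)
    # holds only 6 blocks, so C is split across two gaps.
    disk = [b if b != "B" else None for b in disk]
    first_fit(disk, 10, "C")
    print(count_fragments(disk, "C"))   # -> 2

Notice that the fragmentation falls out of the back-to-back packing itself; the slack that NTFS and ext leave around files is exactly what lets an appended block land next to its file instead.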
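And to see item 3 in practice, ext3/4 exposes a file's extent layout through the filefrag tool from e2fsprogs. The sketch below simply shells out to filefrag and pulls the extent count from its one-line summary; it assumes filefrag is installed and on PATH and that the "N extents found" output format holds.

    # Report how many extents (fragments) each file given on the
    # command line has, according to filefrag.
    import re
    import subprocess
    import sys

    def extent_count(path):
        out = subprocess.run(["filefrag", path],
                             capture_output=True, text=True,
                             check=True).stdout
        # filefrag prints e.g. "bigfile.iso: 3 extents found"
        match = re.search(r"(\d+) extents? found", out)
        return int(match.group(1)) if match else -1

    for path in sys.argv[1:]:
        print(path, extent_count(path))

On a healthy ext3/4 disk you'll almost always see 1 extent per file, which is exactly the point above: the file system avoids fragmentation on its own, so a defragmenter has very little to do.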