Unix and Unix-like filesystems (UFS, FFS, ext2/3/4) use cylinder groups or block groups to keep a file's data blocks near the inode that describes the file.
When I taught Solaris administration, we went into this in depth. For smaller files, the entire file was kept in the same cylinder group as its inode. When a file grew past a certain size (I don't recall the exact threshold), it was spread across multiple cylinder groups for two reasons. First, there's no sense in consuming all the data blocks in a cylinder group while using only a few of its inodes. Second, other pending I/O operations in the same cylinder group could be serviced before the head made a "big" seek to the next cylinder group to continue the large I/O.
Seeking is what takes the most time in a disk I/O operation, so one optimization is to reduce the number of seeks you have to make.
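To see why seeks dominate, here's a back-of-the-envelope breakdown of one small disk I/O. The specific numbers (8 ms average seek, 7200 RPM, 10 MB/s transfer) are illustrative 1990s-ish values I've picked, not from any particular drive:

```python
# Rough cost of one small disk I/O, split into its three phases.
# All numbers below are illustrative assumptions, not drive specs.
avg_seek_ms = 8.0                        # average head seek
rpm = 7200
avg_rot_ms = 0.5 * 60_000 / rpm          # average rotational latency: half a spin
transfer_mb_s = 10.0                     # sustained media transfer rate
io_kb = 4                                # one 4 KB filesystem block
transfer_ms = io_kb / 1024 / transfer_mb_s * 1000

total_ms = avg_seek_ms + avg_rot_ms + transfer_ms
print(f"seek {avg_seek_ms:.2f} ms, rotation {avg_rot_ms:.2f} ms, "
      f"transfer {transfer_ms:.2f} ms, total {total_ms:.2f} ms")
print(f"seek share of total: {avg_seek_ms / total_ms:.0%}")
```

With these numbers the seek alone is well over half the total, and the actual data transfer is a rounding error, which is exactly why allocators work so hard to avoid seeks.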
With larger files on a multi-user system, you wouldn't want a single I/O operation to monopolize the disk, so you provide a mechanism for other I/O operations to interleave with the large one.
Anyway, the result is good I/O performance as long as the filesystem is less than about 90% full. Above that, cylinder groups start running out of free blocks, so new allocations spill into distant groups and you begin to see more head seeks even for smaller I/O operations.
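You can see the spill effect with a toy model. This is a deliberately simplified sketch, not the real UFS allocator: each new file prefers one randomly chosen "home" group, and blocks that don't fit there get pushed into other groups (each such spill standing in for an extra long seek). All the sizes and the fallback policy are made-up assumptions for illustration:

```python
import random

# Toy cylinder-group allocator (illustrative only, NOT the real UFS policy):
# each file wants all its blocks in one randomly chosen "home" group;
# once that group is exhausted, the rest of the file spills elsewhere.
random.seed(42)
GROUPS, BLOCKS_PER_GROUP, FILE_BLOCKS = 16, 1000, 50
free = [BLOCKS_PER_GROUP] * GROUPS
total_blocks = GROUPS * BLOCKS_PER_GROUP

spill_rate_at = {}            # fill level -> fraction of files that spilled
spilled = written = 0
while written < total_blocks:
    home = random.randrange(GROUPS)
    file_spilled = False
    for _ in range(FILE_BLOCKS):
        if free[home] > 0:
            free[home] -= 1
        else:
            # Home group exhausted: steal a block from the emptiest group.
            other = max(range(GROUPS), key=lambda g: free[g])
            free[other] -= 1
            file_spilled = True
        written += 1
    spilled += file_spilled
    for level in (0.5, 0.9, 0.99):
        if level not in spill_rate_at and written / total_blocks >= level:
            spill_rate_at[level] = spilled / (written // FILE_BLOCKS)

for level, rate in sorted(spill_rate_at.items()):
    print(f"{level:.0%} full: {rate:.1%} of files spilled out of their home group")
```

At half full almost nothing spills; past 90% the spill rate climbs quickly, which is the toy-model version of the seek penalty described above.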
You see similar behavior with FFS and the ext-based filesystems, since they have similar structures.
With ZFS, I think you need to keep even more free space for best performance. IIRC, performance begins to drop at 80-85% full.
Of course, all of this was before the advent of the SSD. Seek times are meaningless when we're talking about SSDs.
I also used this analogy when teaching, and the numbers reflect 1990s technology. Disk access times were measured in milliseconds, so you might have had a 12ms disk access time. Memory was measured in nanoseconds, so you might have had 60ns RAM.
These are numbers humans don't grasp, so let's scale them to numbers you and I understand. Let's call that 60ns RAM 60 SECOND RAM. As humans, we understand 60 seconds.
Well, if we scale up the RAM access to take 60 seconds, what does our disk access become?
60ns -> 60sec means we multiplied by a billion. So our 12ms becomes 12 million seconds, 138.88 days, or over 4.5 months.
That was/is the relative difference between a disk I/O and RAM access. What takes you a "minute" to do in RAM takes just under 139 days to do on disk.
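The arithmetic behind the analogy is easy to check, using the same 60ns and 12ms figures from above:

```python
# Scale 1990s-era latencies so a 60 ns RAM access becomes 60 seconds.
ram_ns = 60
disk_ms = 12

scale = 60 / (ram_ns * 1e-9)             # multiply real times by this factor
disk_scaled_s = disk_ms * 1e-3 * scale   # 12 ms at the same scale
disk_scaled_days = disk_scaled_s / 86_400

print(f"scale factor: {scale:.0e}")
print(f"scaled disk access: {disk_scaled_s:,.0f} s = {disk_scaled_days:.1f} days")
```

The scale factor comes out to a billion, so 12ms becomes 12 million seconds, just under 139 days.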
FWIW