Oh my god! They've done it! It seems Apple has introduced a really cool under-the-hood feature in Mac OS X 10.3 (you know, the black cat thing): some kind of automatic defragmentation within the HFS+ file system.
As discussed in this Ars Technica forum thread, the source code for this can be reviewed in the recently posted Darwin sources.
Here's how it works: files under 20 MB in size get checked for fragmentation when they are opened, and if they are too scattered, they get relocated. A contiguous run of blocks is allocated, the data is copied over, and the old blocks are freed afterwards. The result is a nicely contiguous file. No need for disk doctors anymore, as this seems to be a viable solution to the problem of scattered data.
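If you want that in code form, here is a minimal sketch of what such an open-time check could look like. To be clear: the struct, the function names and the extent-count threshold are all made up by me for illustration; only the 20 MB limit comes from the behaviour described above, and none of this is the actual Darwin code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define AUTO_DEFRAG_MAX_SIZE (20u * 1024u * 1024u) /* only files under 20 MB are considered */
#define EXTENT_THRESHOLD     8                     /* assumed cutoff for "too scattered"    */

struct file_info {
    uint64_t size;          /* logical size in bytes */
    unsigned extent_count;  /* number of on-disk extents; 1 == fully contiguous */
};

/* Conceptually called on open(): should this file be relocated to contiguous blocks? */
static bool should_relocate(const struct file_info *fp)
{
    return fp->size < AUTO_DEFRAG_MAX_SIZE && fp->extent_count > EXTENT_THRESHOLD;
}

int main(void)
{
    struct file_info scattered = { .size = 5u * 1024u * 1024u,   .extent_count = 12 };
    struct file_info huge      = { .size = 700u * 1024u * 1024u, .extent_count = 40 };

    printf("scattered 5 MB file: %s\n", should_relocate(&scattered) ? "relocate" : "leave alone");
    printf("huge 700 MB file:    %s\n", should_relocate(&huge)      ? "relocate" : "leave alone");
    return 0;
}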
Here is the fun part: the source code itself contains a nice illustration of how this works. It's so great: whirr… shhhwip! :>POOF!<: *gleam*
/*
 * Relocate a file to a new location on disk
 *  cnode must be locked on entry
 *
 * Relocation occurs by cloning the file's data from its
 * current set of blocks to a new set of blocks. During
 * the relocation all of the blocks (old and new) are
 * owned by the file.
 *
 * -----------------
 * |///////////////|
 * -----------------
 * 0               N (file offset)
 *
 * -----------------     `´`´`´`´`´`´`´`´`
 * |///////////////|    }    whirr...    {     STEP 1 (aquire new blocks)
 * -----------------     `´`´`´`´`´`´`´`´`
 * 0               N     N+1             2N
 *
 * -----------------     -----------------
 * |       ////////| ===}|///////        |     STEP 2 (clone data)
 * -----------------     -----------------
 * 0               N       shhhwip!      2N
 *
 *                       -----------------
 *      :>POOF!<:        |////*gleam*////|     STEP 3 (head truncate blocks)
 *                       -----------------
 *                       0               N
 *
 * During steps 2 and 3 page-outs to file offsets less
 * than or equal to N are suspended.
 *
 * During step 3 page-ins to the file get supended.
 */
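And for the curious, here is a toy user-space rendering of those three steps, where the "disk" is just an array in memory. Every name in it is mine and none of it is the real kernel code; it only mirrors the acquire/clone/truncate dance from the comment above.

#include <stdio.h>
#include <string.h>

#define DISK_BLOCKS 16
#define BLOCK_SIZE  4

static char disk[DISK_BLOCKS][BLOCK_SIZE];  /* pretend disk, one row per block */

int main(void)
{
    size_t nblocks = 4;  /* the file occupies blocks 0..N (here N = 3) */

    /* Fill the original blocks with recognizable data. */
    for (size_t i = 0; i < nblocks; i++)
        memset(disk[i], 'A' + (int)i, BLOCK_SIZE);

    /* STEP 1: acquire a new contiguous run right after the old one (N+1 .. 2N). */
    size_t new_start = nblocks;

    /* STEP 2: clone the data into the new blocks; during this the file owns both runs. */
    for (size_t i = 0; i < nblocks; i++)
        memcpy(disk[new_start + i], disk[i], BLOCK_SIZE);

    /* STEP 3: "head truncate" -- release the old blocks, leaving only the new copy. */
    for (size_t i = 0; i < nblocks; i++)
        memset(disk[i], 0, BLOCK_SIZE);

    printf("file now lives in blocks %zu..%zu\n", new_start, new_start + nblocks - 1);
    return 0;
}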
I love it.