
FAT & NTFS File Systems

in Windows XP
Version 1.2 — Last Updated September 1, 2002

by Alex Nichol, MS-MVP

© 2002 by Author, All Rights Reserved


Files in Windows XP can be organised on the hard disk in two different ways.
• The old FAT (File Allocation Table) file system was developed originally (when the
original IBM PCs came out) for MS-DOS on small machines and floppy disks. There are
variants — FAT12 is used on all floppy disks, for example — but hard disk partitions in
Windows XP can be assumed to use the FAT32 version, or 32-bit File Allocation Table.
• Later, a more advanced file system was developed for hard disks in Windows NT, called
NTFS (the “NT File System”). This has matured, through several versions, into the latest
one that exists alongside FAT in Windows XP.
The file system used goes with an individual partition of the disk. You can mix the two types on
the same physical drive. The Windows XP operating system is the same, whichever file system is
used for its partition, so it is a mistake (and source of confusion) to speak of “a FAT disk reading an
NTFS partition.” It is the operating system, not the disk, that does the reading.
Actual files are unaffected by which file system they are on; that is merely a matter of a method of
storage. An analogy would be letters stored in an office. They might be in box-files on shelves
(FAT) or in suspended folders in file cabinets (NTFS); but the letters themselves would be
unaffected by the choice of which way to store them, and could be moved from one storage place to
the other. Similarly, files can be moved between folders on an NTFS partition and folders on a FAT
partition, or across a network to another machine that might not even be running Windows.
EXAMPLE: Consider downloading a file to your computer through a link on a web page.
You click on the link, and the file is copied across the Internet and stored on your hard drive. If you
download the file from this present site, the file is stored on a computer running Unix, which uses
neither FAT nor NTFS. The file itself is not affected when it is copied from a Windows computer to
the Unix-based server, or copied from that server to your Windows-based computer.
However, if a machine has two different operating systems on it, dual booted, they may not both be
able to read both types of partition. DOS (including an Emergency Startup boot floppy), Windows
95/98, and Windows ME cannot handle NTFS (without third party assistance). Early versions of
Windows NT cannot handle FAT32, only FAT16. So, if you have such a mixed environment, any
communal files must be held on a partition of a type that both operating systems can understand —
meaning, usually, a FAT32 partition. (See the article Planning Your Partitions on this site, under the
section “Multiple Operating Systems,” for a table of which file system each recent version of
Windows can use and understand.)


There are three considerations that affect which file system should be chosen for any partition:
a. Do you want to use the additional capabilities that only NTFS supports?
NTFS can provide control of file access by different users, for privacy and security. The
Home Edition of Windows XP only supports this to the limited extent of keeping each user’s
documents private to him or herself. Full file-access control is provided in Windows XP
Professional, as is encryption of individual files and folders. If you use encryption it is
essential to back up the encryption certificates used — otherwise, if the partition containing
your "Documents and Settings" has to be reformatted, the files will be irretrievably lost.

b. Considerations of Stability and Resilience

NTFS has stronger means of recovering from troubles than does FAT. All changes to files
are “journalized,” which allows the system to roll back the state of a file after a crash of the
program using it or a crash of the system. Also, the structure of the file system is less likely
to suffer damage in a crash, and is therefore more easily reinstated by CheckDisk
(CHKDSK.EXE). But in practical terms, the stability of FAT is adequate for many users,
and it has the benefit that a FAT partition is accessible for repair after booting from a DOS
mode startup floppy, such as one from Windows 98. If an NTFS partition is so damaged that
it is not possible to boot Windows, then repair can be very difficult.

c. Considerations of economy and performance

In a virtual memory system like Windows XP, the ideal size of disk clusters matches the
internal “page size” used by the Intel processors — 4 kilobytes. An NTFS partition of almost
any size you are likely to meet will use this, but it is only used in FAT32 up to an 8 GB
partition. Above that, the cluster size for FAT increases, and the wastefulness of the “cluster
overhang” grows; the small sketch after this list of considerations illustrates the effect. (For a table
of the varying default cluster sizes used by FAT16, FAT32, and Win XP’s version of NTFS, for
partitions of varying sizes, see the separate table on this site.)

On the other hand NTFS takes much more space for holding descriptive information on
every file in that file’s own block in the Master File Table (MFT). This can use quite a large
proportion of the disk, though this is offset by a possibility that the data of a very small file
may be stored entirely in its MFT block. Because NTFS holds significant amounts of these
structures in memory, it places larger demands on memory than does FAT.

Searching directories in NTFS uses a more efficient structure for its access to files, so
searching a FAT partition is a slower process in big directories. Scanning the FAT for the
pieces of a fragmented file is also slower. On the other hand, NTFS carries the overhead of
maintaining the “journalized” recovery.
Also, of course, in a dual boot system, there may be the overriding need to use FAT on a partition so
that it can also be read from, say, Windows 98.
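To make the “cluster overhang” point in (c) concrete, here is a small sketch (in Python, purely as an
illustration; the file sizes are made up, and the cluster sizes are the figures discussed above) of how
much space a handful of files waste when each must occupy whole clusters:

    # Illustrative sketch: estimate "cluster overhang" (slack space) for some
    # example file sizes at different cluster sizes. 4 KB is the NTFS/small-FAT32
    # default discussed above; 16 KB and 32 KB are the larger FAT32 defaults
    # used on bigger partitions. The file sizes are invented for the example.
    def slack_bytes(file_size: int, cluster_size: int) -> int:
        """Bytes wasted in the last, partly filled cluster of one file."""
        if file_size == 0:
            return 0
        remainder = file_size % cluster_size
        return 0 if remainder == 0 else cluster_size - remainder

    files = [300, 1_500, 4_096, 10_000, 70_000, 1_234_567]   # sizes in bytes

    for cluster_kb in (4, 16, 32):
        cluster = cluster_kb * 1024
        wasted = sum(slack_bytes(size, cluster) for size in files)
        print(f"{cluster_kb:>2} KB clusters: {wasted / 1024:.1f} KB wasted")

Run over a whole directory tree of small files, the same arithmetic shows why the larger FAT32
clusters of big partitions waste noticeably more space than 4 KB ones.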


Leaving matters of access control and dual use aside, as partition sizes grow, the case for NTFS gets
stronger. Microsoft definitely recommends NTFS for partitions larger than 32 GB — to the extent
that Windows XP will not format a FAT partition above that size. However, with smaller sizes,
FAT is likely to be more efficient — certainly below 4 GB, and probably below 8 GB. I suggest that
NTFS should be used for partitions of 16 GB or above, where the FAT 32 cluster size goes up to 16
KB, the intermediate region (that is, partitions between 8 and 16 GB in size) being largely a matter
of taste.
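The sketch below (Python, illustration only) simply encodes the thresholds suggested here; they are
a rule of thumb from this article, not a Microsoft specification:

    # Rule of thumb from the discussion above, not an official guideline:
    # below 8 GB FAT32 is likely more efficient, 8-16 GB is a matter of taste,
    # from 16 GB NTFS is suggested, and above 32 GB Windows XP will not
    # format FAT32 at all.
    def suggested_file_system(partition_gb: float) -> str:
        if partition_gb > 32:
            return "NTFS (XP will not format FAT32 this large)"
        if partition_gb >= 16:
            return "NTFS suggested"
        if partition_gb >= 8:
            return "either, largely a matter of taste"
        return "FAT32 likely more efficient"

    for size_gb in (4, 10, 20, 60):
        print(f"{size_gb:>3} GB partition: {suggested_file_system(size_gb)}")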

Ideally, a disk is initially formatted in the file system which is to be used permanently — NTFS, for
example, can then put the Master File Table in its optimal location in the middle of the partition.
However, on an upgrade of an existing system, the file system is left as it is. For example, an
upgraded Windows 98 system will be on FAT32. Also, some computer makers ship new computers
with all partitions formatted as FAT32. These can be converted to NTFS if that seems more suitable
to your needs. If you use the method described here, the result will be nearly as satisfactory as if a
fresh format to NTFS had been done.
But this conversion is a one-way process. Windows XP provides a native tool for converting FAT to
NTFS, but no tool for converting NTFS to FAT. It may be possible to convert NTFS to FAT using
Partition Magic 7.01, but the result is uncertain. If you attempt it, it is essential that you first decrypt
all encrypted files, or they will be forever inaccessible. (For this reason, Partition Magic will stop if
it finds one.) If it is a new machine, too, be sure that your warranty will not be compromised by
doing a file system conversion.
A further aspect that needs caution is that the conversion may leave the NTFS permissions on the
partition and its folders set to something other than the simple general access that might be expected. It is certainly
important that the conversion be done when logged in as an Administrator.
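For reference, the conversion itself uses the convert.exe tool supplied with Windows XP
(convert <drive>: /FS:NTFS). The sketch below (Python, illustrative only; the drive letter is a
placeholder) merely wraps that standard command from a script; run it from an Administrator
account, and remember the conversion cannot be undone with the built-in tools:

    # Sketch of invoking Windows XP's built-in FAT-to-NTFS conversion tool.
    # The drive letter is a placeholder; if the volume is in use (for example C:),
    # convert.exe will offer to schedule the conversion for the next reboot.
    import subprocess

    drive = "D:"   # the FAT32 partition you intend to convert (placeholder)

    result = subprocess.run(["convert", drive, "/FS:NTFS"],
                            capture_output=True, text=True)
    print(result.stdout)
    print(result.stderr)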


Will a backup or image made from NTFS remain NTFS if I restore to a newly formatted partition?
This depends on the approach of the particular backup program you use. It may make an exact
image of the partition, including the file system’s structures, in which case the restored partition
will be exactly as the original. (Indeed, formatting the drive before restoring the image is not only
unnecessary; anything the format accomplishes will simply be overwritten when the image is restored.)
Or, the software may work on a file-by-file basis, in which case the files themselves will be restored
— to whatever file system has been used in formatting the partition to which you restore them. But,
again, note that a file-by-file restore from a backup of NTFS to a FAT partition will result in
encrypted files being unreadable, because there is no way to decrypt them on FAT!


Virtual Memory in Windows XP

Version 1.6 — Last Updated February 21, 2006

by Alex Nichol
(MS-MVP - Windows Storage Management/File Systems)
© 2002-2005 by Author, All Rights Reserved

This page attempts to be a stand-alone description for general users of the way Virtual Memory
operates in Windows XP. Other pages on this site are written mainly for Windows 98/ME (see
Windows 98 & Win ME Memory Management) and, while a lot is in common, there are significant
differences in Windows XP.

What is Virtual Memory?

A program instruction on an Intel 386 or later CPU can address up to 4GB of memory, using its full
32 bits. This is normally far more than the RAM of the machine. (2 raised to the 32nd power is exactly
4,294,967,296, or 4 GB; 32 binary digits allow the representation of 4,294,967,296 distinct numbers,
counting 0.) So the hardware provides for programs to operate in terms of as much as they wish of
this full 4GB space as Virtual Memory, those parts of the program and data which are currently
active being loaded into Physical Random Access Memory (RAM). The processor itself then
translates (‘maps’) the virtual addresses from an instruction into the correct physical equivalents,
doing this on the fly as the instruction is executed. The processor manages the mapping in terms of
pages of 4 Kilobytes each - a size that has implications for managing virtual memory by the system.
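The arithmetic behind those figures is easy to confirm; a trivial snippet (Python, just for illustration):

    # 32-bit addresses and 4 KB pages: the numbers quoted above.
    address_space = 2 ** 32             # bytes addressable with 32 bits
    page_size = 4 * 1024                # the 4 KB page the processor uses

    print(address_space)                # 4294967296, i.e. 4 GB
    print(address_space // page_size)   # 1048576 pages of 4 KB in that space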

What are Page Faults?

Only those parts of the program and data that are currently in active use need to be held in physical
RAM. Other parts are then held in a swap file (as it’s called in Windows 95/98/ME: Win386.swp)
or page file (in Windows NT versions including Windows 2000 and XP: pagefile.sys). When a
program tries to access some address that is not currently in physical RAM, it generates an
interrupt, called a Page Fault. This asks the system to retrieve the 4 KB page containing the
address from the page file (or in the case of code possibly from the original program file). This — a
valid page fault — normally happens quite invisibly. Sometimes, through program or hardware
error, the page is not there either. The system then has an ‘Invalid Page Fault’ error. This will be a
fatal error if detected in a program: if it is seen within the system itself (perhaps because a program
sent it a bad request to do something), it may manifest itself as a ‘blue screen’ failure with a STOP
code: consult the page on STOP Messages on this site.
If there is pressure on space in RAM, then parts of code and data that are not currently needed can
be ‘paged out’ in order to make room — the page file can thus be seen as an overflow area to make
the RAM behave as if it were larger than it is.

What is loaded in RAM?

Items in RAM can be divided into:

• The Non-Paged area. Parts of the System which are so important that they may never be
paged out - the area of RAM used for these is called in XP the ‘Non-Paged area’. Because
this mainly contains core code of the system, which is not likely to contain serious faults, a
Blue Screen referring to ‘Page Fault in Non-Paged area’ probably indicates a serious
hardware problem with the RAM modules, or possibly damaged code resulting from a
defective Hard disk. It is, though, possible that external utility software (e.g. Norton) may
put modules there too, so if such faults arise when you have recently installed or updated
something of this sort, try uninstalling it.
• The Page Pool which can be used to hold:
• Program code,
• Data pages that have had actual data written to them, and
• A basic amount of space for the file cache (known in Windows 9x systems as
Vcache) of files that have recently been read from or written to hard disk.
Any remaining RAM will be used to make the file cache larger.

Why is there so little Free RAM?

Windows will always try to find some use for all of RAM — even a trivial one. If nothing else it
will retain code of programs in RAM after they exit, in case they are needed again. Anything left
over will be used to cache further files — just in case they are needed. But these uses will be
dropped instantly should some other use come along. Thus there should rarely be any significant
amount of RAM ‘free’. That term is a misnomer — it ought to be ‘RAM for which Windows can
currently find no possible use’. The adage is: ‘Free RAM is wasted RAM’. Programs that purport
to ‘manage’ or ‘free up’ RAM are pandering to a delusion that only such ‘Free’ RAM is available
for fresh uses. That is not true, and these programs often result in reduced performance and may
result in run-away growth of the page file.

Where is the page file?

The page file in XP is a hidden file called pagefile.sys. It is regenerated at each boot — there is no
need to include it in a backup. To see it you need to have Folder Options | View set to ‘Show
hidden files and folders’, with ‘Hide protected operating system files’ cleared.
In earlier NT systems it was usual to have such a file on each hard drive partition, if there were
more than one partition, with the idea of having the file as near as possible to the ‘action’ on the
disk. In XP the optimisation implied by this has been found not to justify the overhead, and
normally there is only a single page file in the first instance.
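If you want to check the file without changing any Explorer settings, a small script can report its
current size (the drive letters below are just examples; adjust them to your own partitions):

    # Sketch: report the size of pagefile.sys on a couple of drives, if present.
    # Reading the size usually works even though the file is hidden and in use
    # by the system; the drive letters listed are only examples.
    import os

    for drive in ("C:\\", "D:\\"):
        path = os.path.join(drive, "pagefile.sys")
        try:
            size_mb = os.path.getsize(path) / (1024 * 1024)
            print(f"{path}: {size_mb:.0f} MB")
        except OSError:
            print(f"{path}: no page file found (or not accessible)")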

Where do I set the placing and size of the page file?

At Control Panel | System | Advanced, click Settings in the “Performance” Section. On the
Advanced page of the result, the current total physical size of all page files that may be in existence
is shown. Click Change to make settings for the Virtual memory operation. Here you can select any
drive partition and choose ‘Custom size’, ‘System managed size’, or ‘No paging file’; then always
click Set before going on to the next partition.

Should the file be left on Drive C:?

The slowest aspect of getting at a file on a hard disk is in head movement (‘seeking’). If you have
only one physical drive then the file is best left where the heads are most likely to be, so where most
activity is going on — on drive C:. If you have a second physical drive, it is in principle better to put
the file there, because it is then less likely that the heads will have moved away from it. If, though,
you have a modern large size of RAM, actual traffic on the file is likely to be low, even if programs
are rolled out to it, inactive, so the point becomes an academic one. If you do put the file elsewhere,
you should leave a small amount on C: (an initial size of 2 MB with a Maximum of 50 MB is suitable)
so it can be used in an emergency. Without this, the system is inclined to ignore the settings and
either have no page file at all (and complain) or make a very large one indeed on C:.
In relocating the page file, it must be on a ‘basic’ drive. Windows XP appears not to be willing to
accept page files on ‘dynamic’ drives.
NOTE: If you are debugging crashes and wish the error reporting to make a kernel or full dump,
then you will need an initial size set on C: of either 200 MB (for a kernel dump) or the size of RAM
(for a full memory dump). If you are not doing so, it is best to make the setting no more than a
‘Small memory dump’: at Control Panel | System | Advanced, click Settings in the ‘Startup and Recovery’
section, and select ‘Small memory dump’ in the ‘Write debugging information’ panel.

Can the Virtual Memory be turned off on a really large machine?

Strictly speaking Virtual Memory is always in operation and cannot be “turned off.” What is meant
by such wording is “set the system to use no page file space at all.”
Doing this would waste a lot of the RAM. The reason is that when programs ask for an allocation of
Virtual memory space, they may ask for a great deal more than they ever actually bring into use —
the total may easily run to hundreds of megabytes. These addresses have to be assigned
somewhere by the system. If there is a page file available, the system can assign them to it — if
there is not, they have to be assigned to RAM, locking it out from any actual use.

How big should the page file be?

There is a great deal of myth surrounding this question. Two big fallacies are:
• The file should be a fixed size so that it does not get fragmented, with minimum and
maximum set the same
• The file should be 2.5 times the size of RAM (or some other multiple)
Both are wrong in a modern, single-user system. (A machine using Fast User Switching is a special
case, discussed below.)
Windows will expand a file that starts out too small and may shrink it again if it is larger than
necessary, so it pays to set the initial size as large enough to handle the normal needs of your system
to avoid constant changes of size. This will give all the benefits claimed for a ‘fixed’ page file. But
no restriction should be placed on its further growth. As well as providing for contingencies, like
unexpectedly opening a very large file, in XP this potential file space can be used as a place to
assign those virtual memory pages that programs have asked for, but never brought into use. Until
they get used — probably never — the file need not come into being. There is no downside in
having potential space available.
For any given workload, the total need for virtual addresses will not depend on the size of RAM
alone. It will be met by the sum of RAM and the page file. Therefore in a machine with small
RAM, the extra amount represented by page file will need to be larger — not smaller — than that
needed in a machine with big RAM. Unfortunately the default settings for system management of
the file have not caught up with this: it will assign an initial amount that may be quite excessive for
a large machine, while at the same time leaving too little for contingencies on a small one.
How big a file will turn out to be needed depends very much on your work-load. Simple word
processing and e-mail may need very little — large graphics and movie making may need a great
deal. For a general workload, with only small dumps provided for (see note to ‘Should the file be
left on Drive C:?’ above), it is suggested that a sensible start point for the initial size would be the
greater of (a) 100 MB or (b) enough to bring RAM plus file to about 500 MB. EXAMPLE: Set the
Initial page file size to 400 MB on a computer with 128 MB RAM; 250 on a 256 MB computer; or
100 MB for larger sizes.
But have a high Maximum size — 700 or 800 MB or even more if there is plenty of disk space.
Having this high will do no harm. Then if you find the actual pagefile.sys gets larger (as seen in
Explorer), adjust the initial size up accordingly. Such a need for more than a minimal initial page
file is the best indicator of benefit from adding RAM: if an initial size set, for a trial, at 50MB never
grows, then more RAM will do nothing for the machine's performance.
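Put as simple arithmetic, the start point suggested above works out like this (Python sketch; the
figures are this article’s rule of thumb, not a Microsoft formula):

    # Rule of thumb from above: the Initial size is the greater of 100 MB or
    # enough to bring RAM + page file to about 500 MB (rounded up to a tidy
    # figure); the Maximum is simply set generously.
    def initial_page_file_mb(ram_mb: int) -> int:
        raw = max(100, 500 - ram_mb)
        return ((raw + 49) // 50) * 50      # round up to a multiple of 50 MB

    MAXIMUM_MB = 800                        # generous; unused space does no harm

    for ram in (128, 256, 512, 1024):
        print(f"{ram:>4} MB RAM -> initial {initial_page_file_mb(ram)} MB, "
              f"maximum {MAXIMUM_MB} MB")

For 128 MB and 256 MB of RAM this reproduces the 400 MB and 250 MB figures in the example
above; for larger machines it falls back to the 100 MB floor.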
Bill James MS MVP has a convenient tool, ‘WinXP-2K_Pagefile’, for monitoring the actual usage
of the Page file, which can be downloaded here. A compiled Visual Basic version is available from
Doug Knox's site which may be more convenient for some users. The value seen for ‘Peak Usage’
over several days makes a good guide for setting the Initial size economically.
Note that these aspects of Windows XP have changed significantly from earlier Windows NT
versions, and practices that have been common there may no longer be appropriate. Also, the ‘PF
Usage’ (Page File in Use) measurement in Task Manager | Performance includes those potential uses
by pages that have not been taken up. It makes a good indicator of the adequacy of the ‘Maximum’
size setting, but not of the ‘Initial’ one, let alone of any need for more RAM.

Should the drive have a big cluster size?

While there are reports that in Windows 95 higher performance can be obtained by having the swap
file on a drive with 32K clusters, in Windows XP the best performance is obtained with 4K ones —
the normal size in NTFS and in FAT 32 partitions smaller than 8GB. This then matches the size of
the page the processor uses in RAM to the size of the clusters, so that transfers may be made direct
from file to RAM without any need for intermediate buffering.

What about Fast User Switching then?

If you use Fast User Switching, there are special considerations. When a user is not active, there will
need to be space available in the page file to ‘roll out’ his or her work: therefore, the page file will
need to be larger. Only experimentation in a real situation will establish how big, but a start point might
be an initial size equal to half the size of RAM for each user logged in.
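As a sketch only (the “half of RAM per user” figure is simply the rough starting point above, to be
refined by watching actual usage):

    # Start point suggested above for Fast User Switching: an initial size of
    # about half the RAM size for each user who may be logged in at once.
    def fus_initial_page_file_mb(ram_mb: int, users_logged_in: int) -> int:
        return (ram_mb // 2) * users_logged_in

    print(fus_initial_page_file_mb(ram_mb=512, users_logged_in=3))   # 768 MB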

Problems with Virtual Memory

It may sometimes happen that the system gives ‘out of memory’ messages on trying to load a
program, or give a message about Virtual memory space being low. Possible causes of this are:
• The setting for Maximum Size of the page file is too low, or there is not enough disk space
free to expand it to that size.
• The page file has become corrupt, possibly at a bad shutdown. In the Virtual Memory
settings, set to “No page file,” then exit System Properties, shut down the machine, and
reboot. Delete PAGEFILE.SYS (on each drive, if more than just C:), set the page file up
again and reboot to bring it into use.
• The page file has been put on a different drive without leaving a minimal amount on C:.
• There is trouble with third party software. In particular, if the message happens at shutdown,
suspect a problem with Symantec’s Norton Live update, for which there is a fix posted here.
It is also reported that spurious messages can arise if NAV 2004 is installed. If the problem
happens at boot and the machine has an Intel chipset, the message may be caused by an early
version (before version 2.1) of Intel’s “Application Accelerator.” Uninstall this and then get
an up-to-date version from Intel’s site.
• Another problem involving Norton Antivirus was recently discovered by MS-MVP Ron
Martell. However, it only applies to computers where the pagefile has been manually resized
to larger than the default setting of 1.5 times RAM — a practice we discourage. On such
machines, NAV 2004 and Norton Antivirus Corporate 9.0 can cause your computer to revert
to the default settings on the next reboot, rather than retain your manually configured
settings. (Though this is probably an improvement on memory management, it can be
maddening if you don’t know why it is happening.) Symantec has published separate repair
instructions for computers with NAV 2004 and NAV Corporate 9.0 installed. [Added by
JAE 2/21/06.]
• Possibly there is trouble with the drivers for IDE hard disks; in Device Manager, remove the
IDE ATA/ATAPI controllers (main controller) and reboot for Plug and Play to start over.
• With an NTFS file system, the permissions for the page file’s drive’s root directory must
give “Full Control” to SYSTEM. If not, there is likely to be a message at boot that the
system is “unable to create a page file.”


Virtual Memory Optimization Guide Rev. 4.1!

Virtual Memory
Back in the 'good old days' of command prompts and 1.2MB floppy disks, programs needed very
little RAM to run because the main (and almost universal) operating system was Microsoft DOS
and its memory footprint was small. That was truly fortunate because RAM at that time was
horrendously expensive. Although it may seem ludicrous, 4MB of RAM was considered then to be
an incredible amount of memory.
However when Windows became more and more popular, 4MB was just not enough. Due to its
GUI (Graphical User Interface), it had a larger memory footprint than DOS. Thus, more RAM
was needed.
Unfortunately, RAM prices did not decrease as fast as RAM requirements increased. This meant
that Windows users had to either fork out a fortune for more RAM or run only simple programs.
Neither was an attractive option. An alternative method was needed to alleviate this problem.
The solution they came up with was to use some space on the hard disk as extra RAM. Although the
hard disk is much slower than RAM, it is also much cheaper and users always have a lot more hard
disk space than RAM. So, Windows was designed to create this pseudo-RAM or, in Microsoft's
terms, Virtual Memory, to make up for the shortfall in RAM when running memory-intensive applications.

How Does It Work?

Virtual memory is created using a special file called a swapfile or paging file.
When the operating system has enough memory, it doesn't usually use virtual memory. But if it
runs out of memory, the operating system will page out the least recently used data in memory to
the swapfile on the hard disk. This frees up some memory for your applications. The operating
system will continuously do this as more and more data is loaded into the RAM.
However, when any data stored in the swapfile is needed, it is swapped with the least recently used
data in the memory. This allows the swapfile to behave like RAM although programs cannot run
directly off it. You will also note that because the operating system cannot directly run programs off
the swapfile, some programs may not run even with a large swapfile if you have too little RAM.