OK, so I sort of have this pet peeve that every "cool" feature
in Vista that you see is … yes, UI!! And, in turn, it tends to be what
I'm asked about the most. The thing is, I wasn't on the shell team (or any feature team, really) and
while Aero and all that is cool, there’s so much more that appeals to us
techies than Flip3D. Nonetheless, the shell is what people see, so it tends to be what people (including magazines) talk about.
So, I'm in a hotel and have some time tonight so I thought I'd try writing a tech tidbit. While I’d
like to think this idea could be a nice weekly thing, I’m guessing
it will fizzle after this one.
Without further ado, one cool change (IMO) in Vista that
you won’t see mentioned in PC Magazine is the new way the kernel does time
accounting. This isn’t about keeping track of what you do with your
time (I wish I had something that could do that for me), but more about what
your processes and threads are doing with the CPU's time.
This is important because, like most of you, I’ve got a million
processes running, all vying for CPU time.
In Windows XP, the kernel's scheduler used the clock timer
to measure "quantum expiration" – that is, when a thread has had enough time and gets interrupted so another thread gets a turn. This works great
… most of the time. For several reasons, though, it was entirely possible for
a thread to end up with anywhere between zero and three turns. This can happen in a
number of situations, and Mark Russinovich gave a great talk at TechEd about some of these changes in depth. But essentially, the scheduler had no way of knowing exactly how much time a
thread actually got to use the processor. Contributing to this are
processor idle time, clock resolution (about a dozen milliseconds between ticks), and interrupt time being charged to whichever
thread happened to be running, meaning some other thread may have been getting a raw deal. (Invariably, it's
the one running my VoIP client, Live Meeting, or some other critical app.)
Vista will now measure the actual cycles a thread uses
(using the Time Stamp Counter at the time an interrupt occurs, but not counting the interrupt time). This
ensures that threads get a full turn, and thanks to some additional features, this is especially useful in multimedia applications.
The kernel can flag multimedia apps with a special priority (a realtime boost) to ensure smooth playback: even
when you're building a huge project or otherwise stressing the system, playing
back music (et al.) should no longer stutter.
Sysinternals' great tool, Process Explorer, can show you the cycle time consumed per process, so if you're the tinkering type, you can see the
actual CPU cycles for yourself; there are also APIs to do this if you're so inclined.
Why wasn't this done earlier? Efficiency. With today's hardware and the ever-increasing
multitasking/multithreaded nature of software (which I think is the next big
software challenge for computer science as a whole), the perf hit of cycle counting is now small enough to be worth it.