Why Windows Web Pages Have Tiny Text

by Geoff Duncan

As a technical journalist, I feel compelled to address an unspoken truth of the trade: We sometimes gloss over stuff. In part, it's unavoidable. Just as knowledge can have infinite value, conveying knowledge can require an infinite number of words. But, darn it, sometimes we leave out stuff for your own good! This world contains truths that can make your head hurt, and it's our job to protect you, our innocent readers, from these malevolent abysses.

Such was the case with Adam's article "Driving the 4.5 Web Browsers" in TidBITS-465, in which he noted: "Because Windows thinks monitors use a screen resolution of 96 dpi by default, rather than the Mac's 72 dpi, Windows-based Web designers often lower the font size so text doesn't appear too large for Windows users. Mac users are then faced with tiny text that's hard to read."

Adam's statement is correct, but TidBITS readers don't know what's good for them. More people responded to those two sentences than sometimes respond to an entire issue, asking questions about font rendering, the physical resolution of monitors, whether Windows or the Macintosh does the "right" thing, and much more.

If you can't leave well enough alone, fine. This article turns off a few of the journalistic shields we normally employ for your benefit. Don't say we didn't _try_ to protect you.

**Squint-O-Vision** -- Most Macintosh users have encountered Web pages with unbearably tiny text. If you haven't, spend a few minutes browsing Microsoft's Web site - especially pages devoted to Windows itself - where it's not uncommon for Mac users to see text one to four pixels in height.

This phenomenon isn't limited to the Web. How often have you been forced to edit a document from a Windows user who thinks 10 point Times is a wonderful screen font? Or maybe you've had to review a spreadsheet formatted in 9 point Arial? Do all Windows users have some sort of telescopic vision that makes text appear larger to them?

Why, yes. They do.

**Making Points** -- The confusion begins with a unit almost everyone uses: the point. People use points every day, choosing a 12 point font for a letter, or a 36 point font for a headline. But do you know what a point is?

Many people will tell you that a point is 1/72 of an inch. That's correct, but only for the imaging systems used in most computers (including Apple's QuickDraw and Adobe's PostScript). Outside of a computer, the definition of a point varies between different measurement systems, none of which defines a point as exactly 1/72 of an inch.

Technically, a point is one twelfth of a pica. What's a pica? The first modern point system was published in 1737 by Pierre Fournier, who used a 12-point unit he called a cicero that was 0.1648 inches. Thus, a point would be a unit of length equal to 0.0137 inches. By 1770, Francois Ambroise Didot converted Fournier's system to align with the legal French foot of the time, creating a larger 0.1776-inch pica, with 12 points each measuring 0.0148 inches. As fate would have it, the French converted to the metric system by the end of the 18th century, but Didot's system was influential and is still widely used in Europe. In Didot's system, a pica is larger than one-sixth of an inch, and thus his point - still called the Didot point - is larger than 1/72 of an inch.

The U.S., of course, did its own thing. In 1879 the U.S. began adopting a system developed by Nelson Hawks, who believed the idea of a point system was his and his alone. Claims of originality aside, Hawks' system came to dominate American publishing within a decade, and today an American pica measures 0.1660 inches, just under one sixth of an inch, and a point (often called a pica point) is 0.0138 inches, very close to Fournier's original value, but still a tiny bit less than 1/72 of an inch.

Also in 1879, Hermann Berthold converted Didot's point system to metric, and the Didot-Berthold system is still used in Germany, Russia, and eastern Europe. Just to make things more confusing, many Europeans measure type directly in millimeters, bypassing the point altogether.
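
If you'd like to check the arithmetic yourself, here's a quick Python sketch using the pica values quoted above (these are the figures given in this article; other references round slightly differently):

```python
# Point sizes implied by the pica values quoted above (all in inches).
PICAS_IN_INCHES = {
    "Fournier (1737)": 0.1648,
    "Didot (c. 1770)": 0.1776,
    "American (Hawks)": 0.1660,
}

for system, pica in PICAS_IN_INCHES.items():
    point = pica / 12  # a point is one twelfth of a pica
    print(f"{system}: pica = {pica:.4f} in, point = {point:.4f} in")

# For comparison, the point your computer believes in:
print(f"QuickDraw/PostScript: point = {1 / 72:.4f} in")
```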

The term pica may confuse readers old enough to remember typewriters and daisy wheel printers. Those technologies described type in terms of pitch, or how many characters fit into a horizontal inch. Pica type corresponded to 10 characters per inch, elite to 12 characters per inch, and micro-elite to 15 characters per inch. These days, you'd simulate these pitches using a monospaced font (like Courier) at 12, 10, and 8 points, respectively.

For the purposes of understanding why text on Windows Web pages often looks too small on a Macintosh, you can do the same thing your computer does: assume there are 72 points to an inch.

**Not Your Type** -- When you refer to text of a particular point size, you're describing the height of the text, rather than its width or the dimensions of a particular character (or glyph) in a typeface. So, if there are 72 points to an inch, you might think 72 point characters would be one inch in height - but you'd almost always be wrong. The maximum height of text is measured from the top of a typeface's highest ascender (generally a lowercase d or l, or a capital letter) to the bottom of the face's lowest descender (usually a lowercase j or y). Most glyphs in a typeface use only a portion of this total height and thus are less than one inch in height at 72 points.

If this doesn't make sense, think of a period. At any point size, a period occupies only a small fraction of the height occupied by almost every other character in a typeface. When you're typing 72 point text you don't expect a period to be one inch high. Lowercase letters generally use less total vertical space than capital letters, and capital letters typically use about two thirds of the complete vertical height. (If you're curious, the tallest letter in a typeface is often the capital J: it sometimes has a descender even when capitalized.)
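
If you want a rough sense of the numbers, here's a small sketch; the two-thirds ratio is only the typical value cited above, not a property of any particular typeface:

```python
# Rough illustration: if capital letters occupy about two thirds of the
# full ascender-to-descender height, how tall are they physically?
# (The 2/3 ratio is a typical value, not a rule for any specific font.)
CAP_RATIO = 2 / 3

for point_size in (12, 36, 72):
    cap_points = point_size * CAP_RATIO
    cap_inches = cap_points / 72  # assuming 72 points per inch
    print(f"{point_size} pt type: caps are about {cap_points:.0f} pt "
          f"({cap_inches:.2f} in) tall")
# 72 pt type: caps are about 48 pt (0.67 in) tall -- well short of an inch.
```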

To make things more confusing, some typefaces break these rules. Special symbols, diacritics, and punctuation might extend beyond the limits specified by a point size, although it's rare for a single character to exceed both the upper and lower limits. Other typefaces may not use the full vertical height available; that's why Times seems smaller than many other faces at the same point size.

If a point size is an indication of text height, what about text width? Unlike points, which are (sort of) an absolute measure, text width is measured using the em. An em is a width equal to the point size of the type in which it's used. So, in 36 point type, an em is equal to 36 points; in 12 point type, an em is equal to 12 points. The em unit was originally based on an uppercase M, which was often the widest character in a typeface back in the days of handset type. Today, however, a capital M usually isn't a full em in width, allowing for a little bit of space before and after the character. The em is important because it is a relative unit, but the implications of the em are beyond the scope of this article.
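
Since an em is defined by the point size, the conversion is trivial; here's a tiny sketch (the two-em indent is simply an example made up for illustration):

```python
# An em equals the point size of the type in which it's used, so any
# measurement expressed in ems scales along with the type.
def ems_to_points(ems: float, point_size: float) -> float:
    return ems * point_size

for point_size in (12, 36):
    indent = ems_to_points(2, point_size)  # a two-em paragraph indent
    print(f"{point_size} pt type: 1 em = {point_size} pt, "
          f"2 em indent = {indent:.0f} pt ({indent / 72:.2f} in)")
```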

**Pixel Dust** -- Now that you know a little about the somewhat fuzzy ways type is measured, how does a computer use this information to display text on a monitor?

Let's say you're writing a novel, and you set your chapter titles in 18 point text. First, the computer needs to know how tall 18 points is. Since the computer believes there are 72 points in an inch, this is easy: 18 points is 18/72 of an inch, or exactly one quarter inch. The computer then proceeds to draw text on your screen that's one quarter inch high.

This is where the universe gets strange. Your computer thinks of your monitor as a Cartesian grid made up of pixels or "dots." To a computer, your display is so many pixels wide by so many pixels tall, and everything on your screen is drawn using pixels. Thus, the physical resolution of your display can be expressed in pixels per inch (ppi) or, more commonly, dots per inch (dpi).

To draw 18 point text that's one quarter inch in height, your computer needs to know how many pixels fit into a quarter inch. To find out, you'd think your computer would talk to your display about its physical resolution - but you'd be wrong. Instead, your computer makes a blatant, nearly pathological _assumption_ about how many pixels fit into an inch, regardless of your monitor size, resolution, or anything else.

If you use a Mac, your computer always assumes your monitor displays 72 pixels per inch, or 72 dpi. If you use Windows, your computer most often assumes your monitor displays 96 pixels per inch (96 dpi), but if you're using "large fonts" Windows assumes it can display 120 pixels per inch (120 dpi). Unix systems vary, but generally assume between 75 and 100 dpi. Most graphical environments for Unix have some way to configure this setting, and I'm told there's a way to configure the dpi setting used by Windows NT and perhaps Windows 98. However, the fundamental problem remains: the computer has no idea of your display's physical resolution.

These assumptions mean a Macintosh uses 18 pixels to render 18 point text, a Windows system typically uses 24 pixels, a Unix system typically uses between 19 and 25 pixels, and a Windows system using a large fonts setting uses 30 pixels.

Thus, in terms of raw pixels, most Windows users see text that's 33 percent larger than text on a Macintosh - from a Macintosh point of view, Windows users really do have telescopic vision. When you view the results on a single display where all pixels are the same size, the differences range from noticeable to dramatic. The Windows text is huge, or the Mac text is tiny - take your pick.
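
Here's the arithmetic as a short sketch; the dpi figures are each platform's assumption described above, not anything queried from actual hardware:

```python
# Each platform's *assumed* display resolution, in dots per inch.
ASSUMED_DPI = {
    "Mac OS": 72,
    "Windows (small fonts)": 96,
    "Windows (large fonts)": 120,
    "Unix (low end)": 75,
    "Unix (high end)": 100,
}

POINT_SIZE = 18  # our 18 point chapter title

for platform, dpi in ASSUMED_DPI.items():
    inches = POINT_SIZE / 72      # the computer's 72-points-per-inch rule
    pixels = round(inches * dpi)  # pixels used to draw the text
    print(f"{platform}: {pixels} pixels tall")

# Raw pixel ratio of standard Windows text to Mac text:
print(f"Windows/Mac: {96 / 72:.2f}")  # 1.33 -- about 33 percent larger
```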

**Size Does Matter** -- This leads to the answer to our $64,000 question: why text on Web pages designed for Windows users often looks tiny on a Mac. Say your computer's display - or Web browser's window - measures 640 by 480 pixels. Leaving aside menu bars, title bars, and other screen clutter, the Mac can display 40 lines of 12 point text in that area (with solid leading, meaning there's no extra space between the lines). Under the same conditions, Windows displays a mere 30 lines of text; since Windows uses more pixels to draw its text, less text fits in the same area. Thus, Windows-based Web designers often _specify_ small font sizes to jam more text into a fixed area, and Macintosh users get a double whammy: text that was already drawn with fewer pixels on a Macintosh screen is reduced further still, sometimes to the point of illegibility.
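
Again, a quick sketch of the line count, assuming solid leading (line height equals the point size) and considering nothing but a 480-pixel-tall text area:

```python
# Lines of solid-leaded 12 point text in a 480-pixel-tall area.
AREA_HEIGHT_PX = 480
POINT_SIZE = 12

for platform, dpi in (("Mac OS", 72), ("Windows", 96)):
    line_height_px = POINT_SIZE * dpi / 72  # pixels per line of text
    lines = int(AREA_HEIGHT_PX // line_height_px)
    print(f"{platform}: {line_height_px:.0f} px per line, {lines} lines")
# Mac OS: 12 px per line, 40 lines
# Windows: 16 px per line, 30 lines
```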

**Dots Gotta Hurt** -- The fundamental issue is that the computer is trying to map a physical measurement - the point - to a display device with unknown physical characteristics. A standard computer monitor is basically an analog projection system: although its geometry can be adjusted to varying degrees, the monitor itself has no idea how many pixels it's showing over a particular physical distance. New digitally programmable displays - including both CRTs and flat LCD panels - would seem to offer hope of conveying resolution information to a computer. Many displays do report some of their capabilities to the host computer, including the logical resolutions (in pixels) they support, but I know of no system that passes along physical resolution, and full support would have to be built into video hardware and operating systems alike.

Why do Windows and the Macintosh make such different assumptions about display resolutions? On the Macintosh, it had to do with WYSIWYG: What You See Is What You Get. The Mac popularized the graphical interface, and Apple understood that the Macintosh's screen display had to correspond as closely as possible to the Mac's printed output. Thus, pixels correspond to points: just as the Mac assumes there are 72 points per inch, it assumes it displays 72 pixels per inch. In the days before 300 dpi laser printers, the Mac was a stunning example of WYSIWYG, and the screens in the original compact Macs and Apple's early color displays ran from 71 to 74 dpi - close enough to 72 dpi to hide the fact that the Mac had no idea about the display's physical resolution.

In fact, Apple resisted higher display resolutions and multisync displays for years; after all, the further a pixel drifted from 1/72 of an inch, the less of a WYSIWYG computer the Mac became. The rising popularity of Windows, cost pressures from PC manufacturers, and strong demand for laptops finally caused Apple to relent on this issue, and today Macintosh displays generally show more than 72 dpi. A 17-inch monitor running at 1024 by 768 pixels usually displays between 85 and 90 dpi. The iMac's built-in display has a 13.8-inch viewable area (measured diagonally); a quick check with the Pythagorean Theorem indicates the iMac's screen is a rather chunky 58 dpi at 640 by 480, but almost 93 dpi at 1024 by 768 resolution. A PowerBook G3 with a 13.3-inch display area (also measured diagonally) displays over 96 dpi at 1024 by 768. SGI's upcoming 1600SW flat panel display sports a resolution of 110 dpi.

http://www.sgi.com/peripherals/flatpanel/
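
For the curious, here's the Pythagorean check as a sketch; the diagonal figures are the viewable measurements quoted above:

```python
import math

def physical_dpi(width_px: int, height_px: int, diagonal_inches: float) -> float:
    """Pixels per inch, given pixel dimensions and the viewable diagonal."""
    diagonal_px = math.hypot(width_px, height_px)  # diagonal in pixels
    return diagonal_px / diagonal_inches

print(f"iMac, 640 by 480:          {physical_dpi(640, 480, 13.8):.0f} dpi")
print(f"iMac, 1024 by 768:         {physical_dpi(1024, 768, 13.8):.0f} dpi")
print(f"PowerBook G3, 1024 by 768: {physical_dpi(1024, 768, 13.3):.0f} dpi")
```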

The historical reasons for Windows' assumption of 96 dpi displays are less clear. The standard seems to have been set by the Video Electronics Standards Association (VESA) with the VGA (Video Graphics Array) specification, which somewhat predates the market dominance of Windows. From what I can tell, there may have been compatibility concerns with older CGA and EGA video systems, and it seems no one at VESA felt text below 10 or 12 points in size would be legible on a screen with a resolution of less than 96 dpi. The Macintosh proved this wrong, largely through careful design of its screen bitmap fonts, like Geneva, Monaco, Chicago, and New York. Ironically, although the resolution of mainstream Macintosh displays is indeed creeping closer to 96 dpi, Windows displays routinely sport resolutions well below the assumed 96 dpi. If you know a Windows user with a 17-inch monitor and a 1024 by 768 resolution, their monitor is probably displaying between 85 and 90 dpi - just as a Macintosh would be.

**Connect the Dots** -- Hopefully, this article shows how computers can take a mildly fuzzy measurement (the point), use it as a yardstick to render characters that themselves occupy an arbitrary portion of their point size, and finally convey that information to a display that, in all probability, does not conform to the computer's internal imaging system. The situation is a mess, even leaving platform differences out of the equation.

Windows advocates occasionally trumpet their systems' higher text resolution as an advantage, or claim the Mac's lower text resolution is its dirty little secret. Historically, neither claim rings particularly true. Although text on a Windows system is rendered with more fidelity at a particular point size, Windows sacrifices WYSIWYG to get those extra pixels. Unfortunately, the bottom line is that no mainstream system on either platform is likely to display accurately sized text, so everyone loses.

And that's all the print that fits... or doesn't, depending on your system.