Organization: Bilateral Symmetry 
Date: Sun, 23 Nov 1997 21:20:51 -0700 
To: www-style@w3.org 
From: Todd Fahrner  
Subject: Hey Microsoft! cool it with CSS points ok?

I've had trouble getting through to anybody in your organization with something
to say about this issue, so I'm taking it public:

Do you know what you're doing with CSS? On your site? In your developer
materials? In your typography group? You're making the Web hard or impossible to
read - even with Microsoft software. Especially for non-Windows users. And
you're teaching others to help you, both explicitly and by example.

I'm not talking about anything as headline-addled as ActiveX or Java, but about
fonts. And points. Specifically, your use of point units in CSS to specify the
size of fonts in Web pages, especially Microsoft fonts. Before anybody's eyes
glaze over, have a look at the Microsoft corporate home page in Microsoft's
browser for Macintosh, IE3 or 4:
http://www.verso.com/agitprop/points/font_wars.GIF (43K). My comments are in
sticky notes, but the rendering is otherwise undoctored. This page is just the
very prominent tip of an iceberg issue.

The danger of specifying point units in CSS is compounded by their use with
special fonts, whose legibility characteristics at any nominal point size are
better than average, like "big looking" Verdana (and most of the other very fine
free MS Core Web Fonts). Verdana is legible at smaller nominal point sizes on
screen than just about any other I've known. If you specify Verdana and a small
point size in Web pages, though, and it is not available on the browsing end,
another face gets substituted, which with the same nominal point size will
appear too small. So what's the problem if Verdana is free? Ask Adobe, maybe,
whose $49 Web Type package looks too small on screen at MS-tuned point sizes.

More to the point, these sizing issues would go away if CSS authors (and their
corporate sponsors) would make it a policy not to use point or pixel units for
type in Web pages. These units render inconsistently, so any illusion of greater
control is, well, illusory, and finally unfriendly. Speaking of unfriendly, have
you all noticed that IE4 shipped without the font size adjustment thingie on the
toolbar? There's almost enough here for a conspiracy theory, or maybe a Ralph
Nader crusade.... :^)

What's the alternative?

CSS allows author/designers to specify the size of fonts and other objects like
graphics in units or expressions that can be relative to user preference or
need: these units are em, ex, %, and "larger" or "smaller".[1] Your CSS
implementations (and Netscape's, FWIW) don't implement these consistently, so (I
guess) you use the other class of units: points and pixels. These are *device
dependent* units for the purposes of screen display, subject to little or no
user adjustment. My GIF, above, shows how point units render on Macintoshes:
smaller than in Windows, possibly too small to be read. This is not a bug, it's
just a poor choice of unit for screen display, and the CSS1 Recommendation says
so.[2]
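
For the record, here's the sort of thing I mean - a sketch of mine, not
anything lifted from microsoft.com:

   BODY   { font-size: 100% }    /* defer to the reader's chosen default */
   H1     { font-size: 150% }    /* scales from whatever that default is */
   STRONG { font-size: larger }

Set your browser's default type bigger or smaller and everything follows
along; nothing is nailed to a device.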

* * *

The issues are a little hard to follow, so I suspect misunderstanding has driven
some decisions. I'm hoping for public acknowledgment that this is a problem, and
an explanation of how (or if) you plan to address it.


1 http://www.verso.com/agitprop/scale/

2 http://www.w3.org/TR/REC-CSS1-961217#length-units

__________________ 
Todd Fahrner mailto:fahrner@pobox.com



*********


To: "Eric A. Meyer" , www-style@w3.org 
From: Todd Fahrner  
Subject: Re: Hey Microsoft! cool it with CSS points ok?

> > The danger of specifying point units in CSS is compounded by their use
> > with special fonts, whose legibility characteristics at any nominal
> > point size are better than average, like "big looking" Verdana (and
> > most of the other very fine free MS Core Web Fonts).

> Okay, I'm not a font expert, so maybe I'm a little confused.  I had 
> thought that points measured distance, as in 1/72 of an inch.  Is this so?

Yes. Sort of. It doesn't matter. :^)

> If not, is it supposed to be so?  Because I can understand not using
> pixels to specify font size, given the wide range of monitor resolutions,
> but I had been assuming that points were a good, generic solution for
> the problem of creating legible pages that were pretty much
> resolution-independent.

This is the case for Web pages that are being sent to printers with
resolution-independence features. Printers know how big their dots are, and can
adjust the count of dots to print a 1" line accurately, whether it's a 300 dpi
or a 1200-dpi printer. Their dots are generally small enough that you can keep
out of trouble with even the smallest common type sizes, though a 1200-dpi
printer is capable of printing text you can read only with a magnifying glass,
while a 300-dpi printer with the same challenge will produce only a microscopic
row of toner dots.
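
To put numbers on it: 9-point type is nominally 9/72 = 0.125 inch tall. A
300-dpi printer gets about 37 rows of dots to draw that (300 x 0.125); a
1200-dpi printer gets about 150. Plenty either way.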

With computer display systems, all of this goes out the window. The OS doesn't
generally know how big the dots on the monitor are - there are all these analog
controls and messy physics intervening - and anyway who cares? People don't read
from screens with the same ease and varying distance for magnification as from
paper. So the OS takes a guess.

With the Mac it guesses that the dots are 1/72 inch, which is convenient because
typographical points are that size. So when you ask a Mac to show you 9-point
type, it allocates 9 pixels (vertically), and fills in the dots as best it can.
Tell it to show you smaller, and you get the MS effect: mud. Note that the mud
is not necessarily too small in a literal physical sense - at 72 dpi, it's
accurate, but the dots themselves are too coarse. This is why it doesn't matter
whether or not the display is representing the physical measures (points)
accurately on screen.

With Windows, the OS guesses that screen dots are smaller: 1/96 inch. Or maybe
1/120 inch ("large fonts"). Relative to the Mac, this means that the system
allocates 33% more dots per glyph at any given point size. And this is the root
of our troubles: Windows authors make guesses about the legibility of
point-specified text based on how many pixels it occupies on *their* screens -
NOT how big it is (as in "pull out the calipers and measure"). If it's the
pixel-patterns they're after, they should specify pixels (but there's a better
way still).
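
The arithmetic, if you want it: pixels = points x (assumed dpi / 72). So
nominal 9-point type gets 9 pixels of height on a Mac (72 dpi), 12 on
default Windows (96 dpi), and 15 under "large fonts" (120 dpi) - three
different bitmaps for the same nominal size.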

Now, you may be saying: sounds like Windows is better in this regard. Hogwash.
Both systems are inaccurate in their representation of real physical measures
when the actual display resolution varies from either 72 or 96 dpi. Actually,
many Windows systems (with Matrox video drivers) let users customize the
resolution of their systems: to 72 dpi, 150 dpi - you name it. I don't mean
changing from, say, 800x600 to 1600x1200. I mean changing the number of dots per
inch the system will assume when asked to render physical measures like points
or inches. If you visit the MS homepage with your display system set to
rasterize type at 150 dpi, all the pixel-specified artwork would shrink
pathetically alongside the huge type.... So maybe Windows does have an edge
here, but specifying type in points for screen display is just as silly as ever.

Unless you specify all screen graphics in the same unit system: not in pixels.

> Given that this is not, apparently, the case, what is left to us poor
> Web authors?  Todd continues...
>
> > More to the point, these sizing issues would go away if CSS authors
> > (and their corporate sponsors) would make it a policy not to use point
> > or pixel units for type in Web pages. These units render
> > inconsistently, so any illusion of greater control is, well, illusory,
> > and finally unfriendly.
> > [...zap...]
> > CSS allows author/designers to specify the size of fonts and other
> > objects like graphics in units or expressions that can be relative to
> > user preference or need: these units are em, ex, %, and "larger" or
> > "smaller".
>
> These are all, I agree, methods of dealing with the issue.  However,
> they're still a little short of the mark at which Microsoft et al. are
> aiming, and here's why.

What's that mark? Something you could do with a big GIF? A PDF? Leave out the
nonvisual considerations: what's the vision for the visual behavior of the page?

> Let's say I want to create a sidebar of links in which the text is small
> enough to minimize canvas usage, but large enough to be read.  I can
> define this text as being "font-size: 66%;", and if the reader has his
> default display set to a font size which makes this text too small, then
> raising that size will make the sidebar more legible....and make the
> main-body text much larger, possibly badly upsetting the balance of the
> page.

What do you mean when you say "the balance of the page"? I suspect you mean "the
type areas will get out of synch with the graphics, losing alignment,
appropriate relative masses, etc." The disconnect occurs because not all
elements' sizes are specified in the same unit system. The graphics are in
pixels and the type in points (or ems or whatever). This is a pretty simple
recipe for imbalance. More importantly, because neither of these units are
susceptible to inheritance, there's no communication among elements in the
document's rendering tree. It's a dead tree, and it falls over when you push it.
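
Here's a sketch of what a one-unit-system page might look like - the
selectors and values are illustrative, not a recipe:

   BODY      { font-size: 100% }  /* the reader's preference rules      */
   DIV.side  { font-size: 66% }   /* your sidebar, relative to BODY     */
   IMG.badge { width: 4em; height: 1em }  /* graphics in type units too */

Bump the base size and the sidebar, the body text, and the graphics all
move together, so the proportions - the balance - survive the bump.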

If your page is like a nice orderly cemetery, moving some of the headstones
around or inflating them really can upset the balance of the space. But if your
page is like a basketball court, things get out of balance if the elements
*don't* all respond to the movements, expansions, and contractions of their
peers. I'm not talking about gratuitous typographical animation, but about
essential typographical adaptation to the constraints of the rendering
environment and the needs of the user. It's about porting visual design
intelligence into runtime, out of "design time."

There's such a rich set of possibilities for dynamic design with CSS, and maybe
a little scripting to help in a pinch. (I'd like to make line height a function
of column width and the ratio of ex to em of the face in use, and column width
in turn a function of window width, but with an upper constraint of 36em....)
Today's implementations have enough gotchas to make this a fairly academic
reverie, but hey somebody's gotta do it.
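
You can't express those functions in CSS1, but the fixed parts might look
like this (max-width is a CSS2 property, so consider this wishful thinking
for now):

   BODY { max-width: 36em;     /* upper constraint on column width     */
          line-height: 1.3 }   /* a plain number scales with font-size */

The column-width-as-a-function-of-window part comes free from normal flow;
the line-height-as-a-function-of-measure part is what would need the
scripting.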

CSS2 is great, but a lot of this would be possible if there were any CSS1
implementations. You know what happens if there are bubbles in your clay when
you fire the glaze....


******


To: dfa , wwwac@legion.echonyc.com 
From: Todd Fahrner  
Subject: Re: Mac vs. Win Screen Res + CSS Font Sizing?

dfa wrote:

> I'm confused about what screen resolution is typically used by Windows
> PCs when viewing Web pages.
>
> My understanding has been that typical PCs with 13" monitors have 72dpi
> physical screen resolution -- but that for some reason Windows uses a
> 96dpi (96:72) scale for *measuring* fonts.

I question whether "typical" PCs do in fact run at 640x480 on 13-inch monitors
(72 dpi). Windows is a fairly chrome-intensive UI, so to get any work done it
really does help to run in higher res. It's been my experience that most do run
at something closer to 96 dpi, with a significant minority running at upwards of
108 dpi.

At any rate, the "factory setting" for the OS assumes 96 dpi for the purpose of
rendering typographical points, and then only when the default "small fonts"
setting is left in place. "Large fonts" maps everything to a nominal 120-dpi
display.
Mostly. Matrox video drivers allow Windows users to map points to pixels any way
they please. With such a setup, you can emulate a Mac's 72-dpi, or a Sun
station's typically much higher res.

The bottom line is you can't reliably make assumptions about how points will
rasterize on a PC user's display, the way you can with Mac users, where 1 point
= 1 pixel, regardless of the physical pixel density of the display. Even if you
knew the nominal display size and resolution settings, you'll never be able to
account for the state of the analog controls on the CRT, which can change the
physical density considerably (think of a projection system: that's what a CRT
is. How many centimeters tall was Princess Leia's image *on screen* in the "Help
me, Obi-Wan" scene? How many points? These are not the right questions.)

> So a 9pt font would visually appear 12pt on screen (12/72"), and a 12pt
> font would visually appear 16pt on screen (16/72"). But that other than
> this font measurement issue, Windows PCs function in a normal 72dpi way
> -- GIFs produced by and for Windows machines are always 72dpi not 96dpi,
> and they don't appear 75% of the right size when viewed on Mac.

GIFs are not resolution-independent; i.e., they don't have any notion internally
of an inch or other physical measure. They just contain n pixels on x and y
axes. When you create a "72-dpi" graphic on a Mac, you're really just telling
Photoshop to show all pixels at 1:1, interpolating neither up nor down. When you
save as GIF, this scale information is lost. When you display this same GIF on a
system with an actual physical display of 96 dpi (which could be a Mac), then
you could call it a "96 dpi GIF". The fact that you can specify a pixel-to-inch
ratio for a graphic in Photoshop is to accommodate resolution-independent
printing, where scaling must occur unless you really want, say, 1200 pixels per
inch, for a microscopic nav bar.
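
A concrete case: a 288-pixel-wide GIF spans 288/72 = 4 inches on a true
72-dpi display, but only 3 inches on a 96-dpi one. Same file, same pixels;
only the glass changed.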

When Netscape prints a GIF, it scales it based on a virtual 120-dpi display.
This is out of synch with the W3C suggestion to use virtual pixels at 90dpi:

   "The suggested reference pixel is the visual angle of one pixel on a
   device with a pixel density of 90dpi and a distance from the reader of
   an arm's length. For a nominal arm's length of 28 inches, the visual
   angle is about 0.0227 degrees."
   (From http://www.w3.org/TR/REC-CSS1#length-units )
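
(The trigonometry checks out, if you care: a 1/90-inch pixel viewed from 28
inches subtends arctan((1/90)/28), which is about 0.0227 degrees.)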

> But what's confusing me is why would Windows use a 96dpi standard for
> sizing fonts if monitors are typically 72dpi, and aren't there in fact
> lots of standard consumer PC systems with 800x600 13" monitors, where
> the actual physical screen resolution *is* 96dpi?

I think MS chose 96 dpi in order to accommodate budget-conscious word processors
and desktop publishers. You can display most of a letter-size page on a 17-inch
monitor at 96 dpi. At "true WYSIWYG" 72 dpi, you need a more expensive 21-inch.
It's no wonder to me why the Mac has been the designer's choice - less
confusing: a point is a pixel is a point is a pixel, on screen and in print,
even if the display scales everything down a bit.

> -- the user could *choose* an 800x600/96dpi resolution on many systems,
> and then use Windows' "Large Fonts" function... resulting in a 9pt font
> being enlarged to 12/72" by Windows, reduced to 9/72" by the resolution
> setting, and re-enlarged to 12/72" by "Large Fonts" (?!)... but then the
> browser window and graphics of Web pages would look too small, and the
> user would know to switch back to 72dpi and turn off "Large Fonts";

Honest question: What percentage of Windows users change their screen resolution
often in the course of normal work? I'll wager it's pretty low. What percentage
of Windows users can tell you the physical resolution of their displays? The
color depth? Single digits, if not fractions thereof.

> -- if a Windows user sees a Web page with type that *looks* 12/72" high,
> and prints that out, the printed page will have 9pt type and everything
> will be displaced;

If such displacement bothers you, a page-layout language like PDF will serve you
much better than HTML ever will, even with CSS.

> -- GIFs for Windows browsers are always 72dpi and for the typical Web
> user appear at the same visual screen size as on Mac;
>
> -- if you use JavaScript to set a Navigator window to 200 pixels by 200
> pixels, for the typical Web user, this window will be the same visual
> size on Mac or Windows; and,

When the physical screen resolutions are the same, yes. This is unlikely.

> FOR CSS EXPERTS:
>
> -- if you use CSS to specify a *point* size of 12pt, this will vary in
> size between Mac and Windows, but if you specify a *pixel* size of 12
> pixels, for the typical Web user, this will be identical on Mac and
> Windows.

They will take up more or less the same number of pixels (no relation to their
real size - just like GIFs) provided the font data is identical, and the
OS-specific font rasterizers do a comparable job. These conditions will hold
only for a few fonts like Verdana, assuming the user has downloaded and
installed them. Here's the clincher, though: Mac Netscape and Win Netscape
interpret even pixels differently. IE gets it right. IE will also accept
fractional point values, while Netscape will round off.

> (So why does Netscape recommend *not* using pixel size and using
> relative sizing instead??)

Because pixel-speced type is overspeced when you can't know the value of a
pixel, and furthermore can't assume that your readers have your eyesight. Points
are generally a disaster across platforms, but at least there's the theoretical
possibility that some rendering (print?) will result in right-sized type. Best
of all are percentages and em-units, which can be made relative to some value
the user has chosen as comfortable. But Netscape 4 misinterprets ems.

IE4, following the CSS spec, lets you spec height and width of images in
typographical units like ems or points, too, allowing you to maintain a
consistent relation between word and image.
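
So, for instance (values invented for illustration):

   IMG.rule { width: 30em; height: 1em }

keeps a horizontal-rule graphic proportional to the text around it, at
whatever size the user reads.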

Did I mention that IE3 doesn't scale pixel-speced type when printing?
Microscopic type. And it will destroy pages speced in ems (see
http://www.verso.com/agitprop/css/csstrainwreck.gif).

This whole area is a big flaming mess, dfa.