
Differences distract from reading: true or not?


Denis_Masharov
This topic was imported from the Typophile platform

I took part in a Facebook discussion about the differences in the details of letters (drops, terminals, etc.). Two opinions emerged:

1. Legibility should be based on the ability to spot small differences between characters: the more alike the letters, the harder they are to read.

2. Even the smallest difference in the details "separates" a letter from the word, draws attention to it, and thus slows reading.

Legibility: does it help or hinder readability?

There is a grain of truth in both views... I'm confused.
Help me understand; please feel free to express your opinion.

Andreas Stötzner

I think it is about achieving a balance. The letters of a writing system should be ‘of one stock’, i.e. shaped from some common ground pattern/model in order to form a unit. On the other hand, the single parts need to be sufficiently different to keep them distinguishable and avoid confusion. Both extremes will distract the reader’s eye from comfortable recognition: too much similarity and too much diversity.

I once found the thorough study of Bembo quite enlightening about those questions.

flooce

I'll give you my opinion as a reader, since I don't work in this industry. I find an overly “reductionistic” font bland, in a way that makes the letters or words actually “blur” more; it is harder for me to read. On the other hand, a font which is skillfully crafted but too fancy might seem inappropriate for the context, yet it has never been hard for me to read. As far as I know, we do not identify single letters one after another but word shapes or syllable shapes, so I think adding more character to the individual letters will help distinguish syllables and words from each other, and will not distract. This is because (as far as I know) we do not pay much attention to the individual letter, but to groups of letters.
This is based on my reading experience with serif fonts. With sans serifs I find this question harder to answer. Do humanist models have more diversity of letter shapes than geometric sans faces? Or do they follow different principles?

William Berkson

Based on my own work and on the work of Matthew Luckiesh that I rediscovered, I think that differentiation is important up to a threshold, but beyond that other factors become more important. For example, serifs help by differentiating the extremes of letters, which, according to research, seem to be more salient in letter recognition. But beyond a threshold, further differentiation of letters becomes 'noise': it doesn't help and can be distracting. Above the threshold of adequate differentiation of individual letters, issues of weight, rhythm of the characters, and spacing become more important.

Here is an account of my own explorations.

And here is a discussion of Luckiesh's work.

hrant

It's always nice to see the opposition of legibility and readability discussed. It seems like a paradox, but I've believed in the difference between the two for a while now. In terms of visualizing how they relate, I find it useful to refer to geometry: imagine two vectors at a right angle, separated by 90 degrees. They're both pointing towards the same "quadrant", but they also oppose each other to an important extent, and the balance one needs to find depends on how far one wants to push into immersive reading; the further you push into it, the more letters need to contribute to divergent boumas, as opposed to merely being divergent on their own.

So for example sans letterforms are easier to identify,
but serif forms "talk to each other" better, essentially
becoming less themselves. And if you look at Legato
for example, the whites of the letters talk to each other
instead of with their own blacks, creating what I see
as much stronger bouma bonding.

It's also important to realize that it's quite trivial to ensure
legibility: people can read almost any letterform if given
enough time. And this is where the expressive dimension
of type (think of display type) nicely comes in. This doesn't
mean text fonts can't be expressive, it's that they express
themselves more at the level of overall texture.

BTW, here's something light I once put together:
http://www.themicrofoundry.com/ss_read1.html
For a much deeper treatment you could read
my article in issue #13 of Typo magazine.

hhp

hrant

Sorry, I forgot to address an important aspect of the question: how much divergence is too much? To me the answer is simpler than one might think: the divergence is too much when the reader's eye jumps too far ahead (to a subsequent line) because it's distracted by an unusual texture, which often happens when a "pushy" glyph is doubled - often the binocular "g", especially the closed-bottom form.

hhp

John Hudson

Can anyone point to empirical evidence that making letters less individually recognisable ever results in improved word recognition? Or vice versa? Or that making either letters or words less recognisable ever decreases reading fatigue? [Word recognition and comfort over time seem to me the cornerstones of any measurable notion of readability.]

I don't think there is an opposition between legibility and readability if one assumes spatial frequency and stylistic harmony, which is a general assumption of type design and typography: we don't make or use 'ransom note' typefaces for reading; we make and use types which operate within single spatial frequency channels and share stylistic features that allow the necessarily individuated forms of letters to be grouped without individual letters standing out. In order for legibility to be even independent of readability -- never mind 'in opposition' -- one first needs to throw out spatial frequency and stylistic harmony, thereby disrupting word recognition. In other words, you need to throw out typography.

Hrant: So for example sans letterforms are easier to identify, but serif forms "talk to each other" better, essentially becoming less themselves. And if you look at Legato for example, the whites of the letters talk to each other instead of with their own blacks, creating what I see as much stronger bouma bonding.

Less themselves? That would imply that the letters that 'talk to each other better' -- and I agree that this is one of the virtues of Legato -- are less individually recognisable than letters that talk less well to each other and are, by your logic, more themselves. This makes absolutely no sense to me whatsoever. There is no opposition, no zero-sum game in which working well to form recognisable words -- or 'stronger bouma bonding' if you insist -- requires the individual letters to be less legible. There is no reason at all why better recognisability of individual letters and better recognisability of words should not go hand in hand.

hrant

To me it makes sense because reading is parallel: the bouma is fighting to be recognized as fast as the individual letters; without help it will usually lose, and since reading a bouma saves time/effort, this slows down reading. The good news is that legibility is so easy to ensure that we have a lot more room to play than we might think.

hhp

Nick Shinn

Right, Andreas.
Legibility vs. readability is not a discussion worth having in general or theoretical terms; it only makes sense when applied to a specific instance.
One might just as well debate strength vs. weight with regard to cars or wine glasses.
It’s the designer’s task to optimize both, within the specific constraints of a product.

William Berkson

John, your question about legibility raises the question of what 'legibility' means. Luckiesh wanted to avoid the term because he thought it was too unclear. Tinker used it to refer basically to reading speed. Lieberman defines it (following some work in the '60s) as ease of distinguishing individual letters. That could be tested by the minimum time threshold for recognizing letters to a specified level of accuracy. Or you might mean something tested by a contrast threshold. There is also the question of at what distance you can recognize a character. These might not all be the same.

Then, speed of individual letter recognition and speed of word recognition might not go entirely together, because of issues of coherence, spacing, and rhythm in the font. And comfort in reading lengthy text is yet again a different question.

Above I wrote about differentiation of letters rather than legibility, but I suspect that legibility of individual letters also affects readability up to a threshold, and then other factors dominate.

quadibloc

I would think the answer to the issue raised in the initial post is "obvious".

The differences between individual letters that people expect to find in order to distinguish them need to be readily visible. This is why, for example, learning to read Hebrew would be more difficult than learning to read Armenian: some Hebrew letters differ only in very tiny details.

But any irrelevant differences are distracting. If the descenders of a p and a q are of different lengths, then, since descender length isn't one of the expected distinctions between the two letters, the difference will only be a distraction.

John Hudson

Bill: Then, speed of individual letter recognition and speed of word recognition might not go entirely together, because of issues of coherence, spacing, and rhythm in the font.

Oh, absolutely: good recognisability of individual letters cannot compensate for bad typography. But as I said, typography presumes a lot of things that naturally bring letter recognition and word recognition into alignment: spatial frequency and stylistic harmony (coherence) and, of course, spacing and -- duck, Hrant -- rhythm. The latter are so much presumed that I didn't even mention them.

Yes, if you disrupt any of these normal features of text typography -- if you change font weights suddenly in the middle of a word, or mix serif and sans serif letters in a word, or track out the spacing so that the letters don't knit together, or vary the spacing or widths of individual letters so as to break up the rhythm -- then word recognition will fail long before individual letter recognition fails. But then we're not talking about typography any more.

Kevin Larson

John: Can anyone point to empirical evidence that making letters less individually recognizable ever results in improved word recognition?

As has been hinted at, there is empirical evidence that individual letters are recognized more accurately in time and distance threshold tasks when there is more space between letters, but that reading speed is faster when there is less space between letters (but more between words).

In most cases I believe that faster and more accurate letter recognition will lead to faster and more accurate word recognition.

hrant

> there is empirical evidence that .... reading speed is faster when there is less space between letters

I missed that! Could you provide the reference? Also, could you explain it?

hhp

John Hudson

Kevin, to clarify: when I referred to 'making letters less individually recognisable', I meant making their design less individually recognisable, not altering their setting. It makes sense that individual letters are quicker to recognise without crowding, but that words composed of those letters are quicker to recognise when the letters are closer together and the words clearly spaced. But that is about the setting of the letters, not their individual recognisability. What I am suggesting is that letters that are difficult to recognise individually will make words similarly difficult to recognise, and letters that are easy to recognise individually will make words easy to recognise. What I'm enquiring about is counter-evidence to this, i.e. letters that are slower to recognise individually producing faster word recognition. Hrant's legibility/readability opposition hypothesis seems to suggest this possibility, even that it should be the case.

William Berkson

John, the confusion between the lower case l and upper case I in many fonts is an example of where legibility problems of letters don't seem to affect readability of words. I would expect that with additional ambiguous letters you would have more problems.

Peter Enneson's theory is that, given the way letters are designed, the normal tightness of setting makes them less recognizable individually, but the sub-letter units, arranged in a regular rhythm, are more recognizable as a whole word pattern. If he's right, that may explain why letters that are confusable in isolation don't pose problems within words. I've seen this with fancy script caps also: no problem reading in context, but out of context you don't know what the letter is. The readability of the word is not simply the sum of the legibility of the individual letters.

It might be that more differentiation would make an isolated letter easier to distinguish, but harder to read in the crowded situation of a word: for example, some blackletter with a lot of hairlines, or some script fonts.

I haven't thought this through yet, but it's an interesting question, particularly as a possible way to test Peter's theory.

Christopher Dean

@Kevin: “…there is empirical evidence that individual letters are recognized more accurately in time and distance threshold tasks when there is more space between letters, but that reading speed is faster when there is less space between letters…”

References?

Nick Shinn

RSVP is weird.
The whole point of reading is that you’re in control, creating meaning, probing, traversing swaths of text at your own speed and in your own manner, with carefully aimed saccade bursts, not being force-fed. [Crossed post!]

Christopher Dean

Rubin, G. S. & Turano, K. (1992). Reading without saccadic eye movements. Vision Research, 32(5), 895–902.

Abstract:
To assess the limitation on reading speed imposed by saccadic eye movements, we measured reading speed in 13 normally-sighted observers using two modes of text presentations: PAGE text which presents an entire passage conventionally in static, paragraph format, and rapid serial visual presentation (RSVP) which presents text sequentially, one word at a time at the same location in the visual field. In Expt 1, subjects read PAGE and RSVP text orally across a wide range of letter sizes (2X to 32X single-letter acuity) and reading speed was computed from the number of correct words read per minute. Reading speeds were consistently faster for RSVP compared to PAGE text at all letter sizes tested. The average speeds for text of an intermediate letter size (8X acuity) were 1171 words/min for RSVP and 303 words/min for PAGE text. In Expt 2 subjects read PAGE and RSVP text silently and a multiple-choice comprehension test was administered after each passage. All subjects continued to read RSVP text faster, and 6 subjects read at the maximum testable rate (1652 words/min) with at least 75% correct on the comprehension tests. Experiment 3 assessed the minimum word exposure time required for decoding text using RSVP to minimize potential delays due to saccadic eye movement control. Successive words were presented for a fixed duration (word duration) with a blank interval (ISI) between words. The minimum word duration required for accurate oral reading averaged 69.4 msec and was not reduced by increasing ISI. We interpret these results as an indication that the programming and execution of saccadic eye movements impose an upper limit on conventional reading speed.

————

In a growing market of small-screen mobile devices, I suspect that more research will be done in this area. Now if only there were some way to ease and expedite the grant application process and ethics protocols…

hrant

I actually believe that saccade-free reading is not only achievable, but also highly desirable. However, this does not mean that using RSVP to figure out how we currently read has any merit.

BTW, high-performance saccade-free reading cannot involve machine pacing; what I mean is that the reader must remain in control of the refreshing of the display*. Furthermore, the main advantage I see to saccade-free reading would be the displaying of increasingly large text in proportion to the distance from the fixation point; this would be in order to overcome the acuity decrease of the retina, basically allowing for the recognition of boumas much deeper into the field of vision.

* How? Good question.

hhp
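To make the size-scaling idea above concrete, here is a minimal, purely illustrative Python sketch: it grows the point size of a word linearly with its angular distance from the fixation point, as a crude stand-in for compensating the retina's acuity falloff. Neither the base size nor the per-degree growth constant comes from the thread; both are invented values for illustration only.

```python
# Hypothetical sketch of eccentricity-scaled text: point size grows linearly
# with distance from the fixation point, to offset the acuity decrease of the
# retina. BASE_SIZE_PT and GROWTH_PER_DEG are illustrative assumptions.

BASE_SIZE_PT = 12.0     # size at the fixation point (assumed)
GROWTH_PER_DEG = 0.9    # extra points per degree of eccentricity (assumed)

def scaled_size(eccentricity_deg: float) -> float:
    """Return a point size for text at a given angular distance from fixation."""
    return BASE_SIZE_PT + GROWTH_PER_DEG * abs(eccentricity_deg)

if __name__ == "__main__":
    for ecc in (0, 2, 5, 10):
        print(f"{ecc:>2} deg from fixation -> {scaled_size(ecc):.1f} pt")
```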

Christopher Dean

Regarding pacing, I saw a fascinating demonstration where the time a word was displayed was increased or decreased depending on its importance (i.e. words such as ‘of’ and ‘the’ were displayed for shorter times). While not controlled by the user, it resulted in significantly faster reading rates. I believe it was in a presentation from my old supervisor. I’ll see if I can dig it up. It’s quite striking.

Defining word categories and the durations for and between them should be achievable. This has most certainly been done, but would require a little more looking (however, it’s getting late, and I’m only two episodes away from the series finale after re-watching all of Cheers. Season 11, episode 22, is still funny enough to bring me to tears ;)

|_|)
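For what it's worth, here is a minimal Python sketch of the kind of pacing described above: a plain RSVP loop in which function words such as 'of' and 'the' get shorter exposures than content words. The word list, durations, and blank interval are invented placeholders, not values from the demonstration mentioned.

```python
import time

# Toy RSVP presenter with importance-weighted word durations.
# All durations and the function-word list are illustrative assumptions.
FUNCTION_WORDS = {"of", "the", "a", "an", "and", "to", "in", "is", "it"}

CONTENT_DURATION_S = 0.20    # exposure for content words (assumed)
FUNCTION_DURATION_S = 0.08   # shorter exposure for function words (assumed)
ISI_S = 0.02                 # blank interval between words (assumed)

def word_duration(word: str) -> float:
    """Shorter exposure for function words, longer for everything else."""
    clean = word.lower().strip(".,;:!?")
    return FUNCTION_DURATION_S if clean in FUNCTION_WORDS else CONTENT_DURATION_S

def rsvp(text: str) -> None:
    """Present one word at a time at a fixed location (here, one console line)."""
    for word in text.split():
        print(f"\r{word:<20}", end="", flush=True)
        time.sleep(word_duration(word))
        print("\r" + " " * 20, end="", flush=True)  # blank inter-word interval
        time.sleep(ISI_S)
    print()

if __name__ == "__main__":
    rsvp("The whole point of reading is that you are in control")
```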

John Hudson

Bill, what I asked was whether there is any evidence that making letters less legible ever results in improved word recognition, i.e. makes it better than when the letters were more legible. Yes, there are some difficult-to-recognise or ambiguous letterforms that, in the context of words, do not pose the same level of difficulty as in isolation, presumably because of linguistic context clues. But this is not at all the same thing as suggesting that less legible letters might improve word recognition, which seems to be the implication of Hrant's opposition of legibility and readability, or the converse: that optimising letters for individual recognisability might actually hamper readability.

