
Unicode programmer font


I should have written “standard” in quotes.
For example, for newsgroups, e-mail, web pages, TeX, TTX and other source code.
Simply: everything that can be edited (i.e., commented)
in a text editor with Unicode support.

I used Andale Mono for quite a long time,
but now I need something smarter,
for example with Hebrew and Japanese character support.

--
regards,
Tom


I see, I was taking a more purist view of 'coding'. I would pick the font that you like best for your primary language and rely on font fallback in your OS or text editor for the secondary languages.

Hopefully we'll see more apps and OSes using 'composite fonts', where the user can specify which font (along with scaling and baseline adjustment) to use for each Unicode range, rather than picking a one-size-fits-all solution.
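To make the composite-font idea concrete, here's a minimal Python sketch of the per-range lookup such a system would have to do. The ranges and font names below are purely illustrative assumptions, not any real OS's configuration:

```python
# A minimal sketch of the "composite font" idea: map Unicode ranges to
# fonts. The ranges and font names here are illustrative assumptions.
RANGE_TO_FONT = [
    ((0x0000, 0x036F), "Consolas"),      # Basic Latin through combining marks
    ((0x0590, 0x05FF), "Miriam Fixed"),  # Hebrew
    ((0x3040, 0x30FF), "MS Gothic"),     # Hiragana and Katakana
]
DEFAULT_FONT = "Andale Mono"             # fallback for everything else

def font_for(ch: str) -> str:
    """Pick the font whose range covers the character's code point."""
    cp = ord(ch)
    for (lo, hi), font in RANGE_TO_FONT:
        if lo <= cp <= hi:
            return font
    return DEFAULT_FONT

print(font_for("A"), font_for("א"), font_for("タ"))
```

A real implementation would of course also carry the per-range scaling and baseline adjustments mentioned above, but the core is just this table lookup.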


Simon,

for coding purposes (a monospaced linear font), a pan-Unicode font would actually be useful.

Monotype Imaging offers Andale Mono WT, a pan-Unicode monospaced font with the same design as the free Andale Mono font. There are four versions that cover the entire Unicode 3.0 range: Andale Mono WT J (Japanese), Andale Mono WT K (Korean), Andale Mono WT S (Simplified Chinese), Andale Mono WT T (Traditional Chinese). There is also a set of two fonts: Andale Mono WTG and Andale Mono WTG Surrogate, with the entire Unicode 3.2 range. Those fonts are also distributed e.g. by Ricoh (google for Andale Mono WT for more details).

Ascender Corp. offers Ascender Uni, a pan-Unicode monospaced font with the entire character set of Unicode 5.0. The design is similar to Andale Mono.

Microsoft offers Consolas, a great monospaced design by Luc(as) de Groot that covers extended Latin, Cyrillic and Greek, and comes in regular, italic, bold and bold italic variants (Andale Mono and Ascender Uni only have one regular weight).

Regards,
A.


Eeekh…
Of course, that’s not Andale Mono. It’s not Courier, either.
That’s DialogInput, shipped with the Java Development Kit, which works best for me so far.
Unfortunately, it’s not a masterpiece of legibility.
But it replaces missing characters.

I have found some pan-Unicode fonts listed here:
http://en.wikipedia.org/wiki/Unicode_fonts

I have not tested Ascender Uni yet.

You can test your fonts on this source code:
http://taat.pl/typografia/typografia_wiele_jezykow.html

Let me know if it renders all characters correctly.

--
regards,
Tom


When you get database files exported from a multilingual forum,
you are not able to edit them,
because you are not able to distinguish the characters…

But that’s not the point of this discussion :)
Should we change the discussion topic to: “Should coding go for Unicode”?

--
regards,
Tom


When you get database files exported from a multilingual forum, you are not able to edit them, because you are not able to distinguish the characters…

True, but that's not coding. HTML authoring (in your earlier example) is not coding either.

Coding is programming; that is, it's the process of writing executable computer instructions (i.e., code) in a computer programming language.

For what it's worth, I've been coding for over two decades, and I've never had the need for a "Unicode" font in my IDE. However, I do appreciate the desire for a pan-Unicode font for general-purpose use, which includes HTML authoring and database editing.

But let's not call that a "coding" font.


Suppose I write a PHP application for “HTML authoring”.
Then I have to insert comments:

// prints typography in Ukrainian
echo 'Типографія';
// prints typography in Japanese
echo 'タイポグラフィ';

You probably call this not a program, but a script.
But I might do the same in C++.

--
regards,
Tom


PHP counts as coding, so your example certainly would benefit from having a pan-Unicode font that supports all the varied characters that you might want to use in your literal strings.

However, as Simon said earlier, it's generally a bad idea to hard-code literal strings in code. The preferred way to handle all language-specific strings is to define them in a separate resource file and then write code to select the appropriate strings to use depending on context. That way, the code itself is always straight ASCII, and internationalization becomes much easier.
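As a rough illustration of that resource-file approach, here is a minimal Python sketch. The file-naming scheme and the string keys are assumptions made up for the example, not any particular framework's convention:

```python
# A minimal sketch of externalized strings: each locale's text lives in
# its own resource file, and the code selects strings by an ASCII key.
# File names and keys are illustrative assumptions.
import json
import os
import tempfile

# Write two per-locale resource files (stand-ins for files shipped
# alongside the application).
resources = {
    "uk": {"typography": "Типографія"},
    "ja": {"typography": "タイポグラフィ"},
}
resource_dir = tempfile.mkdtemp()
for locale, strings in resources.items():
    path = os.path.join(resource_dir, f"strings_{locale}.json")
    with open(path, "w", encoding="utf-8") as f:
        json.dump(strings, f, ensure_ascii=False)

def t(locale: str, key: str) -> str:
    """Look up a UI string for the given locale; the code stays ASCII."""
    path = os.path.join(resource_dir, f"strings_{locale}.json")
    with open(path, encoding="utf-8") as f:
        return json.load(f)[key]

print(t("uk", "typography"))
print(t("ja", "typography"))
```

The calling code contains only the ASCII key `"typography"`; all the non-ASCII text lives in the per-locale files.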


Yes, for big projects these techniques are called i18n and L10n.
Then you store all language strings in separate files, as in a library.

But even then, when you want to take a look at those strings,
you have to switch between fonts if you don't have a pan-Unicode one.

Anyway, I still haven't found what I'm looking for :)

--
regards,
Tom


Tomek,

any problems with the links I have given?

Simon, Spire,

I don't understand why you're trying to prove to Tomek that there's something wrong with his concept.

Much of modern "HTML authoring" is synonymous with "development of web applications" -- and that of course *is* coding, because it often includes AJAX/JavaScript on the client side plus PHP or Java on the server side.

Not all development projects involve "localization" or "internationalization". If you operate under the presumption that the default language is English and anything else is "international", this may be true. But just as easily, people develop web or desktop applications in a language that is not English, but they don't plan to localize or internationalize either. If I develop a website in Hebrew or Russian or Arabic or Japanese only, I simply want to type texts in that language in the source code. And if I'm writing and debugging a console application, or a Python script, I need a Unicode monospaced font, be it just for the console debugging output.

In fact, I have been using Andale Mono WT for quite some time for the purpose of developing Python scripts within FontLab Studio. The Output panel (a text-only console) works with monospaced fonts and I often need to output Unicode text to it.

I must admit that I find the notion that "code should be ASCII only" very antiquated. I've been using UTF-8 for my source code in Python for several years now.
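For what it's worth, a trivial Python 3 sketch of what UTF-8 source allows (the names here are made up for illustration): non-ASCII string literals work everywhere, and even identifiers may be non-ASCII.

```python
# -*- coding: utf-8 -*-
# A small sketch of UTF-8 Python source. The variable and function
# names are illustrative; Python 3 permits non-ASCII identifiers.
шрифт = "Andale Mono WT"        # Cyrillic variable name, ASCII value

def opisz(nazwa: str) -> str:
    """Return a short description of a font name (Polish label)."""
    return f"Czcionka: {nazwa}"

print(opisz(шрифт))
```

Whether non-ASCII identifiers are a good idea on a mixed-language team is a separate debate, but the encoding itself is no longer an obstacle.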

A.


Adam, I'm not trying to prove to Tomek that there's something wrong with his concept. In fact, I've repeatedly been saying that I appreciate his desire for a pan-Unicode font.

I've merely been trying to explain why there isn't already a wide range of pan-Unicode fonts available for use in coding: it's because most programmers have no use for non-ASCII characters in their actual source code.

You say that you find the notion that "code should be ASCII only" very antiquated. Conversely, I find the notion that code should contain hard-coded literal strings very antiquated. We are both right, taken in different contexts.

Your earlier post was very helpful -- to me as well. In particular, I hadn't heard of Ascender Uni, and in fact I'm pretty excited to learn of it. I've been wanting a good pan-Unicode monospaced font for a long time (albeit for another purpose).

Edward


Spire,

The concept that some plain-text files should use a different encoding than others *is* antiquated, no matter what. In the future, it only makes sense that plain-text files (even programming source code) be encoded using Unicode, even if a certain document format or programming language restricts the character set used for certain aspects, e.g. object names.

But whether string literals should be intermixed with programming code or externalized depends on the type of application. If the application is text-intensive and contains comparatively little programming code (i.e. it is more like a document with elements of a program, e.g. a dynamic web page) *and* it will only be deployed in a single language, then I think externalizing strings makes little sense. It becomes useful only in scenarios where you have a large overhead of programming code over text, or when you're developing for i18n scenarios.

As you well know, XML can use any Unicode characters for element and attribute names. So you may use an XML document structure that looks like this:

<?xml version="1.0" encoding="UTF-8"?>
<authors>
<author type="main">
<firstname>Юрий</firstname>
<lastname>Ярмола</lastname>
</author>
<author>
<firstname>Adam</firstname>
<lastname>Twardoch</lastname>
</author>
</authors>

But others may prefer to define their document structure as follows:

<?xml version="1.0" encoding="UTF-8"?>
<авторы>
<автор тип="главный">
<имя>Юрий</имя>
<фамилия>Ярмола</фамилия>
</автор>
<автор>
<имя>Adam</имя>
<фамилия>Twardoch</фамилия>
</автор>
</авторы>

Both are valid XML structures, and it depends purely on the linguistic context which one will be chosen. The English language still holds some privileged position in electronic information processing, but this is changing.
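A quick way to convince yourself that the second structure is just as valid is to feed it to any conforming parser. Here's a small sketch using Python's standard-library ElementTree (a shortened version of the document above):

```python
# Parse XML with Cyrillic element and attribute names; conforming
# parsers treat Unicode names per the XML specification.
import xml.etree.ElementTree as ET

doc = """<?xml version="1.0" encoding="UTF-8"?>
<авторы>
  <автор тип="главный">
    <имя>Юрий</имя>
    <фамилия>Ярмола</фамилия>
  </автор>
</авторы>"""

# fromstring() rejects str input carrying an encoding declaration,
# so pass the document as UTF-8 bytes.
root = ET.fromstring(doc.encode("utf-8"))
first = root.find("автор")
print(first.get("тип"), first.find("имя").text)
```

The parser neither knows nor cares that the names aren't English; element lookup by Cyrillic tag name works exactly like lookup by ASCII tag name.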

A.


Adam, I'm not sure what the argument is still about. I did acknowledge in my earlier post that you were correct. I think we each understand what the other is saying at this point.

In response to your last post, I'll just add that document encoding and actual character use are two separate issues. I never said that ASCII encoding was still a good thing; in fact, I definitely agree that it's antiquated and really needs to be phased out already. But even if the entire world magically switched overnight to, say, UTF-8 encoding for all text files (including all source code), I think most programmers would still have little use for non-ASCII characters in their actual code.

I think the whole reason this thread got sidetracked is that Simon and I were both surprised and taken aback by Tomek's categorical claim that "every coder... needs a fixed width font with Unicode support".

In any case, the bottom line is that pan-Unicode fonts are a Good Thing, regardless of what they might be used for.

Edward


I have a Unicode programmer's font I am developing. It has been called "Software Developer", but its name is going to change. It supports all the Latin glyphs, including Latin Extended-A and B, but it lacks Cyrillic, Greek, Japanese and Chinese. Maybe I will add these over time. Okay, maybe not Japanese, apart from perhaps Hiragana and Katakana. And almost certainly not Chinese, unless I get lots of help somehow.

Tomek, Adam, Si (and whoever else cares to comment): what glyph coverage would you suggest is going to add the most utility beyond Latin? And in what order?


Thanks!

I should be done by now, but other projects come up and displace it...

I am going to get it out, though! Probably as Latin-only at first, and then in time I will make 1.5 and 2.0 versions with additional glyphs for Cyrillic and Greek.

While I am asking questions, I may as well ask what environment you work in and what rendering scheme you use.

