ReadMe: IBM's International Classes For Unicode

Version: 07/22/1999


COPYRIGHT:
© Copyright Taligent, Inc., 1997
© Copyright International Business Machines Corporation, 1997 - 1999
Licensed Material - Program-Property of IBM - All Rights Reserved.
US Government Users Restricted Rights - Use, duplication, or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.





Introduction

Today's software market is a global one in which it is desirable to develop and maintain one application that supports a wide variety of national languages. IBM's International Classes for Unicode provide tools to help you write language-independent applications:

It is possible to support additional locales by adding more locale data files, with no code changes.

Please refer to the POSIX Programmer's Guide for details on what the ISO locale ID means.

Your comments are important to making this release successful.  We are committed to fixing any bugs, and will also use your feedback to help plan future releases.

IMPORTANT: Please make sure you understand the Copyright and License information.


What the International Classes For Unicode Contain

All files are contained in icu-XXXXXX.zip.
Unzip this file to reconstruct the source directory. On Win32 platforms, be sure to use "unzip -a icu-XXXXXX.zip -d drive:\directory" or WinZip, so that the line feed/carriage return characters are converted correctly on Windows.

Below, $Root is the location of the icu directory in your file system, such as "drive:\...\icu". "drive:\..." stands for any drive, and any directory on that drive, that you chose to install icu into.

The following files describe the code drop:

readme.html (this file) describes IBM's International Classes for Unicode
license.html contains IBM's public license

The following directories contain source code and data files:

$Root\source\common\ The utility classes, such as ResourceBundle, Unicode, Locale, UnicodeString. The codepage conversion library API, UnicodeConverter.
$Root\source\i18n\ The collation source files, Collator, RuleBasedCollator and CollationKey. 
The text boundary API, which locates character, word, sentence, and 
line breaks. 
The format API, which formats and parses data in numeric or date format to and from text.
$Root\source\test\intltest\ A test suite exercising all the C++ APIs. For information about running the test suite, see docs\intltest.html.
$Root\source\test\cintltst\ A test suite exercising all the C APIs. For information about running the test suite, see docs\cintltst.html.
$Root\data\ The Unicode 3.0 data file.  Please see http://www.unicode.org/ for more information. 
This directory also contains the resource files for all international objects.  These files are of five types: 
  • TXT files contain general locale data. 
  • RES files are non-portable binary locale data files generated by the genrb tool.
  • COL files are non-portable packed binary collation data files created by the gencol tool. 
  • UCM files contain mapping tables to and from Unicode in text format.
  • CNV files are non-portable packed binary conversion data files generated by the makeconv tool.
$Root\source\tools\genrb This tool converts the portable locale data files from text format to machine-specific binary format, for resource bundle performance.
$Root\source\tools\gencol This tool converts the collation rules in the portable locale data files from text format to machine-specific binary collation data.
$Root\source\tools\makeconv This tool converts mapping tables between native encodings and UCS-2 from text format to machine-specific binary format.

The following directories are populated once you have built the framework (on Unix, replace $Root with the prefix given to "configure"):

$Root\include\ contains all the public header files.
$output contains the libraries for static/dynamic linking or executable programs.

The following diagram shows the main directory structure of IBM's International Classes for Unicode:

                                  icu-NNNN
                                     |
        output                      icu
     _____|_____       ______________|______________________________
    |           |      |       |             |          |          |
 libraries   programs  include data         source      |          |
 (built)    (built)   (built)                |      readme.html license.html
                                             |                         
                            _________________|__________________________
                           |       |         |       |         |        |
                         common  i18n      test     extra     tools   samples
                                             |                 |   
                                          ___|___           ___|_________________
                                          |      |          |      |      |     | 
                                      intltest cintltst makeconv ctestfw genrb  ....

API Overview

The International Classes for Unicode fall into two categories.

See IBM's International Classes for Unicode Code Conventions for a discussion of code conventions common to all library classes.

See also html/aindex.html for an alphabetical index, and html/HIERjava.html for a hierarchical index to detailed API documentation.

Platform Dependencies

The platform dependencies have been isolated into four files.

Important Notes Regarding Win32

If you are building on the Win32 platform, it is important that you understand a few build details:

DLL directories and the PATH setting: As delivered, IBM's International Classes for Unicode build as several DLLs, which are placed in the directories "icu\bin\Debug" and "icu\bin\Release". You must add one of these directories to the PATH environment variable on your system, or any executables you build will not be able to access the IBM's International Classes for Unicode libraries. Alternatively, you can copy the DLL files into a directory already in your PATH, but we do not recommend this: you can wind up with multiple copies of the DLLs and end up using the wrong one.

To change your PATH: Under NT, use the System control panel. Pick the "Environment" tab and select the variable PATH in the lower box. In the "value" box, append the string ";drive:\...\icu\bin\Debug" to the end of the path string. If there is nothing there, just type in "drive:\...\icu\bin\Debug". Click the Set button, then the OK button.

Link with Runtime libraries: All the DLLs link with the C runtime library "Debug Multithreaded DLL" or "Multithreaded DLL." (This is changed through the Project Settings dialog, on the C/C++ tab, under Code Generation.) It is important that any executable or other DLL you build that uses the IBM's International Classes for Unicode DLLs link with these runtime libraries as well. If you do not, you will get what appear to be memory errors when you run the executable.

How to Install/Build on Win NT

Building IBM's International Classes for Unicode requires Microsoft Visual C++ 6.0 and an unzip utility that supports the "-a" option (such as Info-ZIP unzip) or WinZip.

The steps are:

  1. Unzip the icu-XXXX.zip file: type "unzip -a icu-XXXX.zip -d drive:\directory" at a command prompt, or use WinZip.  drive:\directory\icu is the root ($Root) directory (you may, but need not, place "icu" under another directory). If you change the root, you must update the project settings accordingly in EACH makefile in the project, updating the include and library paths.
  2. Set the environment variable ICU_DATA to the full pathname of the data directory, to indicate where the locale data files and conversion mapping tables are.
  3. Start Microsoft Visual C++ 6.0.
  4. Choose "File" menu and select "Open WorkSpace".
  5. In the file chooser, choose icu\source\allinone\allinone.dsw. Open this workspace.
  6. This workspace includes all the IBM's International Classes for Unicode libraries, the necessary tools, and the intltest and cintltst test suite projects.
  7. Set the active Project. Choose "Project" menu and select "Set active project". In the submenu, select "intltest".
  8. Set the active configuration ("Win32 Debug" or "Win32 Release") and make sure this matches your PATH setting as described in the previous chapter. (See note below.)
  9. Choose "Build" menu and select "Rebuild All". If you want to build the Debug and Release configurations at the same time, choose "Build" menu and select "Batch Build..." instead (and mark all configurations as checked), then click the button named "Rebuild All".
  10. Repeat steps 7-9 for makeconv (set active project to "makeconv"), genrb ("genrb") and gencol ("gencol") tools.
  11. Run the mkcnvfle.bat script to create the converter data files in binary format. The script requires two arguments: the first is either "Release" or "Debug", reflecting the type of build, and the second is the path to the icu directory.
  12. Run the genrb.bat script to create the locale data files in binary format. It takes the same two arguments: "Release" or "Debug", and the path to the icu directory.
  13. Run the gencol.exe program to pre-load the collation data and create the collation data in binary format.
  14. Save the value of the TZ environment variable, and then set it to PST8PDT.
  15. Reopen the "allinone" project file and run the "intltest" test. Then restore the saved TZ value.
  16. To run the C test suite, set "cintltst" as the active project, repeat steps 8 and 9, and then run the "cintltst" test.
  17. Build and run as outlined above.

Note: The active configuration can be set either from the "Build" menu ("Set Active Configuration...") or from the Build toolbar.

It is also possible to build each library individually, using the workspaces in each respective directory. They have to be built in the following order:
        1. common
        2. i18n
        3. makeconv
        4. genrb
        5. gencol
        6. ctestfw
        7. intltest and cintltst, if you want to run the test suite.
Regarding the test suite, please read the directions in docs/intltest.html and docs/cintltst.html

How to Install/Build on Unix

There is a set of Makefiles for Unix which supports Linux with gcc; Solaris with gcc or Workshop CC; and AIX with xlc.

Building IBM's International Classes for Unicode on Unix requires:

A UNIX C++ compiler (gcc, CC, xlc_r, etc.) installed on the target machine, and a recent version (3.7 or newer) of GNU make.

The steps are:

  1. Unzip the icu-XXXX.zip file with the "-a" option.
  2. Before running the test programs or samples, set the environment variable ICU_DATA to the full pathname of the data directory, to indicate where the locale data files and conversion mapping tables are.  If this variable is not set, the default user data directory will be used.
  3. Change directory to "icu/source".
  4. If it is not already set, set the executable flag for the following files (by executing the 'chmod +x' command): configure, install.sh and config.*.
  5. Type "./configure", or type "./configure --help" to print the available options.
  6. Type "make" to compile the libraries and all the data files.
  7. Optionally, type "make check" to run the test suite.
  8. Type "make install" to install.
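The Unix steps above amount to the following shell session (the zip file name and the data path are placeholders, as in the text above):

```shell
# Unzip with -a so text files get the right line endings for this platform.
unzip -a icu-XXXX.zip

# Tell ICU where the locale data and conversion mapping tables live.
ICU_DATA=/path/to/icu/data/      # adjust to your installation
export ICU_DATA

cd icu/source
chmod +x configure install.sh config.*   # only needed if the flags are not set

./configure       # "./configure --help" prints the available options
make              # compile the libraries and all the data files
make check        # optional: run the test suite
make install
```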

It is also possible to build each library individually, using the Makefiles in each respective directory. They have to be built in the following order:
        1. common
        2. i18n
        3. makeconv
        4. genrb
        5. gencol
        6. ctestfw
        7. intltest and cintltst, if you want to run the test suite.
Regarding the test suite, please read the directions in docs/intltest.html and docs/cintltst.html

How to add a locale data file

To add locale data files to IBM's International Classes for Unicode do the following:

1. Create a file containing the key-value pairs whose values you are overriding from the parent locale data file.
    Make sure the filename is the locale ID with the extension ".txt". We recommend that you copy the parent file, change the values
    that need to be changed, and remove all other key-value pairs. Be sure to update the locale ID key (the outermost brace) with
    the name of the locale ID you are creating.

2. Name the file with the locale ID you are creating, with ".txt" at the end.

e.g.    fr_BF.txt
would create a locale that inherits all the key-value pairs from fr.txt.

3. Add the name of that file (without the ".txt" extension) as a single line in the "index.txt" file in the default locale directory (icu/data/).

4. Run the genrb tool to convert the file into binary format.  Under the command prompt, type:

> genrb \Full Path\fr_BF.txt
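As an illustration, a minimal fr_BF.txt might look like the sketch below. The key shown here (Countries) is hypothetical; copy the actual keys you want to override from fr.txt and delete the rest:

```
fr_BF {
    Countries {
        BF { "Burkina Faso" }
    }
}
```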

How to add resource bundle data to your application

Adding resource bundle data to your application is quite simple:

Create resource bundle files with the right format and names in a resource bundle directory within your application directory tree (for more information on the format of these files, see the resource bundle format documentation).
Use that same directory name (absolute path) when instantiating a resource bundle at run time.

Where Collation Data is stored

Collation data is stored in a single directory on a local disk. Each locale's data is stored in a corresponding ASCII text file, indicated by a "CollationElements" tag. For instance, the data for de_CH is stored with a tag "CollationElements" in a file named "de_CH.txt". Reading the collation data from these files can be time-consuming, especially for the large tables that occur in languages such as Japanese. For this reason, the Collation Framework implements a second file format: a performance-optimized, non-portable, binary format. These binary files are generated automatically by the framework the first time a collation table is parsed, and have names of the form "de_CH.col". Once the files are generated, future loading of those collations occurs from the binary file rather than the text file, at much higher speed.

In general, you don't have to do anything special with these files. They can be generated directly with the "gencol" tool, or generated and used automatically by the framework, without intervention on your part. However, there are situations in which you will have to regenerate them. To do so, manually delete the ".col" files from your collation data directory and re-run the gencol tool.

You will need to regenerate your ".col" files in the following circumstances:

  1. You are moving your data to another platform.  Since the ".col" files are non-portable, you must make sure they are regenerated; DO NOT copy them from one platform to another.
  2. You have changed the "CollationElements" data in the locale's ".txt" file.  Note that if you change the default rules, which underlie all collations, you will have to rebuild ALL your ".col" files, since they are all merged with the default rule set.

Character Set Conversion Information

The charset conversion library provides ways to convert simple text strings (e.g., char*) in character sets such as ISO 8859-1 to and from Unicode. The objective is to provide clean, simple, reliable, portable and adaptable data structures and algorithms to support IBM's International Classes for Unicode's character codeset conversion APIs. The conversion data in the library originated from the NLTC lab at IBM. The IBM character set conversion tables are publicly available in the published IBM document "CHARACTER DATA REPRESENTATION ARCHITECTURE - REFERENCE AND REGISTRY". The character set conversion library includes single-byte, double-byte and some UCS encodings to and from Unicode. The document can be ordered through Mechanicsburg and comes with 2 CD-ROMs containing machine-readable conversion tables. The license agreement is included in IBM's International Classes for Unicode agreement.

To order the document in the US you can call 1-800-879-2755 and request document number SC09-2190-00. The cost of this publication is $75.00 US not including tax.

Currently, the supported code pages are:

ibm-1004: PC Data Latin-1
ibm-1008: Arabic 8bit ISO/ASCII
ibm-1038: Adobe Symbol Set
ibm-1089: ISO-8859-6
ibm-1112: MS Windows Baltic Rim
ibm-1116: PC Data Estonia
ibm-1117: PC Data Latvia
ibm-1118: PC Data Lithuania
ibm-1119: PC Data Russian
ibm-1123: Cyrillic Ukraine EBCDIC
ibm-1140: EBCDIC USA, Canada, Netherlands, Portugal, Brazil, Australia, New Zealand
ibm-1141: EBCDIC Germany, Austria
ibm-1142: EBCDIC Denmark etc.
ibm-1143: EBCDIC Sweden
ibm-1144: EBCDIC Italy
ibm-1145: EBCDIC Spain
ibm-1146: EBCDIC UK, Ireland
ibm-1147: EBCDIC France
ibm-1148: EBCDIC International Latin-1
ibm-1250: MS-Windows Latin-2
ibm-1251: MS-Windows Cyrillic
ibm-1252: MS-Windows Latin-1
ibm-1253: MS-Windows Greek
ibm-1254: MS-Windows Turkey
ibm-1255: MS-Windows Hebrew
ibm-1256: MS-Windows Arabic
ibm-1257: MS-Windows Baltic Rim
ibm-1258: MS-Windows Vietnamese
ibm-1275: Apple Latin-1
ibm-1276: Adobe (Postscript) Standard Encoding
ibm-1277: Adobe (Postscript) Latin-1
ibm-1280: Apple Greek
ibm-1281: Apple Turkey
ibm-1282: Apple Central European
ibm-1283: Apple Cyrillic
ibm-1361: Korean EUC Windows cp949
ibm-1383: Simplified Chinese EUC
ibm-1386: Simplified Chinese GBK
ibm-290: Japanese Katakana SBCS
ibm-37: CECP: USA, Canada (ESA*), Netherlands, Portugal, Brazil, Australia, New Zealand
ibm-420: Arabic (with presentation forms)
ibm-424: Hebrew
ibm-437: PC Data PC Base USA
ibm-813: ISO-8859-7
ibm-833: Korean Host Extended SBCS
ibm-852: PC Data Latin-2 Multilingual
ibm-855: PC Data Cyrillic
ibm-856: PC Data Hebrew
ibm-857: PC Data Turkey
ibm-858: PC Data with EURO
ibm-859: PC Latin-9
ibm-860: PC Data Portugal
ibm-861: PC Data Iceland
ibm-863: PC Data Canada
ibm-864: PC Data Arabic
ibm-865: PC Data Denmark
ibm-866: PC Data Russian
ibm-867: PC Data Hebrew
ibm-868: PC Data Urdu
ibm-869: PC Data Greek
ibm-874: PC Data Thai
ibm-878: Russian Internet koi8-r
ibm-912: ISO-8859-2
ibm-913: ISO-8859-3
ibm-914: ISO-8859-4
ibm-915: ISO-8859-5
ibm-916: ISO-8859-8
ibm-920: ISO-8859-9
ibm-921: Baltic 8bit
ibm-922: Estonia 8bit
ibm-923: ISO-8859-15
ibm-930: Japanese Katakana-Kanji Host
ibm-933: Korean Host Mixed
ibm-935: Simplified Chinese Host Mixed
ibm-937: Traditional Chinese Host Mixed
ibm-942: Japanese PC Data Mixed
ibm-943: Japanese PC Data for Open Environment
ibm-949: KS Code PC Data Mixed
ibm-950: BIG-5
ibm-970: Korean EUC

Programming Notes

Reporting Errors

To keep the code portable, the implementation uses only a subset of the C++ language that will compile correctly on even the oldest C++ compilers (and that allows a usable C interface to be provided). This means the C++ exception mechanism cannot be used in the code.

After considering many alternatives, the decision was that every function that can fail takes an error-code parameter by reference. This is always the last parameter in the function’s parameter list. The ErrorCode type is defined as an enumerated type: zero represents no error, positive values represent errors, and negative values represent non-error status codes. Two macros, SUCCESS and FAILURE, are provided to check an error code.

The ErrorCode parameter is an input-output parameter. Every function tests the error code before doing anything else, and immediately exits if it’s a FAILURE error code. If the function fails later on, it sets the error code appropriately and exits without doing any other work (except, of course, any cleanup it has to do). If the function encounters a non-error condition it wants to signal (such as "encountered an unmappable character" in transcoding), it sets the error code appropriately and continues. Otherwise, the function leaves the error code unchanged.

Generally, only functions that don’t take an ErrorCode parameter, but call functions that do, have to declare one. Almost all functions that take an ErrorCode parameter and also call other functions that do merely have to propagate the error code they were passed down to the functions they call. Functions that declare a new ErrorCode parameter must initialize it to ZERO_ERROR before calling any other functions.

The rationale here is to allow a function to call several functions (that take error codes) in a row without having to check the error code after each one. [A function usually will have to check the error code before doing any other processing, however, since it is supposed to stop immediately after receiving an error code.] Propagating the error-code parameter down the call chain saves the programmer from having to declare one everywhere, and also allows us to more closely mimic the C++ exception protocol.
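The protocol above can be sketched with a self-contained toy. The MyErrorCode enum, the MY_SUCCESS/MY_FAILURE macros and the functions below are hypothetical stand-ins for illustration, not the actual ICU ErrorCode type or API:

```c
/* Toy error-code type following the conventions described above:
 * zero = no error, positive = error, negative = non-error status. */
typedef enum {
    MY_USING_DEFAULT_WARNING = -1,  /* negative: non-error status code */
    MY_ZERO_ERROR            =  0,  /* zero: no error                  */
    MY_ILLEGAL_ARGUMENT      =  1   /* positive: error                 */
} MyErrorCode;

#define MY_SUCCESS(x) ((x) <= MY_ZERO_ERROR)
#define MY_FAILURE(x) ((x) >  MY_ZERO_ERROR)

/* A fallible function: the error code is the last parameter, it is
 * tested on entry, and the function exits immediately on FAILURE. */
static int doubleIt(int n, MyErrorCode *err) {
    if (MY_FAILURE(*err)) return 0;      /* propagate: do no work   */
    if (n < 0) {
        *err = MY_ILLEGAL_ARGUMENT;      /* report the error, exit  */
        return 0;
    }
    return 2 * n;
}

/* A caller can chain several fallible calls and check only once. */
static int doubleTwice(int n) {
    MyErrorCode err = MY_ZERO_ERROR;     /* initialize to zero      */
    int result = doubleIt(doubleIt(n, &err), &err);
    return MY_SUCCESS(err) ? result : -1;  /* -1 marks failure      */
}
```

Note how doubleTwice() calls two fallible functions in a row and checks the error code only once at the end, exactly as the propagation rule above allows.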

C Function and Data Type Naming

Function names. If a function is identical (or almost identical) to an ANSI or POSIX function, we give it the same name and (as much as possible) the same parameter list, with a "u" prepended to the name.

For functions that existed prior to version 1.2.1, the function name begins with a lower-case "u". After the "u" is a short code identifying the subsystem the function belongs to (e.g., "loc", "rb", "cnv", "coll", etc.). This code is separated from the actual function name by an underscore, and the actual function name can be anything. For example,

UChar* uloc_getLanguage(...);
void uloc_setDefaultLocale(...);
UChar* ures_getString(...);

Struct and enum type names. For structs and enum types, the rule is that their names begin with a capital "U". There is no underscore in struct names.

       UResourceBundle;
       UCollator;
       UCollationResult;

Enum value names. Enumeration values have names that begin with "UXXX" where XXX stands for the name of the functional category.

UNUM_DECIMAL;
UCOL_GREATER;

Macro names. Macro names are in all caps, but there are currently no other requirements.

Constant names. Many constant names (constants defined with "const", not macros defined with "#define" that are used as constants) begin with a lowercase k, but this isn’t universally enforced.

Preflighting and Overflow Handling

ICU's C APIs adhere to the following principles for consistency across all functional categories:

  1. All the Unicode string processing should be expressed in terms of a UChar* buffer that is always null terminated.
  2. The APIs assume that the input string parameters are statically allocated fixed-size character buffers.
  3. When the value a function is going to return is already stored as a constant value in static space (e.g., it’s coming from a fixed table, or is stored in a cache), the function just returns the const UChar* pointer.
  4. When the function can’t return a UChar* to storage the user doesn’t have to delete, the caller needs to pass in a pointer to a character buffer that the function can fill with the result. This pointer must be accompanied by an int32_t parameter that gives the size of the buffer.

To find out how large the result buffer should be, ICU provides a preflighting C interface.  The interface works like this:

  1. Preflighting: pass the function a NULL pointer for the buffer, and the function returns the actual size of the result. You can then allocate a buffer of the correct size and run the operation again.
  2. Allocate a buffer of some reasonable size on the stack and pass it to the function. If the result fits in that buffer, everything works fine; if it does not, the function returns the actual size needed, and you can allocate a buffer of the correct size on the heap and call the same function again.
  3. Allocate a buffer of some reasonable size on the stack and pass it to the function. If you don't care about the completeness of the result and the allocated buffer is too small, you can simply continue with the truncated result.

The following sample shows the signature of a function that supports all three options:

/** 
 * @param result is a pointer to where the actual result will be.
 * @param maxResultSize is the number of characters the buffer pointed to by result has room for. 
 * @return The actual length of the result (counting the terminating null)
 */
int32_t doSomething( /* input params */, UChar* result,
                int32_t maxResultSize, UErrorCode* err);

In this sample, if the actual result doesn’t fit in the space available in maxResultSize, this function returns the amount of space necessary to hold the result, and result holds as many characters of the actual result as possible. If you don’t care about this, no further action is necessary. If you do care about the truncated characters, you can then allocate a buffer on the heap of the size specified by the return value and call the function again, passing that buffer’s address for result.

All preflighting functions are supposed to take a fill-in ErrorCode parameter (and follow the normal ErrorCode rules), even if they do not all currently do so. A buffer overflow is treated as a FAILURE error condition, but is not reported when the caller passes in NULL for actualResultSize (presumably, a NULL for this parameter means the caller does not care about a buffer overflow). All other failing error conditions overwrite the "buffer overflow" error, e.g. MISSING_RESOURCE_ERROR.
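The convention can be sketched with a self-contained toy. The function below "computes" a fixed result ("Hello"); its name, and the MY_BUFFER_OVERFLOW code, are hypothetical, not actual ICU API:

```c
#include <string.h>

enum { MY_ZERO_ERROR = 0, MY_BUFFER_OVERFLOW = 1 };

/* Copies as much of the result as fits into the caller's buffer and
 * always returns the size the full result requires (counting the
 * terminating null), in the style described above. */
static int toyDoSomething(char *result, int maxResultSize, int *err) {
    const char *full = "Hello";
    int needed = (int)strlen(full) + 1;        /* includes terminator */

    if (result == NULL || maxResultSize < needed) {
        *err = MY_BUFFER_OVERFLOW;             /* overflow is FAILURE */
        if (result != NULL && maxResultSize > 0) {
            memcpy(result, full, (size_t)(maxResultSize - 1));
            result[maxResultSize - 1] = '\0';  /* truncated result    */
        }
    } else {
        memcpy(result, full, (size_t)needed);  /* full result fits    */
    }
    return needed;    /* caller can allocate this much and retry      */
}

/* Option 1: pure preflight -- pass NULL to learn the needed size.    */
static int toyPreflight(void) {
    int err = MY_ZERO_ERROR;
    return toyDoSomething(NULL, 0, &err);
}

/* Option 3: accept truncation into a too-small stack buffer.         */
static int toyTruncates(void) {
    int err = MY_ZERO_ERROR;
    char small[3];
    toyDoSomething(small, (int)sizeof small, &err);
    return err == MY_BUFFER_OVERFLOW && strcmp(small, "He") == 0;
}
```

A caller who does care about the truncated characters would take the return value of the first call, allocate that many characters on the heap, and call toyDoSomething() again with the larger buffer.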

Arrays as return types

Returning an array of strings is fairly easy in C++, but very hard in C. Instead of returning the array pointer directly, we opted for an iterative interface: the function is split into two. One returns the number of elements in the array, and the other returns a single specified element of the array.

int32_t countArrayItems(/* params */);
int32_t getArrayElement(int32_t elementIndex, /* other params */,
             UChar* result, int32_t maxResultSize, UErrorCode* err);

In this case, iterating across all the elements in the array would amount to a call to the count() function followed by multiple calls to the getElement() function.

int32_t count = countArrayItems(...);
for (i = 0; i < count; i++) {
        UChar element[50];
        getArrayElement(i, ..., element, 50, &err);
        /* do something with element */
}

In the case of the resource bundle ures_XXXX functions returning 2-dimensional arrays, the element accessor takes both row and column indexes for the desired element, and the counting function returns the number of rows and columns. Since the size of each element in the resource 2-D arrays is always the same, this provides an easy-to-use C interface.

void count2dArrayItems(int32_t* rows, int32_t* columns,
                        /* other params */);

int32_t get2dArrayElement(int32_t rowIndex, 
                int32_t colIndex,
                /* other params */, 
                UChar* result, 
                int32_t maxResultSize,
                UErrorCode* err);
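A runnable sketch of the one-dimensional pattern, using a toy fixed string table (the table and the function names are illustrative only, not actual ICU API):

```c
#include <string.h>

static const char *kItems[] = { "alpha", "beta", "gamma" };

/* One function returns the number of elements in the array...       */
static int toyCountArrayItems(void) {
    return (int)(sizeof kItems / sizeof kItems[0]);
}

/* ...and the other returns a single specified element. It copies
 * element `index` into `result` (truncating if necessary) and returns
 * the size the full element requires, as in the preflighting style. */
static int toyGetArrayElement(int index, char *result, int maxResultSize) {
    const char *item = kItems[index];
    int needed = (int)strlen(item) + 1;
    if (result != NULL && maxResultSize >= needed) {
        memcpy(result, item, (size_t)needed);
    } else if (result != NULL && maxResultSize > 0) {
        memcpy(result, item, (size_t)(maxResultSize - 1));
        result[maxResultSize - 1] = '\0';
    }
    return needed;
}

/* Iterate: one count call, then one accessor call per element.      */
static int toyLastItemIsGamma(void) {
    char element[16];
    int i, count = toyCountArrayItems();
    for (i = 0; i < count; i++) {
        toyGetArrayElement(i, element, (int)sizeof element);
    }
    return strcmp(element, "gamma") == 0;
}
```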

Where to Find More Information

http://www.ibm.com/java/tools/international-classes/ is a pointer to general information about the International Classes For Unicode.

html/aindex.html is an alphabetical index to detailed API documentation.
html/HIERjava.html is a hierarchical index to detailed API documentation.

docs\collate.html is an overview of Collation.

docs\BreakIterator.html is a diagram showing how BreakIterator processes text elements.

http://www.ibm.com/java/education/international-unicode/unicode1.html is a pointer to information on how to make applications global.

Submitting Comments, Requesting Features and Reporting Bugs

To submit comments, request features and report bugs, please contact us.  While we are not able to respond individually to each comment, we do review all comments. Send Internet email to icu4c@us.ibm.com.


© Copyright 1997 Taligent, Inc.
© Copyright 1997-1999 IBM Corporation
IBM Center for Java Technology Silicon Valley,
10275 N De Anza Blvd., Cupertino, CA 95014
All rights reserved.