Version: February 1, 2000
COPYRIGHT:
Copyright (c) 1997-2000 International Business Machines Corporation and others.
All Rights Reserved.
Today's software market is a global one in which it is desirable to develop and maintain one application that supports a wide variety of national languages. International Components for Unicode provides the following tools to help you write language-independent applications:
It is possible to support additional locales by adding more locale data files, with no code changes.
Please refer to the POSIX Programmer's Guide for details on what the ISO locale ID means.
Your comments are important to making this release successful. We are committed to fixing any bugs, and will also use your feedback to help plan future releases.
IMPORTANT: Please make sure you understand the Copyright and License information.
There are two ways to download the ICU releases:
For more details on how to download ICU directly from the web site, please see http://oss.software.ibm.com/developerworks/opensource/icu/project/download/index.html
Below, $Root is the location of the icu directory in your file system, e.g. "drive:\...\icu", where "drive:\..." stands for whatever drive and directory on that drive you chose to install icu into.
The following files describe the code drop:
readme.html (this file) | describes the International Components for Unicode
license.html | contains IBM's public license
The following directories contain source code and data files:
$Root\source\common\ | The utility classes, such as ResourceBundle, Unicode, Locale, UnicodeString, and the codepage conversion library API, UnicodeConverter.
$Root\source\i18n\ | The collation source files: Collator, RuleBasedCollator and CollationKey.
$Root\source\test\intltest\ | A test suite including all C++ APIs. For information about running the test suite, see docs\intltest.html.
$Root\source\test\cintltst\ | A test suite including all C APIs. For information about running the test suite, see docs\cintltst.html.
$Root\data\ | The Unicode 3.0 data file. Please see http://www.unicode.org/ for more information.
$Root\source\tools | Tools for generating the data files. Data files are generated by invoking $Root\source\tools\makedata.bat on Win32 or $Root\source\make install on Unix.
$Root\source\samples | Various sample programs that use ICU
The following directories are populated when you've built the framework (on Unix, replace $Root with the value given to "configure"):
$Root\include\ | contains all the public header files.
$output | contains the libraries for static/dynamic linking or executable programs.
The following diagram shows the main directory structure of the International Components for Unicode:
icu-NNNN
|
output icu
_____|_____ ______________|______________________________
| | | | | | |
libraries programs include data source | |
(built) (built) (built) | readme.html license.html
|
_________________|__________________________
| | | | | |
common i18n test extra tools samples
| |
___|___ ___|_________________
| | | | | |
intltest cintltst makeconv ctestfw genrb ....
In the International Components for Unicode, there are two categories:
See IBM Classes for Unicode Code Conventions for a discussion of code conventions common to all library classes.
See also html/aindex.html for an alphabetical index, and html/HIERjava.html for a hierarchical index to detailed API documentation.
The platform dependencies have been isolated into the following four files:
· XP_CPLUSPLUS is defined for C++
· bool_t, TRUE and FALSE, int8_t, int16_t, etc.
· U_EXPORT and U_IMPORT for specifying dynamic library import and export
· icu_isNaN, icu_isInfinite(double), icu_getNaN(), icu_getInfinity() for handling special floating-point values
· icu_tzset, icu_timezone, icu_tzname and time for reading platform-specific time and time zone information
· icu_getDefaultDataDirectory, icu_getDefaultLocaleID for reading the locale setting and data directory
· icu_isBigEndian for finding the endianness of the platform
· icu_nextDouble, used specifically by the ChoiceFormat API
Win32 Platform
If you are building on the Win32 platform, it is important that you understand a few build details:
DLL directories and the PATH setting: As delivered, the International Components for Unicode builds as several DLLs. These DLLs are placed in the directories "icu\bin\Debug" and "icu\bin\Release". You must add one of these directories to the PATH environment variable on your system, or any executables you build will not be able to access the International Components for Unicode libraries. Alternatively, you can copy the DLL files into a directory already in your PATH, but we do not recommend this: you can end up with multiple copies of the DLL and use the wrong one.
To change your PATH: Under NT, use the System control panel. Pick the "Environment" tab and select the variable PATH in the lower box. In the "value" box, append the string ";drive:\...\icu\bin\Debug" to the end of the path string. If there is nothing there, just type in "drive:\...\icu\bin\Debug". Click the Set button, then the OK button.
Link with runtime libraries: All the DLLs link with the C runtime library "Debug Multithreaded DLL" or "Multithreaded DLL". (This is changed through the Project Settings dialog, on the C/C++ tab, under Code Generation.) It is important that any executable or other DLL you build that uses the International Components for Unicode DLLs links with these runtime libraries as well. If you do not, you will get what appear to be memory errors when you run the executable.
OS/390 Platform
If you are building on the OS/390 UNIX System Services platform, it is important that you understand a few details.
The GNU utilities gmake and gzip/gunzip are needed and can be obtained for OS/390 from www.mks.com. Search for os/390, register, and follow the download directions.
DLL directories and the LIBPATH setting: The ICU DLLs libicu-i18n and libicu-uc.dll should be added to the LIBPATH environment variable concatenation.
OS/390 supports both native S/390 hexadecimal floating point and, with Version 2.6 and later, IEEE binary floating point. This is a compile-time option. Applications built with IEEE should use ICU DLLs that are built with IEEE (and vice versa). Setting the environment variable IEEE390=1 will cause the OS/390 version of ICU to be built with IEEE floating point. The default is native hexadecimal floating point.
The makedep executable is shipped with ICU for use with the OS/390 ICU build process. The PATH environment variable should be updated to contain the location of this executable prior to the build. Alternatively, makedep may be moved into an existing PATH directory.
When running the test suite, set the TZ environment variable with export TZ="PST8PDT" so that time zone comparisons are correct.
Building International Components for Unicode requires:
The steps are:
Note: There are two ways to set the active configuration:
It is also possible to build each library individually, using the workspaces in each respective directory. They have to be built in the following order:
1. common
2. i18n
3. makedata (which invokes makeconv, genrb, gencol, genccode, etc.)
4. ctestfw
5. intltest and cintltst, if you want to run the test suite.
Regarding the test suite, please read the directions in docs/intltest.html and docs/cintltst.html
There is a set of Makefiles for Unix that supports Linux w/gcc, Solaris w/gcc and Workshop CC, AIX w/xlc and OS/390 with C++.
Building International Components for Unicode on Unix requires:
A UNIX C++ compiler (gcc, cc, xlc_r, etc.) installed on the target machine, and a recent version of GNU make (3.7+). For OS/390, GNU utilities for both make (gmake) and zip (gzip/gunzip) can be found at the MKS web site at http://www.mks.com. Please do a search on "os/390".
The steps are:
It is also possible to build each library individually, using the Makefiles in each respective directory. They have to be built in the following order:
1. common
2. i18n
3. makeconv
4. genrb
5. gencol
6. gentz
7. genccode
8. ctestfw
9. intltest and cintltst, if you want to run the test suite.
Regarding the test suite, please read the directions in docs/intltest.html and docs/cintltst.html
To add locale data files to International Components for Unicode do the following:
1. Create a file containing the key-value pairs whose values you are overriding from the parent locale data file. Make sure the filename is the locale ID with the extension ".txt". We recommend you copy the parent file, change the values that need to be changed, and remove all other key-value pairs. Be sure to update the locale ID key (the outermost brace) with the name of the locale ID you are creating.
2. Name the file with the locale ID you are creating, with ".txt" at the end. For example, fr_BF.txt would create a locale that inherits all the key-value pairs from fr.txt.
3. Add the name of that file (without the ".txt" extension) as a single line in the "index.txt" file in the default locale directory (icu/data/).
4. Regenerate the data DLL file. Please see the "How to Install" section for more details on how to verify the ICU release.
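A hypothetical minimal override file for step 1 might look like the following. (This is a sketch: the resource text layout mirrors the pattern of the existing locale files, but the key shown here is illustrative, not copied from the real fr.txt.)

```
// fr_BF.txt -- French (Burkina Faso); every key-value pair not listed
// here is inherited from the parent locale data file fr.txt
fr_BF {
    Version { "1.0" }
}
```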
How to add resource bundle data to your application
Adding resource bundle data to your application is quite simple:
Create resource bundle files with the right format and names in a directory for resource bundles in your application directory tree. (For more information on the format of these files, see the resource bundle documentation or the resource bundle format.)
Please note that resource bundle tag names should contain only invariant 7-bit ASCII characters (e.g. ones from the following set: A-Z, a-z, 0-9, <SP>, ", %, &, `, (, ), *, +, ,, -, ., /, :, ;, <, =, >, ?, _).
Use that same directory name (absolute path) when instantiating a resource
bundle at run time.
Collation data is stored in a single directory on a local disk. Each locale's data is stored in a corresponding ASCII text file indicated by a "CollationElements" tag. For instance, the data for de_CH is stored with a tag "CollationElements" in a file named "de_CH.txt". Reading the collation data from these files can be time-consuming, especially for large pieces of data that occur in languages such as Japanese. For this reason, the Collation Framework implements a second file format, a performance-optimized, non-portable, binary format. These binary files are generated automatically by the framework the first time a collation table is parsed. They have names of the form "de_CH.col". Once the files are generated by the framework, future loading of those collations occurs from the binary file, rather than the text file, at much higher speed.
In general, you don't have to do anything special with these files. They can be generated directly by using the "gencol" tool. In addition, they can also be generated and used automatically by the framework, without intervention on your part. However, there are situations in which you will have to regenerate them. To do so, you must manually delete the ".col" files from your collation data directory and re-run the gencol tool.
You will need to regenerate your ".col" files in the following circumstances:
The charset conversion library provides ways to convert simple text strings (e.g., char*) in encodings such as ISO 8859-1 to and from Unicode. The objective is to provide clean, simple, reliable, portable and adaptable data structures and algorithms to support the International Components for Unicode's character codeset conversion APIs. The character set conversion library includes single-byte, double-byte and some UCS encodings to and from Unicode. The conversion data in the library originated from the NLTC lab at IBM. The IBM character set conversion tables are publicly available in the published IBM document "CHARACTER DATA REPRESENTATION ARCHITECTURE - REFERENCE AND REGISTRY". This document can be ordered through Mechanicsburg; it comes with 2 CD-ROMs containing machine-readable conversion tables. The license agreement is included in the International Components for Unicode agreement.
To view the converters implemented in ICU and see them in action, please visit http://oss.software.ibm.com/developerworks/opensource/icu/localeexplorer/?converter&
To order the document in the US you can call 1-800-879-2755 and request document number SC09-2190-00. The cost of this publication is $75.00 US not including tax.
To keep the code portable, the implementation uses only a subset of the C++ language that compiles correctly on even the oldest C++ compilers (and also provides a usable C interface). Among other things, this means that the C++ exception mechanism is not used in the code.
After considering many alternatives, the decision was that every function that can fail takes an error-code parameter by reference. This is always the last parameter in the function's parameter list. The ErrorCode type is defined as an enumerated type. Zero represents no error, positive values represent errors, and negative values represent non-error status codes. Two macros, SUCCESS and FAILURE, are provided to check the error code.
The ErrorCode parameter is an input-output parameter. Every function tests the error code before doing anything else, and immediately exits if it’s a FAILURE error code. If the function fails later on, it sets the error code appropriately and exits without doing any other work (except, of course, any cleanup it has to do). If the function encounters a non-error condition it wants to signal (such as "encountered an unmapped character" in transcoding), it sets the error code appropriately and continues. Otherwise, the function leaves the error code unchanged.
Generally, only functions that don’t take an ErrorCode parameter, but call functions that do, have to declare one. Almost all functions that take an ErrorCode parameter and also call other functions that do merely have to propagate the error code they were passed down to the functions they call. Functions that declare a new ErrorCode parameter must initialize it to ZERO_ERROR before calling any other functions.
The rationale here is to allow a function to call several functions (that take error codes) in a row without having to check the error code after each one. [A function usually will have to check the error code before doing any other processing, however, since it is supposed to stop immediately after receiving an error code.] Propagating the error-code parameter down the call chain saves the programmer from having to declare one everywhere, and also allows us to more closely mimic the C++ exception protocol.
Function names. If a function is identical (or almost identical) to an ANSI or POSIX function, we give it the same name and (as much as possible) the same parameter list, with a "u" prepended to the name.
For functions that exist prior to version 1.2.1, the function name should begin with a lower-case "u". After the "u" is a short code identifying the subsystem it belongs to (e.g., "loc", "rb", "cnv", "coll", etc.). This code is separated from the actual function name by an underscore, and the actual function name can be anything. For example,
UChar* uloc_getLanguage(...);
void uloc_setDefaultLocale(...);
UChar* ures_getString(...);
Struct and enum type names. For structs and enum types, the rule is that their names begin with a capital "U". There is no underscore for struct names.
UResourceBundle;
UCollator;
UCollationResult;
Enum value names. Enumeration values have names that begin with "UXXX" where XXX stands for the name of the functional category.
UNUM_DECIMAL;
UCOL_GREATER;
Macro names. Macro names are in all caps, but there are currently no other requirements.
Constant names. Many constant names (constants defined with "const", not macros defined with "#define" that are used as constants) begin with a lowercase k, but this isn’t universally enforced.
In ICU's C APIs, the user needs to adhere to the following principles for consistency across all functional categories:
To find out how large the result buffer should be, ICU provides a preflighting C interface. The interface works like this:
The following example demonstrates how to use the preflighting interface:
/**
* @param result is a pointer to where the actual result will be.
* @param maxResultSize is the number of characters the buffer pointed to by result has room for.
* @return The actual length of the result (counting the terminating null)
*/
int32_t doSomething( /* input params */, UChar* result,
int32_t maxResultSize, UErrorCode* err);
In this sample, if the actual result doesn’t fit in the space available in maxResultSize, this function returns the amount of space necessary to hold the result, and result holds as many characters of the actual result as possible. If you don’t care about this, no further action is necessary. If you do care about the truncated characters, you can then allocate a buffer on the heap of the size specified by the return value and call the function again, passing that buffer’s address for result.
All preflighting functions have a fill-in ErrorCode parameter (and follow the normal ErrorCode rules), even if they are not currently doing so. Buffer overflow is treated as a FAILURE error condition, but is not reported when the caller passes in NULL for actualResultSize (presumably, a NULL for this parameter means the client doesn't care about a buffer overflow). All other failing error conditions will overwrite the "buffer overflow" error, e.g. MISSING_RESOURCE_ERROR, etc.
Returning an array of strings is fairly easy in C++, but very hard in C. Instead of returning the array pointer directly, we opted for an iterative interface: the function is split into two functions. One returns the number of elements in the array, and the other returns a single specified element from the array.
int32_t countArrayItems(/* params */);
int32_t getArrayElement(int32_t elementIndex, /* other params */,
UChar* result, int32_t maxResultSize, UErrorCode* err);
In this case, iterating across all the elements in the array would amount to a call to the count() function followed by multiple calls to the getElement() function.
int32_t i, n = countArrayItems(...);  /* count once, outside the loop */
for (i = 0; i < n; i++) {
    UChar element[50];
    getArrayElement(i, ..., element, 50, &err);
    /* do something with element */
}
In the case of the resource bundle ures_XXXX functions that return 2-dimensional arrays, the getElement() function takes both x and y coordinates for the desired element, and the count() function returns the number of arrays (x axis). Since the size of each array element in the resource 2-D arrays should always be the same, this provides an easy-to-use C interface.
void countArrayItems(int32_t* rows, int32_t* columns,
/* other params */);
int32_t get2dArrayElement(int32_t rowIndex,
int32_t colIndex,
/* other params */,
UChar* result,
int32_t maxResultSize,
UErrorCode* err);
http://oss.software.ibm.com/icu is a pointer to general information about the International Components for Unicode.
docs/udata.html is an early draft describing ICU data handling.
html/aindex.html is an alphabetical index to detailed API documentation.
html/HIERjava.html is a hierarchical index to detailed API documentation.
docs/collate.html is an overview to Collation.
docs/BreakIterator.html is a diagram showing how BreakIterator processes text elements.
http://www.ibm.com/unicode is a pointer to information on how to make applications global.
To submit comments, request features and report bugs, please contact us. While we are not able to respond individually to each comment, we do review all comments. Send Internet email to icu@oss.software.ibm.com.
Copyright © 1997-2000 International Business Machines Corporation and others. All Rights Reserved.
IBM Center for Java Technology Silicon Valley, 10275 N De Anza Blvd., Cupertino, CA 95014