
Java UTF-8 to ASCII conversion with supplements

We accept all sorts of national characters in a UTF-8 string on the input, and we need to convert them to an ASCII string on the output for some legacy use. (We don't accept Chinese or Japanese characters, only European languages.)

We have a small utility to get rid of all the diacritics:

import java.io.UnsupportedEncodingException;
import java.text.Normalizer;

public static final String toBaseCharacters(final String sText) {
    if (sText == null || sText.length() == 0)
        return sText;

    final char[] chars = sText.toCharArray();
    final int iSize = chars.length;
    final StringBuilder sb = new StringBuilder(iSize);

    for (int i = 0; i < iSize; i++) {
        String sLetter = new String(new char[] { chars[i] });
        // NFD decomposes a letter into its base character plus combining
        // marks, so the first UTF-8 byte below is the base (ASCII) character.
        sLetter = Normalizer.normalize(sLetter, Normalizer.Form.NFD);

        try {
            byte[] bLetter = sLetter.getBytes("UTF-8");
            sb.append((char) bLetter[0]);
        } catch (UnsupportedEncodingException e) {
            // UTF-8 is always supported; this cannot happen.
        }
    }
    return sb.toString();
}

The question is how to replace the German sharp s (ß) and similar characters such as Đ and đ, which get through the above normalization method, with their supplements (in the case of ß the supplement would probably be "ss", and in the case of Đ it would be either "D" or "Dj").

Is there some simple way to do it, without a million .replaceAll() calls?

So for example: Đonardan = Djonardan, Blaß = Blass and so on.

We can replace all "problematic" chars with an empty space, but we would like to avoid this, to keep the output as similar to the input as possible.
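For reference, one stdlib-only way to sketch this (no external libraries; the class name and the map contents are illustrative, not from any answer below) is to apply a small supplement map for characters like ß and Đ that have no canonical decomposition, then run NFD and strip the combining marks:

```java
import java.text.Normalizer;
import java.util.HashMap;
import java.util.Map;

public class AsciiSupplementFolder {

    // Supplement map for characters that NFD cannot decompose; extend as needed.
    private static final Map<Character, String> SUPPLEMENTS = new HashMap<>();
    static {
        SUPPLEMENTS.put('ß', "ss");
        SUPPLEMENTS.put('Đ', "Dj");
        SUPPLEMENTS.put('đ', "dj");
        SUPPLEMENTS.put('Æ', "AE");
        SUPPLEMENTS.put('æ', "ae");
        SUPPLEMENTS.put('Ø', "O");
        SUPPLEMENTS.put('ø', "o");
    }

    public static String fold(String text) {
        if (text == null || text.isEmpty()) {
            return text;
        }
        // Apply the supplement map first, since these characters survive NFD.
        StringBuilder sb = new StringBuilder(text.length());
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            String supplement = SUPPLEMENTS.get(c);
            sb.append(supplement != null ? supplement : c);
        }
        // NFD splits accented letters into base char + combining marks;
        // \p{M} then removes the marks, leaving the base characters.
        String decomposed = Normalizer.normalize(sb.toString(), Normalizer.Form.NFD);
        return decomposed.replaceAll("\\p{M}+", "");
    }

    public static void main(String[] args) {
        System.out.println(fold("Đonardan")); // Djonardan
        System.out.println(fold("Blaß"));     // Blass
    }
}
```

This keeps the replacement list down to the handful of characters that are not just "base letter plus diacritic".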

Thank you for your answers,

Bozo


You want to use ICU4J. It includes the com.ibm.icu.text.Transliterator class, which apparently can do what you are looking for.


Here's my converter, which uses Lucene...

private final KeywordTokenizer keywordTokenizer = new KeywordTokenizer(new StringReader(""));
private final ASCIIFoldingFilter asciiFoldingFilter = new ASCIIFoldingFilter(keywordTokenizer);
private final TermAttribute termAttribute = (TermAttribute) asciiFoldingFilter.getAttribute(TermAttribute.class);

public String process(String line)
{
    if (line != null)
    {
        try
        {
            keywordTokenizer.reset(new StringReader(line));
            if (asciiFoldingFilter.incrementToken())
            {
                return termAttribute.term();
            }
        }
        catch (IOException e)
        {
            logger.warn("Failed to parse: " + line, e);
        }
    }
    return null;
}


I'm using something like this:

Transliterator transliterator = Transliterator.getInstance("Any-Latin; Upper; Lower; NFD; [:Nonspacing Mark:] Remove; NFC", Transliterator.FORWARD);


Is there some simple way to do it, without million of .replaceAll() calls?

If you only support European, Latin-based languages, around 100 replacements should be enough; that's definitely doable: grab the Unicode charts for Latin-1 Supplement and Latin Extended-A and get the String.replace party started. :-)
