UTF-8 to String in Java
I am having a little problem with the UTF-8 charset. I have a UTF-8 encoded file which I want to load and analyze. I am using BufferedReader to read the file line by line.
BufferedReader buffReader = new BufferedReader(new InputStreamReader
(new FileInputStream(file),"UTF-8"));
My problem is that the normal String methods (trim() and equals(), for example) do not behave as expected on the lines read from the BufferedReader in each iteration of the loop I created to read all of its content.
For example, the encoded file contains < menu >,
which I want my program to treat as-is; however, right now it is read as ?? < m e n u >
mixed with some other strange characters.
I want to know if there is a way to strip the charset artifacts and keep just the plain text, so I can use all the methods of the String class without complications.
Thank you
If your JDK is not too old (1.5 or later), you can do it like this:
Locale frLocale = new Locale("fr", "FR");
Scanner scanner = new Scanner(new FileInputStream(file), "UTF-8");
scanner.useLocale(frLocale);
String line;
for (int numLine = 0; scanner.hasNextLine(); numLine++) {
    line = scanner.nextLine();
    // process line here
}
scanner.close();
The scanner can also use delimiters other than whitespace. This example reads several items in from a string:
String input = "1 fish 2 fish red fish blue fish";
Scanner s = new Scanner(input).useDelimiter("\\s*fish\\s*");
System.out.println(s.nextInt());
System.out.println(s.nextInt());
System.out.println(s.next());
System.out.println(s.next());
s.close();
prints the following output:
1
2
red
blue
See the documentation for Scanner for details.
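As an aside, the stray characters you see before < menu > are often a UTF-8 byte-order mark (BOM) that some editors write at the start of the file; Java's UTF-8 decoder does not strip it, so it shows up as an odd character in the first line. Below is a minimal, self-contained sketch (the class name Utf8ReadDemo and the temp file are just for illustration, not from your code) that reads a UTF-8 file with java.nio and strips a leading BOM so trim() and equals() work as expected:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class Utf8ReadDemo {
    public static void main(String[] args) throws IOException {
        // Illustration only: create a UTF-8 file with a leading BOM (EF BB BF),
        // as some editors do when saving "UTF-8 with signature".
        Path file = Files.createTempFile("demo", ".txt");
        byte[] bom = {(byte) 0xEF, (byte) 0xBB, (byte) 0xBF};
        byte[] body = "<menu>\n".getBytes(StandardCharsets.UTF_8);
        byte[] all = new byte[bom.length + body.length];
        System.arraycopy(bom, 0, all, 0, bom.length);
        System.arraycopy(body, 0, all, bom.length, body.length);
        Files.write(file, all);

        try (BufferedReader reader = Files.newBufferedReader(file, StandardCharsets.UTF_8)) {
            String line = reader.readLine();
            // A BOM decoded as UTF-8 appears as the single char U+FEFF; strip it.
            if (line != null && !line.isEmpty() && line.charAt(0) == '\uFEFF') {
                line = line.substring(1);
            }
            System.out.println(line.trim().equals("<menu>")); // prints true
        }
        Files.delete(file);
    }
}
```

Once the BOM is removed, the line is an ordinary String and equals(), trim(), etc. behave normally; no further "charset removal" is needed, since a String in Java is always plain UTF-16 text internally.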