Truncating Strings by Bytes
I created the following for truncating a string in Java to a new string with a given number of bytes.
String truncatedValue = "";
String currentValue = string;
int pivotIndex = (int) Math.round(((double) string.length()) / 2);
while (!truncatedValue.equals(currentValue)) {
    currentValue = string.substring(0, pivotIndex);
    byte[] bytes = currentValue.getBytes(encoding);
    if (bytes == null) {
        return string;
    }
    int byteLength = bytes.length;
    int newIndex = (int) Math.round(((double) pivotIndex) / 2);
    if (byteLength > maxBytesLength) {
        pivotIndex = newIndex;
    } else if (byteLength < maxBytesLength) {
        pivotIndex = pivotIndex + 1;
    } else {
        truncatedValue = currentValue;
    }
}
return truncatedValue;
This is the first thing that came to my mind, and I know I could improve on it. I saw another post asking a similar question, but they were truncating Strings using the bytes instead of String.substring. I think I would rather use String.substring in my case.
EDIT: I just removed the UTF-8 reference because I would rather be able to do this for different storage types as well.
Why not convert to bytes and walk forward--obeying UTF8 character boundaries as you do it--until you've got the max number, then convert those bytes back into a string?
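A minimal sketch of that byte-walking idea (the class and method names are illustrative, not from the answer): walk the lead bytes, stop before the first character that would exceed the limit, and convert only the complete sequences back into a string.

```java
import java.nio.charset.StandardCharsets;

public class ByteWalker {
    // Walks the UTF-8 bytes, stopping at the last character boundary <= maxBytes.
    public static String truncate(String s, int maxBytes) {
        byte[] utf8 = s.getBytes(StandardCharsets.UTF_8);
        if (utf8.length <= maxBytes) {
            return s;
        }
        int i = 0;
        while (i < maxBytes) {
            int len;
            if ((utf8[i] & 0x80) == 0) len = 1;         // 0xxxxxxx: ASCII
            else if ((utf8[i] & 0xE0) == 0xC0) len = 2; // 110xxxxx: 2-byte sequence
            else if ((utf8[i] & 0xF0) == 0xE0) len = 3; // 1110xxxx: 3-byte sequence
            else len = 4;                               // 11110xxx: 4-byte sequence
            if (i + len > maxBytes) break;              // next character would not fit
            i += len;
        }
        // Only complete sequences are converted back, so the result is valid UTF-8
        return new String(utf8, 0, i, StandardCharsets.UTF_8);
    }
}
```

For example, truncating "123456789á1" to 10 bytes yields "123456789" (9 bytes), because the 2-byte "á" would not fit.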
Or you could just cut the original string if you keep track of where the cut should occur:
// Assuming that Java will always produce valid UTF8 from a string, so no error checking!
// (Is this always true, I wonder?)
public class UTF8Cutter {
    public static String cut(String s, int n) {
        byte[] utf8 = s.getBytes();
        if (utf8.length < n) n = utf8.length;
        int n16 = 0;
        int advance = 1;
        int i = 0;
        while (i < n) {
            advance = 1;
            if ((utf8[i] & 0x80) == 0) i += 1;
            else if ((utf8[i] & 0xE0) == 0xC0) i += 2;
            else if ((utf8[i] & 0xF0) == 0xE0) i += 3;
            else { i += 4; advance = 2; } // 4-byte sequence = surrogate pair in UTF-16
            if (i <= n) n16 += advance;
        }
        return s.substring(0, n16);
    }
}
Note: edited to fix bugs on 2014-08-25
A saner solution is to use a decoder:
final Charset CHARSET = Charset.forName("UTF-8"); // or any other charset
final byte[] bytes = inputString.getBytes(CHARSET);
final CharsetDecoder decoder = CHARSET.newDecoder();
decoder.onMalformedInput(CodingErrorAction.IGNORE); // drops the partial sequence at the cut
decoder.reset();
final CharBuffer decoded = decoder.decode(ByteBuffer.wrap(bytes, 0, limit)); // limit = max bytes to keep
final String outputString = decoded.toString();
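Wrapped into a self-contained method so it compiles on its own (the class and method names are illustrative), the decoder approach looks like this:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharacterCodingException;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CodingErrorAction;

public class DecoderTruncate {
    // Decodes at most `limit` bytes; IGNORE silently drops a trailing partial sequence.
    public static String truncate(String inputString, int limit, Charset charset) {
        byte[] bytes = inputString.getBytes(charset);
        if (bytes.length <= limit) {
            return inputString;
        }
        CharsetDecoder decoder = charset.newDecoder();
        decoder.onMalformedInput(CodingErrorAction.IGNORE);
        decoder.reset();
        try {
            CharBuffer decoded = decoder.decode(ByteBuffer.wrap(bytes, 0, limit));
            return decoded.toString();
        } catch (CharacterCodingException e) {
            // cannot happen for UTF-8 with malformed input ignored
            throw new IllegalStateException(e);
        }
    }
}
```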
I think Rex Kerr's solution has 2 bugs.
- First, it will truncate to limit+1 if a non-ASCII character is just before the limit. Truncating "123456789á1" results in "123456789á", which is 11 bytes in UTF-8.
- Second, I think he misinterpreted the UTF-8 standard. https://en.wikipedia.org/wiki/UTF-8#Description shows that a 110xxxxx lead byte tells us that the representation is 2 bytes long (as opposed to 3). That's the reason his implementation usually doesn't use up all the available space (as Nissim Avitan noted).
Please find my corrected version below:
public String cut(String s, int charLimit) throws UnsupportedEncodingException {
    byte[] utf8 = s.getBytes("UTF-8");
    if (utf8.length <= charLimit) {
        return s;
    }
    int n16 = 0;
    boolean extraLong = false;
    int i = 0;
    while (i < charLimit) {
        // Unicode characters above U+FFFF need 2 words in utf16
        extraLong = ((utf8[i] & 0xF0) == 0xF0);
        if ((utf8[i] & 0x80) == 0) {
            i += 1;
        } else {
            int b = utf8[i];
            while ((b & 0x80) > 0) {
                ++i;
                b = b << 1;
            }
        }
        if (i <= charLimit) {
            n16 += (extraLong) ? 2 : 1;
        }
    }
    return s.substring(0, n16);
}
I still thought this was far from effective. So if you don't really need the String representation of the result and the byte array will do, you can use this:
private byte[] cutToBytes(String s, int charLimit) throws UnsupportedEncodingException {
    byte[] utf8 = s.getBytes("UTF-8");
    if (utf8.length <= charLimit) {
        return utf8;
    }
    if ((utf8[charLimit] & 0x80) == 0) {
        // the limit doesn't cut an UTF-8 sequence
        return Arrays.copyOf(utf8, charLimit);
    }
    int i = 0;
    while ((utf8[charLimit - i - 1] & 0x80) > 0 && (utf8[charLimit - i - 1] & 0x40) == 0) {
        ++i;
    }
    if ((utf8[charLimit - i - 1] & 0x80) > 0) {
        // we have to skip the starter UTF-8 byte
        return Arrays.copyOf(utf8, charLimit - i - 1);
    } else {
        // we passed all UTF-8 bytes
        return Arrays.copyOf(utf8, charLimit - i);
    }
}
Funny thing is that with a realistic 20-500 byte limit they perform pretty much the same, IF you create a string from the byte array again.
Please note that both methods assume valid UTF-8 input, which is a valid assumption after using Java's getBytes() method.
String s = "FOOBAR";
int limit = 3;
s = new String(s.getBytes(), 0, limit);
Result value of s: FOO
Use the UTF-8 CharsetEncoder, and encode until the output ByteBuffer contains as many bytes as you are willing to take, by looking for CoderResult.OVERFLOW.
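A minimal sketch of that encoder idea (the class and method names are ours): encode into a ByteBuffer of exactly the allowed size; when the encoder reports OVERFLOW, the input buffer's position tells you how many chars fit, and the encoder never splits a multi-byte sequence or a surrogate pair.

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.CharsetEncoder;
import java.nio.charset.CoderResult;
import java.nio.charset.StandardCharsets;

public class EncoderTruncate {
    // Encodes until the maxBytes-sized output buffer is full, then cuts at
    // the number of chars actually consumed.
    public static String truncate(String s, int maxBytes) {
        CharsetEncoder encoder = StandardCharsets.UTF_8.newEncoder();
        ByteBuffer out = ByteBuffer.allocate(maxBytes);
        CharBuffer in = CharBuffer.wrap(s);
        // encode() stops with OVERFLOW as soon as the next char no longer fits
        CoderResult result = encoder.encode(in, out, true);
        if (result.isOverflow()) {
            // in.position() = number of chars that fit within maxBytes bytes
            return s.substring(0, in.position());
        }
        return s; // the whole string fit within maxBytes
    }
}
```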
As noted, Peter Lawrey's solution has a major performance disadvantage (~3,500 ms for 10,000 runs); Rex Kerr's was much better (~500 ms for 10,000 runs) but the result was not accurate - it cut much more than it needed to (instead of leaving 4,000 bytes it left 3,500 in one example). Attached here is my solution (~250 ms for 10,000 runs), assuming that the maximum length of a UTF-8 character in bytes is 4 (thanks, Wikipedia):
public static String cutWord(String word, int dbLimit) throws UnsupportedEncodingException {
    double MAX_UTF8_CHAR_LENGTH = 4.0;
    if (word.length() > dbLimit) {
        word = word.substring(0, dbLimit);
    }
    if (word.length() > dbLimit / MAX_UTF8_CHAR_LENGTH) {
        int residual = word.getBytes("UTF-8").length - dbLimit;
        if (residual > 0) {
            int tempResidual = residual, start, end = word.length();
            while (tempResidual > 0) {
                start = end - ((int) Math.ceil((double) tempResidual / MAX_UTF8_CHAR_LENGTH));
                tempResidual = tempResidual - word.substring(start, end).getBytes("UTF-8").length;
                end = start;
            }
            word = word.substring(0, end);
        }
    }
    return word;
}
You could convert the string to bytes and convert just those bytes back to a string.
public static String substring(String text, int maxBytes) {
    StringBuilder ret = new StringBuilder();
    for (int i = 0; i < text.length(); i++) {
        // works out how many bytes a character takes,
        // and removes these from the total allowed
        if ((maxBytes -= text.substring(i, i + 1).getBytes().length) < 0) break;
        ret.append(text.charAt(i));
    }
    return ret.toString();
}
You can also use the regular expression below to remove leading and trailing whitespace, including double-byte (full-width) space characters:
stringtoConvert = stringtoConvert.replaceAll("^[\\s ]*", "").replaceAll("[\\s ]*$", "");
This is my solution:
private static final int FIELD_MAX = 2000;
private static final Charset CHARSET = Charset.forName("UTF-8");

public String trancStatus(String status) {
    if (status != null && (status.getBytes(CHARSET).length > FIELD_MAX)) {
        int maxLength = FIELD_MAX;
        int left = 0, right = status.length();
        int index = 0, bytes = 0, sizeNextChar = 0;
        while (bytes != maxLength && (bytes > maxLength || (bytes + sizeNextChar < maxLength))) {
            index = left + (right - left) / 2;
            bytes = status.substring(0, index).getBytes(CHARSET).length;
            // guard against stepping past the end of the string
            sizeNextChar = (index + 1 < status.length())
                    ? String.valueOf(status.charAt(index + 1)).getBytes(CHARSET).length
                    : 0;
            if (bytes < maxLength) {
                left = index - 1;
            } else {
                right = index + 1;
            }
        }
        return status.substring(0, index);
    } else {
        return status;
    }
}
This may not be the most efficient solution, but it works:
public static String substring(String s, int byteLimit) {
    if (s.getBytes().length <= byteLimit) {
        return s;
    }
    int n = Math.min(byteLimit - 1, s.length() - 1);
    do {
        s = s.substring(0, n--);
    } while (s.getBytes().length > byteLimit);
    return s;
}
I've improved upon Peter Lawrey's solution to accurately handle surrogate pairs. In addition, I optimized based on the fact that the maximum number of bytes per char in UTF-8 encoding is 3.
public static String substring(String text, int maxBytes) {
    for (int i = 0, len = text.length(); (len - i) * 3 > maxBytes;) {
        int j = text.offsetByCodePoints(i, 1);
        if ((maxBytes -= text.substring(i, j).getBytes(StandardCharsets.UTF_8).length) < 0)
            return text.substring(0, i);
        i = j;
    }
    return text;
}
Binary search approach in Scala:
import scala.annotation.tailrec

private def bytes(s: String) = s.getBytes("UTF-8")

def truncateToByteLength(string: String, length: Int): String =
  if (length <= 0 || string.isEmpty) ""
  else {
    // invariant: goodLen chars fit in `length` bytes, badLen chars do not
    @tailrec
    def loop(badLen: Int, goodLen: Int, good: String): String = {
      assert(badLen > goodLen, s"""badLen is $badLen but goodLen is $goodLen ("$good")""")
      if (badLen == goodLen + 1) good
      else {
        val mid = goodLen + (badLen - goodLen) / 2
        val midStr = string.take(mid)
        if (bytes(midStr).length > length)
          loop(mid, goodLen, good)
        else
          loop(badLen, mid, midStr)
      }
    }
    loop(string.length * 2, 0, "")
  }