
Using sed to introduce a newline after each > in a 1+ gigabyte one-line text file

I have a giant text file (about 1.5 GB) containing XML data. All of the text is on a single line, and attempting to open it in any text editor (even the ones mentioned in this thread: Text editor to open big (giant, huge, large) text files) either fails outright or is unusable because the editor hangs when scrolling.

I was hoping to introduce newlines into the file by using the following sed command

sed 's/>/>\n/g' data.xml > data_with_newlines.xml

Sadly, this caused sed to give me a segmentation fault. From what I understand, sed reads its input line by line, which in this case means it tries to read the entire 1.5 GB file as a single line, which would certainly explain the segfault. However, the problem remains.

How do I introduce newlines after each > in the xml file? Do I have to resort to writing a small program to do this for me by reading the file character-by-character?


Some sed implementations have a limit on line length. GNU sed does not: per its documentation, "GNU sed has no built-in limit on line length; as long as it can malloc() more (virtual) memory, you can feed or construct lines as long as you like."
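So one thing worth checking is which sed you actually have. A sketch, assuming GNU sed is (or can be) installed; note that it will still hold the whole 1.5 GB line in memory while it works, and that BSD/macOS sed would not interpret the \n in the replacement anyway:

```shell
# GNU sed identifies itself; BSD/macOS sed rejects --version entirely.
sed --version | head -n 1

# With GNU sed, the command from the question should work as posted:
sed 's/>/>\n/g' data.xml > data_with_newlines.xml
```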

I would suggest, if possible, changing how you create that XML file (why is it all on one line in the first place?). Otherwise, you could read it character by character, e.g. using the shell:

# IFS= and -r keep whitespace and backslashes from being mangled
while IFS= read -r -n 1 ch
do
  case "$ch" in
    ">") printf "%s\n" "$ch";;
      *) printf "%s" "$ch";;
  esac
done < "file"

or

nl=$'\n'
while IFS= read -r -n 1000 str; do
  # printf, not echo: echo would insert a spurious newline after every
  # 1000-character chunk
  printf '%s' "${str//>/>$nl}"
done < file
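Another option that avoids both the giant line and per-character reads is to let awk do the splitting: with the record separator RS set to >, each record is only the text up to the next >, so memory use stays small regardless of file size. A sketch, assuming a POSIX awk (caveat: if the file does not end with a >, the last output line gains a stray >):

```shell
awk 'BEGIN { RS = ">" }       # records are ">"-delimited chunks
     { printf "%s>\n", $0 }   # re-attach the ">" and add the newline
' data.xml > data_with_newlines.xml
```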


