
Sampling Large Data Files

I currently work as a Data Warehouse programmer, and as such I have to put numerous flat files through an ETL process. Before loading a file I need to know what's in it, but the problem is that most of the files are > 1 GB and I can't open them with my dear old friend Notepad. Kidding. I usually use Vim or Notepad++, but even those take a while to open files this size. Could I perform a "partial" read of the file using Vim or some other editor?

P.S. I know I could write a ten-line script to sample the file, but it would be easier to convince team members to use a built-in feature of an editor than a script I wrote.
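For reference, the kind of ten-line sampling script mentioned above can be sketched with standard Unix tools; `sample.csv` here is a hypothetical stand-in for one of the large flat files:

```shell
# Create a tiny stand-in for a >1 GB flat file, then sample it.
printf 'id,name\n1,a\n2,b\n3,c\n4,d\n' > sample.csv

head -n 3 sample.csv    # peek at the first 3 lines without reading the whole file
wc -l sample.csv        # quick row count, no editor needed

rm sample.csv           # clean up the stand-in file
```

`head` stops reading after the requested lines, so the peek is fast no matter how large the file is; `wc -l` does scan the whole file, but streams it without loading it into memory.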

Thank you for any insight you might have.


If you want to stick with using vim, you could have a look at the LargeFile script.

Alternatively, I've always found that UltraEdit opens large files extremely quickly.


You said you use Vim, which makes me wonder whether you have a Unix environment as well.

If so, you can use the Unix utility head to print just the start of the file to your screen. Like this:

EDIT: (thanks Honk)

terminal$> head -n 15 file.csv

(where 15 is the number of lines you want to see).
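The same idea extends to other parts of the file with standard coreutils; a self-contained sketch against a small stand-in file:

```shell
# Stand-in for a large file: ten lines "line1" .. "line10".
printf 'line%s\n' 1 2 3 4 5 6 7 8 9 10 > big.csv

head -n 3 big.csv           # first 3 lines
tail -n 3 big.csv           # last 3 lines
sed -n '4,6p;7q' big.csv    # lines 4-6 only; the 7q quits early so the rest is never read

rm big.csv
```

The early `q` in the sed command matters on multi-gigabyte files: without it, sed would keep scanning to the end even though nothing more is printed.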


Pretty sure there are loads of similar questions, but hey, Textpad is a good choice for this.


Use the head command.


Use 'less' on Solaris, or the same through Cygwin on Windows. On mainframes this problem doesn't arise; the ISPF editor handles large files pretty well.


UltraEdit claims to handle files over 4GB...
