
How can I create a super-huge file with pure C, the Linux shell, or DOS commands?

My OS and environment:

OS: Windows XP SP2, Linux SUSE 9, or Cygwin
Compiler: Visual C++ 2003, GCC, or Cygwin
Both the PC and the OS are 32-bit.

So, how can I create a super-huge file in seconds?

I was told to use the file-mapping functions, but I failed to create files over 2 GB. So... your warm responses will all be appreciated. Thanks.


Using dd on Linux to create a 1 GB file takes 57 seconds of wall-clock time on a somewhat loaded box with a slow disk, and about 17 seconds of system time:

$ time dd if=/dev/zero of=bigfile bs=G count=1
1+0 records in
1+0 records out
1073741824 bytes (1.1 GB) copied, 53.9903 s, 19.9 MB/s

real    0m56.685s
user    0m0.008s
sys     0m17.113s
$


You can use dd:

dd if=/dev/zero of=bigfile.txt bs=$((1024*1024)) count=100

Or use plain shell scripting, if the contents don't matter:

1) create a dummy file with a few lines, e.g. 10 lines
2) use cat dummy >> bigfile in a while loop

e.g.

    while true
    do
      cat dummy >> bigfile.txt
      # break out once the file has more than 10000 lines, for example
      [ "$(wc -l < bigfile.txt)" -gt 10000 ] && break
    done

Then do it again, appending the bigger file so it grows much faster:

    while true
    do
      cat bigfile.txt >> bigfile2.txt
      # check the size and break out, e.g. at 1 GiB
      [ "$(wc -c < bigfile2.txt)" -ge $((1024*1024*1024)) ] && break
    done
    rm -f dummy bigfile.txt


Does the file have to take up actual disk space? If not, you could always (in Cygwin or Linux):

dd if=/dev/zero of=bigfile seek=7T bs=1 count=1

This will create an empty 7 TB file in a fraction of a second. Of course, it won't allocate much actual disk space: You'll have a big sparse file.

Writing a program under Cygwin or Linux, you can do the same thing in C with a call to ftruncate.


Depending on your system limits, you can create a large file in a fraction of a second...

#include <stdio.h>

FILE *fp = fopen("largefile", "wb");
if (fp == NULL)
    return 1;
for (int i = 0; i < 102400; i++)
    fseek(fp, 10240000, SEEK_CUR);  /* seek ~1 TB past the start, writing nothing */
fprintf(fp, "%c", 'x');             /* one byte at the final offset extends the file */
fclose(fp);

Play with this.


cat /dev/urandom >> /home/mybigfile

It will error out when disk space has run out.

This works on Linux/BSD, and possibly Cygwin.


In SUSE in a VM, I ran dd if=/dev/zero of=file; rm file, which filled the disk and then deleted the file once it was full. This let me compress the disk image further, since the freed blocks were now all zeros. I read about this technique on a forum somewhere.


You can use the "fsutil" command on Win2000/XP/7:

C:\> fsutil file createnew <filename> <length-in-bytes>

For example:

C:\> fsutil file createnew C:\testfile.txt 1000

Regards


If you want a sparse file, you can also create one on Windows (on NTFS volumes) using CreateFile and DeviceIoControl with FSCTL_SET_SPARSE and FSCTL_SET_ZERO_DATA; see the Windows sparse-file documentation for details.
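A hedged sketch of that Windows approach (this compiles only against the Win32 SDK; the path C:\sparsefile.bin and the 1 GiB size are placeholders, not from the original answer):

```c
#include <windows.h>
#include <winioctl.h>

int main(void)
{
    HANDLE h = CreateFileA("C:\\sparsefile.bin", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE)
        return 1;

    /* Mark the file as sparse so unwritten ranges allocate no clusters. */
    DWORD bytes = 0;
    if (!DeviceIoControl(h, FSCTL_SET_SPARSE, NULL, 0, NULL, 0, &bytes, NULL)) {
        CloseHandle(h);
        return 1;
    }

    /* Extend the file to 1 GiB; the hole remains unallocated on disk. */
    LARGE_INTEGER size;
    size.QuadPart = 1024LL * 1024 * 1024;
    SetFilePointerEx(h, size, NULL, FILE_BEGIN);
    SetEndOfFile(h);

    CloseHandle(h);
    return 0;
}
```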

