Is there an addressing issue for writing an FTP client that needs to upload files larger than 4 gigs?
If my FTP client intends to upload files over 4 GB in size, assuming I'm streaming the data, my pointer is going to hit the wall at around 4 GB if it's a 32-bit pointer, right? I'm trying to imagine what's going on behind the scenes and am not able to visualize how this could work... however it MUST work, since I have downloaded files larger than this in the past.
So, my question is twofold: what happens on the client (and does it need to be a 64-bit client, on a 64-bit machine), and what happens on the server (and does IT also have to be a 64-bit machine)?
I realize that the file will be broken into smaller files for transmission, but isn't the program going to explode just trying to address the parts of the file beyond the 4,294,967,295 mark?
I think this is a related post, but I'm not sure what conclusion they come to. The answers seem to point both to the limitations of the language's pointer (in their case, Perl's) and of the OS: Why can't my Perl program create files over 4 GB on Windows?
The client or server should read the data in chunks (I would do a multiple of the page size or something similar) and write the chunks to disk. There is no need to have the whole file in RAM all at once.
Something like this pseudocode (error checking and similar omitted) on the receiving end:
char chunk[4096];
ssize_t size;
while ((size = recv(socket, chunk, sizeof chunk, 0)) > 0) {
    write(file, chunk, size);
}
So the above sample is for the server; the client would do something similar:
char chunk[4096];
ssize_t size;
while ((size = read(file, chunk, sizeof chunk)) > 0) {
    send(sock, chunk, size, 0);
}
EDIT:
To address your comment: one thing you have to keep in mind is that the offset in the file isn't necessarily 32-bit on a 32-bit system; it can be 64-bit, since it is not actually a pointer, it is simply an offset from the beginning of the file. If the OS supports 64-bit offsets (and modern Windows/Linux/OS X all do), then you don't have to worry about it. As noted elsewhere, the filesystem the OS is trying to access is also a factor, but I figure if you have a file that is greater than 4 GB, then it is clearly on a filesystem that supports it ;-).
I think your confusion may stem from the overloaded use of the word "pointer". A file's current position pointer is not the same as a pointer to an object in memory. Modern 32-bit OSes support 64-bit file pointers just fine.
Whether the client is 32-bit or 64-bit has nothing to do with the maximum file size; a 32-bit OS supports files larger than 4 GB. The only requirement is that the underlying file system supports them: FAT16 and FAT32 cannot hold a file of 4 GB or more (FAT32 tops out at 4 GB minus 1 byte), but NTFS can.
Virtually every modern SDK supports 64-bit file offsets, even on a 32-bit operating system. So even if you have a 32-bit server and client, you can still transfer files larger than 4 GB.
The file position a program maintains is tracked separately from any memory pointer. Note, though, that the classic ftell (http://www.cplusplus.com/reference/clibrary/cstdio/ftell/) returns a long, which is only 4 bytes on 32-bit systems (and on 64-bit Windows), so for large files you want the 64-bit variants: _ftelli64/_fseeki64 on Windows, or ftello/fseeko with a 64-bit off_t on POSIX systems.
However, if your SDK or OS only supports 32-bit file offsets, then you have a problem.