Using curl or wget on the command line to download files
I apologize if this question has been asked before and if it's a simple one.
I am trying to download a file from an HTTP website onto my Unix machine using the command line. I log in to this website with a username and password.
Say I have this link (not a working link): http://www.abcd.org/portal/ABCPortal/private/DataDownload.action?downloadFile=&workspace.id=4180&datasetId=76999
If I paste this link into a browser, a dialog box opens asking whether I want to save the zip file it links to (say, xyz.zip). These files are about 1 GB in size.
I want to fetch the zip file behind that URL onto my Unix machine from the command line. I tried wget and curl with this kind of URL (providing the username and password), but I get the HTML form rather than the zip file. Is there a way to get the zip file that this kind of URL links to? I don't know anything about the directory structure on the machine where the files are stored.
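For reference, what I tried was along these lines (the username and password here are placeholders):

wget --user=myuser --password=mypass "http://www.abcd.org/portal/ABCPortal/private/DataDownload.action?downloadFile=&workspace.id=4180&datasetId=76999" -O xyz.zip
curl -u myuser:mypass "http://www.abcd.org/portal/ABCPortal/private/DataDownload.action?downloadFile=&workspace.id=4180&datasetId=76999" -o xyz.zip

Both commands save a small HTML page (the form) instead of the zip.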
Thanks for your help,
I guess you did not pass the Accept-Encoding header. Browsers send it by default; with CLI tools you have to set these options yourself.
I don't know about wget, but give it a try with curl (-v is the verbose flag, so you can follow the request/response headers):
curl -v "http://www.abcd.org/portal/ABCPortal/private/DataDownload.action?downloadFile=&workspace.id=4180&datasetId=76999" -H "Accept-Encoding: gzip" > /tmp/yourZippedFile.gz
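I have not used wget much myself, but I believe its --header option works the same way (untested on my side):

wget --header="Accept-Encoding: gzip" "http://www.abcd.org/portal/ABCPortal/private/DataDownload.action?downloadFile=&workspace.id=4180&datasetId=76999" -O /tmp/yourZippedFile.gz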
If that is not the case, maybe you can give a real-site example so we can follow your problem concretely. It is difficult to say more without seeing the HTTP traffic.
FYI: with curl on Windows, I have to add a User-Agent header:
curl -H "Accept-Encoding: gzip,deflate" -H "User-Agent: Mozilla/5.0 (Windows NT 5.1)" www.google.com > test3.gz
because without the User-Agent header it won't give me a gzip file:
curl -H "Accept-Encoding: gzip,deflate" www.google.com > test
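To check what you actually got back, the file command helps:

file test3.gz

It should report gzip compressed data; if it reports HTML or ASCII text instead, the server ignored the header.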