Can I use wget to check, but not download?
Can I use wget to check for a 404 and not actually download the resource? If so, how? Thanks
There is the command line parameter --spider exactly for this purpose. In this mode, wget does not download the file, and its return value is zero if the resource was found and non-zero if it was not. Try this (in your favorite shell):
wget -q --spider address
echo $?
Or if you want full output, leave the -q off, so just wget --spider address. The -nv option shows some output, but not as much as the default.
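For example, a minimal sketch that branches on the exit code (the URL here is just a placeholder):

#!/bin/sh
# --spider checks the resource without saving it; the exit code is
# 0 if it was found and non-zero if it was not (or another error occurred).
url="http://www.example.com/some/page"
if wget -q --spider "$url"; then
    echo "$url exists"
else
    echo "$url was not found"
fi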
If you want to check quietly via $?, without the hassle of grep'ing wget's output, you can use:
wget -q "http://blah.meh.com/my/path" -O /dev/null
This works even on URLs with just a path, but it has the disadvantage that something is really downloaded, so it is not recommended when checking big files for existence.
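As a rough sketch, the same check can be wrapped in a loop over a list of URLs (the file name urls.txt is just an assumption):

# Read one URL per line from urls.txt and report whether each one is reachable.
while read -r url; do
    if wget -q "$url" -O /dev/null; then
        echo "OK      $url"
    else
        echo "MISSING $url"
    fi
done < urls.txt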
You can use the following option to check whether the file exists:
wget --delete-after URL
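As a small illustration (the URL is a placeholder): --delete-after downloads the file and then removes it, so nothing is left behind, while the exit code still tells you whether the request succeeded:

wget -q --delete-after "http://www.example.com/index.html"
echo $?    # 0 if the resource was fetched successfully
ls         # the downloaded copy has already been deleted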
Yes, it's easy.
wget --spider www.bluespark.co.nz
That will give you:
Resolving www.bluespark.co.nz... 210.48.79.121
Connecting to www.bluespark.co.nz[210.48.79.121]:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
200 OK
Yes, to use wget to check, but not download, the target URL/file, just run:
wget --spider -S www.example.com
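-S prints the server's response headers (they go to stderr), so as a sketch you can pull out the status line if you need more than the exit code (the URL is a placeholder):

# Redirect stderr to stdout and keep only the HTTP status line, e.g. "HTTP/1.1 200 OK".
wget --spider -S "http://www.example.com/" 2>&1 | grep "HTTP/"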
If you are in a directory where only root has write access, you can simply run wget www.example.com/wget-test as a standard user account. It will hit the URL, but because the user has no write permission, the file won't be saved. This method works fine for me, as I use it for a cronjob.
Thanks.
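For illustration only, a cron entry along those lines might look like this (the schedule, directory, and URL are all assumptions):

# Every hour, change to a directory the user cannot write to and hit the URL;
# the page is requested but never saved to disk.
0 * * * * cd /usr/share && wget -q http://www.example.com/wget-test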