Downloading unknown filenames with PowerShell
I'm trying to set up a PowerShell script to go fetch an encryption key. The company puts it out on a website and expects us to manually go get it every 6 months, but, well, I'm lazy and forgetful. So I wrote a script to do it for me, which I'll schedule to run every 3 months or so. Here is what I have so far:
$url = "http://www.theirwebsite.com/pgp/corp07272011Public.asc"
$client = New-Object System.Net.WebClient
$proxy = New-Object System.Net.WebProxy("http://OurProxy:80")
$proxy.UseDefaultCredentials = $true
$client.Proxy = $proxy
$client.DownloadFile($url, ".\CorpPublicKey.asc")
And that works for their current key. BUT, the file I'm downloading uses the date as part of its name. When 7/27/2011 rolls around, presumably the new key will be named after 1/27/2012, or something similar.
So this may be a simple .NET issue, but how do I find what files are out on a website? Or is there an entirely better way to go about this?
EDIT: OK, one tidbit that I didn't mention is that this file is linked to from a known webpage. Soooo... theoretically I could parse the webpage for the link, and use that to find the filename. I'll post the code when/if I get it all sorted out.
EDIT, part Deux
So I figured this one out; see my answer below for the details and the working script.
Answering my own question just so there aren't unanswered questions hanging out there.
So I figured this one out.

1) You can't simply list the files out on a webserver; that's why webmasters publish file trees. Alternatively, you have a spider crawl through all the links on a site.
2) Knowing that there was always going to be a link on the website to my target, I fetch the webpage, find the link/filename, and can then go download the file.
3) The proper way of doing this would be to parse the page into XML and then search the anchor tags for my link with XPath or something.
4) But that's annoying, so I just regexed for the .asc file and called it done.

$url = "http://www.theirwebsite.com/pgp/"
$client = New-Object System.Net.WebClient
$proxy = New-Object System.Net.WebProxy("http://ourproxy:80")
$proxy.UseDefaultCredentials = $true
$client.Proxy = $proxy
# Grab the page that links to the key
$webstring = $client.DownloadString($url)
# -match outputs $true/$false; the matched filename lands in $matches[0]
$webstring -match "aspecifcname.*asc"
$matches[0]
$url = "http://www.theirwebsite.com/pgp/" + $matches[0]
"url: " + $url
# Huh, we need to use the full path for downloading this file. .\ didn't cut it
$client.DownloadFile($url, "C:\Program Files (x86)\theirCompany\savedKey.asc")
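The `-match` step is the part most likely to bite later, so here's a minimal, self-contained sketch of how it behaves. The HTML snippet and filename are made up for illustration, and the `corp\d{8}Public\.asc` pattern is just a guess based on the current key's name:

```powershell
# Hypothetical page HTML; the real page layout may differ.
$webstring = '<html><body><a href="corp01272012Public.asc">Current key</a></body></html>'

# -match returns $true/$false and, on success, fills the automatic $matches table.
if ($webstring -match 'corp\d{8}Public\.asc') {
    $filename = $matches[0]   # whole match: corp01272012Public.asc
}
$filename
```

A tighter pattern like `\d{8}` also avoids the greedy `.*` in the original, which would happily swallow everything between the first "aspecifcname" and the last "asc" on the page.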
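Side note for anyone finding this later: PowerShell 3.0's `Invoke-WebRequest` (which didn't exist when this was written) parses a page's anchor tags into a `.Links` collection, so the regex step could go away entirely. A sketch with the web call mocked out, since the URL above is a placeholder:

```powershell
# In real use, something like (proxy settings as above):
#   $links = (Invoke-WebRequest -Uri "http://www.theirwebsite.com/pgp/" `
#             -Proxy "http://ourproxy:80" -ProxyUseDefaultCredentials).Links
# Mocked here so the selection logic runs offline:
$links = @(
    [pscustomobject]@{ href = 'index.html' }
    [pscustomobject]@{ href = 'corp01272012Public.asc' }   # hypothetical next key name
)

# Take the first .asc link, mirroring the regex approach above
$keyFile = ($links | Where-Object { $_.href -like '*.asc' } |
            Select-Object -First 1).href
$keyFile
```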