Extract a pattern from the output of curl
I would like to use curl on the command line to grab a URL, pipe the output through a pattern, and return a list of URLs that match that pattern.
I am running into problems with the greedy behaviour of the pattern and can't seem to get past it. Any help on this would be appreciated.
curl http://www.reddit.com/r/pics/ | grep -ioE "http://imgur\.com/.+(jpg|jpeg|gif|png)"
So, grab the data from the URL, which returns a mess of HTML that may need some line breaks inserted somewhere, unless the regex can return more than one match from a single line. The pattern is pretty simple: any string that matches...
- starts with http://imgur.com/
- has A-Z a-z 0-9 (maybe some others) and is, so far, 5 chars long; 8 should cover it forever if I wanted to limit that aspect of the pattern, which I don't
- ends in a .graphic_file_format_extension (jpg, jpeg, gif, png)
That's about it; at that URL, with default settings, I should generally get back a good set of images. I would not be opposed to using the RSS feed URL for the same page; it may be easier to parse, actually.
Thanks everyone!
Edit: Thanks for the quick answer; my final command is now:
$ curl -s http://www.reddit.com/r/pics/ | grep -ioE "http:\/\/imgur\.com\/.{1,10}\.(jpg|jpeg|gif|png)"
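One further tweak that may help: the same imgur link can appear more than once in the page markup, so if duplicates are unwanted, a sort -u on the end of the pipeline collapses them. A minimal sketch, reusing the command above:
# same pattern as above; sort -u drops duplicate URLs (assumes order doesn't matter)
curl -s http://www.reddit.com/r/pics/ | grep -ioE "http://imgur\.com/.{1,10}\.(jpg|jpeg|gif|png)" | sort -u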
Try:
http:\/\/imgur\.com\/.{5,8}\.(jpg|jpeg|gif|png)
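The bounded quantifier keeps the match from running on past the image ID. If you want to restrict it to the characters described in the question, a character class works too; a sketch of the full pipeline, assuming imgur IDs are 5-8 alphanumeric characters as the question suggests:
# assumes imgur IDs are 5-8 letters/digits, per the question
curl -s http://www.reddit.com/r/pics/ | grep -ioE "http://imgur\.com/[A-Za-z0-9]{5,8}\.(jpg|jpeg|gif|png)"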
Cool. You can also grep your WAN IP from a URL:
curl -s https://hostpapastatus.com/ip/ | grep -ioE "([0-9]{1,3}[\.]){3}[0-9]{1,3}"
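If the page ever returns more than one dotted quad, a head -1 on the end keeps only the first match:
# keep only the first match, in case the page contains more than one IP-like string
curl -s https://hostpapastatus.com/ip/ | grep -ioE "([0-9]{1,3}[\.]){3}[0-9]{1,3}" | head -1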