Squid client purge utility [closed]
I've been using the purge utility, i.e.:
squidclient -m PURGE http://www.example.com/
The above command purges that exact URL, but it leaves everything else under it in the cache (e.g. http://www.example.com/page1). Is there a way to purge every document under that URL?
I've had limited success messing with this line:
awk '{print $7}' /var/log/squid/access.log | grep www.example.com | sort | uniq | xargs -n 1 squidclient -m PURGE -s
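A tightened version of that pipeline can be sketched as a dry run. The log entries below are invented for illustration (a real log lives at /var/log/squid/access.log); field 7 is the request URL in Squid's native log format. Removing the leading `echo` would actually send the PURGE requests:

```shell
# Create a throwaway sample log (hypothetical entries for demonstration).
log=$(mktemp)
cat > "$log" <<'EOF'
1234 0 10.0.0.1 TCP_HIT/200 500 GET http://www.example.com/page1 - NONE/- text/html
1235 0 10.0.0.1 TCP_HIT/200 500 GET http://www.example.com/page2 - NONE/- text/html
1236 0 10.0.0.1 TCP_MISS/200 500 GET http://other.example.net/x - NONE/- text/html
1237 0 10.0.0.1 TCP_HIT/200 500 GET http://www.example.com/page1 - NONE/- text/html
EOF

# Field 7 is the URL: keep only this site's URLs, deduplicate, and emit
# one purge command per URL (dry run -- drop "echo" to really purge).
cmds=$(awk '$7 ~ /^http:\/\/www\.example\.com\// {print $7}' "$log" |
       sort -u |
       xargs -n 1 echo squidclient -m PURGE)
printf '%s\n' "$cmds"
rm -f "$log"
```

Anchoring the match to the start of the URL avoids purging unrelated sites whose URLs merely contain the string, and `sort -u` collapses the repeated hits so each URL is purged once.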
First of all, thank you KimVais for advising me to ask on Server Fault; I found a solution there.
As answered on Server Fault:
The 3rd-party purge utility will do exactly what you seek:
The purge tool is a kind of magnifying glass into your squid-2 cache. You can use purge to have a look at which URLs are stored in which file within your cache. The purge tool can also be used to release objects whose URLs match user-specified regular expressions. A more troublesome feature is the ability to remove files squid does not seem to know about any longer.
For our accelerating (reverse) proxy, I use a config like this:
purge -c /etc/squid/squid.conf -p localhost:80 -P0 -se 'http://www.mysite.com/'
-P0 shows the list of matching URLs but does not remove them; change it to -P1 to send a PURGE to the cache, as in your example.
The net-purge gem adds Net::HTTP::Purge to Ruby, so you can easily purge your cache:
require 'net-purge'

Net::HTTP.start('417east.com') do |http|
  request = Net::HTTP::Purge.new('/')
  response = http.request(request)
  puts response.body # Guru Meditation
end
I'd like to add that there is no O(1) way to invalidate multiple objects in the Squid cache. See the Squid FAQ for details.
For comparison, Nginx and Apache Traffic Server seem to lack this feature, too. OTOH, Varnish implements banning, which in practice should do what you want.
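As an illustration of the Varnish approach mentioned above (a sketch, not Squid syntax; the host name is a placeholder), a ban can lazily invalidate every cached object for a site from the Varnish CLI:

```
# Varnish CLI: ban all cached objects whose Host header matches the site.
# Banned objects are evicted lazily on the next lookup, not deleted eagerly.
varnishadm "ban req.http.host == www.example.com && req.url ~ ."
```

This requires a running varnishd instance, so it is shown here only to contrast with Squid's per-URL PURGE.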
There are several ways to purge. Here are two I always use:
From a client running macOS or Linux:
curl -X PURGE http://URL.of.Site/ABC.txt
Directly on the server running Squid:
squidclient -m PURGE http://URL.of.Site/ABC.txt
In either case, squid.conf must allow the PURGE method:
acl Purge method PURGE
http_access allow localhost Purge
http_access allow localnet Purge
http_access deny Purge
Apache Traffic Server v6.0.0 adds a "cache generation ID" which can be set per remap rule. So you can effectively purge an entire "site" at no cost at all; it doesn't actually do anything other than make the old versions unavailable.
This works well with the ATS cache because it is a cyclical cache (we call it the cyclone cache): objects are never actively removed, just "lost".
Using this new option is fairly straightforward, e.g.:
map http://example.com http://real.example.com \
    @plugin=conf_remap.so \
    @pparam=proxy.config.http.cache.generation=1
To instantly (zero cost) purge all cached entries for example.com, simply bump the generation ID to 2, and reload the configuration the normal way.
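Concretely, the purge amounts to a one-line edit of the remap rule (a sketch, assuming conf_remap's @pparam= syntax; only the generation number changes), followed by reloading the configuration:

```
map http://example.com http://real.example.com \
    @plugin=conf_remap.so \
    @pparam=proxy.config.http.cache.generation=2
```

Every object cached under generation 1 instantly stops matching lookups for this remap rule, which is why the operation costs nothing.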
I should also say that it would be very easy to write a plugin that loads these generation IDs from some external source other than our remap.config.