C# WebClient disable cache
Good day.
I'm using the WebClient class in my C# application to download the same file every minute. The application then performs a simple check to see whether the file has changed, and if it has, it does something with it.
Since this file is downloaded every minute, the WebClient caching system caches the file instead of downloading it again; it simply serves it from the cache, and that gets in the way of checking whether the downloaded file is new.
So I would like to know how I can disable the caching system of the WebClient class.
I've tried:
Client.CachePolicy = new System.Net.Cache.RequestCachePolicy(System.Net.Cache.RequestCacheLevel.BypassCache);
I also tried headers:
WebClient.Headers.Add("Cache-Control", "no-cache");
That didn't work either. So how can I disable the cache for good?
Thanks.
EDIT
I also tried the following CacheLevels: NoCacheNoStore, BypassCache, and Reload. No effect. However, if I reboot my computer the cache seems to be cleared, but I can't be rebooting the computer every time.
UPDATE in light of recent activity (8 Sep 2012)
The answer marked as accepted solved my issue. To put it simply, I used sockets to download the file, and that solved my issue: basically a GET request for the desired file. I won't go into details on how to do it, because I'm sure you can find plenty of "how to" posts right here on SO to do the same yourself. This doesn't mean my solution is also the best for you; my first advice is to read the other answers and see if any are useful.
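For those curious, a minimal sketch of that sockets approach (the host, path, and port here are placeholders for illustration, not details from the accepted answer); hand-building the GET request bypasses the WebClient machinery entirely:

```csharp
using System;
using System.IO;
using System.Net.Sockets;

class RawGet
{
    // Builds a minimal HTTP/1.1 GET request by hand.
    public static string BuildRequest(string host, string path)
    {
        return "GET " + path + " HTTP/1.1\r\n" +
               "Host: " + host + "\r\n" +
               "Cache-Control: no-cache\r\n" +  // ask intermediaries not to serve from cache
               "Connection: close\r\n\r\n";     // one request per socket, no reuse
    }

    static void Main()
    {
        // example.com and /file.txt are hypothetical.
        using (var client = new TcpClient("example.com", 80))
        using (var stream = client.GetStream())
        {
            var writer = new StreamWriter(stream);
            writer.Write(BuildRequest("example.com", "/file.txt"));
            writer.Flush();
            // The raw response: status line, headers, then the file body.
            Console.WriteLine(new StreamReader(stream).ReadToEnd());
        }
    }
}
```

Note that you then have to parse the status line and headers yourself, which is exactly why this is a last resort rather than a first choice.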
Well anyway, since this question has seen some recent activity, I thought about adding this update to include some hints or ideas for those facing similar problems who have tried everything they could think of and are sure the problem doesn't lie with their code. The code is the likely culprit in most cases, but sometimes we just don't quite see it; go have a walk and come back after a few minutes, and you will probably see it at point-blank range like it was the most obvious thing all along.
Either way, if you're sure, then in that case I advise you to check whether your request goes through some other device with caching capabilities (computers, routers, proxies, ...) before it gets to the intended destination.
Consider that most requests go through some of the devices mentioned before, most commonly routers, unless, of course, you are connected directly to the Internet via your service provider's network.
At one point my own router was caching the file. Odd, I know, but it was the case: whenever I rebooted it or connected directly to the Internet, my caching problem went away. And no, there wasn't any other device connected to the router that could be blamed, only the computer and the router.
And by the way, a general piece of advice, although it mostly applies to those who work on their company's development computers instead of their own: could your development computer, by any chance, be running a caching service of some sort? It is possible.
Furthermore, consider that many high-end websites and services use Content Delivery Networks (CDNs), and depending on the CDN provider, whenever a file is updated or changed it takes some time for the change to propagate through the entire network. Therefore it's possible you had the bad luck of asking for a file in the middle of an update, and the CDN server closest to you hadn't finished updating.
In any case, especially if you are always requesting the same file over and over, or if you can't find where the problem lies, then, if possible, I advise you to reconsider the approach of requesting the same file time after time, and instead look into building a simple Web Service to satisfy the needs you first thought of satisfying with such a file in the first place.
And if you are considering that option, I think you will probably have an easier time building a REST-style Web API for your own needs.
I hope this update is useful in some way to you; it certainly would have been for me a while back. Best of luck with your coding endeavors.
You could try appending some random number to your url as part of a querystring each time you download the file. This ensures that urls are unique each time.
For example:
Random random = new Random();
string url = originalUrl + "?random=" + random.Next().ToString();
webclient.DownloadFile(url, downloadedfileurl);
From the above I would guess that your problem is somewhere else. Can you log HTTP requests on the server side? What do you get when you alter some random seed parameter?
Maybe the SERVER caches the file (if the log shows that the request really is triggered every minute).
Do you use ISA or Squid?
What is the HTTP response code for your request?
I know that answering with questions might not be popular, but a comment doesn't allow me this much text :)
EDIT:
Anyway, use an HttpRequest object instead of WebClient, and hopefully (if you place your doubts in WebClient) everything will be solved. If it isn't solved with HttpRequest, then the problem really IS somewhere else.
Further refinement:
Go even lower: How do I Create an HTTP Request Manually in .Net?
This is pure sockets, and if the problem still persists, then open a new question and tag it WTF :)
Try NoCacheNoStore:
Never satisfies a request by using resources from the cache and does not cache resources. If the resource is present in the local cache, it is removed. This policy level indicates to intermediate caches that they should remove the resource. In the HTTP caching protocol, this is achieved using the no-cache cache control directive.
client.CachePolicy = new System.Net.Cache.RequestCachePolicy(System.Net.Cache.RequestCacheLevel.NoCacheNoStore);
In some scenarios, network debugging software can cause this issue. To make sure your URL is not cached, you can append a random number as the last parameter to make the URL unique. In most cases this random parameter is ignored by servers (which try to read parameters sent as name-value pairs).
Example: http://www.someserver.com/?param1=val1&ThisIsRandom=RandomValue
Where ThisIsRandom=RandomValue is the new parameter added.
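A quick sketch of building such a URL in C#; the parameter name ThisIsRandom is just the one from the example above, and a GUID stands in for the random value:

```csharp
using System;

class CacheBuster
{
    // Appends a unique query parameter so each request URL differs,
    // defeating URL-keyed caches along the way.
    public static string AddRandomParam(string baseUrl)
    {
        string sep = baseUrl.Contains("?") ? "&" : "?";
        return baseUrl + sep + "ThisIsRandom=" + Guid.NewGuid().ToString("N");
    }

    static void Main()
    {
        Console.WriteLine(AddRandomParam("http://www.someserver.com/?param1=val1"));
    }
}
```

The tradeoff is that every cache between you and the server now stores a copy per unique URL, so this is best kept to low-frequency polling.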
client.CachePolicy = new RequestCachePolicy(RequestCacheLevel.BypassCache);
Should work. Just make sure you clear the cache and delete any temporarily downloaded files in Internet Explorer before running the code, as System.Net and IE both use the same cache.
I had a similar problem in PowerShell using WebClient, which was still present after switching to WebRequest. What I discovered is that the socket is reused, and that causes all sorts of server/network-side caching (and in my case a load balancer got in the way too, especially problematic with HTTPS). The way around this is to disable keep-alive, and possibly pipelining, on the WebRequest object as below, which forces a new socket for each request:
# Define Funcs
Function httpRequest {
    param([string]$myurl)
    $r = [System.Net.WebRequest]::Create($myurl)
    $r.KeepAlive = $false
    $sr = New-Object System.IO.StreamReader(($r.GetResponse()).GetResponseStream())
    $sr.ReadToEnd()
}
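For reference, the same idea in C# (a sketch; the URL is a placeholder): HttpWebRequest exposes KeepAlive and Pipelined properties, and turning both off forces a fresh connection per request instead of a reused socket:

```csharp
using System;
using System.Net;

class NoKeepAlive
{
    static void Main()
    {
        // example.com is a hypothetical target.
        var request = (HttpWebRequest)WebRequest.Create("http://example.com/file.txt");
        request.KeepAlive = false;  // no Connection: keep-alive, so the socket is not reused
        request.Pipelined = false;  // disable HTTP pipelining as well
        Console.WriteLine(request.KeepAlive); // False
    }
}
```

This doesn't disable any cache by itself, but it rules out the reused-connection effects described above when you're debugging.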
I guess you will have to use WebRequest/WebResponse rather than WebClient:
WebRequest request = WebRequest.Create(uri);
// Define a cache policy for this request only.
HttpRequestCachePolicy noCachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.NoCacheNoStore);
request.CachePolicy = noCachePolicy;
WebResponse response = request.GetResponse();
// Below is the function for downloading the file
public static int DownloadFile(String remoteFilename, String localFilename)
{
    // Function will return the number of bytes processed
    // to the caller. Initialize to 0 here.
    int bytesProcessed = 0;

    // Assign values to these objects here so that they can
    // be referenced in the finally block
    Stream remoteStream = null;
    Stream localStream = null;
    WebResponse response = null;

    // Use a try/catch/finally block as both the WebRequest and Stream
    // classes throw exceptions upon error
    try
    {
        // Create a request for the specified remote file name
        WebRequest request = WebRequest.Create(remoteFilename);

        // Define a cache policy for this request only.
        HttpRequestCachePolicy noCachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.NoCacheNoStore);
        request.CachePolicy = noCachePolicy;

        if (request != null)
        {
            // Send the request to the server and retrieve the
            // WebResponse object
            response = request.GetResponse();
            if (response != null)
            {
                if (response.IsFromCache)
                {
                    // do what you want
                }

                // Once the WebResponse object has been retrieved,
                // get the stream object associated with the response's data
                remoteStream = response.GetResponseStream();

                // Create the local file
                localStream = File.Create(localFilename);

                // Allocate a 1k buffer
                byte[] buffer = new byte[1024];
                int bytesRead;

                // Simple do/while loop to read from stream until
                // no bytes are returned
                do
                {
                    // Read data (up to 1k) from the stream
                    bytesRead = remoteStream.Read(buffer, 0, buffer.Length);

                    // Write the data to the local file
                    localStream.Write(buffer, 0, bytesRead);

                    // Increment total bytes processed
                    bytesProcessed += bytesRead;
                } while (bytesRead > 0);
            }
        }
    }
    catch (Exception e)
    {
        Console.WriteLine(e.Message);
    }
    finally
    {
        // Close the response and stream objects here
        // to make sure they're closed even if an exception
        // is thrown at some point
        if (response != null) response.Close();
        if (remoteStream != null) remoteStream.Close();
        if (localStream != null) localStream.Close();
    }

    // Return total bytes processed to caller.
    return bytesProcessed;
}
Using HTTPRequest is definitely the right answer for your problem. However, if you wish to keep your WebBrowser/WebClient object from using cached pages, you should include not just "no-cache" but all of these headers:
<meta http-equiv="Cache-control" content="no-cache">
<meta http-equiv="Cache-control" content="no-store">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Expires" content="-1">
In IE11, it didn't work for me until I included either one or both of the last two.
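If you are issuing the request yourself with WebClient rather than rendering pages, the analogue is to send those directives as HTTP request headers (a sketch; whether a given server or intermediary honors them is another matter, and Expires is really a response header, included here only to mirror the list above):

```csharp
using System;
using System.Net;

class NoCacheHeaders
{
    static void Main()
    {
        var client = new WebClient();
        // Send every cache-defeating directive, not just no-cache.
        client.Headers[HttpRequestHeader.CacheControl] = "no-cache, no-store";
        client.Headers[HttpRequestHeader.Pragma] = "no-cache";      // for HTTP/1.0 caches
        Console.WriteLine(client.Headers[HttpRequestHeader.CacheControl]);
    }
}
```

Note that WebClient resets some headers after each request, so set them before every download, not just once.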
None of the methods here seem to solve one problem: if a web page was once accessible and has now been deleted from the server, HttpWebResponse.GetResponse() will still give you a response from a cached copy. Until a sufficient period of time has passed, or you restart the computer, it will NOT trigger the expected exception for a 404 page-not-found error, so you cannot tell that the web page no longer exists at all.
I tried everything:
- Set headers like ("Cache-Control", "no-cache")
- Set "request.CachePolicy" to "noCachePolicy"
- Deleted IE temp/history files
- Used wired Internet without a router
.......... IT DOES NOT WORK!
Fortunately, if the web page has changed its content, HttpWebResponse.GetResponse() will give you a fresh page that reflects the change.
Check that you are not being rate limited! I was getting this back from an nginx server:
403 Forbidden
Rate limited exceeded, please try again in 24 hours.
Here is the program I was using (C#):
using System;
using System.IO;
using System.Net;

namespace ConsoleApplication1
{
    class Program
    {
        static void Main(string[] args)
        {
            DownloadFile();
            Console.ReadLine();
        }

        public static void DownloadFile()
        {
            var downloadedDatabaseFile = Path.Combine(Path.GetTempPath(), Path.GetTempFileName());
            Console.WriteLine(downloadedDatabaseFile);

            var client = new WebClient();
            client.DownloadProgressChanged += (sender, args) =>
            {
                Console.WriteLine("{0} of {1} {2}%", args.BytesReceived, args.TotalBytesToReceive, args.ProgressPercentage);
            };
            client.DownloadFileCompleted += (sender, args) =>
            {
                Console.WriteLine("Download file complete");
                if (args.Error != null)
                {
                    Console.WriteLine(args.Error.Message);
                }
            };
            client.DownloadFileAsync(new Uri("http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dats.gz"), downloadedDatabaseFile);
        }
    }
}
The console prints out:
C:\Users\jake.scott.WIN-J8OUFV09HQ8\AppData\Local\Temp\2\tmp7CA.tmp
Download file complete
The remote server returned an error: (403) Forbidden.
Since I use the following:
wclient.CachePolicy = new System.Net.Cache.RequestCachePolicy(System.Net.Cache.RequestCacheLevel.NoCacheNoStore);
wclient.Headers.Add("Cache-Control", "no-cache");
I no longer get a cached file.
I additionally added this function, which I found, to delete the IE temp files before every call:
private void del_IE_files()
{
    string path = Environment.GetFolderPath(Environment.SpecialFolder.InternetCache);

    // for deleting files
    System.IO.DirectoryInfo DInfo = new DirectoryInfo(path);
    FileAttributes Attr = DInfo.Attributes;
    DInfo.Attributes = FileAttributes.Normal;

    foreach (FileInfo file in DInfo.GetFiles())
    {
        file.Delete();
    }

    foreach (DirectoryInfo dir in DInfo.GetDirectories())
    {
        try
        {
            dir.Delete(true); // delete subdirectories and files
        }
        catch
        {
        }
    }
}
If you have access to the web server, open Internet Explorer and go to:
Internet Explorer -> Internet Options -> Browsing History "Settings" -> Temporary Internet Files: "Never"
Clear the browser cache and voilà, it will work!