
Googlebot causes .NET System.Web.HttpException

I have an ASP.NET website mixed with classic ASP (we are working on converting it to .NET). I recently upgraded from .NET 1.1 to .NET 4.0 and switched to the integrated pipeline in IIS 7.

Since making these changes, ELMAH has been reporting errors from classic ASP pages with practically no detail (and status code 404):

System.Web.HttpException (0x80004005)
   at System.Web.CachedPathData.ValidatePath(String physicalPath)
   at System.Web.HttpApplication.PipelineStepManager.ValidateHelper(HttpContext context)

But when I request the page myself, no error occurs. All of these errors showing up in ELMAH are caused by requests from the Googlebot crawler (judging by the user agent string).

How come .NET picks up errors for classic ASP pages? Does this have to do with the integrated pipeline?

Any ideas why the error only happens when Google crawls the page or how I can get more details to find the underlying fault?


Add this to your web.config file:

<httpRuntime relaxedUrlToFileSystemMapping="true" />

This disables the default check that makes sure requested URLs conform to Windows path rules.

To reproduce the problem, add %20 (URL-escaped space) to the end of the URL, e.g. http://example.org/%20. It's fairly common to see this problem from search crawlers when they encounter mis-typed links with spaces, e.g. <a href="http://example.org/ ">example</a>.

The HttpContext.Request.Url property seems to trim the trailing space, which is why logging tools like ELMAH don't reveal the actual problem.
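
If you need the original URL for diagnosis, one option is to log the raw request data yourself in Global.asax. This is just a sketch; whether Request.RawUrl or the UNENCODED_URL server variable actually preserves the trailing space in your setup is an assumption worth verifying.

    // Global.asax.cs -- minimal sketch for capturing the raw URL of a failing request.
    using System;
    using System.Web;

    public class Global : HttpApplication
    {
        protected void Application_Error(object sender, EventArgs e)
        {
            Exception ex = Server.GetLastError();

            // Request.Url normalizes the path; RawUrl and the UNENCODED_URL server
            // variable are closer to what the client actually sent (assumption:
            // verify which one keeps the trailing space on your IIS version).
            string rawUrl = Request.RawUrl;
            string unencoded = Request.ServerVariables["UNENCODED_URL"];

            System.Diagnostics.Trace.WriteLine(string.Format(
                "Error: {0} | RawUrl: '{1}' | UNENCODED_URL: '{2}'",
                ex != null ? ex.Message : "(none)", rawUrl, unencoded));
        }
    }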


When you changed from the classic pipeline to the integrated pipeline, you essentially turned control over to .NET, which now invokes the ASP parser itself. This makes it possible for custom HTTP modules written in .NET managed code to change the output of the response or, in the case of ELMAH, give you logging details.
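
For illustration, here is a minimal sketch of a managed module that runs for every request under the integrated pipeline, including classic .asp pages, which is how ELMAH ends up seeing them. The module name and the logging it does are placeholders, not part of ELMAH.

    using System;
    using System.Web;

    // A minimal managed HTTP module; under the integrated pipeline it sees every
    // request, including classic .asp pages.
    // Register it in web.config under <system.webServer><modules>, e.g.:
    //   <add name="RequestTraceModule" type="RequestTraceModule" />
    public class RequestTraceModule : IHttpModule
    {
        public void Init(HttpApplication app)
        {
            app.BeginRequest += (sender, e) =>
            {
                var ctx = ((HttpApplication)sender).Context;
                System.Diagnostics.Trace.WriteLine(
                    "Handling: " + ctx.Request.RawUrl +
                    " | User-Agent: " + ctx.Request.UserAgent);
            };
        }

        public void Dispose() { }
    }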

I would look at the log, see what user agent Googlebot was using when the error occurred, and follow the exact same path it did with your user agent changed.

Mozilla Firefox with the User Agent Switcher add-on is the best browser for this.
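
If you prefer to reproduce it outside the browser, a small console sketch like the one below sends the same request with Googlebot's user agent string. The URL is a placeholder; substitute the path from your ELMAH log entry.

    using System;
    using System.Net;

    class CrawlerRepro
    {
        static void Main()
        {
            // Placeholder URL -- use the path from the ELMAH log entry.
            var request = (HttpWebRequest)WebRequest.Create("http://example.org/page-from-elmah-log");
            request.UserAgent =
                "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";

            try
            {
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    Console.WriteLine((int)response.StatusCode + " " + response.StatusDescription);
                }
            }
            catch (WebException ex)
            {
                // Error responses (4xx/5xx) surface as WebException; report their status too.
                var response = ex.Response as HttpWebResponse;
                Console.WriteLine(response != null
                    ? (int)response.StatusCode + " " + response.StatusDescription
                    : ex.Message);
            }
        }
    }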


It looks like the Google crawler is following links that no longer exist, i.e. there could be documents on your site that refer to other documents which have since been deleted.

It does not look serious to me, so you might consider filtering out that exception.
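
If you do decide to filter it, ELMAH supports programmatic filtering from Global.asax. The sketch below dismisses only these 404 HttpExceptions; it assumes ELMAH's ErrorFilterModule is registered in web.config and that the logging module is registered under the name ErrorLog (as in ELMAH's documentation), and the 404 test is based on the status code reported in the question.

    // Global.asax.cs -- sketch of ELMAH's programmatic error filtering.
    // Assumes Elmah.ErrorFilterModule is registered and the logging module
    // is named "ErrorLog" in web.config.
    using System.Web;
    using Elmah;

    public class Global : HttpApplication
    {
        protected void ErrorLog_Filtering(object sender, ExceptionFilterEventArgs e)
        {
            var httpEx = e.Exception as HttpException;

            // Dismiss only the 404s raised for these crawler requests;
            // adjust the test to match what your log actually shows.
            if (httpEx != null && httpEx.GetHttpCode() == 404)
            {
                e.Dismiss();
            }
        }
    }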


This only applies if you are using Angular, but you'll see this error if you have

<httpRuntime relaxedUrlToFileSystemMapping="false" />

(the setting mentioned in the previous answer, left at its default of false) and you use src instead of ng-src on an image or script tag, i.e.

<img src="{{SomeModelValue}}" />

should be

<img ng-src="{{SomeModelValue}}" />

This can also affect <a> tags where you use href instead of ng-href.
