A privacy app ineffectually tries to block tracking data for our web analytics. Should we detect the failed attempt and not track their users? [closed]
Closed 7 years ago.
My company has a web analytics package which we use for our own and customer marketing campaign tracking. It uses a combination of server logs, JS & image web bugs, cookies, unique cached files, and ETag headers to collect and collate user activity.
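For readers unfamiliar with ETag-based tracking (the least well-known of the mechanisms listed), here is a minimal sketch of the idea; the function name and header handling are illustrative, not our actual implementation:

```python
import uuid

def identify_by_etag(request_headers):
    """Return (visitor_id, is_returning_visitor).

    A tracking pixel is served with a unique ETag. On later visits the
    browser echoes that ETag back in If-None-Match, which lets the
    server recognize the visitor even with cookies disabled.
    """
    sent = request_headers.get("If-None-Match")
    if sent:
        # The browser cached our pixel and is revalidating it:
        # the ETag it sends back is the identifier we issued earlier.
        return sent.strip('"'), True
    # First visit: mint a fresh identifier to send as the ETag.
    return uuid.uuid4().hex, False

# First request carries no cached ETag, so a new ID is minted.
visitor_id, returning = identify_by_etag({})
assert returning is False

# A later request echoes the ETag back, re-identifying the visitor.
same_id, returning = identify_by_etag({"If-None-Match": '"%s"' % visitor_id})
assert returning is True and same_id == visitor_id
```

This is why clearing cookies alone does not defeat such a tracker: the identifier survives in the browser's HTTP cache.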
Recently we have found that a certain (unnamed) privacy-guard application which plugs into the user's browser is munging certain tracking codes with the apparent intent of preventing the user's activity from being tracked. We have purchased a copy of the app and tested locally, and it does the same for many other web bug and analytics applications, including Google Analytics.
For most of these, the way in which the data is altered would prevent the tracking software from operating properly. However, they use a consistent pattern for the alterations, and due to the way that our collation works, their changes have no effect on the operation of our tracking and analytics package. (Well, there is one side effect which reduces the accuracy of some timing calculations from milliseconds to seconds.)
In a nutshell, the situation is:
- Our analytics results are unaffected by the application's attempt to subvert the data.
- The user clearly intends to prevent analysis of their online activity.
- It is possible for us to alter our application to detect the attempted blocking.
- We would have to spend time and money patching and testing our application in order to make the attempted privacy blocking actually succeed.
So there is an ethical quandary as to how much effort we should expend to detect and honor the user's wishes. Some of the issues involved are:

- Isn't it the responsibility of the privacy app to perform as expected? There are ways they could alter the data that would prevent our analytics from tracking their users.
- Is it our responsibility to enhance our application to detect the user's intent? This would incur development cost and also eliminate valuable data (roughly 2% of our traffic uses this app).
What do you think our ethical responsibility should be?
- We should ignore it and have our application work as-is.
- We should take the expense, lose the data, and honor the users' implied desire.
- We should contact the developers of the app and tell them a better way to stop our system from working.
- We should publicize that their software does not perform as expected.
- Other...?
To clarify, the privacy tool simply doesn't work. Our application, without alteration, still tracks users who use it. We would have to change our app in order to not track these users.
We do have a cookie-based opt-out which the user can select from the tracker's home page.
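A cookie-based opt-out like the one described can be honored with a check as simple as the following sketch; the cookie name and value are hypothetical stand-ins for whatever the real tracker sets:

```python
# Hypothetical name of the opt-out cookie set from the tracker's home page.
OPT_OUT_COOKIE = "trk_optout"

def should_track(cookies):
    """Skip all tracking when the opt-out cookie is present and set."""
    return cookies.get(OPT_OUT_COOKIE) != "1"

assert should_track({}) is True                    # no opt-out: track
assert should_track({"trk_optout": "1"}) is False  # opted out: don't
```

The weakness of this approach, of course, is that the opt-out itself lives in a cookie, so clearing cookies silently re-enables tracking.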
We sent a note to the company that developed the privacy application, and they said they would look into it.
I have been active in computer privacy issues for more than 20 years and this is the very first time I have come across a question such as yours. It is very interesting.
You have no obligation to attempt to modify your application to detect the user's efforts, and there are several reasons why I would recommend that you not follow this course of action:
- There may be other applications that you are also rendering ineffective. You don't want to favor one application over another.
- If you take this action, you will need to watch for upgrades to both your application and the privacy application.
- If you just silently modify your application, the privacy community will lose a valuable "teachable moment."
Sadly, the "privacy negotiation" part of P3P was never really implemented. It would have been an ideal situation here.
If you feel strongly about this, you are welcome to contact the developer and tell them what they are doing wrong. Alternatively, if you have an academic bent, you could write an article for a privacy conference; it would be an interesting "lessons learned" piece. You could also write a blog post, but I suspect that you do not wish the publicity.
If you want to send me a private message, I would be happy to relay the message to the developer.
I would provide a way to disable your tracking, and contact the authors of the tool and ask them to use it explicitly. Don't get into an arms race trying to undo their work (it will only continue); provide a trivial 'off' switch and everyone will be happy.
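One trivially detectable "off" switch of this kind already exists: the Do Not Track (`DNT`) request header, which privacy tools can set on every request. A cooperative tracker could honor it with a check like this sketch (header parsing simplified):

```python
def tracking_allowed(request_headers):
    """Honor the Do Not Track header: "DNT: 1" means do not track."""
    return request_headers.get("DNT") != "1"

assert tracking_allowed({"DNT": "1"}) is False  # user opted out
assert tracking_allowed({}) is True             # no preference expressed
```

Compared with munging tracking codes, an explicit signal like this is robust against tracker updates, because it states the user's intent rather than exploiting one vendor's implementation details.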
I would just leave everything as-is and not contact the developer. If you get on their radar they might update the app to break your analytics completely.
I think the whole "tracking" fear of users is very interesting.
When tracking is used in an ethical manner, it actually benefits users, because the company can tell what works from a marketing perspective. This means less money wasted on marketing, which can instead be spent in other areas such as product development, or even on selling products at a lower cost.
IMO, makers of some of these "privacy" applications are guilty of exaggerating the "dangers" of ethical tracking to boost the demand for their products.
What ethical obligation do you have to assist me in anything that you suspect I am attempting and failing at?
I think the correct solution is to let the user decide if he wants to be tracked. As I see it, there are two ways to reach this goal:
- Filter those users out in your application.
- Tell the developer of the other application of its weaknesses.
I'd choose the approach that is less work for you. Write them an e-mail. If they don't improve their app, I would happily continue tracking. (At the same time you could consider an opt-out API, as others have suggested.)
I could even imagine that you would benefit if someone at that other company knows you or your company in a positive way.
I would first let the developers know about their product's deficiency, with as much information as you can give them to make it easy to fix. I would then add something to your own code that detects the tool and automatically notifies any sites that think they are protected but aren't, with an explanation of what you have done. I would not alter your tracker in such a way as to respect the poorly implemented privacy guard's intention; that seems to me to be asking too much, and it sends you down a dangerous path where you wind up being (or at least being perceived as) responsible for other people's software. Our software systems are generally too complex as it is; writing in special cases for broken software would, I think, just add to your system's complexity, at no great value to you, your customers, or web users generally.
If you do notify the developers and nothing has happened in a reasonable amount of time, I would consider making the product's deficiencies public. Not to shame them, but to protect users; if your software can (however inadvertently) overcome their defenses, malicious software certainly could as well.
I can't cite chapter and verse of why this course of action is ethical, but it's what makes sense, what feels right, to me. This is a great question.
I don't see you as having even a remote social responsibility to tell the customer "hey, you're not evading our system properly", so I'd tend towards "keep going as-is".
If it sincerely bothers you, then maybe contact the developers like you mention, but there's no reason for you to expend anything on it.
Personally, you should just ignore their efforts.
You're already inconsiderate of others trying to protect their privacy; why should this person using this software be any different?
Consider this: what is the primary reason cookies don't work for tracking? It's because people disable them in their browsers. I mean, I'm sure there's a browser out there somewhere that doesn't support cookies, maybe something from circa 1993.
But today? Seriously? The only reason a cookie wouldn't work is because the user chose to disable cookies.
Sure, there may be some minute portion of the market that isn't "allowed" to use cookies, maybe a school browser, or one locked down in a business. But, really, what fraction of the total cookie blockages can that really be? (I don't know your application or what kind of traffic you're getting.)
Since cookies don't work, you've resorted to JS tricks, again something that many folks turn off to protect themselves. Granted, JS support isn't as widespread as cookie support (notably on mobile browsers). But the argument is the same in terms of the overall data "lost" to those edge-case browsers.
So, I don't know why suddenly this new system concerns you so. I'd just push on like you're doing already.