
How can I de- and re-classify data?

Some of the data I work with contain sensitive information (names of persons, dates, locations, etc.). But I sometimes need to share "the numbers" with other people to get help with statistical analysis, or to process them on more powerful machines where I can't control who looks at the data.

Ideally I would like to work like this:

  1. Read the data into R (look at it, clean it, etc.)
  2. Select a data frame that I want to de-classify, run it through a package, and receive two "files": the de-classified data and a translation file. The latter I keep myself.
  3. The de-classified data can be shared, manipulated and processed without worries.
  4. Re-classify the processed data using the translation file.
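
The workflow in steps 2 and 4 can be sketched in a few lines of base R. This is only a minimal illustration for a single sensitive column; the function names `deidentify` and `reidentify` are hypothetical, not from any package.

```r
# Hypothetical sketch of steps 2 and 4 for one sensitive column.
# deidentify() returns the sanitized data plus a key to keep private;
# reidentify() reverses the mapping using that key.

deidentify <- function(df, col) {
  vals <- unique(df[[col]])
  key <- data.frame(id = seq_along(vals), value = vals,
                    stringsAsFactors = FALSE)
  df[[col]] <- key$id[match(df[[col]], key$value)]
  list(data = df, key = key)   # share 'data', keep 'key' yourself
}

reidentify <- function(df, key, col) {
  df[[col]] <- key$value[match(df[[col]], key$id)]
  df
}

foo <- data.frame(person = c("Mickey", "Donald", "Daisy"),
                  score  = c(1.2, -0.5, 0.3),
                  stringsAsFactors = FALSE)
out      <- deidentify(foo, "person")          # out$data has integer IDs
restored <- reidentify(out$data, out$key, "person")
```

After the round trip, `restored` is identical to the original `foo`, while `out$data` contains only integer codes in the person column.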

I suppose that this can also be useful when uploading data for processing "in the cloud" (Amazon, etc.).

Have you been in this situation? I first thought about writing a "randomize" function myself, but then I realized there is no end to how sophisticated this could get (for example, offsetting time stamps without losing their order). Maybe there is already an established method or tool?

Thanks to everyone who contributes to the [r] tag here at Stack Overflow!


One way to do this is with match. First I make a small data frame:

foo <- data.frame( person=c("Mickey","Donald","Daisy","Scrooge"), score=rnorm(4))
foo
   person      score
1  Mickey  0.3186301
2  Donald -0.5817907
3   Daisy  0.7145327
4 Scrooge -0.8252594

Then I make a key:

set.seed(100)
key <- as.character(foo$person[sample(1:nrow(foo))])

Obviously, you must save this key somewhere safe. Now I can encode the persons:

foo$person <- match(foo$person, key)
foo
  person      score
1      2  0.3186301
2      1 -0.5817907
3      4  0.7145327
4      3 -0.8252594

If I want the person names again I can index the key:

key[foo$person]
[1] "Mickey"  "Donald"  "Daisy"   "Scrooge"

Or use transform; this also works if the data has changed, as long as the person IDs remain the same:

foo <- rbind(foo, foo[sample(1:4), ], foo[sample(1:4, 2), ], foo)
foo
   person      score
1       2  0.3186301
2       1 -0.5817907
3       4  0.7145327
4       3 -0.8252594
21      1 -0.5817907
41      3 -0.8252594
31      4  0.7145327
15      2  0.3186301
32      4  0.7145327
16      2  0.3186301
11      2  0.3186301
12      1 -0.5817907
13      4  0.7145327
14      3 -0.8252594
transform(foo, person=key[person])
    person      score
1   Mickey  0.3186301
2   Donald -0.5817907
3    Daisy  0.7145327
4  Scrooge -0.8252594
21  Donald -0.5817907
41 Scrooge -0.8252594
31   Daisy  0.7145327
15  Mickey  0.3186301
32   Daisy  0.7145327
16  Mickey  0.3186301
11  Mickey  0.3186301
12  Donald -0.5817907
13   Daisy  0.7145327
14 Scrooge -0.8252594


Can you simply assign a GUID to each row from which you have removed all of the sensitive information? As long as your colleagues without the security clearance don't mess with the GUID, you'd be able to incorporate any changes and additions they make by joining on the GUID. Then it's just a matter of generating bogus ersatz values for the columns whose data you have purged: LastName1, LastName2, City1, City2, and so on.

EDIT: You'd have a table for each purged column (e.g. City, State, Zip, FirstName, LastName), each containing the distinct set of real classified values in that column and an integer value. "Jones" could then be represented in the sanitized dataset as, say, LastName22, "Schenectady" as City343, and "90210" as Zipcode716. This gives your colleagues valid values to work with (e.g. they'd have the same number of distinct cities as your real data, just with anonymized names), and the interrelationships of the anonymized data are preserved.

EDIT 2: If the goal is to give your colleagues sanitized data that is still statistically meaningful, then date columns require special processing. For example, if your colleagues need to do statistical computations on a person's age, you have to give them something close to the original date: not so close that it could be revealing, yet not so far that it would skew the analysis.
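
The per-column lookup-table idea can be sketched like this. This is a hand-rolled illustration under assumed column names, not code from any package; the date jitter window of ±30 days is an arbitrary example value.

```r
# Sketch: one lookup table per sensitive column, mapping each distinct
# real value to a numbered placeholder, plus a random offset for dates.

make_lookup <- function(x, prefix) {
  vals <- unique(x)
  data.frame(real  = vals,
             alias = paste0(prefix, seq_along(vals)),
             stringsAsFactors = FALSE)
}

dat <- data.frame(LastName = c("Jones", "Smith", "Jones"),
                  City     = c("Schenectady", "Albany", "Albany"),
                  DOB      = as.Date(c("1970-01-01", "1985-06-15",
                                       "1970-01-01")),
                  stringsAsFactors = FALSE)

ln_key <- make_lookup(dat$LastName, "LastName")  # kept private
ci_key <- make_lookup(dat$City,     "City")      # kept private

san <- dat
san$LastName <- ln_key$alias[match(dat$LastName, ln_key$real)]
san$City     <- ci_key$alias[match(dat$City,     ci_key$real)]
# Jitter each date by up to +/- 30 days: close enough for age-based
# statistics, far enough that the exact birthday is not revealed.
san$DOB <- dat$DOB + sample(-30:30, nrow(dat), replace = TRUE)
```

Repeated values map to the same alias (both "Jones" rows become the same placeholder), so joins and group counts still behave as they do in the real data.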


This sounds like a Statistical Disclosure Control problem. Take a look at the sdcMicro package.

EDIT: I just realized that your problem is slightly different. The point of Statistical Disclosure Control is to "damage" the data so that the risk of disclosure is reduced. By "damaging" the data you lose some information; that is the price you pay for the reduced disclosure risk. Your data will contain less information, so an analysis of it may give different, or weaker, results than the same analysis done on the original data.
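
To make the information-loss trade-off concrete, here is a deliberately crude illustration using plain rounding (a primitive stand-in for proper disclosure-control methods; it does not use sdcMicro's own functions).

```r
# Crude illustration of "damaging" data: coarsen a numeric variable so
# individual values are less identifying, at the cost of precision.

set.seed(1)
income  <- rnorm(1000, mean = 50000, sd = 8000)
damaged <- round(income, -4)   # coarsen to the nearest 10,000

# The mean survives coarsening quite well, but individual values and
# the shape of the distribution are degraded:
mean(income)
mean(damaged)
var(income)
var(damaged)
```

Every "damaged" value now sits on a 10,000 grid, so no exact income can be read off, but any analysis that depends on fine-grained variation will give different answers than on the original data.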

It depends on what you are going to do with your data.

