Just How Good Is ChatGPT in Data Science?

Many of you may have heard of ChatGPT, the dazzling new AI tool that has been drawing lots of gushing praise. How well does it do in data science contexts? I tried a few queries, and found interesting results.

I first requested, “Write an R function that returns every other element of a vector x, starting with the third.” I won’t show the code here, but suffice it to say that it worked! It did give me correct code.
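(For reference, here is one way such a function might look; this is my own sketch, not ChatGPT's output.)

everyOtherFromThird <- function(x) {
   if (length(x) < 3) return(x[0])      # empty result of the same type
   x[seq(3, length(x), by = 2)]         # indices 3, 5, 7, ...
}
everyOtherFromThird(10:20)   # 12 14 16 18 20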

Next, “In R’s built-in dataset CO2, fit a linear regression with ‘uptake’ as the outcome variable, and find the standard error of the intercept term.” Alas, it pleaded ignorance: “…I do not have the ability to perform calculations or access specific datasets…” Not really true; unfortunately, ChatGPT failed to pick up on the key phrase, “built-in.”

I then gave it what turned out to be an easier question, “In the output of R’s ‘lm’ function, how do I find the standard error of the intercept term?” It returned a correct and useful answer, stating that ‘lm’ returns lots of material, “…including the standard errors of the model coefficients. The standard error of the intercept term can be accessed by calling the summary function on the model object and looking at the output…”
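(For completeness, here is a minimal sketch of both tasks, my own code rather than ChatGPT's; I regress uptake on conc, though any predictors would do.)

lmout <- lm(uptake ~ conc, data = CO2)   # CO2 is built into R
summary(lmout)$coefficients["(Intercept)", "Std. Error"]
# equivalently: coef(summary(lmout))["(Intercept)", "Std. Error"]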

Next: “In R, what are the advantages of using the density() function in lieu of hist()?” The reply had some correct ingredients, but had problems:

“…the resulting plot [from density()] is smoother and more continuous, which can make it easier to see the underlying distribution of the data. This can be especially useful when the data is noisy or when the number of data points is small…The density function allows you to specify the bandwidth, or the smoothness, of the curve, which can be useful for highlighting different features of the data. In contrast, the hist function does not allow you to control the smoothness of the resulting plot.”

Putting aside the fact that the phrase “more continuous” is nonsense, there are other issues. It is not necessarily true that density() will do better on small/noisy data. And the app misses the point that hist() does have an analog of the bandwidth, namely the bin width.
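To make the comparison concrete, both functions expose a smoothing knob (a quick example of my own):

x <- rnorm(500)
hist(x, breaks = 30, freq = FALSE)   # bin width is hist()'s analog of the bandwidth
lines(density(x, bw = 0.3))          # overlay a kernel estimate with a chosen bandwidth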

Finally, I asked a question that every statistician is asked by non-stat people: “How can public opinion polls be accurate if they are based on only a small fraction of the population?” The response (not shown here) notes some standard practices such as stratified sampling, but basically begs the question.

Say, for example, 1200 people are sampled, as is common, and we stratify by race. Since Black Americans make up roughly 13% of the US population, that would mean we design the sample to include about 160 Black people. But anyone who wondered about the small sample overall would be even more puzzled as to why 160 African-Americans is “representative.”

So in this case, ChatGPT would give a very misleading answer to an important, common question.

And we see that machines can fail Statistics, just like college students. 🙂

Use of Differential Privacy in the US Census–All for Nothing?

The field of data privacy has long been of broad interest. In a medical database, for instance, how can administrators enable statistical analysis by medical researchers, while at the same time protecting the privacy of individual patients? Over the years, many methods have been proposed and used. I’ve done some work in the area myself.

But in 2006, an approach known as differential privacy (DP) was proposed by a group of prominent cryptography researchers. With its catchy name and theoretical underpinnings, DP immediately attracted lots of attention. Being more mathematical than many other statistical disclosure control methods, and thus good fodder for theoretical research, it quickly led to a flurry of papers showing how to apply DP in various settings.

DP was also adopted by some firms in industry, notably Apple. But what really gave DP a boost was the decision by the US Census Bureau to use DP for their publicly available data, beginning with the most recent census, 2020. On the other hand, that really intensified the opposition to DP. I have my own concerns about the method.

The Bureau, though, had what it considered a compelling reason to abandon its existing privacy methods: its extensive computer simulations showed that those methods were vulnerable to attack, allowing exact reconstruction of large portions of the “private” version of the census database. That, of course, must be avoided at all costs, so DP was implemented.

But now…it turns out that the Bureau’s reconstruction claim was incorrect, according to a recent paper by Krishna Muralidhar, who writes,

“This study shows that there are a practically infinite number of possible reconstructions, and each reconstruction leads to assigning a different identity to the respondents in the reconstructed data. The results reported by the Census Bureau researchers are based on just one of these infinite possible reconstructions and is easily refuted by an alternate reconstruction.”

This is one of the most startling statements I’ve seen in my many years in academia. It would appear that the Bureau committed a “rush to judgment” on a massive, indeed mind-boggling, scale. In addition (much less momentous, but still very concerning), it gave its imprimatur to methodology that many believe has serious flaws.

Base-R and Tidyverse Code, Side-by-Side

I have a new short writeup, showing common R design patterns, implemented side-by-side in base-R and Tidy.

As readers of this blog know, I strongly believe that Tidy is a poor tool for teaching R learners who have no coding background. Relative to learning in a base-R environment, learners using Tidy take longer to become proficient, and once proficient, find that they are only equipped to work in a very narrow range of operations. As a result, we see a flurry of online questions from Tidy users asking “How do I do such-and-such,” when a base-R solution would be simple and straightforward.

I believe the examples here illustrate that base-R solutions tend to be simpler, and thus that base-R is a better vehicle for R learners. However, another use of this document would be as a tutorial for base-R users who want to learn Tidy, and vice versa.
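To give the flavor, here is one such side-by-side pattern (my illustration here, not necessarily one of the writeup's examples): extracting the mpg and hp columns for the 4-cylinder cars in mtcars.

# base-R
mtcars[mtcars$cyl == 4, c("mpg", "hp")]

# Tidy
library(dplyr)
mtcars %>% filter(cyl == 4) %>% select(mpg, hp)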

A New Approach to Fairness in Machine Learning

During the last year or so, I’ve been quite interested in the issue of fairness in machine learning. This area is more personal for me, as it is the confluence of several interests of mine:

  • My lifelong activity in probability theory, math stat and stat methodology (in which I include ML).
  • My lifelong activism aimed at achieving social justice.
  • My extensive service as an expert witness in litigation involving discrimination (including a landmark age discrimination case, Reid v. Google).

(Further details in my bio.) I hope I will be able to make valued contributions.

My first of two papers in the Fair ML area is now on arXiv. The second should be ready in a couple of weeks.

The present paper, with my former student Wenxi Zhang, is titled, A Novel Regularization Approach to Fair ML. It’s applicable to linear models, random forests and k-NN, and could be adapted to other ML models.

Wenxi and I have a ready-to-use R package for the method, EDFfair. It uses my qeML machine learning library. Both are on GitHub for now, but will go onto CRAN in the next few weeks.

Please try the package out on your favorite fair ML datasets. Feedback, both on the method and the software, would be greatly appreciated.

Base-R Is Alive and Well

As many readers of this blog know, I strongly believe that R learners should be taught base-R, not the tidyverse. Eventually the students may settle on using a mix of the two paradigms, but at the learning stage they will benefit from the fact that base-R is simpler and more powerful. I’ve written my thoughts in a detailed essay.

One of the most powerful tools in base-R is tapply(), a real workhorse. I give several examples in my essay in which that function is much simpler and easier to use than the tidyverse alternatives.
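For instance (a quick example of my own, not one from the essay), a single tapply() call gives mean mpg broken down jointly by cylinder count and transmission type:

tapply(mtcars$mpg, list(mtcars$cyl, mtcars$am), mean)   # returns a 3 x 2 table of means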

Yet somehow there is a disdain for tapply() among many who use and teach Tidy. To them, the function is the epitome of “what’s wrong with” base-R. The latest example of this attitude arose on Twitter a few days ago, when two Tidy supporters mocked tapply(), treating it as a highly niche function with no value in ordinary daily usage of R. They strongly disagreed with my “workhorse” claim, until I showed them that Hadley himself has 7 calls to tapply() in the code of ggplot2.

So I did a little investigation of well-known R packages by RStudio and others. The results, which I’ve added as a new section in my essay, are excerpted below.

——————————–

All the breathless claims that Tidy is more modern and clearer, while base-R is old-fashioned and unclear, fly in the face of the fact that RStudio developers, and authors of other prominent R packages, tend to write in base-R, not Tidy. All of them use at least some base-R in place of the corresponding Tidy constructs.

package      *apply() calls   mutate() calls
brms                    333                0
broom                    38               58
datapasta                31                0
forecast                 82                0
future                   71                0
ggplot2                  78                0
glmnet                   92                0
gt                      112               87
knitr                    73                0
naniar                    3               44
parsnip                  45               33
purrr                    10                0
rmarkdown                 0                0
RSQLite                  14                0
tensorflow               32                0
tidymodels                8                0
tidytext                  5                6
tsibble                   8               19
VIM                     117               19

These numbers should be striking to those who learned R via a tidyverse course. In particular, mutate() is one of the very first verbs one learns in a Tidy course, yet it is used 0 times in most of the above packages. And even the packages that do call it heavily also contain plenty of calls to the base-R *apply() functions, which Tidy is supposed to replace.

Now, why do these prominent R developers often use base-R, rather than the allegedly “modern and clearer” Tidy? Because base-R is easier.

And if it’s easier for them, it’s all the more so for R learners. In fact, an article discussed later in this essay, one that aggressively promotes Tidy, actually accuses students who use base-R instead of Tidy of taking the easy way out. Easier, indeed!

Comments on the New R OOP System, R7

Object-Oriented Programming (OOP) is more than just a programming style; it’s a philosophy. R has offered various forms of OOP, starting with S3, then (among others) S4, reference classes, and R6, and now R7. The latter has been under development by a team broadly drawn from the R community leadership: not only the “directors” of R development, the R Core Team, but also the prominent R services firm RStudio, among others.

I’ll start this report with a summary, followed by details (definition of OOP, my “safety” concerns etc.). The reader need not have an OOP background for this material; an overview will be given here (though I dare say some readers who have this background may learn something too).

This will not be a tutorial on how to use R7, nor an evaluation of its specific features. Instead, I’ll first discuss the goals of the S3 and S4 OOP systems, which R7 replaces, especially in terms of whether OOP is the best way to meet those goals. These comments then apply to R7 as well.

SUMMARY

Simply put, R7 does a very nice job of implementing something I’ve never liked very much. I do like two of the main OOP tenets, encapsulation and polymorphism, but S3 offers those and it’s good enough for me. And though I agree in principle with another point of OOP, “safety,” I fear that it often results in a net LOSS of safety. R7 does a good job of combining S3 and S4 (3 + 4 = 7), but my concerns about complexity and a net loss in safety regarding S4 remain in full.

OOP OVERVIEW

The first OOP language in wide use was C++, an extension of C that was originally called C with Classes. The first widely used language designed to be OOP “from the ground up” was Python. R’s OOP offerings have been limited.

Encapsulation:

This simply means organizing several related variables into one convenient package. R’s list structure has always done that. S3 classes then tack on a class name as an attribute.
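A minimal sketch (my example) of that idea:

acct <- list(owner = "Ann", balance = 250.00)   # several related variables, bundled into one list
class(acct) <- "account"                        # S3 "encapsulation" is just a class attribute tacked on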

Polymorphism:

The term, meaning “many forms,” refers to the fact that the same function will take different actions when applied to different kinds of objects.

For example, consider a sorting operation. We would like this function to do a numeric sort if it is applied to a vector of (real) numbers, but do an alphabetical sort on character vectors. Meanwhile, we would like to use the same function name, say ‘sort’.
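Base R’s sort() already works this way:

sort(c(3.1, -2, 7))        # numeric sort: -2.0  3.1  7.0
sort(c("pear", "apple"))   # alphabetical sort: "apple" "pear"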

S3 accomplishes this via generic functions. Even beginning R users have likely made calls to generic functions without knowing it. For instance, consider the (seemingly) ordinary plot() function. Say we call this function on a vector x; a graph of x will be displayed. But if we call lm() on some data and then call plot() on the output lmout, R will display several graphs depicting that output:

mtc <- mtcars
plot(mtc$mpg) # plots mpg[i] against i
lmout <- lm(mpg ~ .,data=mtc)
plot(lmout)  # plots several graphs, e.g. residuals

The “magic” behind this is dispatch. The R interpreter will route a nominal call to plot() to a class-specific function. In the lm() example, for instance, lm() returns an S3 object of class ‘lm’, so the call plot(lmout) will actually be passed on to another function, plot.lm().

Other well-known generics are print(), summary(), predict() and coef().
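Here is a tiny sketch (my example) of what dispatch looks like from the author’s side, a summary() method for a toy ‘account’ class:

acct <- structure(list(owner = "Ann", balance = 250), class = "account")
summary.account <- function(object, ...)    # a method for the summary() generic
   cat("account of", object$owner, "with balance", object$balance, "\n")
summary(acct)   # the call is dispatched to summary.account()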

Note that the fact that R and Python are not statically typed made polymorphism easy to implement. C++, on the other hand, is statically typed, and the programmer will likely need to resort to templates, which can be quite painful.

By the way, I always tell beginning and intermediate R users that a good way to learn about functions written by others (including in R itself) is to run the function through R’s debug() function. In our case here, they may find it instructive to run debug(plot) and then plot(lmout) to see dispatch up close.

Inheritance:

Say the domain is pets. We might have dogs named Norm, Frank and Hadley, cats named JJ, Joe, Yihui and Susan, and more anticipated in the future.

To keep track of them, we might construct a class ‘pets’, with fields for name and birthdate. But we could then also construct subclasses ‘dogs’ and ‘cats’. Each subclass would have all the fields of the top class, plus others specific to dogs or cats. We might then also construct a sub-subclass, ‘gender.’
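In S3, inheritance amounts to giving an object a vector of class names, most specific first; a minimal sketch (my example) for the pets setting:

norm <- list(name = "Norm", birthdate = as.Date("2015-06-01"))
class(norm) <- c("dogs", "pets")   # 'dogs' is a subclass of 'pets'
inherits(norm, "pets")             # TRUE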

“Safety”:

Say you have a function taking two numeric arguments, which returns TRUE if the first is less than the second:

f <- function(x,y) x < y

But you accidentally call the function with two character strings as arguments. This should produce an error, but it won’t; R will quietly do an alphabetical comparison instead.

In a mission-critical setting, this could be costly. If the app processes incoming sales orders, say, there would be downtime while restarting the app, possibly lost orders, etc.

If you are worried about this, you could add error-checking code, e.g.

f <- function(x,y) {
   if (!is.numeric(x) || !is.numeric(y))
      stop('non-numeric arguments')
   x < y
}

More sophisticated OOP systems such as S4 can catch such errors for you. There is no free lunch, though–the machinery to even set up your function becomes more complex and then you still have to tell S4 that x and y above must be numeric–but arguably S4 is cleaner-looking than having a stop() call etc.
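For concreteness, a minimal S4 sketch (my own example) of the machinery in question; declaring the signature is what buys the type check:

setGeneric("lessThan", function(x, y) standardGeneric("lessThan"))
setMethod("lessThan", signature("numeric", "numeric"),
   function(x, y) x < y)
lessThan(3, 5)        # TRUE
# lessThan("a", "b")  # error: no applicable method for character arguments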

Consider another type of calamity: As noted, S3 objects are R lists. Say one of the list elements has the name partNumber, but later, in assigning a new value to that element, you misspell it as partnumber:

myS3object$partnumber <- newPartNum  # oops: silently creates a new element; partNumber is untouched

Here we would seem to have no function within which to check for misspelling etc. Thus S4 or some other “safe” OOP system would seem to be a must–unless we create functions to read or write elements of our object. And it turns out that that is exactly what OOP people advocate anyway (e.g. even in S4 etc.), in the form of getters and setters.

In the above example, for instance, say we have a class ‘Orders’, one of whose fields is partNumber. In S3, the getter here might be named get_partNumber, and for a particular sales order thisOrder, one would fetch the part number via

get_partNumber(thisOrder)

rather than the more direct way of accessing an R list:

pn <- thisOrder$partNumber

The reader may think it’s silly to write special functions for list read and write, and many would agree. But the OOP philosophy is that we don’t touch objects directly, and instead have functions to act as intermediaries. At any rate, we could place our error-checking code in the getters and setters. (Although there still would be no way under S3 to prevent direct access.)
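A minimal sketch (my own, with hypothetical names) of such S3 getters and setters for the ‘Orders’ example, with the error checking living inside the setter:

get_partNumber <- function(order) order$partNumber
set_partNumber <- function(order, value) {
   if (!is.character(value)) stop("part number must be a character string")
   order$partNumber <- value
   order                           # caller must save the returned copy
}
thisOrder <- structure(list(partNumber = "A-100"), class = "Orders")
thisOrder <- set_partNumber(thisOrder, "B-205")
get_partNumber(thisOrder)          # "B-205"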

ANALYSIS

I use OOP rather sparingly in R: S3 in my own code, and S4, reference classes or R6 when needed for a package that I obtain from CRAN (e.g. EBImage for S4). In Python, I use OOP again for library access, e.g. threading, and to some degree just for fun, as I like Python’s class structure.

But mostly, I have never been a fan of OOP. In particular, I never have been impressed by the “safety” argument. Here’s why:

Safety vs. Complexity

Of course, OOP does not do anything to prevent code logic errors, which are far more prevalent than, say, misspellings. And, most important:

  • There is a direct relation between safety and code complexity.
  • There is a direct relation between code logic errors and code complexity.

One of my favorite R people is John Chambers, “Father of the S Language” and thus the “Grandfather of R.” In his book, Software for Data Analysis, p.335, he warns that “Defining [an S4] class is a more serious piece of programming …than in previous chapters…[even though] the number of lines is not large…” He warns that things are even more difficult for the user of a class than it was for the author in designing it, with “advance contemplation” of what problems users may encounter. And, “You may want to try several different versions [of the class] before committing to one.”

In other words, safety in terms of misspellings etc. comes at possibly major expense in logic errors. There is no avoiding this.

There Are Other Ways to Achieve Safety:

As noted above, we do have alternatives to OOP in this regard, in the form of inserting our own error-checking code. (Note too that error-checking may be important in the middle of your code, using stopifnot().) Indeed, this can be superior to using OOP, as one has much more flexibility, allowing for more sophisticated checks.

Why the Push for R7 Now?

Very few of the most prominent developers of R packages use S4 as of now. One must conclude that there is no general sense of urgency about safety, and/or that authors find safety more easily and effectively achieved through alternative means, per the above discussion.

As to encapsulation and inheritance, S3 already does a reasonably good job there. Why, then, push for R7?

The impetus seems to be a desire to modernize/professionalize R, moving closer to status as a general-purpose language. Arguably, OOP had been a weak point of R in that sense, and now R can hold its head high in the community of languages.

That’s great, but as usual I am concerned about the impact on the teaching of R to learners without prior programming experience. I’ve been a major critic of the tidyverse in that regard, as Tidy emphasizes “modern” functional programming/loop avoidance to students who barely know what a function is. Will R beginners be taught R7? That would be a big mistake, and I hope those who tend to be enthralled with The New, New Thing resist such a temptation.

Me, well as mentioned, I’m not much of an OOP fan, and don’t anticipate using R7. But the development team has done a bang-up job in creating R7, and for those who feel the need for a strong OOP paradigm, I strongly recommend it.

A Major Contribution to Learning R

Prominent statistician Frank Harrell has come out with a radically new R tutorial, rflow. The name is short for “R workflow,” but I call it “R in a box”: everything one needs to begin serious usage of R, starting from little or no background.

By serious usage I mean real applications in which the user has a substantial computational need. This could be a grad student researcher, a person who needs to write data reports for her job, or simply a person who is doing personal analysis such as stock picking.

Like other tutorials/books, rflow covers data manipulation, generation of tables and graphics, etc. But UNLIKE many others, rflow empowers the user to handle general issues as they inevitably pop up, as opposed to just teaching a few basic, largely ungeneralizable operations. I’ve criticized the tidyverse in particular for that latter problem, but really no tutorial, including my own, has this key “R in a box” quality.

The tutorial is arranged into 19 short “chapters,” beginning with R Basics, all the way through such advanced topics as Manipulating Longitudinal Data and Parallel Computing. The exciting new Quarto presentation tool by RStudio is featured, as is the data.table package, essential for practical use of large datasets.

Note carefully that this tutorial is the product of Frank’s long experience “in the trenches,” conducting intensive data analysis in biomedical applications. (This specific field of application is irrelevant; rflow is just as useful to, say, marketing analysts, as it is for medicine.) His famous monograph, Regression Modeling Strategies, is a standard reference in the field. Even I, as the author of my own regression book, often find myself checking out what Frank has to say in his book about various topics.

This point about rflow arising from Frank’s long experience dealing with real data is absolutely key, in my view. And his choice of topics, and especially their ordering, reflects that. For instance, he brings in the topic of missing data early in the tutorial.

Anyone who teaches R, or is learning R, should check out rflow.

Greatly Revised Edition of Tidyverse Skeptic

As a longtime R user and someone with a passionate interest in how people learn, I continue to be greatly concerned about the use of the Tidyverse in teaching noncoder learners of R. Accordingly, I have now thoroughly revised my Tidyverse Skeptic essay. It is greatly reorganized, with a focus on teaching R, a number of new examples, and some material on the historical context of the rise of Tidy. I continue, on the one hand, to thank RStudio for its overall contribution to the R community, while on the other hand believing that using Tidy to teach beginners is actually an obstacle to learning for that group.

I close the essay by first noting that RStudio is now a Public Benefit Corporation, thus with much broader public responsibility. I then renew a request I made to RStudio founder/CEO JJ Allaire when he met with me in 2019: “Please encourage R instructors to use a mixture of Tidy and base-R in their teaching.”

Please read the revised essay at the above link. Its Overview section is reproduced below.

  • Again, my focus here is on teaching R to those with little or no coding background. I am not discussing teaching Computer Science students.
  • Tidy was consciously designed to equip learners with just a small set of R tools. The students learn a few dplyr verbs well, but that covers much less of R than a standard beginners’ course would teach, leaving them less equipped to put R to real use than “graduates” of standard base-R courses.
  • Thus the “testimonials” in which Tidy teachers of R claim great success are misleading. The “success” is due to watering down the material (and false conflation with ggplot2). The students learn to mimic a few example patterns, but are not equipped to go further.
  • The refusal to teach ‘$’, and the de-emphasis of, or even complete lack of coverage of, R vectors, is a major handicap for Tidy “graduates” in making use of most of R’s statistical functions and packages.
  • Tidy is too abstract for beginners, due to the philosophy of functional programming (FP). The latter is popular with many sophisticated computer scientists, but is difficult even for computer science students. Tidy is thus unsuited as the initial basis of instruction for nonprogrammer students of R. FP should be limited and brought in gradually. The same statement applies to base-R’s own FP functions.
  • The FP philosophy replaces straightforward loops with abstract use of functions. Since functions are the most difficult aspect for noncoder R learners, FP is clearly not the right path for such learners. Indeed, even many Tidy advocates concede that it is in various senses often more difficult to write Tidy code than base-R. Hadley says, for instance, “it may take a while to wrap your head around [FP].”
  • A major problem with Tidy for R beginners is cognitive overload: The basic operations contain myriad variants. Though of course one need not learn them all, one needs some variants even for simple operations, e.g. pipes on functions of more than one argument (see the short sketch following this list).
  • The obsession among many Tidyers that one must avoid writing loops, the ‘$’ operator, brackets and so on often results in obfuscated code. Once one goes beyond the simple mutate/select/filter/summarize level, Tidy programming can be of low readability.
  • Tidy advocates also concede that debugging Tidy code is difficult, especially in the case of pipes. Yet noncoder learners are the ones who make the most mistakes, so it makes no sense to have them use a coding style that makes it difficult to track down their errors.
  • Note once again, that in discussing teaching, I am taking the target audience here to be nonprogrammers who wish to use R for data analysis. Eventually, they may wish to make use of FP, but at the crucial beginning stage, keep it simple, little or no fancy stuff.
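To illustrate the bullet above about pipes and functions of more than one argument (my example, not one from the essay): with magrittr’s %>%, the piped value goes into the first argument unless one brings in the dot placeholder.

library(magrittr)
c(3, 1, 2) %>% sort()       # fine: piped value becomes the first argument
10 %>% seq(1, ., by = 2)    # dot needed to land in the second argument; result is 1 3 5 7 9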

Issues in Differential Privacy

Differential privacy (DP) is an approach to maintaining the privacy of individual records in a database, while still allowing  statistical analysis. It is now perceived as the go-to method in the data privacy area, enjoying adoption by the US Census Bureau and several major firms in industry, as well as a highly visible media presence. DP has developed a vast research literature. On the other hand, it is also the subject of controversy, and now, of lawsuits.

Some preparatory remarks:

I’ve been doing research in the data privacy area off and on for many years, e.g. IEEE Symposium on Security and Privacy; ACM Trans. on Database Systems; several book chapters; and current work in progress, arXiv. I was an appointed member of the IFIP Working Group 11.3 on Database Security in the 1990s. The ACM TODS paper was funded in part by the Census Bureau.

Notation: the database consists of n records on p variables.

The R package diffpriv makes standard DP methods easy to implement, and is recommended for any reader who wishes to investigate these issues further.

Here is a classical example of the privacy issue. Say we have an employee database, and an intruder knows that there is just one female electrical engineer in the firm. The intruder submits a query on the mean salary of all female electrical engineers, and thus illicitly obtains her salary.

What is DP?

Technically DP is just a criterion, not a method, but the term generally is taken to mean methods whose derivation is motivated by that criterion.

DP is actually based on a very old and widely used approach to data privacy, random noise perturbation. It’s quite simple: say we have a database that includes a salary variable, which is considered confidential. We add random, mean-0 noise to hide a person’s real salary from intruders. DP differs from other noise-based methods, though, in that it claims a quantitative measure of privacy.

Why add noise?

The motivation is that, since the added noise has mean 0, researchers doing legitimate statistical analysis can still do their work. They work with averages, and the average salary in the database in noisy form should be pretty close to that of the original data, with the noise mostly “canceling out.” (We will see below, though, that this view is overly simple, whether in DP or classical noise-based methods.)

In DP methods, the noise is typically added to the final statistic, e.g. to a mean of interest, rather than directly to the variables.

One issue is whether to add a different noise value each time a query arrives at the data server, vs. adding noise just once and then making that perturbed data open to public use. DP methods tend to do the former, while classically the latter approach is used.

Utility:

A related problem is that for most DP settings, a separate DP version needs to be developed for every statistical method. If a user wants, say, to perform quantile regression, she must check whether a DP version has been developed and code made available for it. With classical privacy methods, once the dataset has been perturbed, users can apply any statistical method they wish. I liken it to an amusement park. Classical methods give one a “day pass” which allows one to enjoy any ride; DP requires a separate ticket for each ride.

Privacy/Accuracy Tradeoffs:

With any data privacy method, DP or classical, there is no perfect solution. One can only choose a “dial setting” in a range of tradeoffs. The latter come in two main types:

  • There is a tradeoff between protecting individual privacy on the one hand, and preserving the statistical accuracy for researchers. The larger the variance of added noise, the greater the privacy but the larger the standard errors in statistical quantities computed from the perturbed data.
  • Equally important, though rarely mentioned, there is the problem of attenuation of relationships between the variables. This is the core of most types of data analysis, finding and quantifying relationships; yet the more noise we add to the data, the weaker the reported relationships will be. This problem arises in classical noise addition, and occurs in some DP methods, such as ones that add noise to counts in contingency tables. So here we have not only a variance problem but also a bias problem; the absolute values of correlations, regression coefficients and so on are biased downward. A partial solution is to set the noise correlation structure equal to that of the data, but that doesn’t apply to categorical variables (where the noise addition approach doesn’t make much sense anyway).
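A tiny simulation (my illustration) of the attenuation point: adding independent mean-0 noise to two correlated variables pulls the observed correlation toward 0.

set.seed(1)
x <- rnorm(10000)
y <- x + rnorm(10000)       # cor(x,y) is about 0.7
xn <- x + rnorm(10000)      # perturb both variables with mean-0 noise
yn <- y + rnorm(10000)
cor(x, y); cor(xn, yn)      # the second value is noticeably closer to 0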

Other classical statistical disclosure control  methods:

Two other major data privacy methods should be mentioned here.

  1.  Cell suppression: Any query whose conditions are satisfied by just one record in the database is disallowed. In the example of the female EE above, for instance, that intruder’s query simply would not be answered. One problem with this approach is that it is vulnerable to set-differencing attacks. The intruder could query the total salaries of all EEs, then query the male EEs, and then subtract to illicitly obtain the female EE’s salary. Elaborate methods have been developed to counter such attacks.
  2. Data swapping: For a certain subset of the data — either randomly chosen, or chosen according to a record’s vulnerability to attack — some of the data for one record is swapped with that of a similar record. In the female EE example, we might swap occupation or salaries, say.

Note that neither of these methods avoids the problem of privacy/accuracy tradeoffs. In cell suppression, the more suppression we impose, the greater the problems of variance and bias in stat analyses. Data swapping essentially adds noise, again causing variance and bias.

The DP privacy criterion:

Since DP adds random noise, the DP criterion is couched in probabilistic terms. Consider two datasets, D and D’, with the same variables and the same number of records n, but differing in 1 record. Consider a given query Q. Denote the responses by Q(D) and Q(D’). Then the DP criterion is, for any set S in the image of Q,

P(Q(D) in S) ≤ exp(ε) P(Q(D’) in S)

for all possible (D,D’) pairs and for a small tuning parameter ε. The smaller ε, the greater the privacy.

Note that the definition involves all possible (D,D’) pairs; D here is NOT just the real database at hand (though there is a concept of local sensitivity in which D is indeed our actual database). On the other hand, in processing a query, we ARE using the database at hand, and we compute the noise level for the query based on n for this D.

DP-compliant methods have been developed for various statistical quantities, producing formulas for the noise variance as a function of ε and an anticipated upper bound on Δ = |Q(D) – Q(D’)|.  Again, that upper bound must apply to all possible (D,D’) pairs. For human height, say, we know that no one will have a height of, say, 300 cm, so we could take that as a bound on any individual height. If our query Q() is the mean height, we divide that bound by n to get Δ; it’s a rather sloppy bound, but it would work.
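As a concrete sketch (my illustration, not code from any DP library), here is the Laplace mechanism for the mean-height query just described, using that crude 300 cm bound:

rlaplace <- function(n, scale)    # base R has no Laplace generator; use a difference of exponentials
   rexp(n, rate = 1/scale) - rexp(n, rate = 1/scale)

dpMeanHeight <- function(heights, eps, maxHt = 300) {
   delta <- maxHt / length(heights)               # bound on |Q(D) - Q(D')| for a mean query
   mean(heights) + rlaplace(1, scale = delta/eps)
}

set.seed(1)
dpMeanHeight(rnorm(1000, mean = 170, sd = 10), eps = 0.5)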

Problems with DP’s claims of quantifiable guaranteed privacy:

(Many flaws have been claimed for DP, but to my knowledge, this analysis is new.)

Again consider the female EE example. One problem that arises right away is that, since this is a conditional mean, Q(D) and/or Q(D’) will often be undefined.

It’s not clear that existing implementations of DP are prepared to handle this. Consider the diffpriv package, for example; it appears to do nothing to deal with the problem. And since the Google Differential Privacy Library works through SQL, we run into trouble:

If database access is via SQL, a query whose conditioning set is empty for some D or D’ would simply return NULL. Since such queries are central to data analysis, this would be a serious problem.

It’s not clear that the Census Bureau’s TopDown algorithm handles the problem either. They treat the data as a giant contingency table,  adding noise to each cell. All queries are then processed as functions of the cell counts, with no further noise added.

A major issue appears to be that many cells with non-0 counts in the original data will become 0s in the noisy version. The added random noise will produce negative counts in many cases, and though the Bureau’s procedure raises those to at least 0, many will end up at exactly 0. One then has the “0 denominator” issue described above.

Another issue is queries for totals, say the total salary of all female workers. The noise that would be added, say based on maximum salary, would be the same regardless of whether there are 10 women in the firm or 1000. While DP would give mathematically correct noise here, having the noise be the same regardless of the overall size of the total seems unwarranted.

Bias problems:

As noted, DP methods that work on contingency tables by adding independent noise values to cell counts can attenuate correlation and thus produce bias. The bias will be substantial for small tables.

Another issue, also in DP contingency table settings, is that bias may occur from post-processing. If Laplacian noise is added to counts, some counts may be negative. As shown in Zhu et al, post-processing to achieve nonnegativity can result in bias.

The US Census Bureau’s adoption of DP:

The Census Bureau’s DP methodology replaces the swapping-based approach used in past census reports. Though my goal in this essay has been mainly to discuss DP in general, I will make a few comments.

First, what does the Bureau intend to do? They will take a 2-phase approach. They view the database as one extremely large contingency table (“histogram,” in their terminology). Then they add noise to the cell counts. Next, they modify the cell counts to satisfy nonnegativity and certain other constraints, e.g. taking the total number of residents in a census block to be invariant. The final perturbed histogram is released to the public.

Why are they doing this? The Bureau’s simulations indicate that, with very large computing resources and possibly external data, an intruder could reconstruct much of the original, unperturbed data.

Criticisms of the Census Bureau’s DP Plan:

In addition to the general concerns about DP, there are also ones specific to the Bureau’s DP methods.

The Bureau concedes that the product is synthetic data. Well, isn’t any perturbed data synthetic? Yes, but here ALL of the data is perturbed, as opposed to swapping, where only a small fraction of the data changes.

Needless to say, then, use of synthetic data has many researchers up in arms. They don’t trust it, and have offered examples of undesirable outcomes, substantial distortions that could badly affect research work in business, industry and science. There is also concern that there will be serious impacts on next year’s congressional redistricting, which relies strongly on census data, though one analysis is more optimistic.

There has already been one lawsuit against the Bureau’s use of DP. Expect a flurry more, after the Bureau releases its data — and after redistricting is done based on that data. So it once again boils down to the privacy/accuracy tradeoff. Critics say the Bureau’s reconstruction scenarios are unlikely and overblown. Again, add to that the illusory nature of DP’s privacy guarantees, and the problem gets even worse.

Final comments:

How did we get here? DP has some serious flaws. Yet it has largely become entrenched in the data privacy field. In addition to being chosen as the basis of the census data, it is used somewhat in industry. Apple, for instance, uses classic noise addition, applied to raw data, but with a DP privacy budget.

As noted, early DP development was done mainly by CS researchers. CS people view the world in terms of algorithms, so, for example, they feel very comfortable investigating the data reconstruction problem and applying mathematical optimization techniques. But CS people tend to have poor insight into the problems that users of statistical databases pursue in their day-to-day data analysis activities.

Some mathematical statisticians entered the picture later, but by then DP had acquired great momentum, and some intriguing theoretical problems had arisen for the math stat people to work on. Major practical issues, such as that of conditional quantities, were overlooked.

In any case, the major players in DP continue to be in CS. For example, all 10 of the signatories in a document defending the Census Bureau DP plan are in CS.

In other words, in my view, the statistician input into the development of DP came too little, too late. Also, to be frank, the DP community has not always been open to criticism, such as the skeptical material in Bambauer, Muralidhar and Sarathy.

Statistical disclosure control is arguably one of the most important data science issues we are facing today.  Bambauer et al in the above link sum up the situation quite well:

The legal community has been misled into thinking that differential privacy can offer the benefits of data research without sacrificing privacy. In fact, differential privacy will usually produce either very wrong research results or very useless privacy protections. Policymakers and data stewards will have to rely on a mix of approaches: perhaps differential privacy where it is well-suited to the task, and other disclosure prevention techniques in the great majority of situations where it isn’t.

A careful reassessment of the situation is urgently needed.

 
