Several years ago political activist Ron Unz wrote a lengthy essay suggesting that elite universities have held Asian applicants to much higher academic standards than other groups, particularly Jews. He cites the disproportionately Jewish composition of top universities, and uses a questionable analysis of PSAT records to conclude that elite universities are part of a vast Jewish conspiracy (not that you’ve ever heard this story before…):
Taken in combination, these trends all provide powerful evidence that over the last decade or more there has been a dramatic collapse in Jewish academic achievement, at least at the high end.
Unz reaches this conclusion by analyzing lists of high performers that appear to be overly Asian, at least by last name. This isn’t necessarily false – though the method carries significant bias, since one can identify a name as Asian with near certainty but has a relatively harder time achieving the same precision with Jewish names – but it certainly isn’t a good way to measure bias.
Despite widespread criticism, this article has been favorably featured in a number of top blogs and publications, including the New York Times. The point of this post is to conclusively add to the already-large repository of evidence that Ron Unz is a liar.
If the claim is that an admissions committee is biased towards Jews and therefore holds Asians to a relatively higher standard, it necessarily follows that admitted Jews should, on average, underperform the general university population. The honest way to test this hypothesis would be to check the extent to which ethnicity is associated with academic performance.
Maybe Unz and others were too lazy to compile the necessary data but, given what it reveals, it is likelier that they wanted to hide it. While universities don’t explicitly publish the academic performance of accepted students by race, let alone ethnicity, this information can be inferred from public data: a number of universities publish PDFs of their commencement programs, which usually contain the names of all graduating seniors, their department of study, and the important academic honors they’ve received (Latin honors and prestigious fellowships, among others).
The broadly similar structure of these documents (for example, see the University of Pennsylvania or Princeton) makes it possible to parse the PDF files for the underlying information (it’s a somewhat messy task – send me an email if you want the code). Since these files are available from 2007 or so, with various years omitted for various universities, it is possible to generate matched data of name, graduation year, major, school, and any academic awards received.
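For a flavor of what the parsing step looks like, here is a minimal sketch. The line format below is a made-up stand-in – every school formats its program differently, so in practice each one needs its own small parser run over the text extracted from the PDF (via pdftotext or similar):

```python
import re

# Hypothetical line format for illustration only; real commencement
# programs differ by school and year and need bespoke parsing.
SAMPLE = """\
Jane Q. Doe, Economics, summa cum laude, Phi Beta Kappa
John Smith, Mechanical Engineering
Alice Chen, Mathematics, magna cum laude
"""

# name, then major, then an optional comma-separated list of honors
LINE = re.compile(r"^(?P<name>[^,]+), (?P<major>[^,]+)(?:, (?P<honors>.+))?$")

def parse_program(text):
    """Parse one graduate per line into name / major / honors records."""
    records = []
    for line in text.strip().splitlines():
        m = LINE.match(line)
        if m:
            honors = m.group("honors")
            records.append({
                "name": m.group("name"),
                "major": m.group("major"),
                "honors": honors.split(", ") if honors else [],
            })
    return records

records = parse_program(SAMPLE)
```

From records like these, joined across years, one gets the matched name / major / honors data described above.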
It’s neither difficult to infer gender from a first name nor much harder to infer ethnicity from a last name. It is easy to identify Asian ethnicity by referencing a sufficiently large list of common last names (from the Census and other sources). The false positive rate here is extremely low (that is, Varun Agarwal is extremely unlikely not to be Indian).
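The Asian-surname lookup is essentially a set membership test. A sketch – the list below is a tiny illustrative stand-in for the actual Census-derived surname tables:

```python
# Tiny illustrative stand-in for a real surname table; the Census Bureau
# publishes surname frequencies by self-reported race, which makes this
# lookup nearly deterministic for common Asian names.
ASIAN_SURNAMES = {"agarwal", "chen", "nguyen", "patel", "wang", "kim"}

def likely_asian(full_name):
    """Flag a name whose last token appears on the high-confidence list."""
    last = full_name.split()[-1].lower()
    return last in ASIAN_SURNAMES

likely_asian("Varun Agarwal")  # True
likely_asian("Jane Doe")       # False
```

An exact-match lookup suffices here precisely because the false positive rate for common Asian surnames is so low.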
Identifying Ashkenazic names is harder, as there is overlap with European, especially German, surnames. Additionally, Jewish-Americans may have adopted more generic names, which makes confident identification challenging. However, for any given name, using an extremely large list of common Ashkenazic last names and summing the scores of the top 3 matches from a Levenshtein-distance fuzzy matching algorithm, it is possible to get some sense of the probability that a name is Ashkenazic.
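Here is a rough sketch of that scoring scheme. The reference list is a tiny placeholder for the actual one, and the normalization of edit distance into a 0–100 similarity is just one reasonable choice:

```python
def levenshtein(a, b):
    """Classic dynamic-programming edit distance, two-row variant."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

# Illustrative reference list; the real analysis uses a far larger one.
ASHKENAZI_NAMES = ["cohen", "levine", "goldberg", "schwartz", "katz", "shapiro"]

def ashkenazi_score(surname, refs=ASHKENAZI_NAMES):
    """Sum the similarity (0-100 each) of the top 3 fuzzy matches."""
    s = surname.lower()
    sims = sorted(
        (100.0 * (1 - levenshtein(s, r) / max(len(s), len(r))) for r in refs),
        reverse=True,
    )
    return sum(sims[:3])
```

An exact match contributes 100, so a name like Cohen scores well above one like Nguyen; sweeping a cutoff on this score gives the thresholds discussed below.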
Since I don’t have verified data on the ethnicity of graduates by name, it’s difficult to test or train this strategy. That said, this doesn’t really disturb the results. For one, the error is probably random – i.e. associations between ethnicity and achievement, if any, are unlikely to depend on the confidence with which an algorithm can determine ethnicity. Moreover, to the extent the coefficients are attenuated, the bias only works in favor of Unz’s claim – i.e. the odds appear lower than they should by a factor proportional to the share of classification error in the total error. Furthermore, even if there is a strong association one way or the other simply between Jewish-sounding names and odds of success, that remains an interesting finding in and of itself.
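To see why random misclassification can only pull the estimate toward zero, here is a quick simulation (with made-up group sizes and honors rates) of the odds ratio computed from a 2x2 table when group labels are flipped at random:

```python
import random

random.seed(0)

def simulate_odds_ratio(n=200_000, p_group=0.15, p_honor_in=0.20,
                        p_honor_out=0.10, misclass=0.0):
    """Odds ratio of honors for (possibly misclassified) group membership."""
    counts = [[0, 0], [0, 0]]  # counts[observed_group][got_honors]
    for _ in range(n):
        true_group = random.random() < p_group
        honor = random.random() < (p_honor_in if true_group else p_honor_out)
        observed = true_group ^ (random.random() < misclass)  # label noise
        counts[observed][honor] += 1
    (a0, b0), (a1, b1) = counts
    return (b1 / a1) / (b0 / a0)

# True odds ratio here is (0.20/0.80)/(0.10/0.90) = 2.25.
clean = simulate_odds_ratio(misclass=0.0)
noisy = simulate_odds_ratio(misclass=0.30)  # attenuated toward 1, not inflated
```

With no label noise the estimate sits near the true 2.25; with 30% random flips it collapses toward 1 – which is exactly the direction that favors Unz’s claim, not mine.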
For a flavor of the classification strategy, the table below charts 10 randomly sampled names at various thresholds of an “Ashkenazi score”. Clearly there is a link between strictness of threshold and what one might consider to be typical Jewish-American last names.
Next, to estimate the order of the attenuation bias, one can compare the estimated coefficient on Jewish odds for each threshold:
When including scores above 100 (which covers pretty much every non-Asian name, and some Asian names as well) the variable has little explanatory power, if any – the measurement error dwarfs the residual error because the indicator fires for pretty much every name. Clearly, as we increase the threshold above which we admit a name as Jewish, the coefficient increases. This doesn’t imply anything about those with more obviously Jewish surnames so much as it gives an order-of-magnitude estimate of the attenuation bias, which is obviously significant. Of course, if it is true that Jewish students have higher academic performance, part of the increase also comes from the increasing proportion of the included group that is actually Jewish.
To maintain comparability over time by school, Phi Beta Kappa and equivalent honor societies are used as proxies for underlying academic achievement, exactly the dimension along which Ron Unz argues that Jews underperform. Phi Beta Kappa (and equivalents for engineering and business majors) is composed of students roughly in the top 10% of their graduating classes by GPA, with additional input from faculty recommendations.
It is not inconceivable that there is bias ingrained in the election process. That said, it is unlikely that any such bias is somehow systematically tilted against Asians and in favor of Jews, a concern further moderated by the fact that GPA is still the predominant criterion. Here are the results of a regression on almost every Penn graduate over the past decade:
The important point is that, controlling for graduation year, gender, school, and dual-degree status, the odds of being in an academic honor society increase by about 1.5x for those very likely to have Jewish last names. Achievement by major offers a simple reality check – it is, as expected, relatively harder to maintain a high GPA in a Wharton + engineering dual-degree program. I have been able to compile similar data for Brown and Princeton, where I notice similar trends.
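The regression handles the covariates jointly, but the logic of "controlling for school and year" can be sketched with the simpler stratified analogue, a Mantel–Haenszel pooled odds ratio. The counts below are entirely hypothetical:

```python
def mantel_haenszel_or(strata):
    """
    Pooled odds ratio across strata (e.g. school x graduation year).
    Each stratum is (a, b, c, d):
      a = group members with honors,  b = group members without,
      c = everyone else with honors,  d = everyone else without.
    """
    num = den = 0.0
    for a, b, c, d in strata:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Hypothetical counts for two strata (say, two schools); within-stratum
# odds ratios are ~2.0 and ~1.6, so the pooled estimate lands between.
strata = [
    (30, 170, 80, 920),   # school 1
    (15, 85, 60, 540),    # school 2
]
pooled = mantel_haenszel_or(strata)
```

Pooling within strata this way never mixes honors rates across schools with different baselines, which is the whole point of the controls in the regression.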
The point of this data – to be clear – is not to pass judgement on the academic achievement of various ethnic groups, and therefore not to make a claim about how admissions offices should operate. Other factors may be much more important. However, this data is a useful antidote to anti-scientific crusaders who malign large groups of people without realizing that the underlying statistics on these questions are inherently complicated and do not yield easy interpretation.
One can say with little qualification that the set of vapid claims Ron Unz makes, which the New York Times editorial page approvingly cites, is predicated on bogus data. Not only does he fail to make any effort to identify Jewishness beyond non-fuzzy matching against top-100 lists, he also fails to use data on actual achievement at school in favor of questionable data from PSAT scores. The above analysis is much more conceptually sound, given the more careful identification of ethnicity and a superior measure of achievement that requires no assumption about the admissions officer’s preference function.
Unz and his co-conspirators have not only promoted sloppy data and a sloppy analysis, but have also completely disregarded any scholarly standards whatsoever before committing to a negative story about an entire category of students – which would be a cautionary tale if it were not so laughable in the face of actual results.
Be careful when citing this post. You might notice that while Jews perform extremely well relative to baseline in every school where the variable is significant at all, the coefficient for engineers is low (even though math and physics would be included in the first category, where performance is fine). Ron Unz has an explanation for that:
We should also remember that Jewish intellectual performance tends to be quite skewed, being exceptionally strong in the verbal subcomponent, much lower in math, and completely mediocre in visuospatial ability; thus, a completely verbal-oriented test such as Wordsum would actually tend to exaggerate Jewish IQ.
Albert Einstein and Richard Feynman – those literary geniuses. He reaches this conclusion, of course, based on some colorless tale of orthodox Jewish reproduction patterns in urbanized environments (or something like that – I didn’t actually bother to figure out what he meant). Of course, Unz is probably banking on the hope that Jews are too mathematically illiterate to see through his lies.