Yglesias makes an innumerate (non-)funny

Yesterday, Matt Yglesias tried to point out the absurdity of a Joe Klein column by turning the tables with the numbers Klein invoked, thereby mimicking and mocking Klein’s argument.

Yglesias quotes Klein:

Blacks represent 13% of the population but commit 50% of the murders; 90% of black victims are murdered by other blacks. The facts suggest that history is not enough to explain this social disaster.

Yglesias then links to the relevant FBI data and points out that:

Back in 2011, the most recent year for which data is available, a staggering 83 percent of white murder victims were killed by fellow Caucasians.

He uses this to (jokingly) make the exaggerated case that white-on-white murder is out of control, that white violence is a social disaster, and so on.

Now, I understand what he’s trying to do here, but, either because of innumeracy or willful negligence, his whole jokey approach fails miserably, at least as long as the reader isn’t innumerate.

Note which parts of the Klein quote above have analogs in the Yglesias quote and which parts do not. The key to Klein’s argument is that “Blacks represent 13% of the population but commit 50% of the murders,” but Yglesias only invokes the white analog to Klein’s “90% of black victims are murdered by other blacks.”

Yglesias’ essay depends on the reader not noticing that the relevant analog to the important part of Klein’s argument is that whites represent 72% of the population and commit (approximately) 50% of the murders (based on the FBI data linked above).

To be clear, this is not intended as a defense of (or an argument against) Klein. It’s just a response to Yglesias’ shiftiness (or ignorance) and his willingness to exploit readers’ possible innumeracy to make his point. It’s an unfunny failure of an argument.

Posted in statistical description | Comments Off

Truly random

In today’s new What If?, Randall Munroe discusses, in part, randomly aiming a laser at the sky, writing “if you aimed in a truly random direction, you would have an almost 50% chance of hitting the Earth.”

This is one of those cases wherein my training in linguistics and my training in statistics conflict with one another. Kind of like how people use “a fraction of” to indicate a small fraction of something, despite the fact that, for example, 999,999/1,000,000 is a fraction, too. As is, for that matter, 10/3. So, saying that this year’s budget deficit is a fraction of last year’s isn’t, technically speaking, informative. It certainly does not imply that this year’s deficit is a small proportion of last year’s, though this is pretty much always what people mean when they say things like this.

So, my linguistics training lets me understand that the phrase is used to mean “a small fraction,” and that’s fine. But my statistics (and assorted math) training just can’t let it go.

I have similar issues with using a phrase like “truly random” to mean “uniformly distributed.” Clearly, Munroe means the latter, but a laser could be pointed randomly as if it were governed by a Gaussian distribution, which, with sufficiently small variance(s) and an appropriate mean (vector), would produce a very small chance of hitting the Earth.
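
For what it’s worth, here’s a quick simulation sketch of the difference (plain numpy, with a made-up 0.05-radian spread for the Gaussian aim, and treating “hits the Earth” as “points below the horizontal”):

import numpy as np

np.random.seed(1)
n = 1000000

# "Truly random" as uniform over the sphere: normalize standard normal 3-vectors.
v = np.random.normal(size=(n, 3))
v = v / np.sqrt(np.sum(v**2, axis=1))[:, None]
print(np.mean(v[:, 2] < 0))  # ~0.5: about half the beams point downward

# Random, but Gaussian: aim at the zenith with small Gaussian jitter (in radians).
jitter = np.random.normal(scale=0.05, size=(n, 2))
tilt = np.hypot(jitter[:, 0], jitter[:, 1])  # angle away from straight up
print(np.mean(tilt > np.pi / 2))  # ~0: essentially never hits the ground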

Again, my linguistics tells me that it’s fine to use “truly random” to mean “uniformly distributed” in regular old language, but my statistics training just can’t let this kind of thing go.

It’s particularly hard to digest when it comes from a mathematically sophisticated writer in a semi-technical setting.

Posted in language, mildly informative filler | Comments Off

What counts as an upset?

FiveThirtyEight has been discussing the World Cup and their predictions based on what I can only assume is a fancy-pants Bayesian statistical model (done in Excel, natch).

A few days ago, Nate Silver his own self wrote a post on this topic. In trying to give it a counter-intuitive hook, the post got a little too cute for its own good. The headline sums up what’s wrong, but I understand that headlines are often written separately from the article proper, so we might forgive statistical silliness up top. But the same statistical silliness makes an appearance in the body. Silver writes:

For instance, the probability of correctly identifying the winners in each of the first four knockout matches — Brazil over Chile, Colombia over Uruguay, the Netherlands over Mexico and Costa Rica over Greece — was about 23 percent, or one chance in 4.3. And the chance of going 12 for 12, as the FiveThirtyEight favorites have done so far, is just one in 75.

It’s an upset, in other words, when all the favorites prevail. On average, we’d have expected three or four upsets through this point in the knockout round.

How interesting! They’re favorites, but it’s still, somehow, a (huge) upset when they prevail! What gives?

What gives is that Silver is comparing a single outcome – all favorites prevail – to all of the other possible outcomes combined. With 12 games (8 in the first round, 4 in the second), there are 2^{12} = 4096 possible outcomes.

(As often happens to me with counting problems like this, I came up with that answer fairly quickly, then immediately doubted the assumptions that led me to this answer were correct. Specifically, I was worried that I was somehow counting second round outcomes that were ruled out by first round outcomes (e.g., Brazil winning in the second round after losing in the first round). I’m pretty sure 2^{12} is right, though, and we can spell out a slightly simplified version to see why.

Suppose the semi-finals feature, say, Germany vs Brazil in one game and Argentina vs Holland in the other. Call this ’round one’ and call the final ’round two.’ Using my logic, there should be 2^3 = 8 possible outcomes in these three games, and indeed there are. Listing all possible combinations of winners: (1) Brazil, Argentina, Brazil; (2) Brazil, Argentina, Argentina; (3) Brazil, Holland, Brazil; (4) Brazil, Holland, Holland; (5) Germany, Argentina, Germany; (6) Germany, Argentina, Argentina; (7) Germany, Holland, Germany; (8) Germany, Holland, Holland.

We could, in theory, list out all the possibilities for the rounds with 8 and 4 games – the real rounds one and two – to get the number of possible outcomes stated above, 2^{8+4} = 2^{12}.)
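
Just to double-check that count by brute force, here’s the enumeration for the three-game example (same hypothetical teams as above):

from itertools import product

semis = [('Brazil', 'Germany'), ('Argentina', 'Holland')]

outcomes = []
for w1, w2 in product(*semis):   # winners of the two semi-finals
    for champ in (w1, w2):       # the final's winner must be one of those two
        outcomes.append((w1, w2, champ))

print(len(outcomes))  # 8, i.e., 2**3, with no inconsistent brackets counted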

Okay, so, when Silver says that the teams that have won so far were all favorites, he’s saying that the single outcome we’ve observed is the single most probable outcome. If it weren’t the most probable single outcome, then at least one team that won wouldn’t have been a favorite.

On the other hand, when he says that it’s an upset that all the favorites have won, he’s saying that this single most probable outcome is less probable than the sum of the probabilities of all of the other possible outcomes.

Given that there are 4095 other possible outcomes, the favorites would have to have been favored to an absurd degree for the sum total of all other outcomes to be less probable.
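
To put a rough number on “absurd degree” (under the simplifying assumption that every favorite wins with the same probability p):

p_needed = 0.5 ** (1.0 / 12)          # ~0.944: what it would take for p**12 > 0.5
p_implied = (1.0 / 75) ** (1.0 / 12)  # ~0.70: roughly what Silver's "1 in 75" implies
print(p_needed, p_implied)

That is, each favorite would have had to be roughly a 94 percent favorite, on average, for “all favorites win” to be a better-than-even-money bet, whereas the actual favorites were more like 70 percent favorites on average.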

The point being that it’s not particularly interesting to compare the single outcome of all favorites winning to every possible other outcome. It gets you a catchy headline, I guess, but it doesn’t provide any useful insight into how the tournament is playing out.

Posted in SCIENCE!, statistical description | Comments Off

Partial progress plotting PyMC properties

As noted yesterday, I figured out how to sample PyMC chains in parallel recently. I’ve also been working on some plotting functions to assess chain convergence and autocorrelation, goodness of model fit, and to illustrate a fitted (multilevel GRT) model.

In assessing chain convergence, you can calculate the Gelman-Rubin statistic \hat{R}, which is, in essence, an F-like statistic providing an ANOVA-like test of any overall differences between the values in your chains. PyMC has a number of useful diagnostic tools built in, including a gelman_rubin method. It also has methods for calculating and plotting autocorrelation functions and histograms of chains, but for some reason, with the model I’m dealing with, the out-of-the-box plotting functions aren’t working well for me. So, I wrote my own, and for each parameter, it produces a figure like this (which illustrates \hat{R}, autocorrelation (using the acf function from statsmodels’ time series analysis tools), and each of the three chains’ distributions of values for a parameter called mu_mx):

plot_chains_example

I’m pretty happy with it, even if there are still some kinks to be worked out (e.g., the x-axis label for the top left panel is clipped by the bottom left panel; for some parameters, the tick labels under the histogram overlap).
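
For the curious, here’s a stripped-down sketch of the ingredients (not the actual function, and laid out differently than the figure above), with \hat{R} computed directly from the between- and within-chain variances, autocorrelation from the statsmodels acf, and a histogram for each chain:

import numpy as np
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import acf

def plot_chains(chains, name):
    # chains: list of two or more 1-D trace arrays for a single parameter
    chains = [np.asarray(c) for c in chains]
    n, m = len(chains[0]), len(chains)

    # Gelman-Rubin R-hat from the usual between/within chain variance decomposition
    means = np.array([c.mean() for c in chains])
    W = np.mean([c.var(ddof=1) for c in chains])
    B = n * means.var(ddof=1)
    rhat = np.sqrt(((n - 1.0) / n * W + B / n) / W)

    fig, axes = plt.subplots(2, m, figsize=(3 * m, 5))
    for j, c in enumerate(chains):
        axes[0, j].plot(acf(c, nlags=50))  # autocorrelation for chain j
        axes[1, j].hist(c, bins=30)        # marginal distribution for chain j
    fig.suptitle('%s, R-hat = %.3f' % (name, rhat))
    return fig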

I also wrote a function to plot observed and predicted (identification-confusion) response probabilities for the model I’m working with:

predobs_example

The vertical error bars indicate 95% highest density intervals, with the circles indicating the mean posterior value (on the y-axis) and the observed value (on the x-axis). The closer the symbols to the diagonal line, the better the fit.

Finally, I wrote a function to illustrate the fitted model for each individual subject (the small panels) and the group-level model (the big panel). These two figures are for models fit to the non-speech auditory perception data presented in my 2009 paper with Lentz and Townsend (see my CV for more details, if you’re so inclined). This one illustrates the multilevel GRT model fit to the frequency-by-duration, broadband noise stimuli:

fxd_ml_fit

And this one illustrates the model fit to the pitch-by-timbre (F0-by-spectral-prominence-location) data:

pxt_ml_fit

Again, there are probably some ways these could be improved, but overall I’m quite happy with how they look. I’m using my recently-acquired ellipse-plotting knowledge for its intended purpose (for the purpose I intended for it, anyway), which is nice, and it’s very easy to make this kind of multi-panel plot with matplotlib, which is also nice.

It feels good to have this model functioning in a way that others could, in principle, use. The two papers I’ve published using this model presented analyses done with WinBUGS, and the only way I could figure out how to get WinBUGS to fit this model was by feeding it a giant array of pre-calculated bivariate normal CDF values and using trilinear interpolation. The model works, but it’s ungainly. I don’t think I would want anyone else to mess with it, and I don’t imagine others would particularly want to do so. This isn’t a recipe for reproducible research.

I never could figure out how to get this model to work in JAGS, which is a superior, cross-platform version of BUGS. JAGS would just hang when I tried to feed it the giant CDF array (BUGS would always seem like it was hanging, but if I went away for long enough, it would have a model fit for me – no such luck with JAGS).

I haven’t tried very hard (at all, really) to get this model working with Stan, though I did request multivariate normal CDFs as a feature (it’s on the to-do list, so maybe I’ll come back to work on this model in Stan later on). Given that it’s functioning in PyMC, I don’t feel much need to get it working in Stan right now.

Posted in Python, statistical graphics, statistical modeling | Comments Off

Parallel processing of PyMC models in IPython

I’ve been learning how to use PyMC to fit Bayesian models over the last few months, and recently I decided I wanted to figure out how to sample separate chains in parallel on different cores on whatever machine I’m using.

I poked around a bit, looking at various multi-processing Python packages, and then I found this IPython notebook that walks through a fairly simple example using parallel processing machinery in IPython.

It’s easy to run multiple chains of a PyMC model in serial by writing a for loop and calling a model’s sample method once for each chain (see here for details about model fitting in PyMC). But if you want lots of samples in each chain, or if it takes a long time to get even a small number of samples from your model, it seems very inefficient to sample your chains in serial.
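
For reference, the serial version is just something like this (a minimal sketch: my_model is a stand-in for a module defining your model, and the pickle backend and iteration numbers are placeholders):

import pymc
import my_model  # hypothetical module defining the model's stochastics

for chain in range(3):
    # one database per chain, sampled one after the other
    M = pymc.MCMC(my_model, db='pickle', dbname='chain_%d.pickle' % chain)
    M.sample(iter=20000, burn=10000, thin=2)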

Okay, so here’s a very quick and dirty rundown of how to run chains in parallel.

First, you need IPython and PyMC installed, and you need a model script that does everything (i.e., reads in data, defines your model, writes your samples to disk, etc…). Let’s call your model script model_script.py.

The IPython notebook linked above seems to make some of this more complicated than it needs to be, in my opinion, importing PyMC for each core, then pushing the data to each core, and only then feeding the model script to the cores. If your script works when you call it using Python from the command line, it should work as described below.

Let’s say you want three chains. Open one terminal window and, after the prompt ($), type:

$ ipcluster start -n 3

Now, open another terminal window (or tab) and start IPython. Once that’s up and running, type:

In [1]: from IPython.parallel import Client

In [2]: client = Client()

In [3]: direct = client[:]

In [4]: direct.block = True

In [5]: model_script = open('model_script.py').read()

In [6]: direct.execute(model_script)

I don’t have a particularly deep understanding of what all is going on here, but I know it works. For the time being, that’s good enough for me. You can, and I probably should (and maybe even will, at some point), read all about ipcluster and various associated tools here.

Anyway, assuming, as mentioned above, that your model_script.py saves your samples for you (rather than, e.g., only keeping them in working memory for use in IPython), you should be good to go.

For what it’s worth, I ran into a bit of a problem when I first got this working. Specifically, the separate cores were creating databases with identical names in rapid succession, so only the last core to create a database was actually saving anything.

I fixed this by using something that I joked about when I first learned of its existence, namely Unix time, which is the number of seconds that have elapsed since the beginning of January 1, 1970.

I was fiddling with the time package in Python, trying to get unique time stamps to add to the database names (since I can’t figure out how to feed each core a unique string to add to its database name), and realized that time.time() returns a number with a few decimal places. I tried adding the following to the database names, under the assumption that somewhere in the neighborhood of the fourth, fifth, or sixth decimal place of Unix time there would be measurable delays between the creation of the databases from the different cores.

clock_string = str(int(time.time()*1000000))

It works, and it looks like I could even shave a couple zeros off and still get distinct names.
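
In other words, the database name in model_script.py gets built with something like this (the ‘samples_’ prefix and the pickle extension are just placeholders):

import time

# microseconds since the epoch; the cores hit this line at measurably
# different times, so each ends up with a distinct database name
clock_string = str(int(time.time() * 1000000))
dbname = 'samples_' + clock_string + '.pickle'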

Posted in Python, statistical modeling | 1 Comment

Bivariate normal ellipse plotting in Python

When I decided to start using Python for data analysis and visualization, I was a bit worried about the latter, as I had become fairly fond of the flexibility of the R base graphics. The more I use matplotlib, though, the happier I am with it.

Case in point: today I figured out how to plot ellipses for bivariate normal densities. In much of my work with general recognition theory (GRT), I have focused on a fairly restricted set of models. Specifically, I have focused on cases in which the marginal variances of the modeled perceptual distributions are not identifiable (well, technically speaking, the issue is that the marginal variances and means are not both identifiable).

In general, a useful way to illustrate GRT models is by taking a bird’s-eye view and looking down at contours describing the modeled distributions, like so:

Phonetic Trapezoid

In this kind of simple case, you can just pick a level above the plane at which to figuratively slice the modeled densities, and you get contours that are comparable across the perceptual distributions.

However, in certain GRT models and associated data sets (e.g., in which you have more than two response levels on each dimension), the marginal variances are identifiable (along with the means). In this case, slicing each density at the same height produces bad graphics, since densities with larger marginal variances are more spread out and lower down than otherwise comparable densities with smaller marginal variances. Hence, slicing at a height sufficiently high up off the base plane can, counterintuitively, produce smaller ellipses for densities with larger marginal variances.

The solution is to plot ellipses that enclose a specified volume (i.e., for which the integral over the region specified by the ellipse takes a particular value) rather than at a specified height.

The code I wrote to plot the height-based figures calculates values for the bivariate normal densities on a grid, then uses built-in ‘contour’ functions to plot the ellipses for a given height. To plot based on volume, this approach would be either very ugly or totally non-functional. Thankfully, I was able to find (and adapt) code that takes an analytical approach based on the density mean and covariance parameters and uses some nice built-in matplotlib features. The adaptation consists mostly of switching from specifying the number of standard deviations to specifying the volume, based on the nice description of the relationships between density parameters and volumes given here.

Here’s a simple example figure (not from a fitted model, just from some semi-randomly chosen covariance and mean values), with each ellipse enclosing half of the volume of each bivariate normal density, and with unit marginal variances for the density located at the origin:

Ellipses!

Here’s the adapted code:

def plot_cov_ellipse(cov, pos, volume=.5, ax=None, fc='none', ec=[0,0,0], a=1, lw=2):
    """
    Plots an ellipse enclosing *volume* based on the specified covariance
    matrix (*cov*) and location (*pos*). Additional keyword arguments are passed on to the 
    ellipse patch artist.

    Parameters
    ----------
        cov : The 2x2 covariance matrix to base the ellipse on
        pos : The location of the center of the ellipse. Expects a 2-element
            sequence of [x0, y0].
        volume : The volume inside the ellipse; defaults to 0.5
        ax : The axis that the ellipse will be plotted on. Defaults to the 
            current axis.
    """

    import numpy as np
    from scipy.stats import chi2
    import matplotlib.pyplot as plt
    from matplotlib.patches import Ellipse

    def eigsorted(cov):
        vals, vecs = np.linalg.eigh(cov)
        order = vals.argsort()[::-1]
        return vals[order], vecs[:,order]

    if ax is None:
        ax = plt.gca()

    vals, vecs = eigsorted(cov)
    theta = np.degrees(np.arctan2(*vecs[:,0][::-1]))

    kwrg = {'facecolor':fc, 'edgecolor':ec, 'alpha':a, 'linewidth':lw}

    # Width and height are "full" widths, not radius
    width, height = 2 * np.sqrt(chi2.ppf(volume,2)) * np.sqrt(vals)
    ellip = Ellipse(xy=pos, width=width, height=height, angle=theta, **kwrg)

    ax.add_artist(ellip)
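
For the record, calling it looks something like this (made-up mean and covariance values, just to show the interface; note that add_artist doesn’t rescale the axes, so the limits have to be set by hand):

import numpy as np
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# two ellipses, each enclosing half the volume of its density
plot_cov_ellipse(np.array([[1.0, 0.0], [0.0, 1.0]]), [0, 0], volume=0.5, ax=ax)
plot_cov_ellipse(np.array([[1.5, 0.6], [0.6, 0.8]]), [2, 1], volume=0.5, ax=ax, ec=[0, 0, 1])
ax.set_xlim(-3, 5)
ax.set_ylim(-3, 4)
plt.show()
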
Posted in Python, statistical graphics | Comments Off

Why no log toggle in Google Ngrams?

Josh sent me a link to an interesting blog post about whether or not we’re in a tech bubble. As Josh pointed out to me, it’s interesting not because it answers that question, but, rather, because it presents a clear and concise description of one way that low interest rates can distort prices.

Instead of any of that, I’m going to talk about the recency illusion and statistical graphics. The author of that post uses the verb “incent.” My first thought when I read that was to be mildly annoyed at a buzzword-y, business-speak verbed noun.

But then I remembered the recency illusion, after which I remembered the Google Ngram viewer, so I looked up “incent,” “incentivize,” and “incentive”:

Three Words

Not terribly helpful, so I got rid of “incentive” and found that “incent” has a fairly respectable history (i.e., my initial reaction to it was, in fact, a case of the recency illusion), while my intuition that “incentivize” is buzzword-y business-speak was pretty much correct:

Two Words

Which brings me to the point probed in the title of this post. Why is there not an option to toggle a log transformation of the y-axis on the Google Ngram viewer? It would be very helpful when comparing words that differ by multiple orders of magnitude in frequency.

Posted in language, statistical graphics | Comments Off

Multivariate normal CDF values in Python

I was very happy to realize recently that a subset of Alan Genz’s multivariate normal CDF functions are available in Scipy. I first learned of Dr. Genz’s work when I started using the mnormt R package, which includes a function called sadmvn that gives very precise, and very accurate, multivariate normal CDF values very quickly.

In case you don’t know, this is quite an achievement, since there is not a closed form solution. I’ve spent far too much time reading strange, complicated papers found in the deepest recesses of google (i.e., the third page of search results) that claim to provide fast, accurate approximations to multivariate normal CDFs. As far as I can tell, none of these claims hold any water. None other than Genz’s, anyway.

Okay, so Scipy has two relevant functions, but they’re kind of buried, and it might not be obvious how to use them (at least if you don’t know to look at Genz’s Fortran documentation). So, for the benefit of others (and myself, in case I need a refresher), here’s where they are and how to use them.

First, where. In the Scipy stats library, there is a chunk of compiled Fortran code called mvn.so. I’ve copied it here, just in case it disappears from Scipy someday. Should that come to pass, and should you want this file, just save that ‘plain text’ file and rename it mvn.so and you should be good to go.

Otherwise, if you’ve got Scipy, you can just do this:

from scipy.stats import mvn

Now, mvn will have three methods, two of which – mvndst and mvnun – are what we’re looking for here.

The first works like this:

error,value,inform = mvndst(lower,upper,infin,correl,...)

Which is to say that it takes, as arguments, lower and upper limits of integration, ‘infin’ (about which more shortly), and correl (as well as some optional arguments). This is, in turn, to say that it assumes that your multivariate normal distribution is centered at the origin and that you’ve normalized all the variances.

This function is straightforward to use, except for, perhaps, the ‘infin’ argument. From Genz’s documentation:

*     INFIN  INTEGER, array of integration limits flags:
*           if INFIN(I) < 0, Ith limits are (-infinity, infinity);
*           if INFIN(I) = 0, Ith limits are (-infinity, UPPER(I)];
*           if INFIN(I) = 1, Ith limits are [LOWER(I), infinity);
*           if INFIN(I) = 2, Ith limits are [LOWER(I), UPPER(I)].

Which is to say that you put a negative number in if you want, on dimension I, to integrate from -Inf to Inf, 0 if you want to integrate from -Inf to your designated upper bound, 1 if you want to integrate from your designated lower bound to Inf, and 2 if you want to use both of your designated bounds.
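
So, for a standardized bivariate normal with correlation 0.35, integrating from -Inf up to 0.1 and -0.2 on the two dimensions looks something like this (the correlation value is arbitrary, and the packed, lower-triangular correl convention is taken from Genz’s Fortran documentation):

import numpy as np
from scipy.stats import mvn

lower = np.zeros(2)           # ignored here, since infin is 0 on both dimensions
upper = np.array([0.1, -0.2])
infin = np.array([0, 0])      # (-inf, upper] on each dimension
correl = np.array([0.35])     # off-diagonal correlations, packed into a 1-D array

error, value, inform = mvn.mvndst(lower, upper, infin, correl)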

Also from Genz’s documentation:

*     INFORM INTEGER, termination status parameter:
*          if INFORM = 0, normal completion with ERROR < EPS;
*          if INFORM = 1, completion with ERROR > EPS and MAXPTS 
*                         function vaules used; increase MAXPTS to
*                         decrease ERROR;
*          if INFORM = 2, N > 500 or N < 1.

Here, N seems to be the number of dimensions (and is the first argument in Genz’s MVNDST Fortran function, but is not in the similar/corresponding R or Python functions).  In any case, it’s the 0 and 1 that seem most informative, and the MAXPTS variable is one of the optional arguments I mentioned above.

The other function allows for non-zero means and covariance (as opposed to correlation) matrices, but it doesn’t, technically speaking, allow for integration to or from +/-Infinity:

value,inform = mvnun(lower,upper,means,covar,...)

As it happens, and as shouldn’t be too surprising, you can give it large magnitude bounds and get essentially the same answer. As long as you’re sufficiently far away from the mean (meaning as long as you’re more than a few standard deviation units away), the difference between the +/-Inf bound and the finite bound will only show up quite a few decimal places into your answer.

If you’ve got numpy imported as np, you could, for example, do this:

In [54]: low = np.array([-10, -10])

In [55]: upp = np.array([.1, -.2])

In [56]: mu = np.array([-.3, .17])

In [57]: S = np.array([[1.2,.35],[.35,2.1]])

In [58]: p,i = mvn.mvnun(low,upp,mu,S)

In [59]: p
Out[59]: 0.2881578675080012

With more extreme values for low, we get essentially the same answer (with a difference only showing up in the 12th decimal place):

In [60]: low = np.array([-20, -20])

In [61]: p,i = mvn.mvnun(low,upp,mu,S)

In [62]: p
Out[62]: 0.2881578675091007

Using still more extreme values doesn’t change it at all:

In [63]: low = np.array([-100, -100])

In [64]: p,i = mvn.mvnun(low,upp,mu,S)

In [65]: p
Out[65]: 0.2881578675091007

All of this is important to me because I’m working on building a Bayesian GRT (e.g.) model in PyMC, and I’m hoping I’ll be able to use this function to get fast and accurate probabilities, given a set of mean and covariance parameters.

Posted in Python, R, statistical modeling | Comments Off

Visualizing confusion matrices

For my Current Topics in Communication Sciences course, we read So & Best’s 2010 paper on non-native tone perception. I won’t go much into the paper’s implications for language or speech, though these are interesting and worth thinking about. Rather, I want to focus (as is my wont) on data analysis and illustrating some models and data analysis tools that I wish were used more often in this kind of research.

The data analysis in So & Best is not so good. The data consist of confusion matrices for Cantonese, Japanese, and English native listeners’ identification of Mandarin tones. The first part of the analysis focuses on ‘tone sensitivity’ and uses A’, a non-parametric measure of perceptual sensitivity, which I assume is (something like) the A’ defined by Grier [pdf] (So & Best cite a textbook rather than a paper, so I don’t know for sure how they calculated A’).

It’s probably good that they use A’ rather than d’, given that, for each tone, they’re lumping all three incorrect responses together, which certainly violates pretty much all of the assumptions underlying Gaussian signal detection theory (though, not surprisingly, A’ has a downside, too). But, then, even if they’ve avoided violating one set of assumptions by using A’, A’ values violate pretty much all of the assumptions underlying ANOVA, as do the confusion counts they (also) crank through the ANOVA machine.

Which brings me to my point, namely that there are better statistical tools for analyzing confusion matrices. Some such tools are pretty standard, plug-and-play models like log-linear analysis (a.k.a. multiway frequency analysis). Others are less standard and perhaps less easy to use, but they are far superior with respect to providing insight and understanding of the patterns in the data.

I’ve written about the Similarity Choice Model (SCM) before; as I wrote in the linked post:

In the SCM, the probability of giving response r to stimulus s is (where \beta_r is the bias to give response r and \eta_{sr} is the similarity between stimulus s and stimulus r, and N is the number of stimuli):

(1)   \begin{equation*}p_{sr} = \frac{\beta_r\eta_{sr}}{\sum_{i=1}^{N}\beta_i\eta_{si}}\end{equation*}

You might want to use this model because it has convenient, closed-form solutions for the similarity and bias parameters:

(2)   \begin{align*}\eta_{sr} &= \sqrt{\frac{p_{sr}p_{rs}}{p_{ss}p_{rr}}}\\ \quad \\\beta_r &= \frac{1}{\sum_{k=1}^{N}\sqrt{\frac{p_{rk}p_{kk}}{p_{kr}p_{rr}}}}\end{align*}

To illustrate the ease and utility of using the SCM, I estimated parameters for the confusion matrices reported by So & Best (in R and Python - note that the Python code is in a .txt file, not a .py file, since my website host seems to think I’m up to no good if I try to use the latter).

Here’s the data file assumed by the Python script (the matrices are hard-coded in the R script), if you want to play along at home. The first four rows contain the confusion matrix for Cantonese listeners, the next four for Japanese listeners, and the last four for English listeners.

Both the R and Python scripts do essentially the same thing. I’ve been using Python more than R lately for various reasons, so that’s what I’ll focus on here.

I wrote a function that adjusts for any zeros in the confusion matrices then calculates response bias, similarities, and distances (d_{ij} = \sqrt{-log(\eta_{ij})}) for an input matrix:

import numpy as np

def sbd(Mt):
    # zeros are bad
    Mt = Mt + .01
    # renormalize
    for ri in range(4):
        Mt[ri,:] = Mt[ri,:]/np.sum(Mt[ri,:])
    # initialize similarity matrix
    St = np.zeros((4,4))
    # calculate similarities
    for ri in range(4):
        for ci in range(4):
            St[ri,ci] = np.sqrt(Mt[ri,ci]*Mt[ci,ri]/(Mt[ri,ri]*Mt[ci,ci]))
    # distances
    Dt = np.abs(np.sqrt(-np.log(St)))
    # bias
    Bt = np.zeros(4)
    for ri in range(4):
        Bk = np.zeros(4)
        for ki in range(4):
            Bk[ki] = np.sqrt(Mt[ri,ki]*Mt[ki,ki]/(Mt[ki,ri]*Mt[ri,ri]))
        Bt[ri] = 1/np.sum(Bk)
    Bt = Bt/sum(Bt)
    return St, Dt, Bt

I also wrote a function for calculating predicted confusion probabilities for a given set of parameters so that I could see how well the model fits the data:

def scm(s,b):
    # predicted confusion probabilities from similarities (s) and biases (b)
    nr = len(b)
    Mp = np.zeros((nr,nr))
    for ri in range(nr):
        for ci in range(nr):
            Mp[ri,ci] = b[ci]*s[ri,ci]
        Mp[ri,:] = Mp[ri,:]/np.sum(Mp[ri,:])
    return Mp

You can look at either script to see how to use the estimated similarities/distances to fit hierarchical clustering or MDS models.
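
If you don’t feel like digging through the scripts, the general shape of that part is something like the following sketch (the confusion counts here are made up, the average-linkage choice is just for illustration, and I’m using scikit-learn’s MDS, which may or may not match what the linked scripts do):

import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform
from sklearn.manifold import MDS

# a made-up 4x4 confusion-count matrix (rows = stimuli, columns = responses)
M_fake = np.array([[80.,  5.,  3., 12.],
                   [ 4., 70., 20.,  6.],
                   [ 2., 25., 68.,  5.],
                   [10.,  6.,  4., 80.]])

S, D, B = sbd(M_fake)   # similarities, distances, biases
M_pred = scm(S, B)      # predicted confusion probabilities

# hierarchical clustering and a dendrogram from the estimated distances
Z = linkage(squareform(D, checks=False), method='average')
dendrogram(Z, labels=['1', '2', '3', '4'])

# a two-dimensional MDS solution from the same distances
coords = MDS(n_components=2, dissimilarity='precomputed', random_state=0).fit_transform(D)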

Here’s a plot showing the observed and predicted confusion probabilities (closer to the diagonal = better fit):

so_best_predobs

Overall, the fit seems to be pretty good, though it’s not perfect. There aren’t any really huge discrepancies between the observed and predicted probabilities. For whatever reason, the model seems to fit the Cantonese listeners’ data best, with the largest discrepancies for a couple data points from the Japanese listeners.

As a side note, the plots generated by the R script look a bit better than the plots generated by the Python script, but I’ve been fiddling with R plots for a few years now, while I’m still figuring out how to do this kind of thing with Python. I’m pretty happy with Python so far, though I’d really like to be able to remove the box and just have x- and y-axes. But I digress…
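
For what it’s worth, the box removal seems to just be a matter of hiding the top and right spines, something like this:

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.plot(range(10))
ax.spines['top'].set_visible(False)     # remove the box...
ax.spines['right'].set_visible(False)
ax.xaxis.set_ticks_position('bottom')   # ...and keep ticks only on the x- and y-axes
ax.yaxis.set_ticks_position('left')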

Here are dendrograms for the hierarchical cluster models fit to the estimated distances. In each of these, the y-axis indicates (estimated) distance, with the relative heights of the clusters indicating the relative dissimilarity of the tones (indicated by the numbers at the bottom):

so_best_cant_dend so_best_jpns_dend so_best_engl_dend

There are two obvious things to note about these plots. The most obvious similarity is that tones 1 and 4, on the one hand, and tones 2 and 3, on the other, form the bottom two clusters for each language group, indicating that 1 and 4 are more similar (less distant) to one another than either is to tone 2 or 3, and vice versa. The most obvious difference is that tones 1 and 4 are more similar than tones 2 and 3 for the English listeners, but tones 2 and 3 are more similar than tones 1 and 4 for the Japanese and Cantonese listeners. (This would be more obvious if I knew how to make the labels and colors consistent across these plots, so that 1 and 4 were always on the right and constituted the red cluster, with 2 and 3 on the left in the green cluster, but, again, I’m still figuring all this out in Python, so this will have to do for now.) It’s also pretty clear that the 1-4 and 2-3 clusters are less similar for the Cantonese listeners than for either other group.

Here’s an MDS plot with all three groups’ data presented together (the letters and colors indicate the language group, and the numbers indicate the tones):

so_best_mds

As with the cluster analysis, it’s clear that tones 1 and 4 pattern together as do tones 2 and 3. The differences in 1-4 vs 2-3 similarity across groups is also evident here.

Finally, here’s a plot of each groups’ bias parameters (colors are as in the MDS plot, tones are, again, indicated by the numbers on the x-axis):

so_best_bias

I was a bit surprised by how similar the bias parameters are for all three groups, though there are some potentially interesting differences. The English listeners mostly just didn’t want to label anything “4”, while the Japanese listeners seemed to be more biased toward “1” and, to a greater degree, “2” responses than either “3” or “4” responses. The Cantonese listeners exhibit a similar pattern, though with a weaker bias toward “2” responses.

Okay, so where does all this leave us? None of what I’ve done here actually provides statistical tests of any patterns in the data, though the SCM (and related models) can be elaborated and rigorous tests can be carried out by constraining similarities and biases in various ways and comparing fitted models. And, as mentioned above, log-linear analysis is a better out-of-the-box method for analyzing this kind of data than is ANOVA.

Statistical tests aside, though, I would argue that the SCM and clustering/scaling methods are far better than the presentation and visualization of the data presented by So & Best. The SCM allows us to look separately at pairwise similarity between stimuli and response bias, but it also allows us to readily generate easy to interpret figures that illustrate patterns that are not at all obvious in the raw data (or in other figures; e.g., I find So & Best’s Figure 3 to be rather difficult to interpret or draw any kind of generalization from).

As much as I’d like tools like these to be more widely used, I’m not terribly hopeful that they will be any time soon. But I’ll keep promoting them anyway.

Posted in statistical description, statistical graphics, statistical modeling | Comments Off

The Demarcation Game

About a week and a half ago, I posted a summary of Larry Laudan’s essay The Demise of the Demarcation Problem in order to set the stage for some posts on the first few essays in a new book on The Philosophy of Pseudoscience, edited by Massimo Pigliucci and Maarten Boudry. This post addresses the first chapter, The Demarcation Problem, A (Belated) Response to Laudan, Mr. Pigliucci’s (non-editorial) contribution (the entirety of which seems to be available via the ‘look inside’ feature on Amazon, for what it’s worth).

All of this will make more sense if you have read the original Laudan essay already. If you can’t find a copy of that, you could read my earlier post. And if you don’t feel like bothering with that, here’s an extremely concise summary: How and why we label things science, non-science, or pseudo-science isn’t philosophically interesting, whereas it is philosophically interesting to (try to) understand how, when, and why claims about the world are epistemically warranted and how, when, and why this or that methodology licenses such claims.

Of course, it’s more complicated than that, but this is the nub. The labeling problem might be semantically or sociologically interesting, but it doesn’t have much, if any, bearing on whether or not certain claims about the world are belief-worthy.

To elaborate a bit more on Laudan’s position, he outlines three metaphilosophical points that constrain any putative demarcation criterion. First, it should accurately label paradigmatic cases of science, non-science, or pseudo-science by virtue of the epistemic and/or methodological features that science has and that its complement does not, and it should be precise enough so that this labeling can actually be carried out. Second, it should supply necessary and sufficient conditions for appropriate application of such labels. Third, because it will have potentially important social and political consequences, it should be especially compelling.

Pigliucci’s essay starts with a nice discussion of Popper’s take on demarcation, noting its relation to the problem of induction as well as some of the well-known problems with it. He follows the section on Popper with a discussion of Laudan’s recounting of the history of demarcation and a discussion of Laudan’s metaphilosophical interlude. He finishes with a sketch of how a new demarcation project might proceed.

The whole essay, though, simply assumes that the labeling problem is important (and that it’s closely related to problems that are more obviously of interest in philosophy of science; more below). Because Pigliucci assumes this, he doesn’t go about making a case for it, and so he doesn’t really engage with the main thrust of Laudan’s essay.

In addition, everything kind of goes off the rails after the introductory discussion of Popper. Throughout his discussion of Laudan’s essay, Pigliucci conflates issues that should be kept distinct, presents ideas that are consistent with (or even identical to) Laudan’s position as arguments against him, and misreads Laudan in ways that are, at best, lazy, and, at worst, willfully negligent.

And as good as it is, even the section on Popper contains a humdinger of a specious argument. Pigliucci writes:

Regardless of whether one agrees with Popper’s analysis of demarcation, there is something profoundly right about the contrasts he sets up between relativity theory and psychoanalysis or Marxist history: anyone who has had even a passing acquaintance with both science and pseudoscience cannot but be compelled to recognize the same clear difference that struck Popper as obvious. I maintain in this essay that, as long as we agree that there is indeed a recognizable difference between, say, evolutionary biology on the one hand and creationism on the other, then we must also agree that there are demarcation criteria – however elusive they may be at first glance.

It seems to me that the most obvious recognizable difference between evolutionary biology and creationism is that the former has mountains of evidence supporting its claims while the latter has none (and, in fact, has mountains of evidence against it). That is, these two examples are recognizably different with respect to their epistemic warrant, whether we label them both science or not. I’ll even go one better and say that it is only by placing evolutionary biology and creationism on the same playing field that we know that one is well-supported and the other is not.

Aside from this, as noted above, the section on Popper is pretty good. On the other hand, the section on Laudan’s history of demarcation is kind of a mess. Pigliucci implies that Laudan’s conclusion that the demarcation project has failed is inconsistent with the idea that philosophy makes progress. But, of course, Laudan doesn’t have a problem with progress in philosophy; in The Demise of the Demarcation Problem, he writes that “cognitive progress is not unique to the ‘sciences.’ Many disciplines (e.g., literary criticism, military strategy, and perhaps even philosophy) can claim to know more about their respective domains than they did 50 or 100 years ago.” In fact, I would guess that Laudan considers his essay at least a small contribution to the progress of philosophy.

A few pages later, Pigliucci laments that Laudan “reads this history [of Mill and Whewell's treatment of induction] in an entirely negative fashion” and complains that these philosophers’ works “are milestones in our understanding of inductive reasoning and the workings of science, and to dismiss them as “ambiguous” and “embarrassing” is both presumptuous and a disservice to philosophy as well as to science.” Of course, Laudan’s point in this essay is that, with respect to demarcation, such negativity is justified. In other works, Laudan has quite a lot to say about the role of induction in the history and philosophy of science, but, somehow, Pigliucci forgets the scope of the essay in question and isn’t aware of Laudan’s other treatments of this history.

But these misreadings of Laudan (as presumptuous, overly negative, and inconsistent with philosophical progress) are fairly minor problems.

As mentioned above, Pigliucci conflates various issues that should be kept distinct. He conflates probability and reliability of scientific hypotheses (p. 14), quoting Laudan’s claim that “several nineteenth century philosophers of science” responded to fallibilism “by suggesting that scientific opinions were more probable or more reliable than non-scientific ones,” and then snarkily noting that “surely Laudan is not arguing that scientific “opinion” is not more probable than “mere” opinion. If he were, we should count him amongst postmodern epistemic relativists, a company that I am quite sure he would eschew.”

Of course, one can coherently and reasonably reject the philosophical position that scientific ideas are evaluated in terms of their probability without immediately becoming an epistemic relativist; see, e.g., Deborah Mayo. Or see the other option that Pigliucci himself included in his quote from Laudan, namely reliability.

Pigliucci later conflates theory comparison and demarcation (p. 15) as well as theories and the proponents of theories (p. 16).

Ultimately, the only real substance of Pigliucci’s argument against Laudan boils down to the idea that necessary and sufficient conditions for demarcation are outdated, since sciences are related by family resemblances, and science is a “cluster concept.”

Given that Laudan describes science as having substantial “epistemic heterogeneity,” I don’t imagine he would take issue with the application of family resemblances to science as a category. But whereas Laudan takes this as an indication that any (epistemic or methodological) demarcation project is futile, Pigliucci wants it to be the basis of demarcation.

This idea as applied to science is illustrated in Figures 1.1 and 1.3 in the Pigliucci essay (1.2 illustrates ‘games’ and family resemblance). Figure 1.1 is presented in relation to Laudan’s first metaphilosophical point, but Pigliucci adds a lot more structure than is implied by Laudan’s ‘paradigmatic cases’, and neither the figure nor the corresponding text does much in the way of stating any key epistemic or methodological features of science or providing a precise set of demarcation criteria.

Figure 1.3 provides more substance, but in so doing, it also nicely (if implicitly) illustrates the distinction between the philosophically uninteresting labeling problem and the sorts of issues that are at the heart of philosophy of science.

It seems to me that the real work of philosophy of science (and the kind of thing explicitly endorsed by Laudan in his essay) consists of carefully defining things like ‘theoretical understanding’ and ‘empirical knowledge’ and then figuring out how, and to what degree, different theories provide these. If we can define this kind of space and then precisely locate fields of inquiry in it, what does it matter if we call this field “science” but not that one?

To reiterate the point made above, it might be semantically or sociologically interesting to figure out how (people’s intuitions about) these labels work, but it’s superfluous to the substance of philosophy of science. Pigliucci assumes that the labeling problem is both interesting and closely tied to issues like theoretical understanding and empirical knowledge, but he doesn’t make any kind of case for it. If anything, his invocation of fuzzy logic and fuzzy set theory militates against the importance of the labeling problem, since if you’re arguing for the utility of gradient membership in the set science, you’ve pretty much given up on analogous discrete labels.

Pigliucci ends his essay with “reasonable answers to Laudan’s three “metaphilosophical” questions” (Pigliucci uses scare quotes around “metaphilosophical” throughout, claiming not to understand why the “meta” prefix is necessary):

(1) What conditions of adequacy should a proposed demarcation criterion satisfy?

A viable demarcation criterion should recover much (though not necessarily all) of the intuitive classification of sciences and pseudosciences generally accepted by practicing scientists and many philosophers of science, as illustrated in figure 1.1.

(2) Is the criterion under consideration offering necessary or sufficient conditions, or both, for scientific status?

Demarcation should not be attempted on the basis of a small set of individually necessary and jointly sufficient conditions because “science” and “pseudoscience” are inherently Wittgensteinian family resemblance concepts (fig. 1.2). A better approach is to understand them via a multidimensional continuous classification based on degrees of theoretical soundness and empirical support (fig. 1.3), an approach that, in principle, can be made rigorous by the use of fuzzy logic and similar instruments.

(3) What actions or judgments are implied by the claim that a certain belief or activity is “scientific” or “unscientific”?

Philosophers ought to get into the political and social fray raised by discussions about the value (or lack thereof) of both science and pseudoscience. This is what renders philosophy of science not just an (interesting) intellectual exercise, but a vital contribution to critical thinking and evaluative judgment in the broader society.

Pigliucci’s answer to (1) is just a subset of Laudan’s answer to (1).

Pigliucci’s answer to (2) conflates the labeling problem with philosophically interesting problems, as discussed above, and all but gives the demarcation game away. Recall that Laudan’s motivation for requiring necessity and sufficiency in a demarcation criterion is that necessity alone does not allow us to label something scientific and sufficiency alone does not allow us to label something non-scientific. Whatever “multidimensional continuous classification” is, if it’s not providing necessary and sufficient conditions, it’s going to make classification errors. This isn’t fatal to a classification scheme, of course (see, e.g., detection theory), but Pigliucci doesn’t seem to be thinking of demarcation as noisy classification; there’s no mention of classification error or how it might be minimized, for example. And it bears noting that, for all his talk of fuzzy boundaries and gradient set membership and the like, Pigliucci seems to be ready, able, and willing to definitively classify different fields as established science, proto-science, soft science, or pseudo-science – his Figures 1.1 and 1.3 look to me to have rather sharp boundaries.

Pigliucci’s answer to (3) is pretty much exactly Laudan’s answer to (3). Bizarrely, Pigliucci writes (on p. 20) that “I also markedly disagree with Laudan in answer to his question 3,” and then he quotes Laudan saying, in part:

Philosophers should not shirk from the formulation of a demarcation criterion merely because it has these judgmental implications associated with it. Quite the reverse, philosophy at its best should tell us what is reasonable to believe and what is not. But the value-loaded character of the term “science” (and its cognates) in our culture should make us realize that the labeling of a certain activity as “scientific” or “unscientific” has social and political ramifications which go well beyond the taxonomic task of sorting beliefs into two piles.

Pigliucci seems oddly confused about (3), stating one page later that “there simply is no way, nor should there be, for the philosopher to make arguments to the rest of the world concerning what is or is not reasonable to believe without not just having, but wanting political and social consequences.” Aside from the “wanting” bit, this is pretty much what he just quoted Laudan saying.

It’s maybe also worth noting, contra both Laudan and Pigliucci, that it’s not that difficult to think of cases in which science tells us something about what we should believe with essentially no attendant social or political consequences. To mention just the most prominent recent example, the big results from the LHC in 2012 tell us we should probably believe in (some version of) the Higgs boson, but for the vast, vast majority of humanity, this work has exactly no substantive social or political consequences at all.

To sum up, Chapter 1 of TPoP does little, if anything, to establish that Laudan was wrong in declaring the demarcation problem dead. I’ll come back to some of the subsequent chapters in the near future.

Posted in philosophy of science | 3 Comments