A frustrating discussion

I’m not totally sure what my opinion of Data Colada is. It’s an odd mix of thought-provoking and frustrating. The most recent example is yesterday’s post (by Uri Simonsohn) on the prejudice of (one particular type of) Bayesian t-test. Directly related, and similarly frustrating, is the response from one of the people (Jeff Rouder) who has been arguing for this particular Bayesian test.

(Disclaimer that’s less a disclaimer than a mildly amusing anecdote: Jeff Rouder and I once went to get a cup of coffee while we were at a Math Psych meeting, and as we walked to the coffee shop, I mentioned a paper that I found interesting, incorrectly stating that it was a paper he had (co-)authored, which he quickly pointed out was not the case. I can’t imagine he remembers this, but I felt like a knucklehead, and that moment of awkwardness has stuck with me. I have not yet had a chance to say anything stupid to Uri Simonsohn in person.)

There are quite a few issues one could dig into here, as I suppose would be obvious to anyone who’s been following the frequentist-v-Bayesian argument. In order to keep things orderly, I’ll focus on one small corner of the larger debate that’s playing an interesting role in this particular discussion.

Specifically, I’ll focus on the role of inferential errors in statistical reasoning. Simonsohn wants to know how often Rouder’s Bayesian t-test produces the incorrect result, so he ran some simulations. In the simulations, the true effect size is known, which allows us to evaluate the testing procedure in a way that we can’t when we don’t know the truth ahead of time. Simonsohn posted this figure to illustrate the results of his simulations:

The take-home point is that the Bayesian t-test in question erroneously supports the null hypothesis ~25% of the time when the effect size is “small.” It is perhaps worth noting in passing that, because power is 50% in these simulations, the standard null hypothesis tests Simonsohn favors are, by definition, wrong a full 50% of the time. Depending on what you do with the Bayesian “inconclusive” region, that is up to twice the error rate of the Bayesian test Simonsohn is arguing against.
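For concreteness, here’s a minimal sketch of the kind of simulation at issue. This is not Simonsohn’s code: a BIC-based approximation to the Bayes factor stands in for the JZS test Rouder advocates, and the effect size, sample size, and the BF > 3 criterion are all assumptions of mine.

    # Sketch of an error-rate simulation for a "default" Bayesian t-test.
    # NOT Simonsohn's code: the BIC approximation to the Bayes factor stands in
    # for the JZS Bayes factor; effect size, n, and criterion are assumptions.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    d = 0.2          # true (small) standardized effect size
    n = 193          # per-group n, chosen so power is roughly 50% for d = 0.2
    n_sims = 5000

    support_null = 0
    for _ in range(n_sims):
        x = rng.normal(0.0, 1.0, n)
        y = rng.normal(d, 1.0, n)
        t, _ = stats.ttest_ind(x, y)
        N, df = 2 * n, 2 * n - 2
        # BIC approximation: BF01 = sqrt(N) * (1 + t^2/df)^(-N/2)
        bf01 = np.sqrt(N) * (1 + t**2 / df) ** (-N / 2)
        if bf01 > 3:                 # "substantial" support for the null
            support_null += 1

    print(support_null / n_sims)     # proportion of erroneous support for H0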

Anyway, Rouder’s take on this is as follows:

Uri’s argument assumes that observed small effects reflect true small effects, as shown in his Demo 1 [the simulations illustrated above – NHS]. The Bayes factor is designed to answer a different question: What is the best model conditional on data, rather than how do statistics behave in the long run conditional on unknown truths?

Now, Rouder is right that Simonsohn seems to assume that observed (small) effects reflect true (small) effects, which is problematic. This isn’t obvious from the simulations alone, but Simonsohn’s second case makes it clear. Briefly, the second case is an(other) absurd Facebook study that found, with a total sample size of ~6.6 million (~6,000,000 + ~600,000), that the ~6,000,000 people who saw pictures of friends voting (on Facebook) were 0.39% more likely to vote than the ~600,000 people who didn’t. The result was statistically significant, but I don’t know that I’ve ever seen a better illustration of the difference between statistical and practical significance (aside, I mean, from the other ridiculous Facebook study linked above). Who could possibly care that a 0.39% increase in voting was statistically significant? I’m having a hard time figuring out why anyone – authors, editors, or reviewers – would think this result is worth publishing. It’s just spectacularly meaningless. But I digress.

Ahem.

Simulations like those run by Simonsohn are useful, and I can’t figure out why so many Bayesians don’t like them. The point of simulations isn’t to understand how “statistics behave in the long run conditional on unknown truths,” it’s to leverage known truths to provide information about how much we can trust observed results when we don’t know the truth (i.e., in every non-simulated case).

That is, simulations are useful because they tell us about the probability that an inference about the state of the world is false given an observation, not because they tell us about the probability of sets of observations given a particular state of the world. I would have thought that Bayesians would recognize and appreciate this kind of distinction, given how similar it is to the distinction between the likelihood and the posterior in Bayes’ rule.

One argument for Bayesian reasoning is that frequentist reasoning can’t coherently assign probabilities to one-off events. I’ve never found this particularly compelling, in large part because I’m interested in statistics as applied to scientific research, which is to say that I’m interested in statistics as applied to cases in which we are explicitly concerned with prediction, replication, and anything other than one-off events. The point being that the frequentist properties of our statistical models are important. Or, put slightly differently, our models have frequentist properties, whether we’re Bayesian or not, and whether the models are Bayesian or not.

So, Simonsohn was right to run simulations to test Rouder’s Bayesian t-test, even if he was wrong to then act as if a statistically significant difference of 0.39% is any more meaningful than no effect at all, and even if he glosses over the fact that null hypothesis testing doesn’t really do any better than the Bayesian method he’s critiquing. And Rouder is right that it’s good to know what the best model is given a set of observations, even if he’s (also) wrong about what simulations tell us.

Posted in SCIENCE!, statistical modeling

Book Review: IPython Notebook Essentials, Ch. 1

On a Google+ statistics group, sometime around the new year, I saw an offer for a free book on IPython Notebooks in exchange for a review of the book. I wanted to learn about IPython Notebooks, so I offered to write a review. The spring semester had started before I got my copy of the book, and what I should have foreseen happened: I got too busy to read the book and write the review. Well, I’m making a good faith effort to fulfill my side of the deal now. Better late than never, right?

The book is IPython Notebook Essentials, written by L. Felipe Martins, and published by Packt Publishing. I’ve read Chapters 1 and 2 so far. Both are clear and well-written, with only a few minor problems, and those possibly as much due to my own proclivities as anything else.

Chapter 1 covers the basics of installing a suitable scientific python package (e.g., Anaconda, which I’m a big, big fan of), creating a notebook, and executing some basic code in that notebook. The code given is a bit goofy, implementing a model of change in coffee temperature as a function of time and initial temperature, but introductory code in this kind of book is often fairly ridiculous.

I was already reasonably familiar with Python before getting this book, so it’s a bit difficult for me to judge the pedagogical effectiveness of the code in Chapter 1 for someone with little to no programming experience. Of course, the book isn’t meant to be a comprehensive introduction to Python, so an introduction like this has to fit into an awkward space: it can’t be too boring for someone with some (or a lot of) experience, but it also can’t be too confusing or complicated for less experienced programmers. I think this book does a pretty good job balancing the needs of a potentially wide variety of reader experience levels.

One small issue I have with the code is that the convention for variable naming uses a lot of full words and underscores. The benefit of this is, of course, that the names are unambiguous and you mostly know what variables are just by looking at their names. The downside is that the code often spans multiple lines in the book, which can make it slightly more difficult to read. I tend to err very much in the other direction in my own code, giving variables short, but sometimes confusing, names. So, my taste runs a bit against the convention in this book. I don’t think the author should have gone as far as I often do, but I think some of the code snippets in the book would look nicer and be a bit easier to read if the variable names had been a bit shorter.

Okay, so Chapter 1 is fine. It’s not spectacular, but it does its job. I’ll give it a B.

Posted in Python

More thoughts on GRT wIND

Since yesterday’s post on GRT wIND, I’ve been thinking quite a bit about the model. I am now reasonably confident I know why the two models I simulated didn’t produce identical data, my intuition still tells me that the model is over-parameterized, and I am now back to feeling like GRT wIND doesn’t solve the problem of identifiability of failures of decisional separability. I say “back to” because, when I first heard about the model, my initial thought was that it didn’t solve this problem. I thought about the model some more, and my uncertainty about this issue increased a good bit, but now that I’ve thought about it even more, I’m back to where I started.

However, I don’t want this to be a purely negative effort, so I will use my discussion of why I’ve come full circle with respect to GRT wIND to illustrate one of the most valuable, and (possibly) counterintuitive, aspects of mathematical modeling. Here’s the punchline, in case you don’t want to wade through the technical details below: mathematical modeling allows us to formulate and ask incisive questions. More below, though please note that if you haven’t read yesterday’s post, this one isn’t going to make much sense.

As I discussed yesterday, there are a number of identifiability issues with the model for the standard 2 \times 2 GRT task. The task is identification of stimuli defined by the factorial combination of two levels on each of two dimensions (e.g., red square, purple square, red octagon, purple octagon), and the model consists of bivariate Gaussian perceptual distributions corresponding to the stimuli and two decision bounds that define response regions corresponding to each possible response (in the standard task, there is a unique response for each stimulus).

One issue, as mentioned above, is that failures of decisional separability are, in general, not identifiable. Recall that decisional separability is defined as decision bounds that are parallel to the coordinate axes. In the simplest case, you have linear bounds. If you have a failure of decisional separability with linear bounds, you can apply a sequence of linear transformations to produce an empirically equivalent model in which decisional separability holds. You can see some more details about this here, and you can read the proof in my 2013 Psychonomic Bulletin & Review paper with Robin Thomas (that’s a link to a paywall, so if you want a copy, just email me at noahpoah at any of a number of different domains, including the obvious one, given where you’re reading this).

Another issue is that the means and marginal variances of the perceptual distributions are not simultaneously identifiable. If you have a model with decisional separability and arbitrary marginal variances in the perceptual distributions, it’s easy to transform each distribution so that each marginal variance is one and the means are shifted to preserve the predicted response probabilities. I showed how this works in yesterday’s post.

I contend that the first issue applies to GRT wIND (contra the claims of Soto, et al., in the paper that introduced the model), and given the first issue, the second issue seems tied up in the over-parameterization problem I can’t seem to let go of.

As I discussed yesterday, GRT wIND has a group-level set of perceptual distributions with arbitrary mean vectors, marginal variances, and correlations (with one distribution fixed at the origin and with unit marginal variances). The individual-level model for subject k has a scaled version of the group level model for its perceptual distributions and two linear decision bounds. As I mentioned yesterday, the scaling can be formalized as a linear transformation of the group-level covariance matrix \Sigma_{A_i,B_j} (where, e.g., A_i = “red” and B_j = square):

    \begin{eqnarray*}\mathbf{\Sigma}_{k,A_i,B_j} &=& \mathbf{T\Sigma}_{A_i,B_j}\mathbf{T}^t\\ &=& \left[\begin{array}{cc}\frac{1}{\sqrt{\kappa_k\lambda_k}} & 0\\ 0 & \frac{1}{\sqrt{\kappa_k(1-\lambda_k)}}\end{array}\right]\left[\begin{array}{cc}\sigma_{xx} & \sigma_{xy}\\ \sigma_{yx} & \sigma_{yy}\end{array}\right]\left[\begin{array}{cc}\frac{1}{\sqrt{\kappa_k\lambda_k}} & 0\\ 0 & \frac{1}{\sqrt{\kappa_k(1-\lambda_k)}}\end{array}\right]\end{eqnarray*}

It occurred to me today that you can formalize this as a (possibly illicit) affine transformation that transforms the covariance matrix as desired while leaving the perceptual distribution’s mean vector unchanged:

    \begin{equation*}\mathbf{T} + \mathbf{(I-T)}\boldsymbol{\mu} = \left[\begin{array}{cc}\frac{1}{\sqrt{\kappa_k\lambda_k}} & 0\\ 0 & \frac{1}{\sqrt{\kappa_k(1-\lambda_k)}}\end{array}\right] + \left[\begin{array}{c}\left(1-\frac{1}{\sqrt{\kappa_k\lambda_k}}\right)\mu_x \\ \left(1-\frac{1}{\sqrt{\kappa_k(1-\lambda_k)}}\right)\mu_y\end{array}\right]\end{equation*}

This is possibly illicit because it seems like it could cause some problems to include the mean in the transformation. But this is beside the main point.

The main point being that each individual subject has a transformation that scales the whole space (\kappa) and scales the two dimensions with respect to one another (\lambda), and, together with the two decision bounds for subject k, these produce a standard 2 \times 2 Gaussian GRT model. Hence, the model for subject k can be transformed to ensure that decisional separability holds, and once decisional separability holds, the marginal variances and means of each distribution can be scaled and shifted so that all marginal variances are one. Put slightly differently, in addition to the scaling transformation described above, subject k can also have rotation and shear transformations to enforce decisional separability and the unit-variance transformation to force all the action onto the means.

There’s no mathematical rationale for allowing each subject to have \kappa- and \lambda-based scaling but not rotations, shear transformations, and unit variances. Rotation, shear transformation, and scaling to unit variance all produce models that are empirically equivalent to the model pre-transformation. All of which is to say that for every GRT wIND model with failures of decisional separability, there is an empirically equivalent model in which decisional separability holds. Ergo, GRT wIND does not, by itself, solve the identifiability problems discussed in Silbert & Thomas (2013).
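If you want to see the underlying fact in action without any algebra, here’s a quick Monte Carlo sketch (with made-up parameter values, mine, not anyone’s fitted model): applying the same invertible linear transformation to a perceptual distribution and to a linear decision bound leaves the predicted response probabilities unchanged, which is why a bound that isn’t parallel to a coordinate axis can always be straightened out.

    # Monte Carlo check (made-up values): transforming both the perceptual
    # distribution and a linear decision bound by the same invertible matrix
    # leaves response probabilities unchanged, so a failure of decisional
    # separability can be mapped onto a model in which it holds.
    import numpy as np

    rng = np.random.default_rng(0)
    mu = np.array([1.0, 0.5])
    Sigma = np.array([[1.5, 0.4],
                      [0.4, 1.0]])

    a, b = np.array([1.0, 0.5]), 0.8     # original bound a.x = b (not axis-parallel)

    # shear that maps the bound's normal onto the x axis (so DS holds afterward)
    S = np.array([[1.0, 0.5],
                  [0.0, 1.0]])
    a_new = np.linalg.inv(S).T @ a       # equals [1, 0]: an axis-parallel bound
    mu_new = S @ mu
    Sigma_new = S @ Sigma @ S.T

    x = rng.multivariate_normal(mu, Sigma, 200_000)
    x_new = rng.multivariate_normal(mu_new, Sigma_new, 200_000)

    p_original = np.mean(x @ a > b)
    p_transformed = np.mean(x_new @ a_new > b)
    print(p_original, p_transformed)     # agree up to Monte Carlo error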

Of course, there may be psychological reasons for allowing one type of transformation and not another. In my second review of the more recent GRT wIND paper, I suggested one such line of reasoning. Specifically, I suggested that scaling transformations could be allowed while rotations and shear transformations were not by invoking the notion of primary perceptual dimensions. The basic argument along these lines would be that subjects share a common set of primary dimensions, and so may vary with respect to the overall salience of the stimulus set (\kappa) and the salience of one dimension relative to the other (\lambda), but not with respect to how the perceptual distributions are aligned with the coordinate axes in perceptual space (rotations, shear transformations).

This approach doesn’t rule out re-scaling to enforce unit variances, and I strongly suspect that this is linked to the over-parameterization problem I (still) think the model has, but I’m going to leave this as an open issue for now, since I want to get back to the positive message I’m trying desperately to focus on (and, if I’m being honest, because I started working on trying to prove that this problem exists, and the elements of the scaled, rotated, and sheared covariance matrices were getting kind of ridiculous to work with).

To repeat the punchline from above: mathematical modeling allows us to formulate and ask incisive questions. In this case, it allows us to formulate and ask incisive questions about which kinds of transformations of perceptual space are consistent with human cognition and behavior. More to the point, it allows us to ask this about (absolute and relative) scaling, rotation, and shear transformations.

There has long been compelling evidence that scaling can provide a good account of at least some perceptual behavior (e.g., Nosofsky’s 1986 and 1987 papers on selective attention). There is also some evidence that rotation of at least some dimensions causes substantial changes in behavior (e.g., a 1990 paper by Melara & Marks). I’m not aware of any research on shear transformations of perceptual space, but given that scaling and rotation have been investigated with some success, shear transformations seem very likely to be amenable to empirical and theoretical inquiry.

In the end, then, mathematical analysis (of GRT, in this case) allows us to see where we might productively aim future efforts. It enables us to ask very specific questions about scaling, rotation, and shear transformations. Perhaps unfortunately, though, because of the empirical equivalence of the (rotation and shear) transformed and untransformed GRT models, it does not (yet) give us the tools to answer these questions.

Of course, if this case is at all illustrative of a general state of affairs, working out new mathematical tools to answer these questions seems very likely to allow us to formulate and ask any number of new, and as of yet unanticipated, incisive questions.

Lather, rinse, repeat, ad nauseam.

Posted in statistical modeling

A critical assessment of GRT wIND

I’ve blogged about GRT (General Recognition Theory) at least a few times before (e.g., here, here, here). The third of those links goes to an introductory GRT post (#1 of an unknown number of planned posts). Here’s the super-abbreviated version: GRT is a multidimensional extension of signal detection theory (SDT). The simplest GRT model consists of four bivariate Gaussian densities (perceptual distributions) and two decision bounds. These are used to model identification confusion patterns for stimuli consisting of a factorial combination of two levels on each of two dimensions (e.g., square or octagon on a shape dimension, red or purple on a hue dimension).

Robin Thomas and I have done some theoretical work with this model recently. More specifically, we wrote a paper in which we described how failures of decisional separability (i.e., decision bounds that are not parallel to the coordinate axes) are not identifiable in the 2 \times 2 model. The basic issue is that, with linear bounds, affine transformations of the modeled perceptual space can always map a model with failure of DS onto a model in which DS holds. (This is illustrated in the second ‘here’ link above.)

More recently, Soto, et al. developed a multilevel GRT model (called GRT wIND – GRT with Individual Differences) and claimed to have solved the identifiability problem that Robin and I detailed (pdf). Still more recently, Soto and Ashby wrote a second paper with GRT wIND (pdf), which I reviewed (twice) for the journal Cognition. Yesterday, I got an email (as a member of the Society for Mathematical Psychology) announcing an R package for fitting GRT wIND (and carrying out various other GRT-related analyses).

The paper I reviewed has since been accepted for publication, and I confess some measure of frustration with the fact that Soto and Ashby (and the editor who handled the paper for Cognition) seem to have completely ignored the comments and suggestions from my second (and signed) review.

One key issue I raised was a concern that the GRT wIND model is over-parameterized, i.e., that given a data set, the parameters are not uniquely identifiable. In part motivated by my frustration, in (larger) part because I actually want to know whether my concern is valid, and in part because they made the model-fitting software available, I decided to try to validate (or not) my concern about over-parameterization in GRT wIND.

Some GRT wIND background first: The group level model consists of four bivariate Gaussian densities. Each has a mean vector and a covariance matrix. The individual subject level for subject k consists of four bivariate Gaussian densities with the same means and transformed covariance matrices as well as two decision bounds, the latter possibly not parallel with the coordinate axes.

If the group level covariance matrix corresponding to stimulus A_i,B_j (i.e., the i^{th} level of the A dimension and the j^{th} level of the B dimension) is given by:

    \begin{equation*}\mathbf{\Sigma}_{A_i,B_j} = \left[\begin{array}{cc}\sigma_{xx} & \sigma_{xy}\\ \sigma_{yx} & \sigma_{yy}\end{array}\right]\end{equation*}

then the corresponding individual level matrix for subject k is given by:

    \begin{equation*}\mathbf{\Sigma}_{k,A_i,B_j} = \left[\begin{array}{cc}\frac{\sigma_{xx}}{\kappa_k \lambda_k} & \frac{\sigma_{xy}}{\kappa_k\sqrt{\lambda_k(1-\lambda_k)}}\\ \frac{\sigma_{yx}}{\kappa_k\sqrt{\lambda_k(1-\lambda_k)}} & \frac{\sigma_{yy}}{\kappa_k (1-\lambda_k)}\end{array}\right]\end{equation*}

The two additional parameters - \kappa and \lambda - scale the subject’s space overall (\kappa) and scale the two dimensions relative to one another (\lambda). As \kappa increases, the marginal variances decrease, thereby increasing the salience of the stimuli and reducing the number of (predicted) errors. As \lambda increases, the marginal variance on the x axis decreases and the marginal variance on the y axis increases, shifting the relative salience of the two dimensions. As Soto, et al., note, equivalent changes in response probabilities could be obtained by leaving the covariance matrices alone and shifting the means of the group level densities, instead.

As a brief aside, it’s interesting (to me) to note that you can represent the individual level matrix as a linearly-transformed group level matrix:

    \begin{eqnarray*}\mathbf{\Sigma}_{k,A_i,B_j} &=& \mathbf{T\Sigma}_{A_i,B_j}\mathbf{T}^t\\ &=& \left[\begin{array}{cc}\frac{1}{\sqrt{\kappa_k\lambda_k}} & 0\\ 0 & \frac{1}{\sqrt{\kappa_k(1-\lambda_k)}}\end{array}\right]\left[\begin{array}{cc}\sigma_{xx} & \sigma_{xy}\\ \sigma_{yx} & \sigma_{yy}\end{array}\right]\left[\begin{array}{cc}\frac{1}{\sqrt{\kappa_k\lambda_k}} & 0\\ 0 & \frac{1}{\sqrt{\kappa_k(1-\lambda_k)}}\end{array}\right]\end{eqnarray*}

The point here being that the Silbert & Thomas results concerning linear transformations and identifiability of decisional separability apply to the individual subject level models in GRT wIND. Soto, et al., say that the individual subjects share a perceptual space, but in reality they share some, but not all, properties of the group level perceptual space. In GRT wIND, expansion/contraction of the space and stretching/shrinking the group space along one or another coordinate axis are acceptable transformations, but rotations, shears, etc…, are not. So, GRT wIND solves the decisional separability identifiability issue if this distinction is valid for a given stimulus set and experimental subject group, but it doesn’t solve the problem in general.
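For what it’s worth, the equivalence of the element-wise definition and the \mathbf{T\Sigma}\mathbf{T}^t formulation is easy to check numerically (the values of \Sigma, \kappa, and \lambda below are arbitrary, made up for illustration):

    # Numerical check (arbitrary values): the element-wise GRT wIND scaling of
    # a group-level covariance matrix equals the transformation T Sigma T^t.
    import numpy as np

    kappa, lam = 1.7, 0.35
    Sigma = np.array([[1.2, 0.5],
                      [0.5, 0.9]])

    # element-wise definition of the individual-level matrix
    Sigma_k = np.array([
        [Sigma[0, 0] / (kappa * lam),
         Sigma[0, 1] / (kappa * np.sqrt(lam * (1 - lam)))],
        [Sigma[1, 0] / (kappa * np.sqrt(lam * (1 - lam))),
         Sigma[1, 1] / (kappa * (1 - lam))],
    ])

    # the same matrix via T Sigma T^t
    T = np.diag([1 / np.sqrt(kappa * lam), 1 / np.sqrt(kappa * (1 - lam))])
    print(np.allclose(Sigma_k, T @ Sigma @ T.T))   # True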

But I digress.

A little more background about GRT: In some of the oldest papers on GRT, there are mentions of the fact that the means and marginal variances of the perceptual distributions are not both uniquely identifiable (this is closely related to the note above about scaling the variances vs the means in GRT wIND). I long assumed that this was true, but to the best of my knowledge, no one had proven it. I figured out that an affine transformation can be applied to each perceptual distribution (with arbitrary means and covariance matrices) in a GRT model to force it to have unit variances and appropriately shifted means (where by “appropriately” I mean that the shift in the means exactly balances the scaled variances, producing the same predicted response probabilities for a given set of decision bounds, assuming decisional separability holds, which see above with regard to identifiability). More specifically, if you have decision criteria c_x and c_y, the following transformation gives you unit variances and appropriately shifted means:

    \begin{equation*} \mathbf{T} + \Delta = \left[\begin{matrix}\frac{1}{\sqrt{\sigma_{xx}}} & 0\\ 0 & \frac{1}{\sqrt{\sigma_{yy}}}\end{matrix}\right] + \left[\begin{matrix}c_x -\frac{c_x}{\sqrt{\sigma_{xx}}}\\c_y - \frac{c_y}{\sqrt{\sigma_{yy}}}\end{matrix}\right] \end{equation*}

Application of this transformation produces a new covariance matrix \mathbf{R} = \mathbf{T}\Sigma\mathbf{T}^T and new mean vector \boldsymbol{\eta} = \mathbf{T}\boldsymbol{\mu} + \Delta. The transformed covariance matrix is the correlation matrix:

    \begin{eqnarray*} \mathbf{T}\Sigma\mathbf{T}^T &=& \left[\begin{matrix}\frac{1}{\sqrt{\sigma_{xx}}} & 0\\ 0 & \frac{1}{\sqrt{\sigma_{yy}}}\end{matrix}\right]\left[\begin{matrix}\sigma_{xx} & \sigma_{xy}\\ \sigma_{yx} & \sigma_{yy} \end{matrix} \right]\left[\begin{matrix}\frac{1}{\sqrt{\sigma_{xx}}} & 0\\ 0 & \frac{1}{\sqrt{\sigma_{yy}}}\end{matrix}\right]\\ &=&\left[\begin{matrix}\frac{\sigma_{xx}}{\sqrt{\sigma_{xx}\sigma_{xx}}} & \frac{\sigma_{xy}}{\sqrt{\sigma_{xx}\sigma_{yy}}}\\\frac{\sigma_{yx}}{\sqrt{\sigma_{xx}\sigma_{yy}}} & \frac{\sigma_{yy}}{\sqrt{\sigma_{yy}\sigma_{yy}}}\end{matrix}\right]\\ &=& \left[\begin{matrix}1 & \rho_{xy} \\ \rho_{yx} & 1\end{matrix}\right] \end{eqnarray*}

And the transformed mean vector is shifted by the signed distance (in standard deviation units) between the mean and the decision criterion (on each dimension):

    \begin{eqnarray*} \boldsymbol{\eta} &=& \mathbf{T}\boldsymbol{\mu} + \Delta\\ &=& \left[\begin{matrix}\frac{1}{\sqrt{\sigma_{xx}}} & 0\\ 0 & \frac{1}{\sqrt{\sigma_{yy}}}\end{matrix}\right]\left[\begin{matrix}\mu_x\\\mu_y\end{matrix}\right] + \left[\begin{matrix}c_x -\frac{c_x}{\sqrt{\sigma_{xx}}}\\c_y - \frac{c_y}{\sqrt{\sigma_{yy}}}\end{matrix}\right]\\ &=& \left[\begin{matrix}c_x + \frac{\mu_x-c_x}{\sqrt{\sigma_{xx}}}\\c_y + \frac{\mu_y-c_y}{\sqrt{\sigma_{yy}}}\end{matrix}\right] \end{eqnarray*}

Note that this transformation is invertible. The upshot of this is that a standard 2 \times 2 GRT model can be scaled arbitrarily. Or, to repeat the point made above, the means and marginal variances are not both identifiable.
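Here’s a quick numerical illustration of that invariance (the means, covariance matrix, and criteria are made up; scipy’s bivariate normal CDF gives the predicted probability of the response region below both criteria):

    # Check (made-up values) that rescaling to unit variances, with the
    # compensating shift of the means, preserves predicted response probabilities.
    import numpy as np
    from scipy.stats import multivariate_normal

    mu = np.array([0.8, -0.3])
    Sigma = np.array([[2.0, 0.7],
                      [0.7, 1.5]])
    c = np.array([0.25, -0.1])                    # decision criteria c_x, c_y

    T = np.diag(1.0 / np.sqrt(np.diag(Sigma)))    # scale each marginal to variance 1
    Delta = c - T @ c                             # shift that leaves the criteria fixed

    eta = T @ mu + Delta                          # transformed mean vector
    R = T @ Sigma @ T.T                           # transformed covariance (= correlation matrix)

    # probability of responding in the region below both criteria
    p_before = multivariate_normal(mu, Sigma).cdf(c)
    p_after = multivariate_normal(eta, R).cdf(c)
    print(p_before, p_after)                      # equal up to numerical error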

So, what does this all have to do with GRT wIND? My intuition was that this identifiability issue applies to GRT wIND, too. I didn’t feel like working through all the algebra to prove that this is the case, so I did the next best thing. Specifically, I simulated individual subject data using two GRT wIND models. In one, the group level perceptual means describe a square centered on the origin of the space with sides of length four, and the group level covariance matrices have randomly generated variances and correlations. In the second model, I applied the “unit-variance” transformation to the model (with the coordinate axes as the ‘decision criteria’).

I also randomly generated a set of 50 \kappa, \lambda, \phi, and \omega values, where \kappa and \lambda are the GRT wIND parameters described above, and \phi specifies the angle between c_y and the x axis, and \omega specifies the angle between c_x and c_y:

[Figure: linear failure of decisional separability]

The following figure illustrates the mean (red), median (blue), and maximum (green) absolute error between the predicted identification confusion probabilities for the two models:

[Figure: grt_wind_rescale_error]

And here are the \phi and \omega values:

[Figure: grt_wind_noDS_angles]

And here are the \lambda and \kappa values:

[Figure: grt_wind_lam_kap]

And here is the R script that runs the simulations and generates these plots.

I’ve run the simulation a number of times, and, while sometimes the maximum errors are larger (~.15 or thereabouts), the two models consistently produce extremely similar, in many cases essentially identical, confusion matrices.

I take this as strong support for my intuition that GRT wIND is over-parameterized. A mathematical proof would be better, of course, but simulation results provide relevant evidence. Now, I’m not totally sure why there is any deviation between the models, though there is no correlation between any of the randomly generated parameters and the error. But because there is some error, the simulations don’t provide the kind of airtight case a proof would. Nonetheless, it is all but unimaginable to me that the true parameters of the two models could be recovered by fitting sets of confusion matrices that are this similar to one another.

As it happens, I would like to fit GRT wIND models to simulated data to test how accurately known parameters are recovered and to come at the problem from the opposite direction (i.e., instead of seeing if different parameter values can generate essentially identical data, seeing if different parameter values can provide essentially identical model fits to identical data). Alas, in the R package, the function grt_wind_fit crashes R, and the function grt_wind_fit_parallel returns an error. I’ve opened up an issue on the GitHub repository, so maybe I’ll try again when/if I can get the code to work on my computer(s). And, I suppose, depending on how some of my colleagues and collaborators react, I may also get to work on a more rigorous mathematical treatment of the matter.

Posted in SCIENCE!, statistical modeling

Vox lies with statistics

Okay, maybe lies is too strong a word, but Vox’s German Lopez sure does a good job illustrating numerical dishonesty in this piece on marijuana use in Colorado and Washington since recreational legalization.

The headline is “Marijuana use rises in states with legalization,” and here are the first two take-home bullet points:

  • Past-month marijuana use in Colorado and Washington state rose following the legalization of personal possession, according to new data from a federal survey.
  • Nationwide, Americans reported smaller percentage point increases in marijuana use than people in Colorado or Washington.

Okay, so that’s maybe interesting, but you can’t really interpret increases in Colorado and Washington without knowing something about what happened in other states, none of which legalized marijuana.

Here’s the data. If you bother to look at it, you’ll see that there were also statistically-significant, similar-magnitude increases in the District of Columbia, Georgia, Maine, Maryland, Michigan, Missouri, and New Hampshire.

And this points to a problem with the comparison between Colorado and Washington, on the one hand, and the nation as a whole, on the other. It seems a priori very likely that marijuana use varies fairly substantially across states, and the data confirm this. A single pair of numbers for the whole country obscures this variation.

Nothing Lopez writes in the article is technically wrong. It’s true that past-month use increased following legalization in Colorado and Washington. It’s true that these were the first states to legalize marijuana in 2012. It’s true that use increased less in the nation as a whole than in either state.

But it’s also true that use increased in a number of states without legalization.

Obviously, I don’t know why Lopez wrote this article the way he did. Whether or not it was written with the intention to deceive, it exploits the careless reader’s inclination to conflate correlation and causation.

The fact that the data is directly accessible means that if you care to look, you will quickly see how meaningless the article really is. But it also means that it’s very easy to take what is written at face value under the assumption that it would be ridiculous to link to data that undermines the whole point of the article.

 

Posted in SCIENCE!, statistical description

Yglesias makes an innumerate (non-)funny

Yesterday, Matt Yglesias tried to point out the absurdity of a Joe Klein column by turning the tables with the numbers Klein invoked, thereby mimicking and mocking Klein’s argument.

Yglesias quotes Klein:

Blacks represent 13% of the population but commit 50% of the murders; 90% of black victims are murdered by other blacks. The facts suggest that history is not enough to explain this social disaster.

Yglesias then links to the relevant FBI data and points out that:

Back in 2011, the most recent year for which data is available, a staggering 83 percent of white murder victims were killed by fellow Caucasians.

He uses this to (jokingly) make the exaggerated case that white-on-white murder is out of control, that white violence is a social disaster, and so on.

Now, I understand what he’s trying to do here, but, either because of innumeracy or willful negligence, his whole jokey approach fails miserably, at least as long as the reader isn’t innumerate.

Note which parts of the Klein quote above have analogs in the Yglesias quote and which parts do not. The key to Klein’s argument is that “Blacks represent 13% of the population but commit 50% of the murders,” but Yglesias only invokes the white analog to Klein’s “90% of black victims are murdered by other blacks.”

Yglesias’ essay depends on the reader not noticing that the relevant analog to the important part of Klein’s argument is that whites represent 72% of the population and commit (approximately) 50% of the murders (based on the FBI data linked above).

To be clear, this is not intended as a defense of (or an argument against) Klein. It’s just a response to Yglesias’ shiftiness (or ignorance) and his willingness to exploit readers’ possible innumeracy to make his point. It’s an unfunny failure of an argument.

Posted in statistical description

Truly random

In today’s new What If?, Randall Munroe discusses, in part, randomly aiming a laser at the sky, writing “if you aimed in a truly random direction, you would have an almost 50% chance of hitting the Earth.”

This is one of those cases wherein my training in linguistics and statistics conflict with one another. Kind of like how people use “a fraction of” to indicate a small fraction of something, despite the fact that, for example, 999,999/1,000,000 is a fraction, too. As is, for that matter, 10/3. So, saying that this year’s budget deficit is a fraction of last year’s isn’t, technically speaking, informative. It certainly does not imply that this year’s deficit is a small proportion of last year’s, though this is pretty much always what people mean when they say things like this.

So, my linguistics training lets me understand that the phrase is used to mean “a small fraction,” and that’s fine. But my statistics (and assorted math) training just can’t let it go.

I have similar issues with using a phrase like “truly random” to mean “uniformly distributed.” Clearly, Munroe means the latter, but a laser could be pointed randomly as if it were governed by a Gaussian distribution, which, with sufficiently small variance(s) and an appropriate mean (vector), would produce a very small chance of hitting the Earth.
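To make the contrast concrete, here’s a quick simulation of my own (the tight Gaussian’s variance is just an assumption for illustration): a direction drawn uniformly over the sphere points below the horizon about half the time, while a direction drawn from a tight Gaussian around straight up essentially never does.

    # Pointing a laser in a "random" direction: uniform over the sphere vs.
    # Gaussian-perturbed around straight up (made-up variance).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # uniform directions: normalize standard-normal vectors
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    print(np.mean(v[:, 2] < 0))                 # ~0.5: about half point below the horizon

    # "random" but Gaussian: small angular perturbations of the straight-up direction
    tilt = rng.normal(0.0, 0.05, size=n)        # radians; tiny variance
    print(np.mean(np.abs(tilt) > np.pi / 2))    # ~0: essentially never hits the ground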

Again, my linguistics tells me that it’s fine to use “truly random” to mean “uniformly distributed” in regular old language, but my statistics training just can’t let this kind of thing go.

It’s particularly hard to digest when it comes from a mathematically sophisticated writer in a semi-technical setting.

 

 

Posted in language, mildly informative filler

What counts as an upset?

Five Thirty Eight has been discussing the World Cup and their predictions based on what I can only assume is a fancy-pants Bayesian statistical model (done in Excel, natch).

A few days ago, Nate Silver his own self wrote a post on this topic. In trying to give it a counter-intuitive hook, the post got a little too cute for its own good. The headline sums up what’s wrong, but I understand that headlines are often written separately from the article proper, so we might forgive statistical silliness up top. But the same statistical silliness makes an appearance in the body. Silver writes:

For instance, the probability of correctly identifying the winners in each of the first four knockout matches — Brazil over Chile, Colombia over Uruguay, the Netherlands over Mexico and Costa Rica over Greece — was about 23 percent, or one chance in 4.3. And the chance of going 12 for 12, as the FiveThirtyEight favorites have done so far, is just one in 75.

It’s an upset, in other words, when all the favorites prevail. On average, we’d have expected three or four upsets through this point in the knockout round.

How interesting! They’re favorites, but it’s still, somehow, a (huge) upset when they prevail! What gives?

What gives is that Silver is comparing a single outcome – all favorites prevail – to every other possible outcome. With 12 games (8 in the first round, 4 in the second), there are 2^{12} = 4096 possible outcomes.

(As often happens to me with counting problems like this, I came up with that answer fairly quickly, then immediately doubted the assumptions that led me to this answer were correct. Specifically, I was worried that I was somehow counting second round outcomes that were ruled out by first round outcomes (e.g., Brazil winning in the second round after losing in the first round). I’m pretty sure 2^{12} is right, though, and we can spell out a slightly simplified version to see why.

Suppose the semi-finals feature, say, Germany vs Brazil in one game and Argentina vs Holland in the other. Call this ’round one’ and call the final ’round two.’ Using my logic, there should be 2^3 = 8 possible outcomes in these three games, and indeed there are. Listing all possible combinations of winners: (1) Brazil, Argentina, Brazil; (2) Brazil, Argentina, Argentina; (3) Brazil, Holland, Brazil; (4) Brazil, Holland, Holland; (5) Germany, Argentina, Germany; (6) Germany, Argentina, Argentina; (7) Germany, Holland, Germany; (8) Germany, Holland, Holland.

We could, in theory, list out all the possibilities for the rounds with 8 and 4 games – the real rounds one and two – to get the number of possible outcomes stated above, 2^{8+4} = 2^{12}.)

Okay, so, when Silver says that the teams that have won so far were all favorites, he is saying that the single outcome we’ve observed is the single most probable outcome. If it weren’t the most probable single outcome, then at least one team that won wouldn’t have been a favorite.

On the other hand, when he says that it’s an upset that all the favorites have won, he’s saying that this single most probable outcome is less probable than the sum of the probabilities of all of the other possible outcomes.

Given that there are 4095 other possible outcomes, the favorites would have to have been favored to an absurd degree for the sum total of all other outcomes to be less probable.
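To put some rough numbers on that (using Silver’s own 1-in-75 figure plus simple arithmetic):

    # Rough arithmetic on Silver's numbers: how heavily favored would 12 favorites
    # have to be for "all favorites win" to be more likely than not?
    print(2 ** 12)               # 4096 possible outcomes across the 12 games
    print((1 / 75) ** (1 / 12))  # ~0.70: per-game win probability implied by a 1-in-75 parlay
    print(0.5 ** (1 / 12))       # ~0.94: per-game probability needed for the parlay to exceed 50%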

The point being that it’s not particularly interesting to compare the single outcome of all favorites winning to every possible other outcome. It gets you a catchy headline, I guess, but it doesn’t provide any useful insight into how the tournament is playing out.

Posted in SCIENCE!, statistical description

Partial progress plotting PyMC properties

As noted yesterday, I figured out how to sample PyMC chains in parallel recently. I’ve also been working on some plotting functions to assess chain convergence and autocorrelation, goodness of model fit, and to illustrate a fitted (multilevel GRT) model.

In assessing chain convergence, you can calculate the Gelman-Rubin statistic \hat{R}, which is, in essence, an F-like statistic providing an ANOVA-like test of any overall differences between the values in your chains. PyMC has a number of useful diagnostic tools built in, including a gelman_rubin method. It also has methods for calculating and plotting autocorrelation functions and histograms of chains, but for some reason, with the model I’m dealing with, the out-of-the-box plotting functions aren’t working well for me. So, I wrote my own, and for each parameter, it produces a figure like this (illustrating \hat{R}, the autocorrelation (computed with the statsmodels time series acf function), and each of the three chains’ distributions of values for a parameter called mu_mx):

[Figure: plot_chains_example]

 

I’m pretty happy with it, even if there are still some kinks to be worked out (e.g., the x-axis label for the top left panel is clipped by the bottom left panel; for some parameters, the tick labels under the histogram overlap).
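In case it’s useful, here’s a stripped-down sketch of the kind of per-parameter figure I mean. This is not the actual function: it assumes the chains are already available as 1-D numpy arrays, it uses the statsmodels acf function mentioned above, and it leaves out the \hat{R} annotation.

    # Minimal sketch (not the actual function): per-parameter diagnostics with
    # autocorrelation and chain histograms, given a list of 1-D sample arrays.
    import numpy as np
    import matplotlib.pyplot as plt
    from statsmodels.tsa.stattools import acf

    def plot_chains(chains, name, max_lag=50):
        """chains: list of 1-D numpy arrays, one per chain, for a single parameter."""
        fig, (ax_acf, ax_hist) = plt.subplots(2, 1, figsize=(6, 6))
        for i, ch in enumerate(chains):
            ax_acf.plot(acf(ch, nlags=max_lag, fft=True), label=f"chain {i}")
            ax_hist.hist(ch, bins=50, histtype="step", density=True)
        ax_acf.set_xlabel("lag")
        ax_acf.set_ylabel("autocorrelation")
        ax_acf.legend()
        ax_hist.set_xlabel(name)
        ax_hist.set_ylabel("density")
        fig.tight_layout()
        return fig

    # e.g., plot_chains([chain_0, chain_1, chain_2], "mu_mx")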

I also wrote a function to plot observed and predicted (identification-confusion) response probabilities for the model I’m working with:

[Figure: predobs_example]

The vertical error bars indicate 95% highest density intervals, with the circles indicating the mean posterior value (on the y-axis) and the observed value (on the x-axis). The closer the symbols are to the diagonal line, the better the fit.

Finally, I wrote a function to illustrate the fitted model for each individual subject (the small panels) and the group-level model (the big panel). These two figures are for models fit to the non-speech auditory perception data presented in my 2009 paper with Lentz and Townsend (see my CV for more details, if you’re so inclined). This one illustrates the multilevel GRT model fit to the frequency-by-duration, broadband noise stimuli:

[Figure: fxd_ml_fit]

And this one illustrates the model fit to the pitch-by-timbre (F0-by-spectral-prominence-location) data:

[Figure: pxt_ml_fit]

Again, there are probably some ways these could be improved, but overall I’m quite happy with how they look. I’m using my recently-acquired ellipse-plotting knowledge for its intended purpose (for the purpose I intended for it, anyway), which is nice, and it’s very easy to make this kind of multi-panel plot with matplotlib, which is also nice.

It feels good to have this model functioning in a way that others could, in principle, use. The two papers I’ve published using this model presented analyses done with WinBUGS, and the only way I could figure out how to get WinBUGS to fit this model was by feeding it a giant array of pre-calculated bivariate normal CDF values and using trilinear interpolation. The model works, but it’s ungainly. I don’t think I would want anyone else to mess with it, and I don’t imagine others would particularly want to do so. This isn’t a recipe for reproducible research.

I never could figure out how to get this model to work in JAGS, which is a superior, cross-platform version of BUGS. JAGS would just hang when I tried to feed it the giant CDF array (BUGS would always seem like it was hanging, but if I went away for long enough, it would have a model fit for me – no such luck with JAGS).

I haven’t tried very hard (at all, really) to get this model working with Stan, though I did request multivariate normal CDFs as a feature (it’s on the to-do list, so maybe I’ll come back to work on this model in Stan later on). Given that it’s functioning in PyMC, I don’t feel much need to get it working in Stan right now.

 

 

Posted in Python, statistical graphics, statistical modeling

Parallel processing of PyMC models in IPython

I’ve been learning how to use PyMC to fit Bayesian models over the last few months, and recently I decided I wanted to figure out how to sample separate chains in parallel on different cores on whatever machine I’m using.

I poked around a bit, looking at various multi-processing Python packages, and then I found this IPython notebook that walks through a fairly simple example using parallel processing machinery in IPython.

It’s easy to run multiple chains of a PyMC model in serial by writing a for loop and calling the model’s sample method once for each chain (see here for details about model fitting in PyMC). But if you want lots of samples in each chain, or if it takes a long time to get even a small number of samples from your model, it seems very inefficient to sample your chains in serial.
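For reference, the serial version looks something like the following sketch, which assumes a PyMC 2.x-style API and a toy model standing in for the real one.

    # Serial sampling of multiple chains (a minimal, assumed example; your
    # actual model script would define the real model instead of this toy).
    import pymc

    x = pymc.Normal('x', mu=0.0, tau=1.0)    # toy model: a single Normal node
    M = pymc.MCMC([x])

    for chain in range(3):                   # one call per chain, run back to back
        M.sample(iter=20000, burn=10000)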

Okay, so here’s a very quick and dirty rundown of how to run chains in parallel.

First, you need IPython and PyMC installed, and you need a model script that does everything (i.e., reads in data, defines your model, writes your samples to disk, etc…). Let’s call your model script model_script.py.

The IPython notebook linked above seems to make some of this more complicated than it needs to be, in my opinion, importing PyMC for each core, then pushing the data to each core, and only then feeding the model script to the cores. If your script works when you call it using Python from the command line, it should work as described below.

Let’s say you want three chains. Open one terminal window and, after the prompt ($), type:

$ ipcluster start -n 3

Now, open another terminal window (or tab) and start IPython. Once that’s up and running, type:

In [1]: from IPython.parallel import Client

In [2]: client = Client()

In [3]: direct = client[:]

In [4]: direct.block = True

In [5]: model_script = open('model_script.py').read()

In [6]: direct.execute(model_script)

I don’t have a particularly deep understanding of what all is going on here, but I know it works. For the time being, that’s good enough for me. You can, and I probably should (and maybe even will, at some point), read all about ipcluster and various associated tools here.

Anyway, assuming, as mentioned above, that your model_script.py saves your samples for you (rather than, e.g., only keeping them in working memory for use in IPython), you should be good to go.

For what it’s worth, I ran into a bit of a problem when I first got this working. Specifically, the separate cores were creating databases with identical names in rapid succession, so only the last core to create a database was actually saving anything.

I fixed this by using something that I joked about when I first learned of its existence, namely Unix time, which is the number of seconds that have elapsed since the beginning of January 1, 1970.

I was fiddling with the time package in Python, trying to get unique time stamps to add to the database names (since I can’t figure out how to feed each core a unique string to add to its database name), and realized that time.time() returns a number with a few decimal places. I tried adding the following to the database names, under the assumption that somewhere in the neighborhood of the fourth, fifth, or sixth decimal place of Unix time there would be measurable delays between the creation of the databases from the different cores.

clock_string = str(int(time.time() * 1000000))  # requires 'import time'; microsecond-resolution Unix timestamp

It works, and it looks like I could even shave a couple zeros off and still get distinct names.
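For completeness, here’s the rough shape of a hypothetical model_script.py that plays nicely with all of this. It’s a sketch, not my actual script: the toy model, the data file name, and the PyMC 2.x-style calls are all assumptions. The important parts are that the script is entirely self-contained and that it saves its samples under a unique database name.

    # model_script.py -- hypothetical sketch (PyMC 2.x-style API, toy model).
    # Each engine runs this whole file, so it loads data, builds the model,
    # samples, and saves its own uniquely named database.
    import time
    import numpy as np
    import pymc

    data = np.loadtxt('data.csv', delimiter=',')   # assumed data file

    mu = pymc.Normal('mu', mu=0.0, tau=1e-3)
    tau = pymc.Gamma('tau', alpha=1e-3, beta=1e-3)
    y = pymc.Normal('y', mu=mu, tau=tau, value=data, observed=True)

    # microsecond timestamp keeps the per-engine database names distinct
    clock_string = str(int(time.time() * 1000000))
    M = pymc.MCMC([mu, tau, y], db='pickle', dbname='chain_' + clock_string + '.pickle')
    M.sample(iter=20000, burn=10000)
    M.db.close()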

Posted in Python, statistical modeling