Back when I was a PhD student working on generalisations of GEE, I was interested in ‘sparse’ correlations, defined by ‘most’ small sets of observations being independent. One way to get this structure is from crossed clustering variables; another is for the basic units in your analysis to be pairs (or larger tuples) of observations. If you drew a graph with the variables as vertices, and connected correlated variables, then any two subsets of the variables with no edges between them would be independent of each other.
To see what this implies in large samples, consider a balanced incomplete multi-rater experiment with $R$ raters and $N$ objects to be rated, in which each rater rates $k$ of the objects and each object is rated by $r$ of the raters. The number of ratings is $n = Rk = Nr$. Two ratings are connected by an edge in the dependence graph if they share a rater or share an object. The maximal degree of the graph is $D = (k-1)+(r-1)$. We are interested in the case where $N$ and $R$ are both large.
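To make the design concrete, here is a small Python sketch (my illustration, not from the original post) that builds one such balanced design cyclically and confirms the maximal degree of its dependence graph:

```python
from collections import defaultdict

# A small balanced incomplete rating design (hypothetical), built cyclically:
# rater i rates objects i, i+1, ..., i+k-1 (mod N), with R = N raters,
# so each rater rates k objects and each object is rated by r = k raters.
N = R = 12
k = r = 3
ratings = [(i, (i + s) % N) for i in range(R) for s in range(k)]
n = len(ratings)                      # total number of ratings, n = R*k = N*r

# Two ratings are adjacent in the dependence graph if they share a rater
# or share an object.
by_rater, by_object = defaultdict(set), defaultdict(set)
for idx, (i, j) in enumerate(ratings):
    by_rater[i].add(idx)
    by_object[j].add(idx)

def degree(idx):
    i, j = ratings[idx]
    return len((by_rater[i] | by_object[j]) - {idx})

D = max(degree(idx) for idx in range(n))
print(n, D, (k - 1) + (r - 1))        # D equals (k-1)+(r-1) = 4 here
```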
Consider a simple parametric random-effects model for the rating $Y_{ij}$ by rater $i$ of object $j$:
$$Y_{ij} = \mu + \alpha_i + \beta_j + \epsilon_{ij},$$
where $\mu$ is a fixed constant, $\alpha_i \sim N(0,\sigma^2_\alpha)$, $\beta_j \sim N(0,\sigma^2_\beta)$, and $\epsilon_{ij} \sim N(0,\sigma^2_\epsilon)$. The variance of $\sum_{ij} Y_{ij}$ is
$$\mathrm{var}\left(\sum_{ij} Y_{ij}\right) = Rk^2\sigma^2_\alpha + Nr^2\sigma^2_\beta + n\sigma^2_\epsilon = n\left(k\sigma^2_\alpha + r\sigma^2_\beta + \sigma^2_\epsilon\right).$$
The ratio $\mathrm{var}\left(\sum_{ij} Y_{ij}\right)/(nD)$ will be bounded above and bounded away from zero if $\sigma^2_\alpha$ and $\sigma^2_\beta$ are both non-zero, and we will have $n^{-1}\sum_{ij} Y_{ij}\stackrel{p}{\to}\mu$ as long as $R$ and $N$ both go to infinity (equivalently, as long as $D/n\to 0$).
Now, let’s look at a scaled version of $\sum_{ij}(Y_{ij}-\mu)$. We can’t just scale it by $\sqrt{n}$ as in the independent setting, because the variance of $\sum_{ij} Y_{ij}$ is of order $nD$ rather than $n$. We can still hope for $\sum_{ij}(Y_{ij}-\mu)\big/\sqrt{nD}$ to converge to a Normal limit, since at least it’s the right size. In the simple multi-rater case it certainly works.
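As a sanity check on the variance formula and the scaling (a quick sketch of my own, with arbitrary variance components and the same cyclic design as above):

```python
import numpy as np

rng = np.random.default_rng(1)

# Cyclic balanced design: R = N, each rater rates k consecutive objects mod N,
# so each object is rated by r = k raters.
N = R = 50
k = r = 5
pairs = [(i, (i + s) % N) for i in range(R) for s in range(k)]
n = len(pairs)                               # n = R*k = N*r

s2_a, s2_b, s2_e = 1.0, 0.5, 1.0             # arbitrary variance components

def centred_sum():
    """One draw of sum_ij (Y_ij - mu) under Y_ij = mu + alpha_i + beta_j + eps_ij."""
    alpha = rng.normal(0.0, np.sqrt(s2_a), R)
    beta = rng.normal(0.0, np.sqrt(s2_b), N)
    eps = rng.normal(0.0, np.sqrt(s2_e), n)
    return sum(alpha[i] + beta[j] for (i, j) in pairs) + eps.sum()

sums = np.array([centred_sum() for _ in range(2000)])

# Check the variance formula: var(sum) = n*(k*s2_a + r*s2_b + s2_e)
print(sums.var(), n * (k * s2_a + r * s2_b + s2_e))

# The sum scaled by its standard deviation should look roughly standard Normal
z = sums / np.sqrt(n * (k * s2_a + r * s2_b + s2_e))
print(z.mean(), z.std(), np.mean(np.abs(z) > 1.96))   # roughly 0, 1, 0.05
```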
Encouragingly, we get the same sort of limit if we have $D$ identical copies of each of $n/D$ independent items, as another way to get sparse correlation (the sum is then $D$ times a sum of $n/D$ independent variables, so its standard deviation is again of order $\sqrt{nD}$).
Proving it?
The first thing I tried was to see if the moments of the scaled sum did anything helpful. I thought I had a proof, in fact, but it was wrong. The idea does work if you do it with cumulants; Svante Janson did it correctly, ten years earlier, and published it under the title Normal Convergence by Higher Semiinvariants with Applications to Sums of Dependent Random Variables and Random Graphs. I hadn’t found it in searching, and no-one I talked to at the time had heard of it.
The second thing I tried was to adapt a Central Limit Theorem for random fields by Xavier Guyon, which I’d already used in my thesis. The proof uses something called Stein’s Method, and involves a counting bound on ‘close’ pairs of pairs of observations and a long-range weak dependence bound on ‘distant’ pairs of pairs. I could just dump the ‘distant’ part of the proof and make it work for sparse correlation. Success!!
Nicole Mayer-Hamblett and I had a manuscript about asymptotics for sparsely-correlated generalised linear models, with some nice examples, and we got revision requests from a journal and then neither of us had time to do the revision. Then I worked out how to prove an exponential tail bound for sparse correlation and sent it off to a probability journal. They thought it was maybe ok, but that I needed to look up something called graph-structured dependence in the probability literature. That turned out to be the same Central Limit Theorem proof that I had (only a bit tidier and with an explicit error bound), and it had already been published. In 1989. Sigh.
The Theorem
Let $X_1,\dots,X_n$ be random variables such that $E[X_i^4]<\infty$, $E[X_i]=0$, and define $\sigma^2 = \mathrm{var}\left(\sum_i X_i\right)$ and $W = \sum_i X_i/\sigma$. Let $D$ be the maximal degree of a dependency graph for the $X_i$. Then for $Z$ a standard Normal variable
$$d_W(W, Z) \le \frac{D^2}{\sigma^3}\sum_{i=1}^n E|X_i|^3 + \frac{\sqrt{28}\,D^{3/2}}{\sqrt{\pi}\,\sigma^2}\sqrt{\sum_{i=1}^n E[X_i^4]},$$
where $d_W$ is the Wasserstein distance.
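As an illustration (mine, not from the post), the following sketch plugs the multi-rater model into the bound as stated above, using the fact that each centred rating is Normal with variance $\sigma^2_\alpha+\sigma^2_\beta+\sigma^2_\epsilon$, so its absolute third moment is $2\sqrt{2/\pi}$ times the cube of its standard deviation and its fourth moment is three times its squared variance. The design and variance components are arbitrary choices; the bound shrinks as the experiment grows even though $D$ grows too.

```python
import numpy as np

def stein_bound(D, sigma, abs3, m4, n):
    """Two-term Wasserstein bound for a dependency graph with maximal degree D,
    assuming E|X_i|^3 = abs3 and E[X_i^4] = m4 are the same for every i and
    sigma^2 = var(sum_i X_i)."""
    term1 = (D**2 / sigma**3) * n * abs3
    term2 = np.sqrt(28.0 / np.pi) * D**1.5 / sigma**2 * np.sqrt(n * m4)
    return term1 + term2

s2_a, s2_b, s2_e = 1.0, 0.5, 1.0
for N in (25, 100, 400, 1600):
    # let the design get denser as it grows: R = N raters, k = r = sqrt(N)
    k = r = int(np.sqrt(N))
    n = N * r
    D = (k - 1) + (r - 1)
    # each centred rating is Normal with variance v
    v = s2_a + s2_b + s2_e
    abs3 = 2 * np.sqrt(2 / np.pi) * v**1.5
    m4 = 3 * v**2
    sigma = np.sqrt(n * (k * s2_a + r * s2_b + s2_e))
    print(N, D, round(stein_bound(D, sigma, abs3, m4, n), 2))
```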
Scaling
The example of the parametric inter-rater experiment suggests that $D=o(n)$ should be enough for a central limit theorem, but the actual theorem seems to require a stronger condition. If the $X_i$ were identically distributed and $\sigma^2$ scaled as $n$, the first term would have order $D^2/\sqrt{n}$ and we would need $D=o(n^{1/4})$. However, in the setup of the inter-rater experiment, if the rater and object variance components are non-zero, we actually have $\sigma^2$ scaling as $nD$, so $\sigma$ scales as $\sqrt{nD}$. The first term in the bound is then of order $\sqrt{D/n}$, and $D=o(n)$ is indeed sufficient.
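As a tiny numerical check (again my own illustration), take $D=\sqrt{n}$, which violates $D=o(n^{1/4})$ but satisfies $D=o(n)$; the first term grows under the $\sigma^2\sim n$ scaling but shrinks under the $\sigma^2\sim nD$ scaling:

```python
# Order of the first term, D^2 * n / sigma^3, dropping constant factors and
# the third-moment factor, with a maximal degree that grows as D = sqrt(n).
for n in (10**3, 10**4, 10**5, 10**6):
    D = n**0.5
    iid_scale = D**2 * n / n**1.5            # sigma^2 ~ n:   order D^2 / sqrt(n), grows
    sparse_scale = D**2 * n / (n * D)**1.5   # sigma^2 ~ n*D: order sqrt(D / n), shrinks
    print(n, round(iid_scale, 1), round(sparse_scale, 4))
```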
Janson’s result using cumulants instead of Stein’s method doesn’t need the extra variance components to be non-zero, but it does impose stronger conditions on the tails of the $X_i$.
When working on asymptotic distributions for mixed-model parameters I’m reasonably happy to assume that true variance components are positive – if instead they are on the boundary of the parameter space, special arguments will be needed in any case. The variance assumptions are more of a concern when the graph-structured dependence is induced by sampling rather than by a model for the outcome.