A few days ago I wrote a post about copulas and operations on them that have a group structure. Here’s another example of group structure for copulas. As in the previous post I’m just looking at two-dimensional copulas to keep things simple.

Given two copulas *C*_{1} and *C*_{2}, you can define a sort of product between them by

(C₁ * C₂)(u, v) = ∫₀¹ D₂C₁(u, t) D₁C₂(t, v) dt

Here *D*_{i} is the partial derivative with respect to the *i*th variable.

The product of two copulas is another copula. This product is associative but not commutative, so copulas form a semigroup under it. There is also an identity element.
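
As a sanity check, the product can be approximated numerically. The sketch below uses midpoint quadrature and central differences (both my choices, as are the names `copula_product`, `M`, and `Pi`); it is an illustration, not code from the post.

```python
# Numerical sketch of the copula product
#   (C1 * C2)(u, v) = integral over t in [0, 1] of D2C1(u, t) * D1C2(t, v) dt
# using midpoint quadrature and central differences.

def copula_product(C1, C2, u, v, n=2000, h=1e-5):
    """Approximate (C1 * C2)(u, v)."""
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n                              # midpoint of k-th subinterval
        d2 = (C1(u, t + h) - C1(u, t - h)) / (2 * h)   # D2 C1(u, t)
        d1 = (C2(t + h, v) - C2(t - h, v)) / (2 * h)   # D1 C2(t, v)
        total += d2 * d1
    return total / n

M  = lambda u, v: min(u, v)   # Fréchet-Hoeffding upper bound
Pi = lambda u, v: u * v       # independence copula

print(copula_product(M, Pi, 0.3, 0.7))   # ≈ 0.21 = Π(0.3, 0.7)
print(copula_product(Pi, Pi, 0.3, 0.7))  # ≈ 0.21: Π * Π = Π
```

Both values come out near 0.21, consistent with M acting as an identity and Π absorbing products, as discussed below.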

The identity element is the copula

M(u, v) = min(u, v)

that is,

M * C = C * M = C

for any copula *C*.
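
The identity property can be checked numerically too. This sketch uses the Clayton copula as an arbitrary nontrivial test case (the Clayton family, the parameter θ = 2, and the quadrature scheme are my choices, not from the post):

```python
def copula_product(C1, C2, u, v, n=2000, h=1e-5):
    """Approximate (C1 * C2)(u, v) by midpoint quadrature of D2C1 * D1C2."""
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n
        d2 = (C1(u, t + h) - C1(u, t - h)) / (2 * h)
        d1 = (C2(t + h, v) - C2(t - h, v)) / (2 * h)
        total += d2 * d1
    return total / n

def clayton(u, v, theta=2.0):
    """Clayton copula, used here only as a nontrivial test case."""
    return (u**-theta + v**-theta - 1.0) ** (-1.0 / theta)

M = lambda u, v: min(u, v)

u, v = 0.3, 0.7
print(clayton(u, v))                     # direct value of C(u, v)
print(copula_product(M, clayton, u, v))  # ≈ same value: M * C = C
print(copula_product(clayton, M, u, v))  # ≈ same value: C * M = C
```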

The copula *M* is important because it is the upper of the two Fréchet-Hoeffding bounds: for any copula *C*,

C(u, v) ≤ M(u, v)

There is also a sort of null element for our semigroup, and that is the independence copula

Π(u, v) = uv

It’s called the independence copula because it’s the copula for two independent random variables: their joint CDF is the product of their individual CDFs. It acts like a null element because

Π * C = C * Π = Π

This tells us we have a semigroup and not a group: since Π * C = Π for every *C*, the product Π * C can never equal M, so the independence copula cannot have an inverse.
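
Here is the null-element behavior checked numerically, again with the Clayton copula as an arbitrary test case (the test copula and the quadrature scheme are my choices):

```python
def copula_product(C1, C2, u, v, n=2000, h=1e-5):
    """Approximate (C1 * C2)(u, v) by midpoint quadrature of D2C1 * D1C2."""
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n
        d2 = (C1(u, t + h) - C1(u, t - h)) / (2 * h)
        d1 = (C2(t + h, v) - C2(t - h, v)) / (2 * h)
        total += d2 * d1
    return total / n

def clayton(u, v, theta=2.0):
    """Clayton copula, a nontrivial test case."""
    return (u**-theta + v**-theta - 1.0) ** (-1.0 / theta)

Pi = lambda u, v: u * v

u, v = 0.3, 0.7
print(copula_product(Pi, clayton, u, v))  # ≈ 0.21 = Π(u, v)
print(copula_product(clayton, Pi, u, v))  # ≈ 0.21 = Π(u, v)
```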

Reference: Roger B. Nelsen. An Introduction to Copulas.

This is interesting because it looks a lot like a ring, but isn’t one. The multiplication operation distributes over ordinary addition, but the sum of two copulas isn’t a copula, and the element that behaves like 0 with respect to multiplication looks nothing like the additive identity.

Is there also an operator for which the lower Fréchet-Hoeffding bound is the identity?

Good question. Nelsen does give some equations involving W, the Fréchet-Hoeffding lower bound copula W(u, v) = max(u + v – 1, 0).

For one, W*W = M, so W is a sort of square root of M. Also, W*C*W gives the survival copula associated with C. So the ^ operator that takes a copula to its survival copula is a sort of conjugation.
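
The first identity can be checked numerically (a sketch with midpoint quadrature and central differences; the scheme and test points are my choices):

```python
def copula_product(C1, C2, u, v, n=2000, h=1e-5):
    """Approximate (C1 * C2)(u, v) by midpoint quadrature of D2C1 * D1C2."""
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n
        d2 = (C1(u, t + h) - C1(u, t - h)) / (2 * h)
        d1 = (C2(t + h, v) - C2(t - h, v)) / (2 * h)
        total += d2 * d1
    return total / n

W = lambda u, v: max(u + v - 1.0, 0.0)  # Fréchet-Hoeffding lower bound
M = lambda u, v: min(u, v)              # Fréchet-Hoeffding upper bound

for u, v in [(0.3, 0.7), (0.6, 0.2)]:
    print(copula_product(W, W, u, v), M(u, v))  # the two values agree: W*W = M
```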

Is there a probabilistic interpretation of the resulting random pair?

Something similar to the fact that the sum of two independent random variables has as its distribution the convolution of the distributions of the summands.

@Felipe: Yes. For a Markov process and times s < u < t, the copula for (X_s, X_t) is the * product of the copula for (X_s, X_u) with the copula for (X_u, X_t). See Nelsen, Theorem 6.4.3. Nelsen cites W. F. Darsow et al., Copulas and Markov processes, Illinois J. Math. 36 (1992) 600–642.

Sounds like W is a kind of -1? At least it seems that W*Π = Π. Then we could define -C as W*C, but that would only make sense if W*C = C*W, and I’m not sure about that.

@John

Thanks for the reference!

It seems that

(W * C)(u, v) = v – C(1 – u, v)

and

(C * W)(u, v) = u – C(u, 1 – v).
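
These two identities can be checked numerically; here is a sketch using the Clayton copula as an arbitrary smooth test case (the test copula, the quadrature scheme, and the test point are my choices):

```python
def copula_product(C1, C2, u, v, n=2000, h=1e-5):
    """Approximate (C1 * C2)(u, v) by midpoint quadrature of D2C1 * D1C2."""
    total = 0.0
    for k in range(n):
        t = (k + 0.5) / n
        d2 = (C1(u, t + h) - C1(u, t - h)) / (2 * h)
        d1 = (C2(t + h, v) - C2(t - h, v)) / (2 * h)
        total += d2 * d1
    return total / n

def clayton(u, v, theta=2.0):
    """Clayton copula, a nontrivial smooth test case."""
    return (u**-theta + v**-theta - 1.0) ** (-1.0 / theta)

W = lambda u, v: max(u + v - 1.0, 0.0)

u, v = 0.3, 0.7
print(copula_product(W, clayton, u, v), v - clayton(1 - u, v))  # ≈ equal
print(copula_product(clayton, W, u, v), u - clayton(u, 1 - v))  # ≈ equal
```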