In linear mixed models, we fit models of the form (the Laird-Ware formulation; see, e.g., Pinheiro and Bates 2000):

\begin{equation}
Y = X\beta + Zu + \epsilon
\end{equation}

We assume $u\sim N(0,\sigma_u^2)$, independent of $\epsilon\sim N(0,\sigma^2)$.

Given $Y$, the ``minimum mean square error predictor'' of $u$ is the conditional expectation:

\begin{equation}
\hat{u} = E(u\mid Y)
\end{equation}

We can find $E(u\mid Y)$ as follows. We write the joint distribution of $Y$ and $u$ as:

\begin{equation}
\begin{pmatrix}
Y \\
u
\end{pmatrix}
\sim
N\left(
\begin{pmatrix}
X\beta\\
0
\end{pmatrix},
\begin{pmatrix}
V_Y & C_{Y,u}\\
C_{u,Y} & V_u \\
\end{pmatrix}
\right)
\end{equation}

Here $V_Y$, $C_{Y,u}$, $C_{u,Y}$, and $V_u$ denote the corresponding variance-covariance matrices.
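For the model above, these blocks follow directly from the independence of $u$ and $\epsilon$ (a routine computation under the stated distributional assumptions):

\begin{equation}
V_u = \sigma_u^2 I, \qquad
C_{Y,u} = \sigma_u^2 Z, \qquad
C_{u,Y} = \sigma_u^2 Z^\top, \qquad
V_Y = \sigma_u^2 Z Z^\top + \sigma^2 I
\end{equation}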

It is a standard fact about the multivariate normal distribution (the conditional distribution of one block given the other) that

\begin{equation}
u\mid Y \sim N\left(C_{u,Y}V_Y^{-1}(Y-X\beta),\;
V_u - C_{u,Y} V_Y^{-1} C_{Y,u}\right)
\end{equation}
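As a quick sanity check, the formula must reduce to the familiar bivariate-normal result in the scalar case: $E(u\mid Y)=\rho(\sigma_u/\sigma_Y)(Y-\mu_Y)$ and $\mathrm{Var}(u\mid Y)=\sigma_u^2(1-\rho^2)$. A minimal numerical verification (all parameter values are made up for illustration):

```python
import numpy as np

# Bivariate normal: mean-zero u, Y with mean mu_Y; correlation rho
mu_Y, s_Y, s_u, rho = 3.0, 2.0, 1.5, 0.6
V_Y = np.array([[s_Y**2]])
V_u = np.array([[s_u**2]])
C_uY = np.array([[rho * s_u * s_Y]])   # Cov(u, Y)

Y_obs = np.array([4.2])

# Conditional mean and variance from the matrix formula
cond_mean = C_uY @ np.linalg.inv(V_Y) @ (Y_obs - mu_Y)
cond_var = V_u - C_uY @ np.linalg.inv(V_Y) @ C_uY.T

print(cond_mean, cond_var)
```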

The conditional mean gives the BLUP of $u$:

\begin{equation}
\hat{u}= C_{u,Y}V_Y^{-1}(Y-X\beta)
\end{equation}
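With the explicit blocks $C_{u,Y} = \sigma_u^2 Z^\top$ and $V_Y = \sigma_u^2 Z Z^\top + \sigma^2 I$ for this model, the push-through identity $Z^\top(ZZ^\top + \lambda I)^{-1} = (Z^\top Z + \lambda I)^{-1}Z^\top$, with $\lambda = \sigma^2/\sigma_u^2$, gives the familiar shrinkage (ridge-like) form:

\begin{equation}
\hat{u} = \sigma_u^2 Z^\top \left(\sigma_u^2 Z Z^\top + \sigma^2 I\right)^{-1}(Y - X\beta)
= \left(Z^\top Z + \frac{\sigma^2}{\sigma_u^2} I\right)^{-1} Z^\top (Y - X\beta)
\end{equation}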

Substituting $\hat{\beta}$ for $\beta$, we get:

\begin{equation}
BLUP(u)= \hat{u}(\hat{\beta}) = C_{u,Y}V_Y^{-1}(Y-X\hat{\beta})
\end{equation}

Here is a working example:
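One minimal way to realize such an example is a Python/NumPy sketch (the random-intercept design, sample sizes, and parameter values below are assumptions for illustration): simulate data from the model, estimate $\beta$ by generalized least squares, and apply the BLUP formula directly.

```python
import numpy as np

rng = np.random.default_rng(42)

# Random-intercept model: m groups, n observations per group
m, n = 10, 5
sigma_u, sigma = 2.0, 1.0
beta = np.array([1.0, 0.5])

# Design matrices: intercept + one covariate; one random intercept per group
x = rng.normal(size=m * n)
X = np.column_stack([np.ones(m * n), x])
Z = np.kron(np.eye(m), np.ones((n, 1)))

# Simulate Y = X beta + Z u + eps
u = rng.normal(0, sigma_u, size=m)
eps = rng.normal(0, sigma, size=m * n)
Y = X @ beta + Z @ u + eps

# Covariance blocks of the joint normal of (Y, u)
V_Y = sigma_u**2 * Z @ Z.T + sigma**2 * np.eye(m * n)
C_uY = sigma_u**2 * Z.T

# GLS estimate of beta (substituting for the unknown beta)
Vinv = np.linalg.inv(V_Y)
beta_hat = np.linalg.solve(X.T @ Vinv @ X, X.T @ Vinv @ Y)

# BLUP(u) = C_{u,Y} V_Y^{-1} (Y - X beta_hat)
blup = C_uY @ Vinv @ (Y - X @ beta_hat)
print(blup)
```

Treating $\sigma_u$ and $\sigma$ as known keeps the sketch focused on the formula; in practice they are estimated (e.g., by REML) and plugged in.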