Variational Inference Step-by-Step (Part 3: Mean Field V.I. cont.)


Hello everyone! Thanks for following the series so far. This will most likely be the last part of the series: given the depth of the topic, we cannot cover it end to end. Nonetheless, by the end of this post, we expect you to at least be familiar with the derivation of mean field variational inference. Enjoy!


Mean Field Variational Inference (cont.)

We have done quite some work so far in deriving variational inference. We arrived at the point where we split the large equation for \(\mathcal{L}\) into three parts and simplified it further. In this section, we will do a few more tricks and mathematical manipulations. Hang on, we're almost there!

The first thing we do is split the constant \(\mathcal{C}\) into two other constants, \(\mathcal{C}_1\) and \(\mathcal{C}_2\). Our equation now looks like this:

\[ \begin{aligned} \mathcal{L} = \sum_{z_1} q(z_1) \Bigg[ E_{z_2,z_3} \Big[ \ln p(x_1,x_2,x_3,z_1,z_2,z_3) \Big] + \mathcal{C}_1 + \mathcal{C}_2 \Bigg] - \sum_{z_1} q(z_1) \ln q(z_1) \end{aligned} \]

Let's shift our focus to this part: \(E_{z_2,z_3} \Big[ \ln p(X, Z)\Big] + \mathcal{C}_1\). We will argue that this part corresponds to the log of some function, \(\ln f(X,Z)\). The goal is that we can then substitute that part of the equation with \(\ln f(X,Z)\).

Follow the argument carefully: we know that \(E_{z_2,z_3} \Big[ \ln p(X, Z)\Big]\) is the expectation of the log of some function, and we know that \(\mathcal{C}_1\) is some constant. The log of a function plus a constant is clearly still the log of some function. But notice that \(f(X,Z)\) is not just a regular function; it is a probability function! That means it has the property of summing to 1. How do we know that?

Well, we can first get rid of the log in front of \(f(X,Z)\). This is done by taking the exponential of both sides of the equation:

\[ \begin{aligned} \ln f(X,Z) & = E_{z_2,z_3} \Big[ \ln p(X, Z)\Big] + \mathcal{C}_1\newline e^{\ln f(X,Z)} & = e^{E_{z_2,z_3} \big[ \ln p(X, Z)\big] + \mathcal{C}_1} \newline f(X,Z) & = e^{\mathcal{C}_1} \, e^{E_{z_2,z_3} \big[ \ln p(X, Z)\big]} & \text{(by the exponential rule)}\newline f(X,Z) & = \mathcal{K} \, e^{E_{z_2,z_3} \big[ \ln p(X, Z)\big]} & (e^{\mathcal{C}_1} \text{ can be replaced by another constant } \mathcal{K}) \end{aligned} \]

For \(f(X,Z)\) to be a probability function that sums to 1, we need to pick \(\mathcal{K}\) such that \[ \mathcal{K} = \frac{1}{\sum_{z_1} e^{E_{z_2,z_3} \big[ \ln p(X, Z)\big]}} \]

This is always possible, since we are free to choose how we split \(\mathcal{C}\) into \(\mathcal{C}_1\) and \(\mathcal{C}_2\). We have shown that \(f(X,Z)\) is indeed a probability function!
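
To make this concrete, here is a tiny numerical sketch (not part of the original derivation): assume \(z_1\) is discrete with four states, and let a made-up array `expected_log_joint` stand in for \(E_{z_2,z_3}\big[\ln p(X,Z)\big]\) evaluated at each state. Picking \(\mathcal{K}\) as the reciprocal of the sum of the exponentials makes \(f(X,Z)\) sum to 1:

```python
import numpy as np

# Toy stand-in for E_{z2,z3}[ln p(X, Z)] evaluated at each state of z1.
# These numbers are made up purely for illustration.
expected_log_joint = np.array([-2.3, -0.7, -1.5, -3.1])

unnormalized = np.exp(expected_log_joint)   # e^{E[ln p(X, Z)]} for each state of z1
K = 1.0 / unnormalized.sum()                # the constant that makes it sum to 1
f = K * unnormalized                        # f(X, Z) as a distribution over z1

print(f, f.sum())                           # f.sum() is 1.0
```

In other words, \(f(X,Z)\) is just the normalized exponential (a softmax) of the expected log joint.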


Having established this property of \(\ln f(X,Z)\), we can substitute it back into the \(\mathcal{L}\) equation. Namely:

\[\begin{aligned} \mathcal{L} & = \sum_{z_1} q(z_1) \Big[ \ln f(X,Z) + \mathcal{C}_2 \Big] - \sum_{z_1} q(z_1)\ln q(z_1)\newline & = \sum_{z_1} q(z_1) \ln f(X,Z) - \sum_{z_1} q(z_1)\ln q(z_1) + \mathcal{C}_2 \sum_{z_1} q(z_1)\newline & = \sum_{z_1} q(z_1) \ln f(X,Z) - \sum_{z_1} q(z_1)\ln q(z_1) + \mathcal{C}_2 & \text{(probabilities sum to 1)}\newline & = \sum_{z_1} q(z_1) \ln \frac{ f(X,Z)}{q(z_1)} + \mathcal{C}_2 & \text{(by the logarithm rule)}\newline & = \sum_{z_1} q(z_1) \ln \frac{ f(X,Z)}{q(z_1)} & \text{(the constant can be ignored as we are maximizing } \mathcal{L}\text{)} \end{aligned}\]

With only this portion of the equation left to maximize, we should look back at the equation for the KL divergence. You should notice that this portion is actually the negative of the KL divergence between \(q(z_1)\) and \(f(X,Z)\). That is, \[ KL(q(z_1)\mid \mid f(X,Z)) = - \sum_{z_1} q(z_1) \ln \frac{ f(X,Z)}{q(z_1)} \]

Again, we see that maximizing \(\mathcal{L}\) is equal to minimizing \(KL(q(z_1)\|f(X,Z))\). Remember that the minimum of the KL divergence is achieved when \(q(z_1) = f(X,Z)\). So our goal can be restated as finding the \(q(z_1)\) that equals \(f(X,Z)\). You can trace back through the equations above to see how \(f(X,Z)\) is computed.
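
As a quick reminder, this is just the basic property of the KL divergence: it is never negative, and it is zero exactly when the two distributions match.

\[ KL(q(z_1)\mid \mid f(X,Z)) \geq 0, \qquad KL(q(z_1)\mid \mid f(X,Z)) = 0 \iff q(z_1) = f(X,Z) \]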

This whole derivation was done with respect to \(q(z_1)\). Fortunately, we can do the exact same derivation for the other variables in \(Z\). Therefore, the whole variational inference problem can be solved by solving the equation \(f(X,Z) = q(z_i)\) for every \(z_i \in Z\). Namely, for this example, the equations we need to solve are:

\[\begin{aligned} f(X,Z) = q(z_1) = \mathcal{K}_1 \, e^{\sum_{z_2}\sum_{z_3} q(z_2) q(z_3) \ln p(X,Z)}\newline f(X,Z) = q(z_2) = \mathcal{K}_2 \, e^{\sum_{z_1}\sum_{z_3} q(z_1) q(z_3) \ln p(X,Z)}\newline f(X,Z) = q(z_3) = \mathcal{K}_3 \, e^{\sum_{z_1}\sum_{z_2} q(z_1) q(z_2) \ln p(X,Z)} \end{aligned}\]

You can clearly see the pattern in the equations above! We can of course generalize this to an arbitrary number of variables in \(Z\): we would then have multiple summations over the product of all the other \(q(z_j)\), as written out below.
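
In general, for latent variables \(z_1, \dots, z_m\), each factor satisfies:

\[ q(z_i) = \mathcal{K}_i \, e^{\sum_{z_1} \cdots \sum_{z_{i-1}} \sum_{z_{i+1}} \cdots \sum_{z_m} \big( \prod_{j \neq i} q(z_j) \big) \ln p(X,Z)} = \mathcal{K}_i \, e^{E_{z_{j \neq i}} \big[ \ln p(X,Z) \big]} \]

where the sums run over every latent variable except \(z_i\), and \(\mathcal{K}_i\) is the constant that makes \(q(z_i)\) sum to 1.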

We now know how to compute each \(q(z_i)\) individually and later use them to compute \(q(Z)\). However, you will notice that when computing \(q(z_1)\), we do not yet know \(q(z_2)\) and \(q(z_3)\), and the same applies for every \(q(z_i)\). We are now in a ‘chicken and egg’ situation where the variables depend on each other to be computed. On top of that, we also don't know the value of any constant \(\mathcal{C}_i\).

One approach to this problem is coordinate ascent variational inference: we iteratively optimize one variational distribution \(q(z_i)\) at a time while holding the others fixed. We first initialize these distributions randomly, then run the iterative algorithm until it converges.
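
To give a feel for how this iterative scheme looks, here is a minimal sketch in NumPy. It assumes three discrete latent variables and uses a made-up table `log_joint` standing in for \(\ln p(X,Z)\) with the observed data already plugged in. Each update is exactly the recipe above: take the expectation of the log joint under the other two factors, exponentiate, and normalize.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up table of ln p(X, Z) on a 4 x 3 x 5 grid of (z1, z2, z3) states,
# with the observed X already "baked in". Purely illustrative.
log_joint = rng.normal(size=(4, 3, 5))

# Randomly initialize each factor q(z_i) and normalize it.
q = [rng.random(k) for k in log_joint.shape]
q = [qi / qi.sum() for qi in q]

for _ in range(100):                 # sweep until (hopefully) converged
    for i in range(3):
        others = [j for j in range(3) if j != i]
        # Expected log joint under the other factors:
        # sum over z_j (j != i) of the product of q(z_j) times ln p(X, Z).
        expected = np.einsum(
            log_joint, [0, 1, 2],
            q[others[0]], [others[0]],
            q[others[1]], [others[1]],
            [i],
        )
        unnorm = np.exp(expected - expected.max())  # subtract max for numerical stability
        q[i] = unnorm / unnorm.sum()                # normalize; this plays the role of K_i

print([qi.round(3) for qi in q])
```

In a real model you would not tabulate \(\ln p(X,Z)\) on a grid; the expectations are usually worked out analytically from the model. The structure of the loop, sweeping over the factors until they stop changing, stays the same.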


Unfortunately, this series will not go further into that algorithm. But hopefully, after this series of posts, you'll be better equipped when reading other resources that cover the topic in more depth and perhaps in a more rigorous manner. We will end this post here; see you in another post!

Thank you!