In this short blog post I am going to derive the probability density function of the sum of a Poisson and a Gaussian random variable.

This model appears in many practical scenarios, especially in imaging, where a photon noise component (usually Poisson distributed) gets combined with a thermal noise component (usually assumed to be Gaussian distributed).

Consider an experiment that outputs observations $y_i = x_i + w_i$, $i = 1, 2, \ldots, n$. Assume that $\{x_i\}_{i=1}^{n}$ is a sequence of independent but not necessarily identically distributed Poisson random variables, each of which has mean $\lambda_i$. Assume further that $\{w_i\}_{i=1}^{n}$ is a sequence of iid Gaussian random variables with zero mean and variance $\sigma^2$.
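
To make the setup concrete, here is a minimal simulation sketch of this observation model. The NumPy calls are standard; the particular values of $\lambda_i$, $\sigma$, and $n$ are arbitrary illustrative choices of mine, not tied to any specific dataset.

```python
import numpy as np

# Arbitrary illustrative parameters (assumed, not from the post):
# per-sample Poisson means lambda_i and a common Gaussian noise std sigma.
rng = np.random.default_rng(42)
n = 1000
lam = rng.uniform(1.0, 10.0, size=n)   # lambda_i: independent, not identically distributed
sigma = 0.5

x = rng.poisson(lam)                   # Poisson component with mean lambda_i
w = rng.normal(0.0, sigma, size=n)     # iid Gaussian component, zero mean, variance sigma^2
y = x + w                              # observed data: y_i = x_i + w_i
```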

The first step in deriving the likelihood function of $\mathbf{y} = \left(y_1, y_2, \ldots, y_n\right)$ is to get the pdf of every $y_i$. Since $y_i$ is the sum of a Poisson random variable and a Gaussian random variable, we could go ahead and perform the convolution between their pdfs in order to get the pdf of $y_i$. However, let's try something different today.

Note that, conditioned on $x_i = k$, the observation $y_i$ follows a Gaussian distribution with mean $k$ and variance $\sigma^2$, i.e.

$$ p\left(y_i \mid x_i = k\right) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{\left(y_i - k\right)^2}{2\sigma^2}\right) $$

Now, we can use the Law of Total Probability to derive $p\left(y_i\right)$ as follows

$$ p\left(y_i\right) = \sum_{k=0}^{\infty} p\left(y_i \mid x_i = k\right) \Pr\left(x_i = k\right) = \sum_{k=0}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{\left(y_i - k\right)^2}{2\sigma^2}\right) \frac{\lambda_i^{k} e^{-\lambda_i}}{k!} $$
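
As a quick numerical sanity check (not part of the derivation), the sketch below truncates the infinite sum at a finite `k_max` and compares it against a Monte Carlo histogram of simulated $y_i$. It assumes SciPy is available; `poisson_gaussian_pdf`, `k_max`, and the parameter values are hypothetical names and choices of mine.

```python
import numpy as np
from scipy.stats import norm, poisson

def poisson_gaussian_pdf(y, lam, sigma, k_max=200):
    # Truncated version of the infinite sum above; the Poisson tail
    # beyond k_max is negligible as long as k_max is well above lam.
    k = np.arange(k_max + 1)
    return np.sum(norm.pdf(y, loc=k, scale=sigma) * poisson.pmf(k, lam))

# Monte Carlo check: the empirical density of y = x + w should match the series.
rng = np.random.default_rng(0)
lam, sigma = 4.0, 0.5
samples = rng.poisson(lam, 100_000) + rng.normal(0.0, sigma, 100_000)
hist, edges = np.histogram(samples, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
series = np.array([poisson_gaussian_pdf(c, lam, sigma) for c in centers])
print(np.max(np.abs(hist - series)))   # small, up to Monte Carlo error
```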

Using the fact that $y_1, y_2, \ldots, y_n$ are independent random variables, the pdf of $\mathbf{y}$ follows as

$$ p\left(\mathbf{y}\right) = \prod_{i=1}^{n} \sum_{k=0}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{\left(y_i - k\right)^2}{2\sigma^2}\right) \frac{\lambda_i^{k} e^{-\lambda_i}}{k!} $$

and the log-likelihood can be written as

$$ \log p\left(\mathbf{y}\right) = \sum_{i=1}^{n} \log \sum_{k=0}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{\left(y_i - k\right)^2}{2\sigma^2}\right) \frac{\lambda_i^{k} e^{-\lambda_i}}{k!} $$
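
In practice the inner sum has to be truncated and evaluated in log space to avoid underflow of the individual terms. Here is one possible sketch using `scipy.special.logsumexp`; the function name `log_likelihood` and the truncation point `k_max` are my own choices, not something prescribed above.

```python
import numpy as np
from scipy.special import gammaln, logsumexp

def log_likelihood(y, lam, sigma, k_max=200):
    """Truncated Poisson-Gaussian log-likelihood.

    y, lam: arrays of length n; sigma: scalar noise std;
    k_max: truncation of the inner infinite sum (choose it well above max(lam)).
    """
    k = np.arange(k_max + 1)            # summation index, shape (k_max+1,)
    y = np.asarray(y)[:, None]          # shape (n, 1) for broadcasting against k
    lam = np.asarray(lam)[:, None]      # shape (n, 1)
    # log of each term: log N(y_i; k, sigma^2) + log Poisson(k; lambda_i)
    log_terms = (
        -0.5 * np.log(2 * np.pi * sigma**2)
        - (y - k) ** 2 / (2 * sigma**2)
        + k * np.log(lam) - lam - gammaln(k + 1)
    )
    # logsumexp over k keeps the inner sum stable; then sum the per-sample log pdfs.
    return np.sum(logsumexp(log_terms, axis=1))
```

With `y` and `lam` simulated as in the first sketch, `log_likelihood(y, lam, sigma)` evaluates the truncated log-likelihood directly.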

Any suggestions on how to make this likelihood computationally tractable? Maybe via approximation theory?