We often have a (posterior) distribution \(\;p(\theta)\;\) that is intractable, and we would like to compute the expectation of some function \(\;f(\theta)\;\) under it:
\[ \int_\Theta f(\theta)\, p(\theta)\, \text{d}\theta \]
Note that this is just a weighted average, where \(\;p(\theta)\,\text{d}\theta\;\) assigns an infinitesimal weight to each value \(\;f(\theta)\;\). Another way to assign these weights would be to sample infinitely many times from the posterior distribution \(\;p(\theta)\;\): the "weight" of each value of \(\;\theta\;\) would then be determined by how often it is drawn.
\[ \int_\Theta f(\theta)\, p(\theta)\, \text{d}\theta \;\, = \;\, \lim_{n \to \infty} \; \frac{1}{n} \sum_{i=1}^n f(\theta_i), \quad \quad \theta_i \sim p(\theta) \]
In practice we cannot draw infinitely many samples, but this suggests that we can approximate the expectation by drawing a sufficiently large number of samples.
\[ \int_\Theta f(\theta)\, p(\theta)\, \text{d}\theta \;\, \approx \;\, \frac{1}{n} \sum_{i=1}^n f(\theta_i), \quad \theta_i \sim p(\theta) \quad \text{for large }n \]
NOTE: We do not need \(\;p(\theta)\;\) to be normalized. If we can draw samples from an unnormalized distribution \(\;\tilde{p}(\theta)\;\), with \(\;p(\theta) = \frac{\tilde{p}(\theta)}{Z_p}\;\) (for example, using slice sampling), the approximation gives exactly the same result as for the normalized distribution \(\;p(\theta)\;\). This is because what matters is the relative frequency of the different sample values, not the absolute number of occurrences of each value. For example, if you duplicated (repeated) each sample, you would obtain exactly the same result, because the relative frequencies would not change.
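As a concrete illustration of the approximation above, here is a minimal Python sketch. The choices \(\;f(\theta) = \theta^2\;\) and \(\;p(\theta) = \mathcal{N}(0, 1)\;\) are illustrative assumptions, picked because the exact expectation is known to be \(1\) (the variance):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(theta):
    return theta ** 2  # E[theta^2] = 1 for theta ~ N(0, 1)

# The Monte Carlo estimate (1/n) * sum_i f(theta_i) approaches the
# exact value of the integral as n grows.
for n in [10, 1_000, 100_000]:
    theta = rng.standard_normal(n)  # theta_i ~ p(theta) = N(0, 1)
    print(f"n = {n:>6}: estimate = {f(theta).mean():.4f}  (exact: 1.0)")
```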
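And a quick numerical check of the duplication argument in the note: repeating every sample leaves the estimate unchanged, because only the relative frequencies of the sample values enter the average.

```python
import numpy as np

rng = np.random.default_rng(0)

theta = rng.standard_normal(10_000)  # theta_i ~ p(theta)
duplicated = np.repeat(theta, 2)     # every sample repeated twice

# Both averages agree (up to floating-point rounding): duplication
# rescales the absolute counts but not the relative frequencies.
print((theta ** 2).mean(), (duplicated ** 2).mean())
```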