However, there exist equivalent formulations which don't directly ask for the posterior.
For example, Bayes estimators minimise the expected error, averaged over the joint distribution of model parameters and observations, and this criterion is well-posed without introducing the notion of the posterior.
For instance, fix a vector of functions G of the data, and look for the best approximation
θ ≈ 〈γ, G(x)〉
in the sense of minimising mean squared error. The optimal coefficient vector is
γ = 𝐄[G(X)G(X)⊤]⁻¹𝐄[Θ⋅G(X)],
with the expectation taken over the joint distribution of (Θ,X). This then yields the estimator
θ̂(x) = 〈𝐄[G(X)G(X)⊤]⁻¹𝐄[Θ⋅G(X)], G(x)〉.
Equivalently, transposing,
γ⊤ = 𝐄[Θ⋅G(X)⊤]⋅𝐄[G(X)G(X)⊤]⁻¹
and
θ̂(x) = 𝐄[ΘG(X)⊤]⋅𝐄[G(X)G(X)⊤]⁻¹⋅G(x)
= 𝐄[Θ K(X, x)]
where K(X, x) = 〈G(X), 𝐄[G(X)G(X)⊤]⁻¹G(x)〉 can be read off from the line above.
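To make this concrete, here is a minimal numerical sketch (Python/NumPy; the Gaussian model, the feature map G(x) = (1, x), and all names in it are illustrative assumptions rather than anything prescribed above). It estimates γ from Monte Carlo draws of the joint law of (Θ, X) and forms θ̂(x) = 〈γ, G(x)〉, without ever constructing a posterior.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative model (an assumption for this sketch):
# prior Theta ~ N(0, 1), observation X | Theta ~ N(Theta, sigma^2).
sigma = 0.5
n = 200_000
theta = rng.normal(size=n)                      # draws from the prior
x = theta + sigma * rng.normal(size=n)          # draws from the likelihood

def G(x):
    """Feature map G(x); here simply (1, x), also an illustrative choice."""
    x = np.atleast_1d(x)
    return np.stack([np.ones_like(x), x], axis=-1)   # shape (..., 2)

# gamma = E[G(X) G(X)^T]^{-1} E[Theta * G(X)], estimated by Monte Carlo
# over the *joint* law of (Theta, X) -- no posterior is ever formed.
Gx = G(x)                                       # (n, 2)
M = Gx.T @ Gx / n                               # estimate of E[G(X) G(X)^T]
b = Gx.T @ theta / n                            # estimate of E[Theta * G(X)]
gamma = np.linalg.solve(M, b)

def theta_hat(x_new):
    """Linear-in-features estimator  <gamma, G(x)>."""
    return G(x_new) @ gamma

# In this conjugate Gaussian example the exact posterior mean is x / (1 + sigma^2),
# which is linear in x, so the two should agree closely.
x_test = 1.3
print(theta_hat(x_test)[0], x_test / (1.0 + sigma**2))
```

In this example the posterior mean happens to lie in the span of G, so the feature-based estimator recovers it (up to Monte Carlo error); with a richer feature map the same recipe returns the best mean-square approximation of the posterior mean within that span.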
Now, writing
L(θ,x) = 𝐄[G(X)⊤ {𝐄[G(X)G(X)⊤]⁻¹} G(x) | θ]
(where the expectation is taken with respect to the law of X given θ), it holds that
θ̂(x) = ∫ Prior(dθ)⋅L(θ,x)⋅θ,
i.e. this "G-conditional expectation" resembles a posterior mean!
As such, it becomes tempting to define the "posterior" of θ given x as being
Posterior(dθ | X = x) = Prior(dθ)⋅L(θ, x).
That is, one can define a roughly consistent way of approximating conditional expectations in a given basis, and this approximation does correspond to integrating against some measure, but that measure might be a signed measure!
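Continuing the illustrative Gaussian sketch from above (again, the model, the feature map, and every name here are assumptions made purely for illustration), one can tabulate the implied weights Prior(dθ)⋅L(θ, x) on a grid and watch them go negative for some θ, even though integrating θ against them still reproduces the mean-square-optimal estimate.

```python
import numpy as np

rng = np.random.default_rng(1)

# Same illustrative model and features as before (an assumption for this sketch):
# Theta ~ N(0, 1), X | Theta ~ N(Theta, sigma^2), G(x) = (1, x).
sigma = 0.5
n = 200_000
theta = rng.normal(size=n)
x_joint = theta + sigma * rng.normal(size=n)

def G(x):
    x = np.atleast_1d(x)
    return np.stack([np.ones_like(x), x], axis=-1)

Minv = np.linalg.inv(G(x_joint).T @ G(x_joint) / n)    # E[G(X) G(X)^T]^{-1}

def L(th, x_obs, m=20_000):
    """Monte Carlo estimate of L(theta, x) = E[ <G(X), Minv G(x)> | theta ]."""
    x_given_th = th + sigma * rng.normal(size=m)        # law of X given theta
    K = G(x_given_th) @ Minv @ G(x_obs).ravel()         # K(X, x) for each draw
    return K.mean()

x_obs = 1.3
prior_pdf = lambda th: np.exp(-0.5 * th**2) / np.sqrt(2.0 * np.pi)

# The implied "posterior density" prior(theta) * L(theta, x) goes negative
# for theta far enough on the opposite side of the observation:
for th in (-2.0, 0.0, 2.0):
    print(th, prior_pdf(th) * L(th, x_obs))

# ...yet integrating theta against this signed density still returns the
# mean-square-optimal estimate (here equal to the exact posterior mean).
grid = np.linspace(-6.0, 6.0, 601)
weights = np.array([prior_pdf(th) * L(th, x_obs) for th in grid])
integral = np.sum(weights * grid) * (grid[1] - grid[0])   # simple Riemann sum
print(integral, x_obs / (1.0 + sigma**2))
```

In this particular example one can check by hand that L(θ, x) = 1 + θx/(1 + σ²): it integrates correctly against the prior, but is negative whenever θx < −(1 + σ²).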