**Reference:**
Dagum, P. & Luby, M. On the Approximation of Probabilistic Inference. Knowledge Systems Laboratory, Medical Computer Science, September 1994.

**Abstract:** Previous work proves that approximating $\Pr[X=x|E=e]$ in {\em
any} sense, even for a single evidence node $E$, is {\bf NP}-hard
\cite{Dagum91c}. This result holds for
belief networks with {\em extreme} conditional probabilities---that
is, a conditional probability $\Pr[X=x|\pi(X)]$ such that for some
instantiations of the parents $\pi(X)$, $\Pr[X=x|\pi(X)]$ is
arbitrarily close to zero and for other instantiations it is
arbitrarily close to one. Nonetheless, all previous approximation
algorithms failed to efficiently approximate many inferences even for
belief networks without extreme conditional probabilities. Empirical
evidence, however, suggested that the actual performance of these
approximation algorithms exceeds the theoretically predicted behavior
\cite{Cousins93}. Thus, two unresolved problems plagued practitioners
interested in fast approximations of inference probabilities for large
belief networks. First, can we efficiently approximate $\Pr[X=x|E=e]$
for belief networks without extreme probabilities, or is this problem
also {\bf NP}-hard? Second, can we prove tighter upper bounds on the
run time of approximation algorithms to better predict their behavior
in general belief networks? That is, can we prove bounds consistent
with the empirical results observed for selected belief networks?
We resolve the latter problem by proving a {\em tight} stopping rule
for the number of simulations required by approximation algorithms.
This result follows from the proof of the Zero-One Estimator Theorem
and a similar stopping rule for zero-one valued random variables
appears elsewhere \cite{Karp89}. Other previous results on a stopping
rule for simulation algorithms used Bayesian statistics to derive the
stopping rule \cite{Dagum91d}. Our new result does not use Bayesian
statistics, and therefore, it is independent of any assumptions about
the prior distribution on the mean of the estimate. The bounds are
tight in the sense that an algorithm halts after a number of
simulations that is within a factor of two of the {\em minimum}
possible number of simulations.
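The stopping-rule idea can be sketched as follows: rather than fixing the number of simulations in advance, sample until the running sum crosses a threshold that depends only on the desired relative error $\epsilon$ and failure probability $\delta$. The threshold constants below follow the stopping-rule literature for $[0,1]$-valued random variables and are an illustrative sketch, not the paper's exact algorithm.

```python
import math
import random

def stopping_rule_estimate(sample, eps, delta):
    """Estimate the mean of a [0,1]-valued random variable to within
    relative error eps with probability >= 1 - delta, by sampling
    until the running sum crosses a fixed threshold.

    `sample` is a zero-argument function returning one draw in [0, 1].
    """
    # Threshold ~ (1/eps^2) * log(1/delta); the 4(e-2) constant comes
    # from a Bernstein-style tail bound for [0,1]-valued variables.
    upsilon = 4 * (math.e - 2) * math.log(2 / delta) / eps**2
    threshold = 1 + (1 + eps) * upsilon
    total, n = 0.0, 0
    while total < threshold:
        total += sample()
        n += 1
    # The number of samples n adapts to the unknown mean: small means
    # need more samples, which is exactly what relative error demands.
    return threshold / n

# Example: estimate the mean of a Bernoulli(0.3) variable.
random.seed(0)
est = stopping_rule_estimate(lambda: float(random.random() < 0.3),
                             eps=0.1, delta=0.05)
```

Note that the expected number of samples scales like $1/(\epsilon^2\mu)$, so the rule automatically spends more effort on rare events without knowing $\mu$ in advance.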
We resolve the first problem by constructing two approximation
algorithms for probabilistic inference. Let $\Pr[X=x|E=e]$ denote
{\em any} inference probability in a belief network without extreme
conditional probabilities. The first algorithm, the {\em bounded
variance algorithm}, is a variant of the known likelihood weighting
algorithm \cite{Fung90a,Shachter90a}. Unlike likelihood weighting and
other randomized approximation algorithms for probabilistic inference,
however, we prove that the bounded variance algorithm approximates
$\Pr[X=x|E=e]$ efficiently. The second algorithm uses current
advances in the theory of pseudorandom generators to design a
deterministic algorithm that approximates inference probabilities
$\Pr[X=x|E=e]$ in worst-case time that is subexponential $2^{(\log
n)^d}$, for some integer $d$.
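For context, plain likelihood weighting, the algorithm the bounded variance algorithm refines, can be sketched on a toy two-node network $X \rightarrow E$. The network structure and probability values below are illustrative assumptions, not taken from the paper.

```python
import random

# Toy belief network: X -> E, with illustrative probabilities.
P_X1 = 0.4                       # Pr[X = 1]
P_E1_GIVEN_X = {0: 0.2, 1: 0.9}  # Pr[E = 1 | X]

def likelihood_weighting(n_samples, rng):
    """Estimate Pr[X=1 | E=1]: sample X from its prior, then weight
    each sample by the likelihood of the evidence E=1 given X."""
    num = den = 0.0
    for _ in range(n_samples):
        x = 1 if rng.random() < P_X1 else 0
        w = P_E1_GIVEN_X[x]      # evidence likelihood = sample weight
        num += w * x
        den += w
    return num / den

rng = random.Random(1)
est = likelihood_weighting(100_000, rng)
# Exact value: 0.4*0.9 / (0.4*0.9 + 0.6*0.2) = 0.75
```

The weakness this sketch exposes is the one the abstract alludes to: when evidence likelihoods vary wildly across samples, the weight variance explodes and no useful run-time guarantee follows; bounding that variance is what distinguishes the bounded variance algorithm.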
We prove that for any integer $c$ both algorithms compute {\em
exactly} the first $c\log_{10} n$ significant digits of
$\Pr[X=x|E=e]$. The bounded variance algorithm outputs these digits
correctly in polynomial time with very high probability. The
deterministic algorithm always outputs these digits correctly. Thus,
in contrast to the exponential worst-case behavior of exact
algorithms, we prove that if we accept a small probability of failure
then we can compute inferences in polynomial worst-case time and
otherwise we can {\em deterministically} compute inferences in
subexponential worst-case time for belief networks without extreme
conditional probabilities. We compare our results with previous work
showing that if we allow extreme conditional probabilities then to
even compute the first significant digit of an inference $\Pr[X=x]$ is
{\bf NP}-hard \cite{Cooper90a} and to approximate in any sense the
first significant digit of an inference $\Pr[X=x|E=e]$ is {\bf
NP}-hard \cite{Dagum91c}.
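To make the digit claim concrete (a standard accounting, implicit rather than stated in the abstract): an estimate $\hat\mu$ of $\mu = \Pr[X=x|E=e]$ with relative error $\epsilon$ agrees with $\mu$ in its first $k$ significant digits whenever $\epsilon \le \tfrac{1}{2}10^{-k}$, up to rounding-boundary cases. Setting $k = c\log_{10} n$ gives

$$
\epsilon \;=\; \tfrac{1}{2}\,10^{-c\log_{10} n} \;=\; \tfrac{1}{2}\,n^{-c},
\qquad
\frac{1}{\epsilon} \;=\; 2n^{c},
$$

so $1/\epsilon$ is polynomial in $n$ for fixed $c$, and a simulation algorithm whose run time is polynomial in $n$ and $1/\epsilon$ computes those digits in polynomial time.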

Send mail to ksl-info@ksl.stanford.edu to reach the maintainer of the KSL Reports.