What one would like for practical purposes is that the simulations are *conditioned* on knowledge that is already available, in particular on information from well logs. As long as there is *one* well, this is easily done in the way described before: simply use the information from the vertical^{5} well as initial states for the left-hand column of the simulation to be generated. (The states in the top row are obtainable too, as they describe the surface.) However, a particularly interesting situation is one where there is information from *two* wells. How is the conditioning to be performed in this case? Already in the one-dimensional case such a conditioning does not seem feasible at first sight: a Markov chain typically evolves from past to future, so how can one correctly fill in the near future if one already knows the far future? Actually, it is mathematically very simple to accomplish this. Let $X_n$ be the state of the chain at time $n$, and let $P$ be the matrix of transition probabilities, i.e., for all states $i$ and $j$

$$P(i,j) = \mathbb{P}(X_{n+1} = j \mid X_n = i).$$
Furthermore, let $P^m$ be the matrix of $m$-step transition probabilities. Suppose we are given time instants $n$ and $N > n$ (the far future). Then a simple calculation shows that

$$\mathbb{P}(X_{n+1} = j \mid X_n = i,\; X_N = k) \;=\; \frac{P(i,j)\,P^{N-n-1}(j,k)}{P^{N-n}(i,k)}.$$
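As an illustration, this conditioned chain can be sampled directly. The following sketch (the function name and setup are ours, not from the paper; it assumes a strictly positive transition matrix, so that all denominators are nonzero) precomputes the powers $P^m$ and then applies the conditioned transition probabilities step by step:

```python
import numpy as np

def sample_bridge(P, i0, kN, N, rng=None):
    """Sample X_0, ..., X_N from a Markov chain with transition matrix P,
    conditioned on X_0 = i0 and X_N = kN, using
    P(X_{n+1}=j | X_n=i, X_N=k) = P(i,j) P^{N-n-1}(j,k) / P^{N-n}(i,k)."""
    rng = np.random.default_rng() if rng is None else rng
    powers = [np.eye(len(P))]            # P^0, P^1, ..., P^N
    for _ in range(N):
        powers.append(powers[-1] @ P)
    path = [i0]
    for n in range(N):
        i = path[-1]
        probs = P[i] * powers[N - n - 1][:, kN] / powers[N - n][i, kN]
        path.append(int(rng.choice(len(P), p=probs / probs.sum())))
    return path
```

Note that the last step is automatically forced to $k_N$, since $P^0$ is the identity matrix.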

Figure 2. *Top: the target reservoir. Left: well log information (2, 3, 5 and 7 wells). Right: simulated reservoirs conditioned on well logs.*

This formula yields a cheap way to generate Markov chain realisations conditioned on the future. The two-dimensional case is much more complicated; it is not even clear what ``future'' means in that case. In the geological application it is clear, however, on what one wants to condition: the leftmost column and the rightmost column, which represent the data from two wells. In our paper an ``engineering'' solution to the conditioning problem has been chosen: the horizontal chain is conditioned as described above, and *then* this conditioned chain is coupled to the vertical chain. For exact conditioning it is useful to note (see also Galbraith and Walley) that a unilateral Markov random field can also be described by a one-dimensional Markov chain in a random environment (the random environment is generated by the chain itself). In fact, define for each state $k$ the matrix $P_k$ by

$$P_k(i,j) = \mathbb{P}(X_{m,n} = j \mid X_{m-1,n} = i,\; X_{m,n-1} = k).$$
Then, writing $Y_n = (X_{1,n},\dots,X_{M,n})$ for the $n$-th column of the field, the following holds for all columns $x$ and $y$:

$$\mathbb{P}(Y_n = y \mid Y_{n-1} = x) \;=\; \mathbb{P}(X_{1,n} = y_1 \mid X_{1,n-1} = x_1)\,\prod_{m=2}^{M} P_{x_m}(y_{m-1}, y_m).$$
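In other words, given the previous column $x$, the next column can be generated by running a vertical chain whose transition matrix at row $m$ is $P_{x_m}$. A minimal sketch of this chain in a random environment (the array layout and the uniform top-row choice are our assumptions, not the paper's):

```python
import numpy as np

def next_column(P, x, rng):
    """Generate a column y given the previous column x.

    P has shape (S, S, S): P[k, i, j] is the probability of state j given
    state i directly above and environment (left-neighbour) state k.
    The top cell is drawn uniformly, an assumed convention."""
    S, M = P.shape[0], len(x)
    y = [int(rng.integers(S))]                        # top row (assumption)
    for m in range(1, M):
        y.append(int(rng.choice(S, p=P[x[m], y[-1]])))  # chain in environment x
    return y
```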
Now if we define the future of the column process $Y_n = (X_{1,n},\dots,X_{M,n})$ to be $Y_N$, where $N$ is the index of the rightmost column, then we obtain, similarly to the computation for the one-dimensional case,

$$\mathbb{P}(Y_{n+1} = y \mid Y_n = x,\; Y_N = z) \;=\; \frac{Q(x,y)\,Q^{N-n-1}(y,z)}{Q^{N-n}(x,z)},$$

where $Q(x,y) = \mathbb{P}(Y_{n+1} = y \mid Y_n = x)$ is the matrix of column transition probabilities.
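To make the cost concrete, here is a small sketch (all names, the random choice of the $P_k$, and the uniform top-row convention are our assumptions) that builds the column transition matrix $Q$ for a grid with $M$ rows and $S$ states per cell; since $Q$ acts on the $S^M$ possible column configurations, it is an $S^M \times S^M$ matrix, and the one-dimensional conditioning formula can then be applied to it unchanged:

```python
import itertools
import numpy as np

S, M = 2, 3                        # states per cell, rows per column
rng = np.random.default_rng(0)

# One vertical transition matrix P_k per environment state k (random here,
# purely for illustration): P[k, i, j] is the probability of state j given
# state i directly above and environment state k to the left.
P = rng.random((S, S, S))
P /= P.sum(axis=2, keepdims=True)

columns = list(itertools.product(range(S), repeat=M))   # all S**M columns
Q = np.zeros((len(columns), len(columns)))
for a, x in enumerate(columns):
    for b, y in enumerate(columns):
        q = 1.0 / S                                     # top row (assumption)
        for m in range(1, M):
            q *= P[x[m], y[m - 1], y[m]]                # P_{x_m}(y_{m-1}, y_m)
        Q[a, b] = q
```

Each row of `Q` sums to one, so `Q` is a genuine stochastic matrix on column configurations; its exponential size in $M$ is exactly what makes the exact conditioning expensive.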
It is clear that this exact conditioning is computationally much more expensive: the column chain lives on a state space whose size is exponential in the number of rows. See Figure 2 for simulations with this model using the approximate (cheap) conditioning mentioned earlier. The perceptive reader will note that there are still some unwanted effects in the results.