This method estimates endoscopic camera motion by matching the real endoscopic camera image against a virtual endoscopic image (Fig. 4.18). A virtual view V(p, Q) can be generated from preoperative CT data by giving a rendering program a viewpoint p and an orientation Q. A similarity-measure function S(A, B), which measures the similarity of two images A and B, can also be defined. Image-based tracking can then be formulated as

(p*, Q*) = arg max_(p, Q) S(R, V(p, Q)),   (4.8)
where R denotes the real endoscopic image. The result of this optimization, (p*, Q*), gives the position and orientation of the real endoscope camera, represented in the CT coordinate system C(c) (Fig. 4.18).
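As a concrete illustration of this formulation, the following minimal sketch maximizes a similarity measure over a camera pose. The renderer `render_virtual_view` is a hypothetical stand-in (a Gaussian blob shifted by the pose parameters rather than a true CT-based virtual endoscopy renderer), and normalized cross-correlation is just one common choice for S; neither is specified by the text above.

```python
import numpy as np
from scipy.optimize import minimize

def render_virtual_view(pose, size=64):
    """Hypothetical stand-in for V(p, Q): a real implementation would
    ray-cast through preoperative CT data. Here a Gaussian blob shifted
    by the 2-D pose keeps the sketch self-contained."""
    y, x = np.mgrid[0:size, 0:size]
    cx, cy = size / 2 + pose[0], size / 2 + pose[1]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * 8.0 ** 2))

def ncc(a, b):
    """Similarity measure S(A, B): normalized cross-correlation."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def track_pose(real_image, initial_guess):
    """Maximize S(R, V(p, Q)) over the pose, as in Eq. 4.8, by
    minimizing the negative similarity with a derivative-free method."""
    cost = lambda pose: -ncc(real_image, render_virtual_view(pose))
    return minimize(cost, initial_guess, method="Powell").x

# Simulate a "real" endoscopic image taken at a known true pose,
# then recover that pose starting from a nearby initial guess.
true_pose = np.array([5.0, -3.0])
real = render_virtual_view(true_pose)
estimate = track_pose(real, initial_guess=np.array([2.0, 0.0]))
```

Note that the optimizer only sees image similarity, never the true pose, which is the essence of image-based tracking.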
Although this is a simple formulation of image-based endoscope tracking, its performance depends heavily on the accuracy of the initial guess used in Eq. 4.8. Many methods have been proposed for obtaining a good initial guess. Mori et al. used epipolar geometry analysis: they recovered endoscope motion from the real endoscopic images alone. Luo et al. introduced a stochastic process to prevent the optimization of Eq. 4.8 from falling into local minima. Combining image-based tracking with the physical sensors described in Sect. 18.104.22.168 is another good way to obtain the initial guess.

Fig. 4.18 Basic concept of image-based endoscope tracking
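The sensitivity to the initial guess, and a stochastic multi-start remedy in the spirit of (though not identical to) Luo et al.'s approach, can be sketched on a toy one-dimensional similarity surface. All functions and constants here are illustrative assumptions, not the published method.

```python
import numpy as np

rng = np.random.default_rng(0)

def similarity(p):
    """Toy multimodal similarity surface: a global peak at p = 3 and a
    weaker local peak at p = -2. Real S(R, V(p, Q)) surfaces are
    similarly non-convex, so a single local search can get trapped."""
    return np.exp(-(p - 3.0) ** 2) + 0.6 * np.exp(-(p + 2.0) ** 2)

def local_ascent(p, step=0.1, iters=400):
    """Simple hill climbing: move to a neighbor whenever it improves
    the similarity; stops at the nearest local maximum."""
    for _ in range(iters):
        for cand in (p + step, p - step):
            if similarity(cand) > similarity(p):
                p = cand
    return p

# A single local search from a poor initial guess converges to the
# local maximum near p = -2, not the global one at p = 3.
stuck = local_ascent(-2.5)

# Stochastic multi-start: randomly perturb the initial guess, run the
# local search from each perturbation, and keep the best result.
starts = np.concatenate([[-2.5], -2.5 + rng.normal(scale=4.0, size=30)])
best = max((local_ascent(s) for s in starts), key=similarity)
```

The multi-start estimate is never worse than the single-start one, at the cost of running the local search once per perturbation.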