Augmented Reality In The Operating Room – No-Scar Surgery

Richard Technology

“The abolition of pain in surgery is a chimera. It is absurd to go on seeking it. . . . Knife and pain are two words in surgery that must forever be associated in the consciousness of the patient.” – Dr. Alfred Velpeau (1839), French surgeon.

Introduction.

Minimally invasive surgery represents one of the main evolutions of surgical techniques aimed at providing a greater benefit to the patient. It nevertheless increases operative difficulty: depth perception is usually dramatically reduced, the field of view is limited, and the sense of touch is transmitted only through an instrument. These drawbacks can now be mitigated by computer technology that guides the surgical gesture. From a patient’s medical image (US, CT or MRI), Augmented Reality (AR) can enhance the surgeon’s intra-operative vision by providing a virtual transparency of the patient. AR is based on two main processes: the 3D visualization of the anatomical or pathological structures appearing in the medical image, and the registration of this visualization on the real patient. 3D visualization can be performed directly from the medical image, without a pre-processing step, thanks to volume rendering, surface rendering and 3D modeling. Registration can be performed interactively or automatically.
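To make the registration step concrete, the following Python sketch (using NumPy) shows the core geometric operation behind such an overlay: projecting a point of a pre-operative 3D model into the live camera image once a rigid registration between the model and the camera is available. The camera intrinsics, the transform values and the function name are illustrative assumptions rather than a description of any specific system.

import numpy as np

# Minimal pinhole-projection sketch: once a rigid transform (rotation R, translation t)
# registering the pre-operative model frame to the camera frame is known, any 3D point
# of a segmented structure can be overlaid on the live endoscopic image.

# Illustrative camera intrinsics (focal lengths and principal point, in pixels).
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Illustrative rigid registration: model frame -> camera frame.
R = np.eye(3)                        # rotation (identity for the example)
t = np.array([0.0, 0.0, 150.0])      # translation in mm (structure 150 mm in front of the camera)

def project_point(p_model):
    """Project a 3D model-space point (in mm) to 2D pixel coordinates."""
    p_cam = R @ p_model + t          # rigid transform into the camera frame
    u, v, w = K @ p_cam              # pinhole projection
    return np.array([u / w, v / w])  # divide by depth

# Example: a vertex of a segmented structure, expressed in the model frame.
print("Overlay this vertex at pixel:", project_point(np.array([10.0, -5.0, 0.0])))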

Virtual reality techniques allow a pre-operative 3D visualization of the patient that can be manipulated in real time through a patient-specific surgical simulation. In addition, augmented reality techniques superimpose this 3D image on the real image. Thanks to augmented reality it is thus possible to compensate for the lack of the sense of touch by visualizing the applied forces within an artificial 3D view that includes transparency. The combination of visualization software, augmented reality and robotic technology should overcome the current limitations of minimal-access surgery and allow extremely safe procedures to be performed with no scars. The goal is to develop virtual patient modeling software that uses patient-specific data to enable pre-operative assessment and diagnosis. Virtual planning software enables navigation and tool positioning within 3D images that can be reconstructed on any multimedia-equipped computer.

Development of Augmented Reality Visualization and Fusion Technology.

The end users of the technology will be surgeons working to give the best possible care to patients. The clinical expert plays an important role in specifying details for the targeted clinical applications, the procedural workflow and the evaluation aspects of the different treatment strategies involved. Because the work situation is one of high cognitive and physical workload, with high risks when human errors are made, intuitive and easy-to-learn interfaces are required. New interface design techniques are emerging that involve medical end users in system design, with the aim of producing intuitive and error-resistant interfaces. In the medical domain, little actual application of such techniques to interface design has been demonstrated yet, and truly workflow-supportive interface design does not exist yet. Advanced multi-modal designs are needed that match the natural strengths of the human user by applying less obvious combinations, such as gesture and eye-movement tracking, speech control, and 3D haptic/visual combinations. Choices will be dictated by what is best for the user: easy to learn and easy to use (and also feasible). Depending on the type of surgery, the stage of surgery, the personal preferences of the clinician user, and so on, adaptivity must be provided while maintaining consistency in the interaction with the system; when the user trusts the system, he or she will be more likely to delegate tasks to it.

Image Processing.

Automatic segmentation of organs and other structures is vital for intra-operative usage and is used as input to other parts of the system. A combination of both model-based and statistical methods for the segmentation will be used. In this way, not only the information contained in the images themselves, but also knowledge of anatomy, pathology, etc., is incorporated into the process. The models will be designed to fit the structures of interest and at the same time have the flexibility to handle unusual morphology, as is often the case with pathological organs. Statistical information will be gathered from previously segmented image sets and will guide the segmentation process towards probable solutions.
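As a sketch of the statistical part of such an approach, the Python fragment below builds a simple point distribution model from previously segmented, landmark-aligned shapes and uses it to constrain a candidate segmentation towards plausible anatomy. It is a generic illustration under assumed data shapes, not the specific method proposed here.

import numpy as np

# Each training shape is a set of aligned landmark points, flattened to a vector;
# PCA over these vectors captures the main modes of anatomical variation.

def build_shape_model(training_shapes):
    """training_shapes: array of shape (num_shapes, num_landmarks * 3)."""
    mean_shape = training_shapes.mean(axis=0)
    centered = training_shapes - mean_shape
    # Singular value decomposition gives the principal modes of variation.
    _, singular_values, modes = np.linalg.svd(centered, full_matrices=False)
    variances = singular_values**2 / (len(training_shapes) - 1)
    return mean_shape, modes, variances

def constrain_shape(candidate, mean_shape, modes, variances, k=5, n_sigma=3.0):
    """Project a candidate segmentation onto the first k modes and clamp each
    coefficient to +/- n_sigma standard deviations, steering the result towards
    statistically probable solutions."""
    b = modes[:k] @ (candidate - mean_shape)
    limits = n_sigma * np.sqrt(variances[:k])
    b = np.clip(b, -limits, limits)
    return mean_shape + modes[:k].T @ b

# Toy usage with random data standing in for aligned organ-surface landmarks.
rng = np.random.default_rng(0)
shapes = rng.normal(size=(20, 300))          # 20 training shapes, 100 landmarks x 3
mean_shape, modes, variances = build_shape_model(shapes)
plausible = constrain_shape(rng.normal(size=300), mean_shape, modes, variances)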

Image Fusion.

Research in image fusion will address new challenges in the fusion of medical volume data. The development will go beyond current rigid-body assumptions and look at fusion between pre-operative and intra-operative volume data, taking deformations into account. An example will be accurate fusion between a high-definition pre-operative MR data set of the heart and an intra-operative 3D ultrasound data set. The intra-operative data set will typically differ from the pre-operative one since mass has been removed and the anatomy deformed. Fusion algorithms for robust merging of surface-descriptive data (for example, video) and volume-descriptive data will also be investigated.
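The fragment below sketches what such a two-stage fusion could look like using the open-source SimpleITK toolkit, which is an assumption here (the text does not name a particular library): a rigid alignment of the pre-operative MR to the intra-operative ultrasound, followed by a B-spline deformable stage, with mutual information as the similarity metric because the two modalities have unrelated intensity characteristics. File names and parameter values are placeholders.

import SimpleITK as sitk

fixed = sitk.ReadImage("intraop_us.nii.gz", sitk.sitkFloat32)   # hypothetical intra-operative 3D ultrasound
moving = sitk.ReadImage("preop_mr.nii.gz", sitk.sitkFloat32)    # hypothetical pre-operative MR

# Stage 1: rigid (6 degrees of freedom) alignment.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.Euler3DTransform(),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.1)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)
rigid = reg.Execute(fixed, moving)

# Stage 2: B-spline deformable refinement on top of the rigid result,
# to account for removed tissue and deformed anatomy.
mesh_size = [8, 8, 8]                                # control-point grid (assumed)
bspline = sitk.BSplineTransformInitializer(fixed, mesh_size)
reg.SetMovingInitialTransform(rigid)
reg.SetInitialTransform(bspline, inPlace=True)
reg.SetOptimizerAsLBFGSB(numberOfIterations=100)
deformable = reg.Execute(fixed, moving)

# Compose both stages (the last transform added is applied first) and resample
# the MR volume into the ultrasound frame for overlay in the AR view.
composite = sitk.CompositeTransform(3)
composite.AddTransform(rigid)
composite.AddTransform(deformable)
fused_mr = sitk.Resample(moving, fixed, composite, sitk.sitkLinear, 0.0, moving.GetPixelID())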

Interactive 3D Visualization and Navigation.

In the proposed project, Augmented Reality (AR) could be used as a means of displaying multi-modal image information in an interactive manner, and reliable tools for the application of AR technology in minimally invasive therapy could be developed. This field is relatively new and immature. AR covers a wide spectrum of research topics, having in common that 3D graphics techniques previously used in Virtual Reality (VR) are being used and improved to augment the reality we see with digital content. AR thus leverages developments from the field of computer graphics. In this context, an important objective of the proposed project is to develop real-time techniques for rendering complex information merged into a single 3D scene. Until recently, computer graphics for medical imaging has focused on delivering the highest quality (for the sake of the patient), but has not addressed the real-time requirements of AR applications. In particular, 3D volume rendering at interactive frame rates is a challenge that will be addressed in the scope of this project, using and extending some recently developed real-time techniques.
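As an illustration of what interactive volume rendering looks like in practice, the sketch below uses the open-source VTK library (an assumption; the text does not commit to a toolkit) to ray-cast a CT volume on the GPU with a simple colour/opacity transfer function. The file name, window size and transfer-function values are placeholders.

import vtk

reader = vtk.vtkNIFTIImageReader()
reader.SetFileName("patient_ct.nii.gz")        # hypothetical CT volume

# GPU ray casting keeps frame rates interactive for AR-style overlays.
mapper = vtk.vtkGPUVolumeRayCastMapper()
mapper.SetInputConnection(reader.GetOutputPort())

# Transfer functions: map CT intensities to colour and opacity.
color = vtk.vtkColorTransferFunction()
color.AddRGBPoint(-1000, 0.0, 0.0, 0.0)        # air -> black
color.AddRGBPoint(40, 0.8, 0.4, 0.3)           # soft tissue -> reddish
color.AddRGBPoint(400, 1.0, 1.0, 0.9)          # bone -> near white

opacity = vtk.vtkPiecewiseFunction()
opacity.AddPoint(-1000, 0.0)                   # air fully transparent
opacity.AddPoint(40, 0.05)                     # soft tissue mostly transparent
opacity.AddPoint(400, 0.8)                     # bone mostly opaque

prop = vtk.vtkVolumeProperty()
prop.SetColor(color)
prop.SetScalarOpacity(opacity)
prop.ShadeOn()
prop.SetInterpolationTypeToLinear()

volume = vtk.vtkVolume()
volume.SetMapper(mapper)
volume.SetProperty(prop)

renderer = vtk.vtkRenderer()
renderer.AddVolume(volume)
window = vtk.vtkRenderWindow()
window.AddRenderer(renderer)
window.SetSize(800, 600)
interactor = vtk.vtkRenderWindowInteractor()
interactor.SetRenderWindow(window)

window.Render()
interactor.Start()                             # interactive rotation/zoom of the volume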

Future Research.

A Tele-Immersive System for Surgical Consultation and Implant Modeling. The major goal of this proposed research is to develop and deploy a networked collaborative surgical system for tele-immersive consultation, surgical pre-planning, implant design, post-operative evaluation and education. Tele-immersion enables users in different locations to collaborate in a shared, virtual or simulated environment as if they were in the same room. It is the ultimate synthesis of networking and media technologies to enhance collaborative environments. Tele-immersive applications combine audio, avatars (representations of participants), virtual worlds, computation and tele-conferencing into an integrated networked system. This technology could in the future be combined with robotics and haptics technology to make telerobotic surgery a reality.
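Purely as an illustration of the kind of state synchronisation such a session requires, the sketch below broadcasts a tracked instrument pose to remote participants over UDP using only the Python standard library. The peer addresses, port and message format are assumptions; a real tele-immersive system would add audio/video channels, avatar state and more robust transport.

import json
import socket
import time

PEERS = [("192.168.1.20", 9000), ("192.168.1.21", 9000)]   # hypothetical remote sites

def broadcast_pose(sock, position, quaternion):
    """Send the local instrument pose to every remote participant."""
    message = json.dumps({
        "timestamp": time.time(),
        "position_mm": position,        # [x, y, z]
        "orientation": quaternion,      # [qx, qy, qz, qw]
    }).encode("utf-8")
    for peer in PEERS:
        sock.sendto(message, peer)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# In a tracking loop, the pose would come from the navigation/tracking system.
broadcast_pose(sock, position=[12.5, -3.0, 87.2], quaternion=[0.0, 0.0, 0.0, 1.0])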

Original Source: Jacques Kpodonu, MD (cardiac surgeon, medical author, digital health innovator)