Editorial

Looking at the future of surgery with the augmented eye

Jacques Marescaux1,2, Michele Diana1,2

1IRCAD, Research Institute against Cancer of the Digestive System, Strasbourg, France; 2IHU-Strasbourg, Institute of Image-Guided Surgery, Strasbourg, France

Correspondence to: Jacques Marescaux, MD, FACS, (Hon) FRCS, (Hon) FJSES. IRCAD and IHU Strasbourg, Minimally Invasive Image-Guided Surgical Institute, 1, place de l’Hôpital, 67091 Strasbourg, France. Email: jacques.marescaux@ircad.fr.

Received: 14 October 2016; Accepted: 30 October 2016; Published: 29 November 2016.

doi: 10.21037/ales.2016.11.07


Over recent years, the art of surgery has been rapidly reshaped by outstanding technological developments, particularly those arising from computer science and robotics.

In the present editorial, we will endeavor to provide our insights into ongoing and readily foreseeable surgical developments.

The introduction of minimally invasive surgery (MIS) almost 30 years ago was considered a groundbreaking revolution, resulting in a drastic reduction in surgical trauma when compared to the conventional open approach. In MIS, high-definition cameras provide a clear and magnified view of the internal anatomy, and micro-instruments are inserted through small skin incisions to perform the procedure. The uptake of video-assisted MIS has been relatively rapid across the majority of surgical specialties, with proven benefits for patients.

Simultaneously, other disciplines such as interventional radiology and interventional flexible endoscopy have made substantial progress, and today they can challenge the surgical therapeutic approach.

Interventional radiologists can benefit from advanced imaging systems, enabling precise characterization of diseases and sophisticated image-guided percutaneous or catheter-based procedures, which are progressively taking over some previously surgical indications. The same evolution is occurring in flexible endoscopy, with advanced tools enabling gastroenterologists to step into the surgical arena, where they perform effective, non-invasive endoluminal surgery of early-stage cancers. The current evolution of MIS, endoluminal, and percutaneous surgery, taken individually, seems to have reached a natural plateau, with incremental developments generating only limited added value for patients.

However, there is an intriguing window of opportunity in merging the triad of disciplines into a hybrid image-guided therapeutic approach (1).

Such a combination holds the promise to modify the management algorithms of many surgical pathologies, by increasing the number of cases which could be successfully treated with less invasive, targeted procedures.

This shift to a hybrid discipline obviously requires an open-minded, interdisciplinary, and collaborative approach to surgical pathologies, in order to select the most suitable clinical indications. In addition, a radical change in operating room design is required to optimize ergonomics and workflow, allowing for functional and safe complementarity between imaging tools, robotic effectors, and surgical or interventional instruments.

Additionally, this revolution is occurring during a favorable period of changing mindsets in the healthcare community, embodied by the concept of “precision medicine”.

Precision medicine is the ongoing healthcare revolution based on human-machine collaboration. Modern computers with increasing computational power, endowed with artificial intelligence, can process the “big data” coming from human beings and can analyze patient-specific health fingerprints through the massive acquisition of omics data and all types of imaging (2). This approach creates automatic and personalized theranostic algorithms, with computers suggesting the right strategy to the physician (3).

In 2012, aiming to move towards this hybrid, computer-assisted paradigm, and in collaboration with the IRCAD, Research Institute against Cancer of the Digestive System, and the University of Strasbourg, we launched a large research project which culminated in the creation of the IHU-Strasbourg, University Hospital Institute of Image-Guided Surgery. The IHU-Strasbourg is a research foundation totally dedicated to translational and clinical research in hybrid image-guided precision therapies for digestive system pathologies. The clinical operational platform of the IHU-Strasbourg is made up of operating rooms with a hybrid design, integrating latest-generation imaging systems (Figure 1) and conceived to accommodate both commercial and developing surgical or interventional electromechanical telemanipulators or fully robotic systems. This is our present vision. Among the earliest work packages of the IHU-Strasbourg is a business intelligence-based effort to optimize the current techniques and technologies related to the triad of the hybrid approach, since each one comes with specific challenges and limitations. For instance, currently available flexible endoscopes are not conceived for advanced endoluminal surgery and lack the ability to provide surgical triangulation, which is a fundamental principle of MIS.

Figure 1 IHU-Strasbourg hybrid room for image-guided procedures. The hybrid operating room (OR) at IHU-Strasbourg integrates (I) a robotic multiaxis 3D angiographic system (Artis Zeego, Siemens Healthcare, Erlangen, Germany); (II) a CT-scan, which is mounted on rails and can be “called in”, when required, in a matter of seconds; and (III) an MRI room with a door which opens into the OR (not shown).

Our mid-term vision of surgery, which represents one of the largest research blocks at the IHU-Strasbourg, is what we like to define as “cybernetic surgery” (4). Cybernetics means control: it is the art of regulating a system. Physiology is the cybernetics of natural systems, while information technology and robotics are the cybernetics of artificial systems. Along with more clinically orientated translational projects, we are working on the role of human-machine collaboration, deep machine learning, and Artificial Intelligence (AI) at every point of the surgical patient’s trajectory.

Computers with cognitive technologies (IBM Watson: https://www.ibm.com/watson/health/) are now being used in an increasing number of health projects to provide an “augmented brain” to doctors, in a human-machine collaboration paradigm. With further improvements and evidence of efficacy, it can be expected that IBM Watson will take over the diagnostic role of the physician, leading to a total reshaping of the healthcare system.

We are convinced that the glorious era of doctors skilled in the art of diagnosis based on their clinical acumen and experience is destined for a natural decline, and that clinically based diagnosis will be replaced by machine-based diagnosis and decision-making. Consequently, cybernetic doctors will have augmented brains, hands, and eyes in the immediate present/future, and in the next few years it can be expected that the physician curriculum will be totally redesigned towards a more engineering-based and technical profile, with a progressive reduction in the need for human-based clinical decision-making.

The internal developments of our R&D department and of several of our partners (both corporate and academic) in robotics, software, and mechatronics are mainly directed towards the harmonization and robotisation of theranostic instruments and workflows (5).

The smart operating room has to sense all potential data coming from the environment and from numerous medical devices, which produce a massive amount of information, overwhelming human processing capabilities. The IHU-Strasbourg hosts the CAMMA (Computational Analysis and Modelling of Medical Activities: http://camma.u-strasbg.fr/) research group. For instance, they have developed a machine learning system which recognizes the operative steps and sends this information to the ward, in order to optimize the timing of patient transfer to the OR (6).
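As an illustration of the principle only (a rough sketch, not the CAMMA/EndoNet pipeline itself), per-frame phase predictions from any video classifier can be temporally smoothed and then used to trigger a message to the ward when a given phase begins; all labels, thresholds, and function names below are hypothetical.

```python
# Minimal sketch (not the CAMMA/EndoNet system): stabilize per-frame phase
# probabilities with a sliding majority vote and find the frame at which a
# chosen phase starts, so that a ward notification could be triggered.
import numpy as np

PHASES = ["preparation", "dissection", "anastomosis", "closure"]   # hypothetical labels

def smooth_phases(frame_probs, window=25):
    """Majority-vote filter over per-frame argmax predictions (frames x phases)."""
    raw = frame_probs.argmax(axis=1)
    smoothed = raw.copy()
    for t in range(len(raw)):
        lo, hi = max(0, t - window), min(len(raw), t + window + 1)
        smoothed[t] = np.bincount(raw[lo:hi], minlength=len(PHASES)).argmax()
    return smoothed

def first_frame_of(phase_idx, trigger="closure"):
    """Return the first frame index at which the trigger phase appears, if any."""
    hits = np.flatnonzero(phase_idx == PHASES.index(trigger))
    return int(hits[0]) if hits.size else None

# Example with synthetic per-frame probabilities (1,000 frames).
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(len(PHASES)), size=1000)
probs[800:, PHASES.index("closure")] += 2.0            # closure dominates late frames
frame = first_frame_of(smooth_phases(probs), "closure")
print("Send the 'prepare next patient' message at frame:", frame)
```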

However, real cybernetic therapies, as we see them, are a blend of “real” surgical robots, self-operated and sensor-controlled, which are fed with “big data”, such as patient-specific 3D anatomical imaging of all sorts and massive lab-on-a-chip biological data. The optimal treatment strategy in the modern cybernetic paradigm should be conceived by cognitive technologies and subsequently executed autonomously by AI robotic effectors, equipped with sensors and feedback, according to precision algorithms.

Obviously, for the machine to work alone, this process of human-machine “separation” will have to be gradual, with generations of computer-assisted cybernetic physicians ensuring the transition to the next level of precision. We are preparing for this transition by working on the augmented hand, in a human-robot co-manipulation or collaboration workflow, and on the augmented eye, exploiting the potential of medical imaging in order to increase the safety and efficacy profile of operations.


The augmented hand: robotics

In 2001, thanks to a collaboration with NASA and France Telecom, we performed a remote surgical procedure across the Atlantic Ocean. The surgeon (Jacques Marescaux) was sitting at the robotic console in New York while the patient was installed in the OR in Strasbourg. This real-life demonstration of cybernetic surgery was made possible by a specific development in data transmission (Asynchronous Transfer Mode), which allowed the lag time of data transmission to be reduced to a delay compatible with safe surgery (7).

Since then, at the IRCAD and the European Institute of TeleSurgery (EITS) in Strasbourg, in collaboration with the iCube laboratory (UMR 7357, Télécom Physique, Strasbourg), we have pursued intense activity around the development of robotic helping hands for human physicians.

For instance, we have developed a flexible surgical robotic platform, with operative instruments controlled by ergonomic haptic interfaces. This telemanipulator is based on the architecture of a previously developed mechanical platform (8); it allows for advanced and precise endoluminal surgery and is currently undergoing successful preclinical evaluation (9,10).

Additionally, in view of future developments in the autonomous planning and execution of gestures, we have developed a proprietary robotic technology based on visual servoing, which allows a moving anatomical target to be constantly tracked and locked onto, based on intraoperative video image analysis.
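The control principle can be sketched as follows; this is a simplified illustration of image-based visual servoing with a proportional law, not our proprietary implementation, and the image size and gain are arbitrary assumptions.

```python
# Illustrative sketch of the visual-servoing principle: a proportional control
# law drives the camera or instrument so that the tracked anatomical target
# stays at the image center.
import numpy as np

IMAGE_CENTER = np.array([320.0, 240.0])   # assumed 640x480 endoscopic image
GAIN = 0.5                                # proportional gain (arbitrary tuning)

def servo_step(target_px):
    """Return an in-plane correction (pixels/step) from the current tracking error."""
    error = IMAGE_CENTER - target_px      # how far the target is from the image center
    return GAIN * error                   # proportional control: v = k * e

# Simulated loop: the tracker reports the target position; the controller re-centers it.
target = np.array([420.0, 300.0])         # initial target location from image analysis
for step in range(10):
    target = target + servo_step(target)  # ideal actuation assumed
print("Target position after 10 steps:", np.round(target, 1))   # approaches (320, 240)
```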

Today, surgical robots can be programmed to execute some tasks autonomously, such as suturing (see the video by the CAMMA group at http://camma.u-strasbg.fr/videos). This is, however, far from the concept of purely autonomous surgery, “decided, planned, and performed” by machines, but it is a starting point for seriously accepting that the concept is not science fiction and that today’s augmented hand might become the autonomous hand of tomorrow.


The augmented eye

The core of the research activity at IHU-Strasbourg, Institute of Image-Guided Surgery, is the concept of the augmented eye for precision therapies, especially in soft tissue surgeries, always paying attention to surgical workflows and to human factors. The optimal technology to enhance surgical precision should also be easy to implement in the OR, with minimal impact on workflow and easily accepted by users.

We usually divide the augmented surgical eye projects into three blocks, according to the intraoperative ability provided by the technology: (I) seeing by transparency; (II) seeing the microscopic; and (III) seeing the invisible.

Seeing by transparency

Seeing “by transparency” the interior of the human body, without actually opening it, with a sort of X-ray ability of the eyes, is the dream of any surgeon. This has been made possible by the application of virtual reality (VR) and augmented reality (AR) technologies to the medical field.

VR is a 3D digital environment in which the user is immersed and can interact with synthetic elements through various sensors and effectors. Today, VR synthesis has reached incredible levels of realism, but only a minimal portion of the huge potential of VR technologies is exploited in medicine.

Our R&D department has produced software solutions to generate a virtual clone of the patient, using 3D reconstruction of medical imaging data. After an initial automatic 3D reconstruction, the anatomical model is further refined by a semi-automatic procedure of organ segmentation. This allows for virtual surgical exploration and an enhanced understanding of patient-specific anatomy (11).
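To give a flavor of what such a pipeline involves (a minimal sketch only, assuming a CT volume expressed in Hounsfield units; this is not the actual IRCAD segmentation software), a crude organ mask can be obtained by thresholding and keeping the largest connected component:

```python
# Minimal sketch of the kind of processing behind a 3D patient model:
# threshold a CT volume in Hounsfield units, keep the largest connected
# component, and report its volume.
import numpy as np
from scipy import ndimage

def segment_largest_region(ct_hu, hu_range=(40, 200)):
    """Return a boolean mask of the largest connected region within a HU window."""
    mask = (ct_hu >= hu_range[0]) & (ct_hu <= hu_range[1])
    labels, n = ndimage.label(mask)                       # 3D connected components
    if n == 0:
        return mask
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)

# Synthetic volume standing in for a CT acquisition (1 mm isotropic voxels assumed).
volume = np.full((64, 64, 64), -1000.0)                   # air background
volume[20:40, 20:40, 20:40] = 80.0                        # soft-tissue "organ"
organ = segment_largest_region(volume)
print("Segmented volume (mL):", organ.sum() / 1000.0)     # 1 mm^3 voxels -> mL
```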

Additionally, it is possible to plan the procedure on the segmented model, defining optimal resection planes, and also to simulate the surgery or a targeted energy-based ablation, for instance predicting the extent of the ablated area or visualizing the effects on organ perfusion of the virtual application of vessel-sealing tools.

Ultimately, during the surgical procedure, the virtual model enables internal structures to be seen by transparency. This enhanced view is obtained by merging and superimposing the synthetic images onto those coming from the real-life patient. This virtual-to-real fusion process is defined as AR, and it confers the ability to see organs by transparency. It can reveal critical, deeply situated anatomical structures (e.g., vessels, ureters) in order to prevent inadvertent injuries, and it can also guide and define optimal resection margins.
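In its simplest form, and assuming the registration problem has already been solved, the overlay step itself amounts to alpha-blending a rendered mask of the hidden structure into the live frame; the sketch below is purely illustrative and all parameters are assumptions:

```python
# Minimal sketch of the AR overlay step (registration assumed already solved):
# alpha-blend a colored, registered mask of a deep structure (e.g., a vessel)
# onto the live endoscopic frame so it appears "by transparency".
import numpy as np

def overlay(frame, structure_mask, color=(0, 90, 255), alpha=0.4):
    """Blend a colored, registered mask into an RGB frame (H x W x 3, uint8)."""
    out = frame.astype(np.float32)
    tint = np.array(color, dtype=np.float32)
    out[structure_mask] = (1 - alpha) * out[structure_mask] + alpha * tint
    return out.astype(np.uint8)

# Synthetic 480x640 frame and a registered mask of a hidden vessel.
frame = np.full((480, 640, 3), 120, dtype=np.uint8)
mask = np.zeros((480, 640), dtype=bool)
mask[200:220, 100:500] = True                      # projected vessel footprint
augmented = overlay(frame, mask)
print("Blended pixels:", int(mask.sum()))
```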

Today, this fascinating surgical navigation technique suffers from many technological limitations, which are partly due to “silicon limits” and the computational power that can currently be delivered: spectacular for everyday applications, yet still insufficient for surgical precision.

We have studied and characterized the limitations of AR, and at the IHU-Strasbourg we are working to overcome them. The most crucial challenge is the ability to constantly and accurately register the virtual model onto the real-time images. In fact, real organs are deformed during surgical manipulation, whereas the virtual model is “rigid”: it represents a 3D snapshot obtained mostly preoperatively, with the patient in a given position.

The current approach, defined as “rigid registration”, is clearly inadequate in the case of highly mobile structures, even with accurate tracking of the environment of the region of interest, including the position of instruments.
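For readers unfamiliar with the term, rigid registration boils down to estimating a single rotation and translation that best align corresponding landmarks of the preoperative model with their intraoperative positions (the least-squares, or Kabsch, solution sketched below). The example is purely illustrative and, by construction, cannot capture deformation, which is precisely the limitation at stake:

```python
# Sketch of what "rigid registration" means in practice: one rotation R and
# translation t aligning preoperative model landmarks with their intraoperative
# counterparts. Tissue deformation is, by design, not modeled.
import numpy as np

def rigid_register(model_pts, live_pts):
    """Return (R, t) minimizing || R @ model + t - live ||^2 over corresponding points."""
    cm, cl = model_pts.mean(axis=0), live_pts.mean(axis=0)
    H = (model_pts - cm).T @ (live_pts - cl)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # avoid a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cl - R @ cm

# Synthetic landmarks: the "live" organ is the preoperative model, rotated and shifted.
model = np.random.default_rng(1).normal(size=(6, 3))
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)   # 90 deg about z
live = model @ true_R.T + np.array([5.0, 0.0, 2.0])
R, t = rigid_register(model, live)
print("Max residual (ideally ~0):", np.abs(R @ model.T + t[:, None] - live.T).max())
```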

We are building our research program on our pioneering experience with clinical AR applied to visceral surgery, since the world-first human application of AR in surgery was performed at the IRCAD during laparoscopic adrenalectomies (12). AR could provide the position of the adrenal vessels and of the tumor with quite high accuracy, which can be explained by the relatively fixed position of retroperitoneal organs.

Subsequently, we applied AR navigation to minimally invasive hepatic resections (4), to video-assisted parathyroidectomies (13), and to Whipple procedures (10,14,15).

We are working to provide a non-rigid, patient-specific, and fully automatic anatomical reconstruction and registration through real-time simulation algorithms and biomechanical modeling of tissue properties, which allows organ deformations to be predicted (16). Our hypothesis is that the development of real-time magnetic resonance imaging (MRI), together with the increasing uptake of MRI-guided procedures, will solve the problem by providing a high update rate.
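As a toy illustration of the biomechanical idea (not the real-time simulation framework of reference (16)), a mass-spring chain can be relaxed to predict how an “organ” deforms once one of its boundaries is displaced; all values below are arbitrary:

```python
# Toy sketch of biomechanical (mass-spring) modeling used to predict organ
# deformation: a chain of nodes is relaxed toward equilibrium after one end
# of the "organ" is displaced, e.g., by instrument contact.
import numpy as np

N, rest_len, stiffness = 10, 1.0, 0.5
x = np.arange(N, dtype=float)             # preoperative node positions (1D, cm)
x[-1] += 2.0                              # intraoperative constraint: tip pushed 2 cm

for _ in range(2000):                     # simple iterative relaxation
    force = np.zeros(N)
    stretch = np.diff(x) - rest_len       # spring elongations
    force[:-1] += stiffness * stretch     # each spring pulls its two nodes together
    force[1:] -= stiffness * stretch
    force[0] = force[-1] = 0.0            # fixed boundary conditions
    x += 0.1 * force                      # gradient-descent-style update

print("Predicted deformed node positions:", np.round(x, 2))
```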

In the meantime, we are working to obtain a real-time refresh of the patient model using the Artis Zeego (Siemens Healthcare) for both endoscopic and laparoscopic procedures (Figure 2), as well as a 4D ultrasound system which allows CT-scan or MRI images to be fused with real-time ultrasonography scanning (17).

Figure 2 Example of workflow of the hybrid operating room (OR) imaging. The 3D virtual model of the patient can be improved by combining multiple imaging modalities, which complement each other. In this example, the CT-scan provides liver imaging in the arterial and portal phases, MRI provides optimal imaging of the biliary tree, and, finally, the Artis Zeego provides intraoperative fusion and virtually error-free registration of all data. MRI, magnetic resonance imaging.

Seeing the microscopic

We are actively testing the role of emerging real-time optical imaging systems which allow digital “virtual” biopsies to be performed at micron-level resolution, including confocal laser endomicroscopy (CLE). Experimentally, we have used CLE to perform “in vivo immunohistochemistry” and to evaluate intraoperatively the presence of micro-metastases in lymph nodes (18). Additionally, we have assessed CLE as a means of real-time computation of bowel perfusion (19,20). Clinically, we are currently assessing intraoperative CLE imaging to optimize surgical decision-making during minimally invasive rectal surgery (ClinicalTrials.gov Identifier: NCT01887509), with encouraging preliminary results.

Seeing the invisible

Fluorescence image-guided surgery (FIGS) is an emerging surgical navigation technique based on the detection of structures or physiological processes after the administration of fluorophores, which are excited by near-infrared laser cameras. Compared to other imaging modalities, FIGS can be easily integrated into the surgical workflow; it does not require bulky equipment, obtains images in real time, and is relatively inexpensive (21).

FIGS can help to solve several challenges, including: (I) prevention of surgical complications; (II) improving intraoperative staging; (III) improving diagnosis; and (IV) improving radical cancer removal, with the use of cancer-specific fluorophores (22).

We strongly believe in the potential of FIGS, and we have created a research unit (IHU-SPECTRA) at the IHU which is totally dedicated to the development and clinical implementation of FIGS. Several experimental and clinical trials are being launched and are expected to start by early 2017. As an example, we have developed an accurate, computer-assisted, fluorescence-based evaluation of anastomotic perfusion (19,23-25), which we are currently translating into the clinical setting (NCT02626091).
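To illustrate the kind of quantification involved (a simplified sketch inspired by, but not identical to, our fluorescence-based enhanced reality work), a per-pixel time-to-peak map computed from the near-infrared video after fluorophore injection can flag poorly perfused areas; the threshold and data below are purely hypothetical:

```python
# Minimal sketch of fluorescence-based perfusion quantification: well-perfused
# tissue reaches its fluorescence peak quickly after fluorophore injection, so
# a per-pixel time-to-peak map can highlight poorly perfused regions.
import numpy as np

def time_to_peak_map(frames, fps):
    """frames: (T, H, W) near-infrared intensities. Returns seconds to peak per pixel."""
    return frames.argmax(axis=0) / fps

def ischemia_mask(ttp, threshold_s=15.0):
    """Flag pixels whose fluorescence peaks later than the threshold (assumed cutoff)."""
    return ttp > threshold_s

# Synthetic 30 s sequence at 2 fps: the right half of the bowel enhances slowly.
T, H, W, fps = 60, 32, 64, 2.0
t = np.arange(T)[:, None, None] / fps
peak_time = np.full((H, W), 5.0)
peak_time[:, W // 2:] = 25.0                         # hypothetical ischemic zone
frames = np.exp(-((t - peak_time) ** 2) / 20.0)      # bell-shaped enhancement curves
mask = ischemia_mask(time_to_peak_map(frames, fps))
print("Fraction of pixels flagged as poorly perfused:", round(float(mask.mean()), 2))
```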

To conclude, we believe that the immediate next steps of surgical therapies are orientated towards the diffuse and harmonious use of robotics, AI, big data, advanced imaging systems, and sources of energy, in a transdisciplinary hybrid fashion.

If we had to reply to the question “In your opinion, what is the long-term future of surgery?”, the answer would be: “Hopefully, the future will lead to the end of surgery!”

Ongoing, impressive, and groundbreaking technologies coming from the “nano” world and from tissue engineering and “omics” manipulation will probably lead to the obsolescence of any form of surgery as we know it today. And nobody will miss it.

However, this is a different story…


Acknowledgments

The authors are grateful to Guy Temporal and Christopher Burel for their assistance with English proofreading.

Funding: None.


Footnote

Provenance and Peer Review: This article was commissioned by the editorial office, Annals of Laparoscopic and Endoscopic Surgery. The article did not undergo external peer review.

Conflicts of Interest: Both authors have completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/ales.2016.11.07). Dr. Diana reports grants from Foundation ARC for Cancer Research, personal fees from Diagnostic Green, grants from SATT Conectus, outside the submitted work. Dr. Jacques Marescaux is the President of both IRCAD and IHU Strasbourg, which are both partly funded by Karl Storz Endoskope (Tuttlingen, Germany), Medtronic, and Siemens Healthcare.

Ethical Statement: The authors are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.


References

  1. Marescaux J, Diana M. Next step in minimally invasive surgery: hybrid image-guided surgery. J Pediatr Surg 2015;50:30-6. [Crossref] [PubMed]
  2. Ashley EA. The precision medicine initiative: a new national effort. JAMA 2015;313:2119-20. [Crossref] [PubMed]
  3. Doyle-Lindrud S. Watson will see you now: a supercomputer to help clinicians make informed treatment decisions. Clin J Oncol Nurs 2015;19:31-2. [Crossref] [PubMed]
  4. Pessaux P, Diana M, Soler L, et al. Towards cybernetic surgery: robotic and augmented reality-assisted liver segmentectomy. Langenbecks Arch Surg 2015;400:381-5. [Crossref] [PubMed]
  5. Marescaux J, Diana M. Inventing the future of surgery. World J Surg 2015;39:615-22. [Crossref] [PubMed]
  6. Twinanda AP, Shehata S, Mutter D, et al. EndoNet: A Deep Architecture for Recognition Tasks on Laparoscopic Videos. IEEE Trans Med Imaging 2016; [Epub ahead of print]. [Crossref] [PubMed]
  7. Marescaux J, Leroy J, Gagner M, et al. Transatlantic robot-assisted telesurgery. Nature 2001;413:379-80. [Crossref] [PubMed]
  8. Diana M, Chung H, Liu KH, et al. Endoluminal surgical triangulation: overcoming challenges of colonic endoscopic submucosal dissections using a novel flexible endoscopic surgical platform: feasibility study in a porcine model. Surg Endosc 2013;27:4130-5. [Crossref] [PubMed]
  9. Diana M, Marescaux J. Robotic surgery. Br J Surg 2015;102:e15-28. [Crossref] [PubMed]
  10. Diana M, Pessaux P, Marescaux J. New technologies for single-site robotic surgery in hepato-biliary-pancreatic surgery. J Hepatobiliary Pancreat Sci 2014;21:34-42. [Crossref] [PubMed]
  11. D'Agostino J, Diana M, Vix M, et al. Three-dimensional virtual neck exploration before parathyroidectomy. N Engl J Med 2012;367:1072-3. [Crossref] [PubMed]
  12. Marescaux J, Rubino F, Arenas M, et al. Augmented-reality-assisted laparoscopic adrenalectomy. JAMA 2004;292:2214-5. [PubMed]
  13. D'Agostino J, Diana M, Vix M, et al. Three-dimensional metabolic and radiologic gathered evaluation using VR-RENDER fusion: a novel tool to enhance accuracy in the localization of parathyroid adenomas. World J Surg 2013;37:1618-25. [Crossref] [PubMed]
  14. Marzano E, Piardi T, Soler L, et al. Augmented reality-guided artery-first pancreatico-duodenectomy. J Gastrointest Surg 2013;17:1980-3. [Crossref] [PubMed]
  15. Pessaux P, Diana M, Soler L, et al. Robotic duodenopancreatectomy assisted with augmented reality and real-time fluorescence guidance. Surg Endosc 2014;28:2493-8. [Crossref] [PubMed]
  16. Haouchine N, Dequidt J, Berger MO, et al. Deformation-based augmented reality for hepatic surgery. Stud Health Technol Inform 2013;184:182-8. [PubMed]
  17. Diana M, Halvax P, Mertz D, et al. Improving Echo-Guided Procedures Using an Ultrasound-CT Image Fusion System. Surg Innov 2015;22:217-22. [Crossref] [PubMed]
  18. Diana M, Robinet E, Liu YY, et al. Confocal Imaging and Tissue-Specific Fluorescent Probes for Real-Time In Vivo Immunohistochemistry. Proof of the Concept in a Gastric Lymph Node Metastasis Model. Ann Surg Oncol 2015; [Epub ahead of print]. [Crossref] [PubMed]
  19. Diana M, Dallemagne B, Chung H, et al. Probe-based confocal laser endomicroscopy and fluorescence-based enhanced reality for real-time assessment of intestinal microcirculation in a porcine model of sigmoid ischemia. Surg Endosc 2014;28:3224-33. [Crossref] [PubMed]
  20. Diana M, Noll E, Charles AL, et al. Precision real-time evaluation of bowel perfusion: accuracy of confocal endomicroscopy assessment of stoma in a controlled hemorrhagic shock model. Surg Endosc 2016; [Epub ahead of print]. [Crossref] [PubMed]
  21. Weissleder R, Pittet MJ. Imaging in the era of molecular oncology. Nature 2008;452:580-9. [Crossref] [PubMed]
  22. Nguyen QT, Tsien RY. Fluorescence-guided surgery with live molecular navigation--a new cutting edge. Nat Rev Cancer 2013;13:653-62. [Crossref] [PubMed]
  23. Diana M, Agnus V, Halvax P, et al. Intraoperative fluorescence-based enhanced reality laparoscopic real-time imaging to assess bowel perfusion at the anastomotic site in an experimental model. Br J Surg 2015;102:e169-76. [Crossref] [PubMed]
  24. Diana M, Halvax P, Dallemagne B, et al. Real-time navigation by fluorescence-based enhanced reality for precise estimation of future anastomotic site in digestive surgery. Surg Endosc 2014;28:3108-18. [Crossref] [PubMed]
  25. Diana M, Noll E, Diemunsch P, et al. Enhanced-reality video fluorescence: a real-time assessment of intestinal viability. Ann Surg 2014;259:700-7. [Crossref] [PubMed]
Cite this article as: Marescaux J, Diana M. Looking at the future of surgery with the augmented eye. Ann Laparosc Endosc Surg 2016;1:36.
