
Our Approach

The Surgical Navigation and Robotics Laboratory focuses on the development of novel computer and engineering methods for image-guided therapy.

Our unique approach integrates imaging, computing, and robotics into one unit to enhance the capability of image-guided therapy, with the aim of advancing minimally invasive therapy and ultimately developing new treatment methods.

As part of a clinical research program in a Harvard-affiliated hospital, we stress actual clinical applications of the methods we develop. We do science, engineering, and applications. The laboratory is directed by Dr. Nobuhiko Hata.

Our Mission

The Surgical Navigation and Robotics Laboratory enables more effective and less invasive image-guided therapy.

We fulfill this mission through a commitment to:

  • Developing innovative devices and mechanisms for robotic surgery
  • Inventing computer and engineering methods for surgical navigation
  • Applying the developed technologies in actual clinical cases and delivering unique feedback to the scientific research community
  • Sharing our research data, software, and device design with industry and academic peers
  • Building synergistic connections with scientific disciplines that are unaware of, or presently disconnected from, image-guided therapy

Recent Publications

Visually Navigated Bronchoscopy using three cycle-Consistent generative adversarial network for depth estimation

Artur Banach, Franklin King, Fumitaro Masaki, Hisashi Tsukada, and Nobuhiko Hata. 2021. “Visually Navigated Bronchoscopy using three cycle-Consistent generative adversarial network for depth estimation.” Med Image Anal, 73, Pp. 102164.
[Background] Electromagnetically Navigated Bronchoscopy (ENB) is currently the state-of-the-art diagnostic and interventional bronchoscopy. CT-to-body divergence is a critical hurdle in ENB, causing navigation error and ultimately limiting the clinical efficacy of diagnosis and treatment. In this study, Visually Navigated Bronchoscopy (VNB) is proposed to address the aforementioned issue of CT-to-body divergence. [Materials and Methods] We extended and validated an unsupervised learning method to generate a depth map directly from bronchoscopic images using a Three Cycle-Consistent Generative Adversarial Network (3cGAN) and to register the depth map to pre-procedural CTs. We tested the working hypothesis that the proposed VNB can be integrated into the navigated bronchoscopic system based on 3D Slicer, and accurately register bronchoscopic images to pre-procedural CTs to navigate transbronchial biopsies. The quantitative metrics used to assess this hypothesis were the Absolute Tracking Error (ATE) of the tracking and the Target Registration Error (TRE) of the total navigation system. We validated our method on phantoms produced from the pre-procedural CTs of five patients who underwent ENB and on two ex-vivo pig lung specimens. [Results] The ATE using 3cGAN was 6.2 +/- 2.9 [mm]. The ATE of 3cGAN was statistically significantly lower than that of cGAN, particularly in the trachea and lobar bronchus (p < 0.001). The TRE of the proposed method had a range of 11.7 to 40.5 [mm]. The TRE computed by 3cGAN was statistically significantly smaller than that computed by cGAN in two of the five cases enrolled (p < 0.05). [Conclusion] VNB, using 3cGAN to generate the depth maps, was technically and clinically feasible. While the accuracy of tracking by cGAN was acceptable, the TRE warrants further investigation and improvement.
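The ATE and TRE reported above are, at their core, mean Euclidean distances between corresponding point sets (tracked vs. ground-truth positions, or registered vs. target positions). A minimal, stdlib-only sketch of such a metric; the point values below are illustrative, not the study's data:

```python
import math

def mean_point_error(estimated, reference):
    """Mean Euclidean distance between corresponding 3D points,
    the core of tracking/registration metrics such as ATE and TRE."""
    dists = [math.dist(p, q) for p, q in zip(estimated, reference)]
    return sum(dists) / len(dists)

# Illustrative tracked vs. ground-truth positions in millimeters (not study data)
tracked = [(0.0, 0.0, 0.0), (10.0, 0.0, 0.0), (10.0, 10.0, 0.0)]
truth = [(1.0, 0.0, 0.0), (10.0, 2.0, 0.0), (10.0, 10.0, 3.0)]
print(mean_point_error(tracked, truth))  # → 2.0
```

In practice ATE is computed over a tracked trajectory, while TRE is computed at biopsy targets after the full navigation chain.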

The translational and regulatory development of an implantable microdevice for multiple drug sensitivity measurements in cancer patients

Christine Dominas, Sharath Bhagavatula, Elizabeth H Stover, Kyle Deans, Cecilia Larocca, Yolonda Lorig Colson, Pier Paolo Peruzzi, Adam S Kibel, Nobuhiko Hata, Lillian L Tsai, Yin P Hung, Rob Packard, and Oliver Jonas. 2021. “The translational and regulatory development of an implantable microdevice for multiple drug sensitivity measurements in cancer patients.” IEEE Trans Biomed Eng, PP.
OBJECTIVE: The purpose of this article is to report the translational process of an implantable microdevice platform with an emphasis on the technical and engineering adaptations for patient use, regulatory advances, and successful integration into clinical workflow. METHODS: We developed design adaptations for implantation and retrieval, established ongoing monitoring and testing, and facilitated regulatory advances that enabled the administration and examination of a large set of cancer therapies simultaneously in individual patients. RESULTS: Six applications for oncology studies have successfully proceeded to patient trials, with future applications in progress. CONCLUSION: First-in-human translation required engineering design changes to enable implantation and retrieval that fit with existing clinical workflows, a regulatory strategy that enabled both delivery and response measurement of up to 20 agents in a single patient, and establishment of novel testing and quality control processes for a drug/device combination product without clear precedents. SIGNIFICANCE: This manuscript provides a real-world account and roadmap on how to advance from animal proof-of-concept into the clinic, confronting the question of how to use research to benefit patients.

Technical validation of multi-section robotic bronchoscope with first person view control for transbronchial biopsies of peripheral lung

Fumitaro Masaki, Franklin King, Takahisa Kato, Hisashi Tsukada, Yolonda Lorig Colson, and Nobuhiko Hata. 2021. “Technical validation of multi-section robotic bronchoscope with first person view control for transbronchial biopsies of peripheral lung.” IEEE Trans Biomed Eng, PP.
This study aims to validate the advantage of a new engineering method to maneuver a multi-section robotic bronchoscope with first person view control in transbronchial biopsy. Six physician operators were recruited and tasked to operate a manual and a robotic bronchoscope to the peripheral area in patient-derived lung phantoms. The metrics collected were the furthest generation count of the airway the bronchoscope reached, the force incurred on the phantoms, and the NASA Task Load Index (NASA-TLX). The furthest generation counts of the airway the physicians reached using the manual and the robotic bronchoscopes were 6.6 +/- 1.2 and 6.7 +/- 0.8, respectively. The robotic bronchoscope successfully reached the 5th generation count into the peripheral area of the airway, while the manual bronchoscope typically failed earlier, in the 3rd generation. More force was incurred on the airway when the manual bronchoscope was used (0.24 +/- 0.20 [N]) than when the robotic bronchoscope was applied (0.18 +/- 0.22 [N], p<0.05). The manual bronchoscope imposed more physical demand than the robotic bronchoscope by NASA-TLX score (55 +/- 24 vs 19 +/- 16, p<0.05). These results indicate that a robotic bronchoscope facilitates the advancement of the bronchoscope to the peripheral area with less physical demand on physician operators. The metrics collected in this study are expected to serve as a benchmark for the future development of robotic bronchoscopes.
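The force and workload comparisons above report p-values from two-sample tests; when the two groups have unequal variances, Welch's t-test is the usual choice. A stdlib-only sketch of the statistic (the abstract does not state which test variant was used, and the sample values below are illustrative, not the study's measurements):

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

# Illustrative force readings in newtons (not the study's data)
manual = [0.45, 0.30, 0.21, 0.12, 0.11, 0.25]
robotic = [0.20, 0.15, 0.31, 0.05, 0.22, 0.14]
t_stat = welch_t(manual, robotic)
```

The p-value then follows from the t distribution with Welch-Satterthwaite degrees of freedom, typically obtained from a statistics package.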

Computer vision-guided bronchoscopic navigation using dual CNN-generated depth images and ICP registration

Xinqi Liu, Jonah Berg, Franklin King, and Nobuhiko Hata. 2020. “Computer vision-guided bronchoscopic navigation using dual CNN-generated depth images and ICP registration.” In Medical Imaging 2020: Image-Guided Procedures, Robotic Interventions, and Modeling, edited by Baowei Fei and Cristian A. Linte, 11315: Pp. 607–612. International Society for Optics and Photonics.
Navigated bronchoscopy for lung biopsy using an electromagnetic (EM) sensor is often inaccurate due to patient breathing movement during procedures. The objective of this study is to evaluate whether registration of neural network-generated depth images can localize the bronchoscope in navigated bronchoscopy, negating the need for an EM sensor and the error caused by breathing motion. [Methods] Dual CNN-generated depth images followed by chained ICP registration were validated in the study. Accuracy was measured by the error between the location after registration and the location of the standard electromagnetic sensor. The difference in accuracy between regions that the neural networks had trained on (seen regions) and regions the networks had never encountered (unseen regions) was evaluated. [Results] The data collected point to the success of the bronchoscopic localization. The overall mean error of accuracy was 8.75 mm and the overall standard deviation was 4.76 mm. For the seen region, the mean error was 6.10 mm and the standard deviation was 2.65 mm. For the unseen region, the mean error was 11.6 mm and the standard deviation was 4.87 mm. The results of the two-sample t-test show that there is a statistically significant difference between the unseen and the seen regions. [Conclusion] The results for registration demonstrate that this technique has the potential to be implemented in navigational bronchoscopy. The technique produced less error than the electromagnetic sensor in practice, especially accounting for the estimated practical error due to the experimental setup.
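The abstract does not give implementation details of the chained ICP registration, but the generic rigid point-to-point ICP it builds on (nearest-neighbor matching plus the Kabsch SVD solution for the best rotation and translation) can be sketched as follows. A numpy-based illustration, not the paper's code:

```python
import numpy as np

def icp(source, target, iters=20):
    """Rigid point-to-point ICP: repeatedly match each source point to its
    nearest target point, then solve for the optimal rotation/translation
    with the Kabsch (SVD) method and compose it into the running transform."""
    src, tgt = np.asarray(source, float), np.asarray(target, float)
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        moved = src @ R.T + t
        # Brute-force nearest-neighbor correspondences
        idx = np.argmin(np.linalg.norm(moved[:, None] - tgt[None, :], axis=2), axis=1)
        matched = tgt[idx]
        # Kabsch: optimal rigid transform from `moved` onto `matched`
        mu_m, mu_t = moved.mean(axis=0), matched.mean(axis=0)
        H = (moved - mu_m).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
        R_step = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
        # Compose the incremental step with the running transform
        R = R_step @ R
        t = R_step @ t + (mu_t - R_step @ mu_m)
    return R, t
```

In the navigation setting, `source` would be the depth-image-derived point cloud and `target` the CT-derived airway surface points; real systems use a spatial index (e.g. a k-d tree) instead of the brute-force matching shown here.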

Ring-arrayed Forward-viewing Ultrasound Imaging System: A Feasibility Study

Ryosuke Tsumura, Doua P Vang, Nobuhiko Hata, and Haichong K Zhang. 2020. “Ring-arrayed Forward-viewing Ultrasound Imaging System: A Feasibility Study.” Proc SPIE Int Soc Opt Eng, 11319.
Current standard workflows of ultrasound (US)-guided needle insertion require physicians to use both hands: holding the US probe with the non-dominant hand to locate areas of interest, and the needle with the dominant hand. This is due to the separation of functionalities for localization and needle insertion. This requirement not only makes the procedure cumbersome, but also limits the reliability of guidance, given that the positional relationship between the needle and the US images is unknown and must be interpreted from experience and assumption. Although US-guided needle insertion may be assisted through navigation systems, recovering the positional relationship between the needle and the US images requires external tracking systems and image-based tracking algorithms that may introduce registration inaccuracy. Therefore, there is an unmet need for a solution that provides simple and intuitive needle localization and insertion to improve the conventional US-guided procedure. In this work, we propose a new device concept based on a ring-arrayed forward-viewing (RAF) ultrasound imaging system. The proposed system comprises ring-arrayed transducers and an open hole inside the ring through which the needle can be inserted. The ring array provides forward-viewing US images in which the needle path is always maintained at the center of the reconstructed image without requiring any registration. As a proof of concept, we designed single-circle ring-arrayed configurations with different radii and visualized point targets using forward-viewing US imaging through simulations and phantom experiments. The results demonstrated successful target visualization and indicate that ring-arrayed US imaging has the potential to make the US-guided needle insertion procedure simpler and more intuitive.
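The geometric idea behind the RAF design can be illustrated with a small sketch: for a target on the central axis of the ring (the needle path), the pulse-echo time of flight is identical for every element, which is why the needle path stays centered in the reconstructed image without any registration. A stdlib-only sketch; the element count, ring radius, and sound speed below are illustrative assumptions, not the paper's parameters:

```python
import math

def ring_elements(radius, n):
    """Element centers of an n-element ring array in the z=0 plane (meters)."""
    return [(radius * math.cos(2 * math.pi * k / n),
             radius * math.sin(2 * math.pi * k / n),
             0.0) for k in range(n)]

def roundtrip_delays(elements, target, c=1540.0):
    """Pulse-echo time of flight (s) from each element to a target and back,
    assuming a nominal soft-tissue sound speed c in m/s."""
    return [2.0 * math.dist(e, target) / c for e in elements]

# A 16-element ring of 5 mm radius; a target 20 mm ahead on the needle axis
elems = ring_elements(0.005, 16)
delays = roundtrip_delays(elems, (0.0, 0.0, 0.02))
```

For the on-axis target all delays coincide; an off-axis target breaks the symmetry, and delay-and-sum beamforming over these per-element delays is what reconstructs the forward-viewing image.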