Photo gallery: First micro device case · Collaboration · Slicer · Group Photo 2013

Our Approach

The Surgical Navigation and Robotics Laboratory focuses on the development of novel computer and engineering methods for image-guided therapy.

Our unique approach integrates imaging, computing, and robotics into a single unit to enhance the capability of image-guided therapy, with the aim of advancing minimally invasive therapy and ultimately developing new treatment methods.

As part of a clinical research program in a Harvard-affiliated hospital, we emphasize actual clinical applications of the methods we develop. We do science, engineering, and applications. The laboratory is under the direction of Dr. Nobuhiko Hata.

Our Mission

The Surgical Navigation and Robotics Laboratory enables more effective and less invasive image-guided therapy.

We fulfill this mission through a commitment to:

  • Developing innovative devices and mechanisms for robotic surgery
  • Inventing computer and engineering methods for surgical navigation
  • Applying the developed technologies in actual clinical cases and delivering unique feedback to the scientific research community
  • Sharing our research data, software, and device design with industry and academic peers
  • Building synergistic connections with scientific disciplines that are unaware of, or currently disconnected from, image-guided therapy

Recent Publications

Computer-based airway stenosis quantification from bronchoscopic images: preliminary results from a feasibility trial

Artur Banach, Masahito Naito, Franklin King, Fumitaro Masaki, Hisashi Tsukada, and Nobuhiko Hata. 2022. “Computer-based airway stenosis quantification from bronchoscopic images: preliminary results from a feasibility trial.” Int J Comput Assist Radiol Surg.
PURPOSE: Airway Stenosis (AS) is a condition of airway narrowing in the expiration phase. Bronchoscopy is a minimally invasive pulmonary procedure used to diagnose and/or treat AS. AS quantification in the form of the Stenosis Index (SI), whether subjective or digital, is necessary for the physician to decide on the most appropriate form of treatment. The literature reports that subjective SI estimation is inaccurate. In this paper, we propose an approach to quantify the SI, which defines the level of airway narrowing, using depth estimation from a bronchoscopic image. METHODS: In this approach we combined a generative depth estimation technique with depth thresholding to provide computer-based AS quantification. We performed an interim clinical analysis comparing the AS quantification performance of three expert bronchoscopists against the proposed computer-based method on seven patient datasets. RESULTS: The Mean Absolute Error of the subjective human-based and the proposed computer-based SI estimation was [Formula: see text] [%] and [Formula: see text] [%], respectively. The correlation coefficients between the CT measurements, used as the gold standard, and the human-based and computer-based SI estimations were [Formula: see text] and 0.46, respectively. CONCLUSIONS: We presented a new computer method to quantify the severity of AS in bronchoscopy using depth estimation and compared its performance against a human-based approach. The results suggest that the proposed computer-based AS quantification is a feasible tool with the potential to provide significant assistance to physicians in bronchoscopy.
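The final scoring step in such an approach is simple once a depth map is available: threshold the depth map to estimate the open lumen, then express the stenotic lumen area relative to a healthy reference segment. The minimal Python sketch below illustrates that idea; the threshold, the pixel-count area proxy, and the percent-narrowing formula are illustrative assumptions, not the parameters or exact definition used in the paper.

```python
import numpy as np

def lumen_fraction(depth_map: np.ndarray, depth_threshold: float) -> float:
    """Fraction of the bronchoscopic image classified as open lumen.

    Pixels deeper than `depth_threshold` are treated as open airway; the
    threshold and the pixel-count area proxy are illustrative assumptions.
    """
    return np.count_nonzero(depth_map > depth_threshold) / depth_map.size

def stenosis_index_percent(stenotic_area: float, reference_area: float) -> float:
    """SI in percent: relative narrowing of the stenotic segment with
    respect to a healthy reference segment (assumed definition)."""
    return 100.0 * (1.0 - stenotic_area / reference_area)

# Example with synthetic numbers (not values from the study):
si = stenosis_index_percent(stenotic_area=0.30, reference_area=0.80)
print(f"Stenosis Index: {si:.1f}%")  # -> Stenosis Index: 62.5%
```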

Predicting reachability to peripheral lesions in transbronchial biopsies using CT-derived geometrical attributes of the bronchial route

Masahito Naito, Fumitaro Masaki, Rebecca Lisk, Hisashi Tsukada, and Nobuhiko Hata. 2022. “Predicting reachability to peripheral lesions in transbronchial biopsies using CT-derived geometrical attributes of the bronchial route.” Int J Comput Assist Radiol Surg.
PURPOSE: The bronchoscopist's ability to locate the lesion with the bronchoscope is critical for a transbronchial biopsy. However, the transbronchial biopsy route itself has received much less study. This study aims to determine whether the geometrical attributes of the bronchial route can predict the difficulty of reaching tumors in bronchoscopic intervention. METHODS: This study included patients who underwent bronchoscopic diagnosis of lung tumors using electromagnetic navigation. The biopsy instrument was considered to have "reached" the lesion, and was recorded as such, if the tip of the tracked bronchoscope or extended working channel was in the tumor. Four geometrical indices were defined: local curvature (LC), plane rotation (PR), radius, and global relative angle. A Mann-Whitney U test and logistic regression analysis were performed to analyze the difference in geometrical indices between the reachable and unreachable groups. Receiver operating characteristic (ROC) analysis was performed to evaluate how well the geometrical indices predict reachability. RESULTS: Of the 41 patients enrolled in the study, 16 patients were assigned to the unreachable group and 25 patients to the reachable group. LC, PR, and radius had significantly higher values in unreachable cases than in reachable cases ([Formula: see text], [Formula: see text], [Formula: see text]). The logistic regression analysis showed that LC and PR were significantly associated with reachability ([Formula: see text], [Formula: see text]). The areas under the ROC curve for the LC and PR indices were 0.903 and 0.618, respectively. The LC cut-off value was 578.25. CONCLUSION: We investigated whether the geometrical attributes of the bronchial route to the lesion can predict the difficulty of reaching the lesion in bronchoscopic biopsy. LC, PR, and radius had significantly higher values in unreachable cases than in reachable cases, and the LC and PR indices can potentially be used to predict the navigational success of the bronchoscope.
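As a rough illustration of the statistical analysis described above, the sketch below fits a logistic regression of reachability on per-route geometrical indices and reports per-index and combined ROC AUC values. The feature names and the synthetic data are assumptions for illustration only; they are not the study's dataset or model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for per-route geometrical indices (one row per route):
# local curvature (LC), plane rotation (PR), and airway radius.
X = rng.normal(size=(41, 3))
# Synthetic labels: 1 = unreachable, 0 = reachable; larger LC/PR push toward
# "unreachable", mimicking the direction of the association reported above.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=41) > 0).astype(int)

model = LogisticRegression().fit(X, y)

# Per-index ROC AUC, analogous to the single-index ROC analysis in the abstract.
for name, column in zip(["LC", "PR", "radius"], X.T):
    print(name, roc_auc_score(y, column))

# AUC of the combined logistic model.
print("combined", roc_auc_score(y, model.predict_proba(X)[:, 1]))
```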

Automatic segmentation of prostate and extracapsular structures in MRI to predict needle deflection in percutaneous prostate intervention

Satoshi Kobayashi, Franklin King, and Nobuhiko Hata. 2022. “Automatic segmentation of prostate and extracapsular structures in MRI to predict needle deflection in percutaneous prostate intervention.” Int J Comput Assist Radiol Surg.
PURPOSE: Understanding the three-dimensional anatomy of percutaneous intervention in prostate cancer is essential to avoid complications. Recently, attempts have been made to use machine learning to automate the segmentation of functional structures such as the prostate gland, rectum, and bladder. However, little material is available for segmenting the extracapsular structures that are known to cause needle deflection during percutaneous interventions. This research aims to explore the feasibility of automatic segmentation of the prostate and extracapsular structures to predict needle deflection. METHODS: Using pelvic magnetic resonance images (MRIs), a 3D U-Net was trained and optimized for the prostate and extracapsular structures (bladder, rectum, pubic bone, pelvic diaphragm muscle, bulbospongiosus muscle, bulb of the penis, ischiocavernosus muscle, crus of the penis, transverse perineal muscle, obturator internus muscle, and seminal vesicle). Segmentation accuracy was validated by feeding intra-procedural MRIs into the 3D U-Net to segment the prostate and extracapsular structures in the image. The segmented structures were then used to predict the deflected needle path in in-bore MRI-guided biopsy using a model-based approach. RESULTS: The 3D U-Net yielded Dice scores of 0.61-0.83 for parenchymal organs such as the prostate, bladder, rectum, bulb of the penis, and crus of the penis, but lower scores (0.03-0.31) for muscle structures, with the exception of the obturator internus muscle (0.71). The 3D U-Net showed higher Dice scores for functional structures ([Formula: see text]0.001) and complication-related structures ([Formula: see text]0.001). The segmentation of extracapsular anatomies helped to predict the deflected needle path in MRI-guided prostate interventions with an accuracy of 0.9 to 4.9 mm. CONCLUSION: Our segmentation method using 3D U-Net provided an accurate anatomical understanding of the prostate and extracapsular structures. In addition, our method was suitable for segmenting functional and complication-related structures. Finally, 3D models of the prostate and extracapsular structures could be used to simulate the needle pathway and predict needle deflection.
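For reference, the Dice score used to report segmentation accuracy above measures the volumetric overlap between a predicted and a reference label map. Below is a minimal NumPy version, applied to synthetic masks that stand in for actual segmentations.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks:
    2*|A ∩ B| / (|A| + |B|); 1.0 is perfect overlap, 0.0 is none."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(2.0 * np.logical_and(pred, truth).sum() / denom)

# Synthetic 3D masks standing in for a predicted and a reference segmentation.
rng = np.random.default_rng(1)
truth = rng.random((64, 64, 32)) > 0.7
pred = np.logical_and(truth, rng.random((64, 64, 32)) > 0.2)  # under-segmented copy
print(f"Dice: {dice_score(pred, truth):.2f}")
```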

The Translational and Regulatory Development of an Implantable Microdevice for Multiple Drug Sensitivity Measurements in Cancer Patients

Christine Dominas, Sharath Bhagavatula, Elizabeth Stover, Kyle Deans, Cecilia Larocca, Yolanda Colson, Pier Paolo Peruzzi, Adam Kibel, Nobuhiko Hata, Lillian Tsai, Yin Hung, Robert Packard, and Oliver Jonas. 2022. “The Translational and Regulatory Development of an Implantable Microdevice for Multiple Drug Sensitivity Measurements in Cancer Patients.” IEEE Trans Biomed Eng, 69, 1, Pp. 412-421.
OBJECTIVE: The purpose of this article is to report the translational process of an implantable microdevice platform with an emphasis on the technical and engineering adaptations for patient use, regulatory advances, and successful integration into clinical workflow. METHODS: We developed design adaptations for implantation and retrieval, established ongoing monitoring and testing, and facilitated regulatory advances that enabled the administration and examination of a large set of cancer therapies simultaneously in individual patients. RESULTS: Six applications for oncology studies have successfully proceeded to patient trials, with future applications in progress. CONCLUSION: First-in-human translation required engineering design changes to enable implantation and retrieval that fit with existing clinical workflows, a regulatory strategy that enabled both delivery and response measurement of up to 20 agents in a single patient, and establishment of novel testing and quality control processes for a drug/device combination product without clear precedents. SIGNIFICANCE: This manuscript provides a real-world account and roadmap on how to advance from animal proof-of-concept into the clinic, confronting the question of how to use research to benefit patients.

Rapid Quality Assessment of Nonrigid Image Registration Based on Supervised Learning

Eung-Joo Lee, William Plishker, Nobuhiko Hata, Paul B Shyn, Stuart G Silverman, Shuvra S Bhattacharyya, and Raj Shekhar. 2021. “Rapid Quality Assessment of Nonrigid Image Registration Based on Supervised Learning.” J Digit Imaging, 34, 6, Pp. 1376-1386.
When preprocedural images are overlaid on intraprocedural images, interventional procedures benefit in that more structures are revealed in intraprocedural imaging. However, image artifacts, respiratory motion, and challenging scenarios could limit the accuracy of multimodality image registration necessary before image overlay. Ensuring the accuracy of registration during interventional procedures is therefore critically important. The goal of this study was to develop a novel framework that has the ability to assess the quality (i.e., accuracy) of nonrigid multimodality image registration accurately in near real time. We constructed a solution using registration quality metrics that can be computed rapidly and combined to form a single binary assessment of image registration quality as either successful or poor. Based on expert-generated quality metrics as ground truth, we used a supervised learning method to train and test this system on existing clinical data. Using the trained quality classifier, the proposed framework identified successful image registration cases with an accuracy of 81.5%. The current implementation produced the classification result in 5.5 s, fast enough for typical interventional radiology procedures. Using supervised learning, we have shown that the described framework could enable a clinician to obtain confirmation or caution of registration results during clinical procedures.
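Conceptually, the framework described above maps a small set of rapidly computable registration-quality metrics to a binary successful/poor decision with a supervised classifier. The sketch below illustrates that idea with a random forest on synthetic features; the metric names, classifier choice, and data are assumptions and do not reproduce the paper's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(2)

# Hypothetical per-registration quality metrics (e.g., an intensity-similarity
# measure, deformation-field smoothness, landmark residuals); synthetic here.
X = rng.normal(size=(200, 4))
# Synthetic expert labels: 1 = successful registration, 0 = poor.
y = (X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.7, size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```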

Visually Navigated Bronchoscopy using three cycle-Consistent generative adversarial network for depth estimation

Artur Banach, Franklin King, Fumitaro Masaki, Hisashi Tsukada, and Nobuhiko Hata. 2021. “Visually Navigated Bronchoscopy using three cycle-Consistent generative adversarial network for depth estimation.” Med Image Anal, 73, Pp. 102164.
[Background] Electromagnetically Navigated Bronchoscopy (ENB) is currently the state of the art in diagnostic and interventional bronchoscopy. CT-to-body divergence is a critical hurdle in ENB, causing navigation error and ultimately limiting the clinical efficacy of diagnosis and treatment. In this study, Visually Navigated Bronchoscopy (VNB) is proposed to address this issue of CT-to-body divergence. [Materials and Methods] We extended and validated an unsupervised learning method that generates a depth map directly from bronchoscopic images using a Three Cycle-Consistent Generative Adversarial Network (3cGAN) and registers the depth map to preprocedural CTs. We tested the working hypothesis that the proposed VNB can be integrated into a navigated bronchoscopy system based on 3D Slicer and can accurately register bronchoscopic images to pre-procedural CTs to navigate transbronchial biopsies. The quantitative metrics used to assess this hypothesis were the Absolute Tracking Error (ATE) of the tracking and the Target Registration Error (TRE) of the total navigation system. We validated our method on phantoms produced from the pre-procedural CTs of five patients who underwent ENB and on two ex-vivo pig lung specimens. [Results] The ATE using 3cGAN was 6.2 +/- 2.9 [mm]. The ATE of 3cGAN was statistically significantly lower than that of cGAN, particularly in the trachea and lobar bronchus (p < 0.001). The TRE of the proposed method ranged from 11.7 to 40.5 [mm]. The TRE computed by 3cGAN was statistically significantly smaller than that computed by cGAN in two of the five cases enrolled (p < 0.05). [Conclusion] VNB using 3cGAN to generate the depth maps was technically and clinically feasible. While the accuracy of tracking by cGAN was acceptable, the TRE warrants further investigation and improvement.
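Both error metrics quoted above reduce to Euclidean distances between corresponding points in a common coordinate frame. The sketch below shows how ATE and TRE could be computed from tracked and reference positions; the point arrays are placeholders, not study data.

```python
import numpy as np

def absolute_tracking_error(tracked: np.ndarray, reference: np.ndarray) -> float:
    """Mean Euclidean distance between tracked bronchoscope positions and
    the corresponding reference positions along the path (N x 3 arrays, mm)."""
    return float(np.linalg.norm(tracked - reference, axis=1).mean())

def target_registration_error(target_ct: np.ndarray, target_mapped: np.ndarray) -> float:
    """Euclidean distance between a target defined in CT space and the same
    target mapped through the navigation system's registration (3-vectors, mm)."""
    return float(np.linalg.norm(target_ct - target_mapped))

# Placeholder positions in millimetres, not values from the study.
tracked = np.array([[0.0, 0.0, 0.0], [10.0, 1.0, 0.5], [20.0, 2.5, 1.0]])
reference = np.array([[0.5, 0.2, 0.1], [9.0, 0.8, 0.2], [21.0, 2.0, 1.5]])
print("ATE [mm]:", absolute_tracking_error(tracked, reference))
print("TRE [mm]:", target_registration_error(np.array([30.0, 5.0, 2.0]),
                                             np.array([32.0, 6.0, 2.5])))
```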