PURPOSE: To develop and assess a needle-guiding manipulator for MRI-guided therapy that allows a physician to freely select the needle insertion path while maintaining a remote center of motion (RCM) at the tumor site.
MATERIALS AND METHODS: The manipulator consists of a three-degrees-of-freedom (DOF) base stage and a passive needle holder with unconstrained two-DOF rotation. Synergistic control keeps the virtual RCM at the preplanned target, using encoder outputs from the needle holder as inputs to motorize the base stage.
RESULTS: The manipulator assists in searching for an optimal needle insertion path, a complex and time-consuming task in MRI-guided ablation therapy for liver tumors. The assessment study showed that the accuracy of maintaining the virtual RCM at a predefined position was 3.0 mm. In a phantom test, physicians found the needle insertion path faster with the manipulator than without it (number of physicians = 3, P = 0.001). However, the alignment time with the virtual RCM was not shorter when the imaging time for planning was considered.
CONCLUSION: The study indicated that the robot holds promise as a tool for accurately and interactively selecting the optimal needle insertion path in liver ablation therapy guided by open-configuration MRI.
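The virtual-RCM scheme described above amounts to a geometric constraint: whenever the passive needle holder is re-oriented, the motorized base stage must translate so that the needle axis still passes through the preplanned pivot point. A minimal sketch of that constraint is shown below; the spherical-angle parameterization, the function names, and the fixed holder-to-RCM offset are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def needle_direction(theta, phi):
    # Unit vector along the needle axis from spherical encoder angles (radians).
    return np.array([
        np.sin(theta) * np.cos(phi),
        np.sin(theta) * np.sin(phi),
        np.cos(theta),
    ])

def base_stage_target(rcm, theta, phi, holder_offset):
    """Where the base stage must move the needle holder so that the needle
    axis passes through the virtual RCM.

    rcm           : 3-vector, preplanned virtual RCM (e.g., the tumor site)
    theta, phi    : needle-holder encoder angles (radians)
    holder_offset : distance from the holder origin to the RCM along the needle
    """
    # Back off from the RCM along the current needle direction.
    return rcm - holder_offset * needle_direction(theta, phi)
```

In this sketch, each new encoder reading simply re-solves the same closed-form expression, which is why the pivot stays fixed no matter how the physician swivels the holder.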
[Background] Electromagnetically Navigated Bronchoscopy (ENB) is currently the state of the art in diagnostic and interventional bronchoscopy. CT-to-body divergence is a critical hurdle in ENB, causing navigation error and ultimately limiting the clinical efficacy of diagnosis and treatment. In this study, Visually Navigated Bronchoscopy (VNB) is proposed to address this issue of CT-to-body divergence. [Materials and Methods] We extended and validated an unsupervised learning method that generates a depth map directly from bronchoscopic images using a Three Cycle-Consistent Generative Adversarial Network (3cGAN) and registers the depth map to pre-procedural CTs. We tested the working hypothesis that the proposed VNB can be integrated into a navigated bronchoscopy system based on 3D Slicer and can accurately register bronchoscopic images to pre-procedural CTs to navigate transbronchial biopsies. The quantitative metrics used to assess this hypothesis were the Absolute Tracking Error (ATE) of the tracking and the Target Registration Error (TRE) of the total navigation system. We validated our method on phantoms produced from the pre-procedural CTs of five patients who underwent ENB and on two ex-vivo pig lung specimens. [Results] The ATE using 3cGAN was 6.2 +/- 2.9 mm. The ATE of 3cGAN was statistically significantly lower than that of cGAN, particularly in the trachea and lobar bronchus (p < 0.001). The TRE of the proposed method ranged from 11.7 to 40.5 mm. The TRE computed by 3cGAN was statistically significantly smaller than that computed by cGAN in two of the five cases enrolled (p < 0.05). [Conclusion] VNB using 3cGAN to generate depth maps was technically and clinically feasible. While the tracking accuracy of 3cGAN was acceptable, the TRE warrants further investigation and improvement.
OBJECTIVE: The purpose of this article is to report the translational process of an implantable microdevice platform with an emphasis on the technical and engineering adaptations for patient use, regulatory advances, and successful integration into clinical workflow.
METHODS: We developed design adaptations for implantation and retrieval, established ongoing monitoring and testing, and facilitated regulatory advances that enabled the administration and examination of a large set of cancer therapies simultaneously in individual patients.
RESULTS: Six applications for oncology studies have successfully proceeded to patient trials, with future applications in progress.
CONCLUSION: First-in-human translation required engineering design changes to enable implantation and retrieval that fit with existing clinical workflows, a regulatory strategy that enabled both delivery and response measurement of up to 20 agents in a single patient, and establishment of novel testing and quality control processes for a drug/device combination product without clear precedents.
SIGNIFICANCE: This manuscript provides a real-world account and roadmap on how to advance from animal proof-of-concept into the clinic, confronting the question of how to use research to benefit patients.
This study aims to validate the advantage of a new engineering method for maneuvering a multi-section robotic bronchoscope with first-person-view control in transbronchial biopsy. Six physician operators were recruited and tasked with advancing a manual and a robotic bronchoscope into the peripheral airways of patient-derived lung phantoms. The metrics collected were the furthest airway generation the bronchoscope reached, the force incurred on the phantoms, and the NASA Task Load Index (NASA-TLX). The furthest airway generations the physicians reached using the manual and the robotic bronchoscopes were 6.6 +/- 1.2 and 6.7 +/- 0.8, respectively. The robotic bronchoscope successfully reached the 5th airway generation into the peripheral area, while the manual bronchoscope typically failed earlier, at the 3rd generation. More force was incurred on the airway with the manual bronchoscope (0.24 +/- 0.20 N) than with the robotic bronchoscope (0.18 +/- 0.22 N, p < 0.05). The manual bronchoscope imposed more physical demand than the robotic bronchoscope by NASA-TLX score (55 +/- 24 vs 19 +/- 16, p < 0.05). These results indicate that a robotic bronchoscope facilitates advancement into the peripheral airway with less physical demand on physician operators. The metrics collected in this study are expected to serve as a benchmark for the future development of robotic bronchoscopes.
Navigated bronchoscopy for lung biopsy using an electromagnetic (EM) sensor is often inaccurate due to patient breathing motion during procedures. The objective of this study is to evaluate whether registration of neural-network-generated depth images can localize the bronchoscope in navigated bronchoscopy, negating the need for an EM sensor and the error caused by breathing motion. [Methods] Dual CNN-generated depth images followed by chained ICP registration were validated in the study. Accuracy was measured as the error between the location after registration and the location reported by the standard electromagnetic sensor. The difference in accuracy between regions the neural networks had trained on (seen regions) and regions the networks had never encountered (unseen regions) was also evaluated. [Results] The collected data point to the success of the bronchoscopic localization. The overall mean error was 8.75 mm with a standard deviation of 4.76 mm. For the seen regions, the mean error was 6.10 mm with a standard deviation of 2.65 mm. For the unseen regions, the mean error was 11.6 mm with a standard deviation of 4.87 mm. A two-sample t-test showed a statistically significant difference between the seen and unseen regions. [Conclusion] The registration results demonstrate that this technique has the potential to be implemented in navigated bronchoscopy. The technique produced less error than the electromagnetic sensor in practice, especially when accounting for the estimated practical error due to the experimental setup.
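The ICP registration step in a pipeline like the one above rigidly aligns a depth-derived point cloud to points sampled from the pre-procedural CT, alternating nearest-neighbor matching with a closed-form (Kabsch) alignment. The sketch below is a minimal rigid ICP under simplifying assumptions (brute-force nearest neighbors, noise-free clouds, small initial misalignment); it is illustrative, not the study's implementation.

```python
import numpy as np

def kabsch(P, Q):
    # Closed-form rigid transform (R, t) minimizing ||R @ p + t - q|| over pairs.
    cP, cQ = P.mean(0), Q.mean(0)
    H = (P - cP).T @ (Q - cQ)               # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

def icp(source, target, iters=30):
    # Register `source` (e.g., depth-image point cloud) to `target` (CT points).
    R_total, t_total = np.eye(3), np.zeros(3)
    src = source.copy()
    for _ in range(iters):
        # Brute-force nearest neighbors; fine for small clouds.
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(-1)
        matched = target[d2.argmin(1)]
        R, t = kabsch(src, matched)
        src = src @ R.T + t
        # Compose the incremental transform into the accumulated one.
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total
```

A "chained" variant would seed each frame's ICP with the previous frame's result, so only the incremental motion needs to be recovered at each step.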
Current standard workflows of ultrasound (US)-guided needle insertion require physicians to use both hands: the non-dominant hand holds the US probe to locate areas of interest, while the dominant hand holds the needle. This is due to the separation of the localization and needle-insertion functions. The requirement not only makes the procedure cumbersome but also limits the reliability of guidance, given that the positional relationship between the needle and the US images is unknown and must be inferred from experience and assumption. Although US-guided needle insertion may be assisted by navigation systems, recovering the positional relationship between the needle and the US images requires external tracking systems and image-based tracking algorithms that may introduce registration inaccuracy. Therefore, there is an unmet need for a solution that provides simple and intuitive needle localization and insertion to improve the conventional US-guided procedure. In this work, we propose a new device concept based on a ring-arrayed forward-viewing (RAF) ultrasound imaging system. The proposed system comprises ring-arrayed transducers and an open hole inside the ring through which the needle can be inserted. The ring array provides forward-viewing US images in which the needle path is always maintained at the center of the reconstructed image, without requiring any registration. As a proof of concept, we designed single-circle ring-arrayed configurations with different radii and visualized point targets using forward-viewing US imaging in simulations and phantom experiments. The results demonstrated successful target visualization and indicate that ring-arrayed US imaging has the potential to make the US-guided needle insertion procedure simpler and more intuitive.
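Forward-viewing reconstruction from a single-circle ring array can be sketched as delay-and-sum beamforming: for each image point in front of the ring, the echo samples from all elements are time-aligned by their round-trip distances and summed. The monostatic (per-element pulse-echo) approximation, parameter values, and function names below are illustrative assumptions, not the proposed system's actual beamformer.

```python
import numpy as np

SPEED_OF_SOUND = 1540.0  # m/s, nominal soft-tissue value

def ring_elements(radius, n_elements):
    # Element positions on a single circle in the z = 0 plane; the needle
    # passes through the open hole at the center, along the +z axis.
    a = 2 * np.pi * np.arange(n_elements) / n_elements
    return np.stack([radius * np.cos(a), radius * np.sin(a), np.zeros_like(a)], axis=1)

def das_image(rf, fs, elements, grid):
    """Delay-and-sum reconstruction of forward-viewing pixels.

    rf       : (n_elements, n_samples) pulse-echo RF data, t = 0 at transmit
    fs       : sampling rate [Hz]
    elements : (n_elements, 3) element positions [m]
    grid     : (n_pixels, 3) image points in front of the ring [m]
    """
    img = np.zeros(len(grid))
    for pix_idx, p in enumerate(grid):
        # Two-way travel time per element (monostatic approximation).
        dist = np.linalg.norm(elements - p, axis=1)
        samples = np.round(2.0 * dist / SPEED_OF_SOUND * fs).astype(int)
        valid = samples < rf.shape[1]
        img[pix_idx] = rf[np.arange(len(elements))[valid], samples[valid]].sum()
    return img
```

Because every element is equidistant from any on-axis point, echoes from a target on the needle path add coherently at the image center, which is the geometric reason the needle path needs no registration in this concept.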