TY - CHAP
T1 - Fiducial Marker Based Patient-to-Robot Registration and Target Tracking for Automatic Extra-body Ultrasound Imaging
AU - Zheng, Yixuan
AU - Wang, Weizhao
AU - Ferhusoglu, Hamza
AU - Subramaniyam, Tharun
AU - Xu, Zhouyang
AU - Housden, James
AU - Rhode, Kawal
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024/8/28
Y1 - 2024/8/28
AB - Ultrasound imaging is recognized for its safety, non-invasiveness, and radiation-free real-time visualization capabilities. In contrast to the complete automation achieved in MRI and CT modalities, ultrasound scanning remains reliant on operator skill, limiting its accessibility. Despite ongoing research into force control and automated image analysis for ultrasound robots, current systems show low autonomy, particularly in determining the initial scanning position. To address this gap, our study introduces an efficient method for autonomous initial probe positioning. This technique combines a pair of augmented reality (AR) markers with computer vision algorithms for patient-to-robot registration. This empowers the robotic system to autonomously determine the patient's position and move the scanning probe to an optimal starting location, independent of the camera position. We evaluated our method on phantoms and a healthy volunteer. Across 70 target poses evaluated to confirm repeatability, the average depth error, measured as the distance between the center of the ultrasound probe face and the patient surface, was 8.2 mm, and the average xy-plane error was 5.5 mm. The target pose was dynamically updated in real time, ensuring that the robot can return to the original scanning location even when the patient moves. These results not only confirm the system's capability to autonomously identify scanning start points and follow the target in real time but also provide a foundational basis for the development of a fully autonomous robotic ultrasound system.
UR - http://www.scopus.com/inward/record.url?scp=85208241635&partnerID=8YFLogxK
U2 - 10.1109/CASE59546.2024.10711463
DO - 10.1109/CASE59546.2024.10711463
M3 - Conference paper
T3 - IEEE International Conference on Automation Science and Engineering
SP - 2117
EP - 2122
BT - 2024 IEEE 20th International Conference on Automation Science and Engineering, CASE 2024
PB - IEEE
ER -