Abstract
Introduction
This study presents a Deep Learning framework that determines, in real time, the position and rotation of a target organ from endoscopic video. The inferred pose is used to overlay the 3D model of the patient's organ onto its real counterpart, and the resulting augmented video stream is fed back to the surgeon as support during robot-assisted laparoscopic procedures.
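As an illustration only, the overall augmentation loop could be sketched as below; `estimate_pose` and `render_overlay` are hypothetical stand-ins for the paper's Deep Learning pose estimator and 3D-model renderer, not its actual code.

```python
# Minimal sketch of the real-time augmentation loop (assumptions noted inline).
import cv2

def estimate_pose(frame):
    """Hypothetical stand-in for the paper's Deep Learning pose estimator."""
    return (0.0, 0.0, 0.0), 0.0  # position (x, y, z), in-plane rotation

def render_overlay(frame, position, rotation):
    """Hypothetical stand-in for projecting the patient's 3D organ model."""
    return frame

def augment_stream(source=0):
    """Read the endoscopic feed, augment each frame, and display the result."""
    cap = cv2.VideoCapture(source)  # endoscopic video feed
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        position, rotation = estimate_pose(frame)
        cv2.imshow("augmented", render_overlay(frame, position, rotation))
        if cv2.waitKey(1) == 27:  # Esc stops the stream
            break
    cap.release()
    cv2.destroyAllWindows()
```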
Methods
The framework first applies semantic segmentation to locate the organ in each frame; two techniques, one based on Convolutional Neural Networks and one on motion analysis, are then used to infer its rotation.
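To make the motion-analysis idea concrete, here is a minimal OpenCV sketch that estimates the in-plane rotation between consecutive frames from tracked feature points. It illustrates the general technique (optical flow followed by a similarity-transform fit), not the authors' exact method.

```python
# Sketch: in-plane rotation between consecutive grayscale frames,
# estimated from sparse optical flow. Not the authors' implementation.
import cv2
import numpy as np

def rotation_between(prev_gray: np.ndarray, gray: np.ndarray) -> float:
    """Estimated in-plane rotation (degrees) between two frames."""
    # Detect corner features in the previous frame.
    pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                   qualityLevel=0.01, minDistance=7)
    if pts0 is None:
        return 0.0
    # Track them into the current frame with Lucas-Kanade optical flow.
    pts1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts0, None)
    good0 = pts0[status.ravel() == 1]
    good1 = pts1[status.ravel() == 1]
    # Fit a similarity transform (rotation + scale + translation).
    M, _ = cv2.estimateAffinePartial2D(good0, good1)
    if M is None:
        return 0.0
    # The rotation angle is encoded in the 2x2 linear part of M.
    return float(np.degrees(np.arctan2(M[1, 0], M[0, 0])))
```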
Results
Segmentation achieves high accuracy, with a mean IoU score greater than 80% in all tests. Rotation-estimation performance varies with the surgical procedure.
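For reference, mean IoU is the standard overlap metric between predicted and ground-truth masks; a minimal NumPy implementation (the textbook definition, not the authors' evaluation code) is:

```python
# Standard mean Intersection-over-Union for binary segmentation masks.
import numpy as np

def iou(pred: np.ndarray, target: np.ndarray) -> float:
    """IoU between two boolean masks of the same shape."""
    intersection = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return float(intersection / union) if union > 0 else 1.0

def mean_iou(preds, targets) -> float:
    """Average IoU over (prediction, ground-truth) mask pairs."""
    return float(np.mean([iou(p, t) for p, t in zip(preds, targets)]))

# Example: masks sharing 3 of 5 'on' pixels -> IoU = 3/5 = 0.6
a = np.zeros((4, 4), bool); a[0, :4] = True
b = np.zeros((4, 4), bool); b[0, 1:4] = True; b[1, 0] = True
print(iou(a, b))  # 0.6
```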
Discussion
Although the precision of the presented methodology varies across testing scenarios, this work takes a first step toward adopting Deep Learning and Augmented Reality to generalize the automatic registration process.