Guest Bio: Hongliang Ren received his Ph.D. in Electronic Engineering (specializing in Biomedical Engineering) from The Chinese University of Hong Kong (CUHK) in 2008. His academic career has taken him through CUHK, Johns Hopkins University, Children’s Hospital Boston, Harvard Medical School, Children’s National Medical Center (United States), and the National University of Singapore (NUS). He is currently an Associate Professor in the Department of Electronic Engineering at CUHK and an Adjunct Associate Professor in the Department of Biomedical Engineering at NUS. He serves as an Associate Editor for IEEE Transactions on Automation Science & Engineering (T-ASE) and Medical & Biological Engineering & Computing (MBEC). His research interests include biorobotics, intelligent control, medical mechatronics, soft continuum robots, soft sensors, and multisensory learning in medical robotics. He is the recipient of the NUS Young Investigator Award and Engineering Young Researcher Award, the IAMBE Early Career Award 2018, the Interstellar Early Career Investigator Award 2018, the ICBHI Young Investigator Award 2019, and the Healthy Longevity Catalyst Award 2022 from NAM & RGC.
Talk Title: Surgical motion generation and motion understanding towards augmented minimally invasive robotic procedures
Abstract: This talk will highlight recent developments in dexterous robotic motion generation and motion understanding towards image-guided minimally invasive procedures. Procedure-specific telerobotic surgical systems can assist surgeons in performing dexterous manipulations through a master-slave console mechanism for bilateral motion generation and mapping with variable stiffness. Meanwhile, surgical motion understanding aims to learn from multi-domain surgical perception and describe the semantic relationships between instruments and surgical regions of interest. Automatically understanding instrument motions in robotic surgery is crucial for enhancing surgical outcomes, enabling surgical camera automation, and facilitating surgical training. To that end, going beyond tracking and segmentation, we generate task-aware saliency maps and scanpaths of the instruments that mimic the surgeon’s visual perception, prioritizing attention on selected surgical instruments. Furthermore, generating surgical reports in robot-assisted surgery, together with surgical scene understanding, can play a significant role in documentation tasks, surgical training, and post-operative analysis.