
Keynote Speakers


Prof. Bruce Hunter Thomas
University of South Australia

Biography: Professor Thomas is currently Emeritus Professor at the University of South Australia. His current research interests include user interfaces, augmented reality, virtual reality, visualisation, wearable computers, CSCW, tabletop display interfaces, and the use of cognitive psychology in virtual environments research. He has served in many roles for the IEEE International Symposium on Mixed and Augmented Reality, IEEE Virtual Reality, and the IEEE/ACM International Symposium on Wearable Computers, including program chair, general chair, and steering committee member. He also founded the ACM Interactive Surfaces and Spaces Conference (formerly IEEE Tabletop). He was awarded the ACM International Symposium on Wearable Computers (ISWC) 20-Year Impact Award. Prof. Thomas' academic qualifications include a BA in Physics from George Washington University, an MS in Computer Science from the University of Virginia, and a PhD in Computer Science from Flinders University. Prof. Thomas has over 350 publications and has been cited over 10,900 times.


Keynote Speaker - ICVARS 2023

Prof. Dinesh Manocha (ACM Fellow, IEEE Fellow, AAAI Fellow, AAAS Fellow)
University of Maryland, USA

Biography: Dinesh Manocha is the Paul Chrisman-Iribe Chair in Computer Science & ECE and a Distinguished University Professor at the University of Maryland, College Park. His research interests include virtual environments, physically-based modeling, and robotics. His group has developed a number of software packages that have become standards and are licensed to 60+ commercial vendors. He has published more than 700 papers and supervised 43 PhD dissertations. He is a Fellow of AAAI, AAAS, ACM, and IEEE, a member of the ACM SIGGRAPH Academy, and a recipient of the Bézier Award from the Solid Modeling Association. He received the Distinguished Alumni Award from IIT Delhi and the Distinguished Career in Computer Science Award from the Washington Academy of Sciences. He was a co-founder of Impulsonic, a developer of physics-based audio simulation technologies, which was acquired by Valve Inc. in November 2016. He is also a co-founder of Inception Robotics.

Title: Augmented Intelligence

Abstract: We give an overview of recent developments in next-generation human-computer interface technologies that combine AI technologies based on vision, speech, and NLP with virtual and augmented reality methods. These include novel handheld and immersive devices that use multiple sensors to capture the environment and increase the sense of realism. We will describe new methods for generating intelligent virtual avatars and demonstrate their benefits for social VR and 3D interactions. We will highlight their benefits for virtual try-on, digital virtual assistants, and computer-aided design.


Prof. Hideo Saito
Keio University, Japan

Biography: Hideo Saito received a Ph.D. degree in electrical engineering from Keio University, Japan, in 1992. Since 1992, he has been on the Faculty of Science and Technology, Keio University. From 1997 to 1999, he joined the Virtualized Reality Project at the Robotics Institute, Carnegie Mellon University, as a Visiting Researcher. Since 2006, he has been a Full Professor at Keio University, and since 2020 he has been the chair professor of the Department of Information and Computer Science at Keio University. His research interests include computer vision and pattern recognition, and their applications to augmented reality, virtual reality, and human-robot interaction. His recent activities in academic conferences include serving as Program Chair of ACCV 2014, General Chair of ISMAR 2015, Program Chair of ISMAR 2016, and Scientific Program Chair of EuroVR 2020. He is a fellow of the IEICE (Institute of Electronics, Information and Communication Engineers) and the VRSJ (Virtual Reality Society of Japan).

Speech Title: Image-Based Rendering by Deep-Learning-Based Computer Vision

Abstract: Ten years have passed since deep learning was shown to be capable of dramatically improving image recognition performance. In the past decade, various studies have shown that deep learning can innovatively improve conventional image recognition and sensing. Image-Based Rendering (IBR), the computational synthesis of images that cannot be captured with generic cameras, has also made innovative progress with deep-learning-based computer vision. In this talk, I will discuss recent IBR technologies driven by such deep-learning algorithms for visualizing scenes that cannot be captured by ordinary cameras. Deep learning with a huge amount of training data makes possible what was impossible with classical techniques. I will demonstrate this by showing deep-learning-enhanced estimation of geometric information, such as camera poses, human poses, and 3D structures of environments. I will then show examples of how deep learning can contribute to practical applications of IBR, including AR/VR visualizations. Finally, I will give my expectations for the future of the marriage of deep learning and computer vision technologies.
