A/Prof Manoranjan Paul
Charles Sturt University, Australia
Biography: Manoranjan Paul received the B.Sc.Eng. (Hons.) degree in Computer Science and Engineering from Bangladesh University of Engineering and Technology (BUET) in 1997 and the PhD degree from Monash University, Australia, in 2005. He was a Post-Doctoral Research Fellow at the University of New South Wales from 2005 to 2006, Monash University from 2006 to 2009, and Nanyang Technological University from 2009 to 2011. He is currently an Associate Professor and Director of E-Health Research at Charles Sturt University (CSU). His major research interests are in the field of Data Science, including Data Compression, Video Technology, Computer Vision, E-Health, Hyperspectral Imaging, Medical Imaging, and Medical Signal Processing. He has published more than 140 refereed publications. He was an invited keynote speaker at DICTA 2017 & 2013, CWCN 2017, IEEE WoWMoM 2014, and IEEE ICCIT 2010. A/Prof Paul is a Senior Member of the IEEE and the ACS (Australian Computer Society). He has served as a guest editor of five issues of the Journal of Multimedia and the Journal of Computers. He is currently an Associate Editor of the EURASIP Journal on Advances in Signal Processing. He was Program Chair of PSIVT 2017 and Publicity Chair of IEEE DICTA 2016. He was a finalist for ICT Researcher of the Year 2017, selected by the Australian Computer Society. He received the Research Excellence Supervision Award 2015 and the Research Excellence Award 2013 at Faculty level, CSU, and Research Excellence Awards in 2017 and 2011 at School level. He has obtained more than $15M in competitive grant funding, including the prestigious Australian Research Council (ARC) Discovery Project Grant and Cybersecurity CRC.
Title: Video data compression, processing and evaluation with human feedback in the loop
Abstract: Video has the highest capability to engage human beings compared to any other medium; it is processed by the brain 60,000 times faster than text. CISCO predicts that 80% of global Internet consumption will be video content by 2019 and that 75% of mobile traffic will be video by 2020. These figures underline the importance of video data in our daily lives. To benefit from video data, we face a number of challenges: (i) how we tackle the huge volume of video data, as some applications require ultra-compression for transmission; (ii) how we extract important information from the video data; and (iii) how we evaluate the quality of the compressed/extracted end products. Coding, computer vision, and quality experts have carried out research in these areas mostly in isolation. For example, computer vision researchers have mainly concentrated on various analysis techniques without giving much attention to the quality of the videos, which are used directly from video capturing devices in raw formats. Video coding researchers, on the other hand, have concentrated on extreme compression in the face of limited communication bandwidth without caring much that end users receive somewhat distorted lossy images. Quality researchers have concentrated on evaluating the end product of videos without providing enough human-perceived feedback for video processing/understanding. For high performance we need to integrate knowledge from computer vision, coding, and human-computer interaction. This need arises due to the recent expansion of new technologies such as augmented/virtual/mixed reality, CCTV cameras, mobile devices, wireless communication, IoT, eye tracking technology, devices for capturing brain signals (e.g. EEG), YouTube, etc. In this talk I would like to highlight our recent contributions to address the above-mentioned challenges.
More specifically, the contributions are in the areas of vision-aided video coding, virtual view synthesis, multi/free viewpoint video, video summarization with eye tracking & EEG, and quality assessment with eye tracking technology.