Géraldine Morin | University of Toulouse, France
Abstract: 3D content is fast becoming a commonplace medium. New technologies such as 3D capture devices, head-mounted displays, and 3D printers, as well as numerous applications ranging from e-commerce to virtual visits, are fostering the use of 3D models. Starting from classical geometric representations limited to dedicated contexts (CAD/CAM or game development), adapted models have to be devised to meet the constraints imposed by such different application contexts and technologies.
The goal of this talk is to give an overview of the different geometric representations (discrete or continuous, with and without topology) and their advantages and drawbacks. We present existing adaptations of these models for creating, manipulating, and sharing 3D content, and their adaptation to specific multimedia applications. In particular, we review 3D compression, 3D model analysis for creation and editing based on both geometry and user interactions, and the different 3D characteristics used for indexing 3D models.
Bio: Géraldine Morin has been an Associate Professor at the University of Toulouse and a member of IRIT (Institut de Recherche en Informatique de Toulouse) since 2002. She received her Ph.D. in Computer Science (geometric modeling) from Rice University, Houston, USA, and was a postdoc at the Free University in Berlin, Germany. In January 2014, she defended her habilitation on the use of 3D models as multimedia content. In addition to her recognized work on compact and progressive representation and streaming of 3D plant models (best paper, ACM MM 2008), Géraldine Morin has worked on image-based representations. More recently, she has worked on identifying similarities within 3D models and on indexing for 3D applications, and has participated in work on analyzing user interactions to adapt 3D content. She is currently interested in using skeletons for shape analysis and image-based reconstruction. Géraldine Morin has served as a member of the SIAM Geometric Design Group office, is an associate member of the French-Singaporean lab IPAL, and is co-head of the French Geometric Modeling Group; she has served on the program committees of several national and international conferences. She has published 14 papers in international journals and 36 in international conferences and workshops, has co-advised 7 graduated Ph.D. students, and is currently directing two.
Niall Murray | Athlone Institute of Technology, Ireland
Abstract: Immersive multimedia experiences have the potential to engage users perceptually, cognitively, and emotionally, and there is significant interest in them owing to their applicability across numerous domains (film, entertainment, health, education, training, tourism, manufacturing). Important findings from psychology and neuroscience research, increased computational power, and advances in sensor and display technologies are bringing truly immersive multimedia experiences closer to reality. Drawing on the various immersive, multimodal, and mixed-reality applications we have worked on, this talk will present an overview of our research towards understanding the factors that make experiences immersive. It will focus on the user as the key stakeholder in the quality evaluation process, and will conclude by highlighting potential benefits, current trends, and future research directions.
Bio: Dr. Niall Murray (www.niallmurray.info) is a permanent Lecturer and researcher with the Faculty of Engineering and Informatics in the Athlone Institute of Technology (AIT), Ireland. He received his BE (Electronic and Computer Engineering) from National University of Ireland, Galway (2003), MEng (Computer and Communication Systems) from the University of Limerick (2004) and PhD from the Software Research Institute (SRI) in the Athlone IT in 2014.
Since 2004, he has worked in R&D roles across a number of industries: telecommunications, finance, health, and education. In 2014 he founded the Truly Immersive and Interactive Multimedia Experiences lab (TIIMEx). His research interests include Immersive Multimedia Communication, Multisensory Multimedia, Quality of Experience, and Multimedia Synchronization. In this context, TIIMEx builds end-to-end communication systems and novel immersive and interactive applications, and evaluates them from a user-perceived quality perspective.
Ketan Mayer-Patel | University of North Carolina at Chapel Hill, NC, USA
Abstract: There is a gap to be sparked between the fields of multimedia systems and computer vision: a vision-driven interface for rate-controlled compressed video. Advances in vision have made real-world, real-time vision-based applications a reality. As these real-world applications are realized, there is now an imperative to integrate and negotiate systems-level tradeoffs with the underlying vision algorithms. In this talk, I'll motivate and propose a rate-controlled interface extension for OpenCV to compressed video sources. I'll discuss a possible compressed-domain implementation that is adaptable to a wide array of existing video standards. Finally, I'll speculate about the potential impact of such an interface on future directions of research in vision-based multimedia systems.
Bio: Ketan Mayer-Patel is an associate professor in the Department of Computer Science at the University of North Carolina at Chapel Hill. He received his Ph.D. in Computer Science in 1999 from the University of California at Berkeley. He was a recipient of an NSF CAREER award and is the current chair of the MMSys executive committee. His general research interest is in multimedia systems. Currently, he is investigating vision-based rate control, scalable display interfaces, and distributed archival video encoding.
Ragnhild Eg | Westerdals Oslo School of Arts, Norway
Abstract: Computers make humans wait. When interacting with a computer system, no response is instant; every keystroke or mouse movement must be processed before the output is rendered and presented. If signals have to travel across a network, the waiting time, or latency, can extend into the perceptible and further into the detrimental. In addressing perceptible delays between motor inputs and visual outputs, different communities use different jargon, yet they share a common interest in the perceptual and cognitive consequences of lagging responses. This talk will give an overview of current insights into temporal human-computer interactions, presenting findings from HCI, multimedia, and psychological research.
Bio: Ragnhild Eg is an associate professor at Westerdals Oslo School of Arts, Communication and Technology, where she combines her background and interest in perceptual psychology with digital marketing. She completed a PhD in psychology at the University of Oslo while working at Simula Research Laboratory. At Simula, she was part of a multi-disciplinary project focused on the human perception of multimedia. Her current projects relate to temporal human-computer interactions and the impact of delayed responses on performance. Ragnhild's research interests concern the perceptual processing and integration of sensory information, particularly how the perceptual process is affected by the constraints imposed by technology.
Context-aware, perception-guided workload characterization and resource scheduling on mobile phones for interactive applications
Chung-Ta King | National Tsing Hua University, Taiwan
Chun-Han Lin | National Taiwan Normal University, Taiwan
Abstract: Mobile phones are indispensable in our daily lives; we conduct many of our daily activities through them. Mobile phones have many interesting characteristics: they are highly interactive and personalized, very rich in displayed content, constrained in power and heat dissipation, and yet required to deliver very high quality of service. Although new modes of user interaction are emerging, mobile phones today interact with their users primarily through the display. Users' Quality of Experience (QoE) is thus affected in large part by the quality at which the display can present content. A major challenge for mobile phone developers is to provide an increasingly rich QoE through the display while reducing power consumption.
A general strategy to address this challenge is to schedule the resources of the phone (e.g., CPU, GPU, and display) to deliver a display quality that satisfies the user's QoE just enough. The question is how to determine the display quality acceptable to users, which can then be used to guide resource scheduling. A suitable metric is difficult to obtain because acceptability is subjective, personalized, and context-dependent. Clearly, there is a need to understand the relationship between user contexts, user-perceived display quality, interactive workload characteristics, and system resource utilization.
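The "just enough" scheduling idea above can be sketched as a simple selection over a phone's operating points: among all settings whose predicted quality meets the user's acceptability threshold, pick the one with the lowest power cost. This toy example is purely illustrative; the operating points, power costs, and quality scores are hypothetical and not from the talk.

```python
# Hypothetical "just enough" QoE-driven scheduling sketch.
# Candidate operating points: (label, relative power cost, predicted quality 0-1).
OPERATING_POINTS = [
    ("low",    1.0, 0.60),
    ("medium", 1.6, 0.80),
    ("high",   2.5, 0.95),
]

def just_enough(points, qoe_threshold):
    """Return the cheapest operating point whose predicted quality meets
    the user's acceptability threshold; fall back to best quality."""
    feasible = [p for p in points if p[2] >= qoe_threshold]
    if feasible:
        return min(feasible, key=lambda p: p[1])  # cheapest acceptable setting
    return max(points, key=lambda p: p[2])        # best effort otherwise

# A demanding context (e.g., gaming) would map to a higher threshold
# than idle browsing, so the chosen operating point is context-dependent.
print(just_enough(OPERATING_POINTS, 0.75)[0])  # -> medium
print(just_enough(OPERATING_POINTS, 0.99)[0])  # -> high (best effort)
```

The hard part, as the abstract notes, is that the threshold itself is subjective and context-dependent, which is why it must be learned from real-user experiments rather than fixed by the developer.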
In this overview talk, we first examine how mobile phones display contents on the screen and review common resource scheduling strategies that consider user QoE. We then discuss how to conduct experiments on, and collect data from, real users to understand the effects of contexts on interactive workloads and user-perceived display quality. Finally, we give an overview of context-aware display design for mobile phones, considering in particular the user context, user experience, network effects, and power consumption.
Bio: Chung-Ta King is a professor in the Department of Computer Science and the director of the Computer and Communication Center at National Tsing Hua University. He received his Ph.D. from the Department of Computer Science, Michigan State University, in 1988. His research interests include parallel and distributed processing, embedded systems, and computer architecture. He has organized and served in several major international conferences, including IEEE Cluster 2017, IEEE SC2 2017, IEEE ICPADS 2014, IEEE CloudCom 2013, and ESWeek 2011.
Chun-Han Lin is an assistant professor in the Department of Computer Science and Information Engineering at National Taiwan Normal University. He received his Ph.D. from the Department of Computer Science at National Tsing Hua University in 2010. His research interests include embedded systems and sensor networks.