Sam Ding | VMFive CEO
Abstract: This talk presents the technologies behind app streaming, including the mobile virtual machine, hybrid cloud infrastructure, and adaptive streaming. Several real-world cases will be shared to illustrate the pros and cons of app streaming. Finally, we will consider where app streaming is heading.
Bio: Sam Ding is the founder and CEO of VMFive, working on mobile virtualization, app streaming, and programmatic advertising technology. While pursuing his Ph.D. at NTHU, he led three advanced simulation and virtualization projects: PQEMU, ARMvisor, and HSAemu. In 2011, PQEMU introduced two parallelization models to accelerate the simulation speed of QEMU, and the PQEMU publication won the best paper award at ICPADS 2011. In 2012, ARMvisor ran multiple embedded virtual machines on an ARM-based mobile platform. In 2014, HSAemu became the first system-level emulator for the HSA platform, capable of co-simulating CPU and GPU to run unmodified OpenCL programs. Since 2014, he has founded several startups to drive innovation in mobile and cloud technology.
The program overview of MMSys 2017 is online. Please check out the Program page.
Dr. Shuen-Huei Guan | Technical Director of KKStream
Abstract: Video streaming (or, more broadly, OTT, over-the-top content) has become a hot topic as giant players, including YouTube, Apple, Twitch, Netflix, Hulu, Amazon, and even Facebook, have entered the field. Netflix, among them, leads in business, paradigm shift (cord cutting), content production, and even streaming technology. In this casual talk, I'd like to bring out some technical and operational issues and to-dos from a streaming service provider's viewpoint. Let's see what we can discover from them to cook up new research topics, or even startup ideas, with practical needs.
Bio: Before joining the big trend of video streaming at KKBOX (and then KKStream), Drake served as an R&D manager and technology pioneer for nine years at Digimax, an animation studio. He was fortunate to participate in animation projects with partners such as the National Palace Museum (故宮博物院), the NASA Jet Propulsion Laboratory (JPL), U-Theatre (優人神鼓), and even Pixar's RenderMan team. He started as a computer graphics researcher (SIGGRAPH maniac), stereoscopic vision lover, and casual movie fan, and then became a video streaming pioneer in Taiwan. He believes people come before technology, even though his background is in STEM.
The MMSys 2017 registration page is open now! Authors, please be sure to obtain a FULL registration for each paper to be presented at MMSys'17 and its co-located workshops by April 30. Please also note that the early-bird registration deadline is April 30. You are strongly encouraged to plan your trip to MMSys early, and feel free to contact us with any questions!
Wesley Kuo | Founder of i@solution & Ubitus
Abstract: This talk shares Ubitus' experience with cloud game streaming across multiple platforms and its broader business reach, provides global market trends and application examples, and then discusses the problems to overcome and how to seize new business opportunities, including the VR, 4K, and console markets.
- 2000: Founded i@solution Inc., which captured a 40% share of the Java phone market with a world-leading J2ME technology solution.
- 2004: Sold it to Aplix for NTD 2.4 billion.
- 2007: Founded Ubitus Inc., focusing on streaming technology.
Ubitus now owns the world-leading N-screen cloud gaming platform and is building up a new advertising business: C2P and VR/AR ad solutions. It has a proven record with Google, NVIDIA, Alibaba, Samsung, LG, NTT docomo, etc., and employees across Taiwan, China, the USA, Japan, and Korea. NTT docomo and Samsung are its strategic investors.
The first keynote for ACM MMSys 2017 is confirmed and will be delivered by Prof. Henry Fuchs, the Federico Gil Distinguished Professor of Computer Science and Adjunct Professor of Biomedical Engineering at UNC Chapel Hill.
The AR/VR Renaissance: opportunities, pitfalls, and remaining problems | Prof. Henry Fuchs | University of North Carolina at Chapel Hill, United States
Abstract: Augmented and virtual reality are hailed today as “the next big thing,” the next personal computing platform, logical successors to the previous three generations of PCs, laptops, and mobile. Others worry that today’s AR and VR systems are not yet sufficiently advanced for mass adoption, that they are more like the 1990s Apple Newton than the 2007 Apple iPhone — exciting proofs of concept, but not yet useful nor cost-effective for most consumers. This talk will review the historical development of AR and VR technologies, and survey some representative current work, sample applications, and remaining problems. Current work with encouraging results includes 3D scene capture and 3D reconstruction of dynamic, populated spaces; compact and wide field-of-view AR displays; low-latency and high-dynamic-range AR display systems; and near-eye lightfield displays that may reduce the vergence-accommodation conflicts that plague current AR and VR display designs.
Bio: Henry Fuchs (PhD, Utah, 1975) is the Federico Gil Distinguished Professor of Computer Science and Adjunct Professor of Biomedical Engineering at UNC Chapel Hill, coauthor of over 200 papers, mostly on rendering algorithms (BSP Trees), graphics hardware (Pixel-Planes), head-mounted / near-eye and large-format displays, virtual and augmented reality, telepresence, medical and training applications. He is a member of the National Academy of Engineering, a fellow of the American Academy of Arts and Sciences, recipient of the 2013 IEEE VGTC Virtual Reality Career Award, and the 2015 ACM SIGGRAPH Steven Anson Coons Award.
Ragnhild Eg | Westerdals Oslo School of Arts, Norway
Abstract: Computers make humans wait. When interacting with a computer system, no response is instant; every key stroke or mouse movement must be processed before the output is rendered and presented. If signals have to travel across a network, the waiting time, or latency, can extend into the perceptible and further into the detrimental. In addressing perceptible delays between motor inputs and visual outputs, different communities use different jargons. Yet they share a common interest in the perceptual and cognitive consequences of lagging responses. This talk will give an overview of current insights about temporal human-computer interactions, presenting findings from HCI, multimedia and psychological research.
Bio: Ragnhild Eg is an associate professor at Westerdals Oslo School of Arts, Communication and Technology, where she combines her background and interest in perceptual psychology with digital marketing. She completed a PhD in psychology at the University of Oslo while working at Simula Research Laboratory. At Simula, she was part of a multi-disciplinary project that focused on the human perception of multimedia. Her current projects relate to temporal human-computer interactions and the impact of delayed responses on performance. Ragnhild's research interests concern the perceptual processing and integration of sensory information, particularly how the perceptual process is affected by the constraints imposed by technology.
ACM MMSys 2017 will also feature a series of industrial talks, one of which will be given by Mr. Raymond Pao from HTC.
Virtual Reality: The New Era of Future World | WeiGing Ngang | Relationship Manager, HTC VIVE
Abstract: Virtual Reality (VR) has been one of the hottest topics since 2016. It is starting to revolutionize education, entertainment, gaming, design, retail, consumption patterns, social experiences, and more. Through this talk, the audience will get to know more about VR and step into the charming, unlimited VR world of imagination.
Bio: Weiging is a Relationship Manager at HTC Vive. He is responsible for VR ecosystem enablement in Asia Pacific, developer partnership management, and exploring potential business opportunities. Prior to joining the VR team, he gained rich experience in big data analysis and software architecture review for mobile devices.
Ketan Mayer-Patel | University of North Carolina at Chapel Hill, NC, USA
Abstract: There is a gap to be bridged between the fields of multimedia systems and computer vision. That gap is a vision-driven interface for rate-controlled compressed video. Advances in vision have made real-world, real-time vision-based applications a reality. In doing so, there is now an imperative to integrate and negotiate systems-level tradeoffs with these vision algorithms as these real-world applications are realized. In this talk, I’ll motivate and propose a rate-controlled interface extension for OpenCV to compressed video sources. I’ll discuss a possible compressed-domain implementation that is adaptable to a wide array of existing video standards. Finally, I’ll speculate about the potential impact of such an interface on future directions of research in vision-based multimedia systems.
Bio: Ketan Mayer-Patel is an associate professor in the Department of Computer Science at the University of North Carolina at Chapel Hill. He received his Ph.D. in Computer Science in 1999 from the University of California at Berkeley. He was a recipient of an NSF CAREER award and is the current chair of the MMSys executive committee. His general research interest is in multimedia systems. Currently, he is investigating vision-based rate control, scalable display interfaces, and distributed archival video encoding.