- Luntian Mou, Beijing University of Technology, Beijing, China
- Feng Gao, Peking University, Beijing, China
- Zijin Li, China Conservatory of Music, Beijing, China
- Jiaying Liu, Peking University, Beijing, China
- Wen-Huang Cheng, National Chiao Tung University, Taiwan
- Ling Fan, Tezign.com; Tongji University Design Artificial Intelligence Lab, Shanghai, China
Artificial Intelligence (AI) has already fueled many academic fields as well as industries. In the area of art creation, AI has demonstrated its great potential and gained increasing popularity. People are greatly impressed by AI painting, composing, writing, and designing, and the emerging technology of the metaverse provides even more opportunities for AI art. AI has not only exhibited a certain degree of creativity, but has also helped to uncover the principles and mechanisms of creativity and imagination from the perspectives of neuroscience, cognitive science, and psychology.
This is the 4th AIART workshop, held in conjunction with ICME 2022 in Taipei. It aims to bring forward cutting-edge technologies and the most recent advances in AI art, in terms of the enabling creation, analysis, understanding, and rendering technologies. The theme topic of AIART'22 will be affective computing for AI art, and we plan to invite 5 keynote speakers to present their insightful perspectives on AI art.
- Giuseppe Valenzise, CNRS, France
- Federica Battisti, University of Padova, Italy
- Homer Chen, National Taiwan University, Taiwan
- Søren Forchhammer, Technical University of Denmark, Denmark
- Mylène Farias, University of Brasilia, Brazil
The aim of hyper-realistic media is to faithfully represent the physical world. The ultimate goal is to create an experience that is perceptually indistinguishable from a real scene. Traditional technologies can capture only a fraction of the audio-visual information, limiting the realism of the experience. Recent innovations in computing and audio-visual technology have made it possible to circumvent these bottlenecks in audio-visual systems. As a result, new multimedia signal processing areas have emerged, such as light fields, point clouds, ultra-high definition, high frame rate, high dynamic range imaging, and novel 3D audio and sound field technologies. Novel combinations of these technologies can facilitate a hyper-realistic media experience. Without a doubt, this will be the future frontier for new multimedia systems. However, several technological barriers and challenges need to be overcome to develop perceptually optimal solutions.
This third ICME workshop on Hyper-Realistic Multimedia for Enhanced Quality of Experience aims at bringing forward recent advances related to capturing, processing, and rendering technologies. The goal is to gather researchers with diverse and interdisciplinary backgrounds to cover the full multimedia signal chain, to efficiently develop truly perceptually enhanced multimedia systems.
- Wu Liu, JD Explore Academy, Beijing, China
- Hao Su, University of California San Diego, USA
- Olga Diamanti, Institute for Geometry at TU Graz, Austria
- Yang Cong, Shenyang Institute of Automation, Chinese Academy of Sciences, China
- Tao Mei, JD Explore Academy, Beijing, China
Today, ubiquitous multimedia sensors and large-scale computing infrastructures are producing 3D multi-modality data at a rapid velocity, such as 3D point clouds acquired with LiDAR sensors, RGB-D videos recorded by Kinect cameras, meshes of varying topology, and volumetric data. 3D multimedia combines different content forms, such as text, audio, images, and video, with 3D information, and can perceive the world better because the real world is three-dimensional rather than two-dimensional. For example, a robot can manipulate objects successfully by recognizing an object via RGB frames and perceiving its size via point clouds. Researchers have strived to push the limits of 3D multimedia search and generation in various applications, such as autonomous driving, robotic visual navigation, smart industrial manufacturing, logistics distribution, and logistics picking. 3D multimedia (e.g., videos and point clouds) can also help agents grasp, move, and place packages automatically in logistics picking systems.
Therefore, 3D multimedia analytics is one of the fundamental problems in multimedia understanding. Different from 3D vision, 3D multimedia analytics mainly studies how to fuse 3D content with other media. It is a very challenging problem that involves multiple tasks, such as human 3D mesh recovery and analysis, 3D shape and scene generation from real-world data, 3D virtual talking heads, 3D multimedia classification and retrieval, 3D semantic segmentation, 3D object detection and tracking, 3D multimedia scene understanding, and so on. Therefore, the purpose of this workshop is to:
- bring together the state-of-the-art research on 3D multimedia analysis;
- call for a coordinated effort to understand the opportunities and challenges emerging in 3D multimedia analysis;
- identify key tasks and evaluate the state-of-the-art methods;
- showcase innovative methodologies and ideas;
- introduce interesting real-world 3D multimedia analysis systems or applications;
- propose new real-world or simulated datasets and discuss future directions.
We solicit original contributions in all fields of 3D multimedia analysis that exploit multi-modality data to generate strong 3D data representations.
- Huang-Chia Shih, Yuan Ze University, Taiwan
- Takahiro Ogawa, Hokkaido University, Japan
- Rainer Lienhart, Augsburg University, Germany
- Jenq-Neng Hwang, University of Washington, USA
- Thomas B. Moeslund, Aalborg University, Denmark
Sports data analytics has attracted much attention nowadays and has promoted the sports industry. Coaches and teams are constantly searching for competitive sports data analytics that utilizes AI and computer vision techniques to understand the deeper and hidden semantics of sports. By studying detailed statistics, coaches can assess defensive athletic performance and develop improved strategies. Data-driven machine learning techniques have played an important role in developing and improving sports in recent years. Many approaches have been proposed to extract semantic concepts or abstract attributes, such as objects (athletes and rackets), events, scene types, and captions, from sports videos. The spatiotemporal content and sensor data from sports matches in online and offline scenarios can be analyzed, filtered, and visualized via multi-modal fusion mechanisms. Specific details and strategies can be extracted from the data to help coaches and players see the whole picture with clarity, and coaches and athletes can utilize these data to make better decisions for developing their teams. Nowadays, popular sports like football and basketball fuel the drive for technological advances in AI and machine learning.
The goal of this workshop is to advance the field of research on the techniques of AI for sports data, develop more techniques to accurately evaluate and organize the data, and further strengthen the synergy between sports and science.
- Weiyao Lin, Shanghai Jiao Tong University, China
- John See, Heriot-Watt University (Malaysia Campus), Malaysia
- Xiatian Zhu, Samsung AI Centre, Cambridge, UK
With the rapid growth of video surveillance applications and services, the amount of surveillance video has become extremely "big", which makes human monitoring tedious and difficult. Therefore, there is a huge demand for smart surveillance techniques that can perform monitoring in an automatic or semi-automatic way. Firstly, with the huge amount of surveillance video in storage, video analysis tasks such as event detection, action recognition, and video summarization are of increasing importance in applications including events-of-interest retrieval and abnormality detection. Secondly, with the fast increase of semantic data (e.g., objects' trajectories and bounding boxes) extracted by video analysis techniques, semantic data have become an essential data type in surveillance systems, introducing new challenging topics to the community, such as efficient semantic data processing and semantic data compression. Thirdly, with the rapid shift from static, centralized processing to dynamic, collaborative computing and processing among distributed video processing nodes or cameras, new challenges such as multi-camera joint analysis, human re-identification, and distributed video processing are being placed in front of us. These challenges require extending existing approaches or exploring new feasible techniques.
This workshop is intended to provide a forum for researchers and engineers to present their latest innovations and share their experiences on all aspects of design and implementation of new surveillance video analysis and processing techniques.
Authors should prepare their manuscript according to the Guide for Authors of ICME available at Author Information and Submission Instructions.
Workshop paper submission deadline:
~~March 12, 2022~~ March 21 – April 9, 2022 (workshop dependent)
| Workshop | Corresponding Paper Submission Deadline |
| --- | --- |
| AIART'22 | April 9, 2022 |
| HRMEQE | March 26, 2022 |
| 3DMM | March 20, 2022 |
| AI-Sports | April 11, 2022 |
| BIG-Surv | March 26, 2022 |