Affect and Sentiment in Multimedia (ASM) - an ACM MM'15 workshop

30 October 2015, Brisbane, Australia

In recent years, there has been a dramatic proliferation of research on multimedia retrieval and indexing based on highly subjective concepts such as emotion, preference and aesthetics. These retrieval methods are considered human-centered and intuitive, going beyond the conventional keyword- or object-based retrieval paradigm. The problem is also considered challenging because it requires a multidisciplinary understanding of human behavior and perception, as well as the integration of different modalities (music, image, video, text) for better performance. Meanwhile, the rise of social media such as Twitter, YouTube and SoundCloud has opened new opportunities to better understand the role of affect and sentiment in people's interaction with multimedia content. From such user-contributed data one can study the interrelationship between users (e.g. user affect, user sentiment, personality, and demographics) and multimedia (e.g. affective and semantic content), and model and predict user behavior (e.g. preference, search intent, ad liking, purchase behavior).

This workshop aims to provide a forum for the presentation of state-of-the-art research results in this emerging field and to address the growing interest in affective analysis in multimedia, including content-based affective understanding of music, video and text, sentiment analysis in multimedia, and affect-based retrieval and recommendation.

The call for papers is available here.

**Submission deadline is extended to 13 July, 11:59:59 PM PST**

**Keynote slides are available under the Keynotes page**

Scope

The workshop topics include, but are not limited to:

- Affective/emotional content analysis of music, images and videos

- Sentiment analysis in multimedia

- Multimedia mid-level affective attributes

- User behavior understanding from social media

- Affect in content retrieval and recommendation

- Image and video summarization based on affect

- Affective benchmarking development

- Multimodal integration for affective content understanding

- User affective comment prediction

- Affect + X applications

Organizers

Mohammad Soleymani, University of Geneva, Switzerland

Yi-Hsuan (Eric) Yang, Academia Sinica, Taiwan

Yu-Gang Jiang, Fudan University, China

Shih-Fu Chang, Columbia University, USA

Program

The technical program is available here.

Invited keynote talks

Keynote 1: The Sentiment is in the Details: Using the Many Available Theories to Organize that Thing Called Emotion

Nicole Nelson, University of Queensland, Australia

Slides are available here.

Keynote 2: Blending Users, Content, and Emotions for Movie Recommendations

Shlomo Berkovsky, CSIRO, Australia

Slides are available here.

Abstract: Recommender systems were initially deployed in eCommerce applications, but they are used nowadays in a broad range of domains and services. They alleviate online information overload by highlighting items of potential interest and helping users make informed choices. Many prior works in recommender systems focussed on the movie recommendation task, primarily due to the availability of several movie rating datasets. However, all these works considered two main input signals: ratings assigned by users and movie content information (genres, actors, directors, etc). We argue that in order to generate high-quality recommendations, recommender systems should possess much richer user information. For example, consider a 3-star rating assigned to a 2-hour movie. It is evidently a mediocre rating, meaning that the user liked some features of the movie and disliked others. However, a single rating does not allow us to identify the liked and disliked features. In this talk we discuss the use of emotions as an additional source of rich user modelling data. We argue that user emotions elicited over the course of watching a movie mirror user responses to the movie content and the emotional triggers planted in there. This implicit user modelling can be seen as a virtual annotation of the movie timeline with the emotional user feedback. If captured and mined properly, this emotion-annotated movie timeline can be superior to the one-off ratings and feature preference scores gathered by traditional user modelling methods. We will discuss several open challenges regarding the use of emotion-based user modelling in movie recommendations. How to capture user emotions in an unobtrusive manner? How to accurately interpret the captured emotions in the context of the movie content? How to integrate the derived user modelling data into the recommendation process? Finally, how can this data be leveraged for other types of content, domains, or personalisation tasks?

Bio: Shlomo Berkovsky is a Senior Researcher at the Digital Productivity Flagship, CSIRO. Shlomo received his PhD (summa cum laude) from the University of Haifa, where his research focused on the mediation of user models in recommender systems. At CSIRO he was the research leader of the Personalised Information Delivery team and worked on a project focusing on personalised eHealth applications. Shlomo's broad research interests include user modelling, Web personalisation, and recommender systems. Specifically, he is interested in collaborative and content-based recommenders, personalised persuasion, privacy-enhanced personalisation, ubiquitous user modelling, personalisation on the Social Web, and context-aware recommender systems. Shlomo is the author of more than 90 refereed papers published in journals, books, and conference proceedings. His work won the Best Paper Award of the AH conference and 3 iAward prizes. Shlomo has presented 5 keynotes and 10 tutorials on personalisation and recommender systems, including at WWW, IJCAI, and KDD. He has served on the organising committees of 10 conferences and 12 workshops.

PC Members


Anna Aljanaki, Utrecht University, the Netherlands
Cheng-Te Li, Academia Sinica, Taiwan
Deshun Yang, Peking University, China
Eduardo Coutinho, Imperial College London, UK
Eva Zangerle, University of Innsbruck, Austria
Gareth Jones, Dublin City University, Ireland
Guillaume Chanel, Swiss Center of Affective Sciences, Switzerland
Jens Madsen, Technical University of Denmark, Denmark
Jia-Ching Wang, National Central University, Taiwan
Ju-Chiang Wang, Cisco, USA
Marko Tkalcic, Johannes Kepler University, Austria
Martha Larson, Delft University of Technology, the Netherlands
Matevz Pesek, University of Ljubljana, Slovenia
Mathieu Barthet, Queen Mary University of London, UK
Ming-Feng Tsai, National Chengchi University, Taiwan
Olivier Lartillot, Aalborg University, Denmark
Renato Panda, University of Coimbra, Portugal
Rongrong Ji, Xiamen University, China
Szu-Yu Chou, Academia Sinica, Taiwan
Tak-Shing Chan, Academia Sinica, Taiwan
Tanaya Guha, University of Southern California, USA
Yan-Ying Chen, FX PAL, USA
Yanwei Fu, Disney Research, USA
Yoann Baveye, Technicolor & Ecole Centrale de Lyon, France
Yupeng Gu, Indiana University Bloomington, USA
Zuxuan Wu, Fudan University, China

Important dates


CFP issued:

15 March 2015

Paper submission:

~~24 June~~ Extended to 13 July 2015 (11:59:59 PM PST)

Notification of acceptance:

~~28 July~~ ~~4 August~~ 7 August 2015

Camera ready deadline:

15 August 2015

Submission instructions:

Please submit your original work with a maximum length of 6 pages. Each submission will receive at least three reviews by expert reviewers. Reviews will be double-blind; therefore, authors must conceal their identity (no author names, no affiliations, no acknowledgment of sponsors, no direct references to previous work). Please consult the ACM MM 2015 website for the correct templates. Submissions must be in PDF format.

Please submit your paper here.

Please prepare your oral presentation for 17 minutes; presentation slots, including questions, are 20 minutes long.
