
Multimodal Analysis of User-Generated Multimedia Content

  • Book
  • © 2017

Overview

  • Detailed summary of multimodal analysis of user-generated multimedia content literature
  • Proposed frameworks for several significant multimedia systems based on user-generated content
  • Leveraging multimodal information in solving several significant multimedia analytics problems

Part of the book series: Socio-Affective Computing (SAC, volume 6)

About this book

This book presents a summary of the multimodal analysis of user-generated multimedia content (UGC). Several multimedia systems and their proposed frameworks are also discussed. First, improved tag recommendation and ranking systems for social media photos, leveraging both content and contextual information, are presented. Next, we discuss the challenges in deriving semantics and sentics information from UGC to produce multimedia summaries. Subsequently, we present a personalized music video generation system for outdoor user-generated videos. Finally, we discuss approaches to multimodal lecture video segmentation. The book also explores the extension of these multimedia systems to heterogeneous continuous streams.
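To give a flavour of what "leveraging both content and contextual information" can mean for tag recommendation, the following is a minimal, illustrative sketch (not the book's actual method): a late fusion that combines a hypothetical content-based score (e.g., visual classifier confidence) with a hypothetical context-based score (e.g., tags suggested by geotag or time metadata) using an assumed fusion weight alpha.

```python
# Illustrative late-fusion tag ranking sketch. All function names, the data
# layout, and the fusion weight `alpha` are hypothetical placeholders, not
# the frameworks proposed in the book.

def content_score(photo_features, tag):
    """Score a candidate tag from visual content (e.g., classifier confidence)."""
    return photo_features.get(tag, 0.0)

def context_score(photo_metadata, tag):
    """Score a candidate tag from contextual metadata (e.g., geotag-derived tags)."""
    return 1.0 if tag in photo_metadata.get("nearby_tags", []) else 0.0

def rank_tags(photo_features, photo_metadata, candidate_tags, alpha=0.6):
    """Fuse the two modalities with a weighted sum and rank candidate tags."""
    scored = {
        tag: alpha * content_score(photo_features, tag)
             + (1 - alpha) * context_score(photo_metadata, tag)
        for tag in candidate_tags
    }
    return sorted(scored, key=scored.get, reverse=True)

# Example: visual classifier scores plus tags suggested by the photo's location
features = {"beach": 0.8, "sunset": 0.6, "dog": 0.1}
metadata = {"nearby_tags": ["sunset", "singapore"]}
print(rank_tags(features, metadata, ["beach", "sunset", "dog", "singapore"]))
```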

Authors and Affiliations

  • School of Computing, National University of Singapore, Singapore, Singapore

    Rajiv Shah, Roger Zimmermann

About the authors

Rajiv Ratn Shah received his B.Sc. with honors in Mathematics from Banaras Hindu University, India, in 2005. He received his M.Tech. in Computer Technology and Applications from Delhi Technological University, India, in 2010. Prior to joining the Indraprastha Institute of Information Technology Delhi (IIIT Delhi), India, as an assistant professor, Dr Shah received his Ph.D. in Computer Science from the National University of Singapore, Singapore. Currently, he is also working as a research fellow at the Living Analytics Research Centre (LARC) at Singapore Management University, Singapore. His research interests include the multimodal analysis of user-generated multimedia content in support of social media applications, multimodal event detection and recommendation, and multimedia analysis, search, and retrieval. Dr Shah is the recipient of several awards, including the runner-up prize in the Grand Challenge competition of the ACM International Conference on Multimedia. He is involved in reviewing for many top-tier international conferences and journals. He has published several research works in top-tier conferences and journals such as Springer MultiMedia Modeling, ACM International Conference on Multimedia, IEEE International Symposium on Multimedia, and Elsevier Knowledge-Based Systems.
