Publications

last updated: September 21, 2016

Journal Articles

Tapaswi_J1_PlotRetrieval

Aligning Plot Synopses to Videos for Story-based Retrieval

Makarand Tapaswi, Martin Bäuml and Rainer Stiefelhagen
International Journal of Multimedia Information Retrieval (IJMIR), vol. 4(1), pp. 3-16, 2015.

PDF Project DOI

@article{Tapaswi_J1_PlotRetrieval,
author = {Makarand Tapaswi and Martin Bäuml and Rainer Stiefelhagen},
title = {{Aligning Plot Synopses to Videos for Story-based Retrieval}},
year = {2015},
journal = {International Journal of Multimedia Information Retrieval (IJMIR)},
volume = {4},
pages = {3-16},
doi = {10.1007/s13735-014-0065-9}
}
We propose a method to facilitate search through the storyline of TV series episodes. To this end, we use human-written, crowdsourced descriptions -- plot synopses -- of the story conveyed in the video. We obtain such synopses from websites such as Wikipedia and propose various methods to align each sentence of the plot to shots in the video. Thus, the semantic story-based video retrieval problem is transformed into a much simpler text-based search. Finally, we return the set of shots aligned to the sentences as the video snippet corresponding to the query. The alignment is performed by first computing a similarity score between every shot and sentence through cues such as character identities and keyword matches between plot synopses and subtitles. We then formulate the alignment as an optimization problem and solve it efficiently using dynamic programming. We evaluate our methods on the fifth season of the TV series Buffy the Vampire Slayer and show encouraging results for both the alignment and the retrieval of story events.
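
For illustration only: a minimal sketch of the kind of monotonic sentence-to-shot dynamic-programming alignment described in the abstract, assuming a precomputed sentence-shot similarity matrix. Function and variable names are illustrative, not taken from the paper.

import numpy as np

def align_sentences_to_shots(sim):
    """Monotonic alignment of sentences (rows) to shots (columns).

    sim[i, j] is a similarity score between sentence i and shot j (e.g. from
    character identities and subtitle keyword matches). Every shot is assigned
    to one sentence and assignments never return to an earlier sentence, so
    each sentence covers a contiguous block of shots.
    """
    n_sent, n_shot = sim.shape
    best = np.full((n_sent, n_shot), -np.inf)     # best total similarity so far
    back = np.zeros((n_sent, n_shot), dtype=int)  # 1: came from previous sentence

    best[0, 0] = sim[0, 0]
    for j in range(1, n_shot):
        best[0, j] = best[0, j - 1] + sim[0, j]
    for i in range(1, n_sent):
        for j in range(i, n_shot):  # each earlier sentence needs at least one shot
            stay, advance = best[i, j - 1], best[i - 1, j - 1]
            best[i, j] = max(stay, advance) + sim[i, j]
            back[i, j] = int(advance >= stay)

    shot_to_sentence = np.zeros(n_shot, dtype=int)  # backtrack the best path
    i, j = n_sent - 1, n_shot - 1
    while j > 0:
        shot_to_sentence[j] = i
        if back[i, j]:
            i -= 1
        j -= 1
    shot_to_sentence[0] = i
    return shot_to_sentence

# toy example: 3 plot sentences, 6 shots
rng = np.random.default_rng(0)
print(align_sentences_to_shots(rng.random((3, 6))))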


Conference Proceedings

Tapaswi2016_MovieQA

MovieQA: Understanding Stories in Movies through Question-Answering

Makarand Tapaswi, Yukun Zhu, Rainer Stiefelhagen, Antonio Torralba, Raquel Urtasun and Sanja Fidler
IEEE Conference on Computer Vision and Pattern Recognition (CVPR Spotlight, Acceptance rate=9.7%), Las Vegas, NV, USA, Jun. 2016.

PDF Project arXiv spotlight-pdf spotlight-youtube poster-1 poster-2

@inproceedings{Tapaswi2016_MovieQA,
author = {Makarand Tapaswi and Yukun Zhu and Rainer Stiefelhagen and Antonio Torralba and Raquel Urtasun and Sanja Fidler},
title = {{MovieQA: Understanding Stories in Movies through Question-Answering}},
year = {2016},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {Jun.},
doi = {}
}
We introduce the MovieQA dataset which aims to evaluate automatic story comprehension from both video and text. The dataset consists of 14,944 questions about 408 movies with high semantic diversity. The questions range from simpler "Who" did "What" to "Whom", to "Why" and "How" certain events occurred. Each question comes with a set of five possible answers; a correct one and four deceiving answers provided by human annotators. Our dataset is unique in that it contains multiple sources of information -- video clips, plots, subtitles, scripts, and DVS. We analyze our data through various statistics and methods. We further extend existing QA techniques to show that question-answering with such open-ended semantics is hard. We make this data set public along with an evaluation benchmark to encourage inspiring work in this challenging domain.


AlHalah2016_AssociationPrediction

Recovering the Missing Link: Predicting Class-Attribute Associations for Unsupervised Zero-Shot Learning

Ziad Al-Halah, Makarand Tapaswi and Rainer Stiefelhagen
IEEE Conference on Computer Vision and Pattern Recognition (CVPR Poster, Acceptance rate=29.9%), Las Vegas, NV, USA, Jun. 2016.

PDF

@inproceedings{AlHalah2016_AssociationPrediction,
author = {Ziad Al-Halah and Makarand Tapaswi and Rainer Stiefelhagen},
title = {{Recovering the Missing Link: Predicting Class-Attribute Associations for Unsupervised Zero-Shot Learning}},
year = {2016},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {Jun.},
doi = {}
}
Collecting training images for all visual categories is not only expensive but also impractical. Zero-shot learning (ZSL), especially using attributes, offers a pragmatic solution to this problem. However, at test time most attribute-based methods require a full description of attribute associations for each unseen class. Providing these associations is time-consuming and often requires domain specific knowledge. In this work, we aim to carry out attribute-based zero-shot classification in an unsupervised manner. We propose an approach to learn relations that couples class embeddings with their corresponding attributes. Given only the name of an unseen class, the learned relationship model is used to automatically predict the class-attribute associations. Furthermore, our model facilitates transferring attributes across data sets without additional effort. Integrating knowledge from multiple sources results in a significant additional improvement in performance. We evaluate on two public data sets: Animals with Attributes and aPascal/aYahoo. Our approach outperforms state-of-the-art methods in both predicting class-attribute associations and unsupervised ZSL by a large margin.
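
For illustration only: the idea of predicting class-attribute associations from a class name alone can be sketched as a learned linear map from class-name embeddings to attribute associations. The data below is synthetic and plain least squares stands in for the relationship model actually proposed in the paper.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 300-d "word embeddings" for 20 seen class names and
# their binary class-attribute associations over 10 attributes.
seen_embeddings = rng.normal(size=(20, 300))
seen_attributes = (seen_embeddings[:, :10] > 0).astype(float)  # toy ground truth

# Learn a linear map from class-name embedding to attribute associations
# (least squares here; the paper learns a dedicated relationship model).
W, *_ = np.linalg.lstsq(seen_embeddings, seen_attributes, rcond=None)

# Given only the (embedded) name of an unseen class, predict its associations,
# which could then feed any standard attribute-based zero-shot classifier.
unseen_embedding = rng.normal(size=(1, 300))
predicted_associations = (unseen_embedding @ W > 0.5).astype(int)
print(predicted_associations)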


Haurilet2016_SubttOnly

Naming TV Characters by Watching and Analyzing Dialogs

Monica-Laura Haurilet, Makarand Tapaswi, Ziad Al-Halah and Rainer Stiefelhagen
IEEE Winter Conference on Applications of Computer Vision (WACV Acceptance rate=42.3%), Lake Placid, NY, USA, Mar. 2016.

PDF DOI poster presentation

@inproceedings{Haurilet2016_SubttOnly,
author = {Monica-Laura Haurilet and Makarand Tapaswi and Ziad Al-Halah and Rainer Stiefelhagen},
title = {{Naming TV Characters by Watching and Analyzing Dialogs}},
year = {2016},
booktitle = {IEEE Winter Conference on Applications of Computer Vision (WACV)},
month = {Mar.},
doi = {10.1109/WACV.2016.7477560}
}
Person identification in TV series has been a popular research topic over the last decade. In this area, most approaches either use manually annotated data or extract character supervision from a combination of subtitles and transcripts. However, both approaches have key drawbacks that hinder application of these methods at a large scale -- manual annotation is expensive and transcripts are often hard to obtain. We investigate the topic of automatically labeling all character appearances in TV series using information obtained solely from subtitles. This task is extremely difficult as the dialogs between characters provide very sparse and weakly supervised data. We address these challenges by exploiting recent advances in face descriptors and Multiple Instance Learning methods. We propose methods to create MIL bags and evaluate and discuss several MIL techniques. The best combination achieves an average precision of over 80% on three diverse TV series. We demonstrate that using only subtitles provides good results for identifying characters in TV series and hope to encourage the community to pursue this problem.
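
For illustration only: one way to picture the bag-level supervision described above. Face tracks that co-occur with a subtitle line mentioning a character form a bag labeled with that name, and a bag is scored by its best-matching track (a common max-over-instances MIL rule). All names, features and the scoring model below are synthetic; this is not the paper's exact construction.

import numpy as np

rng = np.random.default_rng(0)

# Each bag groups the face-track descriptors that co-occur with a subtitle
# line mentioning a character; the label only says the character appears in
# at least one track of the bag (weak, bag-level supervision).
bags = [rng.normal(size=(rng.integers(2, 6), 128)) for _ in range(5)]
bag_labels = ["Buffy", "Willow", "Buffy", "Xander", "Willow"]

def score_bag(bag, character_model):
    """Max-over-instances rule: a bag is explained by its best-matching track."""
    return max(float(track @ character_model) for track in bag)

# A synthetic linear model for one character; MIL training would estimate such
# models jointly with the (latent) instance labels inside every bag.
buffy_model = rng.normal(size=128)
for label, bag in zip(bag_labels, bags):
    print(label, round(score_bag(bag, buffy_model), 2))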


Ghaleb2015_Accio

Accio: A Data Set for Face Track Retrieval in Movies Across Age

Esam Ghaleb, Makarand Tapaswi, Ziad Al-Halah, Hazım Kemal Ekenel and Rainer Stiefelhagen
ACM International Conference on Multimedia Retrieval (ICMR Short paper, Poster, Acceptance rate=40.5%), Shanghai, China, Jun. 2015.

PDF DOI poster face tracks features (9GB+)

@inproceedings{Ghaleb2015_Accio,
author = {Esam Ghaleb and Makarand Tapaswi and Ziad Al-Halah and Hazım Kemal Ekenel and Rainer Stiefelhagen},
title = {{Accio: A Data Set for Face Track Retrieval in Movies Across Age}},
year = {2015},
booktitle = {ACM International Conference on Multimedia Retrieval (ICMR)},
month = {Jun.},
doi = {10.1145/2671188.2749296}
}
Video face recognition is a very popular task and has come a long way. The primary challenges, such as illumination, resolution and pose, are well studied through multiple data sets. However, there are no video-based data sets dedicated to studying the effects of aging on facial appearance. We present a challenging face track data set, Harry Potter Movies Aging Data set (Accio), to study and develop age-invariant face recognition methods for videos. Our data set not only has strong challenges of pose, illumination and distractors, but also spans a period of ten years, providing substantial variation in facial appearance. We propose two primary tasks: within and across movie face track retrieval; and two protocols which differ in their freedom to use external data. We present baseline results for the retrieval performance using a state-of-the-art face track descriptor. Our experiments show clear trends of reduction in performance as the age gap between the query and database increases. We will make the data set publicly available for further exploration in age-invariant video face recognition.


Tapaswi2015_Book2Movie

Book2Movie: Aligning Video scenes with Book chapters

Makarand Tapaswi, Martin Bäuml and Rainer Stiefelhagen
IEEE Conference on Computer Vision and Pattern Recognition (CVPR Poster, Acceptance rate=28.4%), Boston, MA, USA, Jun. 2015.

PDF Project DOI data set poster-1 poster-2 ext. abstract suppl. material

@inproceedings{Tapaswi2015_Book2Movie,
author = {Makarand Tapaswi and Martin Bäuml and Rainer Stiefelhagen},
title = {{Book2Movie: Aligning Video scenes with Book chapters}},
year = {2015},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {Jun.},
doi = {10.1109/CVPR.2015.7298792}
}
Film adaptations of novels often visually display in a few shots what is described in many pages of the source novel. In this paper we present a new problem: to align book chapters with video scenes. Such an alignment facilitates finding differences between the adaptation and the original source, and also acts as a basis for deriving rich descriptions from the novel for the video clips. We propose an efficient method to compute an alignment between book chapters and video scenes using matching dialogs and character identities as cues. A major consideration is to allow the alignment to be non-sequential. Our proposed shortest-path-based approach deals with non-sequential alignments and can be used to determine whether a video scene was part of the original book. We create a new data set involving two popular novel-to-film adaptations with widely varying properties and compare our method against other text-to-video alignment baselines. Using the alignment, we present a qualitative analysis of describing the video through rich narratives obtained from the novel.


Tapaswi2015_SpeakingFace

Improved Weak Labels using Contextual Cues for Person Identification in Videos

Makarand Tapaswi, Martin Bäuml and Rainer Stiefelhagen
IEEE International Conference on Automatic Face and Gesture Recognition (FG Poster, Acceptance rate=38.0%), Ljubljana, Slovenia, May 2015.

PDF DOI poster code

@inproceedings{Tapaswi2015_SpeakingFace,
author = {Makarand Tapaswi and Martin Bäuml and Rainer Stiefelhagen},
title = {{Improved Weak Labels using Contextual Cues for Person Identification in Videos}},
year = {2015},
booktitle = {IEEE International Conference on Automatic Face and Gesture Recognition (FG)},
month = {May},
doi = {10.1109/FG.2015.7163083}
}
Fully automatic person identification in TV series has been achieved by obtaining weak labels from subtitles and transcripts [Everingham 2011]. In this paper, we revisit the problem of matching subtitles with face tracks to obtain more assignments and more accurate weak labels. We perform a detailed analysis of the state of the art, showing the types of errors made during the assignment and providing insights into their causes. We then propose to model the problem of assigning names to face tracks as a joint optimization problem. Using negative constraints between co-occurring pairs of tracks and positive constraints from track threads, we are able to significantly improve the speaker assignment performance. This directly influences the identification performance on all face tracks. We also propose a new feature to determine whether a tracked face is speaking and show further improvements in performance while being computationally more efficient.


Tapaswi2014_FaceTrackCluster

Total Cluster: A person agnostic clustering method for broadcast videos

Makarand Tapaswi, Omkar M. Parkhi, Esa Rahtu, Eric Sommerlade, Rainer Stiefelhagen and Andrew Zisserman
ACM Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP Oral, Acceptance rate=9.4%), Bangalore, India, Dec. 2014.

PDF DOI presentation

@inproceedings{Tapaswi2014_FaceTrackCluster,
author = {Makarand Tapaswi and Omkar M. Parkhi and Esa Rahtu and Eric Sommerlade and Rainer Stiefelhagen and Andrew Zisserman},
title = {{Total Cluster: A person agnostic clustering method for broadcast videos}},
year = {2014},
booktitle = {ACM Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP)},
month = {Dec.},
doi = {10.1145/2683483.2683490}
}
The goal of this paper is unsupervised face clustering in edited video material -- where face tracks arising from different people are assigned to separate clusters, with one cluster for each person. In particular we explore the extent to which faces can be clustered automatically without making an error. This is a very challenging problem given the variation in pose, lighting and expressions that can occur, and the similarities between different people. The novelty we bring is threefold: first, we show that a form of weak supervision is available from the editing structure of the material -- the shots, threads and scenes that are standard in edited video; second, we show that by first clustering within scenes the number of face tracks can be significantly reduced with almost no errors; third, we propose an extension of the clustering method to entire episodes using exemplar SVMs based on the negative training data automatically harvested from the editing structure. The method is demonstrated on multiple episodes from two very different TV series, Scrubs and Buffy. For both series it is shown that we move towards our goal, and also outperform a number of baselines from previous works.
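
For illustration only: a minimal sketch of agglomerative clustering of face tracks with cannot-link constraints of the kind the editing structure provides (tracks that co-occur in the same shot cannot belong to the same person). Features, thresholds and the linkage are toy placeholders, not the paper's exemplar-SVM formulation.

from itertools import combinations
import numpy as np

def constrained_agglomerative(features, cannot_link, stop_dist):
    """Merge the closest pair of clusters unless any of their tracks co-occur."""
    clusters = [{i} for i in range(len(features))]
    forbidden = {frozenset(pair) for pair in cannot_link}

    def dist(a, b):  # average-linkage distance between two clusters of tracks
        return float(np.mean([np.linalg.norm(features[i] - features[j])
                              for i in a for j in b]))

    def allowed(a, b):
        return not any(frozenset((i, j)) in forbidden for i in a for j in b)

    while len(clusters) > 1:
        candidates = [(dist(a, b), a, b)
                      for a, b in combinations(clusters, 2) if allowed(a, b)]
        if not candidates:
            break
        d, a, b = min(candidates, key=lambda c: c[0])
        if d > stop_dist:  # conservative stop: avoid risky merges
            break
        clusters.remove(a); clusters.remove(b); clusters.append(a | b)
    return clusters

# toy scene: 6 face tracks; tracks 0 and 1 appear in the same shot
rng = np.random.default_rng(0)
feats = rng.normal(size=(6, 32))
print(constrained_agglomerative(feats, cannot_link=[(0, 1)], stop_dist=8.5))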


Tapaswi2014_FalsePositiveTracks

Cleaning up after a Face Tracker: False Positive Removal

Makarand Tapaswi, Cemal Çağrı Çörez, Martin Bäuml, Hazım Kemal Ekenel and Rainer Stiefelhagen
IEEE International Conference on Image Processing (ICIP Poster, Acceptance rate=43.2%), Paris, France, Oct. 2014.

PDF DOI poster

@inproceedings{Tapaswi2014_FalsePositiveTracks,
author = {Makarand Tapaswi and Cemal Çağrı Çörez and Martin Bäuml and Hazım Kemal Ekenel and Rainer Stiefelhagen},
title = {{Cleaning up after a Face Tracker: False Positive Removal}},
year = {2014},
booktitle = {IEEE International Conference on Image Processing (ICIP)},
month = {Oct.},
doi = {10.1109/ICIP.2014.7025050}
}
Automatic person identification in TV series has gained popularity over the years. While most works rely on face-based recognition, errors made during tracking, such as false positive face tracks, are typically ignored. We propose a variety of methods to remove false positive face tracks and categorize the methods into confidence- and context-based. We evaluate our methods on a large TV series data set and show that up to 75% of the false positive face tracks are removed at the cost of 3.6% of the true positive tracks. We further show that the proposed method is general and applicable to other detectors or trackers.


Baeuml2014_TrackKernel

A Time Pooled Track Kernel for Person Identification

Martin Bäuml, Makarand Tapaswi and Rainer Stiefelhagen
IEEE Conference on Advanced Video and Signal-based Surveillance (AVSS Oral, Acceptance rate=22.5%), Seoul, Korea, Aug. 2014.

PDF Project DOI data

@inproceedings{Baeuml2014_TrackKernel,
author = {Martin Bäuml and Makarand Tapaswi and Rainer Stiefelhagen},
title = {{A Time Pooled Track Kernel for Person Identification}},
year = {2014},
booktitle = {IEEE Conference on Advanced Video and Signal-based Surveillance (AVSS)},
month = {Aug.},
doi = {10.1109/AVSS.2014.6918636}
}
We present a novel method for comparing tracks by means of a time pooled track kernel. In contrast to spatial or feature-space pooling, the track kernel pools base kernel results within tracks over time. It includes as special cases frame-wise classification on the one hand and the normalized sum kernel on the other hand. We also investigate non-Mercer instantiations of the track kernel and obtain good results despite its Gram matrices not being positive semidefinite. Second, the track kernel matrices in general require less memory than single frame kernels, allowing larger datasets to be processed without resorting to subsampling. Finally, the track kernel formulation allows for very fast testing compared to frame-wise classification, which is important in settings where user feedback is obtained and quick iterations of re-training and re-testing are required. We apply our approach to the task of video-based person identification in large scale settings and obtain state-of-the-art results.
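
For illustration only: a minimal sketch of time pooling of a base kernel over all frame pairs of two tracks, assuming an RBF base kernel. Mean pooling behaves like the normalized sum kernel mentioned above, while max pooling is an example of a non-Mercer variant; this is a toy stand-in, not the paper's exact formulation.

import numpy as np

def rbf_base_kernel(X, Y, gamma=0.05):
    """Frame-level RBF kernel between two tracks of per-frame descriptors."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * sq_dists)

def track_kernel(X, Y, pool="mean", gamma=0.05):
    """Pool the base-kernel values of all frame pairs over time.

    pool="mean" behaves like a normalized sum kernel; pool="max" is a
    non-Mercer choice (its Gram matrix need not be positive semidefinite).
    """
    K = rbf_base_kernel(X, Y, gamma)
    return float(K.mean()) if pool == "mean" else float(K.max())

# toy tracks: 5 and 8 frames of 64-d face descriptors
rng = np.random.default_rng(0)
track_a, track_b = rng.normal(size=(5, 64)), rng.normal(size=(8, 64))
print(track_kernel(track_a, track_b, "mean"), track_kernel(track_a, track_b, "max"))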


Tapaswi2014_StoryGraphs

StoryGraphs: Visualizing Character Interactions as a Timeline

Makarand Tapaswi, Martin Bäuml and Rainer Stiefelhagen
IEEE Conference on Computer Vision and Pattern Recognition (CVPR Poster, Acceptance rate=29.9%), Columbus, OH, USA, Jun. 2014.

PDF Project DOI poster-1 poster-2 code suppl. material

@inproceedings{Tapaswi2014_StoryGraphs,
author = {Makarand Tapaswi and Martin Bäuml and Rainer Stiefelhagen},
title = {{StoryGraphs: Visualizing Character Interactions as a Timeline}},
year = {2014},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {Jun.},
doi = {10.1109/CVPR.2014.111}
}
We present a novel way to automatically summarize and represent the storyline of a TV episode by visualizing character interactions as a chart. We also propose a scene detection method, used to partition the video, that lends itself well to generating over-segmented scenes. The positioning of character lines in the chart is formulated as an optimization problem which trades off aesthetics against functionality. Using automatic person identification, we present StoryGraphs for 3 diverse TV series encompassing a total of 22 episodes. We define quantitative criteria to evaluate StoryGraphs and also compare them against episode summaries to evaluate their ability to provide an overview of the episode.


Tapaswi2014_PlotRetrieval

Story-based Video Retrieval in TV series using Plot Synopses

Makarand Tapaswi, Martin Bäuml and Rainer Stiefelhagen
ACM International Conference on Multimedia Retrieval (ICMR Oral Full paper, Acceptance rate=19.1%), Glasgow, Scotland, Apr. 2014.

PDF Project DOI presentation

@inproceedings{Tapaswi2014_PlotRetrieval,
author = {Makarand Tapaswi and Martin Bäuml and Rainer Stiefelhagen},
title = {{Story-based Video Retrieval in TV series using Plot Synopses}},
year = {2014},
booktitle = {ACM International Conference on Multimedia Retrieval (ICMR)},
month = {Apr.},
doi = {10.1145/2578726.2578727}
}
We present a novel approach to search for plots in the storyline of structured videos such as TV series. To this end, we propose to align natural language descriptions of the videos, such as plot synopses, with the corresponding shots in the video. Guided by subtitles and person identities, the alignment problem is formulated as an optimization task over all possible assignments and solved efficiently using dynamic programming. We evaluate our approach on a novel dataset comprising the complete season 5 of Buffy the Vampire Slayer, and show good alignment performance and the ability to retrieve plots in the storyline.


Baeuml2013_SemiPersonID

Semi-supervised Learning with Constraints for Person Identification in Multimedia Data

Martin Bäuml, Makarand Tapaswi and Rainer Stiefelhagen
IEEE Conference on Computer Vision and Pattern Recognition (CVPR Poster, Acceptance rate=25.2%), Portland, OR, USA, Jun. 2013.

PDF Project DOI poster-1 poster-2

@inproceedings{Baeuml2013_SemiPersonID,
author = {Martin Bäuml and Makarand Tapaswi and Rainer Stiefelhagen},
title = {{Semi-supervised Learning with Constraints for Person Identification in Multimedia Data}},
year = {2013},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {Jun.},
doi = {10.1109/CVPR.2013.462}
}
We address the problem of person identification in TV series. We propose a unified learning framework for multi-class classification which incorporates labeled and unlabeled data, and constraints between pairs of features in the training. We apply the framework to train multinomial logistic regression classifiers for multi-class face recognition. The method is completely automatic, as the labeled data is obtained by tagging speaking faces using subtitles and fan transcripts of the videos. We demonstrate our approach on six episodes each of two diverse TV series and achieve state-of-the-art performance.


Baeuml2012_CamNetwork

Contextual Constraints for Person Retrieval in Camera Networks

Martin Bäuml, Makarand Tapaswi, Arne Schumann and Rainer Stiefelhagen
IEEE Conference on Advanced Video and Signal-based Surveillance (AVSS Oral, Acceptance rate=17.8%), Beijing, China, Sep. 2012.

PDF Project DOI

@inproceedings{Baeuml2012_CamNetwork,
author = {Martin Bäuml and Makarand Tapaswi and Arne Schumann and Rainer Stiefelhagen},
title = {{Contextual Constraints for Person Retrieval in Camera Networks}},
year = {2012},
booktitle = {IEEE Conference on Advanced Video and Signal-based Surveillance (AVSS)},
month = {Sep.},
doi = {10.1109/AVSS.2012.28}
}
We use contextual constraints for person retrieval in camera networks. We start by formulating a set of general positive and negative constraints on the identities of person tracks in camera networks, such as the constraint that a person cannot appear twice in the same frame. We then show how these constraints can be used to improve person retrieval. First, we use the constraints to obtain training data in an unsupervised way to learn a general metric that is better suited to discriminate between different people than the Euclidean distance. Second, starting from an initial query track, we enhance the query set using the constraints to obtain additional positive and negative samples for the query. Third, we formulate the person retrieval task as an energy minimization problem, integrate track scores and constraints in a common framework and jointly optimize the retrieval over all interconnected tracks. We evaluate our approach on the CAVIAR dataset and achieve a 22% relative improvement in mean average precision over standard retrieval, where each track is treated independently.


Tapaswi2012_PersonID

"Knock! Knock! Who is it?" Probabilistic Person Identification in TV series

Makarand Tapaswi, Martin Bäuml and Rainer Stiefelhagen
IEEE Conference on Computer Vision and Pattern Recognition (CVPR Poster, Acceptance rate=24.0%), Providence, RI, USA, Jun. 2012.

PDF Project DOI poster-1 poster-2 suppl. material

@inproceedings{Tapaswi2012_PersonID,
author = {Makarand Tapaswi and Martin Bäuml and Rainer Stiefelhagen},
title = {{``Knock! Knock! Who is it?'' Probabilistic Person Identification in TV series}},
year = {2012},
booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {Jun.},
doi = {10.1109/CVPR.2012.6247986}
}
We describe a probabilistic method for identifying characters in TV series or movies. We aim at labeling every character appearance, and not only those where a face can be detected. Consequently, our basic unit of appearance is a person track (as opposed to a face track). We model each TV series episode as a Markov Random Field, integrating face recognition, clothing appearance, speaker recognition and contextual constraints in a probabilistic manner. The identification task is then formulated as an energy minimization problem. In order to identify tracks without faces, we learn clothing models by adapting available face recognition results. Within a scene, as indicated by prior analysis of the temporal structure of the TV series, clothing features are combined by agglomerative clustering. We evaluate our approach on the first 6 episodes of The Big Bang Theory and achieve an absolute improvement of 20% for person identification and 12% for face recognition.
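
For illustration only: a toy version of the energy formulation sketched above, combining per-track identity costs (from face, clothing and speaker cues) with pairwise cannot-link terms for temporally overlapping tracks, minimized here by exhaustive search over a handful of tracks. In practice the episode-level MRF is solved with proper inference; all numbers below are made up.

import itertools
import numpy as np

names = ["Sheldon", "Leonard", "Penny"]

# Unary costs per person track, e.g. -log P(identity | face, clothing, speaker).
unary = np.array([[0.2, 1.5, 2.0],
                  [1.8, 0.3, 1.9],
                  [1.0, 1.1, 0.9],
                  [2.0, 0.4, 1.6]])

# Tracks that overlap in time cannot share an identity (cannot-link constraint).
cooccurring = [(0, 1), (2, 3)]
PENALTY = 10.0

def energy(labels):
    e = sum(unary[t, l] for t, l in enumerate(labels))
    e += sum(PENALTY for i, j in cooccurring if labels[i] == labels[j])
    return e

best = min(itertools.product(range(len(names)), repeat=len(unary)), key=energy)
print([names[l] for l in best], round(float(energy(best)), 2))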


Das2010_SpeakerID

Direct modeling of spoken passwords for text-dependent speaker recognition by compressed time-feature representations

Amitava Das and Makarand Tapaswi
IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP Poster), Dallas, TX, USA, Mar. 2010.

PDF

@inproceedings{Das2010_SpeakerID,
author = {Amitava Das and Makarand Tapaswi},
title = {{Direct modeling of spoken passwords for text-dependent speaker recognition by compressed time-feature representations}},
year = {2010},
booktitle = {IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP)},
month = {Mar.},
doi = {}
}
Traditional Text-Dependent Speaker Recognition (TDSR) systems model the user-specific spoken passwords with frame-based features such as MFCC and use DTW or HMM type classifiers to handle the variable length of the feature vector sequence. In this paper, we explore a direct modeling of the entire spoken password by a fixed-dimension vector called Compressed Feature Dynamics or CFD. Instead of the usual frame-by-frame feature extraction, the entire password utterance is first modeled by a 2-D Featurogram or FGRAM, which efficiently captures speaker-identity-specific speech dynamics. CFDs are compressed and approximated versions of the FGRAMs, and their fixed dimension allows the use of simpler classifiers. Overall, the proposed FGRAM-CFD framework provides an efficient and direct model that captures speaker-identity information well for a TDSR system. As demonstrated in trials on a 344-speaker database, compared to traditional MFCC-based TDSR systems, the FGRAM-CFD framework shows quite encouraging performance at significantly lower complexity.
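
For illustration only: the general recipe behind a fixed-dimension representation of a variable-length spoken password, namely resampling the (frames x features) matrix onto a fixed time grid and flattening it, so that simple fixed-input classifiers apply. The actual FGRAM/CFD construction in the paper is more elaborate; everything below is a toy stand-in.

import numpy as np

def compress_time_feature(features, n_time=20, n_coeff=12):
    """Resample a (frames x features) matrix onto a fixed (n_time x n_coeff)
    grid by linear interpolation along time, then flatten to one vector."""
    frames = features.shape[0]
    idx = np.linspace(0, frames - 1, n_time)
    lo, hi = np.floor(idx).astype(int), np.ceil(idx).astype(int)
    w = (idx - lo)[:, None]
    resampled = (1 - w) * features[lo, :n_coeff] + w * features[hi, :n_coeff]
    return resampled.ravel()

# Two utterances of different lengths map to vectors of the same dimension,
# so any fixed-input classifier (or a plain distance) can compare them.
rng = np.random.default_rng(0)
utt_a = compress_time_feature(rng.normal(size=(137, 13)))  # e.g. 13-d MFCC frames
utt_b = compress_time_feature(rng.normal(size=(94, 13)))
print(utt_a.shape, utt_b.shape, round(float(np.linalg.norm(utt_a - utt_b)), 2))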


Workshops

Martinez2016_ICMLCompBioW

A Closed-form Gradient for the 1D Earth Mover's Distance for Spectral Deep Learning on Biological Data

Manuel Martinez, Makarand Tapaswi and Rainer Stiefelhagen
ICML 2016 Workshop on Computational Biology (CompBio@ICML16), New York, NY, USA, Jun. 2016.

PDF

@inproceedings{Martinez2016_ICMLCompBioW,
author = {Manuel Martinez and Makarand Tapaswi and Rainer Stiefelhagen},
title = {{A Closed-form Gradient for the 1D Earth Mover's Distance for Spectral Deep Learning on Biological Data}},
year = {2016},
booktitle = {ICML 2016 Workshop on Computational Biology (CompBio@ICML16)},
month = {Jun.},
doi = {}
}
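
No abstract is listed for this entry. As background only (not a summary of the paper): the 1D earth mover's distance between two normalized histograms equals the L1 distance between their cumulative sums, and that identity yields a closed-form (sub)gradient with respect to the predicted histogram, which is what makes the distance usable as a training loss. A minimal sketch of that identity:

import numpy as np

def emd1d_and_grad(p, q):
    """1D EMD between two normalized histograms and its (sub)gradient w.r.t. p.

    Uses EMD(p, q) = sum_k |P_k - Q_k| with P, Q the cumulative sums of p, q.
    Since dP_k/dp_i = 1 for i <= k, the gradient w.r.t. p_i is
    sum_{k >= i} sign(P_k - Q_k), i.e. a reverse cumulative sum of signs.
    """
    diff = np.cumsum(p) - np.cumsum(q)
    emd = np.abs(diff).sum()
    grad = np.cumsum(np.sign(diff)[::-1])[::-1]
    return emd, grad

p = np.array([0.1, 0.2, 0.3, 0.4])
q = np.array([0.4, 0.3, 0.2, 0.1])
print(emd1d_and_grad(p, q))  # EMD = 1.0 for this pair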


Vlastelica2015_MEDIAEVAL

KIT at MediaEval 2015 -- Evaluating Visual Cues for Affective Impact of Movies Task

Marin Vlastelica Pogančić, Sergey Hayrapetyan, Makarand Tapaswi and Rainer Stiefelhagen
Proceedings of the MediaEval2015 Multimedia Benchmark Workshop (MediaEval2015), Wurzen, Germany, Sep. 2015.

PDF

@inproceedings{Vlastelica2015_MEDIAEVAL,
author = {Marin Vlastelica Pogančić and Sergey Hayrapetyan and Makarand Tapaswi and Rainer Stiefelhagen},
title = {{KIT at MediaEval 2015 -- Evaluating Visual Cues for Affective Impact of Movies Task}},
year = {2015},
booktitle = {Proceedings of the MediaEval2015 Multimedia Benchmark Workshop (MediaEval2015)},
month = {Sep.},
doi = {}
}
We present the approach and results of our system on the MediaEval Affective Impact of Movies Task. The challenge involves two primary tasks: affect classification and violence detection. We test the performance of multiple visual features followed by linear SVM classifiers. Inspired by successes in different vision fields, we use (i) GIST features used in scene modeling, (ii) features extracted from a deep convolutional neural network trained on object recognition, and (iii) improved dense trajectory features encoded using Fisher vectors commonly used in action recognition.


Bredin2013_REPERE

QCompere @ Repere 2013

Hervé Bredin, Johann Poignant, Guillaume Fortier, Makarand Tapaswi, Viet Bac Le, Anindya Roy, Claude Barras, Sophie Rosset, Achintya Sarkar, Hua Gao, Alexis Mignon, Jakob Verbeek, Laurent Besacier, Georges Quénot, Hazım Kemal Ekenel and Rainer Stiefelhagen
Workshop on Speech, Language and Audio in Multimedia (SLAM Oral), Marseille, France, Aug. 2013.

PDF

@inproceedings{Bredin2013_REPERE,
author = {Hervé Bredin and Johann Poignant and Guillaume Fortier and Makarand Tapaswi and Viet Bac Le and Anindya Roy and Claude Barras and Sophie Rosset and Achintya Sarkar and Hua Gao and Alexis Mignon and Jakob Verbeek and Laurent Besacier and Georges Quénot and Hazım Kemal Ekenel and Rainer Stiefelhagen},
title = {{QCompere @ Repere 2013}},
year = {2013},
booktitle = {Workshop on Speech, Language and Audio in Multimedia (SLAM)},
month = {Aug.},
doi = {}
}
We describe the QCompere consortium submissions to the REPERE 2013 evaluation campaign. The REPERE challenge aims at gathering four communities (face recognition, speaker identification, optical character recognition and named entity detection) towards the same goal: multimodal person recognition in TV broadcast. First, four mono-modal components are introduced (one for each of the foregoing communities), constituting the elementary building blocks of our various submissions. Then, depending on the target modality (speaker or face recognition) and on the task (supervised or unsupervised recognition), four different fusion techniques are introduced: they can be summarized as propagation-, classifier-, rule- or graph-based approaches. Finally, their performance is evaluated on the REPERE 2013 test set and their advantages and limitations are discussed.


Bredin2012_REPERE

Fusion of Speech, Faces and Text for Person Identification in TV Broadcast

Hervé Bredin, Johann Poignant, Makarand Tapaswi, Guillaume Fortier, Viet Bac Le, Thibault Napoleon, Hua Gao, Claude Barras, Sophie Rosset, Laurent Besacier, Jakob Verbeek, Georges Quénot, Frédéric Jurie and Hazım Kemal Ekenel
Workshop on Information Fusion in Computer Vision for Concept Recognition (held with ECCV 2012) (IFCVCR Poster), Florence, Italy, Oct. 2012.

PDF DOI poster

@inproceedings{Bredin2012_REPERE,
author = {Hervé Bredin and Johann Poignant and Makarand Tapaswi and Guillaume Fortier and Viet Bac Le and Thibault Napoleon and Hua Gao and Claude Barras and Sophie Rosset and Laurent Besacier and Jakob Verbeek and Georges Quénot and Frédéric Jurie and Hazım Kemal Ekenel},
title = {{Fusion of Speech, Faces and Text for Person Identification in TV Broadcast}},
year = {2012},
booktitle = {Workshop on Information Fusion in Computer Vision for Concept Recognition (held with ECCV 2012) (IFCVCR)},
month = {Oct.},
doi = {10.1007/978-3-642-33885-4_39}
}
The REPERE challenge is a project aiming at the evaluation of systems for supervised and unsupervised multimodal recognition of people in TV broadcast. In this paper, we describe, evaluate and discuss the QCompere consortium submissions to the 2012 REPERE evaluation campaign dry run. Speaker identification (and face recognition) can be greatly improved when combined with name detection through video optical character recognition. Moreover, we show that unsupervised multimodal person recognition systems can achieve performance nearly as good as supervised monomodal ones (with several hundred identity models).


Semela2012_MediaEval

KIT at MediaEval2012 - Content-based Genre Classification with Visual Cues

Tomas Semela, Makarand Tapaswi, Hazım Kemal Ekenel and Rainer Stiefelhagen
Proceedings of the MediaEval2012 Multimedia Benchmark Workshop (MediaEval2012), Pisa, Italy, Oct. 2012.

PDF

@inproceedings{Semela2012_MediaEval,
author = {Tomas Semela and Makarand Tapaswi and Hazım Kemal Ekenel and Rainer Stiefelhagen},
title = {{KIT at MediaEval2012 - Content-based Genre Classification with Visual Cues}},
year = {2012},
booktitle = {Proceedings of the MediaEval2012 Multimedia Benchmark Workshop (MediaEval2012)},
month = {Oct.},
doi = {}
}
This paper presents the results of our content-based video genre classification system on the 2012 MediaEval Tagging Task. Our system utilizes several low-level visual cues to achieve this task. The purpose of this evaluation is to assess our content-based system's performance on the large number of blip.tv web videos and the high number of genres. The task and corpus are described in detail in [Schmeideke 2012].


Das2008_Multilingual

Multilingual spoken-password based user authentication in emerging economies using cellular phone networks

Amitava Das, Ohil K. Manyam, Makarand Tapaswi and Veeresh Taranalli
IEEE Workshop on Spoken Language Technology (SLT Oral), Goa, India, Dec. 2008.

PDF DOI

@inproceedings{Das2008_Multilingual,
author = {Amitava Das and Ohil K. Manyam and Makarand Tapaswi and Veeresh Taranalli},
title = {{Multilingual spoken-password based user authentication in emerging economies using cellular phone networks}},
year = {2008},
booktitle = {IEEE Workshop on Spoken Language Technology (SLT)},
month = {Dec.},
doi = {10.1109/SLT.2008.4777826}
}
Mobile phones are playing an important role in changing the socio-economic landscapes of emerging economies like India. Proper voice-based user authentication will help enable many new mobile-based applications, including mobile commerce and banking. We present our exploration and evaluation of an experimental set-up for user authentication in remote Indian villages using mobile phones and user-selected multilingual spoken passwords. We also present an effective speaker recognition method using a set of novel features called Compressed Feature Dynamics (CFD), which capture the speaker identity effectively from the speech dynamics contained in the spoken passwords. Early trials demonstrate the effectiveness of the proposed method in handling noisy cell-phone speech. Compared to conventional text-dependent speaker recognition methods, the proposed CFD method delivers competitive performance while significantly reducing storage and computational complexity -- an advantage highly beneficial for cell-phone-based deployment of such user authentication systems.


Disclaimer

This publication material is presented to ensure timely dissemination of scholarly and technical work. Copyright and all rights therein are retained by authors or by other copyright holders. All persons copying this information are expected to adhere to the terms and constraints invoked by each author's copyright. In most cases, these works may not be reposted without the explicit permission of the copyright holder.