Author

Jamie A. Ward, Paul Lukowicz, Gerhard Tröster

Abstract

Evaluating the performance of a continuous activity recognition system can be a challenging problem. To date there is no widely accepted standard for dealing with this, and in general methods and measures are adapted from related fields such as speech and vision. Much of the problem stems from the often imprecise and ambiguous nature of the real-world events that an activity recognition system has to deal with. A recognised event might have variable duration, or be shifted in time from the corresponding real-world event. Equally, it might be broken up into smaller pieces, or joined together to form larger events. Most evaluation attempts tend to smooth over these issues, using ``fuzzy'' boundaries or some other parameter-based error decision, so as to make possible the use of standard performance measures (such as insertions and deletions). However, we argue that reducing the various facets of an activity recognition system into limited error categories -- categories that were originally intended for different problem domains -- can be overly restrictive. In this paper we attempt to identify and characterise the errors typical to continuous activity recognition, and develop a method for quantifying them in an unambiguous manner. By way of an initial investigation, we apply the method to an example taken from previous work, and discuss the advantages that this provides over two of the most commonly used methods.
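The event-level errors named in the abstract can be illustrated with a small sketch. This is not the paper's scoring algorithm, only a minimal illustration of the idea: a ground-truth event overlapped by no recognised event is a deletion, one overlapped by several is fragmented; symmetrically, a recognised event overlapping no ground-truth event is an insertion, and one spanning several is a merge. The function name `characterise` and the `(start, end)` interval representation are assumptions for this example.

```python
def overlaps(a, b):
    """True if half-open time intervals a=(start, end) and b overlap."""
    return a[0] < b[1] and b[0] < a[1]

def characterise(ground_truth, recognised):
    """Label each event by how many events in the other sequence it
    overlaps. Events are (start, end) tuples; returns two label lists,
    one for the ground-truth events and one for the recognised events."""
    gt_labels, rec_labels = [], []
    for g in ground_truth:
        hits = sum(overlaps(g, r) for r in recognised)
        gt_labels.append("deleted" if hits == 0 else
                         "matched" if hits == 1 else "fragmented")
    for r in recognised:
        hits = sum(overlaps(r, g) for g in ground_truth)
        rec_labels.append("inserted" if hits == 0 else
                          "matched" if hits == 1 else "merging")
    return gt_labels, rec_labels

# A recogniser that splits one long activity into two short outputs:
gt_l, rec_l = characterise([(0, 10), (20, 30)], [(2, 6), (7, 9), (22, 28)])
# gt_l  -> ['fragmented', 'matched']
# rec_l -> ['matched', 'matched', 'matched']
```

Unlike frame-by-frame accuracy, labels of this kind distinguish a fragmented event from a genuinely missed one, which is the distinction the paper argues standard measures blur.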

BibTeX

@inproceedings{Ward:Evaluating:2006:6527,
	author = {Jamie A. Ward and Paul Lukowicz and Gerhard Tröster},
	title = {Evaluating performance in continuous context recognition using event-driven error characterisation},
	year = {2006},
	volume = {3987},
	pages = {239--255},
	publisher = {Springer-Verlag}
}