
Title: LifeLogging: Personal Big Data
Language: English
File Size: 8.1 MB
Total Pages: 128
Table of Contents
Introduction
	Terminology, definitions and memory
	Motivation
	Who lifelogs and why?
	Topics in lifelogging
	Review outline
Background
	History
	Capture, storage and retrieval advances
	Lifelogging disciplines
Sourcing and Storing Lifelog Data
	Sources of lifelog data
	Lifelogging: personal big data — little big data
	Storage models for lifelog data
Organising Lifelog Data
	Identifying events
	Annotating events and other atomic units of retrieval
	Search and retrieval within lifelogs
	User experience and user interfaces
	Evaluation: methodologies and challenges
Lifelogging Applications
	Personal lifelogging applications
	Population-based lifelogging applications
	Potential applications of lifelogging in information retrieval
Conclusions and Issues
	Issues with lifelogging
	Future directions
	Conclusion
Acknowledgments
References
Page 1



Foundations and Trends® in Information Retrieval
Vol. 8, No. 1 (2014) 1–107
© 2014 C. Gurrin, A. F. Smeaton, and A. R. Doherty
DOI: 10.1561/1500000033

LifeLogging: Personal Big Data

Cathal Gurrin
Insight Centre for Data Analytics

Dublin City University
[email protected]

Alan F. Smeaton
Insight Centre for Data Analytics

Dublin City University
[email protected]

Aiden R. Doherty
Nuffield Department of Population Health

University of Oxford
[email protected]

Page 2

Contents

1 Introduction 3
1.1 Terminology, definitions and memory . . . . . . . . . . . . 4
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.3 Who lifelogs and why? . . . . . . . . . . . . . . . . . . . 11
1.4 Topics in lifelogging . . . . . . . . . . . . . . . . . . . . . 15
1.5 Review outline . . . . . . . . . . . . . . . . . . . . . . . . 18

2 Background 19
2.1 History . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.2 Capture, storage and retrieval advances . . . . . . . . . . 27
2.3 Lifelogging disciplines . . . . . . . . . . . . . . . . . . . . 36

3 Sourcing and Storing Lifelog Data 39
3.1 Sources of lifelog data . . . . . . . . . . . . . . . . . . . . 39
3.2 Lifelogging: personal big data — little big data . . . . . . 45
3.3 Storage models for lifelog data . . . . . . . . . . . . . . . 47

4 Organising Lifelog Data 51
4.1 Identifying events . . . . . . . . . . . . . . . . . . . . . . 54
4.2 Annotating events and other atomic units of retrieval . . . 59
4.3 Search and retrieval within lifelogs . . . . . . . . . . . . . 68
4.4 User experience and user interfaces . . . . . . . . . . . . . 76


Page 64

4.2 Annotating events and other atomic units of retrieval

4.2.1 Annotating lifelogs - who

Attempts to identify people who were co-present in an event have
mainly used Bluetooth sensors to scan and log co-present devices, Eagle
and Pentland (2006); Lavelle et al. (2007). In addition to annotating
people directly, the number and distribution of co-present people also
helps determine an event's distinctiveness or uniqueness. For example,
Aizawa et al. (2004b) found faces helpful in computing event importance
scores. Others extended this event importance scoring approach by
merging visual uniqueness, based for example on MPEG-7 visual features,
with the number of people encountered in a given event, Doherty
and Smeaton (2008c). Distinctiveness is also a critical issue in
autobiographical memory, as found by Brewer (1988), but we do not yet
understand what makes some things more distinct than others, though
this remains a topic of current research, Hebbalaguppe et al. (2013). The
concept of distinctiveness is also taken into account in the information
retrieval domain, for example in web page importance or in various forms
of novelty detection, Allan et al. (2003b).
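As a concrete illustration, the fusion of co-present-people counts with a visual-uniqueness measure might be sketched as follows. The linear weighting scheme, the field names, and the default weight are illustrative assumptions for exposition, not the published formula of Doherty and Smeaton (2008c):

```python
def distinctiveness_scores(events, w_people=0.5):
    """Score each event's distinctiveness by fusing two cues.

    events: list of dicts, each with
      'faces'             - number of people detected in the event (int)
      'visual_uniqueness' - a visual-uniqueness score in [0, 1]
    Returns one score in [0, 1] per event. The linear fusion and the
    weight w_people are illustrative assumptions, not a published model.
    """
    # Normalise face counts against the busiest event ('or 1' avoids
    # division by zero when no event contains any people).
    max_faces = max(e["faces"] for e in events) or 1
    return [
        w_people * (e["faces"] / max_faces)
        + (1 - w_people) * e["visual_uniqueness"]
        for e in events
    ]
```

Events that are both visually unusual and socially busy then rank highest, which is the intuition behind using distinctiveness to prioritise events for review.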

Detecting people and faces is also useful in helping to select representative
“keyframes” for events, should image data be present. In
addition, image saliency has been shown to be important when
selecting event keyframes, as discussed in Doherty et al. (2008). These
ideas were inspired by descriptions of keyframe selection in the video
domain by Girgensohn and Boreczky (2000), and represented an
alternative to simpler techniques such as selecting the middle image in
each event, as proposed by Smeaton and Browne (2006); Blighe et al.
(2008). Keyframes can subsequently become the source data for visual
analysis or for use in a browsing interface.
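The two selection strategies above can be sketched in a few lines. The function names, and the representation of saliency as a precomputed per-image score, are illustrative assumptions rather than the actual implementations cited:

```python
def middle_keyframe(event_images):
    """Baseline strategy: pick the temporally middle image of an event
    (cf. Smeaton and Browne (2006)). event_images is a chronologically
    ordered list of image identifiers."""
    if not event_images:
        return None
    return event_images[len(event_images) // 2]


def salient_keyframe(event_images, saliency):
    """Alternative strategy: pick the image with the highest saliency
    score, a stand-in for the saliency measures discussed in Doherty
    et al. (2008). `saliency` maps image id -> float score."""
    return max(event_images, key=lambda img: saliency[img])
```

The middle-image baseline needs no image analysis at all, which is why it serves as the reference point that saliency-based selection is measured against.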

4.2.2 Annotating lifelogs - what

To annotate what type of activity is occurring in events, efforts in
lifelogging have focused mainly on employing computer vision and audio
processing technologies. Audio processing assumes a continual audio log
of daily activities, while computer vision assumes wearable cameras,
such as the SenseCam. Wearable video would, of course, capture both
modalities together, and recent experience suggests that the inclusion
of additional sources of evidence (such as GPS location, user activity,
and time of day) can all help to inform the semantic annotation
process. There have been some efforts using audio; for example, Kern
et al. (2007) used a combination of audio activity levels and
movement/accelerometer sensors in an attempt to identify what was
happening at a given point in time. There have also been audio-based
approaches that attempt to convert spoken words into textual transcripts,
but none in the lifelogging domain, to the best of our knowledge.
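A deliberately simplified, rule-based sketch of fusing audio activity levels with accelerometer-derived movement, in the spirit of Kern et al. (2007). The thresholds, labels, and decision rules here are illustrative assumptions, not those of the cited work, which used learned rather than hand-written rules:

```python
def label_activity(audio_level, movement):
    """Guess a coarse activity label from two sensor channels.

    audio_level: ambient audio activity, normalised to [0, 1]
    movement:    accelerometer-derived movement, normalised to [0, 1]
    The thresholds and label set are illustrative assumptions.
    """
    if movement > 0.6:
        # High movement: quiet suggests walking alone, noisy suggests
        # being in transit among other people.
        return "walking" if audio_level < 0.5 else "commuting"
    if audio_level > 0.6:
        # Low movement but high audio: likely talking with someone.
        return "conversation"
    # Low movement, low audio: a sedentary, quiet activity.
    return "desk work"
```

Even this toy example shows why multi-sensor fusion helps: neither channel alone can separate "walking" from "commuting" or "conversation" from "desk work".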

Given our focus in this review on capturing a totality of life experience,
computer vision based approaches will receive most attention.
Such approaches have involved using general multimedia processing
techniques for the internal representation of events in lifelogging, such as
MPEG-7 visual features, Salembier and Sikora (2002), SIFT (Scale-Invariant
Feature Transform), Lowe (2004), SURF (Speeded-Up Robust Features),
Bay et al. (2006), search using a bag-of-visual-words approach, Nistér
and Stewénius (2006), and others. This work involves exploring image
feature vector similarity options, Kokare et al. (2003); Rubner et al.
(2000), and also merging different data sources together, Montague and
Aslam (2001); Fox and Shaw (1993). All similar approaches generate
signatures for a given image from an event. These signatures can either
support a ‘find visually similar’ browsing interaction or be used as input
to higher-level event classification.
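A minimal sketch of the two building blocks just described: comparing image signatures by cosine similarity, and CombSUM-style fusion, Fox and Shaw (1993), of scores from several feature channels. The function names and data layout are illustrative assumptions:

```python
import math


def cosine_similarity(sig_a, sig_b):
    """Compare two image signatures (feature vectors of equal length);
    returns a value in [-1, 1], with 1 meaning identical direction."""
    dot = sum(a * b for a, b in zip(sig_a, sig_b))
    norm_a = math.sqrt(sum(a * a for a in sig_a))
    norm_b = math.sqrt(sum(b * b for b in sig_b))
    norm = norm_a * norm_b
    return dot / norm if norm else 0.0


def combsum(score_lists):
    """CombSUM fusion: sum per-item scores across feature channels.

    score_lists: list of dicts mapping item id -> similarity score,
    one dict per channel (e.g. colour, texture, SIFT-based scores).
    Items missing from a channel simply contribute nothing from it.
    """
    fused = {}
    for scores in score_lists:
        for item, s in scores.items():
            fused[item] = fused.get(item, 0.0) + s
    return fused
```

In practice the per-channel scores would be normalised before summation, since CombSUM implicitly assumes the channels are on comparable scales.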

The goal of higher-level image-based approaches has been to apply
an automatic form of semantic labelling to map lifelog images to
given concepts or activities, Doherty et al. (2011a), generally based
on Support Vector Machine (SVM) learning, Joachims (2002). Since
low-level features can be extracted automatically from media objects,
including lifelog image content, these are assumed to correspond to the
semantics of the query in multimedia information retrieval, and to the
semantics of the lifelog event in our case. The FnTIR review of Concept-Based
Video Retrieval by Snoek and Worring (2009) provides background
information on the identification of semantic concepts (such
as indoors, outdoors, eating, cars, explosions, etc.) on visual media.

Page 127

References

Troiano, R. P., Berrigan, D., Dodd, K. W., Masse, L. C., Tilert, T., and
McDowell, M. (2008). Physical activity in the United States measured by
accelerometer. Med Sci Sports Exerc, 40:181–188.

van den Hoven, E., Sas, C., and Whittaker, S. (2012). Introduction to this
special issue on designing for personal memories: Past, present, and future.
Human-Computer Interaction, 27(1-2):1–12.

Vemuri, S. and Bender, W. (2004). Next-generation personal memory aids.
BT Technology Journal, 22(4):125–138.

Vemuri, S., Schmandt, C., Bender, W., Tellex, S., and Lassey, B. (2004). An
audio-based personal memory aid. In Ubicomp, pages 400–417.

Vicedo, J. L. and Gómez, J. (2007). TREC: Experiment and evaluation in
information retrieval: Book reviews. J. Am. Soc. Inf. Sci. Technol., 58(6):910–911.

Walter, C. (2005). Kryder’s law. Scientific American, 293(2):32–33.

Wang, P. and Smeaton, A. F. (2011). Aggregating semantic concepts for event
representation in lifelogging. In Proceedings of the International Workshop
on Semantic Web Information Management, SWIM ’11, pages 8:1–8:6, New
York, NY, USA. ACM.

Wang, P. and Smeaton, A. F. (2012). Semantics-based selection of everyday
concepts in visual lifelogging. International Journal of Multimedia
Information Retrieval, 1:87–101.

Wang, Z., Hoffman, M. D., Cook, P. R., and Li, K. (2006). VFerret: content-based
similarity search tool for continuous archived video. In CARPE ’06:
Proceedings of the 3rd ACM Workshop on Continuous Archival and Retrieval
of Personal Experiences, pages 19–26, New York, NY, USA. ACM.

Watson, H. J. and Wixom, B. H. (2007). The current state of business
intelligence. Computer, 40(9):96–99.

Whittaker, S., Kalnikaitė, V., Petrelli, D., Sellen, A., Villar, N., Bergman,
O., Ilan, B., Clough, P., Brockmeier, J., and Whittaker, S. (2012). Socio-technical
lifelogging: Deriving design principles for a future proof digital
past. Human-Computer Interaction (Special Issue on Designing for Personal
Memories), 27(1-2):37–62.

Woodman, O. and Harle, R. (2008). Pedestrian localisation for indoor
environments. In Proceedings of the 10th International Conference on Ubiquitous
Computing, UbiComp ’08, pages 114–123, New York, NY, USA. ACM.

Yang, Y., Zhou, R., and Gurrin, C. (2012). A mechanical memory: Prototype
digital memories in everyday devices. In 6th Irish Human Computer
Interaction Conference (iHCI2012), Galway, Ireland.

Page 128


Yeung, M. and Yeo, B.-L. (1996). Time-constrained clustering for segmentation
of video into story units. In Proceedings of the 13th International
Conference on Pattern Recognition, 3:375–380.

Zacks, J. M., Speer, N. K., Vettel, J. M., and Jacoby, L. L. (2006). Event
understanding and memory in healthy aging and dementia of the Alzheimer
type. Psychology and Aging, 21(3):466–482.

Zacks, J. M. and Tversky, B. (2001). Event structure in perception and
conception. Psychological Bulletin, 127(1):3–21.

Zhang, S., Rowlands, A. V., Murray, P., and Hurst, T. L. (2012). Physical
activity classification using the GENEA wrist-worn accelerometer. Med Sci
Sports Exerc, 44(4):742–748.

Zheng, Y.-T., Zhao, M., Song, Y., Adam, H., Buddemeier, U., Bissacco, A.,
Brucher, F., Chua, T.-S., and Neven, H. (2009). Tour the world: Building
a web-scale landmark recognition engine. In Proceedings of the IEEE
Conference on Computer Vision and Pattern Recognition (CVPR), pages
1085–1092.

Zhou, L., Caprani, N., Gurrin, C., and O’Connor, N. E. (2013). ShareDay:
A novel lifelog management system for group sharing. In Li, S., Saddik,
A., Wang, M., Mei, T., Sebe, N., Yan, S., Hong, R., and Gurrin, C.,
editors, Advances in Multimedia Modeling, volume 7733 of Lecture Notes in
Computer Science, pages 490–492. Springer Berlin Heidelberg.
