Basic features: 78 hours of data from 7 participants, recorded in two 6-hour sessions each while they ate and performed other daily activities wearing Google Glass, an earbud microphone, and a smartwatch on each wrist. Includes ground truth at the level of individual chews and swallows annotated from video, along with annotations of the type and amount of food and drink consumed in each intake. Audio data coming soon.
Getting the data: A registration form and download are available: [here].
Citation: C. Merck, C. Maher, M. Mirtchouk, M. Zheng, Y. Huang, and S. Kleinberg. Multimodality Sensing for Eating Recognition. In Pervasive Health, 2016.
If you use the food weight and type annotations, also cite: M. Mirtchouk, C. Merck, and S. Kleinberg. Automated Estimation of Food Type and Amount Consumed from Body-worn Audio and Motion Sensors. In UbiComp, 2016.
ACE 1.1: Annotations of food type and amount consumed in each intake have been added.
License: Use of the data is permitted for non-commercial research and education purposes, provided you 1) properly credit the data source (citation information above), 2) do not attempt to identify participants in the study, and 3) do not redistribute the data (with or without modification).
Basic features: 2 hours of high-resolution activity data for each of 38 participants as they walk, talk, and eat meals, collected with Google Glass.
Getting the data: The data, along with a more detailed README and annotations of activities, are available at: [GLEAM.tar.gz]
Citation: S. A. Rahman, C. Merck, Y. Huang, and S. Kleinberg. Unintrusive Eating Recognition using Google Glass. In Pervasive Health, 2015.
License: The data may be used for any non-commercial purpose, as long as you properly attribute its source and do not redistribute the data (original or modified). You may also not attempt to identify participants in the study.