
dc.contributor.author    Chen, Wenqiang
dc.contributor.author    Cheng, Jiaxuan
dc.contributor.author    Wang, Leyao
dc.contributor.author    Zhao, Wei
dc.contributor.author    Matusik, Wojciech
dc.date.accessioned      2024-12-19T22:04:16Z
dc.date.available        2024-12-19T22:04:16Z
dc.date.issued           2024-11-21
dc.identifier.issn       2474-9567
dc.identifier.uri        https://hdl.handle.net/1721.1/157899
dc.description.abstract    Visual Question-Answering, a technology that generates textual responses from an image and a natural-language question, has progressed significantly. Notably, it can aid in tracking and inquiring about daily activities, which is crucial in healthcare monitoring, especially for elderly patients or those with memory disabilities. However, video poses privacy concerns and has a limited field of view. This paper presents Sensor2Text, a model proficient in tracking daily activities and engaging in conversations using wearable sensors. The approach outlined here tackles several challenges, including the low information density of wearable sensor data, the insufficiency of any single wearable sensor for human activity recognition, and the model's limited capacity for Question-Answering and interactive conversations. To resolve these obstacles, transfer learning and student-teacher networks are utilized to leverage knowledge from visual-language models. Additionally, an encoder-decoder neural network model is devised to jointly process language and sensor data for conversational purposes. Furthermore, Large Language Models are used to enable interactive capabilities. The model showcases the ability to identify human activities and engage in Q&A dialogues using various wearable sensor modalities. It performs comparably to or better than existing visual-language models in both captioning and conversational tasks. To our knowledge, this represents the first model capable of conversing about wearable sensor data, offering an innovative approach to daily activity tracking that addresses the privacy and field-of-view limitations associated with current vision-based solutions.    en_US
dc.publisher                ACM    en_US
dc.relation.isversionof     https://doi.org/10.1145/3699747    en_US
dc.rights                   Creative Commons Attribution    en_US
dc.rights.uri               https://creativecommons.org/licenses/by/4.0/    en_US
dc.source                   Association for Computing Machinery    en_US
dc.title                    Sensor2Text: Enabling Natural Language Interactions for Daily Activity Tracking Using Wearable Sensors    en_US
dc.type                     Article    en_US
dc.identifier.citation      Chen, Wenqiang, Cheng, Jiaxuan, Wang, Leyao, Zhao, Wei and Matusik, Wojciech. 2024. "Sensor2Text: Enabling Natural Language Interactions for Daily Activity Tracking Using Wearable Sensors." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 8 (4).
dc.contributor.department   Massachusetts Institute of Technology. Department of Electrical Engineering and Computer Science    en_US
dc.relation.journal         Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies    en_US
dc.identifier.mitlicense    PUBLISHER_CC
dc.eprint.version           Final published version    en_US
dc.type.uri                 http://purl.org/eprint/type/JournalArticle    en_US
eprint.status               http://purl.org/eprint/status/PeerReviewed    en_US
dc.date.updated             2024-12-01T08:54:55Z
dc.language.rfc3066         en
dc.rights.holder            The author(s)
dspace.date.submission      2024-12-01T08:54:55Z
mit.journal.volume          8    en_US
mit.journal.issue           4    en_US
mit.license                 PUBLISHER_CC
mit.metadata.status         Authority Work and Publication Information Needed    en_US
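
For readers wanting a concrete picture of the student-teacher transfer described in the abstract, the sketch below shows one plausible form it could take: a small sensor encoder (the student) is trained to match embeddings produced by a frozen, pretrained visual teacher, so the sensor stream can feed an existing visual-language decoder. This is a minimal illustration inferred from the abstract only; the module names, layer sizes, and MSE alignment loss are assumptions, not the authors' published implementation.

```python
# Illustrative sketch (not the Sensor2Text code): align a student sensor
# encoder with a frozen visual teacher via embedding distillation.
import torch
import torch.nn as nn

class SensorEncoder(nn.Module):
    """Student: encodes a window of wearable-sensor readings into the same
    embedding space as the visual teacher (sizes are assumptions)."""
    def __init__(self, in_channels=6, embed_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv1d(in_channels, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time axis
        )
        self.proj = nn.Linear(128, embed_dim)

    def forward(self, x):                     # x: (batch, channels, time)
        h = self.backbone(x).squeeze(-1)      # (batch, 128)
        return self.proj(h)                   # (batch, embed_dim)

def distillation_step(student, teacher, sensor_batch, video_batch, optimizer):
    """One student-teacher step: pull the sensor embedding toward the frozen
    teacher's embedding of the time-synchronized video clip."""
    teacher.eval()
    with torch.no_grad():
        target = teacher(video_batch)         # frozen visual embedding
    pred = student(sensor_batch)
    loss = nn.functional.mse_loss(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once the student's embeddings live in the teacher's space, a pretrained language decoder (or an LLM prompted with the embedding-derived captions) can, in principle, answer questions about the sensed activities; that conversational stage is not shown here.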

