kenna


Clinical-grade wearable sleep monitoring is a challenging problem, since it requires concurrently monitoring brain activity, eye movement, muscle activity, cardio-respiratory features, and gross body movements. This requires multiple sensors worn at different body locations, as well as uncomfortable adhesives and discrete electronic components placed on the head. As a result, existing wearables compromise either comfort or accuracy in tracking sleep variables. We propose PhyMask, an all-textile sleep monitoring solution that is practical and comfortable for continuous use and that acquires all signals of interest for sleep monitoring solely using comfortable textile sensors placed on the head. We show that PhyMask can accurately measure all the signals required for precise sleep stage tracking and can robustly extract advanced sleep markers such as spindles and K-complexes in real-world settings. We validate PhyMask against polysomnography (PSG) and show that it significantly outperforms two commercially available sleep tracking wearables, Fitbit and the Oura Ring.
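As a rough illustration of the kind of marker extraction the abstract mentions, the sketch below detects spindle-like bursts by band-passing a signal in the sigma band (11-16 Hz) and thresholding its analytic envelope. This is a generic textbook approach on synthetic data, not PhyMask's actual pipeline; the sampling rate, filter order, and threshold are assumptions for the demo.

```python
# Illustrative sleep-spindle detector: band-pass in the sigma band,
# take the analytic envelope, and flag supra-threshold bursts.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 256  # assumed sampling rate (Hz)

def detect_spindles(eeg, fs=FS, lo=11.0, hi=16.0, k=3.0):
    """Return a boolean mask marking candidate spindle samples."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    sigma = filtfilt(b, a, eeg)             # sigma-band component
    env = np.abs(hilbert(sigma))            # instantaneous amplitude
    return env > k * np.median(env)         # simple adaptive threshold

# Synthetic trace: low-amplitude noise plus a 13 Hz burst from 2 s to 3 s.
rng = np.random.default_rng(0)
t = np.arange(0, 5, 1 / FS)
eeg = 0.1 * rng.standard_normal(t.size)
burst = (t >= 2) & (t < 3)
eeg[burst] += np.sin(2 * np.pi * 13 * t[burst])

mask = detect_spindles(eeg)
```

On this synthetic trace the mask covers the injected burst and stays mostly off elsewhere; a real detector would add duration and frequency criteria on top of the threshold.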


The last few decades have witnessed an emerging trend of wearable soft sensors; however, important signal-processing challenges still limit their practical deployment. Soft sensors are error-prone when displaced, resulting in significant deviations from their ideal output. In this work, we propose a novel prototype that integrates an elbow pad with a sparse network of soft sensors. Our prototype is fully bio-compatible, stretchable, and wearable. We develop a learning-based method to predict the elbow orientation angle and achieve an average tracking error of 9.82 degrees in single-user, multi-motion experiments. With transfer learning, our method achieves average tracking errors of 10.98 and 11.81 degrees across different motion types and users, respectively. Our core contribution is a solution that realizes robust and stable human joint motion tracking across different device displacements.
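A minimal sketch of the learning-plus-transfer idea: fit a linear map from sensor readings to elbow angle on one user, then refit for a new user from a few labelled samples with a ridge prior pulling toward the source user's weights. Everything here (the linear sensor model, channel count, gains) is synthetic and invented for illustration, not the paper's prototype or method.

```python
# Toy joint-angle regression with prior-based transfer between users.
import numpy as np

rng = np.random.default_rng(1)

def make_user(gain, n=200):
    """Synthetic user: 5 sensor channels responding linearly to the angle."""
    angle = rng.uniform(0, 150, n)                      # degrees
    w = gain * np.array([0.8, -0.3, 0.5, 0.1, -0.6])    # per-channel response
    x = angle[:, None] * w + 0.5 * rng.standard_normal((n, 5))
    return x, angle

def fit(x, y):
    X = np.c_[x, np.ones(len(x))]                       # add bias column
    return np.linalg.lstsq(X, y, rcond=None)[0]

def transfer_fit(x, y, w_prior, lam=5.0):
    """Ridge-style refit whose solution is pulled toward w_prior."""
    X = np.c_[x, np.ones(len(x))]
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y + lam * w_prior)

def predict(w, x):
    return np.c_[x, np.ones(len(x))] @ w

xs, ys = make_user(gain=1.0)        # source user: plenty of labelled data
w_src = fit(xs, ys)

xt, yt = make_user(gain=1.4)        # target user: different sensor gains
w_tgt = transfer_fit(xt[:20], yt[:20], w_src)   # adapt from 20 samples

err = np.mean(np.abs(predict(w_tgt, xt[20:]) - yt[20:]))  # mean abs error, deg
```

When the target user supplies enough data the data term dominates the prior; the prior mainly stabilizes the fit in the very-few-samples regime.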


Mobile headsets should be capable of understanding 3D physical environments to offer a truly immersive experience for augmented/mixed reality (AR/MR). However, their small form factor and limited computation resources make it extremely challenging to execute 3D vision algorithms in real time, as these are known to be more compute-intensive than their 2D counterparts. In this paper, we propose DeepMix, a mobility-aware, lightweight, and hybrid 3D object detection framework for improving the user experience of AR/MR on mobile headsets. Motivated by our analysis and evaluation of state-of-the-art 3D object detection models, DeepMix intelligently combines edge-assisted 2D object detection with novel on-device 3D bounding box estimation that leverages depth data captured by the headset. This leads to low end-to-end latency and significantly boosts detection accuracy in mobile scenarios. A unique feature of DeepMix is that it fully exploits the mobility of headsets to fine-tune detection results and further boost detection accuracy. To the best of our knowledge, DeepMix is the first 3D object detection framework to achieve 30 FPS (i.e., an end-to-end latency well below the stringent 100 ms requirement of interactive AR/MR). We implement a prototype of DeepMix on the Microsoft HoloLens and evaluate its performance via both extensive controlled experiments and a user study with 30+ participants. Compared to a baseline using existing 3D object detection models, DeepMix not only improves detection accuracy by 9.1--37.3% but also reduces end-to-end latency by 2.68--9.15×.
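The hybrid idea can be sketched in a few lines: take a 2D box from an edge-side detector, back-project the depth pixels inside it through a pinhole camera model, and bound the resulting points with an axis-aligned 3D box on the device. The intrinsics, depth map, and 2D box below are all made up for illustration; this is not DeepMix's exact estimator.

```python
# Lift a 2D detection box to a 3D bounding box using headset depth data.
import numpy as np

# Assumed pinhole intrinsics (fx, fy, cx, cy) for a 64x64 depth map.
FX = FY = 50.0
CX = CY = 32.0

def box3d_from_depth(depth, box2d):
    """box2d = (u0, v0, u1, v1) in pixels; returns (min_xyz, max_xyz)."""
    u0, v0, u1, v1 = box2d
    vs, us = np.mgrid[v0:v1, u0:u1]
    z = depth[v0:v1, u0:u1].ravel()
    valid = z > 0                                # ignore missing depth returns
    z = z[valid]
    x = (us.ravel()[valid] - CX) * z / FX        # back-project to camera frame
    y = (vs.ravel()[valid] - CY) * z / FY
    pts = np.stack([x, y, z], axis=1)
    return pts.min(axis=0), pts.max(axis=0)

# Synthetic scene: a flat object patch at 2 m, no depth elsewhere.
depth = np.zeros((64, 64))
depth[20:40, 24:44] = 2.0
lo, hi = box3d_from_depth(depth, (24, 20, 44, 40))
```

Because only the light-weight back-projection runs on the headset while the heavy 2D detector runs at the edge, the on-device cost stays small.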


The market for 3D human posture tracking now extends to a broad range of application scenarios. As the current mainstream solution, vision-based posture tracking systems suffer from privacy concerns and depend on lighting conditions. Toward a more privacy-preserving and robust approach, recent works have exploited commodity radio frequency signals to realize 3D human posture tracking. However, these studies cannot handle the case where multiple users are in the same space. In this paper, we present a mmWave-based multi-user 3D posture tracking system, m3Track, which leverages a single commercial off-the-shelf (COTS) mmWave radar to track multiple users' postures simultaneously as they move, walk, or sit. Given the sensing signals from a mmWave radar in multi-user scenarios, m3Track first separates the users in the mmWave signals. It then extracts shape and motion features for each user and reconstructs each user's 3D posture through a purpose-designed deep learning model. Furthermore, m3Track maps the reconstructed 3D postures of all users into 3D space and tracks users' positions through a coordinate-corrected tracking method, realizing practical multi-user 3D posture tracking with a COTS mmWave radar. Experiments conducted in real-world multi-user scenarios validate the accuracy and robustness of m3Track for multi-user 3D posture tracking.
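A toy version of the first step, separating users before per-user feature extraction, is spatial clustering of radar reflections. The sketch below greedily clusters synthetic 2D points with a distance threshold; the real system operates on raw mmWave signal dimensions (range, angle, Doppler), so treat this purely as illustration.

```python
# Separate two users' radar reflections by distance-threshold clustering.
import numpy as np

def cluster(points, eps=0.5):
    """Greedy single-linkage clustering: returns a label per point."""
    labels = -np.ones(len(points), dtype=int)
    nxt = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = nxt
        stack = [i]
        while stack:                      # flood-fill neighbours within eps
            j = stack.pop()
            d = np.linalg.norm(points - points[j], axis=1)
            for k in np.where((d < eps) & (labels == -1))[0]:
                labels[k] = nxt
                stack.append(k)
        nxt += 1
    return labels

rng = np.random.default_rng(2)
user_a = rng.normal([0, 0], 0.1, (30, 2))    # reflections near user A
user_b = rng.normal([3, 1], 0.1, (30, 2))    # reflections near user B
labels = cluster(np.vstack([user_a, user_b]))
```

With the two synthetic blobs well separated, the function assigns one label per user; downstream per-user processing would then consume each cluster independently.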


Concurrent inference execution on heterogeneous processors is critical to improving the performance of increasingly heavy deep learning (DL) models. However, available inference frameworks can only use one processor at a time, or hardly achieve any speedup from concurrent execution compared to using one processor. This is due to the challenges of 1) reducing data-sharing overhead and 2) properly partitioning each operator between processors. To solve these challenges, we propose CoDL, a concurrent DL inference framework for the CPU and GPU on mobile devices. It can fully utilize the heterogeneous processors to accelerate each operator of a model. It integrates two novel techniques: 1) hybrid-type-friendly data sharing, which allows each processor to use its most efficient data type for inference; to reduce data-sharing overhead, we also propose hybrid-dimension partitioning and operator chain methods; and 2) non-linearity- and concurrency-aware latency prediction, which directs proper operator partitioning by building an extremely lightweight yet accurate latency predictor for each processor. Based on these two techniques, we build the end-to-end CoDL inference framework and evaluate it on different DL models. The results show up to a 4.93× speedup and 62.3% energy saving compared with the state-of-the-art concurrent execution system.
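The role of the latency predictor in partitioning can be shown with a toy search: split an operator's output channels between CPU and GPU and pick the split that minimises the slower side, since both sides run concurrently. The linear latency models below are invented stand-ins, not CoDL's learned predictors.

```python
# Latency-directed partitioning of one operator across CPU and GPU.

def cpu_latency(channels):
    return 0.9 * channels + 2.0      # assumed ms: per-channel cost + overhead

def gpu_latency(channels):
    return 0.3 * channels + 5.0      # faster slope, higher launch overhead

def best_split(total_channels):
    """Return (cpu_channels, concurrent_latency) minimising the max side."""
    best = min(
        range(total_channels + 1),
        key=lambda c: max(cpu_latency(c), gpu_latency(total_channels - c)),
    )
    return best, max(cpu_latency(best), gpu_latency(total_channels - best))

split, t_conc = best_split(64)
t_gpu_only = gpu_latency(64)
```

For these made-up models the optimum lands near the point where both sides finish together, beating either processor running the whole operator alone. A predictor that ignores concurrency effects or non-linear cost curves would pick a worse split, which is why the paper stresses both properties.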


Research has shown that the location of touch screen taps on modern smartphones and tablet computers can be identified from recordings of the device's accelerometer and gyroscope. This security threat implies that an attacker could launch a background process on the mobile device and send the motion sensor readings to a third party for further analysis. Even though location inference is a non-trivial task requiring machine learning algorithms to predict the tap location, previous research was able to show that users' PINs and passwords could be successfully obtained. However, tap location inference had only been demonstrated for taps generated in controlled settings that do not reflect how users naturally engage with their smartphones; this paper bridges that gap. We propose TapSensing, a data acquisition system designed to collect touch screen tap event information with corresponding accelerometer and gyroscope readings. In a data acquisition study with 27 participants and 3 different iPhone models, a total of 25,000 labeled taps were acquired from laboratory and field environments, enabling a direct comparison of both settings. The overall findings show that tap location inference is generally possible for data acquired in the field, albeit with a performance reduction of approximately 20% compared to the laboratory setting. As tap inference has now been shown on a more realistic data set, this work demonstrates that smartphone motion sensors could potentially be used to compromise user privacy in any environment in which users interact with their devices.
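To make the attack surface concrete, the sketch below classifies which screen region was tapped from a small motion-derived feature vector. A nearest-centroid classifier on fabricated features stands in for the paper's machine learning models; the regions, feature values, and noise level are all assumptions for the demo.

```python
# Toy tap-location inference from motion-sensor features.
import numpy as np

rng = np.random.default_rng(3)

# Pretend feature vectors (e.g. peak accel x/y, gyro energy) per tap region.
CENTROIDS = {
    "top-left": np.array([1.0, -1.0, 0.2]),
    "top-right": np.array([-1.0, -1.0, 0.2]),
    "bottom": np.array([0.0, 1.0, 0.5]),
}

def classify(feat):
    """Assign the region whose centroid is nearest to the feature vector."""
    return min(CENTROIDS, key=lambda r: np.linalg.norm(feat - CENTROIDS[r]))

# Simulated field taps: centroid plus noise (field data is noisier than lab).
n = 300
correct = 0
for _ in range(n):
    region = rng.choice(list(CENTROIDS))
    feat = CENTROIDS[region] + 0.4 * rng.standard_normal(3)
    correct += classify(feat) == region
accuracy = correct / n
```

Raising the noise scale models the lab-to-field degradation the paper quantifies: the same classifier loses accuracy as tap-induced motion gets buried in everyday handling.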


https://doi.org/10.1145/3498361.3538931

In this work, we propose FabToy, a plush toy instrumented with a 24-sensor array of fabric-based pressure sensors located beneath the surface of the toy to have dense spatial sensing coverage while maintaining the natural feel of fabric and softness of the toy. We optimize both the hardware and software pipeline to reduce overall power consumption while achieving high accuracy in detecting a wide range of interactions at different regions of the toy. Our contributions include a) sensor array fabrication to maximize coverage and dynamic range, b) data acquisition and triggering methods to minimize the cost of sampling a large number of channels, and c) neural network models with early exit to optimize power consumed for computation when processing locally and autoencoder-based channel aggregation to optimize power consumed for communication when processing remotely. We demonstrate that we can achieve high accuracy of more than 83% for robustly detecting and localizing complex human interactions such as swiping, patting, holding, and tickling in different regions of the toy.
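The early-exit idea can be sketched as a tiny two-stage classifier that stops at stage one whenever its softmax confidence clears a threshold, spending the second stage's compute only on hard inputs. The weights, input sizes, and threshold below are random stand-ins, not FabToy's trained model.

```python
# Early-exit classification: skip the second stage on confident inputs.
import numpy as np

rng = np.random.default_rng(4)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

W1 = rng.standard_normal((24, 4))   # stage 1: 24 sensor channels -> 4 classes
W2 = rng.standard_normal((24, 4))   # stage 2: refinement head

def classify(x, tau=0.9):
    """Return (predicted class, number of stages executed)."""
    p1 = softmax(x @ W1)
    if p1.max() >= tau:              # confident: exit early, skip stage 2
        return int(p1.argmax()), 1
    p2 = softmax(x @ W1 + x @ W2)    # combine both heads on hard inputs
    return int(p2.argmax()), 2

hard = np.zeros(24)                  # ambiguous reading: both stages run
_, stages_hard = classify(hard)
easy = 20 * W1[:, 0]                 # strongly matches class 0's weights
label, stages_easy = classify(easy)
```

Averaged over a workload, the fraction of early exits directly translates into compute (and hence power) saved, which is the trade the abstract describes for on-device processing.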
