We present a novel approach to monitor the Body in White (BiW), the fundamental metallic structure of a vehicle. Existing monitoring methods, including both wired and wireless sensor systems, face significant challenges due to integration complexity, weight considerations, material costs, and signal blockage within the metallic environment. To overcome these limitations, we introduce ArachNet, an acoustic backscatter network that leverages the conductive properties of the BiW to propagate vibration signals for energy transfer and data communication. This system comprises battery-free tags that harvest energy from BiW vibrations and utilize a backscatter technique for efficient communication, thereby eliminating the need for external power sources and reducing power consumption. We address key challenges such as power sufficiency for tag activation and sustained operation, and collision reduction in network communication, by designing an ultra-low power backscatter tag and a distributed slot allocation protocol. We implement ArachNet and deploy 12 tags on the BiW of an electric SUV. The evaluation results show that the power consumption of the tag is 51.0 μW for uplink packet transmission and 24.8 μW for downlink packet reception. With our network protocol, the slot utilization can reach up to 81.2%.
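To give a concrete sense of the distributed slot allocation idea described above, the following Python sketch simulates tags claiming slots in a shared frame and re-drawing only when the gateway reports a collision. It is a toy illustration under assumed parameters (frame length, retry policy), not ArachNet's actual protocol.

```python
import random

# Toy simulation of distributed, collision-avoiding slot selection (illustrative only;
# not ArachNet's actual protocol). Each tag claims a random slot; the gateway reports
# which slots collided, and only the colliding tags re-draw from slots nobody holds alone.
NUM_TAGS = 12        # deployment size mentioned in the abstract
FRAME_SLOTS = 16     # hypothetical frame length

random.seed(0)
slots = {tag: random.randrange(FRAME_SLOTS) for tag in range(NUM_TAGS)}

for _ in range(20):
    occupancy = {}
    for tag, slot in slots.items():
        occupancy.setdefault(slot, []).append(tag)
    collided = [t for owners in occupancy.values() if len(owners) > 1 for t in owners]
    if not collided:
        break
    candidates = [s for s in range(FRAME_SLOTS) if len(occupancy.get(s, [])) != 1]
    for tag in collided:                      # only colliding tags pick again
        slots[tag] = random.choice(candidates)

used = set(slots.values())
print(f"resolved slots: {sorted(used)}, utilization = {len(used) / FRAME_SLOTS:.1%}")
```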
@inproceedings{Sigcomm_acoustic,title={Acoustic Backscatter Network for Vehicle Body-in-White},author={Wang, Weiguo and He, Yuan and Xie, Yadong and Xie, Chuyue and Kai, Yi and Hu, Chengchen},booktitle={2025 ACM Special Interest Group on Data Communication (SIGCOMM)},year={2025},}
WASA
MASS: Empowering Wi-Fi Human Sensing with Metasurface-Assisted Sample Synthesis
Jiaming Gu, Shaonan Chen, Yimiao Sun, Yadong Xie, Rui Xi, Qiang Cheng, and Yuan He
In Wireless Artificial Intelligent Computing Systems and Applications (WASA), 2025
Wi-Fi human sensing has attracted numerous research studies over the past decade. The rapid advancement of machine learning technology further boosts the development of Wi-Fi human sensing. However, current Wi-Fi human sensing suffers from the “data scarcity” problem: all the existing proposals require collecting large human-based datasets to train the sensing models, which is labor-intensive and may raise ethical concerns in certain scenarios. This obstacle seriously restricts the size, quality, and diversity of available datasets, thereby affecting the sensing performance in terms of accuracy and cross-domain applicability. To solve this problem, we propose Metasurface-Assisted Sample Synthesis (MASS), a novel approach to synthesize high-fidelity Wi-Fi sensing samples that effectively capture both the essential features of human motion and environment-specific multipath characteristics without requiring human involvement. The evaluation results show that MASS is effective in boosting machine learning performance, improving classification accuracy by 18% and enhancing cross-domain sensing accuracy by 22%. These findings underscore the potential of MASS to facilitate the creation of high-quality, diverse datasets with minimal human involvement and associated labor costs.
@inproceedings{10.1007/978-981-96-8728-2_32,author={Gu, Jiaming and Chen, Shaonan and Sun, Yimiao and Xie, Yadong and Xi, Rui and Cheng, Qiang and He, Yuan},title={MASS: Empowering Wi-Fi Human Sensing with Metasurface-Assisted Sample Synthesis},booktitle={Wireless Artificial Intelligent Computing Systems and Applications (WASA)},year={2025},pages={394--405},doi={https://doi.org/10.1007/978-981-96-8728-2_32},}
2024
ICPADS
mmJaw: Remote Jaw Gesture Recognition with COTS mmWave Radar
Awais Ahmad Siddiqi, Yuan He, Yande Chen, Yimiao Sun, Shufan Wang, and Yadong Xie
In 2024 IEEE 30th International Conference on Parallel and Distributed Systems (ICPADS), 2024
With the increasing prevalence of IoT devices and smart systems in daily life, there is a growing demand for new modalities in Human-Computer Interaction (HCI) to improve accessibility, particularly for users who require hands-free and eyes-free interaction in contexts like VR environments, as well as for individuals with special needs or limited mobility. In this paper, we propose teeth gestures as an input modality for HCI. We find that teeth gestures, such as tapping, clenching, and sliding, are generated by various facial muscle movements that are often imperceptible to the naked eye but can be effectively captured using mm-wave radar. By capturing and analyzing the distinct patterns of these muscle movements, we propose a hands-free and eyes-free HCI solution based on three different gestures. Key challenges addressed in this paper include user range identification amidst background noise and other irrelevant facial movements. Results from 16 volunteers demonstrate the robustness of our approach, achieving 93% accuracy at ranges of up to 2.5 m.
@inproceedings{10763597,author={Siddiqi, Awais Ahmad and He, Yuan and Chen, Yande and Sun, Yimao and Wang, Shufan and Xie, Yadong},booktitle={2024 IEEE 30th International Conference on Parallel and Distributed Systems (ICPADS)},title={mmJaw: Remote Jaw Gesture Recognition with COTS mmWave Radar},year={2024},pages={52-59},keywords={Human computer interaction;Accuracy;Radar detection;Radar;Muscles;Smart systems;Robustness;Sensors;Internet of Things;Millimeter wave communication;mmWave;Sensing;Human-Computer Interface;Teeth Gestures},doi={10.1109/ICPADS63350.2024.00017},}
ICPADS
mmHRR: Monitoring Heart Rate Recovery with Millimeter Wave Radar
Ziheng Mao, Yuan He, Jia Zhang, Yimiao Sun, Yadong Xie, and Xiuzhen Guo
In 2024 IEEE 30th International Conference on Parallel and Distributed Systems (ICPADS), 2024
Heart rate recovery (HRR) within the initial minute following exercise is a widely utilized metric for assessing cardiac autonomic function in individuals and predicting mortality risk in patients with cardiovascular disease. However, prevailing solutions for HRR monitoring typically involve the use of specialized medical equipment or contact wearable sensors, resulting in high costs and poor user experience. In this paper, we propose a contactless HRR monitoring technique, mmHRR, which achieves accurate heart rate (HR) estimation with a commercial mmWave radar. Unlike HR estimation at rest, the HR varies quickly after exercise and the heartbeat signal entangles with the respiration harmonics. To overcome these hurdles and effectively estimate the HR from the weak and non-stationary heartbeat signal, we propose a novel signal processing pipeline, including dynamic target tracking, adaptive heartbeat signal extraction, and accurate HR estimation with composite sliding windows. Real-world experiments demonstrate that mmHRR exhibits exceptional robustness across diverse environmental conditions, and achieves an average HR estimation error of 3.31 bpm (beats per minute), 71% lower than that of the state-of-the-art method.
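As a rough illustration of sliding-window heart-rate estimation from an already-extracted heartbeat waveform, the sketch below picks the dominant spectral peak in a plausible HR band for each window. The sampling rate, window sizes, and synthetic signal are assumptions; the paper's composite-sliding-window pipeline is more sophisticated.

```python
import numpy as np

# Minimal sketch of sliding-window HR estimation from a heartbeat waveform
# (illustrative; not mmHRR's full pipeline). Signal and parameters are synthetic.
FS = 50.0                      # hypothetical sampling rate of the radar phase signal, Hz
t = np.arange(0, 60, 1 / FS)   # one minute of data
hr_true = 120 - 0.5 * t        # HR decaying from 120 bpm during recovery
phase = 0.02 * np.sin(2 * np.pi * np.cumsum(hr_true / 60) / FS)
phase += 0.005 * np.random.randn(t.size)            # measurement noise

def estimate_hr(segment, fs):
    """Return the dominant frequency in the 0.8-3 Hz band, in beats per minute."""
    spectrum = np.abs(np.fft.rfft(segment * np.hanning(segment.size)))
    freqs = np.fft.rfftfreq(segment.size, 1 / fs)
    band = (freqs >= 0.8) & (freqs <= 3.0)           # 48-180 bpm
    return 60 * freqs[band][np.argmax(spectrum[band])]

win, hop = int(10 * FS), int(10 * FS)                # 10 s windows
for start in range(0, phase.size - win, hop):
    hr = estimate_hr(phase[start:start + win], FS)
    print(f"t = {start / FS:4.0f} s  ->  HR ~ {hr:5.1f} bpm")
```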
@inproceedings{10763548,author={Mao, Ziheng and He, Yuan and Zhang, Jia and Sun, Yimiao and Xie, Yadong and Guo, Xiuzhen},booktitle={2024 IEEE 30th International Conference on Parallel and Distributed Systems (ICPADS)},title={mmHRR: Monitoring Heart Rate Recovery with Millimeter Wave Radar},year={2024},pages={1-8},keywords={Accuracy;Target tracking;Heart beat;Signal processing algorithms;Millimeter wave radar;Harmonic analysis;User experience;Millimeter wave communication;Monitoring;Wearable sensors;millimeter wave radar;heart rate recovery monitoring;mmWave sensing;wireless sensing},doi={10.1109/ICPADS63350.2024.00011},}
INFOCOM
HearBP: Hear Your Blood Pressure via In-ear Acoustic Sensing Based on Heart Sounds
Zhiyuan Zhao, Fan Li, Yadong Xie, Huanran Xie, Kerui Zhang, Li Zhang, and Yu Wang
In IEEE INFOCOM 2024 - IEEE Conference on Computer Communications, 2024
Continuous blood pressure (BP) monitoring using wearable devices has received increasing attention due to its importance in diagnosing diseases. However, existing methods mainly measure BP intermittently, involve some form of user effort, and suffer from insufficient accuracy due to sensor properties. In order to overcome these limitations, we study BP measurement technology based on heart sounds, and find that the time interval between the first and second heart sounds (TIFS) of bone-conducted heart sounds collected in the binaural canal is closely related to BP. Motivated by this, we propose HearBP, a novel BP monitoring system that utilizes in-ear microphones to collect bone-conducted heart sounds in the binaural canal. We first design a noise removal method based on a U-net autoencoder-decoder to separate clean heart sounds from background noises. Then, we design a feature extraction method based on Shannon energy and the energy-entropy ratio to further mine the time-domain and frequency-domain features of heart sounds. In addition, combined with the principal component analysis algorithm, we achieve feature dimension reduction to extract the main features related to BP. Finally, we propose a network model based on dendritic neural regression to construct a mapping between the extracted features and BP. Extensive experiments with 41 participants show average estimation errors of 0.97 mmHg and 1.61 mmHg and standard deviation errors of 3.13 mmHg and 3.56 mmHg for diastolic pressure and systolic pressure, respectively. These errors are within the acceptable range specified by the FDA’s AAMI protocol.
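For reference, the Shannon-energy envelope mentioned above can be computed as follows; the frame sizes and the synthetic heart-sound input are hypothetical, and the paper additionally uses an energy-entropy ratio, PCA, and dendritic neural regression, which are not shown.

```python
import numpy as np

# Minimal sketch of the Shannon-energy envelope often used to emphasize heart-sound
# onsets (illustrative; parameters and the toy input are assumptions, not HearBP's).
def shannon_energy_envelope(x, frame=256, hop=128):
    x = x / (np.max(np.abs(x)) + 1e-12)          # normalize to [-1, 1]
    se = -(x ** 2) * np.log(x ** 2 + 1e-12)      # per-sample Shannon energy
    # average over short frames to obtain a smooth envelope
    return np.array([se[i:i + frame].mean() for i in range(0, len(se) - frame, hop)])

fs = 4000                                        # hypothetical in-ear mic sampling rate
t = np.arange(0, 2, 1 / fs)
heart = np.exp(-((t % 0.8) / 0.03) ** 2) * np.sin(2 * np.pi * 60 * t)  # toy S1-like bursts
envelope = shannon_energy_envelope(heart)
print("envelope peaks near frames:", np.argsort(envelope)[-3:])
```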
@inproceedings{10621249,author={Zhao, Zhiyuan and Li, Fan and Xie, Yadong and Xie, Huanran and Zhang, Kerui and Zhang, Li and Wang, Yu},booktitle={IEEE INFOCOM 2024 - IEEE Conference on Computer Communications},title={HearBP: Hear Your Blood Pressure via In-ear Acoustic Sensing Based on Heart Sounds},year={2024},pages={991-1000},keywords={Heart;Irrigation;Estimation error;Protocols;Frequency-domain analysis;Feature extraction;Time-domain analysis},doi={10.1109/INFOCOM52122.2024.10621249},}
TDSC
User Authentication on Earable Devices Via Bone-Conducted Occlusion Sounds
Yadong Xie, Fan Li, Yue Wu, and Yu Wang
IEEE Transactions on Dependable and Secure Computing (TDSC), 2024
With the rapid development of mobile devices and the fast increase of sensitive data, secure and convenient mobile authentication technologies are desired. Apart from traditional passwords, many mobile devices have biometric-based authentication methods (e.g., fingerprint, voiceprint, and face recognition), but they are vulnerable to spoofing attacks. To solve this problem, we study new biometric features based on the dental occlusion and find that the bone-conducted sound of dental occlusion collected in binaural canals contains unique features of individual bones and teeth. Motivated by this, we propose a novel authentication system, TeethPass+, which uses earbuds to collect occlusal sounds in binaural canals to achieve authentication. First, we design an event detection method based on spectrum variance to detect bone-conducted sounds. Then, we analyze the time-frequency domain of the sounds to filter out motion noises and extract unique features of users from four aspects: teeth structure, bone structure, occlusal location, and occlusal sound. Finally, we train a Triplet network to construct the user template, which is used to complete authentication. Through extensive experiments with 53 volunteers, the performance of TeethPass+ in different environments is verified. TeethPass+ achieves an accuracy of 98.6% and resists 99.7% of spoofing attacks.
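A minimal sketch of spectrum-variance-based event detection is shown below: frames whose variance across frequency bins far exceeds the quiet-frame baseline are flagged as candidate occlusal events. The STFT parameters and threshold are assumptions, not the values used in TeethPass+.

```python
import numpy as np
from scipy.signal import stft

# Minimal sketch of spectrum-variance event detection (illustrative; thresholds and
# parameters are hypothetical). Frames containing a bone-conducted occlusal sound show
# much higher variance across frequency bins than quiet frames.
def detect_events(audio, fs, threshold=5.0):
    f, frame_times, Z = stft(audio, fs=fs, nperseg=512, noverlap=256)
    spec_var = np.var(np.abs(Z), axis=0)           # variance over frequency, per frame
    baseline = np.median(spec_var) + 1e-12
    return frame_times[spec_var > threshold * baseline]   # timestamps of candidate events

fs = 8000
rng = np.random.default_rng(0)
audio = 0.01 * rng.standard_normal(fs * 3)         # 3 s of background noise
audio[fs:fs + 400] += np.hanning(400) * np.sin(2 * np.pi * 900 * np.arange(400) / fs)
print("candidate event times (s):", detect_events(audio, fs))
```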
@article{10330729,author={Xie, Yadong and Li, Fan and Wu, Yue and Wang, Yu},journal={IEEE Transactions on Dependable and Secure Computing (TDSC)},title={User Authentication on Earable Devices Via Bone-Conducted Occlusion Sounds},year={2024},volume={21},number={4},pages={3704-3718},doi={10.1109/TDSC.2023.3335368},}
TMC
FingerSlid: Towards Finger-sliding Continuous Authentication on Smart Devices via Vibration
Yadong Xie, Fan Li, and Yu Wang
Nowadays, mobile smart devices are widely used in daily life. It is increasingly important to prevent malicious users from accessing private data, so a secure and convenient authentication method is urgently needed. Compared with common one-off authentication (e.g., password, face recognition, and fingerprint), continuous authentication can provide constant privacy protection. However, most studies are based on behavioral features and vulnerable to spoofing attacks. To solve this problem, we study the unique influence of sliding fingers on active vibration signals, and further propose an authentication system, FingerSlid, which uses vibration motors and accelerometers in mobile devices to sense biometric features of sliding fingers and achieve behavior-independent continuous authentication. First, we design two kinds of active vibration signals and propose a novel signal generation mechanism to improve the anti-attack ability of FingerSlid. Then, we extract different biometric features from the two kinds of received signals, and eliminate the influence of behavioral features in the biometric features using a carefully designed Triplet network. Finally, user authentication is performed using the generated behavior-independent biometric features. FingerSlid is evaluated through a large number of experiments in different scenarios, and it achieves an average accuracy of 95.4% and can resist 99.5% of attacks.
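The sketch below illustrates the general idea of training an embedding with a triplet loss so that samples from the same user cluster together regardless of sliding behavior. The network size, features, and training loop are placeholders, not FingerSlid's actual design.

```python
import torch
import torch.nn as nn

# Minimal sketch of learning an embedding with a triplet loss (illustrative; the real
# FingerSlid network and its vibration features are more elaborate and not shown here).
embedder = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 32))
loss_fn = nn.TripletMarginLoss(margin=1.0)
optimizer = torch.optim.Adam(embedder.parameters(), lr=1e-3)

for step in range(100):
    # anchor/positive: same user, different sliding behavior; negative: another user.
    anchor = torch.randn(16, 128)                 # placeholder vibration features
    positive = anchor + 0.1 * torch.randn(16, 128)
    negative = torch.randn(16, 128)
    loss = loss_fn(embedder(anchor), embedder(positive), embedder(negative))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print("final triplet loss:", float(loss))
```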
@article{10251599,author={Xie, Yadong and Li, Fan and Wang, Yu},journal={IEEE Transactions on Mobile Computing (TMC)},title={FingerSlid: Towards Finger-sliding Continuous Authentication on Smart Devices via Vibration},year={2024},volume={23},number={5},pages={6045-6059},doi={10.1109/TMC.2023.3315291},}
2023
TMC
HearFit+: Personalized Fitness Monitoring via Audio Signals on Smart Speakers
Yadong Xie, Fan Li, Yue Wu, and Yu Wang
Fitness can help to strengthen muscles, increase resistance to diseases, and improve body shape. Nowadays, a great number of people choose to exercise at home/office rather than at the gym due to lack of time. However, it is difficult for them to get good fitness effects without professional guidance. Motivated by this, we propose the first personalized fitness monitoring system, HearFit+, using smart speakers at home/office. We explore the feasibility of using acoustic sensing to monitor fitness. We design a fitness detection method based on Doppler shift and adopt the short-time energy to segment fitness actions. Based on deep learning, HearFit+ can perform fitness classification and user identification at the same time. Combined with incremental learning, users can easily add new actions. We design four evaluation metrics (i.e., duration, intensity, continuity, and smoothness) to help users improve fitness effects. Through extensive experiments including over 9,000 actions of 10 types of fitness from 12 volunteers, HearFit+ achieves an average accuracy of 96.13% on fitness classification and 91% accuracy for user identification. All volunteers confirm that HearFit+ can help improve the fitness effect in various environments.
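Below is a minimal sketch of short-time-energy segmentation of a motion signal, the kind of step used to isolate individual fitness actions. The window lengths, threshold, and synthetic input are assumptions rather than HearFit+'s parameters.

```python
import numpy as np

# Minimal sketch of short-time-energy segmentation of a Doppler-derived motion signal
# (illustrative; window sizes and the threshold are hypothetical).
def segment_actions(signal, frame=200, hop=100, k=3.0):
    energy = np.array([np.sum(signal[i:i + frame] ** 2)
                       for i in range(0, len(signal) - frame, hop)])
    active = energy > k * np.median(energy)        # frames that likely contain an action
    # merge consecutive active frames into (start_frame, end_frame) segments
    segments, start = [], None
    for i, a in enumerate(active):
        if a and start is None:
            start = i
        elif not a and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments

motion = np.concatenate([0.01 * np.random.randn(1000),
                         np.sin(np.linspace(0, 20 * np.pi, 1500)),   # a "rep"
                         0.01 * np.random.randn(1000)])
print("detected action segments (frame indices):", segment_actions(motion))
```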
@article{9606582,author={Xie, Yadong and Li, Fan and Wu, Yue and Wang, Yu},journal={IEEE Transactions on Mobile Computing (TMC)},title={HearFit+: Personalized Fitness Monitoring via Audio Signals on Smart Speakers},year={2023},volume={22},number={5},pages={2756--2770},doi={10.1109/TMC.2021.3125684},}
INFOCOM
WakeUp: Fine-Grained Fatigue Detection Based on Multi-Information Fusion on Smart Speakers
Zhiyuan Zhao, Fan Li, Yadong Xie, and Yu Wang
In IEEE INFOCOM 2023 - IEEE Conference on Computer Communications, 2023
With the development of society and the gradual increase of life pressure, more people are engaged in mental work for longer working hours, leaving more and more of them in a state of fatigue. Fatigue not only reduces work efficiency but also causes health- and safety-related problems. Existing fatigue detection systems either have different shortcomings in diverse scenarios or are limited by proprietary equipment, which makes them difficult to apply in real life. Motivated by this, we propose a multi-information fatigue detection system named WakeUp based on commercial smart speakers, which is the first to fuse physiological and behavioral information for fine-grained fatigue detection in a non-contact manner. We carefully design a method to simultaneously extract users’ physiological and behavioral information based on the MobileViT network and the VMD decomposition algorithm, respectively. Then, we design a multi-information fusion method based on the statistical features of these two kinds of information. In addition, we adopt an SVM classifier to determine fine-grained fatigue levels. Extensive experiments with 20 volunteers show that WakeUp can detect fatigue with an accuracy of 97.28%. Meanwhile, WakeUp maintains stability and robustness under different experimental settings.
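As an illustration of feature-level fusion followed by SVM classification, the sketch below concatenates two sets of statistical features and trains an SVM on them. The features here are random stand-ins, not the MobileViT/VMD outputs used by WakeUp.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Minimal sketch of fusing physiological and behavioral statistics and classifying
# fatigue levels with an SVM (illustrative; all inputs are placeholders).
rng = np.random.default_rng(0)
n = 200
physio_stats = rng.standard_normal((n, 8))        # e.g., respiration/heartbeat statistics
behavior_stats = rng.standard_normal((n, 6))      # e.g., head/body movement statistics
X = np.hstack([physio_stats, behavior_stats])     # simple feature-level fusion
y = rng.integers(0, 3, size=n)                    # three hypothetical fatigue levels

clf = SVC(kernel="rbf", C=1.0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```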
@inproceedings{10229021,author={Zhao, Zhiyuan and Li, Fan and Xie, Yadong and Wang, Yu},booktitle={IEEE INFOCOM 2023 - IEEE Conference on Computer Communications},title={WakeUp: Fine-Grained Fatigue Detection Based on Multi-Information Fusion on Smart Speakers},year={2023},pages={1--10},doi={10.1109/INFOCOM53939.2023.10229021}}
IOTJ
HearASL: Your Smartphone Can Hear American Sign Language
Yusen Wang, Fan Li, Yadong Xie, Chunhui Duan, and Yu Wang
Sign language, which is mainly used by the deaf community, is expressed through hand movements and facial expressions. Although some gesture recognition methods have been put forward, they have various defects and are not applicable to the sign language recognition (SLR) problem. In this article, we propose an end-to-end American SLR system with built-in speakers and microphones in smartphones, which enables SLR at both word level and sentence level. The high-level idea is to use the inaudible acoustic signal to estimate channel information and capture the sign language in real time. We use the channel impulse response to represent each sign language gesture, which enables finger-level recognition. We also pay attention to the conversion movements between two words and treat them as an additional label when training the sentence-level classification model. We implement a prototype system and run a series of experiments that demonstrate the promising performance of our system. Experimental results show that our approach can achieve an accuracy of 97.2% at word-level recognition and a word error rate of 0.9% at sentence-level recognition.
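A rough sketch of estimating channel information by cross-correlating the received audio with a known inaudible probe is shown below; the probe design, delays, and reflection model are hypothetical and much simpler than HearASL's CIR-based pipeline.

```python
import numpy as np
from scipy.signal import correlate

# Minimal sketch of estimating a channel impulse response by cross-correlating the
# received audio with the known probe signal (illustrative; probe and reflection
# model are assumptions, not HearASL's design).
fs = 48000
probe = np.sign(np.random.default_rng(1).standard_normal(480))   # 10 ms pseudo-noise probe

# Simulate a direct path plus a delayed, attenuated reflection from the signing hand.
received = np.zeros(4800)
received[100:100 + probe.size] += probe
received[160:160 + probe.size] += 0.3 * probe
received += 0.05 * np.random.default_rng(2).standard_normal(received.size)

cir = correlate(received, probe, mode="valid") / probe.size
taps = np.argsort(np.abs(cir))[-2:]               # two strongest taps
print("strongest CIR taps at sample offsets:", sorted(taps))
```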
@article{9999544,author={Wang, Yusen and Li, Fan and Xie, Yadong and Duan, Chunhui and Wang, Yu},journal={IEEE Internet of Things Journal (IOTJ)},title={HearASL: Your Smartphone Can Hear American Sign Language},year={2023},volume={10},number={10},pages={8839--8852},doi={10.1109/JIOT.2022.3232337}}
TMC
BSMonitor: Noise-Resistant Bowel Sound Monitoring Via Earphones
Zhiyuan Zhao, Fan Li, Yadong Xie, Yue Wu, and Yu Wang
Bowel sound (BS) is an important physiological signal of the human body and an objective reflection of gastrointestinal motility. However, BS is characterized by weak signals, strong noise, and randomness, which brings great challenges to the daily detection of BS. In this paper, we propose BSMonitor, the first BS monitoring system with strong noise-resistant capability via earphones. BSMonitor uses one earphone attached to the abdomen to collect BS signals and the other earphone worn in the ear to collect external and internal noises. After eliminating the noises through a Kalman filter and a band-pass filter, the signal containing BS is separated via empirical mode decomposition. Then BSMonitor extracts MFCC features of the BS signals and applies a carefully designed LSTM network to perform highly accurate BS detection. Finally, an alert mechanism calculates the frequency and duration of detected BS and compares them with the normal values to alert users. Furthermore, to increase the amount and diversity of training data, we introduce a data augmentation method, which further improves the accuracy and generalization of BSMonitor. Through extensive experiments with 18 volunteers, we find that BSMonitor not only achieves high accuracy of BS detection but also generalizes well across different users and environments. In particular, BSMonitor achieves accuracy up to 98.73% and 94.56% in the benchmark experiments and the cross experiments, respectively.
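The sketch below shows the generic shape of an MFCC-plus-LSTM detector like the one described above; the layer sizes, sampling rate, and random input are placeholders rather than BSMonitor's actual configuration.

```python
import numpy as np
import librosa
import torch
import torch.nn as nn

# Minimal sketch of MFCC extraction followed by an LSTM classifier for bowel-sound
# frames (illustrative; all dimensions and the input are placeholders).
fs = 8000
audio = np.random.randn(fs * 2).astype(np.float32)        # stand-in for a denoised segment
mfcc = librosa.feature.mfcc(y=audio, sr=fs, n_mfcc=13)    # shape: (13, n_frames)
features = torch.tensor(mfcc.T, dtype=torch.float32).unsqueeze(0)   # (batch=1, time, 13)

lstm = nn.LSTM(input_size=13, hidden_size=32, batch_first=True)
head = nn.Linear(32, 2)                                    # bowel sound vs. not
out, _ = lstm(features)
logits = head(out[:, -1, :])                               # classify using the last step
print("class probabilities:", torch.softmax(logits, dim=-1).detach().numpy())
```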
@article{10109814,author={Zhao, Zhiyuan and Li, Fan and Xie, Yadong and Wu, Yue and Wang, Yu},journal={IEEE Transactions on Mobile Computing (TMC)},title={BSMonitor: Noise-Resistant Bowel Sound Monitoring Via Earphones},year={2023},pages={1--15},doi={10.1109/TMC.2023.3270926}}
INFOCOM
FlyTracker: Motion Tracking and Obstacle Detection for Drones Using Event Cameras
Yue Wu, Jingao Xu, Danyang Li, Yadong Xie, Hao Cao, Fan Li, and Zheng Yang
In IEEE INFOCOM 2023 - IEEE Conference on Computer Communications, 2023
Location awareness is one of the key capabilities for drones’ applications and has been explored through various visual sensors. However, standard cameras easily suffer from motion blur at high moving speeds and low image quality under poor illumination, which brings challenges for drones to perform motion tracking. Recently, a kind of bio-inspired sensor called the event camera has emerged, offering advantages like high temporal resolution, high dynamic range, and low latency, which motivates us to explore its potential for motion tracking in such limited scenarios. In this paper, we propose FlyTracker, which aims to develop visual sensing abilities for drones, covering both their own motion and the surrounding location-relevant context, by using a monocular event camera. In FlyTracker, a background-subtraction-based method is proposed to distinguish moving objects from the background, and fusion-based photometric features are carefully designed to obtain motion information. Through multi-level fusion of events and images, which are heterogeneous visual data, FlyTracker can effectively and reliably track the 6-DoF pose of the drone as well as monitor the relative positions of moving obstacles. We evaluate the performance of FlyTracker in different environments and the results show that FlyTracker is more accurate than the state-of-the-art baselines.
@inproceedings{10228976,author={Wu, Yue and Xu, Jingao and Li, Danyang and Xie, Yadong and Cao, Hao and Li, Fan and Yang, Zheng},booktitle={IEEE INFOCOM 2023 - IEEE Conference on Computer Communications},title={FlyTracker: Motion Tracking and Obstacle Detection for Drones Using Event Cameras},year={2023},pages={1--10},doi={10.1109/INFOCOM53939.2023.10228976}}
TOSN
SymListener: Detecting Respiratory Symptoms via Acoustic Sensing in Driving Environments
Yue Wu, Fan Li, Yadong Xie, Yu Wang, and Zheng Yang
Sound-related respiratory symptoms are commonly observed in our daily lives. They are closely related to illnesses, infections, or allergies but are ignored by the majority. Existing detection methods either depend on specific devices, which are inconvenient to wear, or are sensitive to noises and only work in indoor environments. Considering the lack of monitoring methods for the in-car environment, where there is a high risk of spreading infectious diseases, we propose a smartphone-based system, named SymListener, to detect respiratory symptoms in driving environments. By continuously recording acoustic data through a built-in microphone, SymListener can detect the sounds of cough, sneeze, and sniffle. We design a modified ABSE-based method to remove the strong and changeable driving noises while saving smartphone energy. An LSTM network is adopted to classify the three types of symptoms according to the carefully designed acoustic features. We implement SymListener on different Android devices and evaluate its performance in real driving environments. The evaluation results show that SymListener can reliably detect the target respiratory symptoms with an average accuracy of 92.19% and an average precision of 90.91%.
@article{10.1145/3517014,author={Wu, Yue and Li, Fan and Xie, Yadong and Wang, Yu and Yang, Zheng},title={SymListener: Detecting Respiratory Symptoms via Acoustic Sensing in Driving Environments},year={2023},volume={19},number={1},doi={10.1145/3517014},journal={ACM Transactions on Sensor Networks (TOSN)},numpages={1--21}}
2022
TMC
HearSmoking: Smoking Detection in Driving Environment via Acoustic Sensing on Smartphones
Yadong Xie, Fan Li, Yue Wu, Song Yang, and Yu Wang
Driving safety has drawn much public attention in recent years due to the fast-growing number of cars. Smoking is one of the threats to driving safety but is often ignored by drivers. Existing works on smoking detection either work in a contact manner or need additional devices. This motivates us to explore the practicability of using smartphones to detect smoking events in driving environments. In this paper, we propose a cigarette smoking detection system, named HearSmoking, which only uses acoustic sensors on smartphones to improve driving safety. After investigating typical smoking habits of drivers, including hand movement and chest fluctuation, we design an acoustic signal to be emitted by the speaker and received by the microphone. We calculate the Relative Correlation Coefficient of received signals to obtain movement patterns of hands and chest. The processed data is fed into a trained Convolutional Neural Network for classification of hand movement. We also design a method to detect respiration at the same time. To improve system performance, we further analyze the periodicity of the composite smoking motion. Through extensive experiments in real driving environments, HearSmoking detects smoking events with an average total accuracy of 93.44% in real time.
@article{9312460,author={Xie, Yadong and Li, Fan and Wu, Yue and Yang, Song and Wang, Yu},journal={IEEE Transactions on Mobile Computing (TMC)},title={HearSmoking: Smoking Detection in Driving Environment via Acoustic Sensing on Smartphones},year={2022},volume={21},number={8},pages={2847--2860},doi={10.1109/TMC.2020.3048785},}
INFOCOM
TeethPass: Dental Occlusion-based User Authentication via In-ear Acoustic Sensing
Yadong Xie, Fan Li, Yue Wu, Huijie Chen, Zhiyuan Zhao, and Yu Wang
In IEEE INFOCOM 2022 - IEEE Conference on Computer Communications, 2022
With the rapid development of mobile devices and the fast increase of sensitive data, secure and convenient mobile authentication technologies are desired. Apart from traditional passwords, many mobile devices have biometric-based authentication methods (e.g., fingerprint, voiceprint, and face recognition), but they are vulnerable to spoofing attacks. To solve this problem, we study new biometric features based on the dental occlusion and find that the bone-conducted sound of dental occlusion collected in binaural canals contains unique features of individual bones and teeth. Motivated by this, we propose a novel authentication system, TeethPass, which uses earbuds to collect occlusal sounds in binaural canals to achieve authentication. We design an event detection method based on spectrum variance and double thresholds to detect bone-conducted sounds. Then, we analyze the time-frequency domain of the sounds to filter out motion noises and extract unique features of users from three aspects: bone structure, occlusal location, and occlusal sound. Finally, we design an incremental learning-based Siamese network to construct the classifier. Through extensive experiments with 22 participants, the performance of TeethPass in different environments is verified. TeethPass achieves an accuracy of 96.8% and resists nearly 99% of spoofing attacks.
@inproceedings{9796951,author={Xie, Yadong and Li, Fan and Wu, Yue and Chen, Huijie and Zhao, Zhiyuan and Wang, Yu},booktitle={IEEE INFOCOM 2022 - IEEE Conference on Computer Communications},title={TeethPass: Dental Occlusion-based User Authentication via In-ear Acoustic Sensing},year={2022},pages={1789--1798},doi={10.1109/INFOCOM48880.2022.9796951},}
IOTJ
Gait and Respiration-Based User Identification Using Wi-Fi Signal
Xiaoyang Wang, Fan Li, Yadong Xie, Song Yang, and Yu Wang
The ever-growing security issues in various scenarios create an urgent demand for a reliable and convenient identification system. Traditional identification systems require users to provide passwords, fingerprints, or other easily stolen information. Existing works show that everyone’s gait and respiration have unique characteristics and are difficult to imitate, but these works use only gait or respiration information for identification, which leads to low accuracy or long identification times, and they lack strong anti-interference ability, which limits their practical application. Toward this end, we propose a new system that uses both gait and respiratory biometric characteristics to achieve user identification using Wi-Fi (GRi-Fi) in the presence of interferences. In our system, we design a segmentation algorithm to segment gait and respiration data, and we design a weighted subcarrier screening method to improve the anti-interference ability. To shorten the identification time, we propose a feature integration method based on the weighted average. Finally, we use a deep learning method to identify users accurately. Experimental results show that GRi-Fi can identify users with an average accuracy of 98.3% in non-interference environments. Even in the presence of multiple interferences, the average identification accuracy still reaches 91.2%. In future applications, our system can be applied to many fields of the Internet of Things, such as smart home systems and clocking in at companies.
@article{9488277,author={Wang, Xiaoyang and Li, Fan and Xie, Yadong and Yang, Song and Wang, Yu},journal={IEEE Internet of Things Journal (IOTJ)},title={Gait and Respiration-Based User Identification Using Wi-Fi Signal},year={2022},volume={9},number={5},pages={3509--3521},doi={10.1109/JIOT.2021.3097892}}
TMC
HDSpeed: Hybrid Detection of Vehicle Speed via Acoustic Sensing on Smartphones
Yue Wu, Fan Li, Yadong Xie, Song Yang, and Yu Wang
Speeding is one of the biggest threats to road safety. However, facilities like radar detectors and speed cameras are not deployed everywhere; roads in areas such as campuses and residential neighborhoods often lack them. Existing solutions either depend on pre-deployed infrastructure or require additional devices, which motivates us to explore the practicability of using smartphones’ acoustic sensors to detect vehicle speed. In this paper, we propose a Hybrid Detection system for vehicle Speed (HDSpeed). We first investigate the relationship between acoustic patterns and vehicle speed. According to our findings on the typical patterns of both electric vehicles (EVs) and gasoline vehicles (GVs), we separately extract different features from the acoustic signals of EVs and GVs. A CNN and an LSTM network are designed for training the EV and GV models, respectively. Considering that applying neural networks only yields coarse-grained information like a speed section, we propose a detection method based on active acoustic sensing, in which HDSpeed calculates the fine-grained speed by detecting the distance change between the smartphone and the passing vehicle. In addition, the previously detected speed section can eliminate interferences from surrounding moving objects. Through extensive experiments in real driving environments, HDSpeed achieves an average error of 2.17 km/h.
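For intuition, the relation between a measured Doppler shift and the radial speed of a reflecting vehicle (for a co-located speaker and microphone) is f_d = 2·v·f0/c. The sketch below applies this with an assumed 20 kHz probe tone; HDSpeed's full method estimates fine-grained speed from distance change and is not reduced to this single formula.

```python
# Minimal sketch of converting a measured Doppler shift into vehicle speed for a
# co-located speaker/microphone and a reflecting vehicle (illustrative; a simplified
# geometry, not HDSpeed's full distance-change method).
SOUND_SPEED = 343.0          # m/s in air
f0 = 20_000.0                # hypothetical inaudible probe tone, Hz

def speed_from_doppler(doppler_hz):
    """Radial speed of the reflector: f_d = 2 * v * f0 / c  =>  v = f_d * c / (2 * f0)."""
    v_mps = doppler_hz * SOUND_SPEED / (2 * f0)
    return v_mps * 3.6       # convert m/s to km/h

for fd in (500.0, 1000.0, 2000.0):
    print(f"Doppler shift {fd:6.0f} Hz  ->  ~{speed_from_doppler(fd):5.1f} km/h")
```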
@article{9311795,author={Wu, Yue and Li, Fan and Xie, Yadong and Yang, Song and Wang, Yu},journal={IEEE Transactions on Mobile Computing (TMC)},title={HDSpeed: Hybrid Detection of Vehicle Speed via Acoustic Sensing on Smartphones},year={2022},volume={21},number={8},pages={2833--2846},doi={10.1109/TMC.2020.3048380}}
2021
INFOCOM
HearFit: Fitness Monitoring on Smart Speakers via Active Acoustic Sensing
Yadong Xie, Fan Li, Yue Wu, and Yu Wang
In IEEE INFOCOM 2021 - IEEE Conference on Computer Communications, 2021
Fitness can help to strengthen muscles, increase resistance to diseases, and improve body shape. Nowadays, more and more people tend to exercise at home/office, since they lack the time to go to a dedicated gym. However, it is difficult for most of them to get a good fitness effect due to the lack of professional guidance. Motivated by this, we propose HearFit, the first non-invasive fitness monitoring system based on commercial smart speakers for home/office environments. To achieve this, we turn smart speakers into active sonars. We design a fitness detection method based on Doppler shift and adopt the short-time energy to segment fitness actions. We design a high-accuracy LSTM network to determine the type of fitness. Combined with incremental learning, users can easily add new actions. Finally, we evaluate the local (i.e., intensity and duration) and global (i.e., continuity and smoothness) fitness quality of users to help improve the fitness effect and prevent injury. Through extensive experiments including over 7,000 actions of 10 types of fitness with and without dumbbells from 12 participants, HearFit can detect fitness actions with an average accuracy of 96.13%, and give accurate statistics in various environments.
@inproceedings{9488811,author={Xie, Yadong and Li, Fan and Wu, Yue and Wang, Yu},booktitle={IEEE INFOCOM 2021 - IEEE Conference on Computer Communications},title={HearFit: Fitness Monitoring on Smart Speakers via Active Acoustic Sensing},year={2021},pages={1--10},doi={10.1109/INFOCOM42981.2021.9488811},}
TMC
Real-Time Detection for Drowsy Driving via Acoustic Sensing on Smartphones
Yadong Xie, Fan Li, Yue Wu, Song Yang, and Yu Wang
Drowsy driving is one of the biggest threats to driving safety, which has drawn much public attention in recent years. Thus, a simple but robust system that can remind drivers of their drowsiness levels with off-the-shelf devices (e.g., smartphones) is very necessary. With this motivation, we explore the feasibility of using acoustic sensors on smartphones to detect drowsy driving. Through analyzing real driving data to study the characteristics of drowsy driving, we find some unique patterns of Doppler shift caused by three typical drowsy behaviors (i.e., nodding, yawning, and operating the steering wheel), among which operating the steering wheel is also related to drowsiness levels. Then, a real-time Drowsy Driving Detection system named D3-Guard is proposed based on the acoustic sensing abilities of smartphones. We adopt several effective feature extraction methods, and carefully design a high-accuracy detector based on LSTM networks for the early detection of drowsy driving. Besides, measures to distinguish drowsiness levels are also introduced in the system by analyzing the steering-wheel operation data. Through extensive experiments with five drivers in real driving environments, D3-Guard detects drowsy driving actions with an average accuracy of 93.31%, and classifies drowsiness levels with an average accuracy of 86%.
@article{9055089,author={Xie, Yadong and Li, Fan and Wu, Yue and Yang, Song and Wang, Yu},journal={IEEE Transactions on Mobile Computing (TMC)},title={Real-Time Detection for Drowsy Driving via Acoustic Sensing on Smartphones},year={2021},volume={20},number={8},pages={2671--2685},doi={10.1109/TMC.2020.2984278},}
Inf. Sci.
SVSV: Online handwritten signature verification based on sound and vibration
Zhixiang Wei, Song Yang, Yadong Xie, Fan Li, and Bo Zhao
The handwritten signature is one of the most important behavioral biometrics and plays an important role in the field of identity verification. It is regarded as a legal means to verify personal identity by administrative and financial institutions. Traditional manual signature verification requires large labor costs, and the probability of verification error is relatively high. Nowadays, tablets are often used for signature capturing, which motivates us to explore the feasibility of using tablets for signature verification. In this paper, we propose an online handwritten signature verification system based on the sound and vibration (SVSV) generated during the signing process. We develop an application to collect signature-related vibration and sound data. We first extract the time-domain features of the sound signal and use the Fast Fourier Transform to extract its frequency-domain features. For the vibration data, we use the Discrete Cosine Transform for dimensionality reduction and feature extraction. Then we fuse the sound and vibration features. Finally, we design an efficient one-class classifier based on a Convolutional Neural Network to perform signature verification. Through extensive experiments with 12 volunteers, the results show that SVSV is a robust and efficient system with an AUC of 0.984 and an EER of 0.05.
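A minimal sketch of building a fused sound-and-vibration feature vector with FFT and DCT is given below; the dimensions, truncation lengths, and random inputs are assumptions, not the exact features SVSV extracts.

```python
import numpy as np
from scipy.fft import rfft
from scipy.fftpack import dct

# Minimal sketch of fusing sound and vibration features for a signature sample
# (illustrative; all inputs and dimensions are placeholders).
fs_sound, fs_vib = 44100, 400
sound = np.random.randn(fs_sound)                 # 1 s of pen-on-screen sound
vibration = np.random.randn(fs_vib)               # 1 s of accelerometer magnitude

time_feats = np.array([sound.mean(), sound.std(), np.abs(sound).max()])
freq_feats = np.abs(rfft(sound))[:64]             # low-frequency spectrum of the sound
vib_feats = dct(vibration, norm="ortho")[:32]     # DCT keeps the leading coefficients

fused = np.concatenate([time_feats, freq_feats, vib_feats])
print("fused feature vector length:", fused.size)
```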
@article{WEI2021109,title={SVSV: Online handwritten signature verification based on sound and vibration},journal={Information Sciences},volume={572},pages={109--125},year={2021},issn={0020-0255},doi={https://doi.org/10.1016/j.ins.2021.04.099},author={Wei, Zhixiang and Yang, Song and Xie, Yadong and Li, Fan and Zhao, Bo}}
2019
INFOCOM
D3-Guard: Acoustic-based Drowsy Driving Detection Using Smartphones
Yadong Xie, Fan Li, Yue Wu, Song Yang, and Yu Wang
In IEEE INFOCOM 2019 - IEEE Conference on Computer Communications, 2019
Since the number of cars has grown rapidly in recent years, driving safety has drawn more and more public attention. Drowsy driving is one of the biggest threats to driving safety. Therefore, a simple but robust system that can detect drowsy driving with commercial off-the-shelf devices (such as smartphones) is very necessary. With this motivation, we explore the feasibility of purely using acoustic sensors embedded in smartphones to detect drowsy driving. We first study the characteristics of drowsy driving, and find some unique patterns of Doppler shift caused by three typical drowsy behaviors, i.e., nodding, yawning, and operating the steering wheel. We then validate our important findings through empirical analysis of driving data collected from real driving environments. We further propose a real-time Drowsy Driving Detection system (D3-Guard) based on the audio devices embedded in smartphones. To improve the performance of our system, we adopt an effective feature extraction method based on an undersampling technique and FFT, and carefully design a high-accuracy detector based on LSTM networks for the early detection of drowsy driving. Through extensive experiments with 5 volunteer drivers in real driving environments, our system distinguishes drowsy driving actions with an average total accuracy of 93.31% in real time. Over 80% of drowsy driving actions can be detected within the first 70% of the action duration.
@inproceedings{8737470,author={Xie, Yadong and Li, Fan and Wu, Yue and Yang, Song and Wang, Yu},booktitle={IEEE INFOCOM 2019 - IEEE Conference on Computer Communications},title={D3-Guard: Acoustic-based Drowsy Driving Detection Using Smartphones},year={2019},pages={1225--1233},doi={10.1109/INFOCOM.2019.8737470},}
IOTJ
A context-aware multiarmed bandit incentive mechanism for mobile crowd sensing systems
Yue Wu, Fan Li, Liran Ma, Yadong Xie, Ting Li, and Yu Wang
The smart city is a key component of the Internet of Things, so it has attracted much attention. The emergence of mobile crowd sensing (MCS) systems enables many smart city applications. In an MCS system, sensing tasks are allocated to a number of mobile users. As a result, the sensing-related context of each mobile user plays a significant role in service quality. However, some important sensing context is ignored in the literature. This motivates us to propose a context-aware multiarmed bandit (C-MAB) incentive mechanism to facilitate quality-based worker selection in an MCS system. We evaluate a worker’s service quality by its context (i.e., extrinsic ability and intrinsic ability) and cost. Based on our proposed C-MAB incentive mechanism and quality evaluation design, we develop a modified Thompson sampling worker selection (MTS-WS) algorithm to select workers in a reinforcement learning manner. MTS-WS is able to choose effective workers because it maintains accurate worker quality information by updating evaluation parameters according to the status of task accomplishment. We theoretically prove that our C-MAB incentive mechanism is selection-efficient, computationally efficient, individually rational, and truthful. Finally, we evaluate our MTS-WS algorithm on simulated and real-world datasets in comparison with other classic algorithms. Our evaluation results demonstrate that MTS-WS achieves the highest cumulative utility for the requester and the highest social welfare.
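The core Thompson-sampling loop behind bandit-style worker selection can be sketched as follows, with Beta posteriors over each worker's completion quality; the context and cost modeling of the actual MTS-WS algorithm are omitted.

```python
import numpy as np

# Minimal sketch of Thompson-sampling worker selection with Beta posteriors over each
# worker's task-completion quality (illustrative; the paper's MTS-WS algorithm also
# models context and cost, which this toy version omits).
rng = np.random.default_rng(0)
true_quality = np.array([0.9, 0.6, 0.75, 0.4])     # hidden per-worker success rates
alpha = np.ones(4)                                  # Beta prior successes
beta = np.ones(4)                                   # Beta prior failures

picks = np.zeros(4, dtype=int)
for _ in range(500):
    samples = rng.beta(alpha, beta)                 # sample a plausible quality per worker
    worker = int(np.argmax(samples))                # recruit the most promising worker
    reward = rng.random() < true_quality[worker]    # did the worker complete the task well?
    alpha[worker] += reward
    beta[worker] += 1 - reward
    picks[worker] += 1

print("selections per worker:", picks)              # should concentrate on worker 0
```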
@article{wu2019context,title={A context-aware multiarmed bandit incentive mechanism for mobile crowd sensing systems},author={Wu, Yue and Li, Fan and Ma, Liran and Xie, Yadong and Li, Ting and Wang, Yu},journal={IEEE Internet of Things Journal (IOTJ)},volume={6},number={5},pages={7648--7658},year={2019},doi={10.1109/JIOT.2019.2903197}}