Students in higher education frequently face challenges with sensory processing that can impact their ability to focus, engage, and collaborate. These challenges include sensory overload (e.g., from noisy lecture halls or crowded group settings), under-stimulation (e.g., fatigue, low arousal), and distraction (e.g., constant notifications). Addressing these issues is essential for fostering inclusive learning environments.
Smart devices such as smartwatches, headphones, and mobile phones offer complementary data sources that, when combined, can provide a rich, real-time picture of a learner’s sensory and cognitive state.
- Smartwatches provide physiological and movement data (heart rate, accelerometry, stress indicators).
- Headphones capture auditory environment data (ambient noise, listening patterns, noise cancellation use).
- Mobile phones contribute contextual and behavioural data (location, notifications, usage patterns, ambient sensing).
Integrating these devices allows for a more holistic approach to detecting sensory challenges and delivering personalised support strategies in academic contexts.
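To make the kind of integration envisaged here concrete, the sketch below defines a single fused, time-stamped record combining all three streams. It is a minimal illustration only: every field name (heart_rate_bpm, ambient_noise_db, notifications_last_5min, and so on) is a hypothetical placeholder, not a field from any particular device SDK.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class SensoryFrame:
    """One time-stamped snapshot fusing all three device streams.

    All field names are illustrative placeholders; real fields would
    depend on the device SDKs chosen for the study.
    """
    timestamp: datetime
    # Smartwatch: physiological and movement signals
    heart_rate_bpm: Optional[float] = None
    accel_magnitude: Optional[float] = None
    # Headphones: auditory environment
    ambient_noise_db: Optional[float] = None
    anc_enabled: Optional[bool] = None
    # Mobile phone: context and behaviour
    notifications_last_5min: Optional[int] = None
    screen_unlocks_last_5min: Optional[int] = None
    # Ground truth from experience sampling (None if no prompt was answered)
    self_reported_state: Optional[str] = None  # e.g. "overload", "comfort"

frame = SensoryFrame(
    timestamp=datetime.now(timezone.utc),
    heart_rate_bpm=92.0,
    ambient_noise_db=74.5,
    notifications_last_5min=11,
)
print(frame)
```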
Aim:
To design and evaluate a multi-device system that integrates smartwatch, headphone, and mobile phone data to model sensory processing states and provide adaptive support for inclusive learning.
Objectives:
- Data Fusion & Collection
  - Collect multimodal data across devices (e.g., smartwatch biometrics, headphone noise exposure, mobile phone usage patterns).
  - Develop a synchronised data pipeline linking time-stamped signals across devices (see the alignment sketch after this list).
  - Annotate the collected data with learner self-reports of sensory overload, distraction, or comfort.
- Integrated Modelling
  - Apply AI/ML methods (e.g., multimodal fusion, ensemble learning, temporal modelling) to combine device data into a unified model of sensory states (see the fusion sketch after this list).
  - Explore cross-device correlations (e.g., heart-rate spikes coinciding with high ambient noise and notification overload).
- Adaptive Support System
  - Prototype interventions delivered through one or more devices (see the dispatch sketch after this list), such as:
    - Haptic feedback (smartwatch) prompting a grounding break.
    - Adaptive noise cancellation or curated focus soundscapes (headphones).
    - Smart notification filtering or study-space suggestions (mobile phone).
- Evaluation
  - Conduct real-world trials in academic settings (lectures, campus study spaces, group work).
  - Assess model accuracy, usability, and user perceptions of support effectiveness (see the evaluation sketch after this list).
  - Explore ethical issues around data fusion, privacy, and ownership.
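The sketches that follow illustrate, in Python, how each objective might be prototyped; all of them run on toy data under stated assumptions. For the synchronised pipeline in Data Fusion & Collection, one simple approach is a nearest-timestamp join that aligns streams sampled at different rates onto the smartwatch's time base. The stream contents, sampling rates, and matching tolerance below are assumptions, not design decisions.

```python
import pandas as pd

# Toy streams at different rates; real data would come from device SDKs.
watch = pd.DataFrame({
    "timestamp": pd.date_range("2025-01-01 09:00", periods=60, freq="1s"),
    "heart_rate_bpm": 70.0,
})
headphones = pd.DataFrame({
    "timestamp": pd.date_range("2025-01-01 09:00", periods=30, freq="2s"),
    "ambient_noise_db": 65.0,
})
phone = pd.DataFrame({
    "timestamp": pd.date_range("2025-01-01 09:00", periods=12, freq="5s"),
    "notification_count": range(12),
})

def align(base: pd.DataFrame, other: pd.DataFrame, tolerance="3s") -> pd.DataFrame:
    """Attach the nearest-in-time reading from `other` to each row of `base`."""
    return pd.merge_asof(
        base.sort_values("timestamp"),
        other.sort_values("timestamp"),
        on="timestamp",
        direction="nearest",
        tolerance=pd.Timedelta(tolerance),
    )

fused = align(align(watch, headphones), phone)
print(fused.head())
```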
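For Integrated Modelling, a plausible first baseline is late (decision-level) fusion: one classifier per device, combined by soft voting, as sketched below with scikit-learn on synthetic data. The feature columns and the three-state label set are assumptions, not findings.

```python
import numpy as np
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for the fused feature table (columns 0-1 watch,
# 2 headphones, 3-4 phone); labels stand in for self-reported states.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = rng.integers(0, 3, size=300)  # 0=comfort, 1=distraction, 2=overload

def device_model(columns):
    """A per-device pipeline: select that device's columns, scale, classify."""
    return Pipeline([
        ("select", ColumnTransformer([("keep", "passthrough", columns)])),
        ("scale", StandardScaler()),
        ("clf", LogisticRegression(max_iter=500)),
    ])

# Late (decision-level) fusion: one model per device, soft-vote the outputs.
fusion = VotingClassifier(
    estimators=[
        ("watch", device_model([0, 1])),
        ("headphones", device_model([2])),
        ("phone", device_model([3, 4])),
    ],
    voting="soft",
)
fusion.fit(X, y)
print(fusion.predict(X[:5]))
```

An early-fusion alternative would train a single model on the concatenated feature table; comparing the two approaches is itself part of this objective.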
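For the Adaptive Support System, interventions could initially be dispatched by transparent rules before any learned policy is considered, which also keeps the system's behaviour explainable to participants. In the sketch below, every threshold and action string is a placeholder for a call into the relevant device SDK.

```python
from dataclasses import dataclass

@dataclass
class Context:
    state: str                    # output of the sensory-state model
    ambient_noise_db: float
    notifications_last_5min: int

def choose_interventions(ctx: Context) -> list[str]:
    """Map a detected state to candidate cross-device supports.

    Thresholds and action names are illustrative placeholders; a real
    prototype would invoke the relevant device SDK for each action.
    """
    actions = []
    if ctx.state == "overload":
        actions.append("watch: haptic prompt for a grounding break")
        if ctx.ambient_noise_db > 70:
            actions.append("headphones: enable adaptive noise cancellation")
    elif ctx.state == "distraction":
        if ctx.notifications_last_5min > 5:
            actions.append("phone: batch non-urgent notifications")
        actions.append("headphones: start curated focus soundscape")
    elif ctx.state == "under-stimulation":
        actions.append("phone: suggest a livelier study space nearby")
    return actions

print(choose_interventions(Context("overload", 74.5, 11)))
```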
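For Evaluation, accuracy on person-specific physiological data should be assessed with participant-wise splits, so that every test fold contains a person the model has never seen. A minimal sketch, assuming synthetic data and ten participants, using scikit-learn's leave-one-group-out cross-validation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

# Synthetic stand-in: 10 participants, 60 fused frames each.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 5))
y = rng.integers(0, 3, size=600)          # self-reported states
groups = np.repeat(np.arange(10), 60)     # participant IDs

# Leave-one-participant-out: each fold tests on an unseen person,
# the honest setting for person-specific physiological signals.
scores = cross_val_score(
    RandomForestClassifier(random_state=0),
    X, y, groups=groups, cv=LeaveOneGroupOut(),
)
print(f"per-participant accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```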
Methodology:
- Develop a multi-device data collection framework.
- Use experience sampling to ground-truth learners' sensory experiences (see the scheduling sketch after this list).
- Train multimodal AI/ML models for sensory state detection.
- Implement and test a cross-device prototype that delivers interventions.
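A minimal sketch of the experience-sampling step, assuming a purely time-based scheme with illustrative parameters (six prompts per day, a 45-minute minimum gap); a deployed version would likely also trigger prompts off the sensed signals themselves (signal-contingent sampling).

```python
import random
from datetime import datetime, timedelta

def esm_schedule(day_start: datetime, n_prompts: int = 6,
                 window_hours: int = 10, min_gap_minutes: int = 45):
    """Draw randomised experience-sampling prompt times for one day,
    keeping a minimum gap so prompts do not cluster.

    Parameter defaults are illustrative, not values from the study design.
    """
    while True:
        offsets = sorted(random.uniform(0, window_hours * 60)
                         for _ in range(n_prompts))
        gaps_ok = all(b - a >= min_gap_minutes
                      for a, b in zip(offsets, offsets[1:]))
        if gaps_ok:
            return [day_start + timedelta(minutes=m) for m in offsets]

random.seed(1)
for t in esm_schedule(datetime(2025, 1, 13, 9, 0)):
    print(t.strftime("%H:%M"),
          "- prompt: rate sensory overload / distraction / comfort")
```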
Expected Outcomes:
- A unified multimodal model of sensory processing states in students.
- A prototype demonstrating cross-device adaptive interventions.
- Insights into how integrating wearable, audio, and mobile data enhances inclusivity.
- Ethical and technical guidelines for multi-device sensory support systems.
Learning Context & Inclusion:
The integrated system addresses sensory barriers such as:
- Overload from noise, notifications, or crowded settings, which can reduce participation.
- Difficulty sustaining attention during long lectures or study sessions.
- Individual variability in sensory needs, requiring personalised strategies.
Potential supports include:
- Automatic adjustment of headphones in noisy environments.
- Gentle smartwatch prompts for self-regulation during overload.
- Mobile phone-based notification management and study-environment recommendations.
By combining multiple data streams, the system can provide more reliable and personalised sensory regulation, fostering greater focus, engagement, and collaboration in learning environments.