Other contributions

In addition to my own projects, I have contributed as a research assistant to two other studies at Lehigh, both of which have been published in peer-reviewed journals.

fNIRS-based classification of mind-wandering with personalized window selection for multimodal learning interfaces - Journal on Multimodal User Interfaces

Abstract: Automatic detection of an individual's mind-wandering state has implications for designing and evaluating engaging and effective learning interfaces. While it is difficult to differentiate whether an individual is mind-wandering or focusing on the task only based on externally observable behavior, brain-based sensing offers unique insights to internal states. To explore the feasibility, we conducted a study using functional near-infrared spectroscopy (fNIRS) and investigated machine learning classifiers to detect mind-wandering episodes based on fNIRS data, both on an individual level and a group level, specifically focusing on automated window selection to improve classification results. For individual-level classification, by using a moving window method combined with a linear discriminant classifier, we found the best windows for classification and achieved a mean F1-score of 74.8%. For group-level classification, we proposed an individual-based time window selection (ITWS) algorithm to incorporate individual differences in window selection. The algorithm first finds the best window for each individual by using embedded individual-level classifiers and then uses these windows from all participants to build the final classifier. The performance of the ITWS algorithm is evaluated when used with eXtreme gradient boosting, convolutional neural networks, and deep neural networks. Our results show that the proposed algorithm achieved significant improvement compared to the previous state of the art in terms of brain-based classification of mind-wandering, with an average F1-score of 73.2%. This builds a foundation for mind-wandering detection for both the evaluation of multimodal learning interfaces and for future attention-aware systems.
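To make the individual-level window selection step more concrete, here is a minimal sketch of the idea behind it: slide a fixed-length window over each participant's fNIRS epochs, score a linear discriminant classifier on features from each window, and keep the window that scores best for that participant. This is not the published implementation; the function name `best_window_for_participant`, the window length and step, the mean-per-channel features, and the synthetic data are all illustrative assumptions.

```python
# Illustrative sketch of per-participant window selection (not the paper's code).
# Assumptions: epochs are (n_trials, n_channels, n_samples) arrays, labels are
# binary mind-wandering indicators, and window features are channel means.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def best_window_for_participant(epochs, labels, win_len=20, step=5):
    """Return (start_sample, f1) of the window whose LDA F1-score is highest."""
    n_trials, n_channels, n_samples = epochs.shape
    best_start, best_f1 = 0, -np.inf
    for start in range(0, n_samples - win_len + 1, step):
        # Simple window features: mean signal per channel within the window.
        feats = epochs[:, :, start:start + win_len].mean(axis=2)
        f1 = cross_val_score(LinearDiscriminantAnalysis(), feats, labels,
                             cv=5, scoring="f1").mean()
        if f1 > best_f1:
            best_start, best_f1 = start, f1
    return best_start, best_f1

# Toy data: 3 participants, 60 trials each, 8 channels, 100 samples per trial.
for pid in range(3):
    epochs = rng.normal(size=(60, 8, 100))
    labels = rng.integers(0, 2, size=60)
    start, f1 = best_window_for_participant(epochs, labels)
    print(f"participant {pid}: best window starts at sample {start}, F1={f1:.2f}")
```

In the group-level ITWS procedure described in the abstract, the per-participant windows found this way would then feed a final classifier (e.g., gradient boosting or a neural network) trained across all participants.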

In Dr. Catherine Arrington's Multimodal Interface Lab, I supported research on head-mounted augmented reality (AR) displays, tangible user interfaces (TUIs), and creativity support systems aimed at building original interfaces that work seamlessly with humans. My role primarily consisted of presenting weekly literature reviews to the principal investigator, contributing to weekly discussions of current and future projects, and guiding participants through our experiments.

Objective: Combine interdisciplinary science and engineering research to understand and build the human-technology relationship, in order to design new technologies that augment human performance.

In Dr. Nancy Carlisle's Attention & Memory Lab, I helped our six-person research team design, execute, and analyze data for three separate experiments on context-dependent attentional control. My role mainly consisted of performing literature reviews, contributing study-design modifications, consulting with our IRB, and running participants.
