Keeping a high level of attention is a prerequisite for successful learning. So, how about some external help in noticing that your mind is drifting off, and in getting it back on track? In her PhD, Yoon Lee aims to design a multimodal feedback loop that helps learners sustain their attention during educational activities.
Educated in Industrial Engineering, Yoon Lee became interested in both feedback loops and human-robot interaction. While finalising her master’s thesis on classifying noise fatigue in an intensive care setting, she stumbled upon an open PhD position at the Leiden-Delft-Erasmus Centre for Education and Learning (LDE-CEL). ‘Involving system development, machine learning and multimodal assessment of the human response, it was a close match to what I had been doing for my master’s thesis project,’ she says. ‘And education is a very interesting new target audience for me.’
Driven by the emergence of online and hybrid education, she started her PhD before the COVID crisis, but the pandemic has made her research all the more pressing. ‘Learners are used to going to school, attending class, listening to an educator,’ she says. ‘But a whole new dynamic has arisen, with lots of computer-mediated content. And learners themselves don’t really know how to manage their attention.’ Her aim is to combine two techniques into one feedback loop: noticing attention loss, and gently assisting the learner in regaining focus.
But a whole new educational dynamic has arisen, with lots of computer-mediated content
Noticing attention loss
Traditionally, in research settings, attention has been tracked using an eye-tracker or physical sensors, with the signals and the level of attention being interpreted by a researcher. Yoon is trying something different. ‘I am developing a recognition system based on learners’ self-regulation – noticing observable behaviour such as eyebrow-raising, body adjustment or touching one’s face, and correlating these with self-reported attention loss.’ She is also dead set on using something as simple as a webcam, so that the system can be deployed much more easily and the next generation of researchers can build on her work.
Her target audience consists of students in higher education. She chose e-reading (reading text from a screen) as the educational task because it is a fundamental part of learning. Yoon has already collected a dataset that allows her to analyse attention loss in retrospect, detecting which observable behaviours were critical. ‘The goal, of course, is to build a machine learning model that notices attention loss in real time,’ she says. ‘As visible cues may vary with cultural background and personal features, it has to allow personalisation to any particular subject.’
Visible cues linked to attention loss may vary with cultural background and personal features
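For readers curious what such a model might look like in practice, here is a minimal, purely illustrative sketch in Python. It does not reproduce Yoon Lee’s actual system: the behavioural features (eyebrow raises, posture shifts, face touches, off-screen gaze), the window length, the synthetic data and the choice of a random-forest classifier are all assumptions made for this example, standing in for cues that would really be extracted from webcam footage and paired with self-reported attention labels.

```python
# Illustrative sketch only: a stand-in for the kind of attention-loss classifier described above.
# Assumes behavioural cues have already been extracted per time window from webcam footage
# and paired with self-reported attention labels. All data here is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Synthetic placeholder data: one row per 30-second window of an e-reading session.
# Hypothetical columns: eyebrow_raises, posture_shifts, face_touches, gaze_off_screen_ratio
n_windows = 500
X = rng.random((n_windows, 4))
# Synthetic label: 1 = self-reported attention loss in that window, 0 = focused.
y = (X[:, 3] + 0.3 * X[:, 2] + rng.normal(0, 0.2, n_windows) > 0.8).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

# A generic classifier stands in for whatever model the real system would use.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))

# Personalisation could mean refitting or re-weighting on one learner's own labelled windows;
# here we simply refit on a (synthetic) per-subject subset as a placeholder.
subject_X, subject_y = X[:60], y[:60]
clf_personal = RandomForestClassifier(n_estimators=100, random_state=0).fit(subject_X, subject_y)
```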
Empathic feedback
When it comes to providing feedback on attention loss, she is experimenting with a social robot. ‘The starting point for me was that, in computer-mediated learning, we are already very immersed in visual stimuli,’ she says. ‘Sure, you can have a graphical user interface (GUI) provide feedback through visuals, pop-ups, colours, blinks. But what if we add a humanoid with a voice showing empathy and supporting reflection?’
So far, she has tested the social robot (called Furhat) in a standalone setting, simply providing feedback at the end of the reading assignment – with messages such as “Can you recall the main point of the subtopic in your mind?” or “Did you understand everything in the text?” Yoon: ‘It was a preliminary test, from which I wanted to learn how participants liked the robot compared to the GUI. But we also did notice a tendency towards improved knowledge uptake when using the robot.’
What if we add a humanoid, providing feedback with a voice showing empathy and supporting reflection?
There’s a colleague for that
Involving machine learning, a social robot and education, her research is highly multidisciplinary. ‘Coming from industrial design, I’m not an expert in all these areas,’ she says. ‘But when I need help with some intricacy, I can ask my fellow PhD students. We are a very interdisciplinary group, with everyone having their own background and specialisation, from computer science to neurocognitive psychology. During COVID it was difficult to build and maintain a very close connection, but nowadays we consult each other a lot. We are also good friends.’
Aesthetic baking
Three years into her PhD, she now wants to deepen her research in both directions and to integrate them into a single multimodal feedback system. ‘Maybe even add tactile feedback,’ she says. ‘Something different from what we traditionally do.’ She is aware that it may not be for everyone. ‘People can be allergic to having a system watching them, handing out advice. But it is a potential companion, and certainly not aimed at becoming something commercial.’
Asked how she herself stays focused on her PhD goals, she says that it helps to clear her head with some leisure activities. ‘I am very much into crafts, making stuff. Aesthetic baking, for example, putting flower decorations on cakes.’ Involving taste, visuals, and smell, even her hobby is multimodal.