Tick 1
The next “Tick” cycle will build something for Danette and her team. What will that be, and can we fit it into a six-week project?
What problem could we solve?
- Danette needs a system to help kids “learn BCI skills” (mental imagery, for now, or whatever Emotiv is using). Initial exercises for kids are very simple, like choosing between two songs. This is learning to scan. The next step is turning this into learning two commands for move and select (“two-switch step scanning”; see the sketch after this list). It would be really nice if they could learn more than two commands, but getting to two commands is the minimum.
- The current solution relies on a patchwork of workarounds that are hard to set up and use (combinations of Think2Switch keyboard support, Emotiv headsets, switch interfaces, and grid software). She would like something that’s easier to set up and use.
- The system needs to work with what we have, e.g. Emotiv headsets, and ideally be extensible to future headsets. Danette doesn’t really care whether we’re using EEG; the point is that we don’t want to be dependent on Emotiv.
- Wants content kids are motivated to choose, e.g. YouTube, TikTok, etc. Games are OK. Learning to scan is the thing. The system needs to be configurable by Danette so she can customize “homework” for kids. In an ideal world, the kids could be sent home with a stripped-down, preconfigured “homework” app.
- For motor imagery, we would ideally have the training step integrated into the system; switching between applications would be messy. Motion is important for teaching motor imagery skills. Discussion ensued about all the ways training could be integrated into the system.
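As a concreteness check on the “two-switch step scanning” skill above, here is a minimal sketch of the move/select loop in TypeScript. Everything here (the `StepScanner` name, the `Item` shape, the song labels) is hypothetical and assumes nothing about the headset or UI; it is just the core loop we would be teaching.

```typescript
// Two-switch step scanning: one command advances the highlight ("move"),
// a second command activates the highlighted item ("select").

type Item = { label: string; activate: () => void };

class StepScanner {
  private index = 0;

  constructor(
    private items: Item[],
    private onHighlight: (item: Item, index: number) => void,
  ) {
    this.onHighlight(this.items[0], 0);
  }

  // "Move" command: advance the highlight to the next item, wrapping around.
  move(): void {
    this.index = (this.index + 1) % this.items.length;
    this.onHighlight(this.items[this.index], this.index);
  }

  // "Select" command: activate whatever is currently highlighted.
  select(): void {
    this.items[this.index].activate();
  }
}

// Usage, mirroring the initial two-song exercise:
const scanner = new StepScanner(
  [
    { label: "Song A", activate: () => console.log("playing Song A") },
    { label: "Song B", activate: () => console.log("playing Song B") },
  ],
  (item) => console.log(`highlighted: ${item.label}`),
);
scanner.move();   // highlight moves to Song B
scanner.select(); // Song B plays
```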
Stepping through the use case, what does using the system look like?
- A headset connectivity check is probably needed, ideally integrated (see the connectivity sketch after this list).
- Training is the next step. We need a way to show the action happening and to gauge whether the training is working. For example, if kissing is the action, an animated face makes a kiss face. Ideally, animations and more realistic imagery would help, rather than just the “ball” as in Think2Switch.
- Ideally, it would be nice to customize these animations/images for each kid. For example, if this was ultimately for wheelchair control, the images could be wheelchairs. That implies the ability to save a “profile” for a given user (see the profile sketch after this list). Perhaps an AI tool for generating imagery would be handy here.
- Being able to share the configured profile for use at home would be useful. Not sure how: maybe delivering a Surface Pro, or installing on a home system.
- Don’t really want to use the Emotiv 8-second training approach. Would be open to trying something different. Eli excited about potatoes.
- Need a way to show that the training is accurate.
- Some kind of sensitivity adjustment is probably needed. Danette indicated that they use sensitivity to adapt for fatigue (a kid getting tired); this is folded into the hold-to-play sketch after this list.
- The next step is using the mental commands. It sounds like we’ll focus on playing videos with audio (probably YouTube videos; there are other ideas, like a book). The ability to train a few different activities is required.
- The simplest would be teaching that the command can trigger something, like playing a video. Building on that: learning how to play and stop the video.
- Learning control is important, for example being able to hold the command for a period of time, so perhaps the video would only play while the command is active (see the hold-to-play sketch after this list). Many feedback options are possible here, e.g. making the video larger.
- Adding more videos would be the next step. Once there’s more than one video, we need a second command to control selection; audio feedback on selection would be helpful. We need to be able to customize activities to teach the difference between just playing two videos with two commands and moving to “move and select” skills.
- Several open questions about feedback: where it’s stored, how it’s converted, and how it can be customized (e.g., for auditory feedback).
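For the connectivity check: the Emotiv Cortex service exposes a JSON-RPC API over a local WebSocket, so an integrated check could be as small as the sketch below. The endpoint (wss://localhost:6868), the `queryHeadsets` method, and the status values are assumptions based on the Cortex docs and should be verified before we rely on them.

```typescript
// Minimal sketch of an integrated headset connectivity check against the
// Emotiv Cortex service. Assumption: the JSON-RPC-over-WebSocket endpoint
// and the "queryHeadsets" method are as documented.

const socket = new WebSocket("wss://localhost:6868");

socket.onopen = () => {
  // queryHeadsets lists the headsets Cortex can see, connected or not.
  socket.send(
    JSON.stringify({ jsonrpc: "2.0", id: 1, method: "queryHeadsets", params: {} }),
  );
};

socket.onmessage = (event) => {
  const response = JSON.parse(event.data as string);
  const headsets: Array<{ id: string; status: string }> = response.result ?? [];
  if (headsets.length === 0) {
    console.log("No headset found: prompt the user to power it on / pair it.");
  } else {
    for (const h of headsets) {
      // status is e.g. "discovered" or "connected", per the Cortex docs.
      console.log(`Headset ${h.id}: ${h.status}`);
    }
  }
};
```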
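To make the per-kid customization and take-home “homework” idea concrete, here is one possible shape for a saved profile, sketched as TypeScript types. Every field name is a placeholder for discussion, not a schema decision.

```typescript
// A per-kid profile that could be saved in the clinic and shipped home as
// the "homework" configuration. All names below are hypothetical.

interface TrainingImagery {
  command: string;          // e.g. "kiss", "push"
  imageUrl: string;         // custom animation/image, possibly AI-generated
  audioFeedbackUrl?: string;
}

interface Activity {
  kind: "play-video" | "play-stop" | "hold-to-play" | "move-and-select";
  videoUrls: string[];      // e.g. YouTube links Danette picked for this kid
}

interface KidProfile {
  name: string;
  headsetId?: string;       // last-used headset, to speed up reconnection
  sensitivity: number;      // 0..1, adjustable for fatigue
  imagery: TrainingImagery[];
  homework: Activity[];     // the stripped-down home app plays only these
}
```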
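And for the hold-to-play idea plus the sensitivity adjustment, a minimal sketch of the gating logic. It assumes the headset pipeline delivers a per-sample command strength in 0..1 (Emotiv exposes something similar via its mental-command stream, but treat that as an assumption) and that we’re driving an HTML video element; both are sketch choices, not decisions.

```typescript
// Hold-to-play gating with an adjustable sensitivity threshold: the video
// plays only while the mental command is held above threshold.

class HoldToPlayGate {
  // A lower threshold is more sensitive, useful when a kid is getting tired.
  constructor(private video: HTMLVideoElement, private threshold = 0.5) {}

  setSensitivity(threshold: number): void {
    this.threshold = Math.min(1, Math.max(0, threshold));
  }

  // Call this for every command-strength sample from the headset pipeline.
  onCommandPower(power: number): void {
    const active = power >= this.threshold;
    if (active && this.video.paused) {
      void this.video.play();  // plays only while the command is held
    } else if (!active && !this.video.paused) {
      this.video.pause();
    }
    // One possible extra feedback channel: scale the video with strength.
    this.video.style.transform = `scale(${1 + 0.25 * power})`;
  }
}
```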