Project Kickoff
Introductions
- Beverly - CLIP's primary user (the clinician it is being built for)
- Hosein Bahari - Glenrose BCI team (lead)
- Alex Liu - 5th year software/computer engineer
- Elena Arnold - 5th year software/computer engineer
- Eli, Greg, Dan, Anup
Requirements discussion
Beverly stepped through the problem CLIP is solving: teaching communicative functions (request, reject, etc.) to children with language impairments (motor impairments, etc.). CLIP = Communicative Language Intent Project.
CLIP will focus on requesting something, in this case an instrument from a shelf of instruments. CLIP will be used to teach the concept of identifying a thing from a group of things, i.e. understanding that a label represents a thing. Instruments will be introduced one at a time, etc.
Eli demonstrated what is available right now - his demo, a bit about Unity, etc.
Primary Actors
- Children
- Little experience with 3D environments
- Child will be seated in front of computer screen
- Clinician (ex: Beverly)
- Drives session / training with child
Discussion
- This is essentially a P300-driven object selector
- That on its own is pretty generic; what distinguishes it is the customization for Beverly (and other clinicians) to work with, ex: what options they need to control the session with the child.
- The intervention is first presented to the child with things like pointing to an instrument and tracking with an eye tracker; the child will be trained on up to 5 instruments
- we must support this as the basis for the next step
- may need to set up the display to accommodate the child’s visual needs
- To keep the background consistent, they have a shelf that they bring to various locations, but there’s a desire to change it to match the actual background
- There’s also an angle here where the shelf is instrumented with markers for AR applications, etc.
- Once it’s confirmed the child can choose an instrument visually, then we move to the next step where they sit in front of a computer screen with a BCI.
- Goal is to have the child select from up to 5 items, starting with just one item
- This display needs to look identical to the initial eye tracker screen
- Child can select the object on the screen using BCI
- On selection, a verbal message is played (then Clinician does something with the real object)
- Clinician helps Child learn to focus their intention on a target <— this is pretty much the application
- Clinician can introduce instruments one at a time
- Clinician can start flashing (P300, ex: user presses a key)
- Clinician can pause the flashing (P300)
- Clinician can train the BCI system
- Clinician can move the location of instruments
- Clinician can adjust P300 flash style (will have to experiment with this and figure out what the options actually are: shake effect, flash, black/white, etc).
- Clinician can tweak the animations, perhaps we train them to do this once we have the template application working.
- Clinician can save the setup for a session (instruments, location, flash style); see the session-setup sketch after this list
- Clinician can choose from a library of things to put on the shelf
- Clinician can associate audio with objects, ex: “I want the Drum”
- Clinician can change the shelf using a photograph?
- The P300 side still needs to be set up; it is not actually set up yet
- Capturing data isn’t a requirement
- Bessy python needs to be updated to enable training of the P300 classifier after one selection (some form of posterior training) <— this is the key risk item; it sounds like we’re not clear on what’s needed, i.e. whether there’s an online use case or not
- Lots of discussion about how to flash an object for P300 (visual aspects); a flash-schedule sketch follows this list
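For the flash discussion above, a minimal sketch of one way a flash schedule could be generated, assuming a simple randomized sequence where each on-screen item is flashed the same number of times and immediate repeats are avoided. The item names, repetition count, and the function itself are illustrative assumptions, not agreed requirements.

```python
import random

def flash_schedule(items, repetitions=10):
    """Return a flat list of item names to flash, one entry per flash.

    Each item appears `repetitions` times, shuffled block by block so the
    order is unpredictable and the same item is not flashed twice in a row.
    """
    schedule = []
    for _ in range(repetitions):
        block = items[:]
        random.shuffle(block)
        # re-shuffle if the block would start with the item we just flashed
        if len(block) > 1:
            while schedule and block[0] == schedule[-1]:
                random.shuffle(block)
        schedule.extend(block)
    return schedule

# Example: a three-instrument shelf
print(flash_schedule(["drum", "maraca", "bell"], repetitions=3))
```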
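And for "save the setup for a session": a minimal sketch, assuming a plain JSON file holds the instruments, their shelf positions, associated audio prompts, and the flash style. Every name here (InstrumentSlot, flash_style, audio_prompt, the file layout) is a placeholder for discussion, not a decided format.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class InstrumentSlot:
    name: str            # e.g. "drum"
    shelf_position: int   # which shelf slot it occupies
    audio_prompt: str     # e.g. path to an "I want the drum" clip

@dataclass
class SessionSetup:
    clinician: str
    flash_style: str                  # e.g. "flash", "shake", "black_white"
    instruments: list = field(default_factory=list)

def save_setup(setup: SessionSetup, path: str) -> None:
    # asdict() recursively converts the nested dataclasses to plain dicts
    with open(path, "w") as f:
        json.dump(asdict(setup), f, indent=2)

def load_setup(path: str) -> SessionSetup:
    with open(path) as f:
        data = json.load(f)
    data["instruments"] = [InstrumentSlot(**i) for i in data["instruments"]]
    return SessionSetup(**data)

# Example:
# setup = SessionSetup(clinician="Beverly", flash_style="flash",
#                      instruments=[InstrumentSlot("drum", 0, "audio/i_want_the_drum.wav")])
# save_setup(setup, "session_setup.json")
```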
Hardware constraints
- Clinician can run this on a Surface Pro
- Not using Emotiv
- Headset EEG provided over LSL → Bessy python
- Unity sends markers (what’s flashed) → Bessy python (see the LSL sketch after this list)
- May want to have a way to launch Python from Unity for ease of use
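A minimal sketch of the data flow into Bessy python, assuming pylsl on the receiving side: the headset's EEG stream and Unity's "what was flashed" marker stream are both resolved over LSL and pulled in a loop. The stream type/name used here ("EEG", "CLIPMarkers") are assumptions; the real ones come from the headset driver and the Unity marker sender.

```python
import time
from pylsl import StreamInlet, resolve_byprop

def open_inlet(prop, value, timeout=10.0):
    """Resolve a single LSL stream by property (e.g. type='EEG') and wrap it in an inlet."""
    streams = resolve_byprop(prop, value, timeout=timeout)
    if not streams:
        raise RuntimeError(f"No LSL stream found with {prop}={value!r}")
    return StreamInlet(streams[0])

# Headset EEG over LSL (stream type "EEG" is an assumption about the headset driver)
eeg = open_inlet("type", "EEG")
# Unity flash markers (stream name "CLIPMarkers" is a placeholder)
markers = open_inlet("name", "CLIPMarkers")

while True:
    # EEG samples arrive continuously; markers arrive only when Unity flashes an object
    sample, ts = eeg.pull_sample(timeout=0.0)
    marker, marker_ts = markers.pull_sample(timeout=0.0)
    if marker is not None:
        print(f"flash marker {marker[0]} at t={marker_ts:.3f}")
    time.sleep(0.001)
```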
Timelines
- Hoping to start using this October 1 (sounds flexible-ish)
Resources
Team
- Eli - has some capacity
- Hosein - has some capacity too
- Alex / Elena - nearly full time resource
- Anup - some hours
- Greg - basically away for the whole project
- Beverly - can review / provide feedback on each story as they are delivered (would be happy to participate in demos)
Training
- Alex / Elena should ramp up on Unity
- GitHub development process
Prioritization
- Really want to deliver working software
Reality checks
- This is pretty tight. 4 sprints. Demos on Friday
Next steps
- Unity ramp-up probably takes a week; the team can ramp up by learning to make a simple game (ex: watch a tutorial video)
- Eli can stand up a basic messy implementation even in the first week. The goal would be simple flashing images on a shelf. Then the team can work to make it prettier.
- Hosein - familiarize himself with Bessy python / Unity
When is first planning session?
- Greg / Eli to schedule this, likely this week
Communication preferences (Slack, etc.?)
Add work into Jira