Rough User Stories
We want to create a tool that will help kids/patients learn BCI skills.
Spikes
- Navigating the activity - Danette mentioned a couple of ways to navigate through the application. What are the primary ones that we should focus on? Is there more than one?
- How do LSL and Emotiv interact? Are there any applications like dsi2lsl.exe that act as full LSL wrappers for Emotiv?
- How do we get rid of poor training data? How do we provide an intuitive user interface for masking/rejection/selection? What are our opportunities for improvement here?
- What do we want the in-application EEG viewer to look like? Do we even want to include this?
Connectivity
As a Clinician, I can connect to a headset - 3
- Assume that we will use an LSL-capable headset and the existing Bessy Python code.
- This is written for the Emotiv headset at the moment.
- AC - User can interact with just the Unity front-end to launch the LSL connection application for the Emotiv headset (a minimal connection sketch follows this story).
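A minimal connection sketch, assuming the headset is exposed as an LSL EEG stream (for example by an Emotiv-to-LSL bridge in the spirit of dsi2lsl.exe) and that the Python side uses pylsl; the stream properties here are assumptions, not settled choices.

```python
from pylsl import StreamInlet, resolve_byprop

# Look for any EEG-typed stream on the local network (10 s timeout).
streams = resolve_byprop("type", "EEG", timeout=10.0)
if not streams:
    raise RuntimeError("No LSL EEG stream found - is the headset bridge running?")

inlet = StreamInlet(streams[0])
sample, timestamp = inlet.pull_sample()  # first sample confirms the connection
print(f"Connected; first sample at {timestamp:.3f}: {sample}")
```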
As a Clinician, I can check headset contact quality - 3
- Assume that we can use the Unity Emotiv examples or the Cortex API (e.g. via cURL), and adapt those for our headset of choice (a rough query sketch follows this story).
- AC - User can look at the contact quality in real-time within the Unity application.
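A rough sketch of querying headset status over the Emotiv Cortex WebSocket/JSON-RPC service; the port and method name are taken from the public Cortex documentation but should be verified, and the auth/session steps needed for live contact-quality streaming are omitted here.

```python
import json
import ssl
from websocket import create_connection  # pip install websocket-client

# Cortex listens on a local secure WebSocket; its self-signed cert is ignored here.
ws = create_connection("wss://localhost:6868", sslopt={"cert_reqs": ssl.CERT_NONE})
ws.send(json.dumps({"jsonrpc": "2.0", "id": 1, "method": "queryHeadsets", "params": {}}))
reply = json.loads(ws.recv())
print(reply.get("result"))  # per-headset status/sensor information
ws.close()
```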
Training
As a Clinician, I can name training labels/actions - 1
- Assume 3 labels are needed; supporting more is harder.
As a Clinician, I can choose an image for training - 3
- Assumption - training images are kept in a designated folder; we just pull from that folder (see the folder-scan sketch below).
- AC - User can select from the available images in the folder to be the target image used during training.
As a Clinician, I can add a new image for use in training - 3
- AC - User knows where to put images in the folder outside of the application.
- AC - User adds a new image into the folder of training images.
- AC - The application can use any image from the training folder as the training target.
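A minimal sketch of the folder-based image workflow assumed in the two stories above; the folder name and accepted extensions are placeholders.

```python
from pathlib import Path

TRAINING_IMAGE_DIR = Path("TrainingImages")      # hypothetical folder location
IMAGE_EXTENSIONS = {".png", ".jpg", ".jpeg"}

def list_training_images(folder: Path = TRAINING_IMAGE_DIR) -> list[Path]:
    """Return every image file in the training folder, sorted by name."""
    return sorted(p for p in folder.iterdir() if p.suffix.lower() in IMAGE_EXTENSIONS)
```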
As a Clinician, I can choose what feedback training provides (color, position, scale, audio, etc) - 13
- Story 1 - I can change the color of the image as feedback (i.e. putting a filter on top of the image of a specific color).
- Story 2 - I can choose from a predefined simple animation for the image - shake, bounce, wiggle, grow.
- Story 3 - I can preview my simple animation.
- Story 4 - I can change the size of the image used for feedback.
- Story 5 - I can select to play audio from my image.
- Only specific audio is included.
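One possible data shape for these feedback choices, shown only to make the options concrete; field names and defaults are illustrative, not an agreed format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedbackSettings:
    color_filter: Optional[str] = None   # e.g. "#FF0000" overlay on the image
    animation: Optional[str] = None      # one of "shake", "bounce", "wiggle", "grow"
    scale: float = 1.0                   # relative image size during feedback
    audio_clip: Optional[str] = None     # name of one of the bundled audio clips
```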
As a Clinician, I can choose training parameters (window settings, maybe classifier, etc) - 2
- AC - User can change the primary training parameters available from the Bessy Classifier.
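Illustrative only: the real parameter names come from the Bessy classifier and still need to be confirmed; these are the kinds of values we expect to expose.

```python
training_params = {
    "window_length_s": 2.0,    # epoch length used for feature extraction
    "window_overlap": 0.5,     # fraction of overlap between consecutive windows
    "classifier": "LDA",       # or whatever Bessy supports (SVM, etc.)
    "trials_per_label": 10,    # repetitions of each MI action during training
}
```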
As a Clinician, I can initiate training on a Mental Imagery (MI) action - 1
As a Clinician, I can remove poor training segments - 13
- Needs a spike; one candidate approach (amplitude-threshold rejection) is sketched below.
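One candidate approach for the spike: drop training windows whose peak-to-peak amplitude exceeds a threshold before the classifier is (re)trained. The threshold, units, and array shapes are assumptions.

```python
import numpy as np

def reject_noisy_windows(windows: np.ndarray, max_ptp_uv: float = 100.0):
    """windows: (n_windows, n_channels, n_samples). Returns kept windows and the keep mask."""
    ptp = windows.max(axis=-1) - windows.min(axis=-1)   # peak-to-peak per window/channel
    keep = (ptp < max_ptp_uv).all(axis=1)               # reject a window if any channel is noisy
    return windows[keep], keep
```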
As a Clinician, I can save a profile (classifier, settings, feedback, etc) - 8
- User can create a new profile
- User can select from a list of available profiles.
- User can load a profile from locally saved options.
- User can save settings into a profile.
- This includes the training settings, the BCI settings, the feedback during training, and the “main” application settings.
- Is this manual or automatic on any given change?
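A sketch of a profile as one JSON file per profile holding the sub-settings listed above; the folder, file layout, and keys are assumptions until the save/load behaviour is decided.

```python
import json
from pathlib import Path

PROFILE_DIR = Path("Profiles")   # hypothetical save location

def save_profile(name: str, training: dict, bci: dict, feedback: dict, app: dict) -> Path:
    """Write all settings for one named profile to Profiles/<name>.json."""
    PROFILE_DIR.mkdir(exist_ok=True)
    path = PROFILE_DIR / f"{name}.json"
    path.write_text(json.dumps(
        {"training": training, "bci": bci, "feedback": feedback, "application": app},
        indent=2))
    return path

def list_profiles() -> list[str]:
    """Names of all locally saved profiles."""
    return sorted(p.stem for p in PROFILE_DIR.glob("*.json"))
```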
As a Clinician, I can visualize the EEG - 21
- Needs a spike; review the work from the 2020 summer student. A rough scrolling-view sketch follows.
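A very rough scrolling-view sketch using pylsl and matplotlib, to be replaced or refined after the spike; buffer length and trace offsets are arbitrary.

```python
import numpy as np
import matplotlib.pyplot as plt
from pylsl import StreamInlet, resolve_byprop

inlet = StreamInlet(resolve_byprop("type", "EEG", timeout=10.0)[0])
n_ch = inlet.info().channel_count()
buf = np.zeros((n_ch, 1000))                  # keep the last 1000 samples per channel

plt.ion()
fig, ax = plt.subplots()
lines = [ax.plot(buf[ch] + ch * 100.0)[0] for ch in range(n_ch)]  # stacked traces
while plt.fignum_exists(fig.number):
    chunk, _ = inlet.pull_chunk(timeout=0.2)
    if chunk:
        data = np.asarray(chunk).T[:, -buf.shape[1]:]   # (channels, samples)
        buf = np.roll(buf, -data.shape[1], axis=1)
        buf[:, -data.shape[1]:] = data
        for ch, line in enumerate(lines):
            line.set_ydata(buf[ch] + ch * 100.0)
        ax.relim()
        ax.autoscale_view()
        plt.pause(0.01)
```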
Activity
As a Clinician, I can start an activity (using the trained classifier) - 3
- AC - After training, user can “start” the application to get real-time selections from the BCI
- AC - The application can receive inputs from the BCI.
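A sketch under one assumption: the Bessy classifier publishes its real-time decisions as an LSL stream of label strings that the activity consumes; the stream name and labels are placeholders.

```python
from pylsl import StreamInlet, resolve_byprop

streams = resolve_byprop("name", "BCI_Predictions", timeout=10.0)  # hypothetical stream name
inlet = StreamInlet(streams[0])
while True:
    label, ts = inlet.pull_sample()
    print(f"{ts:.2f}: selected action = {label[0]}")   # e.g. "left", "right", "rest"
```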
As a Clinician, I can set up an activity (video, cells/objects, navigation style) - 13
- Story 1 - I can populate cells which can be selected by a BCI
- Story 2 - I can select from a preset number of actions for each cell.
- A cell can move to a new scene with a movie
- A cell can move back to the navigation scene
- A cell can play a movie
- A cell can pause playback
- A cell can stop playback completely
- Story 3 - I can move cell locations around to change their position in the application.
- Story 4 - I can test my application set-up using mouse/keyboard
- Story 5 - I can test my application set-up using touch input (select the button)
- Story 6 - I can navigate through the options using the appropriate navigation approach
- This needs to be reviewed as a spike.
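An illustrative layout description for an activity; keys, cell actions, and the navigation value are placeholders until the navigation spike settles the real design.

```python
activity_config = {
    "navigation_style": "grid",            # to be decided by the navigation spike
    "cells": [
        {"id": "cell_1", "position": [0, 0], "action": "play_movie",
         "video": "Videos/clip1.mp4"},
        {"id": "cell_2", "position": [0, 1], "action": "pause_playback"},
        {"id": "cell_3", "position": [1, 0], "action": "back_to_navigation"},
    ],
}
```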
As a Clinician, I can map BCI actions to objects (what an action does to a cell) - 8
- Story 1 - I can select which of the available, trained BCI actions activates a cell.
- Story 2 - I can select what feedback I provide when a cell is selected.
- Story 3 - I cannot select more than one option for my mapped BCI action.
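A small sketch of the mapping constraint above: each cell is driven by exactly one trained BCI action (a plain dict enforces one action per cell); all names are placeholders.

```python
def validate_mapping(mapping: dict[str, str], trained_actions: set[str]) -> None:
    """mapping: cell id -> BCI action name; one entry per cell means one action per cell."""
    for cell, action in mapping.items():
        if action not in trained_actions:
            raise ValueError(f"{cell}: '{action}' is not a trained BCI action")

validate_mapping({"cell_1": "push", "cell_2": "left"}, {"push", "left", "rest"})
```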
As a Clinician, I can provide a set of videos for activities - 3
- AC - User knows where to put videos in the folder outside of the application.
- AC - User adds a new video into the folder.
- AC - The application can use any video from the folder as the output of a cell.
As a Clinician, I can change the visual settings of the application (background color, etc) - 3
- AC - I can change the outline of each cell in the application
- AC - I can change the primary background of the scene
- AC - I can change the primary background of each cell
Deployment
As a Clinician, I can start/stop the application - 2
As a Clinician, I can resize the application window to my liking - 1
As a Clinician, I can put the application in a “windowed” or “fullscreen” view - 0.5
As a Clinician, I can change the resolution of the application to my liking - 2
As a Clinician, I can use the application on Windows 11 - 3
- AC - The headset connects on Windows 11
- AC - Bessy Python connects on Windows 11
- AC - The application runs on Windows 11
As a Clinician, I can install the application on my computer (or phone?) - 5
- I know where I can get the executable application from the internet.
- I know how to run the application as an Administrator/accept the default warnings and risks
- I know the order to run the software outside of the application (see the launch-order sketch below).
- I know how to run the software on my computer of choice.
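A hypothetical helper that documents the assumed launch order outside the application: headset-to-LSL bridge first, then the Bessy Python backend, then the Unity build. All executable names and paths are placeholders.

```python
import subprocess
import time

bridge = subprocess.Popen(["EmotivLSLBridge.exe"])        # headset -> LSL (placeholder name)
time.sleep(5)                                             # give the LSL stream time to appear
backend = subprocess.Popen(["python", "bessy_main.py"])   # hypothetical Bessy entry point
app = subprocess.Popen(["BCIActivity.exe"])               # the Unity build (placeholder name)

app.wait()            # when the Unity app closes, shut the rest down
backend.terminate()
bridge.terminate()
```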