Kick-Off Scope Overview
Overview
The following is a summary of the scope for the “Tick 1” project (name TBD, possibly “Brain Tube” or “MindTV”). The goal of this project is to produce a software prototype that helps kids learn brain-computer interface (BCI) skills. This first prototype will focus on selecting and playing a video using a BCI.
The Tick 1 project will be kicked off in the week of January 8. The development team will follow an iterative development process, with work planned in one-week “sprints”. The team will aim to demonstrate new functionality at the end of each sprint and, when possible, provide alpha versions of the product to the clinical team (DR) for evaluation and feedback.
The project is planned to run for six weeks, with the final deliverables provided at the end of the last sprint, ~February 16.
Project scope
Below are the user stories that define the project scope. The “application” refers to the software application, i.e., the user interface (name is TBD… BrainTube?).
“Clinician” refers to Danette 🙂
“User” refers to the general user of the application prototype
Connectivity
As a Clinician, I can connect to a headset - 3 pt
- Emotiv headset support only (this is what users have today)
- User can launch any apps needed to communicate with the Emotiv headset (see the connection sketch after this list)
As a Clinician, I can check headset contact quality - 1 pt
- User knows where to check headset contact quality using the existing Emotiv tools
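Since headset discovery and connection go through EMOTIV’s Cortex service, the connection story will likely reduce to a few JSON-RPC calls over Cortex’s local WebSocket. A minimal sketch in Python (using the third-party websocket-client package); the method names come from the published Cortex API, but treat payload details as assumptions to verify against EMOTIV’s docs:

```python
import json
import ssl

import websocket  # third-party: pip install websocket-client

# Cortex runs a local WebSocket service with a self-signed certificate,
# so certificate verification is disabled for this localhost connection.
CORTEX_URL = "wss://localhost:6868"

def cortex_call(ws, method, params, call_id=1):
    """Send one JSON-RPC 2.0 request to Cortex and return the parsed reply."""
    ws.send(json.dumps({"id": call_id, "jsonrpc": "2.0",
                        "method": method, "params": params}))
    return json.loads(ws.recv())

ws = websocket.create_connection(CORTEX_URL,
                                 sslopt={"cert_reqs": ssl.CERT_NONE})

# List the headsets the Emotiv software can see, then ask Cortex to
# connect to the first one found.
headsets = cortex_call(ws, "queryHeadsets", {})["result"]
if headsets:
    cortex_call(ws, "controlDevice",
                {"command": "connect", "headset": headsets[0]["id"]},
                call_id=2)
```

Contact quality itself stays in the Emotiv tools per the story above, so the prototype only needs the connection half.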
Training
As a Clinician, I can name training labels/actions as I desire - 1 pt
- Application shall support 3 labels/actions
As a Clinician, I can add a new image for use in training (instead of the ball) - 3 pt
- User can put images in a folder outside of the application.
- The application can use any image from the training folder as the training target (see the folder-scan sketch below).
As a Clinician, I can choose an image for training - 3 pt
- User can select from the available images in the folder to be the target image used during training.
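Both image stories amount to scanning a clinician-managed folder. A minimal sketch; the folder location and accepted file types here are assumptions, not decisions:

```python
from pathlib import Path

# Hypothetical location for the clinician-managed image folder; the real
# path is a product decision (e.g. a folder next to the installed app).
IMAGE_DIR = Path.home() / "BrainTube" / "training-images"
IMAGE_TYPES = {".png", ".jpg", ".jpeg", ".gif"}

def list_training_images(folder: Path = IMAGE_DIR) -> list[Path]:
    """Return every image file in the folder, for the selection UI."""
    return sorted(p for p in folder.iterdir()
                  if p.is_file() and p.suffix.lower() in IMAGE_TYPES)
```

The same folder-scan pattern should cover the activity-videos story in the Activity section below.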
As a Clinician, I can choose what feedback training provides - 8 pt
- Change the color of the image as feedback (i.e., overlaying a color filter of a specific color on the image).
- Predefined simple animations for the image: shake, bounce, wiggle, grow.
- Change the size of the image used for feedback.
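The three feedback options could be captured in one small settings object. A sketch only; all names are illustrative, and the real shape will follow whatever UI framework the prototype uses:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Animation(Enum):
    # The four predefined animations named in the story above.
    SHAKE = auto()
    BOUNCE = auto()
    WIGGLE = auto()
    GROW = auto()

@dataclass
class FeedbackSettings:
    """The clinician-chosen feedback style for the training target image."""
    tint_color: str | None = None       # e.g. "#00FF00" filter over the image
    animation: Animation | None = None  # one of the preset animations, if any
    scale: float = 1.0                  # relative size change of the image
```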
As a Trainee, I know when to start imagining my mental command - 1 pt
- Audio cue is played before training begins (just like Think2Switch training)
As a Clinician, I can initiate training on a Mental Imagery (MI) action - 1 pt
- User presses a button to begin
As a Clinician, I can choose training parameters (window settings, maybe classifier, etc.) - 2 pt
- User can change the primary training parameters available from the Bessy Classifier.
As a Clinician, I can save a profile (classifier, settings, feedback, etc) - 8 pt
- User can create a new profile
- User can select from a list of available profiles.
- User can load a profile from locally saved options.
- User can save settings into a profile.
  - This includes the training settings, the BCI settings, the feedback during training, and the “main” application settings.
- Open question: is saving manual or automatic on any given change?
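Since profiles bundle plain settings, one JSON file per named profile would satisfy the save/select/load bullets. A minimal sketch; the profile folder location is an assumption:

```python
import json
from pathlib import Path

# Hypothetical profile folder: one JSON file per named profile.
PROFILE_DIR = Path.home() / "BrainTube" / "profiles"

def save_profile(name: str, settings: dict) -> Path:
    """Persist training, BCI, feedback, and application settings together."""
    PROFILE_DIR.mkdir(parents=True, exist_ok=True)
    path = PROFILE_DIR / f"{name}.json"
    path.write_text(json.dumps(settings, indent=2))
    return path

def list_profiles() -> list[str]:
    """Names of the locally saved profiles, for the selection list."""
    return sorted(p.stem for p in PROFILE_DIR.glob("*.json"))

def load_profile(name: str) -> dict:
    return json.loads((PROFILE_DIR / f"{name}.json").read_text())
```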
As a Clinician, I can visualize the EEG - 1 pt
- User knows where to visualize the EEG using the existing Emotiv tools
Activity
As a Clinician, I can start an activity (using the trained classifier) - 3 pt
- After training, user can “start” the application to get real-time selections from the BCI
- The application can receive inputs from the BCI.
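Real-time selections would come from Cortex’s mental-command (“com”) stream. This continues the connection sketch in the Connectivity section (ws, cortex_call, and headsets are defined there); method names are from the public Cortex API, but payload details and the dispatch hook are assumptions:

```python
# Credentials come from registering the app with EMOTIV (placeholders here).
CLIENT_ID, CLIENT_SECRET = "your-client-id", "your-client-secret"

# requestAccess prompts the user once in the EMOTIV Launcher; authorize
# then returns the token used by every session-level call.
cortex_call(ws, "requestAccess",
            {"clientId": CLIENT_ID, "clientSecret": CLIENT_SECRET}, call_id=3)
token = cortex_call(ws, "authorize",
                    {"clientId": CLIENT_ID, "clientSecret": CLIENT_SECRET},
                    call_id=4)["result"]["cortexToken"]

# Open a session on the connected headset and subscribe to the
# mental-command stream produced by the trained classifier.
session_id = cortex_call(ws, "createSession",
                         {"cortexToken": token,
                          "headset": headsets[0]["id"],
                          "status": "active"},
                         call_id=5)["result"]["id"]
cortex_call(ws, "subscribe",
            {"cortexToken": token, "session": session_id,
             "streams": ["com"]},
            call_id=6)

def handle_mental_command(command: str, power: float) -> None:
    # Hypothetical hook: activate whichever cell is mapped to `command`.
    print(command, power)

while True:
    msg = json.loads(ws.recv())
    if "com" in msg:
        command, power = msg["com"]  # e.g. ["push", 0.82]
        handle_mental_command(command, power)
```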
As a Clinician, I can set up an activity (video, cells/objects, navigation style) - 13 pt
- I can populate cells which can be selected by a BCI
- I can populate empty pages with cells
- I can populate up to 3 pages of cells.
- I can select from a preset number of actions for each cell:
  - A cell can move to a new page containing a movie
  - A cell can move back to the navigation page
  - A cell can play a movie
  - A cell can pause playback
  - A cell can stop playback completely
- I can move cell locations around to change their position on the page, and within the application.
- I can test my application set-up using mouse/keyboard
- I can test my application set-up using touch input (select the button)
- I can navigate through the options using the appropriate navigation approach
  - This needs to be reviewed as a spike.
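The pages/cells/actions above suggest a small data model. A sketch under the scope as written; field names are illustrative:

```python
from dataclasses import dataclass, field
from enum import Enum, auto

class CellAction(Enum):
    # The preset cell actions listed in the story above.
    GO_TO_PAGE = auto()  # move to a new page (e.g. one containing a movie)
    GO_BACK = auto()     # move back to the navigation page
    PLAY = auto()        # play a movie
    PAUSE = auto()       # pause playback
    STOP = auto()        # stop playback completely

@dataclass
class Cell:
    action: CellAction
    target_page: int | None = None  # only meaningful for GO_TO_PAGE
    video: str | None = None        # path of the movie this cell controls

@dataclass
class Page:
    cells: list[Cell] = field(default_factory=list)

@dataclass
class Activity:
    pages: list[Page] = field(default_factory=list)  # up to 3 per the scope
```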
As a Clinician, I can map BCI actions to objects (what an action does to a cell) - 8 pt
- I can select which of the available, trained BCI actions activates a cell
- I can select what feedback I provide when a cell is selected
- I cannot select more than one option for my mapped BCI action
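Building on the data-model sketch above, keying the mapping by the trained command name gives the one-option rule for free, since a dict holds exactly one cell per key. Command names here are examples, not a fixed set:

```python
# Each trained mental command activates at most one cell by construction.
activity = Activity(pages=[Page(cells=[Cell(CellAction.PLAY),
                                       Cell(CellAction.STOP)])])
action_to_cell: dict[str, Cell] = {
    "push": activity.pages[0].cells[0],  # example command name
    "lift": activity.pages[0].cells[1],  # example command name
}
```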
As a Clinician, I can provide a set of videos for activities - 3 pt
- User can put videos in a folder outside of the application.
- The application can use any video from the folder as the output of a cell.
As a Clinician, I can change the visual settings of the application (background color, etc.) - 3 pt
- I can change the outline of each cell in the application
- I can change the primary background of the page
- I can change the primary background of each cell
Deployment
As a Clinician, I can start/stop the application - 2 pt
As a Clinician, I can resize the application window to my liking - 1 pt
As a Clinician, I can put the application in a “windowed” or “fullscreen” view - 0.5 pt
As a Clinician, I can change the resolution of the application to my liking - 2 pt
As a Clinician, I can use the application on Windows 11 - 3 pt
- The headset connects on Windows 11
- The application runs on Windows 11
As a Clinician, I can install the application on my computer - 5 pt
- I know where I can get the executable application from the internet.
- I know how to run the application as an Administrator and accept the default warnings and risks
- I know the order in which to run the supporting software (e.g., the Emotiv tools) outside of the application
- I know how to run the software on my computer of choice.