Tick 2
Overview
The purpose of this meeting was to review the specification that Eli wrote after some envisioning discussions with Anna:
FlickTok-Functional Spec_v0.1_2024May13.docx
Attendees: Greg Wilding, Eli Kinney-Lang, Anna
Review notes
- FES device
- It is a “NeuroTrack”; there is no open API, but Ion hacked in a relay so it can be activated by a manual switch via a 3.5 mm switch input. The relay can’t be directly connected to a computer, so another solution will need to be found, ex: a USB-triggered relay. This is a risk item.
- The application will have a manual trigger button so we can test that the connection works. This can be one of the first features we add (see the sketch below).
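As a concrete starting point for the relay risk item, here is a minimal sketch of pulsing a serial-controlled USB relay from Python using pyserial. The port name and command bytes are hypothetical placeholders; the real protocol depends on whichever board Ion lands on.

```python
# Minimal sketch: pulse a serial-controlled USB relay to stand in for a
# press of the NeuroTrack's 3.5 mm manual switch.
# ASSUMPTION: the port name and command bytes below are hypothetical; the
# real values depend on the relay board's documented protocol.
import time

import serial  # pyserial

RELAY_PORT = "COM4"              # hypothetical port name
RELAY_ON = b"\xA0\x01\x01\xA2"   # hypothetical "close relay" command
RELAY_OFF = b"\xA0\x01\x00\xA1"  # hypothetical "open relay" command

def pulse_fes(duration_s: float = 0.5) -> None:
    """Close the relay briefly, as if the manual switch were pressed."""
    with serial.Serial(RELAY_PORT, 9600, timeout=1) as port:
        port.write(RELAY_ON)
        time.sleep(duration_s)
        port.write(RELAY_OFF)

if __name__ == "__main__":
    pulse_fes()  # later: wire this to the manual trigger button in the UI
```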
- BCI Essentials will be used for processing the raw EEG from the headset, training, producing predictions, etc. The ThinkLab application is an example of how BCI Essentials can be wrapped in a parent application instead of running as two separate processes.
- Data capture, aka “saving”
- EEG data (raw, messages, timestamps, channels)
- FlickTok messages (timestamp of a swipe, video length before a swipe, comments re: intentionality, trial start/end)
- Some of this is probably missing or wrong - we need to get into the rest of the application first so we have some common ground (a sketch of a possible message record follows below).
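As a talking point for that design discussion, here is a hedged sketch of what a saved FlickTok message record could look like. Every field name is an assumption, not an agreed format.

```python
# Sketch of a possible FlickTok message record for data capture.
# ASSUMPTION: all field names here are placeholders for discussion; the
# real data model still needs to be designed.
import time
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class FlickTokMessage:
    event: str  # ex: "swipe", "trial_start", "trial_end"
    timestamp: float = field(default_factory=time.time)
    video_length_before_swipe_s: Optional[float] = None
    intentionality_comment: Optional[str] = None
```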
- Settings
- Discussion about trials, windows, etc. Will need to get something ~working into Anna’s hands so we can tweak and expose the right settings.
- A script could be used for the settings instead of a UI (much faster, less user friendly). It would require the user to restart the application each time the settings were changed (maybe not a deal breaker).
- Alternatively, we could have a UI that shows some default settings the user can change (but they are not persisted).
- There are no user profiles; this is a single-user application
- Actually, perhaps we do all three here:
- first - settings are just part of the scripts, ex: settings.py (see the sketch after this list)
- second - create a settings UI to change the important settings on the fly (requires some infrastructure to recreate the EegData object each time). These settings don’t persist.
- third - add persistence to the settings UI. Make this something that comes later in the project (however, we still need to design a proper model for sharing data in the application).
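A minimal sketch of the phase-one approach, assuming settings live in a plain settings.py that is edited by hand and picked up on restart. The names and defaults are placeholders until the right settings to expose are worked out with Anna.

```python
# settings.py - phase one: settings are a plain script, edited by hand;
# the application must be restarted to pick up changes.
# ASSUMPTION: the names and default values below are placeholders.

TRIAL_COUNT = 20       # number of training trials per session
WINDOW_LENGTH_S = 2.0  # EEG processing window length, in seconds
READY_DELAY_S = 6.0    # enforced "ready" delay (~5-7 s per the notes)
```

Phase three could persist the same values to a file beside the application; the point of phase one is that no UI, persistence, or shared data model is needed yet.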
- Training
- The training mock-up shows a red/green box around a video. The idea is that the box would be green when it’s time for the user to perform the action.
- The training will just be automatic instead of having the user press a train button. The user will be stepped through the desired number of trials (see the sketch below).
- There may also need to be an animated action on the side, ex: showing a closing hand (or whatever action the user is trying to perform).
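A minimal sketch of the automatic training flow described above: the user is stepped through a fixed number of trials, with the box turning green when it is time to attempt the action. The set_box_color stand-in and all timings are assumptions.

```python
# Sketch of the automatic training flow: step the user through N trials,
# turning the box green when it is time to attempt the action.
# ASSUMPTIONS: set_box_color is a stand-in for updating the UI border, and
# the rest/cue timings are placeholders; a real version would also record
# trial start/end markers for data capture.
import time

def set_box_color(color: str) -> None:
    print(f"box -> {color}")  # stand-in for changing the on-screen border

def run_training(trials: int = 20, rest_s: float = 4.0, cue_s: float = 3.0) -> None:
    for trial in range(1, trials + 1):
        set_box_color("red")    # rest / get ready
        time.sleep(rest_s)
        set_box_color("green")  # attempt the action now
        print(f"trial {trial}/{trials}")
        time.sleep(cue_s)
    set_box_color("red")        # done

if __name__ == "__main__":
    run_training(trials=3, rest_s=1.0, cue_s=1.0)  # quick smoke test
```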
- Inference
- In “social media” mode, the red/green box indicates when the system is ready to be used. There is no signal for “ready”; we’re just going to enforce a ~5-7 second delay before letting the user start.
- This may belong in the settings UI, as it could be a per-person setting.
- Why not just use T2S? It can send keystrokes to a web browser to trigger scrolling.
- G.tec headset is the likely target
- Don’t want to be stuck with the Emotiv headset re: access to data, etc.
- Nobody really likes Emotiv
- Why websites? They are harder to interact with automatically; can we avoid this pain?
- Easy access to lots of (and better) content is the main reason. This will be better than just dumping some videos into a folder and having the app cycle through them. Though, worst case, perhaps this is the fallback.
- We could make a simple video player the first step and add website control later
- What website?
- Instagram is the first target
Next steps
- Meetings
- First planning meeting next week (Thursday)
- Will combine the demo + planning meeting on subsequent Thursdays
- Project management
- Get project management environment set up (Jira project, Confluence space, etc.)
- Figure out how to get the simplest app delivered early so we can iterate on settings, etc. (ex: defer settings UI, persistence, animations, website)
- Fill up backlog
- Plan spikes for risks:
- How to actually control FES device?
- existing hardware has a relay switch, but this can’t be connected to the computer.
- Need Ion to make something
- A T2S device could be used, but there are none available, unless one of the very first units is used.
- How do we “scroll” to the next video in Instagram from an application?
- Maybe just simulate keystrokes? (see the sketch below)
- Is there an API for this? (it would be specific to each website)
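A sketch of the keystroke idea using pyautogui, which sends a keypress to whatever window currently has focus. Whether Instagram actually advances on an arrow key is exactly what this spike needs to verify.

```python
# Sketch of the "simulate keystrokes" idea: send an arrow key to the
# currently focused browser window to advance to the next video.
# ASSUMPTIONS: pyautogui presses keys in whatever window has focus (fragile),
# and whether Instagram advances on a Down-arrow press is exactly what this
# spike needs to verify.
import pyautogui

def next_video() -> None:
    pyautogui.press("down")  # simulated Down-arrow keypress

if __name__ == "__main__":
    next_video()
```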
- How do we get the duration of a video?
- Not sure; this depends on the website.
- Discussed why we’re using a website at all instead of a set of videos we can step through.
- Application design spike
- Single application?
- Do we need a real browser, or can a QWebEngineView be used? (we want to be able to put a red/green box around the website - a single application makes this easier; see the sketch below)
- MVC design, ie: how to propagate changes to settings
- Persistence design, ie: how to save values
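For the QWebEngineView question, a minimal sketch assuming PyQt5 with QtWebEngine installed: the site is embedded in a single application and the red/green “ready” box is drawn as a border around it. Whether Instagram renders and scrolls acceptably inside QWebEngineView is what the spike has to answer.

```python
# Sketch for the single-application idea: embed the website in a
# QWebEngineView and draw the red/green "ready" box as a border around it.
# ASSUMPTIONS: PyQt5 with QtWebEngine installed; whether Instagram works
# well enough inside QWebEngineView is what the spike must answer.
import sys

from PyQt5.QtCore import QUrl
from PyQt5.QtWidgets import QApplication, QFrame, QVBoxLayout
from PyQt5.QtWebEngineWidgets import QWebEngineView

class BorderedBrowser(QFrame):
    def __init__(self) -> None:
        super().__init__()
        self.view = QWebEngineView()
        self.view.setUrl(QUrl("https://www.instagram.com/"))
        layout = QVBoxLayout(self)
        layout.setContentsMargins(8, 8, 8, 8)  # leave room for the border
        layout.addWidget(self.view)
        self.set_ready(False)

    def set_ready(self, ready: bool) -> None:
        """Turn the border green when the system is ready, red otherwise."""
        color = "green" if ready else "red"
        self.setStyleSheet(f"BorderedBrowser {{ border: 8px solid {color}; }}")

if __name__ == "__main__":
    app = QApplication(sys.argv)
    window = BorderedBrowser()
    window.resize(480, 800)
    window.show()
    sys.exit(app.exec_())
```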