
Minds AI - Guidance

We know you want to make the most of your brain activity, so we're here to help! Here are supporting figures and general guidance for using your MindsApplied Artificial Intelligence (MAI).

We are grateful you are trying out our applications, but please understand that this technology is still developing: it will take some time and occasional troubleshooting, though the results are well worth the wait! We're excited to guide you on this journey through this page; if you need more help, please leave a review in the Feedback section at the bottom of minds-applied.com/minds-ui.

 

Section-specific links can be found on the right-hand side, but if this is your first time trying out our MAI, please carefully follow the steps in this User Guide. These are experimental applications that MindsApplied is working to improve with the introduction of models created through our Minds AI research.

 

Your Minds UI connects to a variety of brain-computer interfaces to record neural data for artificial intelligence and for testing cutting-edge neurotechnology applications. The trained models are meant to filter your brain activity, recognize recurring patterns in it, and predict the thought or action you are most likely thinking. We leverage these models in our real-world applications Cognichat and Cognitrol. We plan to introduce AI to the Neurovision application in the near future so that it can provide more practical analysis, diagnostics, and visuals, but for now calibration is not required to use it. When applied to communication, control, or mental health, the possibilities of what you'll be able to do with neurotechnology are endless. If you have any questions, please reach out to our team via any previous correspondence or contact@minds-applied.com.

Read Me:

  1. While our MAI model download pipeline is not yet publicly available on our website, please reach out to our team with your research or application development goals to discuss how we can help you meet them.

  2. Models require data recorded from your Minds UI to train. See Minds UI User Guidance Calibration sections D1 & D2 for more information.

A) Signal Filter

The Minds AI Signal Filter works as-is with any data recorded offline or in real time. The package is available to download here for Python 3.10 or 3.11 and requires a purchased license ('YOUR-LICENSE-KEY') to initialize. If you need another version of Python, or the package doesn't work on your system, please let us know and we can make the appropriate adjustments. We are working to make the DLL language/version agnostic and available offline as well.
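Since the package currently targets Python 3.10 and 3.11, a quick guard before importing it can save some confusion. A minimal sketch (the supported-version set below simply restates the note above):

```python
import sys

# Versions the Minds AI Filter package currently supports (per the note above)
SUPPORTED = {(3, 10), (3, 11)}

if sys.version_info[:2] not in SUPPORTED:
    print(
        f"Python {sys.version_info[0]}.{sys.version_info[1]} is not yet "
        "supported by mindsai_filter_python; contact the MindsApplied team."
    )
```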


After adding the mindsai_filter_python file to your project and ensuring version compatibility, the filter can be called as follows (note that `lambda` is a reserved word in Python, so pass the hyperparameter under another name, such as `lam`):

import mindsai_filter_python

# Initialize with your purchased license key (required before every session)
mindsai_filter_python.initialize_mindsai_license('YOUR-LICENSE-KEY')
print(mindsai_filter_python.get_mindsai_license_message())

# data: 2-D array of channels x time
lam = 1e-8
filtered_data = mindsai_filter_python.mindsai_python_filter(data, lam)

 

It's that easy! The license message returns the date your key remains active until. Initialization is currently required before every use, but we can provide an offline version as well. The filter expects data to be a 2-D continuous array of channels x time and relies on one hyperparameter. It can be looped for real-time usage on trials of at least 4 seconds of data. Filtered frequencies will have smaller amplitudes, but this scaling is relative and the significant oscillations are retained.

 

The hyperparameter, `lambda`, controls how much your Minds AI Filter modifies the original signal and should be chosen on a logarithmic scale between `0` and `0.1`. A lower `lambda` value, like the default `1e-8`, causes the filter to make bolder adjustments for more complex transformations that highlight the structure across `channels`, such as for real-time filtering (segments of at least 4 seconds). A higher `lambda` value, like `1e-5`, works best with more data (such as 60-second trials) for still helpful, but more conservative, adjustments.
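For real-time use, the stream can be cut into consecutive windows of at least 4 seconds and each window passed to the filter in turn. A minimal sketch, assuming a 250 Hz sampling rate and an 8-channel headset (both hypothetical; substitute your board's values), with the actual filter call left as a comment:

```python
import numpy as np

FS = 250          # assumed sampling rate in Hz (check your board's spec)
WINDOW_S = 4      # the filter needs at least 4 seconds of data per call
N_CHANNELS = 8    # assumed channel count

def segment_for_realtime(data, fs=FS, window_s=WINDOW_S):
    """Split a channels x time array into consecutive fixed-length windows."""
    samples = window_s * fs
    n_windows = data.shape[1] // samples
    return [data[:, i * samples:(i + 1) * samples] for i in range(n_windows)]

# Stand-in for a live recording: 8 channels x 20 seconds of noise
recording = np.random.randn(N_CHANNELS, 20 * FS)

for window in segment_for_realtime(recording):
    # filtered = mindsai_filter_python.mindsai_python_filter(window, 1e-8)
    pass  # replace with the filter call once the license is initialized
```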

 

Effects and Efficacy:

We used a 2-fold stratified cross-validation grid search to tune the filter's key hyperparameter (λ). Classification relied on balanced accuracy using logistic regression on features derived from wavelet coefficients.

  1. MAI yields a cleaner signal than bandpass filtering alone. This is especially true for datasets characterized by high-frequency noise and low channel coherence.

  2. MAI works best when it is applied to the data as a whole and prior to a bandpass filter or individual electrode analysis.

  3. MAI is particularly useful for use cases with little data and a low signal-to-noise ratio.

  4. MAI improves AI prediction by 4-14 percent (depending on other filters and data quality) and is especially good for tasks which engage the entire brain.

 

DEAP Dataset Valence Decoder Accuracy:

  • +6% average improvement across 32 subjects and 32 channels

  • Maximum individual gain: +35%

  • Average gain in classification accuracy was 17% for cases where the filter led to improvement.

  • No decline in accuracy for any participant


Dragon EEG Valence Decoding


Emotion decoding of positive vs. negative valence, which has been used to inform a model predicting the strength of that valence. This requires at least 12 minutes of Calibration data taken from the Minds UI Emotion Recognition experiment and combines with real-time arousal from PSD for specific emotions.

  • Fitting 2 folds for each of 6 candidates, totaling 12 fits

  • 6 candidate hyperparameter values, each validated on 2 folds of the data

  • Best hyperparameter λ: 1e-05

  • Best balanced accuracy: 0.729
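The tuning procedure above can be sketched with scikit-learn's GridSearchCV. Here the filter step is a stand-in transformer (a real pipeline would call mindsai_python_filter inside transform, and the feature matrix would come from wavelet coefficients rather than random numbers); names like MAIFilter and lam are illustrative, not part of the package:

```python
import numpy as np
from sklearn.base import BaseEstimator, TransformerMixin
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline

class MAIFilter(BaseEstimator, TransformerMixin):
    """Stand-in for the Minds AI Filter step; lam is its hyperparameter."""
    def __init__(self, lam=1e-8):
        self.lam = lam
    def fit(self, X, y=None):
        return self
    def transform(self, X):
        # Real code would apply mindsai_python_filter(trial, self.lam)
        # to each trial before feature extraction; here X passes through.
        return X

pipe = Pipeline([("mai", MAIFilter()),
                 ("clf", LogisticRegression(max_iter=1000))])
lambdas = [10.0 ** -k for k in range(3, 9)]   # 6 candidates, 1e-3 .. 1e-8
search = GridSearchCV(pipe, {"mai__lam": lambdas},
                      cv=StratifiedKFold(n_splits=2),
                      scoring="balanced_accuracy")

rng = np.random.default_rng(0)
X = rng.standard_normal((40, 16))   # toy stand-in for wavelet features
y = np.arange(40) % 2               # binary valence labels
search.fit(X, y)                    # 2 folds x 6 candidates = 12 fits
```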


Original vs Minds AI Filter Across Multiple Channels


Single Channel Wavelet Transform


This time–frequency comparison shows that the Minds AI Filter reduces high-frequency noise (~40 Hz) and sharpens low-frequency activity (~3–7 Hz), enhancing signal clarity in a single EEG channel.
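To check a similar effect on your own recordings, you can compare band power before and after filtering. The sketch below computes mean FFT power in the ~3–7 Hz and ~35–45 Hz bands for a synthetic one-channel signal (the 250 Hz sampling rate and the test tones are assumptions for illustration):

```python
import numpy as np

FS = 250  # assumed sampling rate (Hz)

def band_power(x, fs, lo, hi):
    """Mean FFT power of a 1-D signal within the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= lo) & (freqs <= hi)
    return float(psd[mask].mean())

t = np.arange(0, 4, 1 / FS)                # one 4-second window
theta = np.sin(2 * np.pi * 5 * t)          # low-frequency activity (3-7 Hz)
noise = 0.5 * np.sin(2 * np.pi * 40 * t)   # high-frequency noise (~40 Hz)
raw = theta + noise

low = band_power(raw, FS, 3, 7)
high = band_power(raw, FS, 35, 45)
# After filtering (filtered = mindsai_python_filter(raw[None, :], lam)),
# the 35-45 Hz figure should drop relative to the 3-7 Hz figure.
```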


B) Emotion Decoder

  1. When the application begins you will see a Disclaimer page. After reading, click Begin to be taken to the Main Menu, which automatically starts a synthetic neural signal. This allows applications to run accurately on faux brain activity until a headset (board) is connected in the Configuration section. Note, however, that this data automatically saves to your file system and will need to be cleared if the application becomes too heavy. More about this in CONFIGURATION.

  2. From the Main Menu you will see buttons for Prediction, Configuration, Calibration, Visualization, and Termination.

  3. You will also see a live head plot of the classic 10-20 EEG system showing the most common electrode placements. Once your headset is connected, these will accurately reflect your active electrodes: colored green for positive polarity, red for negative, with opacity based on signal strength. (An advanced version of this is used in the Neurovision application!)


Troubleshooting:​

Data not filtering correctly?

  • Make sure you have an internet connection. Right now your Minds AI Filter requires this connection to verify your subscription key.

  • Make sure you have an active subscription key. A good check that your package is being referenced and initialized correctly is to print the subscription key response and check the date returned. Initialization must be done every time before filtering data (not one-and-done).

  • Ensure your MAI Filter package is in the same folder as your initialization code, or that its path is properly referenced.

  • Ensure you've properly calibrated your lambda hyperparameter; the default value may not be best for your data. Experiment with different orders of magnitude, or provide your data to our team to perform a hyperparameter search (which you can also do yourself).


Feedback

Let us know what you think

