
Minds UI - Guidance

We know you want to make the most out of your brain activity so we're here to help! Here's general guidance and documentation on using the MindsApplied Neural User Interface (MUI).

We are so grateful you are trying out our applications, but please understand that this technology is still developing, so it will take some time and occasional troubleshooting; the results are well worth the wait. We're excited to guide you on this journey through this page. However, if you need more help, please leave a review in the Feedback section at the bottom of minds-applied.com/minds-ui.

 

Section-specific links can be found on the right-hand side, but if this is your first time trying out our MUI, please carefully follow the steps in this User Guide. These are experimental applications MindsApplied is working to improve with the introduction of models created through our Crypt Algorithm. Unless told otherwise, please assume this technology is meant to leverage the advancements of your own AI, using your own models and training methods (more information in DATA ANALYSIS and MODEL LOADING).

 

The Minds UI connects to a variety of brain-computer interfaces to record neural data for artificial intelligence and for testing cutting-edge neurotechnology applications. Trained models are meant to recognize recurring patterns of brain activity and predict the thought or action you are most likely thinking of. We leverage these models in our real-world applications Cognichat and Cognitrol. We plan on introducing AI to the Neurovision application in the near future so that it can provide more practical analysis, diagnostics, and visuals, but for now calibration is not required to use it. When applied toward communication, control, or mental health, the possibilities of what you'll be able to do with neurotechnology are endless. If you have any questions, please reach out to our team via any previous correspondence or contact@minds-applied.com.

 

Read Me:

  1. The Minds UI will take up a full screen. We recommend having a secondary monitor so that you can access your file system while the application is running on the main screen.

  2. We recommend having someone else present with you to verify that your BCI headset is functioning properly. If this is not possible and you have questions or concerns, contact our team for guidance or follow these directions for best practices.

  3. Applications in Calibration and Prediction should be performed in a distraction-free environment. We recommend quiet, private areas free of noise, visual distractions, and strong odors, any of which may contaminate or skew data recording, analysis, and predictions.

  4. Make sure you are comfortable and avoid jostling as much as possible. Stress and movement may be picked up by your device. These may contaminate or skew data recording, analysis, and predictions.

  5. Try not to get tired. For best results, run any application in your average daily cognitive state, since fatigue shifts the frequency bands that predictions depend on.

    • Sitting can cause you to doze off easily. If this happens during any Calibration trials, we recommend waiting until you are more rested and, for safety, disregarding the last session. If you are sharing your data with MindsApplied, inform our team of any such occurrences.

    • There will be times to make notes during the intermissions. There's no need to make notes during other parts of the Calibration.

  6. You can press the MindsApplied logo in the top left to go back on any menu screens. You can press the escape button (ESC) at any moment to pause a running application and return to the Main Menu.

  7. Please carefully read the applicable sections of this User Guide before starting the application. We know everyone hates reading long documents, so we made this as short as possible so that you can quickly start your journey towards using Neuroscience as a Service! At MindsApplied we believe your neural data is the purest and most important type; after all, it's what makes you you! So treat it carefully. You wouldn't scribble over a pretty picture of yourself, would you?

 

A) PREPARATION
  1. Download the application from minds-applied.com/minds-ui or a link provided by our team.

  2. Unzip the MindsUI_Beta to an easily accessible folder (e.g. C:\Users\).

  3. Inside the MindsUI folder, run 'MindsUI.exe'.

    1. Note: Your computer’s security may attempt to block this file from running. You can disregard this message and run the file anyway.

  4. Put an EEG or other BCI headset on your head according to the manufacturer's instructions. If a gel or saline solution is supposed to be used with your headset, please make sure this step is followed. However, be aware that commercial usage of such a gel may be less than desirable.

  5. Verify that all channels are functioning properly. To test this, use the GUI recommended by the manufacturer of your headset. We recommend experimenting with different numbers of channels, or following the latest research.

  6. If the signal quality is poor on any electrode channel, use a 0.9% saline solution to clean the electrode contacts.

  7. Retry the channel verification test using the appropriate GUI.

  8. Once you have verified that your device is functioning as expected, you are ready to begin your journey into your mind!


B) BEGIN 
  1. When the application begins, you will see a Disclaimer page. After reading, click Begin to be taken to the Main Menu, which automatically starts a synthetic neural signal. This allows applications to run accurately on faux brain activity until a headset (board) is connected in the Configuration section. However, take note that this data is automatically saved to your file system and will need to be cleared if the application becomes too heavy. More about this in CONFIGURATION.

  2. From the Main Menu you will see buttons for Prediction, Configuration, Calibration, Visualization, and Termination.

  3. You will also see a live head plot of the classic 10-20 EEG system holding placements of the most common electrode locations. Once your headset is connected, these will accurately reflect your active electrodes: colored green for positive polarity, red for negative, and opacity based on signal strength. (An advanced version of this is used in the Neurovision application!)


C) CONFIGURATION

* Feel free to skip this section if you would like to just test our applications with the faux brain activity. (Predictions won't be accurate but the technology is still captivating!)

  1. The Configuration button takes you to the settings of the Minds UI. Access recordings, load predictive models, and most importantly connect your headset!

    • Credits are available here as well. Big thanks to all the individuals and wonderful technology that contributed to this hub for the future of brain-computer interaction!

  2. Connect your Headset allows you to connect to many of the most common commercial BCI headsets and label recording sessions. ​

  3. Choose your brain-computer interface (BCI) board from the drop-down menu.

    • ​For a complete list refer to brainflow.com, which has been instrumental in open source connective technology.

    • If your device is not listed, please leave a Feedback review or reach out to a member of our team and we will try to include it in a future release or special accommodation. 

    • Assume headset locations are based on your device defaults. Verify these if activity seems skewed.

  4. If you are using a USB dongle, select the appropriate COM port for your BCI. Disregard this step if your headset is connected without a USB dongle.

  5. Enter your preferred or provided subject ID. This uses the two-digit format ## and will save with an S prefix (e.g., S01).

  6. Enter your preferred or provided session number. This uses the three-digit format ### and will save with an S prefix (e.g., S001).

  7. To start the data intake, please press the green Connect button once.

    • If successful, the button will change to red and will display Connection Successful.

    • If the connection was unsuccessful, press the X on the right and try again.

    • If you still cannot connect, try moving closer, restarting the application, or ensuring your headset's connection is stable.

  8. Once your BCI device is connected, the new data intake begins.

  9. To locate the recording directory, return to the Configuration screen and press the Saved Recordings button. This will open the directory in your computer’s file explorer.

  10. Here you can find your neural recordings saved by Subject and Session. Within each Session file you will find the application/experiment, raw EEG, trial_results with session specifics, and the associated debug log.​​

* The files marked SYN (synthetic) and brainflow_log.txt are saved automatically. These are best deleted occasionally if the MUI gets too heavy.
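As an illustration of the Subject/Session naming above, a recording path could be assembled like the sketch below. This helper is hypothetical, not part of the MUI; the actual directory layout is whatever the Saved Recordings button opens.

```python
def recording_path(subject: int, session: int) -> str:
    """Hypothetical helper mirroring the S## subject and S### session
    naming described above (e.g., subject 1, session 1 -> 'S01/S001')."""
    return f"S{subject:02d}/S{session:03d}"

print(recording_path(1, 1))    # S01/S001
print(recording_path(12, 34))  # S12/S034
```

The zero-padded format specifiers (`:02d`, `:03d`) are what produce the two- and three-digit identifiers the guide describes.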

 

D) CALIBRATION

In this section, we will save segments of your neural activity associated with the words you think (a trial) so that a predictive model can be trained on the data. Once you begin, please keep your eyes directed at the middle of the screen for the duration of the calibration. It will take roughly 2 minutes per block (not including intermission), and each block has 2 parts. The more trials you perform, the better the predictive accuracy of our solutions.

  1. Select the Calibration page.

  2. Choose either Inner Speech or Motor Imagery.

    1. Inner Speech: From here you are able to choose the different categories of words through which you would like to communicate. We recommend selecting all the categories available to you for recording. As a note, each of these categories will need to have their own models generated. We reduce the number of words per model to improve predictive accuracy. This will make sense once we get to Cognichat.

    2. Motor Imagery: The default words for training will be related to body parts, which we expect you to engage in a way that has been communicated to you or that best suits your training needs. These same body parts can be used to steer the spaceship in Cognitrol.

  3. When you are ready, press Train Words to begin.

  4. A word will briefly appear in the middle of the screen. Remember this word.

  5. Immediately after it disappears, a plus sign will appear in the middle of the screen for only about 2 seconds, depending on the length of the word.

  6. Once the plus sign appears, using only your mind, think of the previous word once or engage the body part (for Motor Imagery).

  7. The plus sign will disappear and a new word or body part will appear on screen. Repeat steps 4-6 for each new word that appears on screen.

  8. There will be an intermission at the midpoint of the Calibration that says Start Next Block.

    • Take this time to write down any notes you think will be relevant to our team, and during which words they occurred (falling asleep, headset knocked, etc.)

    • Please don’t leave your desk. Remember your brainwaves are still being collected.

    • You’re halfway done! That wasn’t so bad was it? Let’s finish the second half now!

  9. When you’re ready, press the Start Next Block button to start the second half of the calibration exercise and repeat steps 4-7.

  10. Once the training has ended, Calibration Finished will appear on the screen.

  11. Stop the Calibration and return to the main menu. Either disconnect your headset or select Termination to ensure all data stops recording and saves correctly.
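Conceptually, each Calibration trial is a short window of EEG following the '+' cue. The minimal sketch below shows that slicing, assuming a 250 Hz sampling rate and list-of-channels data; both are illustrative stand-ins, since your headset's sampling rate and the MUI's internal storage may differ.

```python
def extract_epoch(eeg, cue_sample, fs=250, duration_s=2.0):
    """Slice the ~2 s window following a '+' cue.

    eeg: list of channels, each a list of samples.
    cue_sample: sample index at which the cue appeared.
    """
    n = int(fs * duration_s)
    return [channel[cue_sample:cue_sample + n] for channel in eeg]

# 8 channels of 10 s of placeholder data
fs = 250
eeg = [[0.0] * (fs * 10) for _ in range(8)]

# Grab the 2 s epoch starting 3 s into the recording
epoch = extract_epoch(eeg, cue_sample=3 * fs, fs=fs)
print(len(epoch), len(epoch[0]))  # 8 500
```

Stacks of such epochs, labeled with the cued word, are what a classifier like EEGNet would be trained on.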


E) DATA ANALYSIS AND MODEL LOADING

Now that you’ve saved the necessary data, you are able to train your own models or join the waitlist for one provided by our Crypt Algorithm. Models can be trained via neural networks like EEGNet, EEG-Conformer, and more. We currently only support live predictive models in ONNX format. If a model is not uploaded for any category, a random word will be pulled from our Predictionary (more below) for purposes of testing the application (but it's much cooler if you use AI).

​

  1. Models can be uploaded via the Configurations menu by selecting the button Model Loading. This will open the required ONNX file folder.

  2. Models need to follow the naming format of MindsUI_Model_Category. For example: MindsUI_Model_Directions.

  3. Cognitrol requires at least the above trained model to leverage directional activity. Cognichat requires models for Directions, Emotions, Time, and Confirmation.

  4. Below are the Thought Categories and their associated Calibration words:

[Table of Thought Categories and their associated Calibration words]

*These words are not included in the current iteration of Cognitrol and while they may be included in the model, currently they will only be used for predictive purposes in Cognichat.
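Since each category needs its own model file, a quick check of the MindsUI_Model_Category naming convention can save a troubleshooting round trip. This snippet is illustrative and not part of the MUI; the `.onnx` extension is assumed from the ONNX format requirement above.

```python
import re

def is_valid_model_name(filename: str) -> bool:
    # Matches the MindsUI_Model_<Category> convention described above,
    # e.g. MindsUI_Model_Directions.onnx
    return re.fullmatch(r"MindsUI_Model_[A-Za-z][A-Za-z_]*\.onnx", filename) is not None

print(is_valid_model_name("MindsUI_Model_Directions.onnx"))  # True
print(is_valid_model_name("directions_model.onnx"))          # False
```

A model that fails this check would presumably not be picked up for its category, leaving that category to fall back on the Predictionary.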
 

F1) PREDICTION - Cognichat

Cognichat (patent pending) combines individual thought-word prediction with Natural Language Processing to send full thoughts as a text message. Think of words that were previously trained on, and make sure you have a model loaded for each of your available thought categories; otherwise a random but applicable word will be chosen for testing purposes. Simply enter the phone number with which you want your BCI to communicate and off you go. Have your partner ask questions that can be answered using just the individual thought words above, like the example conversations below (we're constantly working to expand these). Five questions are allowed; then you will be required to pause and return to the main menu.

 

Here is where we will apply what we call Cognispeak: thinking in a way that allows your mind to best be read. While the '+' is on the screen, similar to CALIBRATION, you will think in one-word responses using the thought words above that you have recorded and trained on. Cognichat will do the rest, taking your partner's question in combination with the 2-second predicted response you are thinking, and bam! Telepathy:

 

Example Conversations (Question, Thought, Response)

Q: How are you feeling today?

T: “Happy”

R: I am feeling happy.

 

Q: Which direction did you go?

T: "Right"

R: Previously, I went right.
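At its simplest, turning a predicted thought word into a full sentence could look like one template per thought category, as sketched below. This is purely an illustration of the idea; Cognichat's actual NLP, which also uses the partner's question, is not documented here, and the template strings are invented.

```python
# Hypothetical sentence templates keyed by thought category
TEMPLATES = {
    "Emotions": "I am feeling {word}.",
    "Directions": "Previously, I went {word}.",
}

def build_response(category: str, word: str) -> str:
    """Turn a predicted one-word thought into a full sentence."""
    template = TEMPLATES.get(category, "{word}.")
    return template.format(word=word.lower())

print(build_response("Emotions", "Happy"))    # I am feeling happy.
print(build_response("Directions", "Right"))  # Previously, I went right.
```

These two calls reproduce the example conversations above.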


  1. Navigate to the button Cognichat.

  2. Enter a phone number whose owner has consented to receive your messages and press the Start button. Welcome to Cognichat will appear in the center of the screen.

    • Cognichat requires a stable internet connection and can be prevented from sending messages by VPNs, firewalls, or other interference.

  3. Press the What’s Your Question? button when you are ready.

  4. Your partner has 20 seconds to text a question (we’re working to make this more seamless).

  5. After you read the question the ‘+’ will appear in the center of the screen.

  6. During this 2 second interval, you are meant to leverage Cognispeak by thinking your one word response from the thought words you trained on.

  7. After, a complete sentence will be sent back to your partner.

  8. Repeat or reload the application as many times, and with as many phone numbers, as you would like.

  9. Now that you’re all done, you can close the application. To do so, press the Esc button on your keyboard and press Back to Menu on screen. Then press the back arrow at the top of the screen, and press Quit.

 

Like all neurotechnology, we’re still working to improve and expand upon Cognichat. We want to use more of the conversation history and other contextual clues, like location and general user data, to better formulate responses as actual predictions become more accurate. We are also working to include various languages and automatic translation for increased accessibility. However, the MindsUI is meant to demonstrate the capabilities of this technology. The end goal is for us to work with you to decide where to best integrate it. Whether it be video games, mobile devices, or healthcare, Cognichat shows the possibilities of improving upon all forms of communication!

 

F2) PREDICTION - Cognitrol

Cognitrol (patent pending) leverages various neural activities, such as inner speech or motor imagery, to steer a spaceship. Unlike Cognichat, it only makes use of the thought categories 'Directions' and 'Motor_Imagery' (use these as the category names). If you do not upload a model, a random but applicable direction word or body part will be chosen for testing purposes. When the ship gets too far from the screen, it will reset to the center regardless of the prediction.

  1. Navigate to the button Cognitrol.

  2. Press the Start button. Welcome to Cognitrol will appear in the center of the screen.

  3. The application will start automatically.

  4. Similar to the Calibration and Cognichat applications, a ‘+’ will appear on screen for 2 seconds at roughly 5-second intervals.

    • If using a different mode of neural control, please defer to that method.

  5. The ship will move in the direction predicted. 

  6. Now that you’re all done, you can close the application. To do so, press the Esc button on your keyboard and press Back to Menu on screen. Then press the back arrow at the top of the screen, and press Quit.
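The prediction-to-movement loop above can be pictured as a small state update: map the predicted word to a direction, step the ship, and reset to center when it leaves the screen. The sketch below is hypothetical; the direction vectors, speed, and screen bounds are invented for illustration and are not Cognitrol's actual values.

```python
# Invented direction vectors and screen bounds, for illustration only
DIRECTIONS = {"Up": (0, 1), "Down": (0, -1), "Left": (-1, 0), "Right": (1, 0)}

def step_ship(pos, predicted_word, speed=5, bound=100):
    """Advance the ship by one predicted step; unrecognized words hold position."""
    dx, dy = DIRECTIONS.get(predicted_word, (0, 0))
    x, y = pos[0] + dx * speed, pos[1] + dy * speed
    # When the ship gets too far from the screen, reset to the center
    if abs(x) > bound or abs(y) > bound:
        return (0, 0)
    return (x, y)

print(step_ship((0, 0), "Right"))   # (5, 0)
print(step_ship((98, 0), "Right"))  # (0, 0) -- off-screen, so reset
```

A Motor Imagery model would feed the same loop, with body-part labels mapped onto the same direction vectors.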

 

Like the previous applications, we’re still working to improve and expand upon Cognitrol. We want to introduce various types of neural control to see what works best for each individual. We will also be expanding the application to navigate around obstacles. But, once again, the application is meant to demonstrate the capabilities of this technology. The end goal is to make the technology work in commercial scenarios like controlling machinery, or simply turning off light switches and opening doors. Whether it be video games, transportation, or robotic manipulation, Cognitrol shows the possibilities of improving upon all forms of control and interaction!


G) VISUALIZATION - Neurovision

Navigate to the button Visualization. This section holds our features for Neural visualization, analysis, and diagnostics. 

Neurovision gives you a live visualization of the workings of your mind. This artistic rendering shows how the activity of your brain can affect a real environment! Try the different 'experiences' to see just how your state of mind can look and what it can affect.

 

Ratios of frequency bands (Alpha, Beta, etc.) affect animation speed, pushes, and pulls. Colors can be seen as layers of the psyche, with changes in activity revealing deeper and more vibrant states. Rapid movements and explosions signify heightened interest, stress, excitement, or fear from the user. Calm and focused mentalities will evoke a slower and more viscous experience, along with inward pulls that can even be traced to things like rain or slow jazz. Such emotional states can also influence the fall and rise of gravity, which is propelled by positive or negative polarity. When localized to particular areas of the brain, upward gravity signifies engaging in a new mental task (such as math or speaking), while downward gravity hints that you're moving away from your previous state of mind, inhibiting the previously engaged parts of your brain.
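The band ratios driving these effects can be estimated from short EEG windows. The following is a self-contained sketch using a naive DFT on a synthetic signal; it illustrates the idea only, since Neurovision's actual band extraction is not documented here, and a real pipeline would use an FFT with a windowed PSD estimate.

```python
import math

def band_power(signal, fs, lo, hi):
    """Naive DFT power in [lo, hi) Hz. Illustrative and O(n^2);
    real pipelines would use an FFT-based PSD estimate."""
    n = len(signal)
    power = 0.0
    for k in range(n // 2 + 1):
        freq = k * fs / n
        if lo <= freq < hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += re * re + im * im
    return power

fs = 250
# 4 s test signal: strong 10 Hz (alpha) plus weak 20 Hz (beta)
sig = [math.sin(2 * math.pi * 10 * t / fs) + 0.3 * math.sin(2 * math.pi * 20 * t / fs)
       for t in range(fs * 4)]
alpha_beta_ratio = band_power(sig, fs, 8, 13) / band_power(sig, fs, 13, 30)
print(alpha_beta_ratio > 1)  # True: alpha dominates this signal
```

A high alpha/beta ratio like this one would correspond to the slower, more viscous, calm-state visuals described above.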

 

Used in video games: weather, obstacles, music, and personalities can all be based on the player's cognitive state. For content creation, Neurovision gives followers a deeper insight into the creator's process while performing their art, allowing for stronger connections and appreciation. Watching a suspenseful movie, listening to emotional music, falling in love: whatever you want to do, Neurovision will provide a magical perspective.

 

Neurovision has been called ‘Your Mind's Eye’ not because it shows what you’re seeing (those would just be your eyes), but because it allows you a look into your mind. Data alone cannot truly show the turmoil and tranquility we feel every day. What are currently abstract glimpses of your inner cognition will, we hope, soon become a window into your mind. With the introduction of AI, we plan to have the color schemes better reflect your emotional state, like an advanced mood ring. Particles made of potentials interact at a viscosity derived from velocity. As we continue to map changes in our psyche, whether great or small, to points of representation, these become the pixels used to improve the resolution of your mind's eye.


H) TERMINATION

When you are finished, click the Termination button to leave the application. This button will end all recording processes and ensure data saves correctly. The same cannot be guaranteed of the X in the top right-hand corner. However, if you just want to start a new recording, disconnect the application, or connect a new device, simply return to the Configuration menu.


Where would you apply these Visualization, Calibration, and Prediction technologies? Visit our contact page to let us know how you would apply your mind.

 

Troubleshooting:
  • EEG device not connecting properly?

    • Ensure your device is in close enough proximity to connect with your USB dongle or computer

    • Try disconnecting and reconnecting your device with your computer

    • Still not working? Double check the troubleshooting documentation from your device manufacturer

    • Contact our team for help

  • Cognichat not sending or receiving messages?

    • This can be due to web connection interference like internet quality, VPNs, or firewalls. Try the application in various locations or with different Wi-Fi/cellular connections.

    • Test the application using different phone numbers, certain regions or numbers may not be available.

    • Contact our team with the methods that don’t work for you and we will try to address them in one of our upcoming releases.


Feedback

Let us know what you think

