tldr: the brain’s output throughput should be low.
The interface consists of two parts: a good one and a bad one.
The good one is a FullblownHD 3D multielectrode pixel display in the visual cortex: a series of needles (or whatever) in the brain, maybe roughly arranged as a cube. It works as input to the brain; there is no need to record a lot of data from it.
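A minimal sketch of the data model, assuming a hypothetical 64-electrodes-per-side cube and normalized stimulation intensities in [0, 1] (the actual safe stimulation range is an open question):

```python
import numpy as np

CUBE = 64  # hypothetical electrode count per side; 64**3 "voxels" total

def make_frame() -> np.ndarray:
    """One frame of the 3D display: a stimulation intensity per electrode."""
    return np.zeros((CUBE, CUBE, CUBE), dtype=np.float32)

def to_hardware(frame: np.ndarray) -> np.ndarray:
    """Clip to a normalized safe range before driving the needles."""
    return np.clip(frame, 0.0, 1.0)
```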
To teach a human to understand a 3D image, we can transmit a 3D video derived from the subject’s own body movements, or an image of a simple shape that the subject is turning over in their hands. The goal is to transmit, in real time, an image of something the human sees and fully understands right now, and hope that the brain will pick the implant up as a better source of information about what the human is holding.
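As a sketch of the render step of that training loop: suppose a tracker reports the pose of the simple shape (a sphere here) in cube coordinates, and we rasterize it into the voxel frame on every tracker update. The coordinates, radius, and intensity values below are all assumptions.

```python
import numpy as np

def render_sphere(frame: np.ndarray, center: np.ndarray, radius: float) -> np.ndarray:
    """Rasterize a tracked sphere into the voxel cube as full-intensity voxels."""
    coords = np.indices(frame.shape, dtype=np.float32)        # (3, N, N, N)
    dist = np.linalg.norm(coords - center.reshape(3, 1, 1, 1), axis=0)
    frame[dist <= radius] = 1.0
    return frame

# Re-render on every tracker update, so the visual cortex "sees"
# the same object the hands are feeling right now.
frame = render_sphere(np.zeros((64, 64, 64), np.float32),
                      center=np.array([32.0, 32.0, 32.0]), radius=6.0)
```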
The bad part is so bad it can’t even right-click. A simple switch that communicates only two commands to the machine: «◀» and «▶». One electrode (or even EEG) over the motor cortex will do. It drives playback of an offline 3D video on the FHD 3D array backwards or forwards. Like YouTube 360, but better (sell it to Google, hm?). Make it like a joystick or a spring, so the subject can control acceleration too.
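A sketch of that control loop, assuming the decoder reduces the motor-cortex signal to a single value in [-1, 1] (sign picks «◀» or «▶», magnitude is how far the “spring” is deflected):

```python
class PlaybackControl:
    """Integrates a one-axis, two-direction command into a video frame index."""

    def __init__(self, n_frames: int, max_speed: float = 4.0):
        self.n_frames = n_frames
        self.max_speed = max_speed   # frames per tick at full deflection
        self.position = 0.0

    def tick(self, command: float) -> int:
        """command in [-1, 1]: negative rewinds, positive plays forward."""
        command = max(-1.0, min(1.0, command))
        self.position += command * self.max_speed
        self.position = max(0.0, min(self.n_frames - 1.0, self.position))
        return int(self.position)   # which frame of the offline 3D video to show
```

Because deflection maps to speed, the spring return gives acceleration control for free.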
Later we will be able to teach the human Kung Fu.
Further research can be done on making the feedback channel more sophisticated. To know which point of the 3D image is the most interesting to the human, we can add a couple of feedback electrodes. Again, there is no need for a lot of recording power: the point of interest can be calculated from the average signal across several electrodes. To teach the human to transmit to these electrodes, we can light up or scale up the part of the image where these signals are detected. It’s like brightness or zoom control, but spatial. Thus the human will be able to choose between various Netflix shows.
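One way to read “the average signal across several electrodes”, sketched below: treat the point of interest as the signal-weighted centroid of the feedback electrodes’ positions, then boost brightness around it. The electrode positions, signal averaging, radius, and gain are all hypothetical.

```python
import numpy as np

def point_of_interest(positions: np.ndarray, signals: np.ndarray) -> np.ndarray:
    """positions: (k, 3) electrode coords in the cube; signals: (k,) time averages."""
    weights = signals / (signals.sum() + 1e-9)
    return weights @ positions               # (3,) weighted centroid

def highlight(frame: np.ndarray, poi: np.ndarray,
              radius: float = 8.0, gain: float = 1.5) -> np.ndarray:
    """Spatial brightness control: amplify the display near the point of interest."""
    coords = np.indices(frame.shape, dtype=np.float32)
    near = np.linalg.norm(coords - poi.reshape(3, 1, 1, 1), axis=0) <= radius
    frame[near] = np.clip(frame[near] * gain, 0.0, 1.0)
    return frame
```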
Don’t forget to make a kill switch in the prefrontal cortex.
https://www.youtube.com/watch?v=EiUUFdUFyIU Put the tracker inside the simple shapes. Maybe it makes sense to implant a separate acceleration/direction/scale channel.