We currently interact with computers and electronic devices by way of keyboards, touchscreens, mice, game controllers, and the like. But imagine being able to do so via a more direct connection with one's brain—for example, by controlling a robotic arm with a device attached to one's skull or a chip implanted in one's brain. Such technologies are already being developed, and they could be a powerful way to address mobility issues among persons with disabilities.

But Brain-Computer Interfaces (BCIs) and other neurotechnologies raise questions about individual agency and identity (to what extent do such devices constitute or diminish an individual's actions? How does an individual perceive their actions when mediated by these technologies?), privacy (what kinds of data can be collected by such devices, and how might they be used?), and equality (who will benefit from these technologies? And will the algorithms used to run them encode old biases?).

In May 2017, an international group of neuroscientists, neurotechnologists, clinicians, ethicists, and machine-intelligence engineers (called the "Morningside Group") met at an NSF-sponsored workshop at Columbia University to identify key issues in the ethics of neurotechnologies. Associate Professor Alan Rubel, who has recently been working on privacy issues in BCI research, was part of this effort.

The group's recommendations have been published in the most recent issue of Nature. Among them are measures to protect individual agency, identity, and privacy and to address issues of equality and bias. The article and its specific recommendations are available on Nature's website at http://www.nature.com/news/four-ethical-priorities-for-neurotechnologies-and-ai-1.22960 and in the November 9, 2017, print version of the journal.