Over the past decade, Brain-Computer Interfaces (BCI) have rapidly developed from a research subject confined to medical centres and laboratories into actionable solutions for industries and people around the world.
With this growth in development and use, ethical issues have come to prominence that ought to be addressed effectively in 2020.
For the sake of clarity, this blog post will address the ethical questions raised by non-invasive Brain-Computer Interfaces in 2020, rather than also covering invasive BCI.
For many professionals, researchers and academics, BCI sits at the intersection of Neuroethics, MedTech and AI ethics, as all three disciplines combine to bring about its rise and development. This is why the ethical questions we will address are the same or similar to those that professionals within these fields are preoccupied with.
So far so good…
Measures have already been taken to ensure the ethical use of AI, and consequently of Brain-Computer Interfaces. In April 2019, a High-Level Expert Group established by the European Commission issued guidelines called “Ethics Guidelines for Trustworthy AI”. Moreover, the World Economic Forum’s (2019) Global Future Council on Neurotechnologies has put these issues into focus by recommending eight interdisciplinary, cross-sector actions. (More information can be found in “To Be Involved in Neuroethics: A Must for Entrepreneurs and for Healthcare as a Whole” by Alvaro Fernandez.)
In addition, the EIT Health network has already carried out a survey identifying the most important points to be addressed. The findings of the survey can be found here.
So here are the five most important ethical issues raised by BCI in 2020 according to InnoBrain:
1) Responsibility
Defining a company’s responsibility for the use of BCI technology is one of the most important ethical issues raised in 2020. This year, companies specialising in BCI ought to put more emphasis on safety as a way of preventing potentially dangerous use. A fallback plan is needed in case of failure, along with stronger assurances of reliability and accuracy. The question of accountability in case of erroneous results or flawed data ought to be put squarely into focus.
2) Data privacy
Data privacy has long been a central concern when it comes to AI, BCI, and Neuroethics. In 2020, companies specialising in these fields ought to provide transparency on how data will be obtained, stored, and used. In addition, data governance mechanisms are vital to ensure that access to data is legitimised; this can be achieved through traceability mechanisms. In particular, to achieve transparency it is important that the results of BCI are explained sufficiently to users and to all stakeholders, along with detailed information about the limitations and capabilities of the applications.
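To make the idea of a traceability mechanism concrete, here is a minimal sketch of an append-only access log for neural data. Everything in it — the file name, field layout, and `log_data_access` function — is a hypothetical illustration, not an implementation any BCI provider actually uses:

```python
import csv
import hashlib
from datetime import datetime, timezone

# Hypothetical log file where every access to neural data is recorded.
AUDIT_LOG = "neural_data_access_log.csv"

def log_data_access(user_id: str, dataset_id: str, purpose: str) -> None:
    """Append a record of who accessed which neural dataset, when, and why."""
    timestamp = datetime.now(timezone.utc).isoformat()
    record = [timestamp, user_id, dataset_id, purpose]
    # Store a digest of the record so its integrity can be checked later.
    digest = hashlib.sha256("|".join(record).encode()).hexdigest()
    with open(AUDIT_LOG, "a", newline="") as f:
        csv.writer(f).writerow(record + [digest])

# Example: an analyst reads one EEG session for a usability study.
log_data_access("analyst-07", "eeg-session-0042", "usability study")
```

A log like this is what makes the transparency promise auditable: users and regulators can be shown exactly who touched the data and for what stated purpose.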
3) Unwanted bias
Another important ethical issue to be addressed in 2020 is unwanted bias, which can limit the diversity of results. BCI applications should be accessible to all, taking marginalised groups into account. In usability testing with BCI, for example, it is important to include users such as the elderly, children, and people with physical impairments.
4) Informed consent
Where is the line between a person’s thoughts and technology? That question is on the minds of many people regarding BCI in 2020. Similar concerns have been raised about which individuals are appropriate research subjects and about their capacity to consent. In addition, many people have thoughts they would not consciously or voluntarily choose to express. An ethical concern is therefore how much of what a person thinks should be detected by BCI, and what share of those thoughts the person would actually act on in reality. Self-determination is thus a vital component of BCI ethics: for it to be achieved, the person needs to understand the potential risks involved before consenting to the procedure.
5) Privacy of neural data
Deriving information from a person’s brain is one of the most private practices that can be carried out, which is why many people are concerned that stored neural data could be stolen or hacked. Data about affective states belong to an individual’s personal data and therefore need to be protected from any undue treatment by other parties. Another important concern is the extent of information that can be derived from the brain of a person who may be unaware of how much is being extracted. As BCI advances, further private information could potentially be derived without the person’s knowledge, especially in the case of unethical companies that deviate from regulations.
What measures should you take when implementing BCI?
- Make sure that your BCI provider is ISO/IEC 27001 compliant and aligned with guidance from the National Institute of Standards and Technology (NIST).
- Ensure that your BCI provider is compliant with the GDPR, which took effect on May 25, 2018.
- Understand the BCI process and any potential risks involved, and ask for a traceability mechanism report.
- Make sure that you can terminate the BCI process at any point in time, with the option of having the data obtained deleted.
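The last point above — termination at any time with deletion of the collected data — can be sketched in code. This is a hypothetical illustration (the `BCISession` class and its methods are invented for this post, not part of any real BCI SDK): the session buffers samples in a temporary file and erases it when the user withdraws.

```python
import os
import tempfile

class BCISession:
    """Hypothetical recording session whose data can be discarded on request."""

    def __init__(self) -> None:
        # Buffer samples in a temporary file rather than a permanent store,
        # so deletion on withdrawal is straightforward.
        self._file = tempfile.NamedTemporaryFile(
            mode="w+", suffix=".csv", delete=False
        )
        self.active = True

    def record(self, sample: str) -> None:
        """Append one sample line while the session is active."""
        if self.active:
            self._file.write(sample + "\n")

    def terminate(self, delete_data: bool = True) -> None:
        """Stop the session; optionally erase everything collected so far."""
        self.active = False
        path = self._file.name
        self._file.close()
        if delete_data and os.path.exists(path):
            os.remove(path)  # honour the user's right to withdraw their data

# Example: record a sample, then withdraw and delete the data.
session = BCISession()
session.record("t0,0.12,0.08")
session.terminate(delete_data=True)
```

Designing deletion in from the start, rather than bolting it on, is what makes the GDPR-style right to erasure practical to honour.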