
Downloading Human Consciousness into Computer: An AI Thing

Ever heard about downloading human consciousness into a computer? Is it really possible? Today, in the era of Artificial Intelligence and Machine Learning, technology is changing at a remarkable pace, from our smartphones to the medical field. But what if it were possible to download human consciousness into a computer? Work in that direction has already begun.

Downloading human consciousness, or the human brain, may sound like science fiction, yet some neuroscientists believe it is not only conceivable but that we have already started down a path that could one day make it a reality. So how close are we to downloading human consciousness?

[Image: A human brain]

 

For people who are paralyzed and unable to speak, what they want to say exists only as signals in their brains, and no one has been able to decode those signals directly. But three research groups have recently made progress: they are turning data from electrodes surgically placed on the brain into computer-generated speech. Using computational models known as neural networks, they reconstructed words and sentences that were, in some cases, intelligible to human listeners.

 

The hurdles are high. “We are trying to work out the pattern of … neurons that turn on and off at different time points, and infer the speech sound,” says Nima Mesgarani, a computer scientist at Columbia University.

“The mapping from one to the other is not very straightforward.”

The First Paper about Downloading Human Consciousness:

The first paper, posted on bioRxiv on October 10, 2018, describes an experiment in which researchers played recordings of speech to patients with epilepsy who were in the middle of brain surgery.

As the patients listened to the sound files, the researchers recorded neurons firing in the parts of the patients’ brains that process sound. They then used deep learning, which proved to be the best-performing approach, to turn that neuronal firing data into speech.

When they played the results through a vocoder, which synthesizes human voices, to a group of 11 listeners, those listeners were able to correctly transcribe the words 75 percent of the time.
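The pipeline described above can be sketched in miniature. The following is a hypothetical illustration using synthetic data: the actual papers trained deep neural networks on real electrode recordings, while this sketch uses a simple ridge-regression decoder and randomly generated "firing rates" only to show the shape of the mapping, from electrode activity to speech features that a vocoder could then turn into sound. All array sizes and the noise level are made-up values.

```python
import numpy as np

# Hypothetical sketch: decode speech features (spectrogram frames) from
# neural firing rates. The papers used deep networks; a linear ridge
# decoder is used here only to illustrate the input -> output mapping.
rng = np.random.default_rng(0)

n_frames, n_electrodes, n_freq_bins = 500, 64, 32  # assumed sizes

# Simulated data: each electrode responds linearly to the speech spectrogram.
true_weights = rng.normal(size=(n_freq_bins, n_electrodes))
spectrogram = rng.random((n_frames, n_freq_bins))        # target speech features
firing = spectrogram @ true_weights + 0.1 * rng.normal(size=(n_frames, n_electrodes))

# Fit decoder W so that firing @ W approximates the spectrogram
# (ridge regression, closed form).
lam = 1e-2
W = np.linalg.solve(firing.T @ firing + lam * np.eye(n_electrodes),
                    firing.T @ spectrogram)
reconstructed = firing @ W

# Correlation between true and reconstructed features as a quality score;
# in the real studies, human listeners judged the vocoder output instead.
corr = np.corrcoef(spectrogram.ravel(), reconstructed.ravel())[0, 1]
print(f"reconstruction correlation: {corr:.2f}")
```

In the actual experiments, the reconstructed features would be fed to a vocoder to produce audible speech, and intelligibility would be judged by listeners rather than by a correlation score.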

Listen to the audio from this experiment HERE.

 

[Image: Epilepsy patients with electrode implants have aided efforts to decipher speech]

Image Credit: Sciencemag

The Second Paper about Downloading Human Consciousness:

The second paper, posted on November 27, 2018, relied on neural recordings from people undergoing surgery to remove brain tumors.

As the patients read single-syllable words aloud, the researchers recorded both the sounds leaving the patients’ mouths and the neurons firing in the speech-producing areas of their brains. Rather than training the system extensively on each individual patient, these researchers trained an artificial neural network to convert the neural recordings into audio, showing that the results were at least reasonably intelligible and similar to the recordings made by the microphones.

Listen to the audio from this experiment HERE.

The Third Paper about Downloading Human Consciousness:

The third paper was posted on August 9, 2018. It drew on recordings from the part of the brain that converts the specific words a person decides to speak into muscle movements.

While no recording from this experiment is available online, the researchers reported that they were able to reconstruct entire sentences, and that people who listened to the sentences could correctly identify them on a multiple-choice test (out of 10 choices) 83 percent of the time. The method relied on identifying the neural patterns involved in producing individual syllables, rather than whole words.
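That 83 percent figure is well above what guessing would give: with 10 choices, chance accuracy is only 10 percent. A quick back-of-the-envelope check makes the gap concrete. The number of test trials below is a made-up assumption (the post does not state it); the point is only that scoring 83 percent when guessing yields 10 percent is astronomically unlikely by chance.

```python
import math

# Probability of getting at least k of n answers right by pure guessing,
# with success probability p per question (binomial tail).
def guess_tail(n, k, p=0.1):
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

n_trials = 100   # hypothetical number of test items (not stated in the post)
k_correct = 83   # matching the reported 83 percent accuracy

print(f"chance accuracy: 10%")
print(f"P(>= {k_correct}/{n_trials} correct by guessing) = "
      f"{guess_tail(n_trials, k_correct):.1e}")
```

Even under these assumed numbers, the probability of reaching 83 percent by guessing alone is vanishingly small, which is why listener tests like this are treated as meaningful evidence that the reconstructed sentences carry real information.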

These experiments may one day make it possible for people who have lost the ability to speak to do so through a brain-to-computer interface.

 

Christian Herff, a neuroscientist on one of the teams, at Maastricht University in the Netherlands, says that if users can hear the computer’s interpretation of their speech in real time, they may be able to adjust their thoughts to get the result they want. With enough training of both users and neural networks, brain and computer might meet in the middle.

 

Final Thoughts:

However, these were small studies. The first paper relied on data from just five patients, the second from six, and the third from only three. And none of the neural recordings lasted more than an hour.

In any case, the science is pushing ahead, and artificial-speech devices hooked up directly to the brain seem like a real possibility at some point down the road.

 

 



Prachi Sharma

In love with coding. A technophile and a web developer.
