I work in neurotech, and I don't believe that the electrical signals of the brain define thought or memory.
When humans understood hydrodynamics, we applied that understanding to the body and thought we had it all figured out. The heart pumped blood, which brought nutrients to the organs, etc etc.
When humans discovered electricity, we slapped ourselves on the forehead and exclaimed "of course!! it's electric" and we have now applied that understanding on top of our previous understanding.
But we still don't know what consciousness or thought is, and the idea that it is a bunch of electrical impulses is not quite proven.
There is electrical firing of neurons, absolutely, but does it directly define thought?
I'm happy to say we don't know, and that "mind-reading" devices are as yet unproven.
A few start-ups are doing things like showing people images while reading brain activity and then trying to understand which areas of the brain "light up" on certain images, but I think this path will prove fruitless in understanding thought and how the mind works.
[alert] Pre-thought match blacklist: 7f314541-abad-4df0-b22b-daa6003bdd43
[debug] Perceived injustice, from authority, in-person
[info] Resolution path: eaa6a1ea-a9aa-42dd-b9c6-2ec40aa6b943
[debug] Generate positive vague memory of past encounter
Not a reason to stop trying to help people with spinal damage, obviously, but a danger to avoid. It's easy to imagine a creepy machine arguing with you or reminding you of things, but consider how much worse it'd be if it derails your chain of thought before you're even aware you have one. [1]

[1]: "The interpreter" https://en.wikipedia.org/wiki/Left-brain_interpreter
Shouldn't the device be the judge of that?
Sounds like Libet's delay and all that. Conscious awareness is just a documentary covering something that was decided half a second earlier.
It's a very interesting quirk of an immensely useful device for those who need it, but it's not an ethical dilemma.
I for one am sick and tired of these so-called ethicists, whose only work appears to be stirring up outrage over nothing, holding back medical progress.
Similar disingenuous articles appeared when stem-cell research was new, and still do from time to time. Saving lives and improving life for the least fortunate is not an ethical dilemma, it's an unequivocally good thing.
Quit the concern trolling, nature.com; you're supposed to be better than that.
I love seeing the advancements still, don’t get me wrong, but in the current data, advertising, and attention economies under Capitalism? No fucking way that shit is ending up in my head.
> Smith’s BCI system, implanted as part of a clinical trial, trained on her brain signals as she imagined playing the keyboard. That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so
There are some serious problems lurking in the narrative here.
Let's look at it this way: they trained a statistical model on all of the brain patterns that happen when the patient performs a specific task. Next, the model was presented with the same brain pattern. When would you expect the model to complete the pattern? As soon as it recognizes the pattern, of course!
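To make that concrete, here's a toy sketch of the shape of the problem (entirely synthetic, my own illustration, not the trial's actual pipeline): call every window recorded during the practiced task a positive example and every window recorded at rest a negative one, then ask when a detector trained that way first fires on a fresh task recording.

```python
# Toy sketch, all synthetic -- not the trial's pipeline, just the shape of
# the argument. Positive class = everything recorded during the task,
# negative class = everything recorded at rest.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
fs, window = 100, 50                      # assumed sample rate, 0.5 s windows

def trial(task: bool) -> np.ndarray:
    """Fake 'brain signal': noise, plus a slow build-up starting at 2 s on
    task trials; call the overt movement onset 3 s."""
    t = np.arange(6 * fs)
    buildup = np.clip((t - 2 * fs) / fs, 0, 1) if task else 0.0
    return rng.normal(0, 1, t.size) + 2 * buildup

def windows(sig):
    return [sig[s:s + window] for s in range(0, sig.size - window + 1, window)]

task_w = [w for _ in range(100) for w in windows(trial(True))]
rest_w = [w for _ in range(100) for w in windows(trial(False))]
X = np.array(task_w + rest_w)
y = np.array([1] * len(task_w) + [0] * len(rest_w))
model = LogisticRegression(max_iter=1000).fit(X, y)

# On a fresh task trial, the detector fires the moment any window crosses
# its learned boundary -- before the 3 s "movement", because nothing in
# training ever tied the positive class to intention specifically.
for i, w in enumerate(windows(trial(True))):
    if model.predict(w.reshape(1, -1))[0] == 1:
        print(f"first detection at {i * window / fs:.1f} s (movement labeled at 3.0 s)")
        break
```

Swap in real recordings and a fancier model and the shape doesn't change: "detected her intention hundreds of milliseconds early" is what this kind of class definition produces by construction.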
> That learning enabled the system to detect her intention to play hundreds of milliseconds before she consciously attempted to do so
There are two overconfident assumptions at play here:
1. Researchers can accurately measure the moment she "consciously attempted" to perform the pretrained task.
2. Whatever brain patterns happened before this arbitrary moment are relevant to the patient's intention.
These ought to contradict each other, but the narrative treats both as true: the first assumption is correct, and the second assumption is also correct, so the second doesn't invalidate the first. How? Because the circumstances of the second assumption get a special name, "precognition"... Tautological nonsense.
Not only do these assumptions blatantly contradict each other, they are totally irrelevant to the model itself. The BCI system was trained on her brain signals during the entirety of her performance. It did not model "her intention" as anything distinct from the rest of the session. It modeled the performance. How can we know that, when the patient begins a totally different task, the model won't just "play the piano" like it was trained to? Oh wait, we do know:
> But there was a twist. For Smith, it seemed as if the piano played itself. “It felt like the keys just automatically hit themselves without me thinking about it,” she said at the time. “It just seemed like it knew the tune, and it just did it on its own.”
So the model is not responding to her intention. That's supposed to support your hypothesis how?
---
These are exactly the kind of narrative problems I expect any "AI" research to be buried in. How did we get here? I'll give you a hint:
> Along the way, he says, AI will continue to improve decoding capabilities and change how these systems serve their users.
This is the fundamental miscommunication. Statistical models are not decoders. Decoding is a symbolic task. The entire point of a statistical model is to overcome the limitations of symbolic logic by not doing symbolic logic.
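A throwaway contrast, my own and nothing from the article, just to pin down what I mean by the distinction:

```python
# Symbolic decoding: a defined mapping, the answer is either right or an error.
MORSE = {".-": "A", "-...": "B", "-.-.": "C"}

def decode(code: str) -> str:
    return MORSE[code]

# Statistical model: a stand-in for a trained classifier -- learned weights,
# weighted chance, no mapping it is obliged to follow.
import random

def statistical_guess(code: str) -> str:
    learned = {"A": 0.7, "B": 0.2, "C": 0.1}   # made-up "probabilities"
    return random.choices(list(learned), weights=list(learned.values()))[0]

print(decode(".-"))              # always "A"
print(statistical_guess(".-"))   # usually "A", occasionally not
```

Call the second one a "decoder" and you've already smuggled in the conclusion.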
By failing to recognize this distinction, the narrative leads us right to all the familiar tropes:
LLMs are able to perform logical deduction. They solve riddles, math problems, and find bugs in your code. Until they don't, that is. When an LLM performs any of these tasks wrong, that's simply a case of "hallucination". The more practice it gets, the fewer instances of hallucination, right? We are just hitting the current "limitation".
This entire story is predicated on the premise that statistical models somehow perform symbolic logic. They don't. The only thing a statistical model does is hallucinate. So how can it finish your math homework? It's seen enough examples to statistically stumble into the right answer. That's it. No logic, just weighted chance.
Correlation is not causation. Statistical relevance is not symbolic logic. If we fail to recognize the latter distinction, we are doomed to be ignorant of the former.