
I agree.

It's completely speculative. There is no evidence at all that spiking NNs really work better in any circumstances.

Speaking as someone who has worked in the ML field, it feels to me like advocates for them are caught up in the biological plausibility argument. That's an interesting branch of research, but it has very little to do with how AI should be implemented using transistors. In some ways the "neural networks" name has done a great disservice, because people keep getting caught in the trap of comparing them to how the human brain works.



Spiking comes with persistence baked in, so anything done with spiking networks has an implicit sequence and temporal context. Like an LSTM, this automatically means the architecture will handle some problems better than a naive perceptron.
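To make the "persistence baked in" point concrete, here's a toy leaky integrate-and-fire neuron (my own sketch; `tau` and `threshold` values are arbitrary, not from any real model). The membrane potential is state carried across time steps, so the same input can produce different outputs depending on history:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch. The membrane
# potential `v` persists across time steps, which is the implicit
# temporal context being described above. Parameters are illustrative.
def lif_run(inputs, tau=0.9, threshold=1.0):
    v = 0.0            # membrane potential, carried between time steps
    spikes = []
    for x in inputs:
        v = tau * v + x        # leaky integration of the input current
        if v >= threshold:     # fire once the threshold is crossed...
            spikes.append(1)
            v = 0.0            # ...then reset
        else:
            spikes.append(0)
    return spikes

# Identical instantaneous inputs (0.6) give different outputs depending
# on what came before -- unlike a stateless perceptron.
print(lif_run([0.6, 0.6, 0.0, 0.6]))  # → [0, 1, 0, 0]
```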

Transformers have a sequence context too, but they construct their own context-dependent notion of ordering with attention.
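A toy illustration of why that construction is needed (scalar "embeddings" standing in for vectors; my own sketch): plain dot-product attention is permutation-equivariant, so any sense of order has to be injected, e.g. via positional encodings.

```python
import math

# Toy scalar self-attention. Each output depends only on the query and
# the *multiset* of keys/values, so reordering the sequence just
# reorders the outputs: attention alone carries no notion of position.
def attend(seq):
    out = []
    for q in seq:
        scores = [q * k for k in seq]           # dot-product scores
        m = max(scores)
        w = [math.exp(s - m) for s in scores]   # stable softmax weights
        z = sum(w)
        out.append(sum(wi * v for wi, v in zip(w, seq)) / z)
    return out

a = attend([1.0, 2.0, 3.0])
b = attend([3.0, 1.0, 2.0])   # same tokens, shuffled
# b is just a permutation of a -- order contributed nothing by itself
```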

Persistent or recurrent activation states can extend the context window past the current tokenizing limitations. Better still would be dynamic construction, where new knowledge can be carefully grafted into a network without retraining, with updates over the recurrent states feeding back into modifying learned structures.
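The first point can be shown with a trivial stand-in (an exponential moving average in place of a learned hidden state; decay and lengths are arbitrary): a recurrent state summarizes arbitrarily long input, while a fixed window forgets everything outside it.

```python
# A recurrent state vs. a fixed context window, in miniature.
def ema_state(stream, decay=0.99):
    h = 0.0
    for x in stream:
        h = decay * h + (1 - decay) * x   # state carries history forward
    return h

def windowed_mean(stream, window=4):
    tail = stream[-window:]               # fixed-size context window
    return sum(tail) / len(tail)

stream = [1.0] * 1000 + [0.0] * 4
# the recurrent state still remembers the long run of 1.0s,
# while the 4-item window has forgotten them entirely
print(ema_state(stream), windowed_mean(stream))
```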

Spiking networks might provide a clear architecture to achieve some of those goals, but it's really just recurrence shuffled around different stages of processing.


> it's really just recurrence shuffled around different stages of processing

Interesting. I hadn't really thought about this. Although I wonder if there is a more direct way of achieving this.




