I hadn’t heard of Mamba before reading this article, and I was wondering if anyone has tried setting the importance of a token via a TF-IDF or BM25 lookup. It requires a first pass to construct the token index, but otherwise it seems like it would address the big issue that all these architectures have - they don’t know how “important” a token is. Interestingly, this seems to be the crux of Mamba - deciding which tokens to forget! EMA otherwise treats all tokens equally at sequence time. What if the tokens were weighted beforehand and the weights were passed in as an attention mechanism? I wonder if anyone has tried something like this.
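To make the first-pass idea concrete, here’s a rough sketch (corpus, function names, and the log-IDF weighting are all made up for illustration - BM25 would add term-frequency saturation and length normalization on top of this):

```python
# Hypothetical sketch: one pass over the corpus builds a static
# "importance" score per token (here plain IDF); at sequence time each
# token just looks its weight up. This is the "fixed weights" approach
# discussed in this thread, not any existing library's API.
import math
from collections import Counter

def idf_weights(corpus):
    """First pass: inverse document frequency per token."""
    n_docs = len(corpus)
    df = Counter()
    for doc in corpus:
        df.update(set(doc.split()))  # count each token once per document
    return {tok: math.log(n_docs / df[tok]) for tok in df}

def weight_sequence(tokens, idf, default=0.0):
    """Sequence time: static importance for each token, by lookup."""
    return [idf.get(t, default) for t in tokens]

corpus = ["the cat sat", "the dog ran", "a cat ran"]
idf = idf_weights(corpus)
# "the" appears in 2 of 3 docs -> low weight; "sat" in 1 of 3 -> high weight
weights = weight_sequence("the cat sat".split(), idf)
```

The catch the replies point at: `weights` here depends only on corpus statistics, never on which tokens surround each other in a given sequence.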
The importance (e.g. attention) needs to be dynamic, e.g. one token will be important to some other tokens but not others.
tf-idf and similar heuristics are what we were using before attention came along, e.g. a tf-idf-weighted bag-of-words over word2vec embeddings. That approach fails in so many cases.
To use your metaphor, TF-IDF will result in ‘fixed’ weights.
Attention makes it so that the weights of each token can be different in each sequence of tokens. Same token gets different weights depending on who its ‘neighbors’ in the sequence end up being.
This property allows the models to solve a variety of natural language problems and gets ‘used’ by the model to express context-aware dependencies.
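A tiny illustration of that property (the 2-d embeddings are hand-picked toy values, not a trained model): the same token gets a different attention weight depending on which token is querying it, whereas a TF-IDF table would give it one fixed number.

```python
# Toy demonstration of context-dependent weights via dot-product attention.
# Embeddings are made up purely for illustration.
import math

EMB = {
    "bank":  [0.9, 0.2],   # leans toward the "river" direction in this toy space
    "river": [1.0, 0.0],
    "money": [0.0, 1.0],
    "the":   [0.1, 0.1],
}

def attention_weights(query, keys):
    """Softmax over dot products: how much `query` attends to each key."""
    scores = [sum(q * k for q, k in zip(EMB[query], EMB[key])) for key in keys]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

keys = ["the", "bank"]
w_river = attention_weights("river", keys)  # "bank" as seen from "river"
w_money = attention_weights("money", keys)  # "bank" as seen from "money"
# The weight on "bank" (index 1) differs between the two queries,
# while a TF-IDF lookup would return the same value for "bank" in both.
```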
Given that GP explicitly said “if you don't have attention”, and we're in a thread about a language model whose main characteristic is not using attention, I don't understand why you insist on talking about attention …
I mean, if we are going to get past attention (very much on board with the idea!), then it might help to know what it is really contributing to a model.
My response was trying to clarify some confusion.
I am all for alternatives to attention. I don’t think BM25 cuts it. I don’t think anything that samples tokens based on BM25 weights (the idea in this subthread) would cut it.
What confusion? I know exactly how BM25 works and how Transformers work. I stated a hypothesis and asked if anyone has tried it. You say it won’t work. That’s just your opinion. Do you have proof or evidence? This is science. Dismissal of ideas without evidence goes against scientific principles.
Just catching up to this thread again. You had said:
"I was wondering if anyone has tried setting importance of a token as a TF-IDF or BM25 lookup."
So, I take it back. This is not a confusion. You are right to call it out. :)
I like this idea directionally. A lot of energy (literally) would be saved if we could reach the same model-accuracy outcomes with static weights like this.
However, I do think that this (as stated in your original message) would not work as well as a transformer or an SSM, and I already explained my reasoning as to why. I don't have empirical proof (not having run the experiment), but if you believe in it, you should try it and share your findings.