news

Sep 25, 2024 :loudspeaker: GSA and DeltaNet have been accepted to NeurIPS’24 :fire: :fire:
Aug 20, 2024 Gave a talk at Stanford HazyResearch, “Linear Transformers for Efficient Sequence Modeling”
Jun 10, 2024 :loudspeaker: New arXiv preprint “Parallelizing Linear Transformers with the Delta Rule over Sequence Length”, with a very beautiful algorithm in it :cherry_blossom:!
May 2, 2024 Gated Linear Attention Transformers (GLA) has been accepted to ICML 2024 :smile: Code is available here.
Apr 25, 2024 Gave a talk at Cornell Tech, “Gated Linear Recurrence for Efficient Sequence Modeling”
Jan 1, 2024 Introducing our open-source project flash-linear-attention :rocket: :rocket: :rocket:. Join our Discord if you are interested in linear attention/RNNs!