Transformers have come to dominate empirical machine learning models in natural language processing (NLP). Here we introduce the basic concepts of Transformers and present the key techniques underlying recent advances in these models. These include a description of the standard Transformer architecture, a series of model refinements, and common applications. Because Transformers and related deep learning techniques are evolving at an unprecedented pace, we cannot dive into every model detail or cover every technical area. Instead, we focus on the concepts that are most helpful for gaining a good understanding of Transformers and their variants. We also summarize the key ideas that have shaped this field, thereby offering some insight into the strengths and limitations of these models.
You can find the PDF here: Introduction to Transformers: An NLP Perspective.
GitHub: xiaotong/Introduction-to-Transformers