LLM Security: A Scientific Taxonomy of Attack Vectors

Introduction: Security in Large Language Models (LLMs) is no longer a niche topic within NLP (Natural Language Processing); it has become a field of computer security in its own right. Between 2021 and 2025, research moved from adversarial examples against classifiers to systemic risks: alignment failures, training-data memorization, context contamination, and persistent harmful behaviors. Today's problem is rarely just a bug in the code; it emerges from how the system is built: the architecture, the training data, the alignment methods, and the way the model is wired into other systems. ...

02/13/2026 · 13 min · Digenaldo Neto

How a Large Language Model (LLM) Works

1. Fundamental Architecture: The Transformer. The foundation of modern LLMs (Large Language Models) is the Transformer architecture [1]. Unlike RNNs (Recurrent Neural Networks) and LSTMs (Long Short-Term Memory networks), which process text sequentially, one step at a time, the Transformer processes the entire sequence in parallel, which allows better modeling of long-range dependencies and faster training [1]. Figure: Flow of the Transformer architecture (attention, encoder/decoder, feed-forward). Source: Vaswani et al. [1]. ...
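As a rough illustration of the parallelism described above, here is a minimal NumPy sketch of scaled dot-product attention, the core operation of the Transformer [1]. The sequence length, embedding size, and random inputs are illustrative assumptions, not values from the paper.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V  [1].

    Every query attends to every key in a single matrix product,
    so the whole sequence is processed in parallel rather than
    one step at a time as in an RNN.
    """
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (seq_len, seq_len) similarity matrix
    # Row-wise softmax, shifted by the row max for numerical stability.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V  # weighted sum of value vectors

# Illustrative sizes: 4 tokens, 8-dimensional embeddings (assumed, not from [1]).
rng = np.random.default_rng(0)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((4, 8))
V = rng.standard_normal((4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

Because the attention weights for all token pairs come from one matrix multiplication, long-range dependencies cost no more to model than adjacent ones, which is the property the excerpt contrasts with sequential RNN/LSTM processing.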

02/12/2026 · 14 min · Digenaldo Neto