LLM Security: A Scientific Taxonomy of Attack Vectors

Introduction Security in Large Language Models (LLMs) is no longer a niche topic within NLP (Natural Language Processing); it has become a field of computer security in its own right. Between 2021 and 2025, research moved from adversarial examples against classifiers to broader risks: alignment failures, memorization, context contamination, and models that persist in harmful behaviors. Today the problem is not merely a bug in the code. It emerges from how the system is built: the architecture, the training data, the alignment methods, and how the model is connected to other systems. ...

02/13/2026 · 13 min · Digenaldo Neto

ArgusScan: Automating Ethical Pentest with Shodan API

Introduction Security professionals know that reconnaissance is one of the most important steps in a penetration test. Finding vulnerable systems, understanding network exposure, and identifying potential attack surfaces is time-consuming and requires multiple tools. What if you could automate this process while maintaining professional standards and generating reports that follow industry methodologies like PTES and OWASP? That's where ArgusScan comes in: an open-source Python CLI tool that integrates with the Shodan API to automate ethical pentest reconnaissance, making security assessments faster and more efficient. ...

12/14/2025 · 7 min · Digenaldo Neto