PRIZES to win! PROMOTIONS

Close Notification

Your cart does not contain any items

Model Behaviour

Getting AI to (mis)behave: LLM Red Teaming, Prompt Injection, and Jailbreaking

$45.95   $39.02

Paperback

Not in-store but you can order this
How long will it take?

QTY:

English
James Stevenson
17 September 2025
Deepfakes, misinformation, AI slop... these are all examples of AI producing content that can -and does- manipulate, persuade, or trick people in one way or another. This book takes a look at AI from another perspective: by taking insights from offensive security, LLM Red Teaming, and penetration testing, we learn about how AI itself is tricked and deceived - providing us with practical tools to test modern AI systems.

This book will walk you through the fundamentals of LLM internals, discuss important ethical and philosophical frameworks to work under, provide taxonomies and templates for testing LLM systems, and outline automated approaches to LLM Red Teaming and example environments for testing your skills.

With its focus on LLM red teaming, this book is written for security practitioners looking to get into LLM Red Teaming, individuals interested in better understanding how LLMs tick, or as a pocket guide for AI security researchers.

Built off of the author's experience from working on a PhD at the intersection of social science and machine learning, and training in AI Red Teaming, machine learning, and artificial intelligence ethics and human rights.

AI is a tool - Just like any tool it can be used to build cities, or break them down. Choose the former.
Imprint:   James Stevenson
Country of Publication:   United Kingdom
Dimensions:   Height: 229mm,  Width: 152mm,  Spine: 6mm
ISBN:   9781036935795
ISBN 10:   1036935795
Pages:   106
Publication Date:  
Audience:   Professional and scholarly ,  Undergraduate
Format:   Paperback
Publisher's Status:   Active

See Also