Red Teaming AI
A Field Manual for Attacking Intelligent Systems
- Publisher's list price GBP 69.99
31 600 Ft (30 095 Ft + 5% VAT)
The price is an estimate because at the time of ordering we do not know what conversion rate will apply between HUF and the product currency when the book arrives. If the HUF is weaker, the price increases slightly; if the HUF is stronger, the price decreases slightly.
- Discount 13% (approx. 4 108 Ft off)
- Discounted price 27 492 Ft (26 183 Ft + 5% VAT)
Subscribe now and take advantage of a favourable price.
Availability
Not yet published.
Why don't you give an exact delivery time?
Delivery times are estimated from our previous experience. We can give estimates only, because we order from outside Hungary and the delivery time depends mainly on how quickly the publisher supplies the book. Both faster and slower deliveries happen, but we do our best to supply the book as quickly as possible.
Product details:
- Publisher No Starch Press
- Date of Publication 28 July 2026
- ISBN 9781718504721
- Binding Paperback
- No. of pages 500
- Size 235x178 mm
- Language English
Long description:
AI is no longer a futuristic concept; it's embedded in critical systems shaping finance, healthcare, infrastructure, and national security. But with this power comes unprecedented risk. Red Teaming AI arms you with the mindset, methodology, and tools to proactively test and secure intelligent systems before real adversaries exploit them. Written for security professionals, researchers, and AI practitioners, this field manual goes beyond theory. You'll learn how to map the new AI attack surface, anticipate adversarial moves, and simulate real-world threats to uncover hidden vulnerabilities.

You'll learn how to:
- Think in graphs, not checklists: trace attack paths through interconnected AI components, data pipelines, and human interactions
- Poison the well: explore how adversaries corrupt training data to implant backdoors and erode model integrity
- Fool the oracle: craft evasion attacks that manipulate AI perception at decision time
- Hijack conversations: execute prompt injection attacks that turn Large Language Models into insider threats
- Steal the brain: probe for model extraction and privacy attacks that compromise valuable IP
- Conduct full-spectrum campaigns: use the STRATEGEMS framework and the AI Kill Graph to plan, execute, and report professional-grade red team engagements

Traditional security methods can't keep up with adversarial AI. From manipulated financial agents to compromised autonomous vehicles, real-world failures have already caused billions in losses and threatened lives. Red Teaming AI equips you to meet this challenge with practical techniques grounded in real attack scenarios and cutting-edge research.