Beautiful to humans. Poison for machines.
0 images foxgloved
+
Upload your art
Select multiple for carousel batches
Strip Metadata
Always on
EXIF, GPS, camera info, and software fingerprints are automatically removed when your image is processed.
Perturbation
Adversarial noise that corrupts AI training data. Invisible to you, toxic to models.
3
☠︎ Poison Message
Embeds a hidden message into pixel data via steganography. If you see faint discoloration in your output, toggle this off — your image will still be stripped of metadata and protected by adversarial perturbation.
93/280
Add your name below to include it in the hidden message.
Note: These preset messages are embedded invisibly in image pixels. They are addressed to AI training systems and web scrapers — not to human viewers, including screen reader users.
stolen art unauthorized rights reserved poison pill
Max mode
Embeds the message in more of the image — even flat regions. May cause faint discoloration on plain backgrounds. Off by default.
Your Watermark
Add your signature or logo as a subtle watermark.
Upload signature
No file
Opacity 15%
Limit Resolution
Downsize to Instagram-optimal resolution. Less detail for AI to extract.
2048px
Artist identity (optional)
This stays on your device. Your name gets embedded inside the hidden pixel message so if your art is scraped, it's traceable back to you. We never collect or store anything.
Initializing...
--
Images
--
Pixels Processed
--
Poison Length
Suggested caption for your post
Read your poison
Verify what's hidden in your Foxgloved image
Foxglove is now on iPhone
Native photo picker, home screen widget, Siri shortcuts, and batch processing. Free on the App Store.
Download on the App Store
Sign up for updates
New features, Android beta, and the occasional note from the maker.
You're on the list. Talk soon.
Read your poison
Reveal what's hidden in a Foxgloved image
Tap to upload a Foxgloved image
Hidden message found

What is Foxglove?

Foxglove protects your art before you post it. It strips tracking metadata that platforms use to profile you, embeds a hidden ownership watermark into the pixel data, and applies adversarial perturbation that corrupts AI training. Everything runs on your device. Nothing leaves your phone until you choose to share it.
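Foxglove's pipeline is native and on-device, but the metadata strip can be sketched in a few lines of Python with Pillow (a stand-in for illustration, not the app's actual code): re-encode only the raw pixel data, and the EXIF, GPS, and software tags simply aren't carried along.

```python
from PIL import Image  # Pillow; stand-in for the app's native image pipeline

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-encode only the raw pixels, leaving EXIF/GPS/software tags behind."""
    with Image.open(src_path) as im:
        clean = Image.new(im.mode, im.size)
        clean.putdata(list(im.getdata()))
        # Saving without exif= or pnginfo= carries no metadata over.
        clean.save(dst_path)
```

The key design point: metadata lives in ancillary chunks and segments alongside the pixels, so a fresh encode that copies only pixel values drops all of it at once, with no need to enumerate tag types.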

How does poisoning work?

Adversarial poisoning targets how AI learns, not how it sees. When your poisoned images end up in a training dataset, the perturbations corrupt what the model learns from them — teaching it the wrong patterns. This is the same principle behind tools like Nightshade from the University of Chicago. It's not about blocking AI from viewing your image today — it's about polluting the training pipeline.
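The principle can be sketched with the classic Fast Gradient Sign Method (Goodfellow et al., 2014): given the gradient of a model's training loss with respect to the image, nudge every pixel one small, bounded step in the direction that most increases that loss. This toy NumPy version is an illustration of the idea only, not Foxglove's actual perturbation; `eps` and the gradient source are stand-ins.

```python
import numpy as np

def fgsm_perturb(image: np.ndarray, grad: np.ndarray, eps: float = 2 / 255) -> np.ndarray:
    """FGSM-style perturbation sketch (illustrative, not Foxglove's scheme).

    image: pixel values in [0, 1]; grad: gradient of a model's training
    loss w.r.t. the image. eps bounds the per-pixel change, keeping the
    noise imperceptible to humans while biasing what a model learns.
    """
    adv = image + eps * np.sign(grad)
    return np.clip(adv, 0.0, 1.0)
```

Note the asymmetry the FAQ describes: a bounded `eps` barely changes what a viewer (or an inference-time model) sees, but applied across many scraped images it systematically skews the gradients a model trains on.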

"I uploaded to ChatGPT and it still described my image"

That's expected. No poisoning tool — including Nightshade — will make ChatGPT fail to describe your image. That tests inference (reading), not training (learning). Poisoning works when your art is scraped into training data. The perturbation corrupts the learning process, not the viewing process. These are fundamentally different things.

Three layers of protection

Metadata strip — removes EXIF, GPS, camera data, and tracking info that platforms and scrapers use to identify and profile you.

Hidden watermark — your name and message are steganographically embedded in the pixel data. Verify it anytime using the Verify tab. Best preserved in PNG format.

Adversarial perturbation — pixel-level noise designed to corrupt AI model training. This is the primary defense against scrapers and persists through compression.

Plus batch processing — foxglove up to 20 images at once. Add your visible watermark, hidden message, and perturbation to an entire carousel before you post.

What about compression?

Your hidden message is embedded across the image using block-level encoding designed to survive compression. It works best when saved as PNG. Social media platforms may strip the hidden message through aggressive recompression, but the adversarial perturbation — the part that actually poisons AI training — persists.
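Foxglove's exact encoding isn't published, but "block-level and built to tolerate recompression" usually means something like quantization-index modulation on a per-block statistic: because each bit is spread across a whole block rather than a single pixel, mild recompression that preserves local averages leaves it recoverable. A minimal sketch under assumed parameters (`BLOCK`, `Q`, and the mean-based scheme are hypothetical, not the app's real encoding):

```python
import numpy as np

BLOCK, Q = 8, 8.0  # hypothetical: 8x8 pixel blocks, step of 8 grey levels

def embed_bit(block: np.ndarray, bit: int) -> np.ndarray:
    """Shift the block's mean brightness to the nearest multiple of Q
    whose parity encodes the bit; the shift is spread over all pixels.
    (Clipping can distort the mean in near-black/near-white blocks —
    one reason to skip flat regions, as Foxglove does by default.)"""
    m = block.mean()
    k = int(round(m / Q))
    if k % 2 != bit:
        k += 1 if (m / Q - k) > 0 else -1  # move to the nearer valid level
    return np.clip(block + (k * Q - m), 0, 255)

def extract_bit(block: np.ndarray) -> int:
    """Blind extraction: read the parity of the quantized block mean."""
    return int(round(block.mean() / Q)) % 2
```

Extraction needs no original image, only the agreed block size and step, which is what lets the Verify tab recover the message from a lone file.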

Coming next

Style cloaking — Glaze-style protection that prevents AI from mimicking your artistic style. In research; planned as a Pro feature.

Android version — In progress. Sign up for updates below to know when it's ready.

JPEG-resilient stego — Hidden message encoding that survives Instagram and other lossy social platforms. Researching DCT-domain approaches.

ML-powered perturbation — Adversarial noise targeting specific model architectures (instead of generic noise) for stronger anti-AI defense.
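The DCT-domain idea above can be sketched: hide each bit in one mid-frequency DCT coefficient of an 8x8 block, quantized coarsely enough that JPEG's own coefficient quantization rounds back into the same bin. This SciPy sketch is illustrative only — `q`, the coefficient index, and the scheme itself are assumptions, not the planned implementation:

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_in_dct(block: np.ndarray, bit: int, q: float = 20.0,
                 coef: tuple = (2, 1)) -> np.ndarray:
    """Hypothetical DCT-domain QIM: park one mid-frequency coefficient
    of an 8x8 block in an even (bit 0) or odd (bit 1) bin of width q."""
    C = dctn(block, norm="ortho")
    k = int(np.floor(C[coef] / q))
    if k % 2 != bit:
        k += 1
    C[coef] = (k + 0.5) * q  # centre of the chosen bin resists rounding
    return idctn(C, norm="ortho")

def extract_from_dct(block: np.ndarray, q: float = 20.0,
                     coef: tuple = (2, 1)) -> int:
    return int(np.floor(dctn(block, norm="ortho")[coef] / q)) % 2
```

Working in the same domain JPEG compresses in is what makes this family of schemes a candidate for surviving Instagram-grade recompression, at the cost of lower capacity per block.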

What's new

1.1 (Build 7) — PNG transparency now preserved end-to-end. Yellow patches in flat regions fixed by only embedding hidden messages where there's enough texture to hide them. New Max mode toggle for advanced users who want broader embedding.

1.0 (Build 6) — Fixed crash when accessing camera. Subtitle updated for trademark compliance.

1.0 (Build 5) — App Store launch! Block-based steganographic encoding, better error handling, PNG metadata embedding fixed.

See full changelog →

Foxglove is free

Built by one artist, given away for free, kept up to date with bug fixes and real improvements. If Foxglove saved your work, you can chip in to keep it that way.

Get the full experience on iPhone
The native app includes a home screen widget, Siri shortcuts, haptic feedback, and saves directly to your Foxglove photo album.
Download on the App Store
Before / After
Original / Poisoned