Foxglove protects your art before you post it. It strips tracking metadata that platforms use to profile you, embeds a hidden ownership watermark into the pixel data, and applies adversarial perturbation that corrupts AI training. Everything runs on your device. Nothing leaves your phone until you choose to share it.
Adversarial poisoning targets how AI learns, not how it sees. When your poisoned images end up in a training dataset, the perturbations corrupt what the model learns from them — teaching it the wrong patterns. This is the same principle behind tools like Nightshade from the University of Chicago. It's not about blocking AI from viewing your image today — it's about polluting the training pipeline.
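To make the idea concrete, here is a toy sketch of the classic fast gradient sign method (FGSM), the gradient-based attack family that tools like Nightshade build on. Foxglove's actual perturbation algorithm isn't described here, so the linear "model", its weights, and the epsilon budget below are purely illustrative assumptions:

```python
# Toy FGSM sketch (illustrative only; not Foxglove's real algorithm).
# Idea: nudge each pixel a tiny, bounded amount in the direction that most
# increases a model's loss, so the change is visually negligible but
# maximally disruptive to what the model learns.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(pixels, weights, epsilon=2.0):
    """Shift each pixel by +/- epsilon along the sign of the loss gradient.

    For a toy linear score s = sum(w_i * x_i), the gradient of s with
    respect to pixel x_i is simply w_i, so the direction is sign(w_i).
    """
    return [max(0.0, min(255.0, x + epsilon * sign(w)))
            for x, w in zip(pixels, weights)]

# Six "pixels" and a made-up model's weights (both assumptions).
pixels  = [120.0, 45.0, 200.0, 33.0, 90.0, 180.0]
weights = [0.5, -1.2, 0.3, 0.0, -0.7, 0.9]

poisoned = fgsm_perturb(pixels, weights, epsilon=2.0)
clean_score    = sum(w * x for w, x in zip(weights, pixels))
poisoned_score = sum(w * x for w, x in zip(weights, poisoned))

# No pixel moved by more than 2 (out of 255), yet the score shifted.
print(clean_score, poisoned_score)
```

The key property is the bound: every pixel changes by at most epsilon, which keeps the image looking unchanged to a human while still pushing the model's objective in the wrong direction.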
If you upload a poisoned image to ChatGPT and it still describes the picture perfectly, that's expected. No poisoning tool — including Nightshade — will make a chatbot fail to describe your image. That test exercises inference (reading), not training (learning). Poisoning works when your art is scraped into training data: the perturbation corrupts the learning process, not the viewing process. These are fundamentally different things.
Metadata strip — removes EXIF, GPS, camera data, and tracking info that platforms and scrapers use to identify and profile you.
Hidden watermark — your name and message are steganographically embedded in the pixel data. Verify it anytime using the Verify tab. Best preserved in PNG format.
Adversarial perturbation — pixel-level noise designed to corrupt AI model training. This is the primary defense against scrapers and persists through compression.
Batch processing — foxglove up to 20 images at once. Add your visible watermark, hidden message, and perturbation to an entire carousel before you post.
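The metadata strip in the list above is possible because, in formats like JPEG, EXIF and GPS data live in dedicated marker segments that sit apart from the compressed pixel data. Foxglove's exact implementation isn't shown here; this stdlib-only sketch demonstrates the principle of dropping those segments from a JPEG byte stream:

```python
# Sketch of JPEG metadata stripping (illustrative, not Foxglove's code).
# EXIF/XMP live in APP1 (0xFFE1), IPTC in APP13 (0xFFED), free-text
# comments in COM (0xFFFE). All can be dropped without touching the
# compressed image data, which begins at the SOS marker (0xFFDA).
import struct

STRIP = {0xE1, 0xED, 0xFE}  # APP1 (EXIF/XMP), APP13 (IPTC), COM

def strip_jpeg_metadata(data: bytes) -> bytes:
    assert data[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data):
        if data[i] != 0xFF:
            raise ValueError("corrupt marker stream")
        marker = data[i + 1]
        if marker == 0xDA:            # SOS: entropy-coded pixel data follows
            out += data[i:]           # copy the rest of the file verbatim
            break
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        segment = data[i:i + 2 + length]
        if marker not in STRIP:       # keep JFIF header, quant tables, etc.
            out += segment
        i += 2 + length
    return bytes(out)
```

For real files you would reach for a maintained library or exiftool; the point here is only that metadata removal is lossless for the pixels, because the image data itself is never re-encoded.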
Your hidden message is embedded across the image using block-level encoding designed to survive compression. It works best when saved as PNG. Social media platforms may strip the hidden message through aggressive recompression, but the adversarial perturbation — the part that actually poisons AI training — persists.
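Foxglove's block-level encoding isn't published, but the two properties described above can be sketched in a toy: hide each message bit in the least significant bit of pixel values, and repeat it across a block of pixels so a majority vote can still recover it after some pixels are disturbed. The redundancy factor and the majority-vote decoder below are illustrative assumptions, not the app's real scheme:

```python
# Toy block-redundant LSB steganography (illustrative only).
# Each message bit is written into the LSB of REDUNDANCY pixels; decoding
# takes a majority vote per block, so flipping a minority of pixels
# (e.g. mild recompression noise) does not destroy the message.

REDUNDANCY = 9  # one message bit spans a "block" of 9 pixels (assumption)

def to_bits(msg: bytes):
    return [(byte >> k) & 1 for byte in msg for k in range(7, -1, -1)]

def from_bits(bits):
    return bytes(sum(b << (7 - k) for k, b in enumerate(bits[i:i + 8]))
                 for i in range(0, len(bits), 8))

def embed(pixels, msg: bytes):
    bits = to_bits(msg)
    assert len(bits) * REDUNDANCY <= len(pixels), "image too small"
    out = list(pixels)
    for i, bit in enumerate(bits):
        for j in range(REDUNDANCY):                # one block per bit
            p = i * REDUNDANCY + j
            out[p] = (out[p] & ~1) | bit           # overwrite the LSB
    return out

def extract(pixels, n_bytes):
    bits = []
    for i in range(n_bytes * 8):
        block = pixels[i * REDUNDANCY:(i + 1) * REDUNDANCY]
        votes = sum(p & 1 for p in block)
        bits.append(1 if votes > REDUNDANCY // 2 else 0)  # majority vote
    return from_bits(bits)
```

This also shows why PNG is recommended: PNG is lossless, so the LSBs survive exactly, whereas aggressive lossy recompression can flip enough bits per block to defeat the vote.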