C2PA & Content Credentials Explained: The New Standard for Authentic Media (2026)
If you've seen the small "CR" pin appear on an image in Adobe Photoshop, the iPhone Photos app, or a recent New York Times photo essay, you've already encountered C2PA. The specification went from a niche industry standard in 2023 to a mainstream provenance system by 2025, and in 2026 it's the closest thing the open web has to "verified" media.
This guide is for anyone trying to understand what Content Credentials actually do — without jargon, but without dumbing the cryptography down past the point of being useful. We'll cover what C2PA is, how the signing chain works, which tools support it, where the gaps still are, and how to verify and read manifests in your own workflow.
What C2PA actually is
C2PA stands for the Coalition for Content Provenance and Authenticity. It's a Joint Development Foundation project, maintained by an industry consortium that includes Adobe, Microsoft, Intel, BBC, Sony, Leica, Canon, OpenAI, Google, Meta, Truepic, and a couple dozen others. The C2PA specification defines a standard format for content credentials — cryptographically signed metadata embedded in image, video, audio, and document files.
Content Credentials is the consumer-facing brand name (with a "CR" pin icon) that the Content Authenticity Initiative (Adobe-led) uses for the user-visible part of the C2PA system. Same underlying technology; one is the spec name, one is the brand.
When you see a CR pin on an image:
- Click it (or upload the file to contentcredentials.org/verify)
- You'll see a manifest showing where the image came from, what device or app made it, and every editing operation applied to it
- Each item in the manifest is cryptographically signed by the entity that did it
The signing matters. The manifest can't be forged without breaking the signature chain, and the signature chain ties back to a publicly-known root authority. So unlike EXIF (which can be edited freely), Content Credentials provide verifiable provenance.
How the cryptographic signing works (without the math)
The mental model:
- When a C2PA-aware tool creates or edits an image, it writes a claim into the file's manifest: "I, Adobe Photoshop version 25.3, on this date, opened the file from Sony's a7-IV camera and applied these specific edits."
- The tool signs that claim with its private key — a key that's owned by the tool's vendor and whose certificate is signed by a trusted certificate authority.
- The next tool that touches the file does the same: appends a new claim, signs it, and includes a reference to the previous claim's signature.
- By the time the file reaches you, the manifest is a chain of signed claims from origin to current state.
To verify, your viewer (the website, app, or CLI tool) walks the chain backwards, checking each signature against the public key of the entity that made the claim, and verifying that each entity's certificate traces to a trusted root.
If any link in the chain is broken — bad signature, expired certificate, untrusted root — the verifier reports that. If the manifest has been tampered with, the signature won't validate. If the file has been edited by a tool that didn't write a manifest entry, the verifier can detect that the claimed history doesn't match the file as-is.
It's PKI applied to images. Same fundamental machinery as SSL/TLS, code signing, or HTTPS — just embedded in image, video, and audio file containers.
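The chain walk can be sketched in a few lines of Python. This is a toy model only: it uses an HMAC with pre-shared keys as a stand-in for the real X.509 certificates and COSE signatures, but the append-and-verify structure is the same — each claim's signature covers the previous claim's signature, so tampering anywhere breaks everything downstream.

```python
import hashlib
import hmac

# Toy stand-in: each "tool" signs with a pre-shared HMAC key.
# Real C2PA uses per-vendor X.509 certificates and COSE signatures.
KEYS = {"camera": b"camera-secret", "editor": b"editor-secret"}

def sign_claim(tool, claim, prev_sig):
    """Append a claim; the signature covers the claim AND the previous signature."""
    payload = (claim + prev_sig).encode()
    sig = hmac.new(KEYS[tool], payload, hashlib.sha256).hexdigest()
    return {"tool": tool, "claim": claim, "prev": prev_sig, "sig": sig}

def verify_chain(chain):
    """Walk the chain, re-deriving each signature and checking the linkage."""
    prev = ""
    for entry in chain:
        payload = (entry["claim"] + entry["prev"]).encode()
        expected = hmac.new(KEYS[entry["tool"]], payload, hashlib.sha256).hexdigest()
        if entry["prev"] != prev or not hmac.compare_digest(entry["sig"], expected):
            return False
        prev = entry["sig"]
    return True

chain = [sign_claim("camera", "captured raw frame", "")]
chain.append(sign_claim("editor", "cropped, exposure +0.3", chain[-1]["sig"]))

print(verify_chain(chain))   # True — every link checks out
chain[0]["claim"] = "something else entirely"
print(verify_chain(chain))   # False — tampering broke the first signature
```

Swap the HMAC for asymmetric signatures backed by a certificate chain and you have the essence of C2PA's claim model.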
What's inside a manifest
A typical Content Credentials manifest contains:
- Capture device (if the image was photographed) — make, model, sometimes serial number
- Capture timestamp
- Capture location (only if the user opted in; often omitted for privacy)
- AI generation flag — explicit boolean indicating "this content was generated or substantially modified by AI"
- AI model attribution — which generative model produced the content (e.g., "Adobe Firefly v3", "OpenAI DALL-E 4", "Stable Diffusion XL via local install")
- Edit history — a list of operations: cropped, color-corrected, retouched, generative-fill applied, etc.
- Editing actor identity — the person, account, or tool that performed each edit (often pseudonymous)
- Asset hash — a cryptographic hash of the image as it stood at each manifest checkpoint
- Signing chain — the public-key certificates and signature blocks proving each claim is genuine
The spec is extensible. Some manifests carry additional fields specific to a producer — for example, news organizations sometimes attach a "newsroom validated" assertion, and stock photo platforms attach a "marketplace verified" assertion.
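As a rough illustration, a manifest carries data shaped like the following. The field names here are simplified for readability and are not the actual schema — a real manifest groups this information into labeled assertions (such as c2pa.actions and c2pa.hash.data) plus a COSE claim-signature block, as defined in the C2PA specification:

```python
import json

# Illustrative shape only — simplified field names, not the real C2PA schema.
manifest = {
    "claim_generator": "Adobe Photoshop 25.3",
    "capture": {
        "device": "Sony α7 IV",
        "timestamp": "2026-03-12T14:23:07Z",
        # location omitted — user did not opt in
    },
    "ai_generated": False,
    "actions": [
        {"action": "c2pa.opened"},
        {"action": "c2pa.cropped"},
        {"action": "c2pa.color_adjustments"},
    ],
    "asset_hash": "sha256:<hash of the image at this checkpoint>",
    "signature": "<COSE signature + X.509 certificate chain>",
}

print(json.dumps(manifest, ensure_ascii=False, indent=2))
```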
Which tools support C2PA in 2026
Image generators that embed manifests by default:
- OpenAI DALL-E (via web app, API, ChatGPT integration) — embeds C2PA manifest with ai_generated: true
- Adobe Firefly — embeds manifest with model version
- Microsoft Designer / Bing Image Creator — embeds manifest
- Google Imagen (via Gemini, Vertex AI in 2025+) — embeds with SynthID watermark and C2PA manifest
- Sora (OpenAI's video model, 2025+) — embeds video-level manifest
Image generators that do NOT embed manifests by default (as of mid-2026):
- Midjourney — has stated commitment to add support but not shipped yet
- Stable Diffusion (local installs and most hosted services) — no default manifest; must be added via post-processing
- Flux (Black Forest Labs) — no default manifest
Cameras that sign images at capture:
- Sony α-series and FX-series (firmware updates from 2024 added C2PA)
- Leica M11-P (the first commercial camera to ship with C2PA built in, 2023)
- Canon R5 II and R1 (firmware support added 2024-2025)
- Nikon Z9 / Z8 (firmware support added 2025)
- Several phone makers have added partial support, including the Samsung Galaxy S25 and Google Pixel 10
Editing tools that maintain or extend manifests:
- Adobe Photoshop, Lightroom, Premiere Pro (full support since 2023)
- Capture One (added 2024)
- DaVinci Resolve (added 2025)
- Microsoft Word, Excel, PowerPoint (basic support for image insertion since 2024)
Platforms that display Content Credentials publicly:
- LinkedIn (2024+) — automatic display on uploaded images
- Adobe Behance — automatic
- The New York Times, Reuters, BBC News, AFP — selective on news photos
- Truepic and several stock photo marketplaces — automatic
Platforms that strip manifests (notable gaps as of 2026):
- Most social platforms still re-encode images on upload, which strips the manifest unless they specifically preserve it
- Twitter/X, TikTok, Meta platforms — partial preservation but inconsistent
- Most messaging apps — manifests survive on direct file shares but are stripped by media compression
This is the biggest practical limitation in 2026: an image that was signed at creation often loses its manifest by the time it reaches the average viewer because intermediate platforms re-encoded it.
How to verify a manifest yourself
Three options:
Web-based (zero setup):
- Drag any image into contentcredentials.org/verify
- The official verifier from the Content Authenticity Initiative
- Works for images, videos, and audio
- No login required, free, no rate limits for normal use
Command-line:
```bash
brew install c2patool   # macOS
# or download from github.com/contentauth/c2patool
c2patool image.jpg
```

The CLI returns the full manifest as JSON, including signing-chain details. Useful for automation, batch processing, or pipeline integration.
Programmatic:
- Python: pip install c2pa-python — full SDK with signing and verification
- Rust: c2pa-rs (the canonical implementation; the C and other-language SDKs wrap this)
- JavaScript / Node: the c2pa package — for browser and server-side use
- Mobile SDKs: iOS and Android SDKs available from the Content Authenticity Initiative
A typical verification call looks like (Python):
```python
import json

from c2pa import Reader

with open('image.jpg', 'rb') as f:
    reader = Reader.from_stream('image/jpeg', f)
    # reader.json() returns the manifest store as a JSON string
    store = json.loads(reader.json())
    print(store['manifests'][reader.active_manifest()])
```

The returned object includes the full signing chain, edit history, AI flags, and validation status.
Where C2PA fits in your detection workflow
C2PA is one layer of provenance — and an important one — but it doesn't replace AI detection. The two are complementary:
Use C2PA when:
- You need cryptographic certainty about an image's origin
- The image came from a source likely to have signed it (a major newsroom, a professional photographer, a generator that includes manifests)
- You need to prove provenance for legal, journalism, or compliance contexts
Fall back to AI detection when:
- The image has no manifest (most images on the open web today)
- The manifest has been stripped by an intermediate platform
- You need to detect AI-generated content from generators that don't sign (Midjourney, Stable Diffusion, Flux)
Our pillar guide on detecting AI-generated images walks through the full five-layer detection stack with C2PA as the first layer. The TL;DR: check C2PA first; if there's a manifest, you have your answer; if not, run a detection API.
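That decision flow is easy to encode. In this sketch, read_manifest and detect_ai are hypothetical stand-ins — in a real pipeline you would swap in the c2pa SDK and your detection API client:

```python
from typing import Optional

def read_manifest(path: str) -> Optional[dict]:
    """Return the C2PA manifest if the file carries a valid one, else None."""
    return None  # stub: most open-web images arrive with no manifest

def detect_ai(path: str) -> float:
    """Return an AI-likelihood score from a detection model."""
    return 0.5  # stub

def classify(path: str) -> str:
    manifest = read_manifest(path)
    if manifest is not None:
        # Layer 1: cryptographic provenance answers the question outright.
        return "ai-generated" if manifest.get("ai_generated") else "authentic"
    # No manifest (or it was stripped): fall back to statistical detection.
    score = detect_ai(path)
    return "likely-ai" if score > 0.8 else "inconclusive"

print(classify("photo.jpg"))  # stub finds no manifest → "inconclusive"
```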
For developers building pipelines that handle both cases, our AI Image Detector API endpoint returns C2PA verification results alongside the AI-detection score in a single response, so you don't have to make two calls.
What still needs to improve
A handful of practical limits remain in mid-2026:
Adoption gaps in major image generators. Midjourney (the largest by revenue) and Stable Diffusion / Flux (the largest by open-source ecosystem) still don't sign their outputs. Until they do, "no manifest" remains an ambiguous signal.
Platform stripping. Most social platforms still re-encode images on upload, which destroys C2PA manifests unless the platform takes specific steps to preserve them. Meta, X, and TikTok have all announced support but rolled it out unevenly.
Reader adoption. The CR pin is now visible in major Adobe products, LinkedIn, and a few news sites. It's still absent from most consumer apps, browsers, and mobile photo viewers. Until reading is universal, signing is half a system.
Identity binding. A C2PA manifest tells you the tool that signed an image. It often doesn't tell you the human operating that tool. For attribution-grade provenance — "this image was taken by this specific photographer at this specific event" — you still need additional identity systems on top.
Privacy tradeoffs. Manifests are signed records of editing history. For some users (whistleblowers, abuse victims, journalists' sources) that history is sensitive. The spec includes mechanisms for redaction and pseudonymous signing, but those are not yet universally adopted by tools.
The arms race. A bad actor can publish a fake image without a manifest. Or take a real image, edit it adversarially, and publish without re-signing. Or strip the manifest entirely. C2PA is a positive identification system — "this image is what it claims to be" — not a negative one. It can't tell you that an unsigned image is fake.
A typical real-world example
A Reuters photographer takes a photo of a public event with a Sony α7-IV that supports C2PA. The camera signs the file at capture: "Sony α7-IV, serial number ..., timestamp 2026-03-12 14:23:07 UTC, no edits."
The photographer transfers the file to Adobe Lightroom, which appends a new manifest entry: "Adobe Lightroom 12.4.1, applied exposure +0.3 stop, white balance correction, crop." Both manifest entries are signed with their respective tools' keys.
Reuters' content management system imports the file, verifies the chain, and adds a Reuters newsroom claim: "Verified Reuters news content, photographer ID 4523." Reuters publishes to their wire service.
A news consumer sees the image embedded in a Reuters article. They click the CR pin, which calls the contentcredentials.org verifier. The full chain — Sony → Adobe → Reuters — is displayed, all signatures valid.
Now: the same image gets reposted to Facebook. Facebook re-encodes the image and strips the manifest. Now the image circulates without provenance, and a malicious actor adds a fabricated caption claiming the image shows something it doesn't. Anyone trying to verify it now has to fall back on layer-2-through-5 detection methods because the chain has been broken.
That's the reality in 2026. C2PA works perfectly when intact and end-to-end. The middle of the open web breaks the chain frequently. Both halves of the workflow — provenance and detection — are needed.
Frequently asked questions
Is C2PA the same as digital watermarking?
No, they're complementary. C2PA is signed metadata in the file format. Watermarking embeds an imperceptible signal directly in the pixels of an image, designed to survive re-encoding and editing. Some generators (notably Google's Imagen with SynthID) embed both. Watermarks are weaker than C2PA when present (no cryptographic signing) but more robust to platform re-encoding because the signal is in the pixels, not the metadata.
Can C2PA manifests be forged?
Forging the signature requires the private key of the signing entity. With current cryptography, that's effectively impossible. What's not prevented is creating a new manifest from scratch claiming false history — but that fake manifest will be signed by an untrusted entity and won't validate against the trusted root certificate authority.
Does C2PA work for video?
Yes. Video and audio are first-class formats in the spec. Video manifests can include per-segment editing history and a signed timestamp for each segment.
Will my photos be signed automatically?
Depends on your camera or app. As of 2026, several pro and consumer cameras and most major editing software do, but not all. Check your specific device or app's documentation, or look for the CR pin in your device's image-export options.
How do I add C2PA support to my own app?
Use the official SDKs (Rust, Python, JavaScript, mobile). The signing flow requires a certificate from a trusted CA — Adobe, DigiCert, and several others issue C2PA-compatible certificates for production use. Test certificates are available free for development.
Where can I learn more?
The C2PA specification itself is at c2pa.org/specifications. The Content Authenticity Initiative (Adobe-led, broader scope) is at contentauthenticity.org. Both have detailed technical and policy documentation.
C2PA in 2026 is real, growing, and useful — but not yet sufficient on its own. It's the strongest provenance system the industry has, and it's the right place to start any verification workflow. Just don't stop there. For images without manifests, you still need detection. For high-stakes verification, you need both, plus context, plus human judgment.
If you're building a verification flow, our AI Image Detector API returns C2PA verification and AI detection in a single call — so your pipeline doesn't have to differentiate between manifest-bearing and unsigned images. Free tier covers 500 checks per month with no credit card.
Try the AI Image Detector API
500 free scans per month. No credit card. Sub-100ms detection with model attribution and region heatmaps.
Get an API key →