Hacker News | madrox's comments

This seems like a great time to mention C2PA, a specification for positively affirming image sources. OpenAI participates in this, and if I load an image I had AI generate in a C2PA Viewer it shows ChatGPT as the source.

Bad actors can strip sources out so it's a normal image (that's why it's positive affirmation), but eventually we should start flagging images with no source attribution as dangerous the way we flag non-https.

Learn more at https://c2pa.org
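To make the "positive affirmation" idea concrete, here is a toy sketch of signed provenance: a generator attaches a claim ("this came from model X") and signs the image bytes and claim together, so any later edit invalidates the signature. This is not the real C2PA manifest format (which uses JUMBF boxes and X.509 certificate chains); the key and claim fields here are invented for illustration.

```python
import hashlib
import hmac
import json

# Stand-in for the generator's private signing key (real C2PA uses
# X.509 certificates chained to a trust list, not a shared secret).
SIGNING_KEY = b"demo-key"

def attach_claim(image: bytes, generator: str) -> dict:
    """Build a toy provenance manifest binding the image to its source."""
    claim = {
        "generator": generator,
        "digest": hashlib.sha256(image).hexdigest(),
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    sig = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "signature": sig}

def verify_claim(image: bytes, manifest: dict) -> bool:
    """True only if the image bytes and the claim are both untouched."""
    claim = manifest["claim"]
    if claim["digest"] != hashlib.sha256(image).hexdigest():
        return False  # image was altered after signing
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["signature"])
```

Note the asymmetry the parent comment describes: a bad actor can always delete the manifest entirely, which is why the scheme can only positively affirm a source, never prove the absence of one.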


> but eventually we should start flagging images with no source attribution as dangerous the way we flag non-https.

Yes, let's make all images proprietary and locked behind big tech signatures. No more open-source image editors or open hardware.


C2PA is actually an open protocol, à la SMTP. The whole spec is at https://spec.c2pa.org/, available for anyone to implement.

The standard itself being open is irrelevant. I'm not sure why this is always brought up for attestation standards. It is fundamentally impossible to trust the signature from open-source software or hardware, so a signature from open-source software is essentially the same as no signature.

The need for a trusted entity is even mentioned in your specification under the "attestation" section: https://spec.c2pa.org/specifications/specifications/1.4/atte...

So now, if we were to start marking all images that do not have a signature as "dangerous", you would have effectively created an enforcement mechanism in which the whole pipeline, from taking a photo to editing to publishing, can only be done with proprietary software and hardware.


We already have a centrally curated trust model in https. Browsers only treat connections as "secure" if they chain up to a root CA in their trust store. You can operate outside that system, but users will see warnings and friction. Some level of trust concentration isn’t new.

I'm curious whether you think this is worse than, or not as bad as, a best-case broad implementation of C2PA, especially if a Let's Encrypt-like entity assisted with signatures.


Why would the image itself have to be proprietary to have some new piece of metadata attached to it?

> Bad actors can strip sources out

I think the issue is that it's not just bad actors. It's every social platform that strips out metadata. If I post an image on Instagram, Facebook, or anywhere else, they're going to strip the metadata for my privacy. Sometimes the exif data has geo coordinates. Other times it's less private data like the file name, file create/access/modification times, and the kind of device it was taken on (like iPhone 16 Pro Max).

Usually, they strip out everything, and that's likely to include C2PA unless they start whitelisting it to be kept, or even using it to flag images on their site as AI.

But for now, it's not just bad actors stripping out metadata. It's most sites that images are posted on.
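The stripping the comment describes mostly happens at the JPEG container level: EXIF and XMP live in APP1 segments, and platforms re-emit the file without them. A minimal stdlib sketch of the idea (a toy marker walker, not a production sanitizer; real pipelines usually re-encode the image entirely):

```python
def strip_app1(jpeg: bytes) -> bytes:
    """Drop APP1 (0xFFE1) segments, where EXIF/XMP metadata lives."""
    assert jpeg[:2] == b"\xff\xd8", "not a JPEG (missing SOI marker)"
    out = bytearray(jpeg[:2])
    i = 2
    while i < len(jpeg) - 1 and jpeg[i] == 0xFF:
        marker = jpeg[i + 1]
        if marker == 0xDA:  # Start of Scan: entropy-coded data follows
            out += jpeg[i:]
            return bytes(out)
        # Segment length field counts itself (2 bytes) plus the payload.
        length = int.from_bytes(jpeg[i + 2 : i + 4], "big")
        if marker != 0xE1:  # keep everything except APP1
            out += jpeg[i : i + 2 + length]
        i += 2 + length
    out += jpeg[i:]
    return bytes(out)
```

C2PA data is commonly embedded the same way (in a JUMBF APP11 segment), which is why a generic "strip all APPn metadata" pass takes the provenance manifest down with the geotags unless platforms special-case it.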


There’s actually a part of the NY state budget right now (TEDE part X, for my law nerds) that’d require social media companies to preserve non-PII provenance metadata and surface it to the user, if the uploaded image has it.

LinkedIn already does this (see https://www.linkedin.com/help/linkedin/answer/a6282984), and X's "made with AI" feature preserves the metadata but doesn't fully surface it (https://www.theverge.com/ai-artificial-intelligence/882974/x...)


You're implying social platforms aren't bad actors ;)

In seriousness, social platforms attributing images properly is a whole frontier we haven't even begun to explore, but we need to get there.


Yeah, OpenAI has been attaching C2PA manifests to all their generated images from the very beginning. Also, based on a small evaluation that I ran, modern ML based AI generated image detectors like OmniAID[1] seem to do quite well at detecting GPT-Image-2 generated images. I use both in an on-device AI generated image detector that I built.

[1]: https://arxiv.org/abs/2511.08423


What a dystopian, pro-tyranny ask. Horrifying.

The comments that aren't directly discussing the technical achievement here are bemoaning the destruction to society that AI generated images can cause, which is a fair criticism. I'm genuinely curious what you think the greater horror is. Or what a better solution might be.

Reddit blurs nsfw images by default. You can change that in settings. I don't see what it so terrible about the idea of doing this with untrusted image sources.


To ask for verification whether a photo is real or fake?

This is probably the fairest counter argument I’ve heard. One can hope that today’s AI will eventually be as cheap as a calculator, though.

I hope so too, though cheap is relative. One person's ordinary morning coffee is a full day's wage for someone else. If we could get decent models running on the laptops of most students, that would be the point where we could treat AI the way we treat a calculator or computer today.

Just to put things in context, https://www.bbc.com/news/articles/ce8444gex65o shares incomes for a good number of people nowadays. (Note that many of those workers are supporting a family of 2+ members most of the time.)


I remember a TI-89 being mandatory for my AP math classes (calculus and statistics). It was utterly essential for solving problems in a reasonable amount of time. There were programs available to assist families who couldn't afford one so their children wouldn't be left behind.

I like to think we'll figure this out.


AI in its current phase, definitely. However, we've been seeing the transformer architecture plateau over the last couple of years. There are still improvements, but open-source models are catching up.

I feel like at this point it’s an inevitability that given enough time, capable models will be cheap enough for everyone.


If poor students have capable models but rich students have much better models that go the extra mile for a great mark and do everything in a single prompt, it would still be unfair.

For it to be fair, you would not only need good free models, but actual parity between free models and the highest subscription tier the big AI companies can offer. And I don't think that will happen in the short or mid term future.


When I was in AP classes in high school, you were required to have a TI-89 calculator. If you couldn't afford one, there were assistance programs.

You were not allowed to use a TI-92, which was the next step up. It had built-in solvers for many kinds of problems.

I'm not saying this isn't a concern, but addressing financially based inequities in learning is, within certain bounds, not a new problem. There are established ways to deal with it. If we can get AI cheap enough that $100 covers a year of education, then we're in a good range.


That is my hope. At the same time, feels like a peak “don’t know what we don’t know” situation

Lots of social sites are facing this problem. It's nearly impossible to grow on Twitch without viewbotting: https://x.com/Reedjd/status/2028533060632010759 and Nikita is calling out Perplexity on X: https://x.com/nikitabier/status/2044902122995548330

The problem is that social platforms benefit from this behavior as long as it doesn't get too egregious. Bots contribute to metrics just as easily as real humans do, as long as investors and ad purchasers feel it's kept to manageable levels.

Nothing on social is organic anymore, and hasn't been since long before AI came around, which is why I welcome the AI slop era. It will accelerate us to the endgame: acknowledging how bad the problem really is and starting to clean it up.


I have thought exactly as you do for a long time. Recently a side project of mine blew up, and it was completely organic. I'm just a solo dev. No marketing budget at all. No PR team.

Made me realize that it's still possible for things to organically get big.

It's just way way harder now.


I think it's still possible, but to your point it is way harder. Not only that, but as a consumer I never know what is authentic.

A first-hand anecdote: I write music.

Ambient variety, you know, almost static drone, very niche style per se. Never did anything to promote it in any way. Just released it via my friend's digital label on a handful of platforms.

Never had more than ~100 listens a month, and never expected that to change and earn any substantial royalties.

One day, the friend calls and tells me he's willing to pay me a pretty penny, and replies to my bewilderment that just a single track from the whole album blew up, glitched the Matrix, and racked up some 10'000s of listens.

I investigated a little bit and found out that the track's title coincided with that of some other, much more popular and promoted band.

So I just happened to ride on those coattails.

Edit: removed extra zero in the number of listens :)


Got a link? I want to hear it.

Well, I don't want to make this look like a coordinated psyop in the vein of the topic, sorry)..

Fair enough. Sad times we live in.

> Opus 4.7 introduces a new xhigh (“extra high”) effort level

I hope we standardize on what effort levels mean soon. Right now it has big Spinal Tap "this goes to 11" energy.
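One way a standard could paper over the mismatch is a normalization shim that maps each vendor's named ladder onto a shared 0-to-1 scale. The ladders below are assumptions sketched from this thread and public API docs, not an authoritative list:

```python
# Hypothetical effort ladders, lowest to highest. "xhigh" for Anthropic
# comes from the Opus 4.7 announcement quoted above; the OpenAI names
# mirror its reasoning-effort levels. Both are illustrative assumptions.
EFFORT_LADDERS = {
    "anthropic": ["low", "medium", "high", "xhigh"],
    "openai": ["minimal", "low", "medium", "high"],
}

def normalized_effort(vendor: str, level: str) -> float:
    """Map a vendor-specific level name to a fraction of that vendor's max."""
    ladder = EFFORT_LADDERS[vendor]
    return ladder.index(level) / (len(ladder) - 1)
```

The punchline is that the same word lands in different places: "high" is the top rung on one ladder and only two-thirds of the way up the other, which is exactly the "goes to 11" problem.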


Wait till you hear about how we standardized RF bands. We have gems such as "High Frequency", "Very High Frequency", "Ultra High Frequency", "Super High Frequency", and the cherry on top, "Extremely High Frequency". Then they went with the boring "Terahertz Frequency", truly a disappointment.

These are all mirrored on the low side btw, so we also have "Extremely Low Frequency", and all the others.


I hear you (see what I did there?)

What makes this even more complicated is that multiple models use these terms. Does "high" effort mean the same thing in Claude and GPT?


It’s the punchline at the very end of the article. They ended up with a different SaaS vendor.


Yeah I read through it but all of that is surface level. Any real insider info?

Not sure why I was downvoted. I read the post and the linked articles.


There is something uncanny about the bandwidth and quality of all the artifacts coming from this mission.

I've subsisted on photos from the Apollo missions and artistic renditions for so long that seeing the modern, high-resolution real thing is quite stirring in a way I didn't expect. It actually does make me believe that the future could be quite cool.


We haven't even seen the full-quality images yet. They've commented that the live feed from the GoPro has limited bandwidth because it has to share bandwidth with running the capsule. The images from the Nikons onboard are just scaled down; my guess is they were exported specifically as an early dump to give everyone on the ground, champing at the bit, something to see. They'll get the full images when the SD cards splash down. When those are released, I'm expecting quite a few OMG images.


I wouldn't mind some raw files, but I honestly don't think they'll be too strikingly different from these (make sure you're looking at the full 20 MP images, which should be several MB, not the 2 MP previews at ~200 KB).


I don't know what the Lightroom* skillz of the astronauts are, but I would not be surprised if they were shooting RAW+JPEG and only processed the JPEGs in Lightroom. They probably had export presets for smaller images, created months ago and loaded onto their PCDs. I'd imagine 4 humans in a tin can have more things to do than develop RAW images by digging detail out of the shadows, pushing the exposure, pulling back the highlights, and then applying all of those settings to each sequence of images. They'll let the folks on the ground do that.

* The exif data has Adobe Lightroom Classic (Windows) metadata in it.


In that case with the metadata I wonder if the astronauts already sent the raw files over the laser link and the images were just processed by the ground staff for posting on the site.


The raw files have a ton more dynamic range, however. You could pull out a lot more detail in shadows.
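The shadow-recovery advantage comes down to bit depth: a 12- or 14-bit raw file has many more distinct tonal steps near black than an 8-bit JPEG, so pushing the exposure reveals detail instead of posterized bands. A simplified illustration (it ignores gamma encoding and sensor noise, which matter in real files):

```python
def quantize(value: float, bits: int) -> int:
    """Map a linear tone in [0, 1] to the nearest code at a given bit depth."""
    levels = (1 << bits) - 1
    return round(value * levels)

# Three slightly different deep-shadow tones, as a sensor might record them.
shadows = [0.0030, 0.0034, 0.0038]

jpeg_codes = [quantize(v, 8) for v in shadows]   # 8-bit, JPEG-like
raw_codes = [quantize(v, 12) for v in shadows]   # 12-bit, raw-like
```

At 8 bits all three tones collapse into a single code value, so no amount of shadow lifting can separate them again; at 12 bits they stay distinct and can be pushed apart in post.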


That’s really exciting!


> something uncanny about the bandwidth and quality of all the artifacts coming from this mission

Back in 2019, Robert Zubrin suggested using rovers "to do detailed photography of the [Moon] base area and its surroundings" to "ultimately form the basis of a virtual reality experience that will allow millions of members of the public to participate vicariously in the missions" [1].

[1] https://spacenews.com/op-ed-lunar-gateway-or-moon-direct/


I cannot wait until we get 4k video of people walking on the surface, kicking up dust.


The existing 16mm film from Apollo is roughly equivalent to 2K, and you can see dust kicked up pretty nicely!


Where can we see that in high quality? Generally if I look at YouTube it's compressed and poor quality.


Perhaps try the clips available at the "Video and 16-mm Galleries" from the Apollo Lunar Surface Journal:

https://apollojournals.org/alsj/alsj-video.html

On the other hand, maybe don't get your hopes up--I've only tried a few, but even the large MPG files don't seem to be "super high quality," but maybe they will meet your expectations.


They never shot 16mm film on the moon. They had weird tv cameras and took photos in 35mm.


Sure they did. Here’s some footage:

https://youtu.be/7o3Oi9JWsyM


Oh that's what the silent footage was! So sorry!


Yeah, I think we got so accustomed to that analog look that seeing them like this feels almost like viewing a World War I photo in full color and 4K.


I agree 100%. Seeing the picture of the backside of the moon with the earth in view really drove home that the moon really is just a large rock.


> backside of the moon

I think perhaps you mean the far side of the moon. The "backside" of the moon implies a large graben stretching almost from pole to pole, and I have seen no evidence of such a geological formation in any photos.


thanks, I audibly laughed


> the moon really is just a large rock

It really is surprising to be able to see that the Moon isn't spherical. (Or are those aberrations?) It makes sense, given the Moon isn't in hydrostatic equilibrium.


Space monkeys, moon pirates, and a Starbucks in the moon mall.


What? The Apollo photos were with extremely high quality cameras on film. They're incredibly high resolution.


I'm getting so exhausted of the "slop" accusation on new project launches. There are legit criticisms of EmDash in the parent comment that are overshadowed by the implication it was AI coded and, thus, unusable quality.

The problem is there's no beating the slop allegation. There's no "proof of work" that can be demonstrated in this comment section that satisfies anyone, which you can see if you just keep following the entire chain. I'd rather read slop comments than this.

The main engineer of this project is in the comments and all he's being engaged with on is the definition of vibes.


They called the project EmDash and launched it on April 1st with a blog post that brags about how little effort it took to write because of agents, before even saying what it is.

If the product launch involves dressing the engineering team up in duck suits and releasing to a soundtrack of quacking, it's really not surprising that people are asking the guy they hid behind the Daffy mask why he's dressed as a duck, rather than what he learned about headless CMS architecture from being on the Astro core team...


I know that it's discourteous to write-off a potentially valuable project because the release post showed a lack of self-awareness, but I think it's indicative of the larger struggle taking place: that trust is decaying.

It's decaying for a lot of the reasons displayed in the post, like you described, but the post also:

  - is overlong (probably LLM assisted)
  - is self-congratulatory
  - boosts AI
  - rewrites an existing project (vs contributing to the original)
  - conjures long-term maintenance doubt/suspicions
  - is functionally an advertisement (for CloudFlare)
So yeah, maybe EmDash is revolutionary with respect to Wordpress, but it hasn't signaled trust, and that's a difficult hurdle to get past.


This is a great point. I wish we started from this.


There's plenty of other comments saying this. It isn't that I don't understand, and need a clever metaphor.

But to run with your metaphor, can we, maybe, just ignore the quacking since we all know that's just how you get attention these days and instead focus on that other stuff? Because it seems like asking about the duck mask will never produce a satisfactory answer and instead turn into a debate on the merits of ducks.

Dare I suggest that this debate has become boring and beside the point. Unless someone on HN has been living under a rock they've already made up their mind about ducks.


Obtuse and repetitive debates are what HN comments are for. :)

But in this case it feels less like somebody has launched a revolutionary new product and HN is debating the MIT licence and landing page weight, and more like somebody has announced they've a plug-in replacement for a popular repository with a troll post and HN chooses not to spend enough time on Github to discover the all-star team and excellent architectural decisions the blog didn't bother mentioning.

Plus Cloudflare deliberately signalling that at best they're not very invested in its success and it might well just be low-effort slop probably is more pertinent to whether a purported WordPress replacement actually gains any traction than its technical merit, and headless CMS with vendor lockin vs managing WordPress security isn't likely to be a more productive debate than one on "slop". The target audience for this product is much more 'HN crowd' than 'read about agentic solutions to workforce automation on Gartner crowd' too, so the quacking alienating HN is actually relevant.


> Obtuse and repetitive debates is what HN comments are for. :)

Fair


wait, this was not an april 1st joke?

how is anyone supposed to trust Cloudflare to build a reliable CMS after its reputation with the outages and mass layoffs?


I am not implying unusuablilty due to AI involvement.

I am implying that Cloudflare is publishing unusable one-off software without care, because they have done it before and the blog post indicates they are doing it again ("look how CHEAP it is to pump out code now").

I don’t need a proof of work, I need a proof of quality, and the blog post is the opposite of that.


This feels like a great example of a project that wouldn't exist if not for AI coding.


I am not Nick, but there are a few ways that world happens: the free tier goes away and what people pay more correctly reflects what they use, this all becomes cheap enough that it doesn't matter, or we come up with an end-to-end method of determining whether usage is triggered by a person.

Another way is to just do better isolation as a user. That's probably your best shot without hoping these companies change policies.


This is so disingenuous. You literally clipped the full sentence that changes the context significantly.

> "Once I’ve proven to myself that rendering was feasible, I used Claude to create an approximate version of the game loop in JavaScript based on the original DOOM source, which to me is the least interesting part of the project"

This post is about whether you can render Doom in CSS not whether Claude can replicate Doom gameplay. I doubt the author even bothered to give the game loop much QA.

