This seems like a great time to mention C2PA, a specification for positively affirming image sources. OpenAI participates in this, and if I load an image I had AI generate in a C2PA Viewer it shows ChatGPT as the source.
Bad actors can strip sources out so it's a normal image (that's why it's positive affirmation), but eventually we should start flagging images with no source attribution as dangerous the way we flag non-https.
The standard itself being open is irrelevant, and I'm not sure why openness is always brought up for attestation standards. If the signing software or hardware is open, anyone can extract or reimplement the signing step, so it is fundamentally impossible to trust its signature; a signature from open-source software is essentially the same as no signature.
So now, if we were to start marking all images that do not have a signature as "dangerous", you would have effectively created an enforcement mechanism in which the whole pipeline, from taking a photo to editing to publishing, can only be done with proprietary software and hardware.
We already have a centrally curated trust model in https. Browsers only treat connections as "secure" if they chain up to a root CA in their trust store. You can operate outside that system, but users will see warnings and friction. Some level of trust concentration isn’t new.
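That same centrally curated model is baked into standard TLS clients, not just browsers. As a rough illustration (a sketch, not a full handshake), Python's stdlib defaults to exactly this behavior:

```python
import ssl

# ssl.create_default_context() mirrors the browser model: it loads the
# OS-curated root CA store and rejects chains that don't end in it.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: untrusted chains fail the handshake
print(ctx.check_hostname)                    # True: the cert must also match the hostname
```

You can opt out by loosening those settings, just as you can click through a browser warning, but the default path runs through the curated trust store.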
I'm curious whether you think this is worse than, or not as bad as, a best-case broad implementation of C2PA, especially if there is a similar Let's Encrypt-style entity assisting with signatures.
I think the issue is that it's not just bad actors. It's every social platform that strips out metadata. If I post an image on Instagram, Facebook, or anywhere else, they're going to strip the metadata for my privacy. Sometimes the exif data has geo coordinates. Other times it's less private data like the file name, file create/access/modification times, and the kind of device it was taken on (like iPhone 16 Pro Max).
Usually, they strip out everything and that's likely to include C2PA unless they start whitelisting that to be kept or even using it to flag images on their site as AI.
But for now, it's not just bad actors stripping out metadata. It's most sites that images are posted on.
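For anyone curious what that stripping looks like in practice, here's a minimal sketch using Pillow (file names and tag values are made up). Re-encoding the pixels and saving without carrying the metadata over drops everything; C2PA manifests live in their own metadata boxes rather than EXIF, but a plain re-encode discards those just the same.

```python
from PIL import Image

# Create a small JPEG carrying some typical EXIF fields (values made up).
img = Image.new("RGB", (8, 8), "red")
exif = img.getexif()
exif[0x010F] = "Apple"               # 0x010F = Make
exif[0x0110] = "iPhone 16 Pro Max"   # 0x0110 = Model
img.save("original.jpg", exif=exif)

# What platforms effectively do: re-encode and save without the metadata.
with Image.open("original.jpg") as im:
    im.save("stripped.jpg")  # no exif= argument, so the metadata is gone

print(len(Image.open("original.jpg").getexif()))  # tags present
print(len(Image.open("stripped.jpg").getexif()))  # 0
```

The upload pipelines do more than this (resizing, recompression), but the metadata outcome is the same.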
There’s actually a part of the NY state budget right now (TEDE part X, for my law nerds) that’d require social media companies to preserve non-PII provenance metadata and surface it to the user, if the uploaded image has it.
Yeah, OpenAI has been attaching C2PA manifests to all their generated images from the very beginning. Also, based on a small evaluation I ran, modern ML-based AI-generated-image detectors like OmniAID[1] seem to do quite well at detecting GPT-Image-2 generated images. I use both in an on-device AI-generated-image detector that I built.
The comments that aren't directly discussing the technical achievement here are bemoaning the destruction to society that AI generated images can cause, which is a fair criticism. I'm genuinely curious what you think the greater horror is. Or what a better solution might be.
Reddit blurs NSFW images by default. You can change that in settings. I don't see what's so terrible about the idea of doing this with untrusted image sources.
I hope so too; however, cheap is relative. One person's ordinary morning coffee is a full day's wage for someone else. If we could have decent models that fit on the laptops of most students, that would be the point where we could treat AI the way we treat calculators or computers today.
Just to put things in context, https://www.bbc.com/news/articles/ce8444gex65o gives a sense of incomes for a good number of people nowadays. (Note that many of those workers are supporting a family of 2+ members most of the time.)
I remember a TI-89 being mandatory for my AP math classes (calculus and statistics). It was utterly essential for solving problems in a reasonable amount of time. There were programs available to assist families who couldn't afford one so their children wouldn't be left behind.
AI in its current phase, definitely. However, we've been seeing the transformer architecture plateau over the last couple of years. There are still improvements, but open-source models are catching up.
I feel like at this point it’s an inevitability that given enough time, capable models will be cheap enough for everyone.
If poor students have capable models but rich students have much better models that go the extra mile for a great mark and do everything in a single prompt, it would still be unfair.
For it to be fair, you would not only need good free models, but actual parity between free models and the highest subscription tier the big AI companies can offer. And I don't think that will happen in the short or mid term future.
When I was in AP classes in high school, you were required to have a TI-89 calculator. If you couldn't afford one, there were assistance programs.
You were not allowed to use a TI-92, which was the next step up. It had built-in solvers for many kinds of problems.
I'm not saying this isn't a concern, but addressing financially-based inequities in learning is not a new problem within certain bounds. There's established ways to deal with it. If we can get AI cheap enough that you can cover a year of education with $100 then we're in a good range.
The problem is that social platforms benefit from this behavior as long as it doesn't get too egregious. Bots contribute to metrics just as easily as real humans, as long as investors and ad purchasers feel it's kept to manageable levels.
Nothing on social media is organic anymore, and hasn't been since long before AI came around, which is why I welcome the AI slop era. It will accelerate us to the endgame: acknowledging how bad the problem really is and starting to clean it up.
I have thought exactly as you do for a long time. Recently a side project of mine blew up, and it was completely organic. I'm just a solo dev. No marketing budget at all. No PR team.
Made me realize that it's still possible for things to organically get big.
Ambient variety, you know, almost static drone, very niche style per se. Never did anything to promote it in any way. Just released it via my friend's digital label on a handful of platforms.
Never had more than ~100 listens a month, and never expected that to change and earn any substantial royalties.
One day, the friend calls and tells me he's willing to pay me a pretty penny, and replies to my bewilderment that a single track from the whole album blew up, glitched the Matrix, and racked up some 10,000s of listens.
I investigated a little bit and found out that the track's title coincided with that of some other, much more popular and promoted band.
So I just happened to ride on those coattails.
Edit: removed extra zero in the number of listens :)
Wait till you hear about how we standardized RF bands. We have gems such as "High Frequency", "Very High Frequency", "Ultra High Frequency", "Super High Frequency", and the cherry on top, "Extremely High Frequency". Then they went with the boring "Terahertz Frequency", truly a disappointment.
These are all mirrored on the low side btw, so we also have "Extremely Low Frequency", and all the others.
There is something uncanny about the bandwidth and quality of all the artifacts coming from this mission.
I've subsisted on photos from the Apollo missions and artistic renditions for so long that seeing the modern, high-resolution real thing is quite stirring in a way I didn't expect. It actually does make me believe that the future could be quite cool.
We haven't even seen the full-quality images yet. They've commented that the live GoPro feed is bandwidth-limited because it has to share the link with running the capsule, and the images from the Nikons onboard are just scaled down. My guess is they were exported specifically as an early dump, to give everyone on the ground chomping at the bit something to see. They'll get the full images when the SD cards splash down, and when those are released I'm expecting quite a few OMG images.
I wouldn't mind some raw files but I honestly don't think they'll be too strikingly different than these (make sure you're looking at the full 20 MP images which should be several MB, not the 2 MP previews at ~200 KB).
I don't know what the Lightroom* skillz of the astronauts are, but I would not be surprised if they were shooting RAW+JPEG and only processed the JPEGs in Lightroom. They probably had export presets for smaller images that were created months ago and loaded onto their PCDs. I'd imagine 4 humans in a tin can have better things to do than develop RAW images by digging out the details in the shadows, pushing the exposure, pulling back the highlights, and then applying all of those settings to each sequence of images. They'll let the folks on the ground do that.
* The exif data has Adobe Lightroom Classic (Windows) metadata in it.
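If you want to check that field yourself, here's a quick sketch with Pillow. Since I can't attach the actual mission JPEGs, it writes the Software tag (standard EXIF tag 0x0131, the field the mission photos reportedly carry) into a made-up test file and reads it back; inspecting a downloaded image would just be the last three lines.

```python
from PIL import Image

# Simulate a file tagged the way the mission photos reportedly are:
# the Software field (EXIF tag 0x0131) names the processing tool.
img = Image.new("RGB", (8, 8))
exif = img.getexif()
exif[0x0131] = "Adobe Lightroom Classic (Windows)"
img.save("photo.jpg", exif=exif)

with Image.open("photo.jpg") as im:
    software = im.getexif().get(0x0131)
print(software)  # Adobe Lightroom Classic (Windows)
```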
Given that metadata, I wonder if the astronauts already sent the raw files over the laser link and the ground staff just processed the images for posting on the site.
> something uncanny about the bandwidth and quality of all the artifacts coming from this mission
Back in 2019, Robert Zubrin suggested using rovers "to do detailed photography of the [Moon] base area and its surroundings" to "ultimately form the basis of a virtual reality experience that will allow millions of members of the public to participate vicariously in the missions" [1].
On the other hand, maybe don't get your hopes up: I've only tried a few, but even the large MPG files don't seem to be "super high quality". Maybe they will meet your expectations, though.
I think perhaps you mean the far side of the moon. The "backside" of the moon implies a large graben stretching almost from pole to pole, and I have seen no evidence of such a geological formation in any photos.
It really is surprising being able to see that the Moon isn't spherical. (Or are those aberrations?) It makes sense, given the Moon isn't in hydrostatic equilibrium.
I'm getting so exhausted of the "slop" accusation on new project launches. There are legit criticisms of EmDash in the parent comment that are overshadowed by the implication it was AI coded and, thus, unusable quality.
The problem is there's no beating the slop allegation. There's no "proof of work" that can be demonstrated in this comment section that satisfies, which you can see if you just keep following the entire chain. I'd rather read slop comments than this.
The main engineer of this project is in the comments and all he's being engaged with on is the definition of vibes.
They called the project EmDash and launched it on April 1st with a blog which brags about how little effort it took to write because of agents before even saying what it is.
If the product launch involves dressing the engineering team up in duck suits and releasing to a soundtrack of quacking, it's really not surprising people are asking the guy they hid behind the Daffy mask on why he's dressed as a duck rather than what he learned about headless CMS architecture from being on the Astro core team...
I know it's discourteous to write off a potentially valuable project because the release post showed a lack of self-awareness, but I think it's indicative of the larger struggle taking place: trust is decaying.
It's decaying for a lot of the reasons displayed in the post, like you described, but the post also:
- is overlong (probably LLM assisted)
- is self-congratulatory
- boosts AI
- rewrites an existing project (vs contributing to the original)
- conjures long-term maintenance doubt/suspicions
- is functionally an advertisement (for CloudFlare)
So yeah, maybe EmDash is revolutionary with respect to Wordpress, but it hasn't signaled trust, and that's a difficult hurdle to get past.
There's plenty of other comments saying this. It isn't that I don't understand, and need a clever metaphor.
But to run with your metaphor, can we, maybe, just ignore the quacking since we all know that's just how you get attention these days and instead focus on that other stuff? Because it seems like asking about the duck mask will never produce a satisfactory answer and instead turn into a debate on the merits of ducks.
Dare I suggest that this debate has become boring and beside the point. Unless someone on HN has been living under a rock they've already made up their mind about ducks.
Obtuse and repetitive debates are what HN comments are for. :)
But in this case it feels less like somebody has launched a revolutionary new product and HN is debating the MIT licence and landing page weight, and more like somebody has announced they've a plug-in replacement for a popular repository with a troll post and HN chooses not to spend enough time on Github to discover the all-star team and excellent architectural decisions the blog didn't bother mentioning.
Plus Cloudflare deliberately signalling that at best they're not very invested in its success and it might well just be low-effort slop probably is more pertinent to whether a purported WordPress replacement actually gains any traction than its technical merit, and headless CMS with vendor lockin vs managing WordPress security isn't likely to be a more productive debate than one on "slop". The target audience for this product is much more 'HN crowd' than 'read about agentic solutions to workforce automation on Gartner crowd' too, so the quacking alienating HN is actually relevant.
I am not implying unusability due to AI involvement.
I am implying that Cloudflare is publishing unusable one-off software without care, because they have done it before and the blog post indicates they are doing it again ("look how CHEAP it is to pump out code now").
I don’t need a proof of work, I need a proof of quality, and the blog post is the opposite of that.
I am not Nick, but there are a few ways that world happens: the free tier goes away and what people pay more correctly reflects what they use, this all becomes cheap enough that it doesn't matter, or we come up with an end-to-end method of determining that usage is triggered by a person.
Another way is to just do better isolation as a user. That's probably your best shot without hoping these companies change policies.
This is so disingenuous. You literally clipped the full sentence that changes the context significantly.
> "Once I’ve proven to myself that rendering was feasible, I used Claude to create an approximate version of the game loop in JavaScript based on the original DOOM source, which to me is the least interesting part of the project"
This post is about whether you can render Doom in CSS not whether Claude can replicate Doom gameplay. I doubt the author even bothered to give the game loop much QA.