
I actually tried a few experiments in the early exploration stages! I trained a small classifier to judge AI vs non-AI images and used it as a reward model for small RL / post-training experiments. Sadly, it was not too successful. We found that directly finetuning the model on high-quality photorealistic images was the most reliable approach.
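A minimal sketch of that classifier-as-reward setup (the backbone, function names, and wiring below are illustrative assumptions, not the actual Krea pipeline):

    import torch
    from torchvision.models import resnet18

    # Illustrative reward model: a small binary "AI vs. real photo" classifier.
    # Any backbone works; resnet18 with a single logit output is just an example.
    reward_net = resnet18(num_classes=1)

    def realism_reward(images: torch.Tensor) -> torch.Tensor:
        # images: (B, 3, H, W) generated samples.
        # Returns a reward in [0, 1]; higher = "looks like a real photo".
        with torch.no_grad():
            logits = reward_net(images).squeeze(-1)
        return torch.sigmoid(logits)

    # In an RL / preference post-training loop, the generator would be updated to
    # maximize this reward (usually mixed with other objectives); as noted above,
    # that proved less reliable than direct finetuning.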

Another note about preference optimisation and RL is that it has a really high quality ceiling but needs to be very carefully tuned. It's easy to get perfect anatomy and structure if you decide to completely "collapse" the model. For instance, ChatGPT images are collapsed to have a slight yellow color palette. FLUX images always have this glossy, plastic texture with an overly blurry background. It's similar to the reward hacking behavior you see in LLMs where they sound overly nice and chatty.

I had to make a few compromises to balance between a "stable, collapsed, boring" model and an "unstable, diverse, explorative" one.


I could see how you might need a multi-channel classifier, so that one output (A) exists on a range from -1 = "this looks like AI" to 1 = "this does not look like AI", and another (R) goes from 1 = "the above factor is relevant to this image" to 0 = "the AI-ness of this image is not a meaningful concept".

Then optimise for max (Quality + A*R)

Arguably the amplitude of A should encode R, but I think AI-ness and AI-ness-relevance are distinct concepts (the factor could be highly relevant even when the classifier can't tell which way it should lean).
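A sketch of that objective, assuming the two classifier heads described above (names and ranges are hypothetical):

    import torch

    def combined_reward(quality: torch.Tensor,
                        a: torch.Tensor,  # A in [-1, 1]: -1 "looks like AI", +1 "does not look like AI"
                        r: torch.Tensor,  # R in [0, 1]: how meaningful "AI-ness" is for this image
                        ) -> torch.Tensor:
        # Optimise for max(Quality + A * R): the AI-ness score only contributes
        # to the extent that AI-ness is a relevant concept for the image.
        return quality + a * r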


We used two types of datasets for post-training: supervised finetuning data and preference data used for the RLHF stage. You can actually use fewer than 1M samples to significantly boost the aesthetics. Quality matters A LOT. Quantity helps with generalisation and stability of the checkpoints though.


How is the data collected?


The highest quality finetuning data was hand curated internally. I would say our post-training pipeline is quite similar to the SeedDream 2.0 ~ 3.0 series from ByteDance. Similar to them, we use extensive quality filters and internal models to get the highest quality possible. Even from there, we still hand-pick a curated subset.


We have not added a separate RTX-accelerated version for FLUX.1 Krea, but the model is fully compatible with the existing FLUX.1 dev codebase. I don't think we made a separate ONNX export for it though. Doing a 4~8 bit quantized version with SVDQuant would be a nice follow-up so that the checkpoint is friendlier for consumer-grade hardware.


Quick napkin math assuming bfloat16 format: 1B params * 16 bits = 16B bits = 2 GB. Since it's a 12B parameter model, you get around 24 GB. Downcasting from float32 to bfloat16 comes with pretty minimal performance degradation, so we uploaded the weights in bfloat16 format.
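Spelled out as a tiny script (the 8-bit and 4-bit rows correspond to the hypothetical quantized variants mentioned above, not released checkpoints):

    params = 12e9  # ~12B parameters
    for name, bits in [("float32", 32), ("bfloat16", 16), ("int8", 8), ("int4", 4)]:
        gb = params * bits / 8 / 1e9  # bits per param -> total bytes -> GB (decimal)
        print(f"{name:>8}: ~{gb:.0f} GB of weights")
    # float32 ~48 GB, bfloat16 ~24 GB, int8 ~12 GB, int4 ~6 GB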


I love owls. Photorealism was one of the focus areas for training because the "AI look" (e.g. plastic skin) was the biggest complaint about the FLUX.1 model series. Photorealism was achieved through careful curation of both the finetuning and preference datasets.


Thank you! Glad you find it helpful. The model is focused on photorealism, so it should be able to generate most realistic scenes. Although, I think using 3D engines would be more suitable for typical robotics training cases since they give you ground-truth data on objects, locations, etc.

One interesting use case would be if you are focusing on a robotics task that would require perception of realistic scenes.


Hi there, I'm Sangwu Lee, one of the researchers behind this model. I'm happy to answer any questions here.

---

I also commented in this other submission: https://news.ycombinator.com/item?id=44748056


Hello HackerNews. My name is Sangwu Lee. I work for Krea and I led the research efforts around the post-training for this model. I'll try to answer any questions you may have, but I recommend you read the technical report I wrote on our site (https://www.krea.ai/blog/flux-krea-open-source-release).

I also see that my colleagues have already commented here, but I'll try to answer any questions as well.


The model looks incredible!

Regarding this part: > Since flux-dev-raw is a guidance distilled model, we devise a custom loss to finetune the model directly on a classifier-free guided distribution.

Could you go into more detail on the loss used for this, and share any other tips for finetuning it? I remember the general open-source AI art community had a hard time finetuning the original distilled flux-dev, so I'm very curious about that.


Best you comment on the bigger discussion (106 points, 41 comments) https://news.ycombinator.com/item?id=44745555


Hi! I'm the lead researcher on Krea-1. FLUX.1 Krea is a 12B rectified flow model distilled from Krea-1, designed to be compatible with the FLUX architecture. Happy to answer any technical questions :)


The model looks incredible!

Regarding this part: > Since flux-dev-raw is a guidance distilled model, we devise a custom loss to finetune the model directly on a classifier-free guided distribution.

Could you go into more detail on the specific loss used for this, and share any other tips for finetuning it that you might have? I remember the general open-source AI art community had a hard time finetuning the original distilled flux-dev, so I'm very curious about that.


I come from a traditional media production background, where media is produced in separate layers that are composited together to create a final deliverable still image, motion clip, and/or audio clip. This type of media production, creating elements that are then combined, is an essential aspect of expense management and quality control. Current AI image, video and audio generation methods do not support any of that. ForgeUI did briefly, but that went away, which I suspect is because few understand large-scale media production requirements.

I guess my point being: do you have any (real) experienced media production people working with you? People with experience in actual feature film VFX, animated commercials, and multi-million dollar budget productions?

If you really want to make your efforts a wild success, simply support traditional media production. None of the other AI image/video/audio providers seem to understand this, and it is gargantuan: if your tools plugged into traditional media production, they would be adopted immediately. Currently, adoption is tentative at best because the tools do not integrate with production workflows or expectations at all.


I recently ran a training experiment using the same dataset, number of steps, and epochs on both Flux Dev and Flux Krea models.

What stood out to me was that Flux Dev followed the text prompts more accurately, whereas Krea's generations were more loosely aligned or "off" in terms of prompt fidelity, with deformations in body types and architecture.

Does this suggest that Flux Krea requires more training to achieve strong text-to-image alignment compared to Flux Dev? Or is it possible that Krea is optimized differently (e.g. for style, detail, or artistic variation rather than strict prompt adherence)?

Curious if anyone else has experienced this or has any insight into the differences between these two. Would love to hear your thoughts.


thanks for doing this!

What does "designed to be compatible with FLUX architecture" mean, and why is that important?


FLUX.1 is one of the most popular open-weights text-to-image models. We distilled Krea-1 into the FLUX.1 [dev] model so that the community can adopt it seamlessly into the existing ecosystem. Any finetuning code, workflows, etc. built on top of FLUX.1 [dev] can be reused with our model :)
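As an illustration of what drop-in compatibility means in practice, a minimal diffusers sketch; the repo id below is an assumption, so check the official release page for the actual one:

    import torch
    from diffusers import FluxPipeline

    # Same pipeline class as FLUX.1 [dev]; only the checkpoint id changes.
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-Krea-dev",  # assumed repo id
        torch_dtype=torch.bfloat16,
    )
    pipe.enable_model_cpu_offload()  # helps fit on consumer GPUs

    image = pipe(
        "a barn owl perched on a mossy branch at golden hour",
        num_inference_steps=28,
        guidance_scale=4.5,
    ).images[0]
    image.save("owl.png")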


do LoRAs conflict with your distillation?


The architecture is the same, so we found that some LoRAs work out of the box, but some don't. In those cases, I would expect people to re-run their LoRA finetuning with the trainer they've used.
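Continuing the pipeline sketch above, reusing an existing FLUX.1 [dev] LoRA with diffusers would look roughly like this (the repo id and file name are placeholders):

    # Works when the LoRA targets the same FLUX architecture;
    # output quality still needs to be checked per LoRA.
    pipe.load_lora_weights(
        "some-user/my-flux-dev-lora",       # placeholder repo id
        weight_name="my_lora.safetensors",  # placeholder file name
    )
    image = pipe("portrait photo in the LoRA's style").images[0]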

