...

Centerfold Flux 5 + Krea + Chroma

https://huggingface.co/Tiwaz/CenKreChro_V2_3



Code Block
You can use as few as 12 steps, though 30 is preferred.
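For reference, a minimal generation sketch following the step guidance above. The repo ID comes from the Hugging Face link; the generic `DiffusionPipeline` loader, the example prompt, and the `xpu` device are assumptions (swap in `"cuda"` or `"cpu"` for other hardware), not an official recipe for this merge.

```python
MIN_STEPS, PREFERRED_STEPS = 12, 30  # from the note above: 12 works, 30 preferred

def generate(prompt: str, steps: int = PREFERRED_STEPS):
    """One-image generation sketch for the merged checkpoint.

    Requires torch and diffusers; pipeline class and device are assumptions.
    """
    import torch
    from diffusers import DiffusionPipeline

    # Load the merged checkpoint from the repo linked above.
    # bfloat16 matches the dtype reported in the system info.
    pipe = DiffusionPipeline.from_pretrained(
        "Tiwaz/CenKreChro_V2_3", torch_dtype=torch.bfloat16
    )
    pipe.to("xpu")  # IPEX setup as in the system info; use "cuda"/"cpu" elsewhere

    # Never go below the workable minimum of 12 steps.
    return pipe(prompt, num_inference_steps=max(steps, MIN_STEPS)).images[0]

if __name__ == "__main__":
    generate("a magazine cover, studio lighting").save("cover.png")  # example prompt (assumption)
```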

...

Test 4 - Other model covers



v2.3 examples


System info


Code Block
Sat Oct 25 12:53:29 2025
app: sdnext.git updated: 2025-10-26 hash: 59174fe8c url: https://github.com/liutyi/sdnext.git/tree/ipex
arch: x86_64 cpu: x86_64 system: Linux release: 6.14.0-33-generic
python: 3.12.3 torch: 2.7.1+xpu
device: Intel(R) Arc(TM) Graphics (1) ipex: 2.7.10+xpu
ram: free:117.99 used:7.34 total:125.33
gpu: free:83.55 used:33.83 total:117.37  gpu-active: current:31.44 peak:31.44 gpu-allocated: current:31.44 peak:31.44 gpu-reserved: current:33.83 peak:33.83 gpu-inactive: current:0.01 peak:0.01
events: retries:0 oom:0 utilization: 0
xformers: diffusers: 0.36.0.dev0 transformers: 4.57.1
active: xpu dtype: torch.bfloat16 vae: torch.bfloat16 unet: torch.bfloat16
base: Tiwaz/CenKreChro refiner: none vae: none te: none unet: none
Backend: ipex Pipeline: native Memory optimization: none Cross-attention: Scaled-Dot-Product
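To confirm that PyTorch sees the Intel XPU device reported in the log above (`active: xpu`, Torch 2.7.1+xpu), a small check can be run; the helper name is hypothetical, and `torch.xpu` assumes PyTorch 2.4 or newer.

```python
def xpu_summary() -> str:
    """Report Intel XPU visibility, mirroring the 'active: xpu' line above."""
    try:
        import torch
    except ImportError:
        return "torch not installed"
    # torch.xpu is the Intel GPU backend namespace in recent PyTorch builds.
    if not hasattr(torch, "xpu") or not torch.xpu.is_available():
        return "no XPU device available"
    return f"{torch.xpu.device_count()} XPU device(s): {torch.xpu.get_device_name(0)}"

print(xpu_summary())
```

On the machine in the log this should name the Intel(R) Arc(TM) Graphics device; elsewhere it degrades to a plain status string.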

...