Model Info and Links
https://huggingface.co/black-forest-labs/FLUX.2-klein-4B
```python
import torch
from diffusers import Flux2KleinPipeline

device = "cuda"
dtype = torch.bfloat16

pipe = Flux2KleinPipeline.from_pretrained("black-forest-labs/FLUX.2-klein-4B", torch_dtype=dtype)
pipe.enable_model_cpu_offload()  # save some VRAM by offloading the model to CPU

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=1.0,
    num_inference_steps=4,
    generator=torch.Generator(device=device).manual_seed(0),
).images[0]
image.save("flux-klein.png")
```
Test 0 - Seed and guidance
...
Test 4 - Other model covers
Test 5 - Other prompts
...
| seed:1 | seed:2 | seed:3 | seed:4 | seed:5 |
|---|---|---|---|---|
| seed:6 | seed:7 | seed:8 | seed:9 | seed:10 |
| seed:21 | seed:42 | seed:68 | seed:324 | seed:2026 |
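The seed grid above is reproducible because the pipeline's initial noise is drawn from `torch.Generator.manual_seed`: the same seed always yields the same starting latents, so each (seed, prompt) cell can be regenerated exactly. A minimal sketch of that mechanism, assuming CPU tensors and an illustrative latent shape (the helper name and shape are hypothetical, not part of the pipeline API):

```python
import torch

def initial_latents(seed: int, shape=(1, 16, 64, 64)) -> torch.Tensor:
    # Hypothetical helper: draw the initial noise the sampler would start
    # from, using a dedicated generator seeded with `seed`. The shape is
    # illustrative only.
    gen = torch.Generator().manual_seed(seed)
    return torch.randn(shape, generator=gen)

# Same seed -> identical latents; a different seed -> different latents,
# which is why each cell in the seed grid produces a distinct image.
a = initial_latents(42)
b = initial_latents(42)
c = initial_latents(2026)
```

With `guidance_scale=1.0` and `num_inference_steps=4` held fixed as in the snippet above, varying only the seed isolates the effect of the initial noise on the output.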
System Info
```
Mon Feb 2 07:12:09 2026
app: sdnext.git updated: 2026-01-31 hash: 12d4a059b tag: tags: url: https://github.com/liutyi/sdnext/tree/pytorch
arch: x86_64 cpu: x86_64 system: Linux release: 6.14.0-37-generic
python: 3.12.3 torch: 2.10.0+xpu
device: Intel(R) Arc(TM) Graphics (1) ipex:
ram: free:50.0 used:12.33 total:62.33
xformers: diffusers: 0.37.0.dev0 transformers: 4.57.5
active: xpu dtype: torch.bfloat16 vae: torch.bfloat16 unet: torch.bfloat16
base: Diffusers/black-forest-labs/FLUX.2-klein-4B [5e67da950f] refiner: none vae: none te: none unet: none
backend: ipex pipeline: native memory optimization: none cross-attention: Scaled-Dot-Product
```
...