Model Info and links

https://huggingface.co/black-forest-labs/FLUX.2-klein-4B

Code Block
import torch
from diffusers import Flux2KleinPipeline

device = "cuda"
dtype = torch.bfloat16

pipe = Flux2KleinPipeline.from_pretrained("black-forest-labs/FLUX.2-klein-4B", torch_dtype=dtype)
pipe.enable_model_cpu_offload()  # save some VRAM by offloading the model to CPU

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=1.0,
    num_inference_steps=4,
    generator=torch.Generator(device=device).manual_seed(0)
).images[0]
image.save("flux-klein.png")


Test 0 - Seed and guidance

...

Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling



Seeds: 2, 4, 8, 16, 32; CFG 1 (one image per seed)
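Sweeps like the one above can be scripted rather than run image by image. A minimal sketch in plain Python, assuming the `pipe`, `prompt`, and `device` variables from the code block at the top; `sweep_params` is a hypothetical helper, not part of diffusers:

```python
from itertools import product

# Sweep values taken from the test grid above: seeds 2..32 at CFG 1.
seeds = [2, 4, 8, 16, 32]
guidance_scales = [1.0]

def sweep_params(seeds, guidance_scales):
    """Yield every (seed, guidance_scale) pair for a reproducible grid sweep."""
    yield from product(seeds, guidance_scales)

# Hypothetical usage with the pipeline loaded earlier:
# for seed, cfg in sweep_params(seeds, guidance_scales):
#     image = pipe(prompt, guidance_scale=cfg, num_inference_steps=4,
#                  generator=torch.Generator(device=device).manual_seed(seed)).images[0]
#     image.save(f"flux-klein-seed{seed}-cfg{cfg}.png")
```

Fixing the seed per cell is what makes the columns of the grid comparable across CFG rows.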

Test 2 - Face and hands

Prompt: Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.

Values: 2, 4, 6, 8, 10, 12; CFG 1 (one image per value)

Test 3 - Legs

Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.


Values: 2, 4, 6, 8, 10, 12; CFG 1 (one image per value)

Test 4 - Other model covers


Test 5 - Other prompts

...

seed: 1, 2, 3, 4, 5

seed: 6, 7, 8, 9, 10

seed: 21, 42, 68, 324, 2026


System Info

Code Block
Mon Feb  2 07:12:09 2026
app: sdnext.git updated: 2026-01-31 hash: 12d4a059b tag: tags: url: https://github.com/liutyi/sdnext/tree/pytorch
arch: x86_64 cpu: x86_64 system: Linux release: 6.14.0-37-generic 
python: 3.12.3 Torch 2.10.0+xpu
device: Intel(R) Arc(TM) Graphics (1) ipex: 
ram: free:50.0 used:12.33 total:62.33
xformers: diffusers: 0.37.0.dev0 transformers: 4.57.5
active: xpu dtype: torch.bfloat16 vae: torch.bfloat16 unet: torch.bfloat16
base: Diffusers/black-forest-labs/FLUX.2-klein-4B [5e67da950f] refiner: none vae: none te: none unet: none
Backend: ipex Pipeline: native Memory optimization: none Cross-attention: Scaled-Dot-Product
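The system info above shows a Torch XPU build (2.10.0+xpu) running on an Intel Arc GPU, while the quick-start block at the top hard-codes `device = "cuda"`. A minimal sketch of portable device selection; `pick_device` is a hypothetical helper, and it assumes that XPU-enabled PyTorch builds expose `torch.xpu` (true for recent releases):

```python
import importlib.util

def pick_device():
    """Return the best available torch device string: xpu, cuda, or cpu."""
    if importlib.util.find_spec("torch") is None:
        return "cpu"  # no PyTorch installed at all
    import torch
    # XPU-enabled builds (like the 2.10.0+xpu in the log above) expose torch.xpu
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    if torch.cuda.is_available():
        return "cuda"
    return "cpu"

device = pick_device()
```

The same string can then be passed to `torch.Generator(device=device)` in the sweep loop.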

...



Model metadata

Diffusers/black-forest-labs/FLUX.2-klein-4B [5e67da950f]

Module | Class | Device | Dtype | Quant | Params | Modules | Config
vae | AutoencoderKLFlux2 | xpu:0 | torch.bfloat16 | None | 84046115 | 244

FrozenDict({'in_channels': 3, 'out_channels': 3, 'down_block_types': ['DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D'], 'up_block_types': ['UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D'], 'block_out_channels': [128, 256, 512, 512], 'layers_per_block': 2, 'act_fn': 'silu', 'latent_channels': 32, 'norm_num_groups': 32, 'sample_size': 1024, 'force_upcast': True, 'use_quant_conv': True, 'use_post_quant_conv': True, 'mid_block_add_attention': True, 'batch_norm_eps': 0.0001, 'batch_norm_momentum': 0.1, 'patch_size': [2, 2], '_class_name': 'AutoencoderKLFlux2', '_diffusers_version': '0.37.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--black-forest-labs--FLUX.2-klein-4B/snapshots/5e67da950fce4a097bc150c22958a05716994cea/vae'})

text_encoder | Qwen3ForCausalLM | xpu:0 | torch.bfloat16 | None | 4022468096 | 547

Qwen3Config { "architectures": [ "Qwen3ForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 151643, "dtype": "bfloat16", "eos_token_id": 151645, "head_dim": 128, "hidden_act": "silu", "hidden_size": 2560, "initializer_range": 0.02, "intermediate_size": 9728, "layer_types": [ "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention" ], "max_position_embeddings": 40960, "max_window_layers": 36, "model_type": "qwen3", "num_attention_heads": 32, "num_hidden_layers": 36, "num_key_value_heads": 8, "rms_norm_eps": 1e-06, "rope_scaling": null, "rope_theta": 1000000, "sliding_window": null, "tie_word_embeddings": true, "transformers_version": "4.57.5", "use_cache": true, "use_sliding_window": false, "vocab_size": 151936 }

tokenizer | Qwen2TokenizerFast | None | None | None | 0 | 0

None

scheduler | FlowMatchEulerDiscreteScheduler | None | None | None | 0 | 0

FrozenDict({'num_train_timesteps': 1000, 'shift': 3.0, 'use_dynamic_shifting': True, 'base_shift': 0.5, 'max_shift': 1.15, 'base_image_seq_len': 256, 'max_image_seq_len': 4096, 'invert_sigmas': False, 'shift_terminal': None, 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False, 'time_shift_type': 'exponential', 'stochastic_sampling': False, '_class_name': 'FlowMatchEulerDiscreteScheduler', '_diffusers_version': '0.37.0.dev0'})

transformer | Flux2Transformer2DModel | xpu:0 | torch.bfloat16 | None | 3875544576 | 356

FrozenDict({'patch_size': 1, 'in_channels': 128, 'out_channels': None, 'num_layers': 5, 'num_single_layers': 20, 'attention_head_dim': 128, 'num_attention_heads': 24, 'joint_attention_dim': 7680, 'timestep_guidance_channels': 256, 'mlp_ratio': 3.0, 'axes_dims_rope': [32, 32, 32, 32], 'rope_theta': 2000, 'eps': 1e-06, 'guidance_embeds': False, '_class_name': 'Flux2Transformer2DModel', '_diffusers_version': '0.37.0.dev0', '_name_or_path': 'black-forest-labs/FLUX.2-klein-4B'})

is_distilled | bool | None | None | None | 0 | 0

None
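A per-module summary like the table above can be reproduced from a loaded pipeline. A minimal sketch, assuming `pipe` is the `Flux2KleinPipeline` from the code block at the top (`pipe.components` is the standard diffusers accessor; `summarize_component` is a hypothetical helper):

```python
def summarize_component(name, comp):
    """Build a (name, class, device, dtype, n_params) row for one pipeline component."""
    cls = type(comp).__name__
    if callable(getattr(comp, "parameters", None)):
        # Torch modules (vae, text_encoder, transformer) expose .parameters()
        n_params = sum(p.numel() for p in comp.parameters())
        first = next(iter(comp.parameters()), None)
        device = str(first.device) if first is not None else "None"
        dtype = str(first.dtype) if first is not None else "None"
    else:
        # Tokenizer, scheduler, plain flags: no learnable parameters
        n_params, device, dtype = 0, "None", "None"
    return (name, cls, device, dtype, n_params)

# Hypothetical usage with a loaded pipeline:
# for name, comp in pipe.components.items():
#     print(summarize_component(name, comp))
```

Rows with zero parameters (tokenizer, scheduler, `is_distilled`) come out as `0`, matching the table.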