
Model Info and links

Gated model: https://huggingface.co/black-forest-labs/FLUX.2-klein-base-9B (requires a Hugging Face login and access token)

import torch
from diffusers import Flux2KleinPipeline

device = "cuda"
dtype = torch.bfloat16

# Load the gated checkpoint in bfloat16 (requires prior Hugging Face authentication)
pipe = Flux2KleinPipeline.from_pretrained("black-forest-labs/FLUX.2-klein-base-9B", torch_dtype=dtype)
pipe.enable_model_cpu_offload()  # save some VRAM by offloading idle submodules to CPU

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    height=1024,
    width=1024,
    guidance_scale=4.0,
    num_inference_steps=50,
    generator=torch.Generator(device=device).manual_seed(0),  # fixed seed for reproducibility
).images[0]
image.save("flux-klein.png")
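The tests below sweep guidance scale and step count for each prompt. A minimal sketch of how such a grid can be enumerated (`sweep_grid` is a hypothetical helper, not part of the diffusers API; each pair would feed one `pipe(...)` call like the one above):

```python
from itertools import product

# Values swept in Tests 1-3 below
cfg_values = [1, 2, 3, 4, 5, 6, 8]
step_values = [8, 16, 32, 64]

def sweep_grid(cfgs, steps):
    """Enumerate every (guidance_scale, num_inference_steps) combination."""
    return list(product(cfgs, steps))

grid = sweep_grid(cfg_values, step_values)
print(len(grid))  # 7 CFG values x 4 step counts = 28 images per prompt
```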


Test 0 - Seed and guidance

Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling

Prompt: Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.

Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.

Fixed CFG x and STEP xx; one column per seed (result images omitted):

| Prompt | Seed 1620085323 | Seed 1931701040 | Seed 4075624134 | Seed 2736029172 |
|---|---|---|---|---|
| Bookshop girl | | | | |
| Face and hand | | | | |
| Legs and shoes | | | | |

Test 1 - Bookstore

Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling



Grid over CFG {1, 2, 3, 4, 5, 6, 8} and steps {8, 16, 32, 64}; result images omitted.

Test 2 - Face and hands

Prompt: Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.


Grid over CFG {1, 2, 3, 4, 5, 6, 8} and steps {8, 16, 32, 64}; result images omitted.

Test 3 - Legs

Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.


Grid over CFG {1, 2, 3, 4, 5, 6, 8} and steps {8, 16, 32, 64}; result images omitted.

Test 4 - Other model covers

Test 5 - Other prompts

Test 6 - Optional find the cover


Test 7 - Empty prompts


Seeds tested (result images omitted): 1, 2, 3, 4, 5; 6, 7, 8, 9, 10; 21, 42, 68, 324, 2026
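Test 7 runs the same pipeline with an empty prompt across fixed seeds. A sketch of the loop (`output_name` is a hypothetical naming helper; the generation call is commented out since it needs the loaded `pipe` from above):

```python
# Seeds used in Test 7
seeds = list(range(1, 11)) + [21, 42, 68, 324, 2026]

def output_name(seed: int) -> str:
    """Hypothetical output filename for one empty-prompt run."""
    return f"flux-klein-empty-seed{seed}.png"

# for seed in seeds:
#     image = pipe("", generator=torch.Generator(device=device).manual_seed(seed)).images[0]
#     image.save(output_name(seed))
print(len(seeds))  # 15 runs
```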






System Info



App config



Model metadata


Each entry below lists: module, class, device, dtype, quantization, parameter count, module count, followed by its config.

vae: AutoencoderKLFlux2 (xpu:0, torch.bfloat16, quant None, 84,046,115 params, 244 modules)

FrozenDict({'in_channels': 3, 'out_channels': 3, 'down_block_types': ['DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D'], 'up_block_types': ['UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D'], 'block_out_channels': [128, 256, 512, 512], 'layers_per_block': 2, 'act_fn': 'silu', 'latent_channels': 32, 'norm_num_groups': 32, 'sample_size': 1024, 'force_upcast': True, 'use_quant_conv': True, 'use_post_quant_conv': True, 'mid_block_add_attention': True, 'batch_norm_eps': 0.0001, 'batch_norm_momentum': 0.1, 'patch_size': [2, 2], '_class_name': 'AutoencoderKLFlux2', '_diffusers_version': '0.37.0.dev0', '_name_or_path': 'models/Diffusers/models--black-forest-labs--FLUX.2-klein-base-9B/snapshots/17c3b160520b7dd44665dbf0b9ed9dd30c15cd06/vae'})
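The VAE config above implies the latent geometry the transformer sees: four down blocks give an 8x spatial downsample, and the [2, 2] patch_size folds each 2x2 latent patch into channels (32 x 4 = 128, matching the transformer's in_channels below). A quick check of this arithmetic, assuming a 1024x1024 input:

```python
def latent_shape(h, w, latent_channels=32, downsample=8, patch=2):
    """Channels/height/width of the patchified latent for an h x w image."""
    lh, lw = h // downsample, w // downsample          # VAE spatial compression
    return (latent_channels * patch * patch, lh // patch, lw // patch)

c, h, w = latent_shape(1024, 1024)
print((c, h, w), h * w)  # (128, 64, 64) -> 4096 image tokens
```

The 4096 token count also matches the scheduler's max_image_seq_len below.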

text_encoder: Qwen3ForCausalLM (xpu:0, torch.bfloat16, quant None, 8,190,735,360 params, 547 modules)

Qwen3Config { "architectures": [ "Qwen3ForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 151643, "dtype": "bfloat16", "eos_token_id": 151645, "head_dim": 128, "hidden_act": "silu", "hidden_size": 4096, "initializer_range": 0.02, "intermediate_size": 12288, "layer_types": [ "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention" ], "max_position_embeddings": 40960, "max_window_layers": 36, "model_type": "qwen3", "num_attention_heads": 32, "num_hidden_layers": 36, "num_key_value_heads": 8, "rms_norm_eps": 1e-06, "rope_scaling": null, "rope_theta": 1000000, "sliding_window": null, "tie_word_embeddings": false, "transformers_version": "4.57.5", "use_cache": true, "use_sliding_window": false, "vocab_size": 151936 }
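The 8,190,735,360 parameter count reported above can be reproduced exactly from this config (untied embeddings, grouped-query attention with 8 KV heads, RMSNorm weights only, no biases). A sketch of the count:

```python
def qwen3_params(vocab=151936, hidden=4096, layers=36, heads=32,
                 kv_heads=8, head_dim=128, intermediate=12288):
    """Count Qwen3 parameters from its config (no biases, untied embeddings)."""
    embed = vocab * hidden                      # input embedding
    lm_head = vocab * hidden                    # output head (tie_word_embeddings=False)
    attn = (hidden * heads * head_dim           # q_proj
            + 2 * hidden * kv_heads * head_dim  # k_proj + v_proj (GQA)
            + heads * head_dim * hidden         # o_proj
            + 2 * head_dim)                     # q_norm + k_norm (per-head RMSNorm)
    mlp = 3 * hidden * intermediate             # gate, up, down projections
    norms = 2 * hidden                          # input + post-attention RMSNorm
    return embed + lm_head + layers * (attn + mlp + norms) + hidden  # + final norm

print(qwen3_params())  # 8190735360 -- matches the Params column
```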

tokenizer: Qwen2TokenizerFast (device None, dtype None, quant None, 0 params, 0 modules)

None

scheduler: FlowMatchEulerDiscreteScheduler (device None, dtype None, quant None, 0 params, 0 modules)

FrozenDict({'num_train_timesteps': 1000, 'shift': 3.0, 'use_dynamic_shifting': True, 'base_shift': 0.5, 'max_shift': 1.15, 'base_image_seq_len': 256, 'max_image_seq_len': 4096, 'invert_sigmas': False, 'shift_terminal': None, 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False, 'time_shift_type': 'exponential', 'stochastic_sampling': False, '_class_name': 'FlowMatchEulerDiscreteScheduler', '_diffusers_version': '0.37.0.dev0'})
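With use_dynamic_shifting enabled, the timestep shift mu is interpolated linearly between base_shift and max_shift according to the image token count, and then applied to each sigma via the exponential time-shift form. A sketch of both formulas, consistent with the config values above (simplified from the diffusers implementation; an assumption, not a verbatim copy):

```python
import math

def calculate_shift(image_seq_len, base_seq=256, max_seq=4096,
                    base_shift=0.5, max_shift=1.15):
    """Linearly interpolate the shift mu over image sequence length."""
    m = (max_shift - base_shift) / (max_seq - base_seq)
    return image_seq_len * m + base_shift - base_seq * m

def shift_sigma(sigma, mu):
    """Exponential time shift of one sigma (time_shift_type='exponential')."""
    return math.exp(mu) / (math.exp(mu) + (1 / sigma - 1))

mu = calculate_shift(4096)  # 1024x1024 image -> 64x64 = 4096 tokens
print(round(mu, 2))  # 1.15
```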

transformer: Flux2Transformer2DModel (xpu:0, torch.bfloat16, quant None, 9,078,581,248 params, 482 modules)

FrozenDict({'patch_size': 1, 'in_channels': 128, 'out_channels': None, 'num_layers': 8, 'num_single_layers': 24, 'attention_head_dim': 128, 'num_attention_heads': 32, 'joint_attention_dim': 12288, 'timestep_guidance_channels': 256, 'mlp_ratio': 3.0, 'axes_dims_rope': [32, 32, 32, 32], 'rope_theta': 2000, 'eps': 1e-06, 'guidance_embeds': False, '_class_name': 'Flux2Transformer2DModel', '_diffusers_version': '0.37.0.dev0', '_name_or_path': 'black-forest-labs/FLUX.2-klein-base-9B'})

is_distilled: bool (device None, dtype None, quant None, 0 params, 0 modules)

None
