## Info
https://huggingface.co/black-forest-labs/FLUX.1-Krea-dev
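The tests below were generated from this checkpoint. A minimal loading sketch, assuming the standard diffusers `FluxPipeline` (the repo is gated, so authenticate with `huggingface-cli login` first):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Krea-dev",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # the system info below was captured on an Intel GPU ("xpu")
```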
## Test 0 - Different seed variations
| CFG 6, 32 steps | Seed 1620085323 | Seed 1931701040 | Seed 4075624134 | Seed 2736029172 |
|---|---|---|---|---|
| bookshop girl | | | | |
| hand and face | | | | |
| legs and shoes | | | | |
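A sketch of how this seed sweep can be reproduced with the pipeline loaded above; the mapping of the CFG column to diffusers' `guidance_scale` is an assumption, and the output file names are hypothetical:

```python
prompt = ("photorealistic girl in bookshop choosing the book "
          "in romantic stories shelf. smiling")

for seed in [1620085323, 1931701040, 4075624134, 2736029172]:
    image = pipe(
        prompt,
        guidance_scale=6.0,        # CFG 6 per the table header (assumed mapping)
        num_inference_steps=32,
        generator=torch.Generator("cpu").manual_seed(seed),
    ).images[0]
    image.save(f"test0_bookshop_{seed}.png")
```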
## Test 1 - Bookshop

**Prompt:** photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
| | 4 steps | 8 steps | 16 steps | 20 steps | 32 steps |
|---|---|---|---|---|---|
| CFG0 768px | | | | | |
| CFG1 512px | | | | | |
| CFG2 512px | | | | | |
| CFG3 512px | | | | | |
| CFG4 512px | | | | | |
| CFG6 512px | | | | | |
| CFG8 512px | | | | | |
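The same grid pattern drives Tests 1-3. A sketch of the CFG × steps sweep, reusing `pipe` and `prompt` from the snippets above; the fixed seed, the CFG-to-`guidance_scale` mapping, and the file naming are assumptions:

```python
import itertools

for cfg, steps in itertools.product([0, 1, 2, 3, 4, 6, 8], [4, 8, 16, 20, 32]):
    size = 768 if cfg == 0 else 512   # resolution per the row labels above
    image = pipe(
        prompt,
        guidance_scale=float(cfg),
        num_inference_steps=steps,
        height=size,
        width=size,
        generator=torch.Generator("cpu").manual_seed(1620085323),
    ).images[0]
    image.save(f"test1_cfg{cfg}_{steps}steps.png")
```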
## Test 2 - Face and hand

**Prompt:** Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.
| | 8 steps | 16 steps | 20 steps | 32 steps |
|---|---|---|---|---|
| CFG0 CFG1 768px | | | | |
| CFG1 512px | | | | |
| CFG2 512px | | | | |
| CFG3 512px | | | | |
| CFG4 512px | | | | |
| CFG6 512px | | | | |
| CFG8 512px | | | | |
## Test 3 - Legs
| | 8 steps | 16 steps | 20 steps | 32 steps |
|---|---|---|---|---|
| CFG0 CFG1 768px | | | | |
| CFG1 512px | | | | |
| CFG2 512px | | | | |
| CFG3 512px | | | | |
| CFG4 512px | | | | |
| CFG6 512px | | | | |
| CFG8 512px | | | | |
## System info

### Config

### Model info
| Module | Class | Device | DType | Params | Modules | Config |
|---|---|---|---|---|---|---|
vae | AutoencoderKLWan | xpu:0 | torch.bfloat16 | 126892531 | 260 | FrozenDict({'base_dim': 96, 'z_dim': 16, 'dim_mult': [1, 2, 4, 4], 'num_res_blocks': 2, 'attn_scales': [], 'temperal_downsample': [False, True, True], 'dropout': 0.0, 'latents_mean': [-0.7571, -0.7089, -0.9113, 0.1075, -0.1745, 0.9653, -0.1517, 1.5508, 0.4134, -0.0715, 0.5517, -0.3632, -0.1922, -0.9497, 0.2503, -0.2921], 'latents_std': [2.8184, 1.4541, 2.3275, 2.6558, 1.2196, 1.7708, 2.6052, 2.0743, 3.2687, 2.1526, 2.8652, 1.5579, 1.6382, 1.1253, 2.8251, 1.916], '_class_name': 'AutoencoderKLWan', '_diffusers_version': '0.33.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--Wan-AI--Wan2.1-T2V-14B-Diffusers/snapshots/38ec498cb3208fb688890f8cc7e94ede2cbd7f68/vae'}) |
text_encoder | UMT5EncoderModel | cpu | torch.bfloat16 | 5680910336 | 486 | UMT5Config { "architectures": [ "UMT5EncoderModel" ], "classifier_dropout": 0.0, "d_ff": 10240, "d_kv": 64, "d_model": 4096, "decoder_start_token_id": 0, "dense_act_fn": "gelu_new", "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "gated-gelu", "initializer_factor": 1.0, "is_encoder_decoder": true, "is_gated_act": true, "layer_norm_epsilon": 1e-06, "model_type": "umt5", "num_decoder_layers": 24, "num_heads": 64, "num_layers": 24, "output_past": true, "pad_token_id": 0, "relative_attention_max_distance": 128, "relative_attention_num_buckets": 32, "scalable_attention": true, "tie_word_embeddings": false, "tokenizer_class": "T5Tokenizer", "torch_dtype": "bfloat16", "transformers_version": "4.53.2", "use_cache": true, "vocab_size": 256384 } |
vae | AutoencoderKLWan | xpu:0 | torch.bfloat16 | 126892531 | 260 | FrozenDict({'base_dim': 96, 'decoder_base_dim': None, 'z_dim': 16, 'dim_mult': [1, 2, 4, 4], 'num_res_blocks': 2, 'attn_scales': [], 'temperal_downsample': [False, True, True], 'dropout': 0.0, 'latents_mean': [-0.7571, -0.7089, -0.9113, 0.1075, -0.1745, 0.9653, -0.1517, 1.5508, 0.4134, -0.0715, 0.5517, -0.3632, -0.1922, -0.9497, 0.2503, -0.2921], 'latents_std': [2.8184, 1.4541, 2.3275, 2.6558, 1.2196, 1.7708, 2.6052, 2.0743, 3.2687, 2.1526, 2.8652, 1.5579, 1.6382, 1.1253, 2.8251, 1.916], 'is_residual': False, 'in_channels': 3, 'out_channels': 3, 'patch_size': None, 'scale_factor_temporal': 4, 'scale_factor_spatial': 8, 'clip_output': True, '_use_default_values': ['patch_size', 'out_channels', 'scale_factor_spatial', 'is_residual', 'clip_output', 'in_channels', 'decoder_base_dim', 'scale_factor_temporal'], '_class_name': 'AutoencoderKLWan', '_diffusers_version': '0.35.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--Wan-AI--Wan2.2-T2V-A14B-Diffusers/snapshots/20fb953f43ad58be9a9614a89fde4653f4ae5947/vae'}) |
text_encoder | UMT5EncoderModel | xpu:0 | torch.bfloat16 | 5680910336 | 486 | UMT5Config { "architectures": [ "UMT5EncoderModel" ], "classifier_dropout": 0.0, "d_ff": 10240, "d_kv": 64, "d_model": 4096, "decoder_start_token_id": 0, "dense_act_fn": "gelu_new", "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "gated-gelu", "initializer_factor": 1.0, "is_encoder_decoder": true, "is_gated_act": true, "layer_norm_epsilon": 1e-06, "model_type": "umt5", "num_decoder_layers": 24, "num_heads": 64, "num_layers": 24, "output_past": true, "pad_token_id": 0, "relative_attention_max_distance": 128, "relative_attention_num_buckets": 32, "scalable_attention": true, "tie_word_embeddings": false, "tokenizer_class": "T5Tokenizer", "torch_dtype": "bfloat16", "transformers_version": "4.53.2", "use_cache": true, "vocab_size": 256384 } |
tokenizer | T5TokenizerFast | None | None | 0 | 0 | None |
transformer | WanTransformer3DModel | xpu:0 | torch.bfloat16 | 14288491584 | 1138 | FrozenDict({'patch_size': [1, 2, 2], 'num_attention_heads': 40, 'attention_head_dim': 128, 'in_channels': 16, 'out_channels': 16, 'text_dim': 4096, 'freq_dim': 256, 'ffn_dim': 13824, 'num_layers': 40, 'cross_attn_norm': True, 'qk_norm': 'rms_norm_across_heads', 'eps': 1e-06, 'image_dim': None, 'added_kv_proj_dim': None, 'rope_max_seq_len': 1024, 'pos_embed_seq_len': None, '_class_name': 'WanTransformer3DModel', '_diffusers_version': '0.35.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--Wan-AI--Wan2.2-T2V-A14B-Diffusers/snapshots/20fb953f43ad58be9a9614a89fde4653f4ae5947/transformer'}) |
scheduler | UniPCMultistepScheduler | None | None | 0 | 0 | FrozenDict({'num_train_timesteps': 1000, 'beta_start': 0.0001, 'beta_end': 0.02, 'beta_schedule': 'linear', 'trained_betas': None, 'solver_order': 2, 'prediction_type': 'flow_prediction', 'thresholding': False, 'dynamic_thresholding_ratio': 0.995, 'sample_max_value': 1.0, 'predict_x0': True, 'solver_type': 'bh2', 'lower_order_final': True, 'disable_corrector': [], 'solver_p': None, 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False, 'use_flow_sigmas': True, 'flow_shift': 3.0, 'timestep_spacing': 'linspace', 'steps_offset': 0, 'final_sigmas_type': 'zero', 'rescale_betas_zero_snr': False, 'use_dynamic_shifting': False, 'time_shift_type': 'exponential', '_class_name': 'UniPCMultistepScheduler', '_diffusers_version': '0.35.0.dev0'}) |
transformer_2 | WanTransformer3DModel | xpu:0 | torch.bfloat16 | 14288491584 | 1138 | FrozenDict({'patch_size': [1, 2, 2], 'num_attention_heads': 40, 'attention_head_dim': 128, 'in_channels': 16, 'out_channels': 16, 'text_dim': 4096, 'freq_dim': 256, 'ffn_dim': 13824, 'num_layers': 40, 'cross_attn_norm': True, 'qk_norm': 'rms_norm_across_heads', 'eps': 1e-06, 'image_dim': None, 'added_kv_proj_dim': None, 'rope_max_seq_len': 1024, 'pos_embed_seq_len': None, '_class_name': 'WanTransformer3DModel', '_diffusers_version': '0.35.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--Wan-AI--Wan2.2-T2V-A14B-Diffusers/snapshots/20fb953f43ad58be9a9614a89fde4653f4ae5947/transformer_2'}) |
boundary_ratio | float | None | None | 0 | 0 | None |
expand_timesteps | bool | None | None | 0 | 0 | None |
_name_or_path | str | None | None | 0 | 0 | None |
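For reference, a table like the one above can be assembled by walking the pipeline's components; `pipe.components` and the `torch.nn.Module` introspection calls are standard APIs, while the exact column set mirrors the table by assumption:

```python
def model_info(pipe):
    """Print one row per pipeline component: name, class, device, dtype,
    parameter count, module count, and config (None where not applicable)."""
    for name, comp in pipe.components.items():
        params = list(comp.parameters()) if hasattr(comp, "parameters") else []
        if params:
            device, dtype = params[0].device, params[0].dtype
            n_params = sum(p.numel() for p in params)
            n_modules = sum(1 for _ in comp.modules())
        else:  # tokenizers, schedulers, and plain config values carry no weights
            device = dtype = None
            n_params = n_modules = 0
        print(name, type(comp).__name__, device, dtype,
              n_params, n_modules, getattr(comp, "config", None))
```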