Info
https://huggingface.co/Wan-AI/Wan2.1-T2V-14B
Note:
If you are using the T2V-1.3B model, we recommend setting the parameter `--sample_guide_scale 6`. The `--sample_shift` parameter can be adjusted within the range of 8 to 12 based on performance.
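The `--sample_guide_scale` / `--sample_shift` flags come from the upstream Wan2.1 reference scripts. The tests below run the model through diffusers' `WanPipeline` (see the model info at the bottom), where the rough equivalents are `guidance_scale` and the scheduler's `flow_shift`. A minimal sketch, assuming plain diffusers rather than the SD.Next code path used for these tests:

```python
import torch
from diffusers import WanPipeline, UniPCMultistepScheduler

model_id = "Wan-AI/Wan2.1-T2V-1.3B-Diffusers"
pipe = WanPipeline.from_pretrained(model_id, torch_dtype=torch.bfloat16)
# flow_shift is the diffusers counterpart of --sample_shift; the scheduler config
# below uses 3.0, while the upstream note suggests 8-12 for the 1.3B model.
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config, flow_shift=3.0)
pipe.to("cuda")  # the tests below ran on Intel Arc ("xpu") instead

result = pipe(
    prompt="photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling",
    height=512, width=512,
    num_frames=1,              # a single frame gives txt2img-style output
    num_inference_steps=32,
    guidance_scale=6.0,        # counterpart of --sample_guide_scale 6
    output_type="pil",
)
result.frames[0][0].save("wan-512x512-STEP32-CFG6.png")
```

In the parameter dumps below, the same value shows up as `CFG scale: 6`.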
Test 1 - Bookshop
Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
| 512px | Steps 4 | Steps 8 | Steps 16 | Steps 20 | Steps 32 |
|---|---|---|---|---|---|
| CFG1 | | | | | |
| CFG2 | | | | | |
| CFG3 | | | | | |
| CFG4 | | | | | |
| CFG5 | | | | | |
| CFG6 | | | | | |
| CFG8 | | | | | |
| CFG10 | | | | | |
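The grid above sweeps sampling steps (columns) against guidance scale (rows) at a fixed prompt and 512px resolution. The images were produced through the SD.Next UI; purely as an illustration, a hypothetical loop that would generate a similar grid with plain diffusers (the fixed seed is one of those used in Test 4, since the seeds for this grid are not recorded):

```python
import torch
from diffusers import WanPipeline

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")
prompt = "photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling"

for steps in (4, 8, 16, 20, 32):
    for cfg in (1, 2, 3, 4, 5, 6, 8, 10):
        # fixed seed per cell so only steps/CFG change between images
        generator = torch.Generator(device="cuda").manual_seed(1620085323)
        image = pipe(
            prompt=prompt, height=512, width=512, num_frames=1,
            num_inference_steps=steps, guidance_scale=float(cfg),
            generator=generator, output_type="pil",
        ).frames[0][0]
        # filename mirrors the samples_filename_pattern from the config below
        image.save(f"bookshop-512x512-STEP{steps}-CFG{cfg}-Seed1620085323.png")
```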
Test 2 - Face and hand
Prompt: Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.
| 512px | Steps 8 | Steps 16 | Steps 20 | Steps 32 | Steps 64 |
|---|---|---|---|---|---|
| CFG1 | | | | | |
| CFG2 | | | | | |
| CFG3 | | | | | |
| CFG4 | | | | | |
| CFG5 | | | | | |
| CFG6 | | | | | |
| CFG8 | | | | | |
| CFG10 | | | | | |
Test 3 - Legs
Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.
Results: CFG0, CFG1 (768px); CFG1, CFG2, CFG3, CFG4, CFG6, CFG8 (512px).
Test 4 - Different seed variations and resolutions
512px
768px
Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
Parameters (two runs): Steps: 32 / 20 | Size: 768x768 / 1024x1024 | Seed: 1620085323 / 1931701040 | CFG scale: 6 | App: SD.Next | Version: 34031f5 | Pipeline: WanPipeline | Operations: txt2img | Model: Wan2.1-T2V-1.3B-Diffusers
Execution: Time: 3m 54.34s | total 439.88 pipeline 233.25 preview 194.33 te 5.20 offload 5.07 vae 1.20 decode 0.80 post 0.27 gc
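The execution line above is SD.Next's per-stage timer breakdown (pipeline, preview, text encoder, offload, VAE decode, and so on). For a rough end-to-end number outside SD.Next, a minimal timing sketch, assuming plain diffusers on a CUDA device (the runs above were on Intel XPU):

```python
import time
import torch
from diffusers import WanPipeline

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")

start = time.perf_counter()
image = pipe(
    prompt="photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling",
    height=768, width=768, num_frames=1,
    num_inference_steps=32, guidance_scale=6.0, output_type="pil",
).frames[0][0]
# converting the output to PIL forces the GPU work to finish, so the reading is accurate
print(f"generation: {time.perf_counter() - start:.2f}s")
image.save("timed-run.png")
```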
| | Steps 8 | Steps 16 | Steps 20 | Steps 32 |
|---|---|---|---|---|
| CFG4 | | | | |
| CFG6 | | | | |
| CFG8 | | | | |
| CFG10 | | | | |
Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
Parameters (two runs): Steps: 32 | Size: 1024x1024 / 768x768 | Seed: 4075624134 / 1620085323 | CFG scale: 6 | App: SD.Next | Version: 34031f5 | Pipeline: WanPipeline | Operations: txt2img | Model: Wan2.1-T2V-1.3B-Diffusers
Time: 5m 513m 11.86s 75s | total 466356.56 41 pipeline 350191.76 48 preview 103154.42 52 te 5.43 39 offload 54.07 33 vae 1.03 decode 0.81 65 post 0.28 gc 0.25 | GPU 11822 14298 MB 9% 11% | RAM 1613.31 68 GB 13%11%
| CFG6, STEP 32 | Seed: 1620085323 | Seed: 1931701040 | Seed: 4075624134 | Seed: 2736029172 |
|---|---|---|---|---|
| 512px | | | | |
| 768px | | | | |
| 1024px | | | | |
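The comparison above fixes CFG 6 and 32 steps and varies only the seed and the output resolution. Seed control in plain diffusers goes through `torch.Generator`; a short sketch mirroring the table, under the same assumptions as the earlier loop sketch:

```python
import torch
from diffusers import WanPipeline

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16
).to("cuda")
prompt = "photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling"
seeds = (1620085323, 1931701040, 4075624134, 2736029172)  # seeds from the table header

for size in (512, 768, 1024):
    for seed in seeds:
        generator = torch.Generator(device="cuda").manual_seed(seed)  # reproducible initial noise
        image = pipe(
            prompt=prompt, height=size, width=size, num_frames=1,
            num_inference_steps=32, guidance_scale=6.0,
            generator=generator, output_type="pil",
        ).frames[0][0]
        image.save(f"bookshop-{size}x{size}-STEP32-CFG6-Seed{seed}.png")
```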
System info
```
app: sdnext.git updated: 2025-07-21 hash: 34031f54 url: https://github.com/vladmandic/sdnext.git/tree/dev
arch: x86_64 cpu: x86_64 system: Linux release: 6.14.0-24-generic python: 3.12.3
torch: 2.7.1+xpu device: Intel(R) Arc(TM) Graphics (1) ipex: xformers: diffusers: 0.35.0.dev0 transformers: 4.53.2
active: xpu dtype: torch.bfloat16 vae: torch.bfloat16 unet: torch.bfloat16
base: Diffusers/Wan-AI/Wan2.1-T2V-1.3B-Diffusers [0fad780a53] refiner: none vae: none te: none unet: none
```
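The runs were done on an Intel Arc GPU through PyTorch's XPU backend (torch 2.7.1+xpu) in bfloat16. Outside SD.Next, device and dtype selection for the same setup would look roughly like this (assuming PyTorch ≥ 2.5, where `torch.xpu` is built in):

```python
import torch
from diffusers import WanPipeline

# prefer Intel XPU when available, matching the system info above
device = "xpu" if torch.xpu.is_available() else ("cuda" if torch.cuda.is_available() else "cpu")

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers",
    torch_dtype=torch.bfloat16,   # matches "dtype: torch.bfloat16" above
).to(device)
print(f"running on {device}")
```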
Config
```json
{
  "sd_model_checkpoint": "Diffusers/Wan-AI/Wan2.1-T2V-1.3B-Diffusers [0fad780a53]",
  "diffusers_version": "9c13f8657986e68f5f05987912c54432fd28d86f",
  "sd_checkpoint_hash": null,
  "diffusers_offload_min_gpu_memory": 0.05,
  "diffusers_offload_max_gpu_memory": 0.95,
  "diffusers_vae_tiling": true,
  "diffusers_vae_tile_size": 512,
  "dynamic_attention_slice_rate": 1,
  "dynamic_attention_trigger_rate": 2,
  "samples_filename_pattern": "[seq]-[date]-[model_name]-[height]x[width]-STEP[steps]-CFG[cfg]-Seed[seed]"
}
```
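The config enables GPU/CPU offload (`diffusers_offload_min_gpu_memory` / `diffusers_offload_max_gpu_memory`) and VAE tiling with a 512px tile size. SD.Next implements its own balanced offload; the closest plain-diffusers calls are sketched below (assumption: `enable_model_cpu_offload` and the Wan VAE's `enable_tiling` behave similarly to, but not identically with, SD.Next's implementation):

```python
import torch
from diffusers import WanPipeline

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16
)
pipe.enable_model_cpu_offload()  # keep submodules in RAM, move each to the GPU only while it runs
pipe.vae.enable_tiling()         # decode the latent in tiles to cap VAE memory use
```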
Model info
| Module | Class | Device | DType | Params | Modules | Config |
|---|---|---|---|---|---|---|
vae | AutoencoderKLWan | xpu:0 | torch.bfloat16 | 126892531 | 260 | FrozenDict({'base_dim': 96, 'z_dim': 16, 'dim_mult': [1, 2, 4, 4], 'num_res_blocks': 2, 'attn_scales': [], 'temperal_downsample': [False, True, True], 'dropout': 0.0, 'latents_mean': [-0.7571, -0.7089, -0.9113, 0.1075, -0.1745, 0.9653, -0.1517, 1.5508, 0.4134, -0.0715, 0.5517, -0.3632, -0.1922, -0.9497, 0.2503, -0.2921], 'latents_std': [2.8184, 1.4541, 2.3275, 2.6558, 1.2196, 1.7708, 2.6052, 2.0743, 3.2687, 2.1526, 2.8652, 1.5579, 1.6382, 1.1253, 2.8251, 1.916], '_class_name': 'AutoencoderKLWan', '_diffusers_version': '0.33.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--Wan-AI--Wan2.1-T2V-1.3B-Diffusers/snapshots/0fad780a534b6463e45facd96134c9f345acfa5b/vae'}) |
text_encoder | UMT5EncoderModel | cpu | torch.bfloat16 | 5680910336 | 486 | UMT5Config { "architectures": [ "UMT5EncoderModel" ], "classifier_dropout": 0.0, "d_ff": 10240, "d_kv": 64, "d_model": 4096, "decoder_start_token_id": 0, "dense_act_fn": "gelu_new", "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "gated-gelu", "initializer_factor": 1.0, "is_encoder_decoder": true, "is_gated_act": true, "layer_norm_epsilon": 1e-06, "model_type": "umt5", "num_decoder_layers": 24, "num_heads": 64, "num_layers": 24, "output_past": true, "pad_token_id": 0, "relative_attention_max_distance": 128, "relative_attention_num_buckets": 32, "scalable_attention": true, "tie_word_embeddings": false, "tokenizer_class": "T5Tokenizer", "torch_dtype": "bfloat16", "transformers_version": "4.53.2", "use_cache": true, "vocab_size": 256384 } |
tokenizer | T5TokenizerFast | None | None | 0 | 0 | None |
transformer | WanTransformer3DModel | xpu:0 | torch.bfloat16 | 1418996800 | 858 | FrozenDict({'patch_size': [1, 2, 2], 'num_attention_heads': 12, 'attention_head_dim': 128, 'in_channels': 16, 'out_channels': 16, 'text_dim': 4096, 'freq_dim': 256, 'ffn_dim': 8960, 'num_layers': 30, 'cross_attn_norm': True, 'qk_norm': 'rms_norm_across_heads', 'eps': 1e-06, 'image_dim': None, 'added_kv_proj_dim': None, 'rope_max_seq_len': 1024, 'pos_embed_seq_len': None, '_use_default_values': ['pos_embed_seq_len'], '_class_name': 'WanTransformer3DModel', '_diffusers_version': '0.33.0.dev0', '_name_or_path': 'Wan-AI/Wan2.1-T2V-1.3B-Diffusers'}) |
scheduler | UniPCMultistepScheduler | None | None | 0 | 0 | FrozenDict({'num_train_timesteps': 1000, 'beta_start': 0.0001, 'beta_end': 0.02, 'beta_schedule': 'linear', 'trained_betas': None, 'solver_order': 2, 'prediction_type': 'flow_prediction', 'thresholding': False, 'dynamic_thresholding_ratio': 0.995, 'sample_max_value': 1.0, 'predict_x0': True, 'solver_type': 'bh2', 'lower_order_final': True, 'disable_corrector': [], 'solver_p': None, 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False, 'use_flow_sigmas': True, 'flow_shift': 3.0, 'timestep_spacing': 'linspace', 'steps_offset': 0, 'final_sigmas_type': 'zero', 'rescale_betas_zero_snr': False, 'use_dynamic_shifting': False, 'time_shift_type': 'exponential', '_use_default_values': ['use_dynamic_shifting', 'time_shift_type'], '_class_name': 'UniPCMultistepScheduler', '_diffusers_version': '0.33.0.dev0'}) |
_name_or_path | str | None | None | 0 | 0 | None |
_class_name | str | None | None | 0 | 0 | None |
_diffusers_version | str | None | None | 0 | 0 | None |
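A per-module summary like the table above can be collected directly from the loaded pipeline. A small sketch (assumption: any diffusers pipeline object; column meanings inferred from the table):

```python
import torch
from diffusers import WanPipeline

pipe = WanPipeline.from_pretrained(
    "Wan-AI/Wan2.1-T2V-1.3B-Diffusers", torch_dtype=torch.bfloat16
)

for name, component in pipe.components.items():
    if isinstance(component, torch.nn.Module):
        params = sum(p.numel() for p in component.parameters())
        modules = sum(1 for _ in component.modules())
        first = next(component.parameters(), None)
        device = first.device if first is not None else None
        dtype = first.dtype if first is not None else None
        print(f"{name} | {component.__class__.__name__} | {device} | {dtype} | {params} | {modules}")
    else:
        # tokenizer and scheduler are not torch modules, hence the None/0 entries in the table
        print(f"{name} | {type(component).__name__} | None | None | 0 | 0")
```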