...
Test 0 - Different seed variations
Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
Parameters: Steps: 9 | Size: 1024x1024 | Seed: 1620085323 | CFG scale: 1.0 | CFG true: 1 | App: SD.Next | Version: 7df6ff7 | Pipeline: ZImagePipeline | Operations: txt2img | Model: Z-Image-Turbo
Time: 2m 52.64s | total 182.41 pipeline 167.45 preview 5.81 decode 5.16 callback 2.36 te 1.32 gc 0.29 | GPU 22700 MB 18% | RAM 33.25 GB 27%
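The timing line above can be broken down into per-step cost (pipeline time divided by step count; numbers taken from the log):

```python
# Derive per-step latency from the reported timing breakdown.
pipeline_s = 167.45  # "pipeline" component of the total, in seconds
steps = 9            # Steps: 9

per_step = pipeline_s / steps
print(f"{per_step:.1f} s/step")  # ~18.6 s/step on this Arc iGPU
```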
Prompt: Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.
Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.
| CFG 0 / CFG 1, Steps 9 | Seed: 1620085323 | Seed: 1931701040 | Seed: 4075624134 | Seed: 2736029172 |
|---|---|---|---|---|
| bookshop girl | | | | |
| hand and face | | | | |
| legs and shoes | | | | |
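The seed columns above rely on deterministic latent initialization: diffusers pipelines derive the starting noise from a seeded `torch.Generator`, so the same seed reproduces the same image and different seeds give the variations compared here. A minimal sketch of that mechanism (the latent shape is an illustration: 16 channels and 1024/8 = 128 latent resolution, per the VAE config below):

```python
import torch

def init_latents(seed: int, shape=(1, 16, 128, 128)) -> torch.Tensor:
    # Seeded generator -> reproducible starting noise for the pipeline.
    gen = torch.Generator("cpu").manual_seed(seed)
    return torch.randn(shape, generator=gen)

a = init_latents(1620085323)
b = init_latents(1620085323)
c = init_latents(1931701040)
print(torch.equal(a, b), torch.equal(a, c))  # True False
```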
...
Test 4 - Other Model Covers
Test 5 - Art Prompts
Test 6 - Empty prompts
...
Test 7 - Civitai generations
System info
```
Thu Nov 27 19:01:49 2025
arch: x86_64 cpu: x86_64 system: Linux release: 6.14.0-36-generic python: 3.12.3
Torch: 2.9.1+xpu device: Intel(R) Arc(TM) Graphics (1)
ram: free:117.8 used:7.54 total:125.33
xformers: diffusers: 0.36.0.dev0 transformers: 4.57.1
active: xpu dtype: torch.bfloat16 vae: torch.bfloat16 unet: torch.bfloat16
base: Diffusers/Tongyi-MAI/Z-Image-Turbo [8dc64d5281] refiner: none vae: none te: none unet: none
Backend: ipex Pipeline: native Cross-attention: Scaled-Dot-Product
```
Config
```json
{
  "diffusers_version": "6bf668c4d217ebc96065e673d8a257fd79950d34",
  "sd_checkpoint_hash": null,
  "diffusers_offload_mode": "none",
  "ui_request_timeout": 300000,
  "huggingface_token": "hf_...FraU",
  "extra_network_reference_values": true,
  "sd_model_checkpoint": "Diffusers/Tongyi-MAI/Z-Image-Turbo [8dc64d5281]"
}
```
Model info
Diffusers/Tongyi-MAI/Z-Image-Turbo [8dc64d5281]
| Module | Class | Device | Dtype | Quant | Params | Modules | Config |
|---|---|---|---|---|---|---|---|
| vae | AutoencoderKL | xpu:0 | torch.bfloat16 | None | 83819683 | 241 | FrozenDict({'in_channels': 3, 'out_channels': 3, 'down_block_types': ['DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D'], 'up_block_types': ['UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D'], 'block_out_channels': [128, 256, 512, 512], 'layers_per_block': 2, 'act_fn': 'silu', 'latent_channels': 16, 'norm_num_groups': 32, 'sample_size': 1024, 'scaling_factor': 0.3611, 'shift_factor': 0.1159, 'latents_mean': None, 'latents_std': None, 'force_upcast': True, 'use_quant_conv': False, 'use_post_quant_conv': False, 'mid_block_add_attention': True, '_class_name': 'AutoencoderKL', '_diffusers_version': '0.36.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--Tongyi-MAI--Z-Image-Turbo/snapshots/8dc64d5281ef263238d1b12eb617b4bf1ed3ff2f/vae'}) |
| text_encoder | Qwen3Model | xpu:0 | torch.bfloat16 | None | 4022468096 | 545 | Qwen3Config { "architectures": [ "Qwen3ForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 151643, "dtype": "bfloat16", "eos_token_id": 151645, "head_dim": 128, "hidden_act": "silu", "hidden_size": 2560, "initializer_range": 0.02, "intermediate_size": 9728, "layer_types": [ "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention" ], "max_position_embeddings": 40960, "max_window_layers": 36, "model_type": "qwen3", "num_attention_heads": 32, "num_hidden_layers": 36, "num_key_value_heads": 8, "rms_norm_eps": 1e-06, "rope_scaling": null, "rope_theta": 1000000, "sliding_window": null, "tie_word_embeddings": true, "transformers_version": "4.57.1", "use_cache": true, "use_sliding_window": false, "vocab_size": 151936 } |
| tokenizer | Qwen2Tokenizer | None | None | None | 0 | 0 | None |
| scheduler | FlowMatchEulerDiscreteScheduler | None | None | None | 0 | 0 | FrozenDict({'num_train_timesteps': 1000, 'shift': 3.0, 'use_dynamic_shifting': False, 'base_shift': 0.5, 'max_shift': 1.15, 'base_image_seq_len': 256, 'max_image_seq_len': 4096, 'invert_sigmas': False, 'shift_terminal': None, 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False, 'time_shift_type': 'exponential', 'stochastic_sampling': False, '_use_default_values': ['base_shift', 'max_shift', 'use_karras_sigmas', 'shift_terminal', 'base_image_seq_len', 'invert_sigmas', 'max_image_seq_len', 'use_exponential_sigmas', 'time_shift_type', 'use_beta_sigmas', 'stochastic_sampling'], '_class_name': 'FlowMatchEulerDiscreteScheduler', '_diffusers_version': '0.36.0.dev0'}) |
| transformer | ZImageTransformer2DModel | xpu:0 | torch.bfloat16 | None | 6154908736 | 697 | FrozenDict({'all_patch_size': [2], 'all_f_patch_size': [1], 'in_channels': 16, 'dim': 3840, 'n_layers': 30, 'n_refiner_layers': 2, 'n_heads': 30, 'n_kv_heads': 30, 'norm_eps': 1e-05, 'qk_norm': True, 'cap_feat_dim': 2560, 'rope_theta': 256.0, 't_scale': 1000.0, 'axes_dims': [32, 48, 48], 'axes_lens': [1536, 512, 512], '_class_name': 'ZImageTransformer2DModel', '_diffusers_version': '0.36.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--Tongyi-MAI--Z-Image-Turbo/snapshots/8dc64d5281ef263238d1b12eb617b4bf1ed3ff2f/transformer'}) |
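The scheduler config in the table uses `use_dynamic_shifting: False` with `shift: 3.0`; in that mode diffusers' FlowMatchEulerDiscreteScheduler warps each sigma with the static formula σ' = s·σ / (1 + (s−1)·σ), pushing more of the 9 steps toward high noise. A sketch under that formula (the linear base schedule from 1.0 down to 1/1000 is an assumption for illustration):

```python
def shift_sigma(sigma: float, shift: float = 3.0) -> float:
    # Static resolution shift applied when use_dynamic_shifting is False.
    return shift * sigma / (1 + (shift - 1) * sigma)

# Linear base schedule for 9 inference steps, then shifted.
base = [1.0 - i * (1.0 - 0.001) / 8 for i in range(9)]
shifted = [round(shift_sigma(s), 4) for s in base]
print(shifted)
```

Note that σ = 1.0 maps to 1.0 and σ = 0.5 maps to 0.75, i.e. the schedule spends proportionally longer at high noise levels, which is typical for few-step distilled models like this Turbo variant.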
...