Prompt: School teacher with chalk in one hand and "AI Art" book in the other hand, looking at the class explaining the code. Blackboard behind him is with python code fragment written using chalk. Black bold text "6B" and neon lightning-bolt lamp (⚡️) is on the wall, brush painted above the blackboard at the left The code is image = pipe( prompt=prompt, height=1024, width=1024, num_inference_steps=9, # This actually results in 8 DiT forwards guidance_scale=0.0, # Guidance should be 0 for the Turbo models generator=torch.Generator("cuda").manual_seed(42), ).images[0] cinematic light, natural sunset sunlight through the window. photorealistic.
Parameters: Steps: 9| Size: 1024x1024| Seed: 777| CFG scale: 1.0| App: SD.Next| Version: 7df6ff7| Pipeline: ZImagePipeline| Operations: txt2img| Model: Z-Image-Turbo
Time: 4m 3.13s | total 189.98 pipeline 173.54 preview 7.18 decode 5.18 callback 2.43 te 1.31 gc 0.32 | GPU 22700 MB 18% | RAM 33.42 GB 27%
https://tongyi-mai.github.io/Z-Image-blog/
https://github.com/Tongyi-MAI/Z-Image
https://huggingface.co/Tongyi-MAI/Z-Image-Turbo
https://civitai.com/models/2168935/z-image?modelVersionId=2442439
```python
import torch
from diffusers import ZImagePipeline  # pipeline class reported by SD.Next above

# 1. Load the pipeline (assumed setup; step 2 below is the snippet from the post)
pipe = ZImagePipeline.from_pretrained(
    "Tongyi-MAI/Z-Image-Turbo", torch_dtype=torch.bfloat16
).to("cuda")

prompt = "Young Chinese woman in red Hanfu, intricate embroidery. Impeccable makeup, red floral forehead pattern. Elaborate high bun, golden phoenix headdress, red flowers, beads. Holds round folding fan with lady, trees, bird. Neon lightning-bolt lamp (⚡️), bright yellow glow, above extended left palm. Soft-lit outdoor night background, silhouetted tiered pagoda (西安大雁塔), blurred colorful distant lights."

# 2. Generate image
image = pipe(
    prompt=prompt,
    height=1024,
    width=1024,
    num_inference_steps=9,  # this actually results in 8 DiT forwards
    guidance_scale=0.0,  # guidance should be 0 for the Turbo models
    generator=torch.Generator("cuda").manual_seed(42),
).images[0]
```
Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
Prompt: Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.
Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.
| CFG 0, 9 steps | Seed: 1620085323 | Seed: 1931701040 | Seed: 4075624134 | Seed: 2736029172 |
|---|---|---|---|---|
| bookshop girl | | | | |
| hand and face | | | | |
| legs and shoes | | | | |
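The seed comparison above can be reproduced with a small sweep. A minimal sketch, assuming a pipeline already loaded as in the code earlier in this post; the `output_name` helper, the prompt tags, and the file-naming scheme are my own illustration, not part of the original workflow:

```python
from itertools import product

# Seeds and prompt tags taken from the comparison table above
SEEDS = [1620085323, 1931701040, 4075624134, 2736029172]
PROMPTS = {
    "bookshop girl": "photorealistic girl in bookshop choosing the book "
                     "in romantic stories shelf. smiling",
    # the "hand and face" and "legs and shoes" prompts go here likewise
}

def output_name(tag, seed):
    """Hypothetical naming scheme for each grid cell."""
    return f"{tag.replace(' ', '_')}_seed{seed}.png"

def run_seed_sweep(pipe):
    """Render every (prompt, seed) cell at CFG 0 / 9 steps."""
    import torch  # deferred so the helpers above import without torch installed
    for (tag, prompt), seed in product(PROMPTS.items(), SEEDS):
        image = pipe(
            prompt=prompt,
            height=1024, width=1024,
            num_inference_steps=9,
            guidance_scale=0.0,  # Turbo models: guidance off
            generator=torch.Generator("cuda").manual_seed(seed),
        ).images[0]
        image.save(output_name(tag, seed))
```

Fixing the seed per column is what makes the rows comparable: each column shares its initial latent noise across all three prompts.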
Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
| Steps | 2 | 4 | 9 | 12 | 16 |
|---|---|---|---|---|---|
| CFG0 | | | | | |
Prompt: Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.
| Steps | 2 | 4 | 9 | 12 | 16 |
|---|---|---|---|---|---|
| CFG0 | | | | | |
Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.
| Steps | 2 | 4 | 9 | 12 | 16 |
|---|---|---|---|---|---|
| CFG0 | | | | | |
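The step counts above interact with the scheduler's time shift. The FlowMatchEulerDiscreteScheduler config dumped later in this post reports `shift: 3.0` with `use_dynamic_shifting: False`; in that static mode diffusers remaps each sigma as `shift * s / (1 + (shift - 1) * s)`, which compresses the schedule toward the high-noise end at low step counts. A sketch of that remapping (the linear starting grid is an illustrative simplification, not the exact diffusers schedule):

```python
def shift_sigma(s: float, shift: float = 3.0) -> float:
    """Static time shift applied by FlowMatchEulerDiscreteScheduler
    when use_dynamic_shifting is False (shift=3.0 per the config)."""
    return shift * s / (1 + (shift - 1) * s)

def schedule(num_steps: int, shift: float = 3.0) -> list[float]:
    """Illustrative sigma schedule: a linear grid from 1.0 down to 1/n,
    then the static shift (a simplification of the diffusers logic)."""
    linear = [(num_steps - i) / num_steps for i in range(num_steps)]
    return [round(shift_sigma(s, shift), 3) for s in linear]

for n in (2, 4, 9, 16):
    print(n, schedule(n))
# e.g. schedule(4) -> [1.0, 0.9, 0.75, 0.5]: most of the budget sits at high noise
```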
| seed 1 | seed 2 | seed 3 | seed 4 | seed 5 |
|---|---|---|---|---|
| seed 6 | seed 7 | seed 8 | seed 9 | |

| seed 42 | seed 324 | seed 777 |
|---|---|---|
arch: x86_64 | cpu: x86_64 | system: Linux | release: 6.14.0-36-generic | python: 3.12.3 | Torch: 2.9.1+xpu | device: Intel(R) Arc(TM) Graphics (1) | xformers: | diffusers: 0.36.0.dev0 | transformers: 4.57.1 | active: xpu | dtype: torch.bfloat16 | vae: torch.bfloat16 | unet: torch.bfloat16 | Backend: ipex | Pipeline: native | Cross-attention: Scaled-Dot-Product
Diffusers/Tongyi-MAI/Z-Image-Turbo [8dc64d5281]
| Module | Class | Device | Dtype | Quant | Params | Modules | Config |
|---|---|---|---|---|---|---|---|
| vae | AutoencoderKL | xpu:0 | torch.bfloat16 | None | 83819683 | 241 | FrozenDict({'in_channels': 3, 'out_channels': 3, 'down_block_types': ['DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D'], 'up_block_types': ['UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D'], 'block_out_channels': [128, 256, 512, 512], 'layers_per_block': 2, 'act_fn': 'silu', 'latent_channels': 16, 'norm_num_groups': 32, 'sample_size': 1024, 'scaling_factor': 0.3611, 'shift_factor': 0.1159, 'latents_mean': None, 'latents_std': None, 'force_upcast': True, 'use_quant_conv': False, 'use_post_quant_conv': False, 'mid_block_add_attention': True, '_class_name': 'AutoencoderKL', '_diffusers_version': '0.36.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--Tongyi-MAI--Z-Image-Turbo/snapshots/8dc64d5281ef263238d1b12eb617b4bf1ed3ff2f/vae'}) |
| text_encoder | Qwen3Model | xpu:0 | torch.bfloat16 | None | 4022468096 | 545 | Qwen3Config { "architectures": [ "Qwen3ForCausalLM" ], "attention_bias": false, "attention_dropout": 0.0, "bos_token_id": 151643, "dtype": "bfloat16", "eos_token_id": 151645, "head_dim": 128, "hidden_act": "silu", "hidden_size": 2560, "initializer_range": 0.02, "intermediate_size": 9728, "layer_types": [ "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention" ], "max_position_embeddings": 40960, "max_window_layers": 36, "model_type": "qwen3", "num_attention_heads": 32, "num_hidden_layers": 36, "num_key_value_heads": 8, "rms_norm_eps": 1e-06, "rope_scaling": null, "rope_theta": 1000000, "sliding_window": null, "tie_word_embeddings": true, "transformers_version": "4.57.1", "use_cache": true, "use_sliding_window": false, "vocab_size": 151936 } |
| tokenizer | Qwen2Tokenizer | None | None | None | 0 | 0 | None |
| scheduler | FlowMatchEulerDiscreteScheduler | None | None | None | 0 | 0 | FrozenDict({'num_train_timesteps': 1000, 'shift': 3.0, 'use_dynamic_shifting': False, 'base_shift': 0.5, 'max_shift': 1.15, 'base_image_seq_len': 256, 'max_image_seq_len': 4096, 'invert_sigmas': False, 'shift_terminal': None, 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False, 'time_shift_type': 'exponential', 'stochastic_sampling': False, '_use_default_values': ['base_shift', 'max_shift', 'use_karras_sigmas', 'shift_terminal', 'base_image_seq_len', 'invert_sigmas', 'max_image_seq_len', 'use_exponential_sigmas', 'time_shift_type', 'use_beta_sigmas', 'stochastic_sampling'], '_class_name': 'FlowMatchEulerDiscreteScheduler', '_diffusers_version': '0.36.0.dev0'}) |
| transformer | ZImageTransformer2DModel | xpu:0 | torch.bfloat16 | None | 6154908736 | 697 | FrozenDict({'all_patch_size': [2], 'all_f_patch_size': [1], 'in_channels': 16, 'dim': 3840, 'n_layers': 30, 'n_refiner_layers': 2, 'n_heads': 30, 'n_kv_heads': 30, 'norm_eps': 1e-05, 'qk_norm': True, 'cap_feat_dim': 2560, 'rope_theta': 256.0, 't_scale': 1000.0, 'axes_dims': [32, 48, 48], 'axes_lens': [1536, 512, 512], '_class_name': 'ZImageTransformer2DModel', '_diffusers_version': '0.36.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--Tongyi-MAI--Z-Image-Turbo/snapshots/8dc64d5281ef263238d1b12eb617b4bf1ed3ff2f/transformer'}) |
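The parameter counts in the module table add up to roughly 10.26B, which at bfloat16 (2 bytes per parameter) puts the weights alone near 19 GiB, consistent with the ~22,700 MB GPU figure reported above once latents, activations, and the VAE decode are added. A quick sanity check, with the counts copied from the table:

```python
# Parameter counts copied from the module table above
PARAMS = {
    "vae": 83_819_683,              # AutoencoderKL
    "text_encoder": 4_022_468_096,  # Qwen3Model
    "transformer": 6_154_908_736,   # ZImageTransformer2DModel
}

total = sum(PARAMS.values())
weights_gib = total * 2 / 2**30  # bfloat16 = 2 bytes per parameter

print(f"total params: {total:,}")              # 10,261,196,515
print(f"bf16 weights: {weights_gib:.2f} GiB")  # ~19.11 GiB
```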