Info
https://civitai.com/models/133005/juggernaut-xl
...
| Code Block |
|---|
| Res: 832x1216 (for portrait, but any SDXL resolution will work fine), Sampler: DPM++ 2M SDE, Steps: 30-40, CFG: 3-6 (less is a bit more realistic) |
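The settings above map directly onto a plain diffusers run. A minimal sketch (the checkpoint path is hypothetical, and DPM++ 2M SDE is assumed to correspond to `DPMSolverMultistepScheduler` with `algorithm_type="sde-dpmsolver++"`):

```python
# Minimal sketch: applying the recommended settings with diffusers.
import torch
from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

pipe = StableDiffusionXLPipeline.from_single_file(
    "juggernautXL_ragnarokBy.safetensors",  # hypothetical local path to the checkpoint
    torch_dtype=torch.bfloat16,
)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config, algorithm_type="sde-dpmsolver++"  # DPM++ 2M SDE equivalent
)
pipe.to("xpu")  # Intel Arc via the XPU backend, as in the system info below

image = pipe(
    prompt="photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling",
    width=832, height=1216,        # portrait resolution from the settings above
    num_inference_steps=35,        # recommended range: 30-40
    guidance_scale=4.0,            # recommended CFG range: 3-6
).images[0]
image.save("test.png")
```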
Test 0 - Different seed variations
Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
...
| CFG6, STEP20 | Seed: 1620085323 | Seed: 1931701040 | Seed: 4075624134 | Seed: 2736029172 |
|---|---|---|---|---|
| bookshop girl | | | | |
| hand and face | | | | |
| legs and shoes | | | | |
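The seed columns in the grid above can be pinned with a per-image `torch.Generator`. A small sketch, assuming the `pipe` object from the earlier snippet:

```python
# Sketch: reproducing the fixed-seed columns of the grid above.
import torch

seeds = [1620085323, 1931701040, 4075624134, 2736029172]  # taken from the table header
prompt = "photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling"

for seed in seeds:
    generator = torch.Generator(device="cpu").manual_seed(seed)  # fixed initial noise per column
    image = pipe(prompt, num_inference_steps=20, guidance_scale=6.0,  # CFG6, STEP20 as in the header
                 generator=generator).images[0]
    image.save(f"seed-{seed}.png")
```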
Test 0.5 - 832x1216
Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.
...
| CFG6, STEP20 | Seed: 1620085323 | Seed: 1931701040 | Seed: 4075624134 | Seed: 2736029172 |
|---|---|---|---|---|
| bookshop girl | | | | |
| hand and face | | | | |
| legs and shoes | | | | |
Test 1 - Bookshop
Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
| CFG \ Steps | 4 | 8 | 16 | 32 | 64 |
|---|---|---|---|---|---|
| CFG1 | | | | | |
| CFG2 | | | | | |
| CFG3 | | | | | |
| CFG4 | | | | | |
| CFG5 | | | | | |
| CFG6 | | | | | |
| CFG7 | | | | | |
| CFG8 | | | | | |
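A grid like the one above is just a nested loop over CFG and step values with the initial noise held fixed. A sketch, again assuming `pipe` from the first snippet:

```python
# Sketch: producing a CFG x steps grid with a fixed seed.
import torch

prompt = "photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling"
seed = 2736029172  # any fixed seed keeps the starting noise identical across cells

for cfg in (1, 2, 3, 4, 5, 6, 7, 8):
    for steps in (4, 8, 16, 32, 64):
        generator = torch.Generator(device="cpu").manual_seed(seed)
        image = pipe(prompt, guidance_scale=float(cfg), num_inference_steps=steps,
                     generator=generator).images[0]
        image.save(f"bookshop-CFG{cfg}-STEP{steps}.png")
```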
Test 2 - Face and hand
Prompt: Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.
Parameters: Steps: 32| Size: 1024x1024| Seed: 2736029172| CFG scale: 4| App: SD.Next| Version: f71db69| Pipeline: StableDiffusionXLPipeline| Operations: txt2img| Model: juggernautXL_ragnarokBy| Model hash: dd08fa32f9
Time: 2m 53.49s | total 193.31 pipeline 167.19 callback 13.19 decode 6.27 preview 5.31 prompt 1.02 gc 0.29 | GPU 9440 MB 8% | RAM 22.77 GB 18%
| CFG \ Steps | 8 | 16 | 20 | 32 |
|---|---|---|---|---|
| CFG1 | | | | |
| CFG2 | | | | |
| CFG3 | | | | |
| CFG4 | | | | |
| CFG5 | | | | |
| CFG6 | | | | |
| CFG8 | | | | |
Test 3 - Legs
Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.
| CFG \ Steps | 8 | 16 | 20 | 32 |
|---|---|---|---|---|
| CFG1 | | | | |
| CFG2 | | | | |
| CFG3 | | | | |
| CFG4 | | | | |
| CFG5 | | | | |
| CFG6 | | | | |
| CFG8 | | | | |
Test 5 - Different samplers
Prompt: photo of a cute female teal robot, walking on water surface with rocks and mountains visible in background, during sunset, rich details
Parameters: Steps: 20| Size: 1024x1024| Seed: 159345170| CFG scale: 4| App: SD.Next| Version: 57fdc0a| Pipeline: StableDiffusionXLPipeline| Operations: txt2img| Model: tempestByVlad_baseV01| Model hash: 8bfad17222
Time: 1m 52.40s | total 128.89 pipeline 106.15 callback 8.31 preview 7.24 decode 6.20 prompt 0.68 gc 0.26 | GPU 9432 MB 8% | RAM 3.86 GB 3%
Sampler: Default
Sampler: DPM++
Sampler: DPM++ SDE
Sampler: DDPM
Sampler: Euler
Sampler: Euler a
Sampler: UniPC
Sampler: DPM++ 1S
Sampler: DPM SDE
Sampler: DDIM
Sampler: Heun
Sampler: DEIS
Sampler: PNDM
Sampler: DC Solver
Sampler: SA Solver
Sampler: LMSD
Sampler: LCM
Sampler: TCD
Sampler: TDD
Sampler: KDPM2
Sampler: KDPM2 a
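Outside SD.Next, most of the sampler names above correspond to diffusers scheduler classes. The mapping below is approximate and partial (DC Solver, TDD and a few others are SD.Next-specific, and distilled samplers such as LCM/TCD normally expect matching weights); it assumes the `pipe` object from the first snippet:

```python
# Sketch: swapping samplers by replacing the pipeline scheduler.
from diffusers import (
    EulerDiscreteScheduler, EulerAncestralDiscreteScheduler, DDIMScheduler,
    DDPMScheduler, HeunDiscreteScheduler, UniPCMultistepScheduler,
    DEISMultistepScheduler, PNDMScheduler, LMSDiscreteScheduler, LCMScheduler,
    KDPM2DiscreteScheduler, KDPM2AncestralDiscreteScheduler,
    DPMSolverMultistepScheduler,
)

SAMPLERS = {
    "Euler": EulerDiscreteScheduler,
    "Euler a": EulerAncestralDiscreteScheduler,
    "DDIM": DDIMScheduler,
    "DDPM": DDPMScheduler,
    "Heun": HeunDiscreteScheduler,
    "UniPC": UniPCMultistepScheduler,
    "DEIS": DEISMultistepScheduler,
    "PNDM": PNDMScheduler,
    "LMSD": LMSDiscreteScheduler,
    "LCM": LCMScheduler,
    "KDPM2": KDPM2DiscreteScheduler,
    "KDPM2 a": KDPM2AncestralDiscreteScheduler,
    "DPM++": DPMSolverMultistepScheduler,
}

prompt = ("photo of a cute female teal robot, walking on water surface with rocks "
          "and mountains visible in background, during sunset, rich details")

for name, cls in SAMPLERS.items():
    pipe.scheduler = cls.from_config(pipe.scheduler.config)  # reuse the model's scheduler config
    image = pipe(prompt, num_inference_steps=20, guidance_scale=4.0).images[0]
    image.save(f"sampler-{name.replace(' ', '_')}.png")
```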
Test 6 - CFG 6 vs CFG 3.5
System info
| Code Block |
|---|
Mon Sep 29 13:03:03 2025 app: sdnext.git updated: 2025-10-03 hash: 48bcf6a76 url: https://github.com/liutyi/sdnext.git/tree/ipex arch: x86_64 cpu: x86_64 system: Linux release: 6.14.0-33-generic python: 3.12.3 Torch: 2.7.1+xpu device: Intel(R) Arc(TM) Graphics (1) ipex: 2.7.10+xpu ram: free:116.31 used:9.02 total:125.33 gpu: free:108.15 used:9.22 total:117.37 gpu-active: current:6.64 peak:7.28 gpu-allocated: current:6.64 peak:7.28 gpu-reserved: current:9.22 peak:9.22 gpu-inactive: current:0.26 peak:0.65 events: retries:0 oom:0 utilization: 0 xformers: diffusers: 0.36.0.dev0 transformers: 4.56.2 active: xpu dtype: torch.bfloat16 vae: torch.bfloat16 unet: torch.bfloat16 base: juggernautXL_ragnarokBy [dd08fa32f9] refiner: none vae: none te: none unet: none Backend: ipex Cross-attention: Scaled-Dot-Product |
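The log above shows SD.Next on an Intel Arc GPU through the ipex/XPU backend with bfloat16. A quick sanity check of that setup from plain PyTorch (assuming a torch build with XPU support, as in the log):

```python
# Sketch: verifying the Intel XPU backend reported in the system info above.
import torch

print(torch.__version__)                 # e.g. 2.7.1+xpu
print(torch.xpu.is_available())          # True when the Arc GPU is visible
if torch.xpu.is_available():
    print(torch.xpu.get_device_name(0))  # e.g. Intel(R) Arc(TM) Graphics
    x = torch.randn(4, 4, device="xpu", dtype=torch.bfloat16)
    print(x.device, x.dtype)             # xpu:0 torch.bfloat16, matching the dtypes in use
```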
Config
| Code Block |
|---|
{
"sd_model_checkpoint": "juggernautXL_ragnarokBy [dd08fa32f9]",
"theme_type": "Standard",
"diffusers_version": "b4297967a04cca6ac4493202c02d81c30d0f9ee8",
"sd_checkpoint_hash": "dd08fa32f98d05a2443ca1419e46df1575a0811f6e3b246d9dd47ff20f5eb66a",
"huggingface_token": "hf_xxx",
"samples_filename_pattern": "[date]-[seq]-[model_name]-[width]x[height]-Seed[seed]-CFG[cfg]-AG[pag]-STEP[steps]",
"diffusers_to_gpu": true,
"device_map": "gpu",
"diffusers_offload_mode": "none",
"diffusers_generator_device": "Unset",
"queue_history_retention_days": "3 days",
"model_modular_enable": true
} |
Model info
| Code Block |
|---|
| Module | Class | Device | Dtype | Quant | Params | Modules | Config |
|---|---|---|---|---|---|---|---|
| vae | AutoencoderKL | xpu:0 | torch.bfloat16 | None | 83653863 | 243 | FrozenDict({'in_channels': 3, 'out_channels': 3, 'down_block_types': ['DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D'], 'up_block_types': ['UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D'], 'block_out_channels': [128, 256, 512, 512], 'layers_per_block': 2, 'act_fn': 'silu', 'latent_channels': 4, 'norm_num_groups': 32, 'sample_size': 1024, 'scaling_factor': 0.13025, 'shift_factor': None, 'latents_mean': None, 'latents_std': None, 'force_upcast': False, 'use_quant_conv': True, 'use_post_quant_conv': True, 'mid_block_add_attention': True, '_use_default_values': ['use_post_quant_conv', 'latents_std', 'use_quant_conv', 'shift_factor', 'mid_block_add_attention', 'latents_mean'], '_class_name': 'AutoencoderKL', '_diffusers_version': '0.20.0.dev0', '_name_or_path': '../sdxl-vae/'}) |
| text_encoder | CLIPTextModel | xpu:0 | torch.bfloat16 | None | 123060480 | 152 | CLIPTextConfig { "architectures": [ "CLIPTextModel" ], "attention_dropout": 0.0, "bos_token_id": 0, "dropout": 0.0, "dtype": "float16", "eos_token_id": 2, "hidden_act": "quick_gelu", "hidden_size": 768, "initializer_factor": 1.0, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 77, "model_type": "clip_text_model", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "projection_dim": 768, "transformers_version": "4.56.2", "vocab_size": 49408 } |
| text_encoder_2 | CLIPTextModelWithProjection | xpu:0 | torch.bfloat16 | None | 694659840 | 393 | CLIPTextConfig { "architectures": [ "CLIPTextModelWithProjection" ], "attention_dropout": 0.0, "bos_token_id": 0, "dropout": 0.0, "dtype": "float16", "eos_token_id": 2, "hidden_act": "gelu", "hidden_size": 1280, "initializer_factor": 1.0, "initializer_range": 0.02, "intermediate_size": 5120, "layer_norm_eps": 1e-05, "max_position_embeddings": 77, "model_type": "clip_text_model", "num_attention_heads": 20, "num_hidden_layers": 32, "pad_token_id": 1, "projection_dim": 1280, "transformers_version": "4.56.2", "vocab_size": 49408 } |
| tokenizer | CLIPTokenizer | None | None | None | 0 | 0 | None |
| tokenizer_2 | CLIPTokenizer | None | None | None | 0 | 0 | None |
| unet | UNet2DConditionModel | xpu:0 | torch.bfloat16 | None | 2567463684 | 1930 | FrozenDict({'sample_size': 128, 'in_channels': 4, 'out_channels': 4, 'center_input_sample': False, 'flip_sin_to_cos': True, 'freq_shift': 0, 'down_block_types': ['DownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D'], 'mid_block_type': 'UNetMidBlock2DCrossAttn', 'up_block_types': ['CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'UpBlock2D'], 'only_cross_attention': False, 'block_out_channels': [320, 640, 1280], 'layers_per_block': 2, 'downsample_padding': 1, 'mid_block_scale_factor': 1, 'dropout': 0.0, 'act_fn': 'silu', 'norm_num_groups': 32, 'norm_eps': 1e-05, 'cross_attention_dim': 2048, 'transformer_layers_per_block': [1, 2, 10], 'reverse_transformer_layers_per_block': None, 'encoder_hid_dim': None, 'encoder_hid_dim_type': None, 'attention_head_dim': [5, 10, 20], 'num_attention_heads': None, 'dual_cross_attention': False, 'use_linear_projection': True, 'class_embed_type': None, 'addition_embed_type': 'text_time', 'addition_time_embed_dim': 256, 'num_class_embeds': None, 'upcast_attention': None, 'resnet_time_scale_shift': 'default', 'resnet_skip_time_act': False, 'resnet_out_scale_factor': 1.0, 'time_embedding_type': 'positional', 'time_embedding_dim': None, 'time_embedding_act_fn': None, 'timestep_post_act': None, 'time_cond_proj_dim': None, 'conv_in_kernel': 3, 'conv_out_kernel': 3, 'projection_class_embeddings_input_dim': 2816, 'attention_type': 'default', 'class_embeddings_concat': False, 'mid_block_only_cross_attention': None, 'cross_attention_norm': None, 'addition_embed_type_num_heads': 64, '_use_default_values': ['attention_type', 'dropout', 'reverse_transformer_layers_per_block'], '_class_name': 'UNet2DConditionModel', '_diffusers_version': '0.19.0.dev0'}) |
| scheduler | EulerDiscreteScheduler | None | None | None | 0 | 0 | FrozenDict({'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'prediction_type': 'epsilon', 'interpolation_type': 'linear', 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False, 'sigma_min': None, 'sigma_max': None, 'timestep_spacing': 'leading', 'timestep_type': 'discrete', 'steps_offset': 1, 'rescale_betas_zero_snr': False, 'final_sigmas_type': 'zero', '_use_default_values': ['final_sigmas_type', 'use_beta_sigmas', 'use_exponential_sigmas', 'sigma_min', 'rescale_betas_zero_snr', 'timestep_type', 'sigma_max'], '_class_name': 'EulerDiscreteScheduler', '_diffusers_version': '0.19.0.dev0', 'clip_sample': False, 'sample_max_value': 1.0, 'set_alpha_to_one': False, 'skip_prk_steps': True}) |
| image_encoder | NoneType | None | None | None | 0 | 0 | None |
| feature_extractor | NoneType | None | None | None | 0 | 0 | None |
| force_zeros_for_empty_prompt | bool | None | None | None | 0 | 0 | None |
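A table like the one above can be approximated by walking `pipe.components` and counting parameters; a rough sketch, assuming the `pipe` object from the first snippet:

```python
# Sketch: a rough reproduction of the module/device/dtype/param overview above.
import torch

for name, component in pipe.components.items():
    if isinstance(component, torch.nn.Module):
        params = sum(p.numel() for p in component.parameters())
        first = next(component.parameters())
        print(f"{name:16s} {component.__class__.__name__:32s} "
              f"{first.device} {first.dtype} {params:,}")
    else:
        # tokenizers, scheduler, None entries: no parameters or device
        print(f"{name:16s} {component.__class__.__name__}")
```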
...