...
| CFG7, STEP 30 | Seed: 1620085323 | Seed: 1931701040 | Seed: 4075624134 | Seed: 2736029172 |
|---|---|---|---|---|
| bookshop girl | ||||
| hand and face | ||||
| legs and shoes | ||||
Improved prompts
Prompt: masterpiece, best quality, photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
...
| CFG5, STEP 28 | Seed: 1620085323 | Seed: 1931701040 | Seed: 4075624134 | Seed: 2736029172 |
|---|---|---|---|---|
| bookshop girl | ||||
| hand and face | ||||
| legs and shoes | ||||
Test 1 - Bookshop
Prompt: masterpiece, best quality, photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
Negative: nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro
Parameters: Steps: 30 | Size: 1024x1024 | Seed: 4075624134 | CFG scale: 2 | App: SD.Next | Version: d5eaed8 | Pipeline: StableDiffusionXLPipeline | Operations: txt2img | Model: noobaiXLNAIXL_epsilonPred11Version | Model hash: 6681e8e4b1
Time: 2m 43.03s | total 185.22 pipeline 156.82 callback 12.36 preview 9.03 decode 6.17 prompt 0.47 gc 0.33 | GPU 9436 MB 8% | RAM 45.06 GB 36%
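For reference, a single image with the parameters above can be reproduced outside SD.Next with plain diffusers (the `StableDiffusionXLPipeline` class is the one named in the parameter line). This is only a sketch: the checkpoint filename and the `txt2img_kwargs` helper are illustrative assumptions, not the exact setup used for these tests.

```python
def txt2img_kwargs(prompt, negative, seed, steps, cfg, size=1024):
    """Map the fields of a 'Parameters:' line onto diffusers call arguments.

    The seed is kept as a plain int here and turned into a torch.Generator
    only at call time, so this helper stays framework-independent.
    """
    return {
        "prompt": prompt,
        "negative_prompt": negative,
        "num_inference_steps": steps,
        "guidance_scale": cfg,
        "width": size,
        "height": size,
        "seed": seed,
    }

# Usage (needs a GPU and the model file; the path below is a placeholder):
# import torch
# from diffusers import StableDiffusionXLPipeline
# pipe = StableDiffusionXLPipeline.from_single_file(
#     "noobaiXLNAIXL_epsilonPred11Version.safetensors",
#     torch_dtype=torch.bfloat16,
# )
# kw = txt2img_kwargs("photorealistic girl in bookshop ...",
#                     "nsfw, worst quality, ...",
#                     seed=4075624134, steps=30, cfg=7)
# seed = kw.pop("seed")
# image = pipe(generator=torch.Generator("cpu").manual_seed(seed), **kw).images[0]
```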
| | 20 | 30 | 40 | 50 |
|---|---|---|---|---|
| CFG1 | ||||
| CFG2 | ||||
| CFG3 | ||||
| CFG4 | ||||
| CFG5 | ||||
| CFG6 | ||||
| CFG7 | ||||
| CFG8 | ||||
| CFG9 | ||||
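The grid above is a CFG-by-steps sweep over fixed seeds. Such a sweep is easy to script: enumerate every (CFG, steps, seed) cell and give each output a predictable filename. The helper and naming scheme below are assumptions for illustration, not the tooling actually used for these images.

```python
from itertools import product

def sweep_cells(cfgs, steps_list, seeds):
    """Yield one generation job per (CFG, steps, seed) cell of a grid."""
    for cfg, steps, seed in product(cfgs, steps_list, seeds):
        yield {
            "cfg": cfg,
            "steps": steps,
            "seed": seed,
            # Stable filename so cells can be re-rendered and compared later
            "filename": f"cfg{cfg}_steps{steps}_seed{seed}.png",
        }

# Example: the Test 1 grid (CFG 1-9, steps 20/30/40/50, the four seeds above)
# jobs = list(sweep_cells(range(1, 10), [20, 30, 40, 50],
#                         [1620085323, 1931701040, 4075624134, 2736029172]))
# len(jobs) == 9 * 4 * 4  # 144 images
```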
Test 2 - Face and hand
Prompt: masterpiece, best quality, Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.
Negative: nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro
Parameters: Steps: 32 | Size: 1024x1024 | Seed: 1620085323 | CFG scale: 9 | App: SD.Next | Version: d5eaed8 | Pipeline: StableDiffusionXLPipeline | Operations: txt2img | Model: noobaiXLNAIXL_epsilonPred11Version | Model hash: 6681e8e4b1
Time: 2m 50.55s | total 188.15 pipeline 164.37 callback 13.25 decode 6.36 preview 4.06 prompt 0.14 gc 0.32 | GPU 9438 MB 8% | RAM 45.29 GB 36%
| | 8 | 16 | 20 | 32 |
|---|---|---|---|---|
| CFG1 | ||||
| CFG2 | ||||
| CFG3 | ||||
| CFG4 | ||||
| CFG5 | ||||
| CFG6 | ||||
| CFG8 | ||||
| CFG9 | ||||
Test 3 - Legs
Prompt: masterpiece, best quality, Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.
Negative: nsfw, worst quality, old, early, low quality, lowres, signature, username, logo, bad hands, mutated hands, mammal, anthro, furry, ambiguous form, feral, semi-anthro
Parameters: Steps: 32 | Size: 1024x1024 | Seed: 2736029172 | CFG scale: 5 | App: SD.Next | Version: d5eaed8 | Pipeline: StableDiffusionXLPipeline | Operations: txt2img | Model: noobaiXLNAIXL_epsilonPred11Version | Model hash: 6681e8e4b1
Time: 2m 53.66s | total 193.44 pipeline 167.47 callback 13.23 decode 6.14 preview 5.41 prompt 0.80 gc 0.34 | GPU 9434 MB 8% | RAM 44.98 GB 36%
| | 8 | 16 | 20 | 32 |
|---|---|---|---|---|
| CFG1 | ||||
| CFG2 | ||||
| CFG3 | ||||
| CFG4 | ||||
| CFG4.5 | ||||
| CFG5 | ||||
| CFG6 | ||||
| CFG7 | ||||
| CFG8 | ||||
| CFG9 | ||||
Test 4 - Covers
System info
| Code Block |
|---|
Wed Nov 12 21:23:24 2025 app: sdnext.git updated: 2025-11-11 hash: d5eaed811 url: https://github.com/liutyi/sdnext.git/tree/ipex arch: x86_64 cpu: x86_64 system: Linux release: 6.14.0-35-generic python: 3.12.3 Torch: 2.7.1+xpu device: Intel(R) Arc(TM) Graphics (1) ipex: 2.7.10+xpu ram: free:102.11 used:23.22 total:125.33 gpu: free:108.95 used:8.87 total:117.37 gpu-active: current:6.75 peak:8.09 gpu-allocated: current:6.75 peak:8.09 gpu-reserved: current:8.87 peak:8.87 gpu-inactive: current:0.7 peak:0.91 events: retries:0 oom:0 utilization: 0 xformers: diffusers: 0.36.0.dev0 transformers: 4.57.1 active: xpu dtype: torch.bfloat16 vae: torch.bfloat16 unet: torch.bfloat16 base: noobaiXLNAIXL_epsilonPred11Version [6681e8e4b1] refiner: none vae: none te: none unet: none |
Config
| Code Block |
|---|
{
"sd_model_checkpoint": "noobaiXLNAIXL_epsilonPred11Version",
"diffusers_to_gpu": true,
"device_map": "gpu",
"model_wan_stage": "combined",
"diffusers_offload_mode": "none",
"sdnq_dequantize_compile": false,
"ui_request_timeout": 300000,
"huggingface_token": "hf _... FraU",
"diffusers_version": "b3e9dfced7c9e8d00f646c710766b532383f04c6",
"sd_checkpoint_hash": "6681e8e4b134c81f16533acedb0d406d7e5e366e1624b4105178c64d00b05d51",
"civitai_token": "xxx",
"schedulers_shift": 4
} |
Model info
noobaiXLNAIXL_epsilonPred11Version [6681e8e4b1]
| Module | Class | Device | Dtype | Quant | Params | Modules | Config |
|---|---|---|---|---|---|---|---|
| vae | AutoencoderKL | xpu:0 | torch.bfloat16 | None | 83653863 | 243 | FrozenDict({'in_channels': 3, 'out_channels': 3, 'down_block_types': ['DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D'], 'up_block_types': ['UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D'], 'block_out_channels': [128, 256, 512, 512], 'layers_per_block': 2, 'act_fn': 'silu', 'latent_channels': 4, 'norm_num_groups': 32, 'sample_size': 1024, 'scaling_factor': 0.13025, 'shift_factor': None, 'latents_mean': None, 'latents_std': None, 'force_upcast': False, 'use_quant_conv': True, 'use_post_quant_conv': True, 'mid_block_add_attention': True, '_use_default_values': ['mid_block_add_attention', 'latents_std', 'shift_factor', 'use_post_quant_conv', 'latents_mean', 'use_quant_conv'], '_class_name': 'AutoencoderKL', '_diffusers_version': '0.20.0.dev0', '_name_or_path': '../sdxl-vae/'}) |
| text_encoder | CLIPTextModel | xpu:0 | torch.bfloat16 | None | 123060480 | 152 | CLIPTextConfig { "architectures": [ "CLIPTextModel" ], "attention_dropout": 0.0, "bos_token_id": 0, "dropout": 0.0, "dtype": "float16", "eos_token_id": 2, "hidden_act": "quick_gelu", "hidden_size": 768, "initializer_factor": 1.0, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 77, "model_type": "clip_text_model", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "projection_dim": 768, "transformers_version": "4.57.1", "vocab_size": 49408 } |
| text_encoder_2 | CLIPTextModelWithProjection | xpu:0 | torch.bfloat16 | None | 694659840 | 393 | CLIPTextConfig { "architectures": [ "CLIPTextModelWithProjection" ], "attention_dropout": 0.0, "bos_token_id": 0, "dropout": 0.0, "dtype": "float16", "eos_token_id": 2, "hidden_act": "gelu", "hidden_size": 1280, "initializer_factor": 1.0, "initializer_range": 0.02, "intermediate_size": 5120, "layer_norm_eps": 1e-05, "max_position_embeddings": 77, "model_type": "clip_text_model", "num_attention_heads": 20, "num_hidden_layers": 32, "pad_token_id": 1, "projection_dim": 1280, "transformers_version": "4.57.1", "vocab_size": 49408 } |
| tokenizer | CLIPTokenizer | None | None | None | 0 | 0 | None |
| tokenizer_2 | CLIPTokenizer | None | None | None | 0 | 0 | None |
| unet | UNet2DConditionModel | xpu:0 | torch.bfloat16 | None | 2567463684 | 1930 | FrozenDict({'sample_size': 128, 'in_channels': 4, 'out_channels': 4, 'center_input_sample': False, 'flip_sin_to_cos': True, 'freq_shift': 0, 'down_block_types': ['DownBlock2D', 'CrossAttnDownBlock2D', 'CrossAttnDownBlock2D'], 'mid_block_type': 'UNetMidBlock2DCrossAttn', 'up_block_types': ['CrossAttnUpBlock2D', 'CrossAttnUpBlock2D', 'UpBlock2D'], 'only_cross_attention': False, 'block_out_channels': [320, 640, 1280], 'layers_per_block': 2, 'downsample_padding': 1, 'mid_block_scale_factor': 1, 'dropout': 0.0, 'act_fn': 'silu', 'norm_num_groups': 32, 'norm_eps': 1e-05, 'cross_attention_dim': 2048, 'transformer_layers_per_block': [1, 2, 10], 'reverse_transformer_layers_per_block': None, 'encoder_hid_dim': None, 'encoder_hid_dim_type': None, 'attention_head_dim': [5, 10, 20], 'num_attention_heads': None, 'dual_cross_attention': False, 'use_linear_projection': True, 'class_embed_type': None, 'addition_embed_type': 'text_time', 'addition_time_embed_dim': 256, 'num_class_embeds': None, 'upcast_attention': None, 'resnet_time_scale_shift': 'default', 'resnet_skip_time_act': False, 'resnet_out_scale_factor': 1.0, 'time_embedding_type': 'positional', 'time_embedding_dim': None, 'time_embedding_act_fn': None, 'timestep_post_act': None, 'time_cond_proj_dim': None, 'conv_in_kernel': 3, 'conv_out_kernel': 3, 'projection_class_embeddings_input_dim': 2816, 'attention_type': 'default', 'class_embeddings_concat': False, 'mid_block_only_cross_attention': None, 'cross_attention_norm': None, 'addition_embed_type_num_heads': 64, '_use_default_values': ['reverse_transformer_layers_per_block', 'dropout', 'attention_type'], '_class_name': 'UNet2DConditionModel', '_diffusers_version': '0.19.0.dev0'}) |
| scheduler | EulerAncestralDiscreteScheduler | None | None | None | 0 | 0 | FrozenDict({'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'scaled_linear', 'trained_betas': None, 'prediction_type': 'epsilon', 'timestep_spacing': 'trailing', 'steps_offset': 1, 'rescale_betas_zero_snr': False, 'interpolation_type': 'linear', 'use_karras_sigmas': False, '_class_name': 'EulerAncestralDiscreteScheduler', '_diffusers_version': '0.35.1', 'clip_sample': False, 'sample_max_value': 1.0, 'set_alpha_to_one': False, 'skip_prk_steps': True}) |
| image_encoder | NoneType | None | None | None | 0 | 0 | None |
| feature_extractor | NoneType | None | None | None | 0 | 0 | None |
| force_zeros_for_empty_prompt | bool | None | None | None | 0 | 0 | None |
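The module table above can be regenerated by introspecting the loaded pipeline. The sketch below assumes only that the object exposes a diffusers-style `.components` dict; class names, devices, dtypes, and parameter counts come from introspection, and modules without weights (tokenizers, the `NoneType` slots) report zero parameters.

```python
def describe_pipeline(pipe):
    """Return one (name, class, device, dtype, params) tuple per module."""
    rows = []
    for name, module in sorted(pipe.components.items()):
        cls = type(module).__name__
        device = str(getattr(module, "device", None))
        dtype = str(getattr(module, "dtype", None))
        try:
            # torch modules expose parameters(); everything else counts as 0
            params = sum(p.numel() for p in module.parameters())
        except (AttributeError, TypeError):
            params = 0
        rows.append((name, cls, device, dtype, params))
    return rows

# Usage with a loaded diffusers pipeline:
# for name, cls, device, dtype, n in describe_pipeline(pipe):
#     print(f"| {name} | {cls} | {device} | {dtype} | {n} |")
```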
...