
Info

Model card: https://huggingface.co/stabilityai/stable-diffusion-3.5-large-turbo

Defaults for Turbo:

    num_inference_steps=4,
    guidance_scale=0.0,

Defaults for Large:

    num_inference_steps=28,
    guidance_scale=3.5,
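
For reference, a minimal sketch of how these defaults map onto a diffusers call. This assumes the StableDiffusion3Pipeline API and a CUDA device; the runs below actually went through SD.Next on an Intel XPU, so the device string is an assumption:

    import torch
    from diffusers import StableDiffusion3Pipeline

    # Turbo is step-distilled, so it runs with very few steps and CFG disabled.
    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3.5-large-turbo",
        torch_dtype=torch.bfloat16,
    )
    pipe.to("cuda")  # assumption: substitute your accelerator ("xpu", "cuda", ...)

    image = pipe(
        prompt="photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling",
        num_inference_steps=4,  # Turbo default; the base Large model defaults to 28
        guidance_scale=0.0,     # Turbo default; the base Large model defaults to 3.5
    ).images[0]
    image.save("test.png")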

Test 1 - Bookshop

Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling

Parameters: Steps: 8 | Size: 512x512 | Seed: 3462878332 | App: SD.Next | Version: 2548b20 | Pipeline: StableDiffusion3Pipeline | Operations: txt2img | Model: stable-diffusion-3.5-large

Time: 1m 7.44s | total 94.06 pipeline 55.41 preview 17.62 move 9.00 prompt 8.40 decode 3.37 gc 0.47 | GPU 1100 MB 1% | RAM 28.51 GB 23%


Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling

Parameters: Steps: 8 | Size: 768x768 | Seed: 1461218801 | App: SD.Next | Version: 2548b20 | Pipeline: StableDiffusion3Pipeline | Operations: txt2img | Model: stable-diffusion-3.5-large

Time: 1m 40.18s | total 120.67 pipeline 95.94 preview 20.44 decode 3.97 gc 0.53 post 0.27 | GPU 2040 MB 2% | RAM 28.57 GB 23%



[Image grid: columns are 2 / 4 / 8 / 16 / 32 steps; rows compare CFG 0 and CFG 1 at 768px, then CFG 2, CFG 3, and CFG 5 at 512px.]
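
The grid above (and the ones in Tests 2 and 3) sweeps step count against CFG at a fixed seed, so only those two knobs vary between cells. A minimal sketch of reproducing such a sweep with raw diffusers, assuming the same model and prompt; the tests themselves were run through SD.Next, and the device string is an assumption:

    import torch
    from diffusers import StableDiffusion3Pipeline

    pipe = StableDiffusion3Pipeline.from_pretrained(
        "stabilityai/stable-diffusion-3.5-large", torch_dtype=torch.bfloat16
    ).to("cuda")  # assumption: substitute your accelerator

    prompt = "photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling"
    seed = 3462878332  # fixed seed so only steps/CFG vary between cells

    for cfg in (0.0, 1.0, 2.0, 3.0, 5.0):
        for steps in (2, 4, 8, 16, 32):
            generator = torch.Generator("cpu").manual_seed(seed)
            image = pipe(
                prompt,
                num_inference_steps=steps,
                guidance_scale=cfg,
                height=512, width=512,
                generator=generator,
            ).images[0]
            image.save(f"cfg{cfg}_steps{steps}.png")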







Test 2 - Face and hand

Prompt: Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.



[Image grid: columns are 2 / 4 / 8 / 16 / 32 steps; rows compare CFG 0 and CFG 1 at 768px, then CFG 2, CFG 3, and CFG 4 at 512px.]






Test 3 - Legs

Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.



[Image grid: columns are 2 / 4 / 8 / 16 / 32 steps; rows compare CFG 0 and CFG 1 at 768px, CFG 0 and CFG 1 at 512px, and CFG 2 at 512px.]







System info




Model


Module | Class | Device | DType | Params | Modules (the module's Config, where present, follows its row)

vae | AutoencoderKL | cpu | torch.bfloat16 | 83,819,683 | 241
Config: FrozenDict({'in_channels': 3, 'out_channels': 3, 'down_block_types': ['DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D'], 'up_block_types': ['UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D'], 'block_out_channels': [128, 256, 512, 512], 'layers_per_block': 2, 'act_fn': 'silu', 'latent_channels': 16, 'norm_num_groups': 32, 'sample_size': 1024, 'scaling_factor': 1.5305, 'shift_factor': 0.0609, 'latents_mean': None, 'latents_std': None, 'force_upcast': True, 'use_quant_conv': False, 'use_post_quant_conv': False, 'mid_block_add_attention': True, '_class_name': 'AutoencoderKL', '_diffusers_version': '0.31.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--stabilityai--stable-diffusion-3.5-large/snapshots/ceddf0a7fdf2064ea28e2213e3b84e4afa170a0f/vae'})

text_encoder | CLIPTextModelWithProjection | cpu | torch.bfloat16 | 123,650,304 | 153
Config: CLIPTextConfig { "architectures": [ "CLIPTextModelWithProjection" ], "attention_dropout": 0.0, "bos_token_id": 0, "dropout": 0.0, "eos_token_id": 2, "hidden_act": "quick_gelu", "hidden_size": 768, "initializer_factor": 1.0, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 77, "model_type": "clip_text_model", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "projection_dim": 768, "torch_dtype": "bfloat16", "transformers_version": "4.53.2", "vocab_size": 49408 }

text_encoder_2 | CLIPTextModelWithProjection | cpu | torch.bfloat16 | 694,659,840 | 393
Config: CLIPTextConfig { "architectures": [ "CLIPTextModelWithProjection" ], "attention_dropout": 0.0, "bos_token_id": 0, "dropout": 0.0, "eos_token_id": 2, "hidden_act": "gelu", "hidden_size": 1280, "initializer_factor": 1.0, "initializer_range": 0.02, "intermediate_size": 5120, "layer_norm_eps": 1e-05, "max_position_embeddings": 77, "model_type": "clip_text_model", "num_attention_heads": 20, "num_hidden_layers": 32, "pad_token_id": 1, "projection_dim": 1280, "torch_dtype": "bfloat16", "transformers_version": "4.53.2", "vocab_size": 49408 }

text_encoder_3 | T5EncoderModel | cpu | torch.bfloat16 | 4,762,310,656 | 463
Config: T5Config { "architectures": [ "T5EncoderModel" ], "classifier_dropout": 0.0, "d_ff": 10240, "d_kv": 64, "d_model": 4096, "decoder_start_token_id": 0, "dense_act_fn": "gelu_new", "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "gated-gelu", "initializer_factor": 1.0, "is_encoder_decoder": true, "is_gated_act": true, "layer_norm_epsilon": 1e-06, "model_type": "t5", "num_decoder_layers": 24, "num_heads": 64, "num_layers": 24, "output_past": true, "pad_token_id": 0, "relative_attention_max_distance": 128, "relative_attention_num_buckets": 32, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.53.2", "use_cache": true, "vocab_size": 32128 }

tokenizer | CLIPTokenizer | None | None | 0 | 0
tokenizer_2 | CLIPTokenizer | None | None | 0 | 0
tokenizer_3 | T5TokenizerFast | None | None | 0 | 0

transformer | SD3Transformer2DModel | xpu:0 | torch.bfloat16 | 8,056,627,520 | 1,456
Config: FrozenDict({'sample_size': 128, 'patch_size': 2, 'in_channels': 16, 'num_layers': 38, 'attention_head_dim': 64, 'num_attention_heads': 38, 'joint_attention_dim': 4096, 'caption_projection_dim': 2432, 'pooled_projection_dim': 2048, 'out_channels': 16, 'pos_embed_max_size': 192, 'dual_attention_layers': (), 'qk_norm': 'rms_norm', '_use_default_values': ['dual_attention_layers'], '_class_name': 'SD3Transformer2DModel', '_diffusers_version': '0.31.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--stabilityai--stable-diffusion-3.5-large/snapshots/ceddf0a7fdf2064ea28e2213e3b84e4afa170a0f/transformer'})

scheduler | FlowMatchEulerDiscreteScheduler | None | None | 0 | 0
Config: FrozenDict({'num_train_timesteps': 1000, 'shift': 3.0, 'use_dynamic_shifting': False, 'base_shift': 0.5, 'max_shift': 1.15, 'base_image_seq_len': 256, 'max_image_seq_len': 4096, 'invert_sigmas': False, 'shift_terminal': None, 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False, 'time_shift_type': 'exponential', 'stochastic_sampling': False, '_use_default_values': ['max_image_seq_len', 'base_image_seq_len', 'max_shift', 'shift_terminal', 'use_dynamic_shifting', 'use_karras_sigmas', 'use_beta_sigmas', 'invert_sigmas', 'use_exponential_sigmas', 'stochastic_sampling', 'time_shift_type', 'base_shift'], '_class_name': 'FlowMatchEulerDiscreteScheduler', '_diffusers_version': '0.29.0.dev0'})

image_encoder | NoneType | None | None | 0 | 0
feature_extractor | NoneType | None | None | 0 | 0
_name_or_path | str | None | None | 0 | 0
_class_name | str | None | None | 0 | 0
_diffusers_version | str | None | None | 0 | 0
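
SD.Next presumably renders this table from the pipeline's components, and roughly the same breakdown can be pulled out of a loaded pipeline by hand. A minimal sketch, assuming a diffusers DiffusionPipeline object named pipe; describe_components is a hypothetical helper, not an SD.Next or diffusers API:

    import torch

    def describe_components(pipe):
        """Print one Module | Class | Device | DType | Params | Modules line per component."""
        for name, comp in pipe.components.items():
            cls = type(comp).__name__
            if isinstance(comp, torch.nn.Module):
                params = sum(p.numel() for p in comp.parameters())
                modules = sum(1 for _ in comp.modules())
                first = next(comp.parameters(), None)
                device = first.device if first is not None else None
                dtype = first.dtype if first is not None else None
            else:
                # tokenizers, schedulers, and None entries carry no tensors
                params = modules = 0
                device = dtype = None
            print(f"{name} | {cls} | {device} | {dtype} | {params} | {modules}")

Note that only the transformer sits on xpu:0 while the VAE and text encoders stay on cpu, which matches the low GPU memory readings (roughly 1-2 GB) in the test logs above; SD.Next appears to be offloading every module except the one actively computing.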

