https://huggingface.co/black-forest-labs/FLUX.1-Krea-dev
Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
Parameters: Steps: 32 | Size: 1024x1024 | Seed: 1620085323 | CFG scale: 6 | App: SD.Next | Version: 42706fb | Pipeline: FluxPipeline | Operations: txt2img | Model: FLUX.1-Krea-dev
Time: 27m 32.84s | total 1084.77 pipeline 773.78 preview 300.27 decode 6.11 prompt 2.02 te 1.99 gc 0.30 post 0.26 | GPU 34656 MB 27% | RAM 2.64 GB 2%
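For reference, the run above maps onto a plain diffusers call roughly as follows (a minimal sketch; SD.Next's preview and postprocessing stages are not included, and for FLUX-dev-family models `guidance_scale` feeds the distilled guidance embedding rather than classic CFG):

```python
import torch
from diffusers import FluxPipeline

# Load FLUX.1-Krea-dev in bfloat16; assumes the whole pipeline fits in VRAM (no offloading)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Krea-dev",
    torch_dtype=torch.bfloat16,
).to("cuda")  # the logs above show an Intel GPU; use "xpu" there instead

image = pipe(
    prompt="photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling",
    width=1024,
    height=1024,
    num_inference_steps=32,
    guidance_scale=6.0,
    generator=torch.Generator("cpu").manual_seed(1620085323),
).images[0]
image.save("bookshop-girl.png")
```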
| CFG 6, 32 steps | Seed 1620085323 | Seed 1931701040 | Seed 4075624134 | Seed 2736029172 |
|---|---|---|---|---|
| bookshop girl | (image) | (image) | (image) | (image) |
| hand and face | (image) | (image) | (image) | (image) |
| legs and shoes | (image) | (image) | (image) | (image) |
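The four columns are the same call repeated with only the `generator` seed changed, CFG and steps held at 6/32 (sketch, reusing `pipe` from above):

```python
seeds = [1620085323, 1931701040, 4075624134, 2736029172]
for seed in seeds:
    image = pipe(
        prompt="photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling",
        width=1024,
        height=1024,
        num_inference_steps=32,
        guidance_scale=6.0,
        generator=torch.Generator("cpu").manual_seed(seed),  # only the seed varies
    ).images[0]
    image.save(f"bookshop-Seed{seed}-CFG6-STEP32.png")  # filename pattern is illustrative
```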
Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
| CFG \ Steps | 4 | 8 | 16 | 20 | 32 |
|---|---|---|---|---|---|
| CFG 1 | (image) | (image) | (image) | (image) | (image) |
| CFG 2 | (image) | (image) | (image) | (image) | (image) |
| CFG 3 | (image) | (image) | (image) | (image) | (image) |
| CFG 4 | (image) | (image) | (image) | (image) | (image) |
| CFG 6 | (image) | (image) | (image) | (image) | (image) |
| CFG 8 | (image) | (image) | (image) | (image) | (image) |
Prompt: Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.
| CFG \ Steps | 8 | 16 | 20 | 32 |
|---|---|---|---|---|
| CFG 1 | (image) | (image) | (image) | (image) |
| CFG 2 | (image) | (image) | (image) | (image) |
| CFG 3 | (image) | (image) | (image) | (image) |
| CFG 4 | (image) | (image) | (image) | (image) |
| CFG 6 | (image) | (image) | (image) | (image) |
| CFG 8 | (image) | (image) | (image) | (image) |
Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.
| CFG \ Steps | 8 | 16 | 20 | 32 |
|---|---|---|---|---|
| CFG 1 | (image) | (image) | (image) | (image) |
| CFG 2 | (image) | (image) | (image) | (image) |
| CFG 3 | (image) | (image) | (image) | (image) |
| CFG 4 | (image) | (image) | (image) | (image) |
| CFG 6 | (image) | (image) | (image) | (image) |
| CFG 8 | (image) | (image) | (image) | (image) |
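All three grids above follow the same recipe: fix the prompt and seed, then sweep `guidance_scale` and `num_inference_steps`. A sketch of the sweep (labels and filenames are illustrative; reuse `pipe` from above):

```python
prompts = {
    "bookshop-girl": "photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling",
    "hand-and-face": "Create a close-up photograph of a woman's face and hand, ...",  # full prompt as above
    "legs-and-shoes": "Generate a photo of a woman's legs, with her feet crossed ...",  # full prompt as above
}
for label, prompt in prompts.items():
    for cfg in (1, 2, 3, 4, 6, 8):
        for steps in (4, 8, 16, 20, 32):
            image = pipe(
                prompt=prompt,
                num_inference_steps=steps,
                guidance_scale=float(cfg),
                generator=torch.Generator("cpu").manual_seed(1620085323),  # fixed seed across the grid
            ).images[0]
            image.save(f"{label}-CFG{cfg}-STEP{steps}.png")
```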
```json
{
  "samples_filename_pattern": "[seq]-[date]-[model_name]-[height]x[width]-Seed[seed]-CFG[cfg]-STEP[steps]",
  "diffusers_version": "c052791b5fe29ce8a308bf63dda97aa205b729be",
  "diffusers_offload_mode": "none",
  "diffusers_to_gpu": true,
  "device_map": "gpu",
  "ui_request_timeout": 120000,
  "diffusers_vae_tile_size": 512,
  "sd_model_checkpoint": "Diffusers/black-forest-labs/FLUX.1-Krea-dev [8162a9c7b0]"
}
```
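In plain diffusers terms, `diffusers_offload_mode: none` together with `diffusers_to_gpu: true` corresponds roughly to keeping the whole pipeline resident on the accelerator, and the VAE tile size to tiled decoding (a sketch of the equivalents, not SD.Next's actual internals):

```python
pipe.to("xpu")            # everything on the GPU, no sequential/model offloading
pipe.vae.enable_tiling()  # tiled VAE decode; diffusers picks its own tile size,
                          # the 512 above is an SD.Next setting
```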
Model: Diffusers/black-forest-labs/FLUX.1-dev | Type: f1 | Class: FluxPipeline | Size: 0 bytes | Modified: 2025-07-15 12:03:09
| Module | Class | Device | DType | Params | Modules | Config |
|---|---|---|---|---|---|---|
vae | AutoencoderKL | xpu:0 | torch.bfloat16 | 83819683 | 241 | FrozenDict({'in_channels': 3, 'out_channels': 3, 'down_block_types': ['DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D'], 'up_block_types': ['UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D'], 'block_out_channels': [128, 256, 512, 512], 'layers_per_block': 2, 'act_fn': 'silu', 'latent_channels': 16, 'norm_num_groups': 32, 'sample_size': 1024, 'scaling_factor': 0.3611, 'shift_factor': 0.1159, 'latents_mean': None, 'latents_std': None, 'force_upcast': True, 'use_quant_conv': False, 'use_post_quant_conv': False, 'mid_block_add_attention': True, '_class_name': 'AutoencoderKL', '_diffusers_version': '0.30.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--black-forest-labs--FLUX.1-dev/snapshots/3de623fc3c33e44ffbe2bad470d0f45bccf2eb21/vae'}) |
text_encoder | CLIPTextModel | xpu:0 | torch.bfloat16 | 123060480 | 152 | CLIPTextConfig { "architectures": [ "CLIPTextModel" ], "attention_dropout": 0.0, "bos_token_id": 0, "dropout": 0.0, "eos_token_id": 2, "hidden_act": "quick_gelu", "hidden_size": 768, "initializer_factor": 1.0, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 77, "model_type": "clip_text_model", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "projection_dim": 768, "torch_dtype": "bfloat16", "transformers_version": "4.53.2", "vocab_size": 49408 } |
text_encoder_2 | T5EncoderModel | xpu:0 | torch.bfloat16 | 4762310656 | 463 | T5Config { "architectures": [ "T5EncoderModel" ], "classifier_dropout": 0.0, "d_ff": 10240, "d_kv": 64, "d_model": 4096, "decoder_start_token_id": 0, "dense_act_fn": "gelu_new", "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "gated-gelu", "initializer_factor": 1.0, "is_encoder_decoder": true, "is_gated_act": true, "layer_norm_epsilon": 1e-06, "model_type": "t5", "num_decoder_layers": 24, "num_heads": 64, "num_layers": 24, "output_past": true, "pad_token_id": 0, "relative_attention_max_distance": 128, "relative_attention_num_buckets": 32, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.53.2", "use_cache": true, "vocab_size": 32128 } |
tokenizer | CLIPTokenizer | None | None | 0 | 0 | None |
tokenizer_2 | T5TokenizerFast | None | None | 0 | 0 | None |
transformer | FluxTransformer2DModel | xpu:0 | torch.bfloat16 | 11901408320 | 1279 | FrozenDict({'patch_size': 1, 'in_channels': 64, 'out_channels': None, 'num_layers': 19, …}) |
scheduler | FlowMatchEulerDiscreteScheduler | None | None | 0 | 0 | FrozenDict({'num_train_timesteps': 1000, 'shift': 3.0, 'use_dynamic_shifting': True, 'base_shift': 0.5, 'max_shift': 1.15, 'base_image_seq_len': 256, 'max_image_seq_len': 4096, 'invert_sigmas': False, 'shift_terminal': None, 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False, 'time_shift_type': 'exponential', 'stochastic_sampling': False, '_use_default_values': ['time_shift_type', 'use_exponential_sigmas', 'invert_sigmas', 'use_karras_sigmas', 'stochastic_sampling', 'shift_terminal', 'use_beta_sigmas'], '_class_name': 'FlowMatchEulerDiscreteScheduler', '_diffusers_version': '0.30.0.dev0'}) |
image_encoder | NoneType | None | None | 0 | 0 | None |
feature_extractor | NoneType | None | None | 0 | 0 | None |
_name_or_path | str | None | None | 0 | 0 | None |
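A component table like the one above can be reproduced by walking `pipe.components` (minimal sketch):

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev", torch_dtype=torch.bfloat16
)

# pipe.components maps component names to the loaded modules (or None)
for name, component in pipe.components.items():
    if isinstance(component, torch.nn.Module):
        n_params = sum(p.numel() for p in component.parameters())
        first = next(component.parameters(), None)
        device = first.device if first is not None else "-"
        dtype = first.dtype if first is not None else "-"
        print(f"{name} | {type(component).__name__} | {device} | {dtype} | {n_params:,}")
    else:
        print(f"{name} | {type(component).__name__}")
```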