...

Test 0 - Seed variations


CFG6, STEP 32 | Seed: 1620085323 | Seed: 1931701040 | Seed: 4075624134 | Seed: 2736029172
bookshop girl | [image] | [image] | [image] | [image]
hand and face | [image] | [image] | [image] | [image]
legs and shoes | [image] | [image] | [image] | [image]
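
Test 0 varies only the seed at fixed CFG 6 and 32 steps, reusing the prompts from Tests 1-3 below. A minimal diffusers sketch of such a seed sweep, assuming SD.Next's FluxKontextPipeline maps to the diffusers class of the same name and taking the bookshop prompt from Test 1 as the example (output filenames are illustrative):

Code Block
import torch
from diffusers import FluxKontextPipeline

# Load the pipeline class reported in the run metadata, in bfloat16 on the Intel XPU.
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("xpu")

prompt = "photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling"
seeds = [1620085323, 1931701040, 4075624134, 2736029172]

for seed in seeds:
    # A CPU generator seeded per image keeps each variation reproducible.
    generator = torch.Generator().manual_seed(seed)
    image = pipe(
        prompt=prompt,
        guidance_scale=6.0,
        num_inference_steps=32,
        width=1024,
        height=1024,
        generator=generator,
    ).images[0]
    image.save(f"bookshop_seed_{seed}.png")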

Test 1 - Bookshop

Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling

Parameters: Steps: 32 | Size: 1024x1024 | Seed: 1620085323 | CFG scale: 3 | App: SD.Next | Version: e90ac68 | Pipeline: FluxKontextPipeline | Operations: txt2img | Model: FLUX.1-Kontext-dev

Execution: Time: 12m 35.94s | total 1487.74 pipeline 749.29 preview 727.88 decode 6.61 prompt 3.64 gc 0.26 | GPU 35280 MB 27% | RAM 3.02 GB 2%


Steps | 4 | 8 | 16 | 20 | 32
CFG1 | [image] | [image] | [image] | [image] | [image]
CFG2 | [image] | [image] | [image] | [image] | [image]
CFG3 | [image] | [image] | [image] | [image] | [image]
CFG4 | [image] | [image] | [image] | [image] | [image]
CFG6 | [image] | [image] | [image] | [image] | [image]
CFG8 | [image] | [image] | [image] | [image] | [image]
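
Tests 1-3 repeat one prompt across a grid of CFG scales and step counts. A sketch of that sweep, reusing `pipe` and `prompt` from the seed-variation example above and assuming the Test 1 seed stays fixed across the whole grid:

Code Block
# Assumes `pipe` and `prompt` from the seed-variation sketch above are still loaded.
cfg_scales = [1, 2, 3, 4, 6, 8]
step_counts = [4, 8, 16, 20, 32]
seed = 1620085323  # seed from the Test 1 parameters; keeping it fixed isolates CFG/steps

for cfg in cfg_scales:
    for steps in step_counts:
        generator = torch.Generator().manual_seed(seed)
        image = pipe(
            prompt=prompt,
            guidance_scale=float(cfg),
            num_inference_steps=steps,
            width=1024,
            height=1024,
            generator=generator,
        ).images[0]
        image.save(f"bookshop_cfg{cfg}_steps{steps}.png")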


Test 2 - Face and hand

Prompt: Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.



Steps | 8 | 16 | 20 | 32
CFG1 | [image] | [image] | [image] | [image]
CFG2 | [image] | [image] | [image] | [image]
CFG3 | [image] | [image] | [image] | [image]
CFG4 | [image] | [image] | [image] | [image]
CFG6 | [image] | [image] | [image] | [image]
CFG8 | [image] | [image] | [image] | [image]

Test 3 - Legs

Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.

...

Execution: Time: 12m 36.06s | total 1404.96 pipeline 749.36 preview 644.64 decode 6.67 prompt 3.99 gc 0.25 | GPU 35284 MB 27% | RAM 2.98 GB 2%



Steps | 8 | 16 | 20 | 32
CFG1 | [image] | [image] | [image] | [image]
CFG2 | [image] | [image] | [image] | [image]
CFG3 | [image] | [image] | [image] | [image]
CFG4 | [image] | [image] | [image] | [image]
CFG6 | [image] | [image] | [image] | [image]
CFG8 | [image] | [image] | [image] | [image]

Test 4 - CivitAi profile cover generation

...

Prompt: image with 5 canvases and one android robot painting on the 3rd canvas. first two is done with some futuristic images, last two is blank. robot is surprised look back at the camera. paint brush is in the robot hand. Left bottom corner text "Flux.1 Krea"

Generation failed with the error: shape '[1, 25, 100, 16, 2, 2]' is invalid for input of size 262144

Time: 12m 36.65s | pipeline 756.49 prompt 1.92 | GPU 34682 MB 27% | RAM 2.99 GB 2%
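
A hedged reading of this failure: 262144 is exactly the element count of a 16-channel 128x128 latent, i.e. a 1024x1024 image after the 8x VAE downscale, while the target shape [1, 25, 100, 16, 2, 2] holds only 160,000 elements and, assuming the usual FLUX 2x2 latent packing, corresponds to a 400x1600 canvas. The two sides of the reshape therefore describe different resolutions, which points at a width/height mismatch for the wide cover format rather than at the prompt itself. The arithmetic:

Code Block
# Elements in the latent actually produced: 16 channels, 1024/8 = 128 per side.
produced = 16 * (1024 // 8) * (1024 // 8)     # 262144

# Elements the reshape expected: 1 x 25 x 100 x 16 x 2 x 2.
expected = 1 * 25 * 100 * 16 * 2 * 2          # 160000
print(produced, expected)                     # 262144 160000 -> mismatch

# Working backwards from the packed shape (assuming 2x2 latent patches and the
# 8x VAE downscale): height = 25 * 2 * 8 = 400, width = 100 * 2 * 8 = 1600.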


System info


Code Block
Sat Aug  2 14:08:11 2025
app: sdnext.git updated: 2025-07-31 hash: 42706fb9 url: https://github.com/vladmandic/sdnext.git/tree/dev
arch: x86_64 cpu: x86_64 system: Linux release: 6.14.0-27-generic 
python: 3.12.3 Torch 2.7.1+xpu
device: Intel(R) Arc(TM) Graphics (1) ipex: 
ram: free:122.22 used:3.11 total:125.33
xformers:  diffusers: 0.35.0.dev0 transformers: 4.53.2
active: xpu dtype: torch.bfloat16 vae: torch.bfloat16 unet: torch.bfloat16
base: Diffusers/black-forest-labs/FLUX.1-Kontext-dev [af58063aa4] refiner: none vae: none te: none unet: none
Backend: ipex Pipeline: native Memory optimization: none Cross-attention: Scaled-Dot-Product
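
The run uses PyTorch 2.7.1+xpu with bfloat16 throughout on an Intel Arc GPU. A quick environment sanity check, assuming the native torch.xpu backend (rather than a separate IPEX import) is what SD.Next is using here:

Code Block
import torch

print(torch.__version__)              # expected 2.7.1+xpu
print(torch.xpu.is_available())       # True when the Arc GPU is visible
print(torch.xpu.get_device_name(0))   # e.g. "Intel(R) Arc(TM) Graphics"

# bfloat16 is the dtype reported for the VAE, both text encoders and the transformer.
x = torch.randn(2, 2, dtype=torch.bfloat16, device="xpu")
print(x.dtype, x.device)              # torch.bfloat16 xpu:0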

...

Code Block
Model: Diffusers/black-forest-labs/FLUX.1-Kontext-dev
Type: f1
Class: FluxKontextPipeline
Size: 0 bytes
Modified: 2025-07-15 10:11:42


Module | Class | Device | DType | Params | Modules | Config
vae | AutoencoderKL | xpu:0 | torch.bfloat16 | 83819683 | 241 | FrozenDict({'in_channels': 3, 'out_channels': 3, 'down_block_types': ['DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D'], 'up_block_types': ['UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D'], 'block_out_channels': [128, 256, 512, 512], 'layers_per_block': 2, 'act_fn': 'silu', 'latent_channels': 16, 'norm_num_groups': 32, 'sample_size': 1024, 'scaling_factor': 0.3611, 'shift_factor': 0.1159, 'latents_mean': None, 'latents_std': None, 'force_upcast': True, 'use_quant_conv': False, 'use_post_quant_conv': False, 'mid_block_add_attention': True, '_class_name': 'AutoencoderKL', '_diffusers_version': '0.34.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--black-forest-labs--FLUX.1-Kontext-dev/snapshots/af58063aa431f4d2bbc11ae46f57451d4416a170/vae'})
text_encoder | CLIPTextModel | xpu:0 | torch.bfloat16 | 123060480 | 152 | CLIPTextConfig { "architectures": [ "CLIPTextModel" ], "attention_dropout": 0.0, "bos_token_id": 0, "dropout": 0.0, "eos_token_id": 2, "hidden_act": "quick_gelu", "hidden_size": 768, "initializer_factor": 1.0, "initializer_range": 0.02, "intermediate_size": 3072, "layer_norm_eps": 1e-05, "max_position_embeddings": 77, "model_type": "clip_text_model", "num_attention_heads": 12, "num_hidden_layers": 12, "pad_token_id": 1, "projection_dim": 768, "torch_dtype": "bfloat16", "transformers_version": "4.53.2", "vocab_size": 49408 }
text_encoder_2 | T5EncoderModel | xpu:0 | torch.bfloat16 | 4762310656 | 463 | T5Config { "architectures": [ "T5EncoderModel" ], "classifier_dropout": 0.0, "d_ff": 10240, "d_kv": 64, "d_model": 4096, "decoder_start_token_id": 0, "dense_act_fn": "gelu_new", "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "gated-gelu", "initializer_factor": 1.0, "is_encoder_decoder": true, "is_gated_act": true, "layer_norm_epsilon": 1e-06, "model_type": "t5", "num_decoder_layers": 24, "num_heads": 64, "num_layers": 24, "output_past": true, "pad_token_id": 0, "relative_attention_max_distance": 128, "relative_attention_num_buckets": 32, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.53.2", "use_cache": true, "vocab_size": 32128 }
tokenizer | CLIPTokenizer | None | None | 0 | 0 | None
tokenizer_2 | T5TokenizerFast | None | None | 0 | 0 | None
transformer | FluxTransformer2DModel | xpu:0 | torch.bfloat16 | 11901408320 | 1279 | FrozenDict({'patch_size': 1, 'in_channels': 64, 'out_channels': None, 'num_layers': 19, 'num_single_layers': 38, 'attention_head_dim': 128, 'num_attention_heads': 24, 'joint_attention_dim': 4096, 'pooled_projection_dim': 768, 'guidance_embeds': True, 'axes_dims_rope': [16, 56, 56], '_class_name': 'FluxTransformer2DModel', '_diffusers_version': '0.34.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--black-forest-labs--FLUX.1-Kontext-dev/snapshots/af58063aa431f4d2bbc11ae46f57451d4416a170/transformer'})
scheduler | FlowMatchEulerDiscreteScheduler | None | None | 0 | 0 | FrozenDict({'num_train_timesteps': 1000, 'shift': 3.0, 'use_dynamic_shifting': True, 'base_shift': 0.5, 'max_shift': 1.15, 'base_image_seq_len': 256, 'max_image_seq_len': 4096, 'invert_sigmas': False, 'shift_terminal': None, 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False, 'time_shift_type': 'exponential', 'stochastic_sampling': False, '_class_name': 'FlowMatchEulerDiscreteScheduler', '_diffusers_version': '0.34.0.dev0'})
image_encoder | NoneType | None | None | 0 | 0 | None
feature_extractor | NoneType | None | None | 0 | 0 | None
_name_or_path | str | None | None | 0 | 0 | None
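
The component table above (class, device, dtype, parameter count per module) can be regenerated from a loaded pipeline; a rough sketch, assuming the diffusers FluxKontextPipeline named in the model info:

Code Block
import torch
from diffusers import FluxKontextPipeline

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)

# Walk the registered components and report the same columns as the table above.
for name, component in pipe.components.items():
    cls = type(component).__name__
    if isinstance(component, torch.nn.Module):
        params = sum(p.numel() for p in component.parameters())
        p = next(component.parameters())
        print(f"{name} | {cls} | {p.device} | {p.dtype} | {params}")
    else:
        # Tokenizers, the scheduler and empty slots carry no tensors.
        print(f"{name} | {cls}")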