Prompt: Cowboy holding a pole of road sign with text "Flux.2 Dev". text "32B" is handwritten on a corner of the sign. Golden Gate bridge on the background.
Parameters: Steps: 20 | Size: 1024x1024 | Seed: 2926064816 | CFG scale: 6 | App: SD.Next | Version: 38e4541 | Pipeline: Flux2Pipeline | Operations: txt2img | Model: FLUX.2-dev-SDNQ-uint4-svd-r32
Time: 30m 12.49s | total 1838.65 pipeline 1812.42 callback 7.92 onload 7.76 vae 6.12 offload 4.37 | GPU 36128 MB 28% | RAM 48.28 GB 39%
Prompt: Cowboy holding a pole of road sign "Flux.2 Dev". 32B is handwritten on a corner of the sign. Golden Gate bridge on the background.
Parameters: Steps: 50 | Size: 1024x1024 | Seed: 2934815806 | CFG scale: 4 | App: SD.Next | Version: 38e4541 | Pipeline: Flux2Pipeline | Operations: txt2img | Model: FLUX.2-dev-SDNQ-uint4-svd-r32
https://huggingface.co/black-forest-labs/FLUX.2-dev
https://docs.bfl.ai/guides/prompting_guide_flux2
prompt = "Realistic macro photograph of a hermit crab using a soda can as its shell, partially emerging from the can, captured with sharp detail and natural colors, on a sunlit beach with soft shadows and a shallow depth of field, with blurred ocean waves in the background. The can has the text `BFL Diffusers` on it and it has a color gradient that start with #FF5733 at the top and transitions to #33FF57 at the bottom."
image = pipe(
prompt_embeds=remote_text_encoder(prompt),
#image=load_image("https://huggingface.co/spaces/zerogpu-aoti/FLUX.1-Kontext-Dev-fp8-dynamic/resolve/main/cat.png") #optional image input
generator=torch.Generator(device=device).manual_seed(42),
num_inference_steps=50, #28 steps can be a good trade-off
guidance_scale=4,
).images[0] |
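For a fully local run without the remote text encoder, a minimal sketch is below. It assumes Flux2Pipeline accepts a plain `prompt` argument like other diffusers pipelines and that the large Mistral text encoder fits in memory; `enable_model_cpu_offload()` is used to keep VRAM usage manageable.

```python
import torch
from diffusers import Flux2Pipeline

# Sketch of a fully local run; adjust device/dtype for your hardware.
pipe = Flux2Pipeline.from_pretrained(
    "black-forest-labs/FLUX.2-dev",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()  # keep only the active submodule on the GPU

image = pipe(
    prompt="Realistic macro photograph of a hermit crab using a soda can as its shell",
    num_inference_steps=28,  # the trade-off mentioned in the comment above
    guidance_scale=4,
    generator=torch.Generator().manual_seed(42),
).images[0]
image.save("flux2_local.png")
```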
Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
Prompt: Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.
Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.
| CFG 4, 50 steps | Seed: 1620085323 | Seed: 1931701040 | Seed: 4075624134 | Seed: 2736029172 |
|---|---|---|---|---|
| bookshop girl | | | | |
| hand and face | | | | |
| legs and shoes | | | | |
Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
| CFG / Steps | 8 | 16 | 20 | 32 | 50 |
|---|---|---|---|---|---|
| CFG1 | | | | | |
| CFG2 | | | | | |
| CFG3 | | | | | |
| CFG4 | | | | | |
| CFG5 | | | | | |
| CFG6 | | | | | |
| CFG7 | | | | | |
Prompt: Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.
| CFG / Steps | 8 | 16 | 20 | 32 | 50 |
|---|---|---|---|---|---|
| CFG1 | | | | | |
| CFG2 | | | | | |
| CFG3 | | | | | |
| CFG4 | | | | | |
| CFG5 | | | | | |
| CFG6 | | | | | |
| CFG7 | | | | | |
Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.
| CFG / Steps | 8 | 16 | 20 | 32 | 50 |
|---|---|---|---|---|---|
| CFG1 | | | | | |
| CFG2 | | | | | |
| CFG3 | | | | | |
| CFG4 | | | | | |
| CFG5 | | | | | |
| CFG6 | | | | | |
| CFG7 | | | | | |
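The CFG-by-steps grids above were generated through the SD.Next UI, but the same sweep is easy to script. A minimal sketch with diffusers, assuming the `pipe`/`device`/`remote_text_encoder` setup from the snippet earlier; the fixed seed here is illustrative, not necessarily the one used for the grids:

```python
import itertools
import torch

cfg_values = [1, 2, 3, 4, 5, 6, 7]
step_values = [8, 16, 20, 32, 50]
seed = 1620085323  # illustrative fixed seed so only CFG/steps change between cells

prompt = "photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling"
embeds = remote_text_encoder(prompt)  # encode once, reuse for every cell

for cfg, steps in itertools.product(cfg_values, step_values):
    image = pipe(
        prompt_embeds=embeds,
        guidance_scale=cfg,
        num_inference_steps=steps,
        generator=torch.Generator(device=device).manual_seed(seed),
    ).images[0]
    image.save(f"bookshop_cfg{cfg}_steps{steps}.png")
```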

Prompt: image with 5 canvases and one android robot painting on the 3rd canvas. first two is done with some futuristic images, last two is blank. robot is surprised look back at the camera. paint brush is in the robot hand. Left bottom corner text "Flux.2 dev"
Parameters: Steps: 32 | Size: 1600x400 | Seed: 4014230639 | CFG scale: 8

Prompt: with analog noise and glitches, CivitAi cover image with 5 canvases and one android robot painting on the 3rd canvas. first two is done with some futuristic images, last two is blank. robot is surprised look back at the camera. paint brush is in the robot hand.
Parameters: Steps: 32 | Size: 1600x400 | Seed: 2404111820 | CFG scale: 3.5

Prompt: CivitAi cover image with 5 canvases and one android robot painting on the 3rd canvas. first two is done with some futuristic images, last two is blank. robot is surprised look back at the camera. paint brush is in the robot hand. Left bottom corner text "SD.Next + Flux.2 Dev", upper right corner text "Stable Diffusion at home"
Parameters: Steps: 50 | Size: 1600x400 | Seed: 978455805 | CFG scale: 4

Prompt: LEGO style, CivitAi cover image with 5 canvases and one android robot painting on the 3rd canvas. first two is done with some futuristic images, last two is blank. robot is surprised look back at the camera. paint brush is in the robot hand. Left bottom corner text "SD.Next + Flux.2 Dev", upper right corner text "Stable Diffusion at home"
Parameters: Steps: 50 | Size: 1600x400 | Seed: 1179249187 | CFG scale: 4
| CFG4 | | | |
|---|---|---|---|
| CFG6 | | | |
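The 1600x400 covers above use a wide banner resolution; in diffusers this corresponds to explicit `width`/`height` arguments (assuming Flux2Pipeline accepts them like other diffusers pipelines). A sketch reproducing the glitch-cover settings, under the same setup as the earlier snippets:

```python
import torch

cover_prompt = (
    "with analog noise and glitches, CivitAi cover image with 5 canvases and one android robot "
    "painting on the 3rd canvas. first two is done with some futuristic images, last two is blank. "
    "robot is surprised look back at the camera. paint brush is in the robot hand."
)

image = pipe(
    prompt_embeds=remote_text_encoder(cover_prompt),
    width=1600, height=400,  # banner aspect used for the cover images above
    num_inference_steps=32,
    guidance_scale=3.5,
    generator=torch.Generator(device=device).manual_seed(2404111820),
).images[0]
image.save("flux2_cover_glitch.png")
```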
1024x1024, Steps 20
| seed 1 | seed 2 | seed 3 | seed 4 | seed 5 |
|---|---|---|---|---|
| seed 6 | seed 7 | seed 8 | seed 9 | |
| seed 42 | seed 324 | seed 777 | | |
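For the seed comparison at 1024x1024 with 20 steps, only the seed changes between runs. A sketch under the same assumptions as above (`prompt` is whichever prompt is being compared, and the CFG value is assumed since the grid does not record it):

```python
import torch

seeds = [1, 2, 3, 4, 5, 6, 7, 8, 9, 42, 324, 777]  # seeds listed in the grid above

for seed in seeds:
    image = pipe(
        prompt_embeds=remote_text_encoder(prompt),
        width=1024, height=1024,
        num_inference_steps=20,
        guidance_scale=4,  # assumed; the grid above does not state the CFG used
        generator=torch.Generator(device=device).manual_seed(seed),
    ).images[0]
    image.save(f"flux2_seed_{seed}.png")
```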
Wed Nov 26 20:17:09 2025 app: sdnext.git updated: 2025-11-26 hash: 38e4541ea url: https://github.com/liutyi/sdnext/tree/pytorch arch: x86_64 cpu: x86_64 system: Linux release: 6.14.0-36-generic python: 3.12.3 Torch: 2.9.1+xpu device: Intel(R) Arc(TM) Graphics (1) ram: free:105.18 used:20.15 total:125.33 xformers: diffusers: 0.36.0.dev0 transformers: 4.57.1 active: xpu dtype: torch.bfloat16 vae: torch.bfloat16 unet: torch.bfloat16 base: Diffusers/Disty0/FLUX.2-dev-SDNQ-uint4-svd-r32 [497b2d3cc1] refiner: none vae: none te: none unet: none Backend: ipex Pipeline: native Cross-attention: Scaled-Dot-Product
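The run above uses the Intel XPU backend (torch 2.9.1+xpu on an Arc GPU). When scripting with diffusers directly, device selection can prefer XPU when available; a minimal sketch:

```python
import torch

# Prefer Intel XPU (as in the system info above), then CUDA, then CPU.
if hasattr(torch, "xpu") and torch.xpu.is_available():
    device = "xpu"
elif torch.cuda.is_available():
    device = "cuda"
else:
    device = "cpu"

dtype = torch.bfloat16  # matches the dtype reported above
print(f"Using device={device}, dtype={dtype}")
```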
```json
{
  "sd_model_checkpoint": "Diffusers/Disty0/FLUX.2-dev-SDNQ-uint4-svd-r32 [497b2d3cc1]",
  "diffusers_version": "c8656ed73c638e51fc2e777a5fd355d69fa5220f",
  "sd_checkpoint_hash": null,
  "diffusers_to_gpu": true,
  "device_map": "gpu",
  "diffusers_offload_mode": "none",
  "ui_request_timeout": 300000,
  "huggingface_token": "hf_...FraU",
  "extra_network_reference_values": true,
  "queue_paused": true
}
```
Diffusers/Disty0/FLUX.2-dev-SDNQ-uint4-svd-r32 [497b2d3cc1]
Note: the parameter count is reported incorrectly for quantized models (see the note after the table).
| Module | Class | Device | Dtype | Quant | Params | Modules | Config |
|---|---|---|---|---|---|---|---|
| vae | AutoencoderKLFlux2 | xpu:0 | torch.bfloat16 | None | 84046115 | 244 | FrozenDict({'in_channels': 3, 'out_channels': 3, 'down_block_types': ['DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D'], 'up_block_types': ['UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D'], 'block_out_channels': [128, 256, 512, 512], 'layers_per_block': 2, 'act_fn': 'silu', 'latent_channels': 32, 'norm_num_groups': 32, 'sample_size': 1024, 'force_upcast': True, 'use_quant_conv': True, 'use_post_quant_conv': True, 'mid_block_add_attention': True, 'batch_norm_eps': 0.0001, 'batch_norm_momentum': 0.1, 'patch_size': [2, 2], '_class_name': 'AutoencoderKLFlux2', '_diffusers_version': '0.36.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--Disty0--FLUX.2-dev-SDNQ-uint4-svd-r32/snapshots/497b2d3cc195d985f98f713a38c16e165a808c8b/vae'}) |
| text_encoder | Mistral3ForConditionalGeneration | cpu | torch.bfloat16 | QuantizationMethod.SDNQ | 13251660800 | 853 | Mistral3Config { "architectures": [ "Mistral3ForConditionalGeneration" ], "dtype": "bfloat16", "image_token_index": 10, "model_type": "mistral3", "multimodal_projector_bias": false, "projector_hidden_act": "gelu", "quantization_config": { "add_skip_keys": false, "dequantize_fp32": false, "group_size": 0, "is_integer": true, "is_training": false, "modules_dtype_dict": {}, "modules_to_not_convert": [ ".img_out", "wte", "patch_emb", ".final_layer", "multi_modal_projector", "time_text_embed", ".emb_in", ".img_in", ".condition_embedder", "lm_head.weight", "lm_head", ".emb_out", "patch_embed", ".txt_in", ".time_embed", ".context_embedder", ".x_embedder", ".vid_in", ".txt_out", ".norm_out", ".vid_out", ".proj_out", "patch_embedding" ], "non_blocking": false, "quant_conv": false, "quant_method": "sdnq", "quantization_device": null, "quantized_matmul_dtype": null, "return_device": null, "svd_rank": 32, "svd_steps": 8, "use_grad_ckpt": true, "use_quantized_matmul": false, "use_quantized_matmul_conv": false, "use_static_quantization": true, "use_stochastic_rounding": false, "use_svd": true, "weights_dtype": "uint4" }, "spatial_merge_size": 2, "text_config": { "attention_dropout": 0.0, "dtype": "bfloat16", "head_dim": 128, "hidden_act": "silu", "hidden_size": 5120, "initializer_range": 0.02, "intermediate_size": 32768, "max_position_embeddings": 131072, "model_type": "mistral", "num_attention_heads": 32, "num_hidden_layers": 40, "num_key_value_heads": 8, "rms_norm_eps": 1e-05, "rope_theta": 1000000000.0, "sliding_window": null, "use_cache": true, "vocab_size": 131072 }, "transformers_version": "4.57.1", "vision_config": { "attention_dropout": 0.0, "dtype": "bfloat16", "head_dim": 64, "hidden_act": "silu", "hidden_size": 1024, "image_size": 1540, "initializer_range": 0.02, "intermediate_size": 4096, "model_type": "pixtral", "num_attention_heads": 16, "num_channels": 3, "num_hidden_layers": 24, "patch_size": 14, "rope_theta": 10000.0 }, "vision_feature_layer": -1 } |
| tokenizer | PixtralProcessor | None | None | None | 0 | 0 | None |
| scheduler | FlowMatchEulerDiscreteScheduler | None | None | None | 0 | 0 | FrozenDict({'num_train_timesteps': 1000, 'shift': 3.0, 'use_dynamic_shifting': True, 'base_shift': 0.5, 'max_shift': 1.15, 'base_image_seq_len': 256, 'max_image_seq_len': 4096, 'invert_sigmas': False, 'shift_terminal': None, 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False, 'time_shift_type': 'exponential', 'stochastic_sampling': False, '_class_name': 'FlowMatchEulerDiscreteScheduler', '_diffusers_version': '0.36.0.dev0'}) |
| transformer | Flux2Transformer2DModel | xpu:0 | torch.bfloat16 | QuantizationMethod.SDNQ | 17211867136 | 702 | FrozenDict({'patch_size': 1, 'in_channels': 128, 'out_channels': None, 'num_layers': 8, 'num_single_layers': 48, 'attention_head_dim': 128, 'num_attention_heads': 48, 'joint_attention_dim': 15360, 'timestep_guidance_channels': 256, 'mlp_ratio': 3.0, 'axes_dims_rope': [32, 32, 32, 32], 'rope_theta': 2000, 'eps': 1e-06, '_class_name': 'Flux2Transformer2DModel', '_diffusers_version': '0.36.0.dev0', '_name_or_path': 'Disty0/FLUX.2-dev-SDNQ-uint4-svd-r32', 'quantization_config': {'weights_dtype': 'uint4', 'quantized_matmul_dtype': None, 'is_training': False, 'quant_method': 'sdnq', 'group_size': 0, 'svd_rank': 32, 'svd_steps': 8, 'use_svd': True, 'use_grad_ckpt': True, 'quant_conv': False, 'use_quantized_matmul': False, 'use_quantized_matmul_conv': False, 'use_static_quantization': True, 'use_stochastic_rounding': False, 'dequantize_fp32': False, 'non_blocking': False, 'add_skip_keys': False, 'quantization_device': None, 'return_device': None, 'modules_to_not_convert': ['x_embedder', 'context_embedder', 'double_stream_modulation_img', 'single_stream_modulation', 'double_stream_modulation_txt', 'norm_out', 'time_guidance_embed', '.proj_out'], 'modules_dtype_dict': {}, 'is_integer': True}, '_pre_quantization_dtype': torch.bfloat16}) |
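A plausible reason for the mismatch noted above: parameter counts are usually taken as the number of stored tensor elements, and SDNQ uint4 weights are packed (plus low-rank SVD correction tensors), so the reported totals come out well below the nominal 32B of FLUX.2-dev. A hedged sketch of how such a count is typically computed, assuming `pipe` is the loaded pipeline:

```python
def count_params(module) -> int:
    """Counts stored tensor elements, not logical weights.
    With packed 4-bit weights this undercounts the original parameter count."""
    return sum(p.numel() for p in module.parameters())

print(f"transformer:  {count_params(pipe.transformer):,}")   # ~17.2B reported vs ~32B nominal
print(f"text_encoder: {count_params(pipe.text_encoder):,}")  # ~13.3B reported
```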