Info
https://huggingface.co/briaai/BRIA-3.2
- Using a negative prompt is recommended.
- For fine-tuning, use zeros instead of the null text embedding.
- Multiple aspect ratios are supported, but the resolution should total approximately 1024*1024 = 1 Mpixel, for example: (1024, 1024), (1280, 768), (1344, 768), (832, 1216), (1152, 832), (1216, 832), (960, 1088)
- Use 30-50 steps (higher is better)
- Use a `guidance_scale` of 5.0
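The recommended resolutions can be sanity-checked quickly: each pair should multiply out to roughly the one-megapixel budget of the default 1024x1024. A minimal sketch (plain Python, values copied from the list above):

```python
# Check that each recommended resolution lands near the ~1 Mpixel
# (1024*1024 = 1_048_576 px) budget that BRIA-3.2 expects.
resolutions = [
    (1024, 1024), (1280, 768), (1344, 768), (832, 1216),
    (1152, 832), (1216, 832), (960, 1088),
]
target = 1024 * 1024
for w, h in resolutions:
    px = w * h
    print(f"{w}x{h}: {px / 1e6:.2f} MP ({px / target:.0%} of 1024x1024)")
```

All of the listed pairs come out within about 10% of the target, which is why they can be swapped freely without retraining artifacts.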
Test 1 - Bookshop
Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
Parameters: Steps: 64 | Size: 768x768 | Seed: 2464773602 | App: SD.Next | Version: 30f3648 | Pipeline: BriaPipeline | Operations: txt2img | Model: BRIA-3.2
Time: 6m 52.58s | total 317.63 pipeline 312.94 decode 3.19 vae 0.76 gc 0.51 te 0.43 post 0.25 | GPU 22754 MB 18% | RAM 2.81 GB 2%
| CFG / Steps | 8 | 16 | 20 | 32 | 64 |
|---|---|---|---|---|---|
| CFG0 | | | | | |
| CFG1 | | | | | |
| CFG2 | | | | | |
| CFG3 | | | | | |
| CFG4 | | | | | |
| CFG5 | | | | | |
| CFG6 | | | | | |
| CFG8 | | | | | |
| CFG12 | | | | | |
Test 2 - Face and hand
Prompt: Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.
| CFG / Steps | 8 | 16 | 20 | 32 | 64 |
|---|---|---|---|---|---|
| CFG1 | | | | | |
| CFG2 | | | | | |
| CFG3 | | | | | |
| CFG4 | | | | | |
| CFG5 | | | | | |
| CFG6 | | | | | |
| CFG7 | | | | | |
| CFG12 | | | | | |
Test 3 - Legs
Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.
| CFG / Steps | 8 | 16 | 20 | 32 | 64 |
|---|---|---|---|---|---|
| CFG1 | | | | | |
| CFG2 | | | | | |
| CFG3 | | | | | |
| CFG4 | | | | | |
| CFG5 | | | | | |
| CFG6 | | | | | |
| CFG7 | | | | | |
| CFG12 | | | | | |
Test 4 - Different seed variations and resolutions
| CFG5, STEP 32 | Seed: 1620085323 | Seed: 1931701040 | Seed: 4075624134 | Seed: 2736029172 |
|---|---|---|---|---|
| 768x768px | | | | |
| 768x1024px | | | | |
| 768x768px | | | | |
| 768x1024px | | | | |
| 768x768px | | | | |
| 768x1024px | | | | |
System info
app: sdnext.git updated: 2025-07-23 hash: 30f36487 url: https://github.com/vladmandic/sdnext.git/tree/dev
arch: x86_64 cpu: x86_64 system: Linux release: 6.14.0-24-generic python: 3.12.3
torch: 2.7.1+xpu device: Intel(R) Arc(TM) Graphics (1) ipex: xformers: diffusers: 0.35.0.dev0 transformers: 4.53.2
active: xpu dtype: torch.bfloat16 vae: torch.bfloat16 unet: torch.bfloat16
base: Diffusers/briaai/BRIA-3.2 [651c6ef831] refiner: none vae: none te: none unet: none
Config
{
"samples_filename_pattern": "[seq]-[date]-[model_name]-[height]x[width]-STEP[steps]-CFG[cfg]-Seed[seed]",
"diffusers_version": "1c50a5f7e0392281336e21bc3f74ba48f8819207",
"sd_model_checkpoint": "Diffusers/briaai/BRIA-3.2 [651c6ef831]",
"sd_checkpoint_hash": null,
"diffusers_to_gpu": true,
"diffusers_offload_mode": "none"
}
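The `samples_filename_pattern` above uses bracketed placeholders that SD.Next fills in per image. As an illustration only (not SD.Next's actual implementation), the expansion can be sketched as a simple substitution; the sample values are plausible ones for the Test 1 run:

```python
import re

# Illustrative expansion of SD.Next's bracketed filename placeholders.
# The values below are example values, not read from a real run.
pattern = "[seq]-[date]-[model_name]-[height]x[width]-STEP[steps]-CFG[cfg]-Seed[seed]"
values = {
    "seq": "00001", "date": "2025-07-24", "model_name": "BRIA-3.2",
    "height": "768", "width": "768", "steps": "64", "cfg": "5",
    "seed": "2464773602",
}
filename = re.sub(r"\[(\w+)\]", lambda m: values[m.group(1)], pattern)
print(filename)  # 00001-2025-07-24-BRIA-3.2-768x768-STEP64-CFG5-Seed2464773602
```

This pattern is what makes the grids above reproducible: the CFG, step count, and seed of every sample are recoverable from its filename.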
Model info
Model: Diffusers/briaai/BRIA-3.2 Type: bria Class: BriaPipeline Size: 0 bytes Modified: 2025-07-24 11:13:42
| Module | Class | Device | DType | Params | Modules | Config |
|---|---|---|---|---|---|---|
| vae | AutoencoderKL | xpu:0 | torch.float32 | 83653863 | 243 | FrozenDict({'in_channels': 3, 'out_channels': 3, 'down_block_types': ['DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D', 'DownEncoderBlock2D'], 'up_block_types': ['UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D', 'UpDecoderBlock2D'], 'block_out_channels': [128, 256, 512, 512], 'layers_per_block': 2, 'act_fn': 'silu', 'latent_channels': 4, 'norm_num_groups': 32, 'sample_size': 512, 'scaling_factor': 0.13025, 'shift_factor': None, 'latents_mean': None, 'latents_std': None, 'force_upcast': False, 'use_quant_conv': True, 'use_post_quant_conv': True, 'mid_block_add_attention': True, '_class_name': 'AutoencoderKL', '_diffusers_version': '0.33.1', '_name_or_path': '/mnt/models/Diffusers/models--briaai--BRIA-3.2/snapshots/651c6ef831864ff9843084f3424f6da7fb0f5a9c/vae'}) |
| text_encoder | T5EncoderModel | xpu:0 | torch.bfloat16 | 4762310656 | 463 | T5Config { "architectures": [ "T5EncoderModel" ], "classifier_dropout": 0.0, "d_ff": 10240, "d_kv": 64, "d_model": 4096, "decoder_start_token_id": 0, "dense_act_fn": "gelu_new", "dropout_rate": 0.1, "eos_token_id": 1, "feed_forward_proj": "gated-gelu", "initializer_factor": 1.0, "is_encoder_decoder": true, "is_gated_act": true, "layer_norm_epsilon": 1e-06, "model_type": "t5", "num_decoder_layers": 24, "num_heads": 64, "num_layers": 24, "output_past": true, "pad_token_id": 0, "relative_attention_max_distance": 128, "relative_attention_num_buckets": 32, "tie_word_embeddings": false, "torch_dtype": "bfloat16", "transformers_version": "4.53.2", "use_cache": true, "vocab_size": 32128 } |
| tokenizer | T5TokenizerFast | None | None | 0 | 0 | None |
| transformer | BriaTransformer2DModel | xpu:0 | torch.bfloat16 | 3785669392 | 713 | FrozenDict({'patch_size': 1, 'in_channels': 16, 'num_layers': 8, 'num_single_layers': 28, 'attention_head_dim': 96, 'num_attention_heads': 24, 'joint_attention_dim': 4096, 'pooled_projection_dim': None, 'guidance_embeds': False, 'axes_dims_rope': [0, 48, 48], 'rope_theta': 10000, 'time_theta': 10000, '_class_name': 'BriaTransformer2DModel', '_diffusers_version': '0.33.1', '_name_or_path': 'briaai/BRIA-3.2'}) |
| scheduler | FlowMatchEulerDiscreteScheduler | None | None | 0 | 0 | FrozenDict({'num_train_timesteps': 1000, 'shift': 3.0, 'use_dynamic_shifting': True, 'base_shift': 0.5, 'max_shift': 1.15, 'base_image_seq_len': 256, 'max_image_seq_len': 4096, 'invert_sigmas': False, 'shift_terminal': None, 'use_karras_sigmas': False, 'use_exponential_sigmas': False, 'use_beta_sigmas': False, 'time_shift_type': 'exponential', 'stochastic_sampling': False, '_use_default_values': ['stochastic_sampling'], '_class_name': 'FlowMatchEulerDiscreteScheduler', '_diffusers_version': '0.33.1'}) |
| _name_or_path | str | None | None | 0 | 0 | None |
| _class_name | str | None | None | 0 | 0 | None |
| _diffusers_version | str | None | None | 0 | 0 | None |
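The parameter counts in the table can be summed to estimate the model's weight footprint. A quick sanity check (counts copied from the module table above; bf16 = 2 bytes per parameter, with the VAE kept in fp32 as reported):

```python
# Parameter counts copied from the module table above.
params = {
    "vae": 83_653_863,              # AutoencoderKL, running in fp32 here
    "text_encoder": 4_762_310_656,  # T5EncoderModel, bf16
    "transformer": 3_785_669_392,   # BriaTransformer2DModel, bf16
}
total = sum(params.values())
# Rough weight memory: fp32 = 4 bytes/param, bf16 = 2 bytes/param.
bytes_total = params["vae"] * 4 + (params["text_encoder"] + params["transformer"]) * 2
print(f"total parameters: {total / 1e9:.2f} B")          # ~8.63 B
print(f"approx. weight memory: {bytes_total / 2**30:.1f} GiB")  # ~16.2 GiB
```

The ~16 GiB of weights alone, before activations and the latent workspace, is consistent with the ~22.7 GB GPU usage reported in Test 1 and explains why offloading is commonly needed on smaller cards.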