Info
https://huggingface.co/CalamitousFelicitousness/hyphoria_qwen_v1.0-BF16-Diffusers
https://civitai.com/models/2120166/hyphoria-qwen
| class | size | sfw | abstract | realism | art | text | anime | glass/mirror | complex |
|---|---|---|---|---|---|---|---|---|---|
| qwen | 20B | | | | | | | | |
| Code Block |
|---|
Base Generation
Sampler: res_3s
Scheduler: bong_tangent
Steps: varies (8-12)
CFG: always 1
Lightning 8-step strength: 0.5-0.55

Hires (Latent Upscale)
Sampler: res_2s
Scheduler: bong_tangent
Steps: varies (6-12)
CFG: always 1
Lightning 8-step strength: 0.4-0.5 |
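For reference, here is a minimal plain-diffusers sketch of the base-generation settings above (CFG 1, 8-12 steps, Lightning 8-step LoRA at ~0.5 strength). The res_3s/res_2s samplers and bong_tangent scheduler are ComfyUI (RES4LYF) components with no stock diffusers equivalent, so the sketch keeps the pipeline's default flow-match scheduler; the Lightning LoRA repo and file name are assumptions.
| Code Block |
|---|
# Hedged sketch of the settings above in plain diffusers. res_3s + bong_tangent are
# ComfyUI (RES4LYF) specific and are NOT reproduced here; the default scheduler is used.
import torch
from diffusers import QwenImagePipeline

pipe = QwenImagePipeline.from_pretrained(
    "CalamitousFelicitousness/hyphoria_qwen_v1.0-BF16-Diffusers",
    torch_dtype=torch.bfloat16,
).to("cuda")  # or "xpu" on Intel Arc, as in the system info below

# Lightning 8-step LoRA at 0.5 strength (repo and file name are assumptions)
pipe.load_lora_weights(
    "lightx2v/Qwen-Image-Lightning",
    weight_name="Qwen-Image-Lightning-8steps-V1.0.safetensors",
    adapter_name="lightning",
)
pipe.set_adapters(["lightning"], adapter_weights=[0.5])

image = pipe(
    prompt="photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling",
    num_inference_steps=10,   # "varies (8-12)"
    true_cfg_scale=1.0,       # "CFG: always 1" means no true CFG
    width=1024, height=1024,  # size is an assumption
).images[0]
image.save("base.png") |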
...
Prompt: photorealistic girl in bookshop choosing the book in romantic stories shelf. smiling
| | 4 | 8 | 1620 | 3250 |
|---|---|---|---|---|
| CFG1 | | | | |
| CFG2 | | | | |
| CFG3 | | | | |
| CFG4 | | | | |
| CFG5 | | | | |
| CFG6 | | | | |
| CFG7 | | | | |
| CFG8 | | | | |
Test 5 - Face and hand
Prompt: Create a close-up photograph of a woman's face and hand, with her hand raised to her chin. She is wearing a white blazer and has a gold ring on her finger. Her nails are neatly manicured and her hair is pulled back into a low bun. She is smiling and has a radiant expression on her face. The background is a plain light gray color. The overall mood of the photo is elegant and sophisticated. The photo should have a soft, natural light and a slight warmth to it. The woman's hair is dark brown and pulled back into a low bun, with a few loose strands framing her face.
Parameters: Steps: 16| Size: 1024x1024| Seed: 4075624134| CFG scale: 5| App: SD.Next| Version: d7eb90e| Pipeline: OvisImagePipeline| Operations: txt2img| Model: Ovis-Image-7B
285H Time: 2m 58.98s | total 180.04 pipeline 177.45 decode 1.51 callback 0.78 gc 0.28 | GPU 21078 MB 17% | RAM 31.23 GB 25%
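The Parameters line above can be replayed through SD.Next itself; the sketch below assumes SD.Next is started with its A1111-compatible REST API enabled, and the endpoint path and payload field names come from that compatibility layer rather than from this log.
| Code Block |
|---|
# Hedged sketch: re-running Test 5 via SD.Next's A1111-compatible txt2img endpoint.
# Field names are assumptions from that API; the values mirror the Parameters line above.
import base64, requests

payload = {
    "prompt": ("Create a close-up photograph of a woman's face and hand, "
               "with her hand raised to her chin. ..."),  # full Test 5 prompt
    "steps": 16,
    "width": 1024,
    "height": 1024,
    "seed": 4075624134,
    "cfg_scale": 5,
}

r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload, timeout=600)
r.raise_for_status()
with open("test5.png", "wb") as f:
    f.write(base64.b64decode(r.json()["images"][0])) |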
| | 8 | 10 | 14 | 24 |
|---|---|---|---|---|
| AG1 | | | | |
| AG2 | | | | |
| AG3 | | | | |
| AG4 | | | | |
| AG5 | | | | |
| AG6 | | | | |
| AG8 | | | | |
Test 6 - Legs
Prompt: Generate a photo of a woman's legs, with her feet crossed and wearing white high-heeled shoes with ribbons tied around her ankles. The shoes should have a pointed toe and a stiletto heel. The woman's legs should be smooth and tanned, with a slight sheen to them. The background should be a light gray color. The photo should be taken from a low angle, looking up at the woman's legs. The ribbons should be tied in a bow shape around the ankles. The shoes should have a red sole. The woman's legs should be slightly bent at the knee.
Parameters: Steps: 32| Size: 1024x1024| Seed: 1931701040| CFG scale: 5| App: SD.Next| Version: d7eb90e| Pipeline: OvisImagePipeline| Operations: txt2img| Model: Ovis-Image-7B
285H Time: 5m 56.50s | total 358.27 pipeline 354.97 decode 1.52 callback 1.48 gc 0.29 | GPU 21078 MB 17% | RAM 31.29 GB 25%
| | 8 | 16 | 32 | 64 |
|---|---|---|---|---|
| AG1 | | | | |
| AG2 | | | | |
| AG3 | | | | |
| AG4 | | | | |
| AG5 | | | | |
| AG6 | | | | |
| AG8 | | | | |
Test 7 - CivitAi profile cover generation
...
Test 9 - Other Models cover
Test 10 - Art Prompts
Test 11 - Search for cover
Prompt: Brick wall with bright "Hyphoria Qwen" graffiti .Fantasy wizard girl gently blows a stream of glowing spores and luminous particles from her lips behind the text. Graffiti main colors are Lime and Mint. Dark grey asphalt on a sidewalk below. Large building number sign with text "20B" upper left side the graffiti.
Parameters: Steps: 32| Size: 1328x1328| Sampler: Euler FlowMatch| Seed: 1499371390| CFG scale: 4| CFG true: 4| App: SD.Next| Version: a84ddc3| Pipeline: QwenImagePipeline| Operations: txt2img| Model: hyphoria_qwen_v1.0-BF16-Diffusers
285H Time: 17m 48.28s | total 1100.14 pipeline 1068.19 preview 26.62 te 2.45 callback 2.29 vae 0.51 | GPU 76372 MB 61% | RAM 83.53 GB 68%
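For comparison with the SD.Next run, a minimal plain-diffusers sketch of the Test 11 parameters (1328x1328, 32 steps, true CFG 4, Euler flow-match). The device and the empty negative prompt are assumptions; everything else mirrors the Parameters line above.
| Code Block |
|---|
# Hedged sketch of Test 11 in plain diffusers; device and negative prompt are assumptions.
import torch
from diffusers import QwenImagePipeline, FlowMatchEulerDiscreteScheduler

pipe = QwenImagePipeline.from_pretrained(
    "CalamitousFelicitousness/hyphoria_qwen_v1.0-BF16-Diffusers",
    torch_dtype=torch.bfloat16,
)
# "Sampler: Euler FlowMatch" in SD.Next presumably corresponds to this diffusers scheduler
pipe.scheduler = FlowMatchEulerDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")  # "xpu" for the Intel Arc setup below

image = pipe(
    prompt='Brick wall with bright "Hyphoria Qwen" graffiti ...',  # full Test 11 prompt
    negative_prompt=" ",
    width=1328, height=1328,
    num_inference_steps=32,
    true_cfg_scale=4.0,
    generator=torch.Generator("cpu").manual_seed(1499371390),
).images[0]
image.save("test11_cover.png") |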
Compare Art to Z Image turbo
...
System info
| Code Block |
|---|
Tue Jan 6 20:50:32 2026
app: sdnext.git updated: 2026-01-06 hash: e33daab6a url: https://github.com/liutyi/sdnext/tree/pytorch
arch: x86_64 cpu: x86_64 system: Linux release: 6.17.0-8-generic
python: 3.12.3 Torch: 2.9.1+xpu
device: Intel(R) Arc(TM) Graphics (1) ipex:
ram: free:119.41 used:3.66 total:123.07
xformers: diffusers: 0.37.0.dev0 transformers: 4.57.3
active: xpu dtype: torch.bfloat16 vae: torch.bfloat16 unet: torch.bfloat16
base: Diffusers/CalamitousFelicitousness/hyphoria_qwen_v1.0-BF16-Diffusers [bc0f90041c] refiner: none vae: none te: none unet: none
ipex native none Scaled-Dot-Product |
Config
| Code Block |
|---|
{
"diffusers_version": "88ffb0013972c7b9fd3725bcd63e3c3c1400834f",
"sd_model_checkpoint": "Diffusers/CalamitousFelicitousness/hyphoria_qwen_v1.0-BF16-Diffusers [bc0f90041c]",
"sd_checkpoint_hash": null,
"diffusers_offload_mode": "none",
"huggingface_token": "hf_..raU"
} |
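The config above runs with offloading disabled ("diffusers_offload_mode": "none"), which matches the ~76 GB GPU peak logged for Test 11. For smaller cards, the rough plain-diffusers equivalent of turning offload on looks like the sketch below; the mapping to SD.Next's own offload option names is an assumption.
| Code Block |
|---|
# Hedged sketch: plain-diffusers memory offloading, roughly what setting
# diffusers_offload_mode to something other than "none" trades speed for.
import torch
from diffusers import QwenImagePipeline

pipe = QwenImagePipeline.from_pretrained(
    "CalamitousFelicitousness/hyphoria_qwen_v1.0-BF16-Diffusers",
    torch_dtype=torch.bfloat16,
)

# Move whole sub-models (text encoder, transformer, VAE) to GPU only while in use.
pipe.enable_model_cpu_offload()

# For very tight VRAM, layer-by-layer offload instead (much slower):
# pipe.enable_sequential_cpu_offload() |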
Model info
Diffusers/CalamitousFelicitousness/hyphoria_qwen_v1.0-BF16-Diffusers [bc0f90041c]
| Module | Class | Device | Dtype | Quant | Params | Modules | Config |
|---|---|---|---|---|---|---|---|
| vae | AutoencoderKLQwenImage | xpu:0 | torch.bfloat16 | None | 126892531 | 260 | FrozenDict({'base_dim': 96, 'z_dim': 16, 'dim_mult': [1, 2, 4, 4], 'num_res_blocks': 2, 'attn_scales': [], 'temperal_downsample': [False, True, True], 'dropout': 0.0, 'input_channels': 3, 'latents_mean': [-0.7571, -0.7089, -0.9113, 0.1075, -0.1745, 0.9653, -0.1517, 1.5508, 0.4134, -0.0715, 0.5517, -0.3632, -0.1922, -0.9497, 0.2503, -0.2921], 'latents_std': [2.8184, 1.4541, 2.3275, 2.6558, 1.2196, 1.7708, 2.6052, 2.0743, 3.2687, 2.1526, 2.8652, 1.5579, 1.6382, 1.1253, 2.8251, 1.916], '_use_default_values': ['input_channels'], '_class_name': 'AutoencoderKLQwenImage', '_diffusers_version': '0.34.0.dev0', '_name_or_path': '/mnt/models/Diffusers/models--CalamitousFelicitousness--hyphoria_qwen_v1.0-BF16-Diffusers/snapshots/bc0f90041c79c3cbd1936143e574035335e0207b/vae'}) |
| text_encoder | Qwen2_5_VLForConditionalGeneration | xpu:0 | torch.bfloat16 | None | 8292166656 | 763 | Qwen2_5_VLConfig { "architectures": [ "Qwen2_5_VLForConditionalGeneration" ], "attention_dropout": 0.0, "bos_token_id": 151643, "dtype": "bfloat16", "eos_token_id": 151645, "hidden_act": "silu", "hidden_size": 3584, "initializer_range": 0.02, "intermediate_size": 18944, "max_position_embeddings": 128000, "max_window_layers": 28, "model_type": "qwen2_5_vl", "num_attention_heads": 28, "num_hidden_layers": 28, "num_key_value_heads": 4, "rms_norm_eps": 1e-06, "rope_scaling": { "mrope_section": [ 16, 24, 24 ], "rope_type": "default", "type": "default" }, "rope_theta": 1000000.0, "sliding_window": 32768, "text_config": { "_name_or_path": "hunyuanvideo-community/HunyuanImage-2.1-Diffusers", "architectures": [ "Qwen2_5_VLForConditionalGeneration" ], "attention_dropout": 0.0, "bos_token_id": 151643, "dtype": "bfloat16", "eos_token_id": 151645, "hidden_act": "silu", "hidden_size": 3584, "image_token_id": 151655, "initializer_range": 0.02, "intermediate_size": 18944, "layer_types": [ "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention", "full_attention" ], "max_position_embeddings": 128000, "max_window_layers": 28, "model_type": "qwen2_5_vl_text", "num_attention_heads": 28, "num_hidden_layers": 28, "num_key_value_heads": 4, "rms_norm_eps": 1e-06, "rope_scaling": { "mrope_section": [ 16, 24, 24 ], "rope_type": "default", "type": "default" }, "rope_theta": 1000000.0, "sliding_window": null, "use_cache": true, "use_sliding_window": false, "video_token_id": 151656, "vision_end_token_id": 151653, "vision_start_token_id": 151652, "vision_token_id": 151654, "vocab_size": 152064 }, "tie_word_embeddings": false, "transformers_version": "4.57.3", "use_cache": true, "use_sliding_window": false, "vision_config": { "depth": 32, "dtype": "bfloat16", "fullatt_block_indexes": [ 7, 15, 23, 31 ], "hidden_act": "silu", "hidden_size": 1280, "in_channels": 3, "in_chans": 3, "initializer_range": 0.02, "intermediate_size": 3420, "model_type": "qwen2_5_vl", "num_heads": 16, "out_hidden_size": 3584, "patch_size": 14, "spatial_merge_size": 2, "spatial_patch_size": 14, "temporal_patch_size": 2, "tokens_per_second": 2, "window_size": 112 }, "vision_token_id": 151654, "vocab_size": 152064 } |
| tokenizer | Qwen2Tokenizer | None | None | None | 0 | 0 | None |
| transformer | QwenImageTransformer2DModel | xpu:0 | torch.bfloat16 | None | 20430401088 | 2297 | FrozenDict({'patch_size': 2, 'in_channels': 64, 'out_channels': 16, 'num_layers': 60, 'attention_head_dim': 128, 'num_attention_heads': 24, 'joint_attention_dim': 3584, 'guidance_embeds': False, 'axes_dims_rope': [16, 56, 56], 'zero_cond_t': False, 'use_additional_t_cond': False, 'use_layer3d_rope': False, '_use_default_values': ['zero_cond_t', 'use_layer3d_rope', 'use_additional_t_cond'], '_class_name': 'QwenImageTransformer2DModel', '_diffusers_version': '0.35.2', 'pooled_projection_dim': 768, '_name_or_path': 'CalamitousFelicitousness/hyphoria_qwen_v1.0-BF16-Diffusers'}) |
| scheduler | FlowMatchDPMSolverMultistepScheduler | None | None | None | 0 | 0 | FrozenDict({'num_train_timesteps': 1000, 'beta_start': 0.00085, 'beta_end': 0.012, 'beta_schedule': 'linear', 'trained_betas': None, 'solver_order': 2, 'algorithm_type': 'dpmsolver2', 'solver_type': 'midpoint', 'sigma_schedule': None, 'shift': 3, 'midpoint_ratio': 0.5, 's_noise': 1.0, 'use_noise_sampler': True, 'use_beta_sigmas': False, 'use_dynamic_shifting': False, 'base_shift': 0.5, 'max_shift': 1.15, 'base_image_seq_len': 256, 'max_image_seq_len': 4096, '_use_default_values': ['solver_type', 's_noise', 'midpoint_ratio', 'trained_betas', 'max_image_seq_len', 'base_image_seq_len']}) |
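The Module/Class/Dtype/Params columns above can be reproduced by loading the pipeline and walking its components; a minimal sketch, assuming the same Diffusers checkpoint is reachable locally or on the Hub:
| Code Block |
|---|
# Hedged sketch: print class, dtype and parameter count per pipeline component,
# mirroring the Model info table above.
import torch
from diffusers import QwenImagePipeline

pipe = QwenImagePipeline.from_pretrained(
    "CalamitousFelicitousness/hyphoria_qwen_v1.0-BF16-Diffusers",
    torch_dtype=torch.bfloat16,
)

for name, component in pipe.components.items():
    if isinstance(component, torch.nn.Module):
        params = sum(p.numel() for p in component.parameters())
        dtype = next(component.parameters()).dtype
        print(f"{name:14s} {component.__class__.__name__:40s} {dtype} {params:,}")
    else:
        # tokenizer and scheduler carry no weights
        print(f"{name:14s} {component.__class__.__name__}") |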
...



