Blender & Stable Diffusion Practice 0097

I converted the Blender render results with Stable Diffusion. This is currently the limit of rendering stability I can achieve for frame-by-frame Stable Diffusion with ControlNet applied. If you have any questions or suggestions, please comment and I will answer them if possible.

Method

1. Apply the Blender render results to img2img in Stable Diffusion.
2. Apply the depth images to input_image in ControlNet.

You can watch the source videos in the following post.

These operations can be automated with the following script.

Settings and assets

stable diffusion webui:
stable diffusion model: AbyssOrangeMix3 (AOM3A1)
ControlNet model: control_sd15_depth.pth
generation settings: Steps: 15, Sampler: DPM++ SDE Karras, CFG scale: 7, Seed: 0, Size: 1536x1536, Model hash: f303d10812, Model: AOM3A1, Denoising strength: 0.45, Mask blur: 4, ControlNet Enabled: True, ControlNet Module: none, ControlNet Model: control_sd15_depth [fef5e48e], ControlNet Weight: 0.95, ControlNet Guidance Strength: 0.95

softwares: Blender 3.3.4 / Eevee
mmd_tools / blender_mmd_uuunyaa_tools v2.2.4
head: (MMD SIFAS) Yazawa Nico Normal v0.3 DL
body: Sour式初音ミク Breath You Ver.demi 1.01
script: Batch processing for stable-diffusion-webui API with sd-webui-controlnet
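The two steps above (img2img on the render, depth map into ControlNet) can be scripted against the stable-diffusion-webui API. Below is a minimal sketch of building the request payload for the `/sdapi/v1/img2img` endpoint, assuming the `alwayson_scripts` format used by the sd-webui-controlnet extension; the helper names and folder layout are hypothetical, and the ControlNet args schema varies between extension releases, so check the one you have installed.

```python
import base64
from pathlib import Path


def encode_image(path):
    """Read an image file and return it as a base64 string (the API expects base64)."""
    return base64.b64encode(Path(path).read_bytes()).decode("ascii")


def build_img2img_payload(render_b64, depth_b64):
    """Build the JSON payload for /sdapi/v1/img2img with one ControlNet depth unit.

    Parameter values mirror the generation settings listed above.
    """
    return {
        "init_images": [render_b64],      # Blender render result (step 1)
        "denoising_strength": 0.45,
        "steps": 15,
        "cfg_scale": 7,
        "seed": 0,                        # fixed seed helps frame-to-frame stability
        "width": 1536,
        "height": 1536,
        "sampler_name": "DPM++ SDE Karras",
        "alwayson_scripts": {
            "controlnet": {
                "args": [{
                    "input_image": depth_b64,  # pre-rendered depth image (step 2)
                    "model": "control_sd15_depth [fef5e48e]",
                    "module": "none",          # depth map already rendered, no preprocessor
                    "weight": 0.95,
                    "guidance": 0.95,
                }]
            }
        },
    }


# Hypothetical frame-by-frame batch loop: POST each payload to a local webui.
# import requests
# for frame in sorted(Path("renders").glob("*.png")):
#     payload = build_img2img_payload(
#         encode_image(frame), encode_image(Path("depth") / frame.name))
#     r = requests.post("http://127.0.0.1:7860/sdapi/v1/img2img", json=payload)
#     out_b64 = r.json()["images"][0]
#     (Path("out") / frame.name).write_bytes(base64.b64decode(out_b64))
```

Keeping the seed, prompt, and ControlNet weight identical across frames, and letting only the init image and depth map change, is what the frame-by-frame stability mentioned above depends on.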