AI Generated Images
Community for AI image generation. Any models are allowed. Creativity is valuable! Posting the model you used for reference is recommended, but not required.
No explicit violence, gore, or nudity.
This is not an NSFW community, although exceptions are sometimes made. Any NSFW posts must be marked as NSFW and may be removed at any moderator's discretion. Any suggestive imagery may be removed at any time.
Refer to https://lemmynsfw.com/ for any NSFW imagery.
No misconduct: Harassment, Abuse or assault, Bullying, Illegal activity, Discrimination, Racism, Trolling, Bigotry.
AI Generated Videos are allowed under the same rules. Photosensitivity warning required for any flashing videos.
To embed images, type:
`![](put image url in here)`
Follow all sh.itjust.works rules.
Community Challenge Past Entries
Related communities:
- [email protected] - Useful general AI discussion
- [email protected] - Photo-realistic AI images
- [email protected] - Stable Diffusion Art
- [email protected] - Stable Diffusion Anime Art
- [email protected] - AI art generated through bots
- [email protected] - NSFW weird and surreal images
- [email protected] - NSFW AI generated porn
Okay soooooo, that took a lot longer than I anticipated, but I think I got it. It turns out the problem is with the VAE encoding process, and it can be handled with the ImageCompositeMasked node, which combines the padded image with the newly outpainted area so that the pre-outpainted area isn't affected by the VAE. I learned this here: https://youtu.be/ufzN6dSEfrw?si=4w4vjQTfbSozFC6F&t=498. The whole video is quite useful, but the part I linked to is where he talks about that problem.
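If it helps, here's a rough sketch of the idea behind that masked composite, written as plain NumPy rather than the node's actual implementation (array shapes and names here are my own assumptions): pixels outside the outpaint mask keep their original values, so only the newly generated region ever goes through the VAE round trip.

```python
# Not the ComfyUI node itself, just the idea behind it: blend the decoded result
# back over the original padded image so pixels outside the outpaint mask stay
# identical to the original and never pick up VAE encode/decode artifacts.
import numpy as np

def composite_masked(original: np.ndarray, decoded: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """original/decoded: HxWx3 floats in [0, 1]; mask: HxW, 1.0 where outpainted."""
    mask3 = mask[..., None]                             # broadcast the mask over the RGB channels
    return original * (1.0 - mask3) + decoded * mask3   # original outside the mask, new content inside
```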
The next problem I ran into is that around the fourth-from-last outpainting, ComfyUI would stop; it just wouldn't go any further. The system I'm using has 24 GB of VRAM and 42 GB of RAM, so I didn't think memory was the problem, but just in case I tried it on a beastly RunPod machine with 48 GB of VRAM and 58 GB of RAM. It had the exact same problem.
To work around this, I first bypassed everything except the original gen and the first outpaint, then enabled each outpaint one by one until I got to the fourth from the last. At that point I saved the output image, bypassed everything except the original gen and the first outpaint, enabled the last four outpaints, and loaded the saved image manually.
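For anyone who'd rather script that split-and-resume idea than bypass nodes by hand, it boils down to checkpointing an intermediate image and resuming from it. This is only an illustration: `outpaint_step` below is a hypothetical placeholder that just pads the canvas, standing in for one real outpaint pass.

```python
# Sketch of splitting a long outpaint chain into resumable chunks.
# outpaint_step() is a placeholder; the checkpoint/resume logic is the point.
from pathlib import Path
from PIL import Image, ImageOps

def outpaint_step(img: Image.Image, step: int, pad: int = 128) -> Image.Image:
    # Placeholder: only pads the right edge; a real pass would fill the new border.
    return ImageOps.expand(img, border=(0, 0, pad, 0), fill="gray")

def run_outpaints(start: Image.Image, total_steps: int, resume_from: int = 0,
                  ckpt_dir: Path = Path("outpaint_ckpts")) -> Image.Image:
    ckpt_dir.mkdir(exist_ok=True)
    # Resume from the last saved intermediate instead of re-running the whole chain.
    img = Image.open(ckpt_dir / f"step_{resume_from:02d}.png") if resume_from else start
    for i in range(resume_from, total_steps):
        img = outpaint_step(img, i)
        img.save(ckpt_dir / f"step_{i + 1:02d}.png")  # checkpoint after every pass
    return img
```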
I used DreamShaper XL Lightning because there was no way I was going to wait for 60 steps each time with FenrisXL 😂 I tried two different ways of using the same model for inpainting. The first was using the Fooocus Inpaint node together with the Differential Diffusion node. This worked well, but when ComfyUI stopped working I thought maybe that was the problem, so I switched all of those out for some model merging. Basically, it subtracts the base SDXL model from the SDXL inpainting model and adds that difference to the DreamShaper XL Lightning model, which creates a "DreamShaper XL Lightning inpainting model". The SDXL inpainting model can be found here.
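Outside of ComfyUI, that add-difference merge is just arithmetic on the checkpoints' weights. A very simplified standalone sketch (file names are placeholders, and keys whose shapes don't line up, like the inpainting UNet's extra input channels, are simply copied from DreamShaper here):

```python
# Simplified sketch of the merge described above (not the ComfyUI merge nodes):
# merged = DreamShaper + (SDXL inpainting - SDXL base), key by key.
from safetensors.torch import load_file, save_file

base = load_file("sd_xl_base_1.0.safetensors")             # base SDXL
inpaint = load_file("sd_xl_inpainting_0.1.safetensors")     # SDXL inpainting model
dream = load_file("dreamshaperXL_lightning.safetensors")    # DreamShaper XL Lightning

merged = {}
for key, w in dream.items():
    if key in base and key in inpaint and base[key].shape == w.shape == inpaint[key].shape:
        # Graft the inpainting delta onto DreamShaper
        merged[key] = w + (inpaint[key].to(w.dtype) - base[key].to(w.dtype))
    else:
        merged[key] = w  # pass through anything that doesn't line up across the three models

save_file(merged, "dreamshaperXL_lightning_inpainting.safetensors")
```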
You should be able to use this workflow with FenrisXL the whole time if you want. You'll just need to change the steps, CFG, and maybe the sampler at each KSampler.
Image with ImageCompositeMasked: https://files.catbox.moe/my4u7r.png
Image without ImageCompositeMasked: https://files.catbox.moe/h8yiut.png
Wow! Thank you for the effort and time you put into this! I will definitely look into the workflow. Model Merging sounds very interesting. I will look into it!