Image size in ComfyUI: collected Reddit discussion
It animates 16 frames and uses the looping context options to make a video that loops.

In truth, 'AI' never stole anything, any more than you 'steal' from the people whose images you have looked at when their images influence your own art; and while anyone can use an AI tool to make art, having an idea for a picture in your head, and getting any generative system to actually replicate it, takes a considerable amount of skill and effort.

You will need to launch ComfyUI with this option each time, so modify your .bat file or launch script.

I've built many ComfyUI web apps for personal business purposes and have helped others on Reddit as well. Or add the Image Gallery extension.

It's based on the wonderful example from Sytan, but I un-collapsed it and removed the upscaling to make it very simple to understand.

I first get the prompt working as a list of the basic contents of your image.

Is there a way to pull this off within ComfyUI? Welcome to the unofficial ComfyUI subreddit.

This way it's an end-to-end text-to-animation workflow.

As input I use various image sizes, and I find I have to manually enter the image size in the Empty Latent Image node that leads to the KSampler each time I work on a new image.

This comprehensive tutorial covers 10 vital steps, including cropping, mask detection, sampler erasure, mask fine-tuning, and streamlined inpainting for incredible results.

Copy that into user.css.

And above all, BE NICE.

Probably not what you want, but the preview chooser/image chooser node is a custom node that pauses the flow while you choose which image (or latent) to pass on to the rest of the workflow.

The option has been around for a long time with other UIs like Automatic1111 and Visions of Chaos.

Save the new image. You won't get obvious seams or strange lines.

In the process, we also discuss SDXL architecture and how it is supp…
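On the point above about manually retyping sizes into the Empty Latent Image node: Stable Diffusion VAEs work at 1/8 resolution, so the node wants dimensions divisible by 8. A minimal stand-alone sketch (the function name is my own, not a ComfyUI node) that snaps any input image's dimensions onto that grid:

```python
def snap_to_latent_grid(width: int, height: int, multiple: int = 8) -> tuple:
    """Round pixel dimensions to the nearest multiple of 8 so they are
    valid for an Empty Latent Image node (the VAE downscales by 8x)."""
    def snap(v):
        return max(multiple, round(v / multiple) * multiple)
    return (snap(width), snap(height))

# A 1023x771 input becomes 1024x768, safe to feed toward the KSampler.
print(snap_to_latent_grid(1023, 771))  # -> (1024, 768)
```

You could wire the same idea up with a Get Image Size node feeding math nodes, but doing it once in a script shows the arithmetic.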
In this case, the image from Comfy has some extra glitches.

This workflow generates an image with SD1.5.

I can obviously pick a size when doing txt2img, but when prompting off an existing image my final image will always just be the same size as the inspiration image.

Howdy! I'm not too advanced with ComfyUI for SD generation yet, but I've made a lot of progress thanks to your help.

Generated images: the Automatic1111 image vs. the ComfyUI image.

A lot of people are just discovering this technology, and want to show off what they created.

In user.css, change the font-size to something higher than 10px and you should see a difference.

The only way I can think of is to upscale with an image upscale model (4x-UltraSharp) to get my image to 4096, and then downscale with nearest-exact back to 1500.

Insert the new image into the workflow again and inpaint something else; rinse and repeat until you lose interest :-)

…and no workflow metadata will be saved in any image.

(I was using SD WebUI before.) I am getting blurry images when using the "Realities Edge XL ⊢ ⋅ LCM+SDXLTurbo" model in ComfyUI. I got the same issue in SD WebUI, but after using sdxl-vae-fp16-fix the images were good. When I try the same fix here, though, it isn't working.

I think the bare minimum would be the following, but having the rest of the defaults next to it could be handy if you want to make other changes.

I started with ComfyUI 3 days ago.

I have a workflow I use fairly often where I convert or upscale images using ControlNet.

Increasing the tile size to half the image's dimensions (1536) does improve image quality, but the speed benefit diminishes.

How do I do the same with ComfyUI?
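The "nearest-exact" downscale mentioned above just copies the closest source pixel with no blending, which is why it keeps a model-upscaled image crisp. A bare-bones sketch of that resampling, using plain Python lists in place of real image buffers (a toy illustration, not ComfyUI's implementation):

```python
def resize_nearest(pixels, new_w, new_h):
    """Nearest-neighbour resize of a 2-D grid of pixels: each output pixel
    copies the closest source pixel, so no values are ever blended."""
    old_h, old_w = len(pixels), len(pixels[0])
    return [
        [pixels[y * old_h // new_h][x * old_w // new_w] for x in range(new_w)]
        for y in range(new_h)
    ]

# Downscaling an 8x8 grid to 4x4 keeps every second pixel untouched,
# mirroring the upscale-to-4096-then-downscale-to-1500 trick above.
grid = [[(y, x) for x in range(8)] for y in range(8)]
small = resize_nearest(grid, 4, 4)
```

With real images you would do the same thing via your image library's nearest-neighbour resampling mode rather than hand-rolling the loop.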
Here's how you can do it (Automatic1111, May 14, 2024): a basic description of a couple of ways to resize your photos or images so that they will work in ComfyUI.

A bit of an obtuse take.

The most powerful and modular diffusion model GUI, API, and backend with a graph/nodes interface.

I have a ComfyUI workflow that produces great results.

Stable Diffusion 1.5 is trained on images 512 x 512.

If I were to make some type of custom node, or modify the core node to allow a larger latent image size, would that break the whole process, or is there some larger reason that 8192 is the hard limit?

First we calculate the ratios, or we use a text file where we…

This simple checkbox in the Automatic1111 WebUI interface allows you to generate high-resolution images that look much better than the default output (Mar 22, 2024).

The hard part is knowing when the image is ready to be retrieved and getting the image.

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).

When I generate an image with the prompt "attractive woman" in ComfyUI, I get the exact same face for every image I create.

The first branch has: Txt to Image, and then Image to SD Vid with the new SD video models that came out.

Learn how to master inpainting on large images using ComfyUI and Stable Diffusion.

However, my goal is to recreate the exact same image. I understand that the DPM++ 2M sampler can do this; at least in Auto1111 it repeats the same image every time. It is not a problem with the seed, because I tried different seeds. I'm instead going to try to work around it by downscaling the size of the image.

Layer copy & paste this PNG on top of the original in your go-to image editing software.

Enjoy a comfortable and intuitive painting app.

E.g.: batch index 2, Length 2 would send images number 3 and 4 to the preview in this example.

Want 10 images?
Click that button till the Queue size is 10 (or select Extra options and put 10 in Batch count).

The denoise on the video-generation KSampler is at 0.8 so that some of the structure of the originally generated image is retained. Ignore the LoRA node that makes the result look EXACTLY like my girlfriend.

Please share your tips, tricks, and workflows for using this software to create your AI art.

ComfyUI Artist Inpainting Tutorial (YouTube).

In an effort to generate images faster on my potato PC, I have tried to push the sampling step count down as low as possible. I have managed to get it down to 3 steps with some nifty tricks I found.

During my img2img experiments with 3072x3072 images, I noticed a quality drop using Hypertile with standard settings (tile size 256, swap size = 2, max depth = 0).

I have a workflow that is basically two user branches.

Here, you can also set the batch size, which is how many images you generate in each run.

To open ComfyShop, simply right-click on any image node that outputs an image and mask, and you will see the ComfyShop option, much in the same way you would see MaskEditor.

Please keep posted images SFW.

When I do the same in Automatic1111, I get completely different people and different compositions for every image.

…can pretty much be scaled to whatever batch size by repetition.

A transparent PNG in the original size, with only the newly inpainted part, will be generated.

Belittling their efforts will get you banned.

You probably still want an Exif viewer/remover/cleaner to double-check images, since you haven't been using this setting and presumably have prior work to sanitize of metadata.

The demo images aren't curated; all images just use the seed "3" with a basic prompt, so this is really useful for experimenting.
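Two of the questions above, queueing several generations and knowing when an image is ready to be retrieved, can be scripted against ComfyUI's HTTP API: POST /prompt queues an API-format graph, and GET /history/<prompt_id> lists finished outputs. A rough polling sketch; the server address is the assumed default, the helper names are my own, and error handling is omitted:

```python
import json
import time
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # assumed default ComfyUI address

def empty_latent_node(width=512, height=512, batch_size=1):
    """One node of an API-format graph; batch_size > 1 makes a single
    queued run produce several images at once (vs. queueing N runs)."""
    return {
        "class_type": "EmptyLatentImage",
        "inputs": {"width": width, "height": height, "batch_size": batch_size},
    }

def build_payload(workflow, client_id):
    # POST /prompt expects the graph under the "prompt" key.
    return {"prompt": workflow, "client_id": client_id}

def queue_prompt(workflow, client_id):
    data = json.dumps(build_payload(workflow, client_id)).encode()
    req = urllib.request.Request(f"{COMFY_URL}/prompt", data=data)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["prompt_id"]

def wait_for_outputs(prompt_id, poll_seconds=1.0):
    """Poll /history until the prompt id appears, then return its outputs."""
    while True:
        with urllib.request.urlopen(f"{COMFY_URL}/history/{prompt_id}") as resp:
            history = json.loads(resp.read())
        if prompt_id in history:
            return history[prompt_id]["outputs"]
        time.sleep(poll_seconds)

# Ten images in one run, rather than pressing Queue Prompt ten times:
node = empty_latent_node(batch_size=10)
```

If polling feels clumsy, ComfyUI also exposes a websocket that pushes execution events, which is what its bundled script examples use.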
Stable Diffusion has a bad understanding of relative terms; try prompting "a puppy and a kitten, the puppy on the left and the kitten on the right" to see what I mean.

I want to upscale my image with a model, and then select its final size.

This YouTube video should help answer your questions.

Hello, Stable Diffusion enthusiasts! We decided to create a new educational series on SDXL and ComfyUI (it's free, no paywall or anything).

The one that is shown in the "post view" is a "preview JPEG" (even though it looks as if it is full size), which does not have the metadata.

…and see if you can get the image size to be used for the empty latent (converted) height and width later on.

So I can't give a simple answer, but I'd say if you're still interested and need some help, we can join a Discord call or something and I can help.

A mask adds a layer to the image that tells ComfyUI what area of the image to apply the prompt to.

Hey everyone, I've been exploring the possibility of using an image as input and generating an output image that retains the original input's dimensions. Also the exact same position of the body.

You can just plug the width and height from Get Image Size directly into the nodes where you need them, too.

If we want to change the image size of our ComfyUI Stable Diffusion image generator, we have to type the width and height (Aug 21, 2023).

You can't enter a latent image size larger than 8192.

Automatic1111 would let you pick the final image size no matter what, and give you options for crop, just resize, etc.

Batch index counts from 0 and is used to select a target in your batched images; Length defines the amount of images after the target to send ahead.

So, if you want to change the size of the image, you change the size of the latent image (Jul 6, 2024). Input your batched latent and VAE.
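The batch index / Length selection described above is just a slice: the index counts from 0, and Length is how many images to take from the target onward. A tiny illustration with strings standing in for images (the function name is my own, not the node's):

```python
def select_from_batch(images, batch_index, length):
    """batch_index counts from 0; length is how many images, starting at
    the target, get sent ahead to the next node."""
    return images[batch_index : batch_index + length]

batch = ["img1", "img2", "img3", "img4", "img5"]
# Batch index 2, Length 2 -> images number 3 and 4, matching the
# e.g. given earlier in this thread.
picked = select_from_batch(batch, 2, 2)  # ["img3", "img4"]
```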
If you just want to see the size of an image, you can open it in a separate tab of your browser and look at the top to find the resolution too.

No, you don't erase the image.

ComfyShop phase 1 is to establish the basic painting features for ComfyUI.

So you have the preview and a button to continue the workflow, but no mask, and you would need to add a Save Image after this node in your workflow.

It's solvable; I've been working on a workflow for this for like 2 weeks, trying to perfect it for ComfyUI, but no matter what you do there are usually some kind of artifacts. It's a challenging problem to solve. Unless you really want to use this process, my advice would be to generate the subject smaller, and then crop in and upscale instead.

Also, if this is new and exciting to you, feel free to post.

Then it uses Grounding Dino to mask portions of the image to animate with AnimateLCM.

I do that a lot.

New users of Civitai should be aware that the PNG (which contains the metadata) can only be downloaded from the "image view".

I would like to know if that is due to some reason other than that images that large take a long time.

…so I would assume generating 4 images (with the `batch_size` property) would give me four images with seeds `1`, `2`, `3`, and `4`.

I think the intended workflow here is to just press the Queue Prompt button several times.

/* Put custom styles here */ .comfy-multiline-input { font-size: 10px; }

ComfyShop has been introduced to the ComfyI2I family.

With Masquerade's nodes (install using the ComfyUI node manager), you can maskToregion, cropByregion (both the image and the large mask), inpaint the smaller image, pasteByMask into the smaller image, then pasteByRegion into the bigger image.
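The Masquerade-style flow above (mask to region, crop, inpaint the small crop, paste back) boils down to two simple operations: finding the mask's bounding box and writing the edited patch back into place. A bare-bones sketch with nested lists standing in for image tensors (function names are my own, not the actual node implementations):

```python
def mask_bbox(mask):
    """Bounding box (left, top, right, bottom) of the nonzero mask area."""
    rows = [y for y, row in enumerate(mask) if any(row)]
    cols = [x for x in range(len(mask[0])) if any(row[x] for row in mask)]
    return min(cols), min(rows), max(cols) + 1, max(rows) + 1

def paste_region(image, patch, left, top):
    """Write an edited (e.g. inpainted) patch back into the full image."""
    for dy, row in enumerate(patch):
        for dx, value in enumerate(row):
            image[top + dy][left + dx] = value
    return image

# Crop only the masked area, "inpaint" that small region, paste it back:
mask = [[0, 0, 0, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
left, top, right, bottom = mask_bbox(mask)  # (1, 1, 3, 3)
```

Working on just the cropped region is what keeps the inpaint fast and detailed on large images; the paste step is why the rest of the picture stays untouched.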
How to Magically Resize Your Images: The 1024px Rule That Will Change Everything.