
ComfyUI image upscaling (Reddit roundup)

This way I can upscale my images while I am away from my system.

You don't need that many steps. From there you can use a 4x upscale model and run the sampler again at low denoise if you want higher resolution.

Still working on the whole thing, but I got the idea down. You guys have been very supportive, so I'm posting here first. Upscale to 2x and 4x in multiple steps, both with and without a sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but more can easily be added). Details and bad-hands LoRAs are loaded. I use it with DreamShaperXL mostly and it works like a charm.

Images are too blurry and lacking in detail; it's like upscaling any regular image with some traditional method.

So instead of one girl in an image you get ten tiny girls stitched into one giant upscaled image.

Do the same comparison with images that are much more detailed, with characters and patterns. Upscaled with the 4x-UltraSharp upscaler.

I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.

There is a face detailer node.

I upscaled it to a resolution of 10240x6144 px for us to examine the results.

So I basically want to select multiple images from my drive so that the upscaler scales all the images I have selected, using the same sampler settings and whatnot. There's "latent upscale by", but I don't want to upscale the latent image. Two options here.

You could try to push your denoise at the start of an iterative upscale to, say, 0.9 with end_percent 0. No negatives needed.

ComfyUI's upscale-with-model node doesn't have an output size option like other upscale nodes, so one has to manually downscale the image to the appropriate size.

It will replicate the image's workflow and seed.

Depending on the noise and strength, it ends up treating each square as an individual image.
ComfyUI: Ultimate Upscaler - Upscale any image from Stable Diffusion, MidJourney, or photo! - YouTube. Thanks.

With it, I either can't get rid of visible seams, or the image is too constrained by low denoise and so lacks detail.

In this ComfyUI tutorial we look at my favorite upscaler, the Ultimate SD Upscaler. In Episode 12 of the ComfyUI tutorial series, you'll learn how to upscale AI-generated images without losing quality.

Pause/preview images to proceed forward in the workflow.

I created a workflow with Comfy for upscaling images. All images except the last two were made by Masslevel. Here are details on the workflow I created: this is an img2img method where I use the Blip Model Loader from WAS to set the positive caption.

Upscale Image: these nodes can also be used to downscale, by setting either a direct resolution or going under 1 on the "Upscale Image By" node.

There's only so much you can do with an SD1.5 model, since its training was done at a low resolution.

Second pic.

Image generated with my new, hopefully upcoming model (Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI for Windows, RunPod & Kaggle, and web app).

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI. These comparisons are done using ComfyUI with default node settings and fixed seeds.

Upscale-by-model will take you up to 2x or 4x or whatever. I've played around with different upscale models in both applications, as well as settings.

Last two images are just "a photo of a woman/man".

Latent quality is better, but the final image deviates significantly from the initial generation.

2x upscale using Ultimate SD Upscale and Tile ControlNet.

Vase Lichen.

Until now I was launching a pipeline on each image one by one, but is it possible to have an automatic iterative task to do this? I would give the input directory and the pipeline would run by itself on each image.
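The "going under 1" trick with the upscale image nodes can be illustrated outside ComfyUI with a small Pillow sketch (this is not the node's actual implementation, just the same idea): one resize call handles both directions, and a factor below 1 downscales.

```python
from PIL import Image

def scale_by(img: Image.Image, factor: float) -> Image.Image:
    """Resize by a multiplier; factors < 1 downscale, factors > 1 upscale."""
    w, h = img.size
    return img.resize((round(w * factor), round(h * factor)), Image.LANCZOS)

# stand-in for a 4x model output that we want back down at 2x of the original
img = Image.new("RGB", (4096, 4096))
half = scale_by(img, 0.5)
print(half.size)  # (2048, 2048)
```

The same helper with `factor=2.0` would double the resolution instead, which is all the "Upscale Image By" node's multiplier setting amounts to dimensionally.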
It upscales the second image up to 4096x4096 (4x-UltraSharp) by default for simplicity, but that can be changed to whatever. Also, I did edit the custom node ComfyUI-Custom-Scripts' Python file string_function.py.

Go up by 4x, then downscale to your desired resolution using an image upscale node.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it and pass into the node whatever image I like. Easy prompting to achieve good results.

It's high quality, and it's easy to control the amount of detail added using control scale and restore cfg, but it slows down at higher scales faster than Ultimate SD Upscale does.

You either upscale in pixel space first and then do a low-denoise second pass, or you upscale in latent space and do a high-denoise second pass.

This next queue will then create a new batch of four images, but also upscale the selected images cached in the previous prompt. I liked the ability in MJ to choose an image from the batch and upscale just that image.

Both these are of similar speed. But use a ControlNet relevant to your image so you don't lose too much of your original image, and combine that with the iterative upscaler and a concatenated secondary positive prompt telling the model to add or improve detail.

Working on larger latents, the challenge is to keep the model still generating an image that is relatively coherent with the original low-resolution image. This is not the case.

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels.
Overall, latent upscale looks much more detailed, but gets rid of the detail of the original image.

But I want you guys' opinion on the upscale. You can download both images from my Google Drive; I cannot upload them here since they are each 500-700 MB.

This is done after the refined image is upscaled and encoded into a latent.

You have two different ways to perform a "Hires Fix" natively in ComfyUI: a latent upscale, or an upscaling model. You can download the workflows over on the Prompting Pixels website.

Enhance the image by adding HDR effects. It fixes issues with bad skin on the base model.

The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale. The issue I think people run into is that they think the latent upscale is the same as the Latent Upscale from Auto1111.

Hi, guys. No matter what, UPSCAYL is a speed demon in comparison.

This works best with Stable Cascade images, and might still work with SDXL or SD1.5, but it appears to work poorly with external (e.g. natural or MJ) images.

Girl with flowers.

The resolution is okay, but if possible I would like to get something better. Instead, I use Tiled KSampler with 0.6 denoise.

But I probably wouldn't upscale by 4x at all if fidelity is important; there's only so much you can do with an SD1.5 model, since their training was done at a low resolution.

I want to upscale my image with a model, and then select the final size of it. It is intended to upscale and enhance your input images. You can change the initial image size from 1024x1024 to other sizes compatible with SDXL as well.

(206x206) when I'm then upscaling in Photopea to 512x512, just to give me a base image that matches SD1.5's native resolution.

LOL, yeah, I push the denoising on Ultimate Upscale too, quite often, just saying "I'll fix it in Photoshop".
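A tiny NumPy sketch (illustrative only, not ComfyUI's actual code) of why a plain "stretch this image" upscale adds no information: nearest-neighbour scaling just repeats each existing value, so the enlarged latent contains exactly the same data as before.

```python
import numpy as np

def nearest_upscale(latent: np.ndarray, factor: int) -> np.ndarray:
    """Nearest-neighbour 'stretch': repeat each value `factor` times per axis."""
    return latent.repeat(factor, axis=-2).repeat(factor, axis=-1)

lat = np.arange(4, dtype=np.float32).reshape(2, 2)
big = nearest_upscale(lat, 2)
print(big.shape)  # (4, 4): the same four values, each now a 2x2 block
```

This is why a low-denoise pass over such an upscale changes almost nothing: there is no new detail for the sampler to refine unless the denoise is high enough to invent some.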
Latent upscale it, or use a model upscale, then VAE-encode it again and run it through the second sampler. Does anyone have any suggestions? Would it be better to do an iterative upscale?

A homogeneous image like that doesn't tell the whole story though ^^

Overall: image upscale is less detailed, but more faithful to the image you upscale.

The workflow is kept very simple for this test: load image, upscale, save image.

Welcome to the unofficial ComfyUI subreddit.

Also, Ultimate SD Upscale is available as a node; if you don't have enough VRAM, it tiles the image so that you don't run out of memory.

If I want larger images, I upscale the image. Nearest-exact is a crude image upscaling algorithm that, when combined with your low denoise strength and step count in the KSampler, means you are basically doing nothing to the image when you denoise it, leaving all the jagged pixels introduced by your initial upscale.

This is just a simple node built off what's given and some of the newer nodes that have come out.

(Optional) Upscale to 3x by default, using ControlNet to stick to the base image, with speed provided by Automatic CFG. Uses Face Detailer to enhance faces if required. Thanks for all your comments.

I'm new to the channel and to ComfyUI, and I come looking for a solution to an upscaling problem. You could add a latent upscale in the middle of the process, then an image downscale in pixel space at the end (use an upscale node with 0.X values) if you want to benefit from the higher-res processing.

I gave up on latent upscale.

Once I've amassed a collection of noteworthy images, my plan is to compile them into a folder and execute a 2x upscale in a batch.

This is the fastest way to test images against an image I have a higher-res sample of for testing.

Maybe it doesn't seem intuitive, but it's better to go with a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale. I only have 4 GB of Nvidia VRAM, so large images crash my process. Edit: Also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax).

Is this possible? Thanks!
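The "compile them into a folder and run a 2x upscale in a batch" idea can be sketched in plain Python. This is a minimal stand-in, not a ComfyUI node: Pillow resizing substitutes for whatever per-image pipeline you would actually invoke, and `upscale_folder` is a hypothetical helper name.

```python
import tempfile
from pathlib import Path
from PIL import Image

def upscale_folder(in_dir: Path, out_dir: Path, factor: float = 2.0) -> list:
    """Apply the same upscale settings to every PNG in a directory."""
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for path in sorted(in_dir.glob("*.png")):
        img = Image.open(path)
        w, h = img.size
        big = img.resize((int(w * factor), int(h * factor)), Image.LANCZOS)
        target = out_dir / path.name
        big.save(target)
        written.append(target)
    return written

# demo on two throwaway images in temp directories
src, dst = Path(tempfile.mkdtemp()), Path(tempfile.mkdtemp())
for name in ("a.png", "b.png"):
    Image.new("RGB", (64, 64)).save(src / name)
results = upscale_folder(src, dst)
print(len(results))  # 2
```

In ComfyUI itself, batch-loading custom nodes (or queueing one prompt per file over the API) play the role of this loop.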
I have been generally pleased with the results I get from simply using additional samplers. Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard. Thank you.

I have a 4090 rig, and I can 4x the exact same images at least 30x faster than using ComfyUI workflows. No attempts to fix JPG artifacts, etc.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned. (Seems pointless to go larger for SD1.5 models.)

We introduced a Freedom parameter that will drive how much new detail will be introduced in the upscaled image. FYI, values closer to 1 will stick to your input image more, while values closer to 10 allow more creative freedom but may introduce unwanted elements in your new image.

My problem is that my generation produces a 1-pixel line at the right/bottom of the image which is weird/white.

After 2 days of testing, I found Ultimate SD Upscale to be detrimental here.

There are also "face detailer" workflows for faces specifically.

You end up with images anyway after KSampling, so you can use those upscale nodes. Ugh.

Taking the output of a KSampler and running it through a latent upscaling node results in major artifacts (lots of horizontal and vertical lines, and blurring).

Before. Because the upscale model of choice can only output a 4x image and they want 2x, I have a custom image resizer that ensures the input image matches the output dimensions.

Generate the initial image at 512x768. Upscale x1.5 ~ x2 (no need for a model; it can be a cheap latent upscale). Sample again: denoise=0.5, euler, sgm_uniform, or CNet strength 0.9, end_percent 0.9, euler.

Also, I did edit the custom node ComfyUI-Custom-Scripts' Python file string_function.py, in order to allow the 'preview image' node to…

At the moment I generate my image with a detail LoRA at 512 or 786 to avoid weird generations; I then latent upscale by 2 with nearest and run it through a second KSampler with 0.5 denoise (needed for latent, idk why though).
The best method, as said below, is to upscale the image with a model (then downscale if necessary to the desired size, because most upscalers do 4x and the result is often too big to process), then send it back to VAE encode and sample it again.

I haven't been able to replicate this in Comfy. My workflow runs about like this: [KSampler] [VAE Decode] [Resize] [VAE Encode] [KSampler #2 thru #n]. I typically use the same or a closely related prompt for the additional KSamplers, same seed and most other settings, with the only differences among my (for example) four KSamplers in the #2-#n positions.

I was running some tests last night with SD1.5 and was able to get some decent images by running my prompt through a sampler to get a decent form, then refining while doing an iterative upscale for 4-6 iterations with low noise and a bilinear model, negating the need for an advanced sampler to refine the image.

After borrowing many ideas, and learning ComfyUI. (I am unable to upload the full-sized image.)

The title says it all: after launching a few batches of low-res images, I'd like to upscale all the good results. I tried all the possible upscalers in ComfyUI (LDSR, Latent Upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and even bought a license from Topaz to compare the results with FastStone (which is great, btw, for this type of work).

Here's an example with some math to double the original image's resolution.

Bella donna Italiana - 8K image - ComfyUI + DreamShaperXL + TiledDiffusion + Kohya Deep Shrink - latent upscale + CLIP Vision - and my poor 4060 Ti.

Along with the normal image preview, other methods are: latent upscaled 2x; hires fix 2x (two-pass img2img); image upscaled 4x using the nearest-exact upscale method.

I've so far achieved this with the Ultimate SD image upscale and the 4x-Ultramix_restore upscale model.

After.

And I'm sometimes too busy scrutinizing the city, landscape, object, vehicle or creature in which I'm trying to encourage insane detail to see what hallucinations it has manifested in the sky.
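The math for doubling resolution with a fixed-factor model usually comes down to one ratio: run the 4x model, then scale its output by target/model (0.5 here) so the net effect is 2x. A minimal sketch of just the size arithmetic; the helper name is mine, not a ComfyUI node:

```python
def post_model_size(w: int, h: int, model_factor: int = 4,
                    target_factor: float = 2.0) -> tuple:
    """Final dimensions after an Nx upscale model plus a corrective resize,
    so the net effect is target_factor (e.g. 4x model -> 2x final)."""
    scale = target_factor / model_factor          # 0.5 for a 4x model -> 2x net
    return int(w * model_factor * scale), int(h * model_factor * scale)

print(post_model_size(1024, 1536))  # (2048, 3072)
```

The same ratio is what you would type into an "Upscale Image By" node placed after the model: 0.5 for a 2x net result from a 4x model, 0.25 for a 1x (detail-only) pass.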
Using ComfyUI, you can increase the size of an image. For example, I can load an image, select a model (4x-UltraSharp, for example), and select the final resolution (from 1024 to 1500, for example).

It's nothing spectacular, but it gives good, consistent results without…

This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0 Alpha + SDXL Refiner 1.0. After that I send it through a face detailer and an Ultimate SD Upscale.

That is using an actual SD model to do the upscaling, which, afaik, doesn't yet exist in ComfyUI.

Ideally, I'd love to leverage the prompt loaded from the image metadata (optional), but more crucially, I'm seeking guidance on how to efficiently batch-load images from a folder for subsequent upscaling.

The final node is where ComfyUI takes those images and turns them into a video.

PS: If someone has access to Magnific AI, could you please upscale and post results for 256x384 (5 JPG quality) and 256x384 (0 JPG quality)?

Switch the toggle to upscale, make sure to enter the right CFG, make sure randomize is off, and press queue.

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow.

This means that your prompt (a.k.a. positive image conditioning) is no longer a simple text description of what should be contained in the total area of the image; it is now a specific description of what belongs in the area defined by the coordinates starting from x:0px y:320px, to x:768px y:…

Grab the image from your file folder and drag it onto the ComfyUI window. Save the image with metadata.

Is there benefit to upscaling the latent instead?

Some images made with my next model, Aether Real SDXL.
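Dragging a saved image onto the ComfyUI window works because the graph is embedded in the PNG's text chunks (commonly under the "workflow" and "prompt" keys, which is what "save image with metadata" preserves). A sketch of reading it back with Pillow; the demo writes its own tiny stand-in PNG first rather than assuming a real ComfyUI output:

```python
import json
import os
import tempfile
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def read_embedded_workflow(path: str):
    """Return the workflow JSON stored in a PNG's text chunks, if present."""
    info = Image.open(path).info          # Pillow exposes tEXt chunks here
    raw = info.get("workflow") or info.get("prompt")
    return json.loads(raw) if raw else None

# demo: embed a stub graph the way an image saver with metadata would
meta = PngInfo()
meta.add_text("workflow", json.dumps({"nodes": []}))
path = os.path.join(tempfile.mkdtemp(), "demo.png")
Image.new("RGB", (8, 8)).save(path, pnginfo=meta)
print(read_embedded_workflow(path))  # {'nodes': []}
```

Note that lossy formats like JPEG drop these chunks, which is one reason re-saved or re-encoded images no longer restore a workflow when dropped onto the window.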
The key observation here is that by using the EfficientNet encoder from Hugging Face, you can immediately obtain what your image should look like after Stage C if you were to create it with Stage…

The problem here is the step after your image loading, where you scale up the image using the "Image Scale to Side" node.

"LoadImage / Load Image" → "Upscale Model Loader / Load Upscale Model" → "ImageUpscaleWithModel / Upscale Image (using Model)" → "Image Save / Image Save" or "SaveImage / Save Image". That will upscale with no latent invention/injection of creative bits, but still intelligently adds pixels per ESRGAN upscaler models.

As my test bed, I'll be downloading the thumbnail from, say, my Facebook profile picture, which is fairly small.

Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements. A step-by-step guide to mastering image quality.

2x upscale using lineart ControlNet.

Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases.

Hires fix with an add-detail LoRA.
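A node chain like the one above can also be queued programmatically: ComfyUI accepts an API-format workflow JSON via an HTTP POST to its /prompt endpoint (by default on 127.0.0.1:8188). A minimal sketch, assuming you already exported a workflow with "Save (API Format)"; the stub graph and node ID below are illustrative placeholders, not a real export. The demo only builds and inspects the payload, so it runs without a server.

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188/prompt"  # default local ComfyUI address

def build_payload(workflow: dict, client_id: str = "upscale-batch") -> bytes:
    """Wrap an API-format workflow graph in the envelope /prompt expects."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_prompt(workflow: dict) -> None:
    """POST the workflow to a locally running ComfyUI instance."""
    req = urllib.request.Request(
        COMFY_URL,
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # raises URLError if ComfyUI is not running

# illustrative stand-in graph: real node IDs come from your exported workflow
demo_workflow = {"1": {"class_type": "LoadImage", "inputs": {"image": "in.png"}}}
payload = json.loads(build_payload(demo_workflow))
print(sorted(payload))  # ['client_id', 'prompt']
```

Looping `queue_prompt` over a directory of files is one way to get the batch-upscaling behaviour several commenters above are asking for.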