Upscale ComfyUI Reddit


Clearing up blurry images has its practical uses, but most people are looking for something like Magnific, where it actually fixes all the smudges and messy details of SD-generated images and at the same time produces very clean and sharp results.

Tried it; it is pretty low quality, and you cannot really diverge from CFG 1 (so, no negative prompt) or the picture gets baked instantly. You also cannot go higher than 512 to 768 resolution (which is quite a bit lower than 1024 + upscale), and when you ask for slightly less rough output (4 steps), as in the paper's comparison, it gets slower.

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch": put in the image numbers you want to upscale and rerun the workflow.

I tried all the possible upscalers in ComfyUI (LDSR, latent upscale, several models such as NMKD, the Ultimate SD Upscale node, "hires fix" (yuck!), the Iterative Latent Upscale via pixel space node (mouthful)), and I even bought a license from Topaz to compare the results with FastStone (which is great, by the way, for this type of work).

Use "upscale by" 0.5 if you want to divide by 2 after upscaling by a model. To find the downscale factor in the second part, calculate: factor = desired total upscale / fixed model upscale, e.g. 2.0 / 4.0 = 0.5. This will allow detail to be built in during the upscale. Instead, I use Tiled KSampler with a low denoise. Because the upscale model of choice can only output a 4x image and they want 2x.

The final node is where ComfyUI takes those images and turns them into a video.

My fiancé was killed by a drunk driver; the photo is a mugshot of the individual that I want to use in a big sign, but it needs to be upscaled so it doesn't lose quality.

The upscale not being latent creating minor distortion effects and/or artifacts makes so much sense!
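The factor arithmetic above can be sketched in a couple of lines; `downscale_factor` is a hypothetical helper name for illustration, not a ComfyUI node:

```python
def downscale_factor(desired_total: float, model_factor: float) -> float:
    """Scale factor for the resize step that follows a fixed-factor upscale model."""
    return desired_total / model_factor

# 2x wanted overall, but the model only outputs 4x: resize by 0.5 afterwards.
print(downscale_factor(2.0, 4.0))  # -> 0.5
```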
And latent upscaling takes longer for sure; no wonder my workflow was so fast. It depends what you are looking for.

I was just using Sytan's workflow with a few changes to some of the settings, and I replaced the last part of his workflow with a two-step upscale using the refiner model via Ultimate SD Upscale, like you mentioned.

Upscale to 2x and 4x in multiple steps, both with and without a sampler (all images are saved). Multiple LoRAs can be added and easily turned on/off (currently configured for up to three LoRAs, but more can easily be added); details and bad-hands LoRAs are loaded. I use it with DreamShaperXL mostly and it works like a charm. CNet strength 0.9, end_percent 0.5, euler, sgm_uniform.

For example, if you start with a 512x512 empty latent image, apply a 4x model, then apply "upscale by" 0.5, you end up at 1024x1024.

Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. Please keep posted images SFW. And above all, BE NICE.

This is the image I created using ComfyUI, utilizing DreamShaperXL 1.0 Alpha + SDXL Refiner 1.0.

I have a 4090 rig, and I can 4x the exact same images at least 30x faster than using ComfyUI workflows.

It's high quality, and it is easy to control the amount of detail added using control scale and restore CFG, but it slows down at higher scales faster than Ultimate SD Upscale does.

That's exactly what I ended up planning. I'm a newbie to ComfyUI, so I set up Searge's workflow, then copied the official ComfyUI i2v workflow into it, and I pass into the node whatever image I like.

Increasing the mask blur lost details, but increasing the tile padding to 64 helped.

Like many XL users out there, I'm also new to ComfyUI and very much just a beginner in this regard.

Now, transitioning to Comfy, my workflow continues at the 1280x1920 resolution.

Latent upscale is different from pixel upscale.
Here are details on the workflow I created. It is an img2img method where I use the BLIP Model Loader from WAS to set the positive caption.

The best method, as said below, is to upscale the image with a model (then downscale, if necessary, to the desired size, because most upscalers do 4x and that is often too big to process), then send it back to VAE encode and sample it again.

The only approach I've seen so far is using the Hires fix node, where its latent input comes from AI upscale > downscale image nodes. Also, both have a denoise value that drastically changes the result.

At the moment I start the first sampling at 512x512, upscale with 4x ESRGAN, downscale the image to 1024x1024, and sample it again, like the docs tell you. That's because latent upscale turns the base image into noise (blur).

I created a workflow with Comfy for upscaling images - that is, using an actual SD model to do the upscaling, which, afaik, doesn't yet exist in ComfyUI. I've so far achieved this with Ultimate SD image upscale, using the 4x-Ultramix_restore upscale model.

Hires.fix and Loopback Scaler either don't produce the desired output, meaning they change too much about the image (especially faces), or they don't increase the details enough, which causes the end result to look too smooth (sometimes losing details) or even blurry and smeary.

If you use Iterative Upscale, it might be better to add noise using techniques like noise injection or an unsampler hook.

The resolution is okay, but if possible I would like to get something better. I have a custom image resizer that ensures the input image matches the output dimensions.

Look at this workflow: the issue I think people run into is that they think the latent upscale is the same as the Latent Upscale from Auto1111. It's why you need at least 0.5 denoise. Upscale and then fix will work better here. It uses CN tile with Ultimate SD Upscale.
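The model-upscale, downscale, VAE-encode, resample chain described above can be written down in ComfyUI's API ("prompt") JSON format. This is a sketch, not a drop-in file: the `class_type` names are ComfyUI's stock nodes, but the checkpoint and upscaler filenames are placeholders for whatever you actually have installed.

```python
import json

# Pixel-space second pass: model upscale -> bicubic downscale -> VAE encode
# -> low-denoise resample. Links are [source_node_id, output_index].
prompt = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "dreamshaperXL.safetensors"}},   # placeholder
    "2": {"class_type": "LoadImage", "inputs": {"image": "base_512.png"}},
    "3": {"class_type": "UpscaleModelLoader",
          "inputs": {"model_name": "4x-UltraSharp.pth"}},          # placeholder
    "4": {"class_type": "ImageUpscaleWithModel",                   # 512 -> 2048
          "inputs": {"upscale_model": ["3", 0], "image": ["2", 0]}},
    "5": {"class_type": "ImageScaleBy",                            # 2048 -> 1024
          "inputs": {"image": ["4", 0],
                     "upscale_method": "bicubic", "scale_by": 0.5}},
    "6": {"class_type": "VAEEncode",
          "inputs": {"pixels": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "same prompt as the first pass", "clip": ["1", 1]}},
    "8": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "", "clip": ["1", 1]}},
    "9": {"class_type": "KSampler",                                # gentle resample
          "inputs": {"model": ["1", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "positive": ["7", 0], "negative": ["8", 0],
                     "latent_image": ["6", 0], "denoise": 0.3}},
    "10": {"class_type": "VAEDecode",
           "inputs": {"samples": ["9", 0], "vae": ["1", 2]}},
    "11": {"class_type": "SaveImage",
           "inputs": {"images": ["10", 0], "filename_prefix": "second_pass"}},
}

# A running ComfyUI instance will accept this as
# POST http://127.0.0.1:8188/prompt with body json.dumps({"prompt": prompt}).
```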
Then plug the output from this into a 'latent upscale by' node set to whatever you want your end image to be (lower values like 1.5 are usually a better idea than going 2+ here, because latent upscale introduces noise, which requires an offset denoise value in the following ksampler), then into a second ksampler at 20+ steps set to probably over 0.55 denoise.

Here is a workflow that I use currently with Ultimate SD Upscale: 0.6 denoise and either: CNet strength 0.9, euler, sgm_uniform…

There are also "face detailer" workflows for faces specifically.

Hello, fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution. It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body.

Because the SFM uses the same assets as the game, anything that exists in the game can be used in the movie, and vice versa.

Images are too blurry and lack detail; it's like upscaling any regular image with some traditional method.

The issue is likely caused by a quirk in the way MultiAreaConditioning works: its sizes are defined in pixels.

Tried the llite custom nodes with lllite models and was impressed.

You either upscale in pixel space first and then do a low-denoise second pass, or you upscale in latent space and do a high-denoise second pass.

Grab the image from your file folder and drag it onto the entire ComfyUI window; it will replicate the image's workflow and seed.

Adding LoRAs in my next iteration.

Switch the toggle to upscale, make sure to enter the right CFG, make sure randomize is off, and press queue.

Image upscale is less detailed, but more faithful to the image you upscale.

I wanted to know what difference they make, and they do! Credit to Sytan's SDXL workflow, which I reverse engineered, mostly because I'm new to ComfyUI and wanted to figure it all out.
Upscale x1.5 ~ x2 - no need for a model; it can be a cheap latent upscale. Sample again at denoise 0.5 or above.

My workflow runs about like this: [ksampler] > [VAE decode] > [Resize] > [VAE encode] > [ksampler #2 thru #n]. I typically use the same or a closely related prompt for the additional ksamplers, same seed and most other settings, with the only differences among my (for example) four ksamplers being in the #2-#n positions.

Jan 13, 2024 · So I was looking through the ComfyUI nodes today and noticed that there is a new one, called SD_4XUpscale_Conditioning, which adds support for x4-upscaler-ema.safetensors (the SD 4x upscale model).

Custom nodes are Impact Pack for wildcards, rgthree because it's the shit, and Ultimate SD Upscale.

The same goes for Hires.fix and other upscaling methods like the Loopback Scaler script and SD Upscale.

"Latent upscale" is an operation in latent space, and I don't know any way to use the model mentioned above in latent space.

The workflow has a different upscale flow that can upscale up to 4x, and in my recent version I added a more complex flow that is meant to add details to a generated image.

The Source Filmmaker (SFM) is the movie-making tool built and used by Valve to make movies inside the Source game engine.

Search for upscale and click on Install for the models you want.

Then I upscale with 2x ESRGAN and sample the 2048x2048 again, and upscale again with 4x ESRGAN.

I want to replicate the "upscale" feature inside "extras" in A1111, where you can select a model and the final size of the image.

An alternative method is: make sure you are using the KSampler (Efficient) version, or another sampler node that has the 'sampler state' setting, for the first-pass (low resolution) sample.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.

Girl with flowers.

The upscale quality is mediocre, to say the least.
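The multi-pass chain above (sample at 512, 4x model, downscale to 1024, resample, 2x to 2048, resample, finish with 4x) is easy to sanity-check with a small helper; `resolution_ladder` is a made-up name for illustration:

```python
def resolution_ladder(start: int, passes: list) -> list:
    """Trace one image side length through (model_factor, resize_to) passes.

    resize_to=None means keep the model's output size as-is.
    """
    sizes = [start]
    for model_factor, resize_to in passes:
        upscaled = sizes[-1] * model_factor
        sizes.append(resize_to if resize_to is not None else upscaled)
    return sizes

# 512 base; 4x ESRGAN then downscale to 1024 and resample;
# 2x ESRGAN to 2048 and resample; final 4x ESRGAN pass.
print(resolution_ladder(512, [(4, 1024), (2, 2048), (4, None)]))
# -> [512, 1024, 2048, 8192]
```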
Edit: also, I wouldn't recommend doing a 4x upscale using a 4x upscaler (such as 4x Siax). 2 options here. Try VAEDecode immediately after latent upscale to see what I mean.

If it's a close-up, then fix the face first.

Still working on the whole thing, but I got the idea down.

Image generated with my new, hopefully upcoming "Instantly Transfer Face By Using IP-Adapter-FaceID: Full Tutorial & GUI For Windows, RunPod & Kaggle" tutorial and web app.

In A1111, I employed a resolution of 1280x1920 (with HiRes fix), generating 10-20 images per prompt.

Started to use ComfyUI/SD locally a few days ago, and I wanted to know how to get the best upscaling results. You just have to use the "upscale by" node with the bicubic method and a fractional value (0.5 if you want to divide by 2).

I have been generally pleased with the results I get from simply using additional samplers.

The latent upscale in ComfyUI is crude as hell, basically just a "stretch this image" type of upscale. I upscaled it to a…

Depending on the noise and strength, it ends up treating each square as an individual image.

Both of these are of similar speed.

This means that your prompt (a.k.a. positive image conditioning) is no longer a simple text description of what should be contained in the total area of the image; it is now a specific description for the area defined by the coordinates starting from x:0px, y:320px to x:768px, y:…

New to ComfyUI, so not an expert.

Links to different 3D models, images, articles, and videos related to 3D photogrammetry are highly encouraged, e.g. articles on new photogrammetry software or techniques.

You end up with images anyway after ksampling, so you can use those upscale nodes. Solution: click the node that calls the upscale model and pick one.
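Those conditioning-area coordinates are given in pixels, but SD applies them on the latent, which is 8x smaller per side; that is the quirk behind the MultiAreaConditioning confusion mentioned above. A sketch of the conversion, where the 448px height is an assumed value for illustration (the example's second y-coordinate is cut off in the text):

```python
LATENT_SCALE = 8  # SD's VAE downsamples pixel space 8x in each dimension

def area_to_latent(x: int, y: int, width: int, height: int) -> tuple:
    """Convert a pixel-space conditioning area to latent-space units.

    Coordinates should be multiples of 8; anything else lands between
    latent cells and gets truncated by the integer division.
    """
    return (x // LATENT_SCALE, y // LATENT_SCALE,
            width // LATENT_SCALE, height // LATENT_SCALE)

# x:0px, y:320px, 768px wide, assumed 448px tall:
print(area_to_latent(0, 320, 768, 448))  # -> (0, 40, 96, 56)
```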
If I feel I need to add detail, I'll do some image-blend stuff and advanced samplers to inject the old face into the process.

Where a 2x upscale at 30 steps took me ~2 minutes, a 4x upscale took 15, and this is with tiling, so my VRAM usage was moderate in all cases. The reason I haven't raised issues on any of the repos is that I am not sure where the problem actually exists: ComfyUI, Ultimate Upscale, or some other custom node entirely. But I probably wouldn't upscale by 4x at all if fidelity is important.

I generate an image that I like, then mute the first ksampler, unmute the Ultimate SD upscaler, and upscale from that.

Jul 23, 2024 · The standard ESRGAN 4x is a good jack-of-all-trades that doesn't come with a crazy performance cost, and if you're low on VRAM, I would expect you're using some sort of tiled upscale solution like Ultimate SD Upscale, yeah?

Hi there, I am using Ultimate SD Upscale, but it just does the same process again and again. Below is the console log - hoping to get some help:

Upscaling iteration 1 with scale factor 2
Tile size: 768x768
Tiles amount: 6
Grid: 2x3
Redraw enabled: True
Seams fix mode: NONE
Requested to load AutoencoderKL
Loading 1 new model

Thank you for your help! I switched to the Ultimate SD Upscale (with Upscale), but the results appear less real to me, and it seems like it is making my machine work harder.

So instead of one girl in an image, you get 10 tiny girls stitched into one giant upscaled image.

And when purely upscaling, the best upscaler is called LDSR.

Hello, A1111 user here, trying to make the transition to ComfyUI, or at least to learn ways to use both.

Maybe it doesn't seem intuitive, but it's better to use a 4x upscaler for a 2x upscale and an 8x upscaler for a 4x upscale.

I found a tile model but could not figure it out, as lllite seems to require the input image to match the output, so I am unsure how it works for scaling with tile.
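The tile count in that log is just ceiling division of the upscaled dimensions by the tile size (padding and overlap ignored in this sketch). Assuming a 768x1152 input at scale factor 2, which is one size consistent with the logged 2x3 grid:

```python
import math

def tile_grid(width: int, height: int, scale: float, tile: int = 768):
    """Columns, rows, and total tiles a tiled upscaler would process."""
    up_w, up_h = round(width * scale), round(height * scale)
    cols, rows = math.ceil(up_w / tile), math.ceil(up_h / tile)
    return cols, rows, cols * rows

# 768x1152 at 2x -> 1536x2304, i.e. a 2x3 grid of 768px tiles, 6 total:
print(tile_grid(768, 1152, 2))  # -> (2, 3, 6)
```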
Subsequently, I'd cherry-pick the best one and employ Ultimate SD Upscale for a 2x upscale.

I gave up on latent upscale. With it, I either can't get rid of visible seams, or the image is too constrained by the low denoise and so lacks detail.

ComfyUI's upscale-with-model node doesn't have an output size option like other upscale nodes, so one has to manually downscale the image to the appropriate size.

This is a community to share and discuss 3D photogrammetry modeling.

Thanks for all your comments.

This is the 'latent chooser' node - it works, but it is slightly unreliable.

I've played around with different upscale models in both applications, as well as the settings.

If it's a distant face, then you probably don't have enough pixel area to do the fix justice.

Basically, if I find the SDXL Turbo preview close enough to what I have in mind, I one-click the group toggle node and use the normal SDXL model to iterate on Turbo's result, effectively iterating with a second ksampler at a low denoise strength.

Good for depth and OpenPose; so far so good.

I usually use 4x-UltraSharp for realistic videos and 4x-AnimeSharp for anime videos.

The best method I had seen in a tutorial a while back (possibly for Automatic1111, but I only use ComfyUI now) would allow you to upscale your image by grid areas, potentially letting you specify the "desired grid size" on the output of an upscale and how many grids (rows and columns) you wanted.

I find if it's below 0.5 for a latent upscale you can get issues; I tend to use a 4x UltraSharp image upscale and then re-encode back through a ksampler at the higher resolution with a 0.3 denoise.

This breaks the composition a little bit, because the mapped face is most of the time too clean or has slightly different lighting, etc.

A step-by-step guide to mastering image quality.

So if you want 2.2x, upscale using a 4x model (e.g. UltraSharp), then downscale.
But I want you guys' opinion on the upscale. You can download both images from my Google Drive; I cannot upload them here since they are both 500 MB - 700 MB.

I've struggled with Hires.fix.

Latent upscale looks much more detailed, but gets rid of the detail of the original image.

The "Upscale and Add Details" part splits the generated image, upscales each part individually, adds details using a new sampling step, and after that stitches the parts together.

Hi! I was wondering if someone could help me upscale a photo to 8K? Or super high resolution.

"The training requirements of our approach consist of 24,602 A100-GPU hours – compared to Stable Diffusion 2.1's 200,000 GPU hours."

Hello, for more consistent faces I sample an image using the IPAdapter node (so that the sampled image has a similar face), then I latent upscale the image and use the ReActor node to map the same face used in the IPAdapter onto the latent-upscaled image.

This is done after the refined image is upscaled and encoded into a latent.

This result is the same as with the newest Topaz.

Making a bit of progress this week in ComfyUI.

A 0.3 denoise takes a bit longer, but gives more consistent results than a latent upscale.

Jan 5, 2024 · Click on Install Models in the ComfyUI Manager menu.

Hello! I am hoping to find a ComfyUI workflow that allows me to use Tiled Diffusion + ControlNet Tile for upscaling images. Can anyone point me toward a Comfy workflow that does a good job of this?

A few examples of my ComfyUI workflow to make very detailed 2K images of real people (cosplayers, in my case) using LoRAs, with fast renders (10 minutes on a laptop RTX 3060).
For now I got this: "A gorgeous woman with long light-blonde hair wearing a low-cut tank top, standing in the rain on top of a mountain, highly detailed, artstation, concept art, sharp focus, illustration, art by Artgerm and Alphonse Mucha, trending on Behance, very detailed, by the best painters."

Apply "upscale by" 0.5 to get a 1024x1024 final image (512 × 4 × 0.5 = 1024).

If you want more details, latent upscale is better, and of course noise injection will let more details in (you need noise in order to diffuse into details). Sure, it comes up with new details, which is fine, even beneficial for the second pass in a t2i process, since the miniature first pass often has some issues due to imperfections.

My guess is you downloaded a workflow from somewhere, but the person who created that workflow has changed the filename of the upscale model, and that's why your ComfyUI can't find it. And you may need to do some fiddling to get certain models to work, but copying them over works if you are super duper lazy.

Adding in Iterative Mixing KSampler from the early work on DemoFusion produces far more spatially consistent results, as shown in the second image.

I liked the ability in MJ to choose an image from the batch and upscale just that image.

Jan 8, 2024 · Learn how to upscale images using ComfyUI and the 4x-UltraSharp model for crystal-clear enhancements.

Usually I use two of my workflows: "Latent upscale" and then denoising at 0.5, or "Upscaling with model" and then denoising at 0.2 and resampling faces.

Point the install path in the Automatic1111 settings to the ComfyUI folder inside your ComfyUI install folder, which is probably something like comfyui_portable\comfyUI or something like that. Thanks!

Then plug the output from this into a 'latent upscale by' node set to whatever you want your end image to be (lower values like 1.5).

Hello, I did some testing of KSampler schedulers used during an upscale pass in ComfyUI.
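The denoise rules of thumb scattered through these comments (at least 0.5 after a latent upscale, roughly 0.2-0.3 after a pixel/model upscale) can be collected in one place. This is just a summary table of the values quoted above, not an official recommendation:

```python
def suggested_denoise(path: str) -> tuple:
    """Rule-of-thumb denoise ranges collected from the comments above."""
    ranges = {
        # latent upscale blurs the base image, so it needs a strong second pass
        "latent": (0.5, 0.6),
        # pixel/model upscale keeps detail, so keep the denoise gentle
        "pixel": (0.2, 0.3),
    }
    return ranges[path]

print(suggested_denoise("latent"))  # -> (0.5, 0.6)
```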
After 2 days of testing, I found Ultimate SD Upscale to be detrimental here. The downside is that it takes a very long time. Thanks.

I added a switch toggle for the group on the right.

Even with ControlNets, if you simply upscale and then de-noise latents, you'll get weird artifacts, like the face in the bottom right instead of a teddy bear. It added nothing.

Latent quality is better, but the final image deviates significantly from the initial generation.

Imagine it gets to the point where temporal consistency is solid enough, and generation time is fast enough, that you can play and upscale games or footage in real time to this level of fidelity.

I'm trying to find a way of upscaling the SD video up from its 1024x576.

From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion.

No matter what, Upscayl is a speed demon in comparison.

You don't need that many steps. From there you can use a 4x upscale model and run the sample again at a low denoise if you want higher resolution.

This is not the case.

After borrowing many ideas and learning ComfyUI…

That said, Upscayl is SIGNIFICANTLY faster for me.

If this can be solved, I think it would help lots of other people who might be running into this issue without knowing it.

It depends on how large the face in your original composition is. I generally do the ReActor swap at a lower resolution, then upscale the whole image in very small steps with very, very small denoise amounts.
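That last small-steps approach can be planned with a few lines; the 1.25x step factor here is an assumed illustration, not the commenter's exact value:

```python
def small_step_plan(start: int, target: int, step: float = 1.25) -> list:
    """Side lengths for repeated small upscale passes, each followed by a
    resample at a very low denoise, stopping exactly at the target size."""
    sizes = [start]
    while sizes[-1] < target:
        sizes.append(min(target, round(sizes[-1] * step)))
    return sizes

print(small_step_plan(1024, 2048))  # -> [1024, 1280, 1600, 2000, 2048]
```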

© 2018 CompuNET International Inc.