ComfyUI image to image (Reddit)
The hard part is knowing when the image is ready to be retrieved and getting the image.

I want to upscale my image with a model, and then select the final size of it. For example, I can load an image, select a model (4xUltraSharp, for example), and select the final resolution (from 1024 to 1500, for example).

First of all, there is a 'heads-up display' (top left) that lets you cancel the Image Choice without finding the node (plus it lets you know that you are paused!).

It will even try to load things that aren't images if you don't provide a matching pattern for it - this is the main problem, really: it uses the pattern matching from the "glob" Python library, which makes it hard to specify multiple file extensions at once.

I'm having the same problem.

The latest version adds two main things: in the main settings menu you can control the location of the HUD that shows what node is currently running (or turn it off), so it doesn't clash with other GUI elements.

I started with ComfyUI 3 days ago. This is what I have so far (using the custom nodes to reduce the visual clutter).

The cloth was masked, but in the result image the color of the cloth changed.

So I can't give a simple answer, but I'd say if you're still interested and need some help, we can join a Discord call or something and I can help.

So I use the batch picker, but I can't use that with the Efficiency nodes.

Simply add LoRAs into your workflow: https://civitai.com/search/models?baseModel=SDXL%201.0&modelType=LORA&sortBy=models_v8&query=details

So I tried to create the outpainting workflow from the ComfyUI example site. I can get it to create images functionally similar to StableUI, but occasionally it will give me images like you were getting, with the low quality, thick outlines, and pixel artifacts.

(Using SD webUI before.) I am getting a blurry image when using the "Realities Edge XL • LCM+SDXLTurbo" model in ComfyUI. I got the same issue in SD webUI, but after using sdxl-vae-fp16-fix the images are good. When I try the same fix here, it's not working.

I'm not aware of an image browser node that lets you look at earlier generations within the UI.

Appreciate you just looking into it.

I understand that ChatGPT is great for prompt and text-to-image work, but it obviously can't do everything I want for images.

I found one that doesn't use SDXL but can't find any others.

Hi all! Was wondering, is there any way to load an image into ComfyUI and read the generation data from it? I know dragging the image into ComfyUI loads the entire workflow, but I was hoping I could load an image and have a node read the generation data like prompts, steps, sampler, etc.

With ControlNet I can input an image and begin working on it.

It works beautifully to select images from a batch, but only if I have everything enabled when I first run the workflow.

The goal is to take an input image and a float between 0 and 1; the float determines how different the output image should be. So 0.01 would be a very, very similar image, 0.2 would give a kinda-sorta similar image, and 1.0 would be a totally new image.

I want ONE part of an image … say a hand or a necklace or hat … and just superimpose JUST that into the other image.
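A quick illustration of the glob limitation mentioned above (nothing ComfyUI-specific, just standard-library Python; the folder name is made up): glob only matches one pattern at a time, so covering several image extensions means merging several searches yourself.

```python
import glob
import os

# glob matches one pattern at a time, so "*.png" silently skips .jpg files.
# Covering several image types means merging the results of several patterns.
folder = "input_images"  # example folder name
patterns = ("*.png", "*.jpg", "*.jpeg", "*.webp")

files = []
for pattern in patterns:
    files.extend(glob.glob(os.path.join(folder, pattern)))
files.sort()

print(f"found {len(files)} images")
```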
So I've used OpenPose to get the pose right and a prompt to create the image, which I'm happy with as a version 1. The problem is when I need to make alterations but keep the image the same; I've tried inpainting to change eye colour or add a bit of hair etc., but the image quality goes to shit and the inpainting isn't really doing what I want.

In truth, 'AI' never stole anything, any more than you 'steal' from the people whose images you have looked at when their images influence your own art; and while anyone can use an AI tool to make art, having an idea for a picture in your head and getting any generative system to actually replicate it takes a considerable amount of skill and effort.

… and spit it out in some shape or form.

Can I ask what the problem was with Load Image Batch from WAS? It has a "random" mode that seems to do what you want.

ComfyUI only uses tensors temporarily converted to PIL for internal processing purposes; they are not passed as output from the node.

A suite of nodes for ComfyUI called ColorMod, which uses the fact that image outputs in ComfyUI are of 32-bit float type to save images as 16-bit files instead of 8-bit files.

I haven't been able to replicate this in Comfy.

Specifically, the padded image is sent to the ControlNet as pixels via the "image" input, and the padded image is also sent, VAE-encoded, to the sampler as the latent image.

Intel provides their own Docker images for PyTorch and TensorFlow, but I don't think they've been updated in a while.

Along with the normal image preview, the other methods are: Latent Upscaled 2x, and Hires fix 2x (two-pass img).

Is it possible to do that in ComfyUI? I'm struggling to find a workflow that allows image/img input into ComfyUI and uses SDXL.

I liked the ability in MJ to choose an image from the batch and upscale just that image.

Initial Input block - will load images in two ways: 1) direct load from HDD, 2) load from a folder (picks the next image when generated). Prediffusion - a nested node (requires nested nodes to load correctly); this creates a very basic image from a simple prompt and sends it as a source.

I save only the best images with their respective data. Also, I sometimes put images from the same generation batch into different folders, for example Best, Good, etc.

It is a simple way to compare these methods; it is a bit messy, as I have no artistic cell in my body.

In this case, the image from Comfy has some extra glitches. It's never clear what triggered it, but I can't get it to generate normal images again without relaunching ComfyUI.

Nearest-exact is a crude image upscaling algorithm that, when combined with your low denoise strength and step count in the KSampler, means you are basically doing nothing to the image when you denoise it, leaving all the jagged pixels introduced by your initial upscale.

So I select a face I want to do img2img on in Photoshop, paste it into ComfyUI, hit Ctrl-Enter, and I get the redrawn face.

No, in txt2img.
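To illustrate the ColorMod point about 32-bit float outputs: an 8-bit PNG keeps only 256 levels per channel, while a 16-bit PNG keeps 65536, so quantizing the float data to uint16 preserves much more of that precision. This is only a rough sketch of the idea using OpenCV, not ColorMod's actual code:

```python
import cv2
import numpy as np

def save_16bit_png(image_f32: np.ndarray, path: str) -> None:
    """Write a float32 RGB image in [0, 1] as a 16-bit-per-channel PNG.

    8-bit output quantizes to 256 levels per channel; 16-bit keeps 65536,
    which is where the extra precision of float outputs actually survives.
    """
    arr = np.clip(image_f32, 0.0, 1.0)
    arr16 = np.round(arr * 65535.0).astype(np.uint16)
    cv2.imwrite(path, cv2.cvtColor(arr16, cv2.COLOR_RGB2BGR))  # OpenCV expects BGR

# usage with a dummy image standing in for a ComfyUI output:
save_16bit_png(np.random.rand(64, 64, 3).astype(np.float32), "preview_16bit.png")
```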
Hello, fellow ComfyUI users, this is my workflow for testing different methods to improve image resolution.

So you have the preview and a button to continue the workflow, but no mask, and you would need to add a Save Image node after this node in your workflow.

Delve into the advanced techniques of Image-to-Image transformation using Stable Diffusion in ComfyUI. Understand the principles of the Overdraw and Reference methods, and how they can enhance your image generation process.

The problem here is the step after your image loading, where you scale up the image using the "Image Scale to Side" node.

However, when I use ComfyUI and your "Seed (rgthree)" node as an input to KSampler, the saved images are not reproducible when image batching is used.

Probably not what you want, but the preview chooser / image chooser node is a custom node that pauses the flow while you choose which image (or latent) to pass on to the rest of the workflow.

This workflow generates an image with SD1.5, then uses Grounding DINO to mask portions of the image to animate with AnimateLCM. It animates 16 frames and uses the looping context options to make a video that loops. The denoise on the video generation KSampler is at 0.8 so that some of the structure of the originally generated image is retained.

Then pass the new image off to the rest of the nodes…

If you do them as a batch, they should all appear together in the UI but be replaced when the next batch is completed.

There's "latent upscale by", but I don't want to upscale the latent image.

You can then load or drag the following image into ComfyUI to get the workflow; this image contains the workflow (https://comfyanonymous.github.io/ComfyUI_examples/flux/flux_dev_example.png). Put the flux1-dev.sft file in your ComfyUI/models/unet/ folder.

The repo you linked looks very good for local use.

After downloading Perplexity Pro, I saw the option for SDXL, which made me look into stablediffusionart.com.

However, my goal is to recreate the exact same image. I understand that the DPM++ 2M sampler can do this; at least in Auto1111 it repeats the same image all the time. Is there something like this for ComfyUI, including SDXL? When I try to reproduce an image, I get a different image.

I've built many ComfyUI web apps for personal business purposes and have helped others on Reddit as well.

After borrowing many ideas, and learning ComfyUI… Hi everyone! I wanted to share with you that I've updated my workflow to version 2.0! You can now find it at the following link:

If you have created a 4-image batch, and later you drop the 3rd one into Comfy to generate with that image, you don't get the third image, you get the first.

Sorry if I seemed greedy, but for upscale image comparing, I think the best tool is Upscale.media, which can zoom in and move around simultaneously, making it easy to check details of big images. Thanks a lot for this amazing node! I've been wanting it for a while to compare various versions of one image.

The one thing I would add is that a lot of the time you spend learning ComfyUI, you will also be learning about the underlying technologies, since you can combine anything together.

I think DALL-E 3 does a good job of following prompts to create images, but Microsoft Image Creator only supports 1024x1024 sizes, so I thought it would be nice to outpaint with ComfyUI.

Those detail LoRAs are 100% compatible with ComfyUI, and yes, that's the first, second and third recommendation I would give.

Ty, I will try this.

Almost identical.

I am building a "live" image-to-image pipeline, and noticed when debugging that the "save image" or even "preview image" nodes are the slowest part of the workflow! I'm getting latencies of about 0.4 sec for the save image operator and about 0.25 sec for the preview operator.
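The "drag the image in to get the workflow" behaviour and the earlier question about reading prompts and seeds from an image both come down to the same thing: ComfyUI embeds its graph as JSON in PNG text chunks when it saves. A minimal sketch of reading those chunks outside the UI is below; the file name is just an example, and images that were edited or re-encoded elsewhere may have lost the chunks.

```python
import json
from PIL import Image

def read_comfyui_metadata(path: str) -> dict:
    """Pull the JSON that ComfyUI embeds in PNG text chunks when saving.

    ComfyUI normally writes two entries: "prompt" (the executed graph) and
    "workflow" (the editor graph). Either can be missing on edited files.
    """
    info = Image.open(path).info
    meta = {}
    for key in ("prompt", "workflow"):
        raw = info.get(key)
        if raw:
            meta[key] = json.loads(raw)
    return meta

# example file name, not a real path:
# data = read_comfyui_metadata("flux_dev_example.png")
# print(sorted(data))
```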
I tried installing the ComfyUI-Image-Selector plugin, which claims that I can simply mute or disconnect the Save Image node, etc., and then re-enable it once I make my selections.

This Docker image does all of the annoying stuff for you to get the Intel SDK installed, and sets up PyTorch and the necessary Intel extensions so that you can run this "ComfyUI" stable diffusion frontend.

Things like Automatic1111, ComfyUI & Forge seem overwhelming when I only want to learn about specific purposes.

These nodes pause the workflow and display previews of images, allowing you to select the image or images you want to proceed with.

I think I have a reasonable workflow that allows you to test your prompts and settings and then "flip a switch", put in the image numbers you want to upscale, and rerun the workflow.

Very curious to hear what approaches folks would recommend! Thanks.

I just keep an image viewer open in a separate window to look at the contents of the output directory.

In A1111 the image metadata always contains the correct seed for each image, allowing me to reproduce the same image if I want to.

When I use ComfyUI image to image with a mask, the color of the masked area changes as well; that doesn't happen with 1111. Need help, thanks.

If you are interested in learning about how things work behind the scenes, then you're better off investing the time into learning ComfyUI.

I'll be creating a cloud-focused image that contains more than ComfyUI quite soon. The intention will be to have ComfyUI, A1111, Fooocus, SD.Next, Invoke and Kohya initially - naturally it'll be able to share models, and I'll better test it locally.

… (which is getting img2img and KSampler processing). And it's hard to find other people asking this question on here. How can this be accomplished?

With Masquerade's nodes (install using the ComfyUI node manager), you can maskToRegion, cropByRegion (both the image and the large mask), inpaint the smaller image, pasteByMask into the smaller image, then pasteByRegion into the bigger image.

Basic Image to Image in ComfyUI - YouTube: a short beginner video about the first steps using Image to Image (Aug 2, 2024).

So, good news first: to get things into Comfy, you can just paste an image that you copied in Photoshop into a Load Image node.

I have 2 images.

However, there's a specific constraint I'd like to implement: ensuring the shorter side of the image is always 1024 pixels, with the longer side adjusted proportionally to maintain the aspect ratio.

** Workflow Update to Version 2.0 **

With ComfyUI, what technique should I use to embed a predetermined image into an image that is yet to be generated? For example, I want to create an image of a person wearing a t-shirt, and I need ComfyUI to place a specific image onto the t-shirt.

When there are only 3 images worth keeping out of a log file that shows 100-200 generations, it's hard to quickly find the information I need.

Then comes the higher resolution by upscaling.

Bit of an update to the Image Chooser custom nodes - the main things are in this screenshot.

A bit of an obtuse take.

I don't think I understand everything that's being said/explored here, and I don't think the person who makes these nodes is super sure either ("Or I might just be graphing…").
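For the "shorter side always 1024" constraint above, the resize math is simple enough to sanity-check outside the graph; here is a minimal Pillow sketch (file names are examples) doing the same thing you would configure in a scale-to-side style node:

```python
from PIL import Image

def resize_shorter_side(img: Image.Image, target: int = 1024) -> Image.Image:
    """Scale so the shorter side equals `target`, keeping the aspect ratio."""
    w, h = img.size
    scale = target / min(w, h)
    return img.resize((round(w * scale), round(h * scale)), Image.LANCZOS)

# example usage with made-up file names:
# resize_shorter_side(Image.open("input.png")).save("input_1024.png")
```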
In ComfyUI, the data exchanged as the IMAGE type is always a 4-dimensional torch.Tensor with dimensions (b, h, w, c).

I've been exploring the possibility of using an image as input and generating an output image that retains the original input's dimensions.
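Following on from the tensor-layout note: IMAGE tensors in ComfyUI are floats in the 0-1 range shaped (batch, height, width, channels), so hopping to PIL and back for custom processing is only a few lines. A minimal sketch, not taken from any particular node pack:

```python
import numpy as np
import torch
from PIL import Image

def tensor_to_pil(image: torch.Tensor) -> Image.Image:
    """Convert one ComfyUI IMAGE tensor (b, h, w, c, floats in 0..1) to PIL."""
    arr = image[0].clamp(0.0, 1.0).cpu().numpy()      # first item in the batch
    return Image.fromarray((arr * 255.0).round().astype(np.uint8))

def pil_to_tensor(img: Image.Image) -> torch.Tensor:
    """Convert a PIL image back to the (1, h, w, c) float tensor ComfyUI expects."""
    arr = np.array(img.convert("RGB"), dtype=np.float32) / 255.0
    return torch.from_numpy(arr).unsqueeze(0)
```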