Best ComfyUI seed node (Reddit roundup).
Planned for version three: nodes that take multiple arbitrary inputs and broadcast them all, and an XY plot for other nodes.
Hey everyone! I'm thrilled to share some recent updates we've made to our project. The new update to Efficiency added a bunch of new nodes for XY plotting, and you can add inputs on the fly.
Generate from Comfy and paste the result in Photoshop for manual adjustments, OR…
IC-Light: for manipulating the illumination of images; GitHub repo and ComfyUI node by kijai (only SD1.5 for the moment).
I'd say it allows a very high level of access and customization, more than A1111, but with added complexity.
I don't know why you don't want to use Manager. If you install nodes with Manager, a new folder is created in the custom_nodes folder; if something is messed up after installation, you sort folders by modification date and remove the last one you installed. It's less than ideal, but a clever solution when working within those parameters.
What this really allows me to do is run a dozen initial generations with the rgthree seed node at random.
Go to the end of the file and rename the NODE_CLASS_MAPPINGS and NODE_DISPLAY_NAME_MAPPINGS (a sketch of what that registration block typically looks like follows below). Read the node's installation information on GitHub.
I forget what custom node it's in, but the integer output would work in your use case.
Improved control over text generation with temperature, top_p, top_k, and repetition_penalty.
I would like to see the raw output that node passed to the target node.
You either upscale in pixel space first and then do a low-denoise second pass, or you upscale in latent space and do a high-denoise second pass.
So far drum beats are good, drum + bass too.
If you drag a noodle off, it will give you some node options that have that variable type as an output. Ctrl+B or Ctrl+M (bypass/mute) also works on groups when you select them.
I've been loving ComfyUI and have been playing with inpaint and masking and having a blast, but I often switch to A1111 for the X/Y plot for the needed step values, and I'd like to learn how to do it in Comfy.
They were working up until about a week or so ago for me.
Connect the string to the seed input on your sampler (right-click on the sampler and convert seed to input / convert noise seed to input).
I've submitted a bug to both ComfyUI and Fizzledorf as I'm not sure which side will need to correct it. This appears to be because your nodes (and other nodes available for ComfyUI) do not save the correct seed.
To get more control than ComfyUI you would need to program in Python, calling the AI API directly.
So you will upscale just one selected image.
So me-as-a-noob mode: mute random nodes in the middle of the workflow.
rgthree.
I did a full reinstall of ComfyUI and it still doesn't work.
ComfyUI is amazing.
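To illustrate the NODE_CLASS_MAPPINGS / NODE_DISPLAY_NAME_MAPPINGS comment above, here is a minimal sketch of the registration block found at the end of a ComfyUI custom node file. The node class and names here are hypothetical; only the two-dictionary pattern is the ComfyUI convention.

```python
# Hypothetical custom node file; MySeedNode and its display name are invented
# for illustration, the structure is the usual ComfyUI registration pattern.

class MySeedNode:
    @classmethod
    def INPUT_TYPES(cls):
        # One integer widget named "seed"
        return {"required": {"seed": ("INT", {"default": 0, "min": 0,
                                              "max": 0xffffffffffffffff})}}

    RETURN_TYPES = ("INT",)
    FUNCTION = "get_seed"
    CATEGORY = "utils"

    def get_seed(self, seed):
        return (seed,)

# Renaming the keys here (e.g. to avoid a clash with another node pack)
# changes how ComfyUI registers and displays the node.
NODE_CLASS_MAPPINGS = {"MySeedNode": MySeedNode}
NODE_DISPLAY_NAME_MAPPINGS = {"MySeedNode": "My Seed Node"}
```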
tiktaalik111:
WAS suite has a wildcards node that reads from a text file and randomly chooses a line from it; it has a separate seed attached to it which can randomize after each generation.
I did a plot of all the samplers and schedulers as a test at 50 steps.
I see the node UI as a means to an end, like programming.
Open-sourced the nodes and example workflow in this GitHub repo, and my colleague Polina made a video walkthrough to help explain how they work! Follow the link to the Plush for ComfyUI GitHub page if you're not already here. Hope that helps.
Latent quality is better, but the final image deviates significantly from the initial generation.
Hello, I am trying to figure out the best nodes for painting simple colors and masks on top of an image while being able to route the color layers and mask layers separately for image editing and inpainting within Comfy.
Look in the JavaScript console for debugging information.
This way the values will randomize/increment etc.
trust_remote_code parameter for enhanced security when loading models.
The "seed" primitive was generated by double-clicking it out of the sampler. Sorry for the extra click.
comfyui.log, comfyui.prev.log, etc. Click it if it is unchecked and it should generate those files when the server is running.
Not how ComfyUI is built.
I'm trying to run a JSON workflow I got from this sub, but can't find the post after a lot of searching (line art workflow), so here's my problem.
This is outlined in the ComfyUI-Impact-Pack README.
ComfyUI generates its seeds on the CPU by default instead of the GPU like A1111 does.
Random selection node.
Looks like it has its own seed control that may be overriding the seed input from the rgthree Seed. I'm using the Use Everywhere Seed node for now.
I tried to paste the list here but Reddit was not formatting it properly.
ComfyUI prompting is different.
But if you choose increment/decrement, which would be the delta (rate of change), you can't change how much it increments/decrements.
The SUPIR First Stage (Denoise) node doesn't have to be used at all; you can just use the SUPIR Encode node on its own. Using the encoding but doing the decoding with the normal Comfy VAE decoder, however, gives pretty good quality with far less memory use, so that's also an option with my nodes.
I have this full version saved, as well as a simplified workflow with just the first step to quickly find good seeds and prompts, which I can then pass on to the full workflow to upscale. Run it with new seeds as many times as I'm happy and, when I get one, I make sure the seed is then fixed and, again, everything is cached.
The KSampler (efficient) then adds these integers to the current seed, resulting in image outputs for seed+0, seed+1, and seed+2.
But now I have a huge problem which I've been trying to solve for hours now.
There should be a node for int-to-string conversion, maybe in WAS suite.
With Style Aligned, the idea is to create a batch of 2 or more images that are aligned stylistically. An input image for style isn't necessary; you can use text prompts too. You can check it out on this GitHub Readme.
After the preview, the upscale node is muted.
For some reason, it doesn't do this, it just keeps on going.
I am trying to use wildcards with ComfyUI now for the first time.
Rerunning will do no more work.
Also, you can make a batch and set a node to select an index number from the batch (latent or image).
There is a text generated which is shown in the connected "show-text" node.
Oh well, it's not like it stopped working altogether. Choosing the option "pixel" rather than "latent" will fix the problem.
I converted variation_seed on the Hijack node to input, because this node has no "control_after_generate" option, and added a Variation Seed node to feed it with the variation seed instead.
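As a small illustration of the wildcard-with-its-own-seed idea described above (not the WAS suite implementation, just a sketch of the concept; the file name is made up):

```python
import random

def pick_wildcard_line(path: str, seed: int) -> str:
    """Pick one line from a wildcard text file, reproducibly for a given seed."""
    with open(path, encoding="utf-8") as f:
        lines = [line.strip() for line in f if line.strip()]
    rng = random.Random(seed)  # separate RNG so the sampler seed stays untouched
    return rng.choice(lines)

# Same seed -> same line; bump only this seed to "re-roll" the wildcard
# without changing the sampler seed.
print(pick_wildcard_line("hair_colors.txt", seed=42))
```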
Put the node in right after the checkpoint loader (I've been loading it before LoRAs, haven't tried it afterwards).
However, if you edit such images with software like Photoshop, Photoshop will wipe the metadata out.
Not sure about that KSampler Efficiency node.
s1: responsible for the details in b2.
For starters, the original image is in pretty bad shape right off the bat.
Problem is, when I set the batch to 80 in the latent nodes I get 80 completely unrelated images from the example workflow when I run the workflow.
Would love to see a seed or some such that allows you to control whether the prompt is regenerated on each execution without making changes.
ComfyUI LLM Node - update v2.
This tool revolutionizes the process by allowing users to visualize the MultiLatentComposite node, granting an advanced level of control over image synthesis.
It gives very nice photographic skin details and works for illustrations too.
It got discontinued.
I then use the ImpactInt node to convert it into something that can be used by pythongosssss' math expression node to keep the values between 0 and 1, and we then add 1 to this to get our index.
Unfortunately no.
"Depth" seems fairly self-explanatory; it must be something to do with how deep into the unet it goes when clamping the latents.
Personally I use DPM++ SDE Karras for most of my gens.
I am using the Primitive node to increment values like CFG, noise seed, etc. It'll turn the Primitive into a random number generator.
Finally, someone adds to ComfyUI what should have already been there! I know, I know, learning and experimenting.
For instance, if you did a batch of 4 and really just want to work on the second image, the batch index would be 1 (a sketch of that selection follows below).
Some UI to show you what is connecting to where (and what isn't, because of ambiguity).
1) Go and search in Google for the repo containing that node.
There are some custom nodes/extensions to make generation between the two interfaces compatible.
WAS suite has a number counter node that will do that.
Just tick the extra option, then you can see your generating queue and disable it if you don't like how it's working out.
You can add the seed to the filename by adding "KSampler.seed" if you use the KSampler node in your workflow.
Then you're using HiRes-Fix to scale this thing up to 3072x2048! Now keep in mind there's more than one way to skin a cat.
Instead, restart ComfyUI, start the workflow, and check if it works; if it doesn't, copy the console log, maybe I can figure out what's going on.
Use a fixed seed and play with the sigma_min parameter to get variations of the same beat/pattern.
I don't know the proper way to achieve this, but this works for me.
And now it will just gather dust.
Same if I set it to randomize; it will give me numbers outside of the initial seed/start and the maximum.
This is why I save the JSON file as a backup, and I only do this backup JSON for images I really value.
Using the "simple wildcards".
Even better, it has an option to only randomize the seed BEFORE generation, which is great for this type of workflow.
Could a similar feature be implemented for other nodes, such as the Apply IPAdapter, to test different values for…
Extract the contents and put them in the custom_nodes folder.
Hey everyone, got a lot of interest in the documentation we did of 1600+ ComfyUI nodes and wanted to share the workflow + nodes we used to do so using GPT-4.
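To make the zero-based batch index concrete, here is a minimal sketch of what a "select one image from a latent batch" operation does. This is not the exact ComfyUI source, just the idea; the dict layout mirrors how ComfyUI passes latents around.

```python
import torch

def select_from_batch(latent: dict, batch_index: int) -> dict:
    """Return a copy of a ComfyUI-style latent dict containing only one image.

    batch_index is zero-based, so the second image of a batch of 4 is index 1.
    """
    samples = latent["samples"]                    # shape: (batch, 4, h, w)
    picked = samples[batch_index:batch_index + 1]  # slice keeps the batch dim
    return {"samples": picked.clone()}

# Example: a batch of 4 dummy latents, keep only the second one.
dummy = {"samples": torch.zeros((4, 4, 64, 64))}
print(select_from_batch(dummy, 1)["samples"].shape)  # torch.Size([1, 4, 64, 64])
```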
Yes, you will need to install the nodes that way if it's not in the Manager's list; sometimes you get a new workflow and it will have a missing node, so you can install it via the Manager that way.
A node that uses ChatGPT to create SD and Dall-e3 prompts from your prompts, from an image, or both, based on art styles.
It's been a productive period, and after some intense coding sessions, we're rolling out a few enhancements that I believe will significantly improve the flexibility and functionality of our system.
A node that can take in/out basic dataset logic functions (if, then, combine, etc.).
I know dragging the image into ComfyUI loads the entire workflow, but I was hoping I could load an image and have a node read the generation data like prompts, steps, sampler, etc., and spit it out in some shape or form (a sketch of reading that metadata follows below).
Also, while the text files are easy to edit/add, if it could consume the…
For initial testing, I put a Hijack node at the front of the SDXL 1.0 KSampler chain (Base + Refiner), and Unhijack at the end, before the VAE Decode.
The Sampler also now has a new option for seeds, which is a nice feature.
A node that takes a text prompt and produces a .png from Dall-e3.
This one was a little rough to edit! Please let me know if any issues pop up! I'm not sure if I may have missed a bad edit! Besides that, I hope this is useful! Next video I'll be diving deeper into various ControlNet models and working on better quality results.
It just feels so unpolished.
No.
You could try to drop your denoise at the start of an iterative upscale to, say, 0.4, but use a ControlNet relevant to your image so you don't lose too much of your original image, and combine that with the iterative upscaler and concat a secondary positive prompt telling the model to add detail or improve detail.
In my opinion, this approach is the "proper" way to…
You can just drag the PNG into ComfyUI and it will restore the workflow.
ComfyShop has been introduced to the ComfyI2I family. ComfyShop phase 1 is to establish the basic painting features for ComfyUI. Enjoy a comfortable and intuitive painting app.
Added: dynamic torch_dtype selection for optimal performance on CUDA devices.
Problem with the SD3 triple CLIP loader (Comfy is up to date): I load the basic workflow from their Hugging Face example but I get the following error: Prompt outputs failed validation. TripleCLIPLoader: Value not in list: clip_name1: 'clip_g_sdxl_base.safetensors' not in []; Value not in list: clip_name2: 'clip_l_sdxl_base.safetensors' not in [].
There are multiple posts asking how to do this and no definitive answers.
I'm currently facing…
Great work! Only been messing about with it for a few minutes, but it might replace the Chat-GPT nodes in my workflows.
Then I unmute the save and run again to save the output.
For those who haven't seen these before: the variation nodes allow you to generate small variations on the initial noise (basically mixing in a fraction of the noise generated by an alternative seed) to produce slightly different images.
Hello r/comfyui, I put together a list of all the custom nodes that came out in May 2024.
Keyboard shortcuts:
- Delete/Backspace: delete selected nodes
- Ctrl + Backspace: delete the current graph
- Space: move the canvas around when held while moving the cursor
- Ctrl/Shift + Click: add clicked node to selection
- Ctrl + C / Ctrl + V: copy and paste selected nodes (without maintaining connections to outputs of unselected nodes)
- Ctrl + C / Ctrl + Shift + V: copy and paste selected nodes (maintaining connections from outputs of unselected nodes to inputs of pasted nodes)
The Incrementer, for example, has a set end number.
The length should be 1 in this case.
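Regarding reading generation data back out of an image: ComfyUI embeds the prompt and workflow as PNG text chunks, so a node (or a quick script) can read them as long as an editor like Photoshop hasn't stripped them. A minimal sketch with Pillow, assuming an untouched ComfyUI output file (the filename is hypothetical):

```python
import json
from PIL import Image

def read_comfy_metadata(path: str) -> dict:
    """Pull the generation data ComfyUI embeds in its PNGs, if still present."""
    img = Image.open(path)
    meta = {}
    # ComfyUI writes these as PNG text chunks; editing the file elsewhere may remove them.
    for key in ("prompt", "workflow"):
        if key in img.info:
            meta[key] = json.loads(img.info[key])
    return meta

data = read_comfy_metadata("ComfyUI_00001_.png")
print(list(data.keys()))  # e.g. ['prompt', 'workflow'] for an untouched ComfyUI PNG
```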
Here's a quick rundown of what's new:
I'm not sure if this is what you want: fix the seed of the initial image, and when you adjust a subsequent seed (such as in the upscale or FaceDetailer node), the workflow would resume from the point of alteration.
Earlier I made a mistake regarding that and the latents from it not being accepted by the conditioning node, but that's been fixed.
There are some settings, but I haven't really figured them out yet.
The seed generators that have a dropdown to select the seed generation type don't have the issue.
Random selection node (randomly selects checkpoint or LoRA or sampler etc.): I can generate random numbers to "randomly wander" through generation batches, so I can randomize steps and CFG and anything that requires numbers.
Instrumentals not so much; often it's just some cacophony, like they play off key.
b2: responsible for the smaller areas on the image.
ComfyUI LLM Node - update.
PSA: If you've used the ComfyUI_LLMVISION node from u/AppleBotzz, you've been hacked. I've blocked the user so they can't see this post, to give you time to address this if you've been compromised. Long story short: if you've installed and used that node, your browser passwords, credit card info, and browsing history have been sent to a Discord server.
This same way you could add any details you want from any node, by replacing the "KSampler" part with the text in "Node name for S&R" in its property window and "seed" with the widget you want.
Click on the green Code button at the top right of the page. When the tab drops down, click to the right of the URL to copy it.
I'd seriously work on getting higher quality output on step 1.
Batch of two images, Style Aligned on; edit: better examples.
If the graph hasn't changed, then all previous nodes are cached, and it picks up where it left off seamlessly.
Combine both methods: gen, draw, gen, draw, gen! Always check the inputs, disable the KSamplers you don't intend to use, and make sure to have the same resolution in Photoshop as in…
ToonCrafter itself does use a lot more VRAM due to its new encoding/decoding method; skipping that, however, reduces quality a lot.
I don't want the nodes to be the final interface.
Yes, I can save the images, but I need to be able to reproduce specific image generations for the latent data.
Draw in Photoshop then paste the result in one of the benches of the workflow, OR…
You can right-click on a node and change many selections to an input.
The repo isn't updated for a while now, and the forks don't seem to work either.
I use the Global Seed (Inspire) node from the ComfyUI-Inspire-Pack by Dr.Lt.Data.
Then navigate, in the command window on your computer, to the ComfyUI/custom_nodes folder and enter the command by typing git clone plus the copied URL.
Euler a, according to this site, is even better (I use it when I need an extra-soft look). DPM++ 2M Karras is good for super-low steps, but DPM++ SDE (normal or Karras) is better for higher.
Is it definitely an abandoned node? :(
When I get something that works, unmute the upscale node, decrement the…
SDXL, ComfyUI, and Stability AI: where is this heading?
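Since variation-seed nodes came up earlier (mixing a fraction of an alternative seed's noise into the base noise), here is a simplified sketch of the idea. It uses a plain linear mix; the actual custom nodes may use slerp or another blending scheme, so treat this as a conceptual illustration only.

```python
import torch

def mix_variation_noise(base_seed: int, alt_seed: int, strength: float,
                        shape=(1, 4, 64, 64)) -> torch.Tensor:
    """Blend a fraction of an alternative seed's noise into the base noise.

    Small strength = small visual change from the base seed's image.
    """
    base = torch.randn(shape, generator=torch.Generator().manual_seed(base_seed))
    alt = torch.randn(shape, generator=torch.Generator().manual_seed(alt_seed))
    mixed = (1.0 - strength) * base + strength * alt
    # Rescale so the result keeps roughly unit variance, like pure Gaussian noise.
    return mixed / mixed.std()

noise = mix_variation_noise(111, 222, strength=0.15)
print(noise.shape, float(noise.std()))
```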
r/StableDiffusion: Using DeepFace to prove that when training individual people, using celebrity instance tokens results in better trainings and that regularization is pointless.
Hi, I am struggling to find any help or tutorials on how to connect inpainting using the Efficiency loader. I'm new to Stable Diffusion so it's all a bit confusing. Does anyone have a screenshot of how it is connected? I just want to see what nodes go where.
Nodes that have failed to load will show as red on the graph. When finding one, this is the standard procedure I always try: go to the Manager and click "Install Missing Custom Nodes." If the node is listed, click and download.
Node-based workflows typically will never have a final interface, because nodes are designed to replace programming and a custom interface.
I think the noise is also generated differently: A1111 uses the GPU by default and ComfyUI uses the CPU by default, which makes using the same seed give different results.
XY Plot.
ComfyUI nodes missing.
Something like this.
Use increment instead of randomize?
Okay, ended up solving the problem by changing those 4 lines of code manually (it was a bug). Open IPAdapterPlus.py with a plain text editor, like Notepad (I prefer Notepad++ or VSCode).
Every Sampler node (the step that actually generates the image) on ComfyUI requires a latent image as an input.
Seed set to increment.
If you increase this above 1, you'll get more images from your batch, up to the max number in your original batch.
They also added a combined sampler for SDXL.
It provides several ways of distributing seed numbers to other nodes, all without the connecting lines! You just have to set the "control_after_generate" widget on nodes to "fixed" for it to work.
However, when I use ComfyUI and your "Seed (rgthree)" node as an input to KSampler, the saved images are not reproducible when image batching is used.
Today, even through ComfyUI Manager, where the FOOOCUS node is still available, when I install it the node is marked as "unloaded" and I cannot use it.
I need two nodes that I can't seem to find anywhere: any ideas how to get/install Seed and ClipInterrogate? I've found ClipInterrogate in a tool set, but it doesn't seem to work.
When running a batch, every image will have the exact same seed and metadata even when the seed is random, and even though they are entirely different images.
I am thinking of the scenario where you have generated, say, 1000 images with a randomized prompt and low quality settings, have selected the 100 best, and want to create high quality…
I JUST discovered it yesterday! I started experimenting with ComfyUI a couple of days ago, found the number of nodes required for a basic workflow stupidly high, so I was glad there were custom nodes that work just as ComfyUI should by default.
You can stop your generation anytime.
If they don't rerun, it means you didn't change their setting.
All of them are because Comfy does things differently than A1111.
There's a Random Number node that can output either a float or an integer within a user-defined range.
s2: responsible for the details in b1.
I am quite new to ComfyUI, so this may be a silly question, but how do you find a node!? I see in various how-tos to select this or that node, such…
One way to simplify and beautify node-based workflows is to allow…
I really don't enjoy having to run the whole setup and then cancel when it starts the KSampler, instead of just having an option to run just the preprocessor.
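On the CPU-vs-GPU seed point above: identical seeds drive different random number streams on different devices, so the initial noise (and therefore the image) differs between the two UIs. A small PyTorch sketch of that effect, assuming a CUDA device is available:

```python
import torch

seed = 123456

# CPU generator (ComfyUI's default for noise)
cpu_gen = torch.Generator(device="cpu").manual_seed(seed)
cpu_noise = torch.randn((1, 4, 64, 64), generator=cpu_gen, device="cpu")

# GPU generator (what A1111 uses by default)
if torch.cuda.is_available():
    gpu_gen = torch.Generator(device="cuda").manual_seed(seed)
    gpu_noise = torch.randn((1, 4, 64, 64), generator=gpu_gen, device="cuda")
    # Same seed, different RNG streams: the tensors will not match.
    print(torch.allclose(cpu_noise, gpu_noise.cpu()))  # expected: False
```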
To open ComfyShop, simply right-click on any image node that outputs an image and mask, and you will see the ComfyShop option, much in the same way you would see MaskEditor.
Cheers, appreciate any pointers!
Somebody else on Reddit mentioned this application to drop and read.
Some update to either ComfyUI or maybe a custom node must have broken that functionality.
There are a lot of them, some more niche than others.
I added the required nodes using the Manager. It didn't happen.
LoRAs in ComfyUI are loaded into the workflow outside of the prompt.
For instance, (word:1.1) in ComfyUI is much stronger than (word:1.1) in A1111.
Both these are of similar speed.
Let's say that I want to transmit the output of a Math node that does a calculation.
b1: responsible for the larger areas on the image.
Invoke just released 3.0, which adds ControlNet and a node-based backend that you can use for plugins, etc., so it seems a big team is finally taking node-based expansion seriously. I love Comfy, but a bigger team and a really nice UI with node plugin support gives them serious potential… wonder if Comfy and Invoke will somehow work together or if things will stay fragmented between all the various…
Heads up: Batch Prompt Schedule does not work with the Python API templates provided by the ComfyUI GitHub. I believe it's due to the syntax within the scheduler node breaking the syntax of the overall prompt JSON load.
It would be great to have a set of nodes that can further process the metadata, for example extract the seed and prompt to re-use in the workflow.
If you know how to use the "Markdown Editor", let me know.
I found out about the right-click --> Queue Selected. Add Node | utils | Primitive.
Not unexpected, but as they are not the default values in the node, I mention it here.
Hi, I understand that the Efficiency node pack's XY plot functionality enables the automatic variation and testing of parameters such as "CFG", "Seeds", and "Checkpoints" within the KSampler (Efficiency).
KSampler (efficient) uses scripts, and the "XY Input: Seeds++ Batch" node is configured to send a list of integers (0, 1, 2) to the KSampler (efficient).
Efficiency Nodes have been updated.
It (I would think) would be easy to make or mod a node to just have the seed set to one every time it sends its maximum number.
ZeonSeven.
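Since the Python API templates came up above, here is a hedged sketch of queueing a workflow over ComfyUI's local HTTP API and overriding a seed first. It assumes a workflow exported via "Save (API Format)" and a default local server; the node id "3" as the KSampler is only an assumption taken from the stock example workflow, and yours may differ.

```python
import json
import random
import urllib.request

# Load a workflow exported in API format (assumed filename).
with open("workflow_api.json", encoding="utf-8") as f:
    prompt = json.load(f)

# Node ids and widget names depend on your graph; "3" is just an assumption here.
prompt["3"]["inputs"]["seed"] = random.randint(0, 2**32 - 1)

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": prompt}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```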
In IP-Adapter the idea is to incorporate style from a source image.
A node that extracts AI generation data (prompt, seed, model, etc.) from ComfyUI images, and EXIF data (camera settings) from JPG photographs.
When I try to reproduce an image, I get a different image.
There is a view logs button that should let you view them.
The weight of values is different; ComfyUI seems to be more sensitive to higher numbers than A1111.
The batch index should match the picture index minus 1.
If the node is not available in the Manager, then: 2) …
ComfyUI + Stable Audio Sampler: Node Update and some Beats! It also responds to BPM in the prompt.
Open Settings (the small gear icon in the top right corner of the control panel) and change Widget Value Control Mode to "before".
The Checkpoint selector node can sometimes be a pain, as it's not a string but some custom nodes want a string.
Each node does some specific function and has a lot of knobs you can tweak.
Disclaimer: I love ComfyUI for how it effortlessly optimizes the backend and keeps me out of that shit.
The Ultimate Guide to Master ComfyUI ControlNet: Part 1.
Here are my findings: the neutral value for all FreeU options (b1, b2, s1 and s2) is 1.0.
But, if you have, say, a random seed or something like that, it won't work as expected.
You can set the noise seed manually by right-clicking on the sampler and, under bypass, choosing convert seed to input.
In my case, I renamed the folder to ComfyUI_IPAdapter_plus-v1.
Nope.
There's been a few projects that tried this. 2 options here.
It just needs a bunch of custom nodes installed and some fiddling to update it to SDXL 1.0, but it only took like 15 minutes.
A node that can take in or spit out basic numeric math solutions.
If you want to grow your userbase, make your app USER FRIENDLY.
Using the primitive node is like walking up to a giant drunk guy in a bar and making a derogatory comment: it's not going to end well, due to things like data type handling, casting and rounding.
Expert mode: mute node(s) at the end of the workflow.
A node hub: a node that accepts any input (including inputs of the same type) from any node in any order, able to transport that set of inputs across the workflow (a bit like u/rgthree's Context node does, but without the explicit definition of each input, and without the restriction to the existing set of inputs) and output the first non-null…
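For the EXIF half of the "extract generation data" node idea above, here is a minimal sketch of pulling camera settings out of a JPG with Pillow. It only reads the main IFD tags; the filename is hypothetical.

```python
from PIL import Image, ExifTags

def read_camera_settings(path: str) -> dict:
    """Return the human-readable EXIF tags present in a JPG's main IFD."""
    exif = Image.open(path).getexif()
    return {ExifTags.TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

for name, value in read_camera_settings("photo.jpg").items():
    print(name, value)
```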
With that set, before launching the workflow you get the actual values used in the launch, until you hit the Queue Prompt button again.
The best part about it though…
Entering 18-digit random seeds gets tedious very fast when re-generating images from an X/Y plot.
If you're using nodes that always randomize and a global seed isn't enough, just have a Load Image node ready on the side, quickly paste the output of the stage you wanna skip, connect it to the next stage, and mute that output.
Hello, update to LLM Node.
You create nodes and "wire" them together.
And if you click the gear wheel to get to the settings, you should see something named Logging.
Other features that get requested and seem like they might be fun.
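On the tedium of retyping 18-digit seeds for X/Y plots: the Seeds++ Batch approach mentioned earlier just derives a short list of seeds from one base value. A tiny sketch of that idea, so only the base seed ever needs to be noted down:

```python
import random

def seeds_for_plot(base_seed: int, count: int) -> list[int]:
    """Derive a reproducible list of seeds (base, base+1, ...) for an X/Y plot."""
    return [base_seed + i for i in range(count)]

base = random.randint(0, 2**63 - 1)  # one big random seed, written down once
print(base, seeds_for_plot(base, 5))
```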