ComfyUI arguments (Reddit)
I used to do it manually, with symlinks and command-line arguments and the like.

This narrows the problem down to the GPU/PyTorch packages. Updating to the correct latest version of PyTorch is what is needed.

Has anyone managed to implement Krea.I or Magnific AI in ComfyUI? I've seen the web source code for Krea AI, and they use SD 1.5 (+ ControlNet, PatchModel…).

I originally created it to learn more about the underlying code base powering ComfyUI, but after building it I think it could be useful for anyone in the community who is more comfortable coding than using GUIs.

And with ComfyUI my command-line arguments are: "--directml --use-split-cross-attention --lowvram". The most important thing is to use tiled VAE for decoding; that ensures no out-of-memory at that step.

For a portable install, launch a terminal in the ComfyUI folder and use `.\python_embeded\python.exe -m pip install [dependency]`.

Inpainting (with auto-generated transparency masks).

Scoured the internet and came across multiple posts saying to add the arguments --xformers --medvram. But these arguments did not work for me; --xformers gave me a minor bump in performance (8 s/it vs. 11 s/it), but it was still taking about 10 minutes per image.

I think a function must always have "self" as its first argument.

==[Update]== Launching in CPU mode is successful (python main.py --cpu), but of course that's not ideal.

I only have 4 GB of VRAM, so I'm just trying to get my settings optimized.

Additionally, I've added some firewall rules for TCP/UDP for port 8188.

However, I kept getting a black image.

On vacation for a few days, I installed ComfyUI portable on a USB key and plugged it into a laptop that wasn't too powerful (just the minimum 4 gigabytes of VRAM).
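The AMD/DirectML arguments quoted above can be collected into a launcher file. A minimal sketch for a portable install (the file name is hypothetical; the flags are the ones from the comment, and tiled VAE decoding is chosen inside the workflow rather than on the command line):

```shell
REM run_comfyui_directml.bat (hypothetical name): low-VRAM launch on an AMD card.
REM Flags are taken from the comment above; tiled VAE decode is a workflow node, not a flag.
.\python_embeded\python.exe -s ComfyUI\main.py --directml --use-split-cross-attention --lowvram
pause
```

As the later comments note, every time you run the .bat file it will load these arguments, so the file is the natural place to keep them.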
On Linux with the latest ComfyUI I am getting 3.6 s/it with Comfy, as opposed to 4.9 s/it with 1111.

Thanks in advance for any information.

Hi all: how to run ComfyUI with ZLUDA. All credit goes to the people who did the work! lshqqytiger, LeagueRaINi, Next Tech and AI (YouTuber). I just pieced…

These images might not be enough (in numbers) for my argument, so I invite you to try it out yourselves and see if it's any different in your case. I am using ComfyUI with its default settings.

Workflows are much more easily reproducible and versionable.

Doing it in ComfyUI or any other SD UI doesn't matter to me, only that it's done locally.

Launch arguments that I don't know about for ComfyUI, or some config stuff I've missed with ComfyUI.

I downloaded the Windows 7-Zip file and ended up, once unzipped, with a large folder of files.

But where do I begin? Anyone know any good tutorials for a LoRA-training beginner?

A lot of people are just discovering this technology and want to show off what they created.

Wherever you launch ComfyUI from is where you need to set the launch options, like so: python main.py --normalvram, and any other arguments you want to add.

Anyway, whenever you define a function, never forget the self argument! I have barely scratched the surface, but through personal experience, you will go much further!
Welcome to the unofficial ComfyUI subreddit. Please share your tips, tricks, and workflows for using this software to create your AI art. Please keep posted images SFW. Also, if this is new and exciting to you, feel free to post.

Using ComfyUI was a better experience: the images took around 1:50 to 2:25 at 1024x1024 / 1024x768, all with the refiner.

VFX artists are also typically very familiar with node-based UIs, as they are very common in that space.

Hey, I'm new to using ComfyUI and was wondering if there are command-line arguments to add to the launch file like there are in Automatic1111.

I get 1.53 it/s for SDXL and approximately 4.55 it/s for SD1.5 while creating an 896x1152 image via the Euler-A sampler.

Hey all, is there a way to set a command-line argument on startup for ComfyUI to use the second GPU in the system? With Auto1111 you add the following to the webui-user.bat file: set CUDA_VISIBLE_DEVICES=1. But with ComfyUI this doesn't seem to work! Thanks!

ComfyUI was written with experimentation in mind, and so it's easier to do different things in it.

Can you let me know how to fix this issue? I have the following arguments: --windows-standalone-build --disable-cuda-malloc --lowvram --fp16-vae --disable-smart-memory

ComfyUI is much better suited for studio use than other GUIs available now.

ComfyUI is also trivial to extend with custom nodes.

I've ensured both CUDA 11.8 and PyTorch 2.1 are updated and used by ComfyUI.

Using ComfyUI with my GTX 1650 is simply way better than using Automatic1111. I stand corrected.

I keep hearing that A1111 uses the GPU to feed the noise-creation part, and ComfyUI uses the CPU.

"The training requirements of our approach consist of 24,602 A100-GPU hours, compared to Stable Diffusion 2.1's 200,000 GPU hours."

Finally I gave up with ComfyUI nodes and wanted my extensions back in A1111.

SD1.5, SD2.1, and SDXL are all trained on different resolutions, and so models for one will not work with the others.
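One way to approach the second-GPU question above is to mirror the A1111 webui-user.bat trick and set CUDA_VISIBLE_DEVICES before the launch line in run_nvidia_gpu.bat. A sketch, assuming the standard portable layout (device numbering starts at 0, so 1 is the second card):

```shell
REM Sketch: run_nvidia_gpu.bat edited to pin ComfyUI to the second GPU.
set CUDA_VISIBLE_DEVICES=1
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build
pause
```

Newer ComfyUI builds also have a --cuda-device launch argument that serves the same purpose without touching environment variables.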
But inpainting in Comfy is still terrible; not sure if I simply didn't manage to configure it right, but when inpainting faces in 1111 I can make it inpaint at a specific resolution, so I do faces at 512x512.

I think for me, at least for now with my current laptop, using ComfyUI is the way to go.

The example pictures do load a workflow, but they don't have a label or text that indicates whether it's version 3.1 or not.

I tested with different SDXL models and tested without the LoRA, but the result is always the same.

It appears some other AMD GPU users have similar unsolved issues.

I just released an open-source ComfyUI extension that can translate any native ComfyUI workflow into executable Python code.

While I primarily utilize PyTorch cross attention (SDP), I also tested xformers, to no avail.

I tried installing the dependencies by running the pip install in the terminal window in the ComfyUI folder.

Hello, community! I'm happy to announce I have finally finished my ComfyUI SD Krita plugin. This is a plugin that allows users to run their favorite features from ComfyUI while being able to work on a canvas.

The VAE can be found here and should go in your ComfyUI/models/vae/ folder.

Here are some examples I did generate using ComfyUI + SDXL 1.0 with refiner.

ComfyUI: an extremely powerful Stable Diffusion GUI with a graph/nodes interface for advanced users that gives you precise control over the diffusion process without coding anything, and it now supports ControlNets.

Even though I keep hearing people focus the discussion on the time it takes to generate the image (and yes, ComfyUI is faster; I have a 3060), I would like people to discuss whether the image quality is better in each.

From the paper, training the entire Würstchen model (the predecessor to Stable Cascade) cost about 1/10th of Stable Diffusion. That's a cost of about…

Invoke just released 3.0, which adds ControlNet and a node-based backend that you can use for plugins etc., so it seems a big team is finally taking node-based expansion seriously. I love Comfy, but a bigger team and a really nice UI with node-plugin support gives them serious potential… I wonder if Comfy and Invoke will somehow work together or if things will stay fragmented between all the various UIs.

/r/StableDiffusion is back open after the protest of Reddit killing open API access, which will bankrupt app developers, hamper moderation, and exclude blind users from the site.
I have read the section of the GitHub page on installing ComfyUI. It said to follow the instructions for manually installing for Windows.

Find your ComfyUI main directory (usually something like C:\ComfyUI_windows_portable) and just put your arguments in the run_nvidia_gpu.bat file. For example, this is mine:

What worked for me was to add a simple command-line argument to the file: `--listen 0.0.0.0`. The final line in the run_nvidia_gpu.bat looks like this: `.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --normalvram --listen 0.0.0.0`

I am trying out using SDXL in ComfyUI.

Hi r/comfyui, we worked with Dr.Lt.Data to create a command-line tool to improve the ergonomics of using ComfyUI. Some main features are: automatically install ComfyUI dependencies; launch and run workflows from the command line; install and manage custom nodes via cm-cli (ComfyUI-Manager as a CLI).

It's very early stage, but I am curious what folks think / excited to update it over time! The goal is to (a) make it easy for semi-casual users (e.g. Discord bot users lightly familiar with the models) to supply prompts that involve custom numeric arguments (like number of diffusion steps, LoRA strength, etc.).

Welcome to the ComfyUI Community Docs! This is the community-maintained repository of documentation related to ComfyUI, a powerful and modular Stable Diffusion GUI and backend. The aim of this page is to get you up and running with ComfyUI, running your first gen, and providing some suggestions for the next steps to explore.

Any idea why the quality is much better in Comfy? I like InvokeAI (it's more user-friendly), and although I aspire to master Comfy, it is disheartening to see a much easier UI give sub-par results.

Hello! I've been playing around with ComfyUI for months now and reached a level where I want to make my own LoRAs, both character and environment.
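Combining the two details above (the --listen argument and the firewall rule for port 8188 mentioned earlier), a LAN-accessible launcher could look like this sketch of an edited run_nvidia_gpu.bat:

```shell
REM Sketch: run_nvidia_gpu.bat edited to listen on all network interfaces.
REM Pair this with a firewall rule allowing inbound TCP on port 8188.
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --normalvram --listen 0.0.0.0
pause
```

The UI is then reachable from other machines on the network at http://<machine-ip>:8188 (8188 is ComfyUI's default port).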
Seen people say ComfyUI is better than A1111 and gave better results, so I wanted to give it a try, but I can't find a good guide or info on how to install it on an AMD GPU, and the resources conflict: the original ComfyUI GitHub page says you need to install DirectML and then somehow run it if you already have A1111, while other places say you need Miniconda/Anaconda to run it, but just can…

Command-line arguments can be put in the bat files used to run ComfyUI, like this, separated by a space after each command.

Aug 2, 2024: You can use t5xxl_fp8_e4m3fn.safetensors instead for lower memory usage, but the fp16 one is recommended if you have more than 32 GB of RAM.

And above all, BE NICE.

But there's an even easier way now: StabilityMatrix. It's the same thing, but they've done most of the work for you.

I'm not sure why, and I don't know if it's specific to Comfy or if it's a general rule for Python.

Every time you run the .bat file, it will load the arguments.

So I downloaded the workflow picture and dragged it to ComfyUI, but it doesn't load anything; it looks like the metadata is not complete.

I haven't managed to reproduce this process in ComfyUI yet. I don't know what Magnific AI uses.

I am not sure what kind of settings ComfyUI used to achieve such optimization, but if you are using Auto1111, you could disable live preview and enable xformers (what I did before switching to ComfyUI).
After playing around with it for a while, here are 3 basic workflows that work with older models (here, AbsoluteReality).

I use an 8 GB GTX 1070 without ComfyUI launch options, and I can see from the console output that it chooses NORMAL_VRAM by default for me.

Options: --install-completion: install completion for the current shell. --show-completion: show completion for the current shell, to copy it or customize the installation.

Anything that works well gets adopted by the larger community and finds its way into other Stable Diffusion software eventually.

You might still have to add the occasional command-line argument for extensions or something.

Basic img2img.

Belittling their efforts will get you banned.

(The same image takes 5.6 seconds in ComfyUI.) And I cannot get TensorRT to work in ComfyUI, as the installation is pretty complicated and I don't have 3 hours to burn doing it.

Supports: Basic txt2img.

With this combo it now rarely runs out of memory (unless you try crazy things). Before, I couldn't even generate with SDXL on ComfyUI at all.

Just download it, drag it inside ComfyUI, and you'll have the same workflow you see above.

Open the .bat file with Notepad, make your changes, then save it.

I didn't quite understand the part where you can use the venv folder from another webui like A1111 to launch it instead and bypass all the requirements to launch ComfyUI.

That helped the speed a bit.

FETCH DATA from: H:\Stable Diffusion Apps\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager\extension-node-map.json got prompt…

Update ComfyUI and all your custom nodes, and make sure you are using the correct models.
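The embedded-Python detail matters for the dependency installs discussed above: on a portable install, packages must go into python_embeded, not the system Python. A sketch, keeping [dependency] as the placeholder from the original comment:

```shell
REM Run from inside the ComfyUI_windows_portable folder.
REM A plain "pip install" targets the system Python and does nothing for the portable ComfyUI.
.\python_embeded\python.exe -m pip install [dependency]
```

This is the same interpreter the run_nvidia_gpu.bat launcher uses, so whatever is installed here is what ComfyUI actually sees.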
For the latest daily release, launch ComfyUI with this command-line argument: --front-end-version Comfy-Org/ComfyUI_frontend@latest. For a specific version, replace latest with the desired version number. Only if you want it early.

Aug 8, 2023: Wherever you are running "main.py", whether that be in a .bat file, a .sh file, or on the command line, you can just add the --lowvram option straight after main.py, e.g.: python3 ./main.py --lowvram --auto-launch

Has anyone tried, or is still trying? I somehow got it to magically run with AMD, despite the lack of clarity and explanation on the GitHub and literally no video tutorial on it.

I don't find ComfyUI faster; I can make an SDXL image in Automatic 1111 in 4.2 seconds with TensorRT.
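The Linux-style launch variants quoted above, collected in one place as a sketch:

```shell
# Low-VRAM launch, opening the browser tab automatically:
python3 ./main.py --lowvram --auto-launch

# Latest daily front-end build (only if you want it early):
python3 ./main.py --front-end-version Comfy-Org/ComfyUI_frontend@latest
```

On Windows portable installs the equivalent is to append the same flags to the launch line inside run_nvidia_gpu.bat.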