ComfyUI is an open-source, node-based interface for Stable Diffusion: it lets you design and execute advanced diffusion pipelines using a graph/nodes/flowchart-based UI, and it is packed full of useful features that you can enable and disable on the fly. Automatic1111 is great, but the tool that impressed me, by doing things Automatic1111 can't, is ComfyUI. Supported features include embeddings/textual inversion, LoRAs (regular, LoCon, and LoHa), hypernetworks, ControlNet, T2I-Adapter, and upscale models (ESRGAN, SwinIR, and many others). ComfyUI-Manager is an extension designed to enhance the usability of ComfyUI further, and the community-maintained ComfyUI Community Docs are the place to start reading.

To install, follow the ComfyUI manual installation instructions for Windows and Linux, then launch with "python main.py --force-fp16" (note that --force-fp16 will only work if you installed the latest PyTorch nightly). If you're running on Linux, or on a non-admin account on Windows, you'll want to ensure /ComfyUI/custom_nodes and any custom node folders you add (comfyui_controlnet_aux, ComfyUI_I2I, ComfyI2I, and so on) have write permissions. Remember to add your models, VAE, LoRAs, etc. to the appropriate folders.

On the model side, TencentARC has released T2I-Adapter-SDXL models for sketch, canny, lineart, openpose, depth-zoe, and depth-mid, and SargeZT has published the first batch of ControlNet and T2I models for SDXL on Hugging Face. Each one weighs almost 6 gigabytes, so you have to have space. For style transfer, only T2I-Adapter style models are currently supported by the Apply Style Model node; other style checkpoints may appear in the model list but won't run.
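The style path chains a few core nodes: a CLIP vision model encodes the reference image (CLIPVisionEncode), a style model is loaded (StyleModelLoader), and the two are merged into the prompt conditioning with StyleModelApply, which the UI shows as Apply Style Model. As a rough sketch, here is that subgraph in ComfyUI's API-format JSON written as a Python dict; the class names are ComfyUI's core node types as I recall them and the file names are placeholders, so verify the exact fields against a workflow you export yourself:

```python
# Sketch of an Apply Style Model subgraph in ComfyUI API format.
# Keys are node ids; a value like ["2", 0] means "output 0 of node 2".
style_workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"}},   # placeholder
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a cozy cabin in the woods", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPVisionLoader",
          "inputs": {"clip_name": "clip_vision_model.safetensors"}},     # placeholder
    "4": {"class_type": "LoadImage",
          "inputs": {"image": "style_reference.png"}},
    "5": {"class_type": "CLIPVisionEncode",
          "inputs": {"clip_vision": ["3", 0], "image": ["4", 0]}},
    "6": {"class_type": "StyleModelLoader",
          "inputs": {"style_model_name": "t2iadapter_style_sd14v1.pth"}},
    "7": {"class_type": "StyleModelApply",   # "Apply Style Model" in the UI
          "inputs": {"conditioning": ["2", 0],
                     "style_model": ["6", 0],
                     "clip_vision_output": ["5", 0]}},
}
```

The conditioning coming out of node 7 then feeds a KSampler like any other positive conditioning.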
Stability AI has now released the first of its official Stable Diffusion SDXL ControlNet models as well. T2I-Adapter, for its part, is a network providing additional conditioning to Stable Diffusion: it supplies supplementary guidance to pre-trained text-to-image models such as the SDXL model. The Load Style Model node can be used to load a style model of this family, and I myself am a heavy T2I-Adapter ZoeDepth user.

Getting a Windows setup running takes three steps. Step 1: install 7-Zip. Step 2: download the standalone version of ComfyUI. Step 3: download a checkpoint model. Extract the downloaded file with 7-Zip and run ComfyUI. If you have another Stable Diffusion UI you might be able to reuse the dependencies; otherwise, install the ComfyUI dependencies. For single-file custom nodes, just download the Python script file and put it inside the ComfyUI/custom_nodes folder; many node packs also ship an install.bat you can run to install to the portable build if it is detected. Note that some competing custom node packs cannot be installed together: it's one or the other.

Inpainting and img2img are possible with SDXL, and in the ComfyUI SDXL workflow example the refiner is an integral part of the generation process: you can assign the first 20 steps to the base model and delegate the remaining steps to the refiner model.
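For readers coming from scripting, here is a minimal sketch of the same base/refiner hand-off in diffusers, assuming the standard SDXL pipelines (with 25 steps, a 0.8 cutoff corresponds to 20 base steps; the prompt reuses an example from later in this guide):

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2, vae=base.vae, torch_dtype=torch.float16
).to("cuda")

prompt = "award winning photography, a cute monster holding up a sign saying SDXL"
# The base model runs the first 80% of the schedule and hands off raw latents.
latents = base(prompt=prompt, num_inference_steps=25, denoising_end=0.8,
               output_type="latent").images
# The refiner resumes denoising from the same point in the schedule.
image = refiner(prompt=prompt, num_inference_steps=25, denoising_start=0.8,
                image=latents).images[0]
image.save("sdxl_refined.png")
```

In ComfyUI you express the same split without code, typically with two KSamplerAdvanced nodes whose start/end steps partition the schedule.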
T2I-Adapters are used the same way as ControlNets in ComfyUI: you load them with the ControlNetLoader node. Style models are related but distinct: they provide a diffusion model a visual hint as to what kind of style the denoised latent should be in, via the CLIP_vision_output input, i.e. the image containing the desired style, encoded by a CLIP vision model. Note that not all diffusion models are compatible with unCLIP conditioning, and that for SDXL the only really important setting is resolution: for optimal performance it should be 1024x1024, or another resolution with the same number of pixels but a different aspect ratio.

The ecosystem moves fast. ComfyUI-Advanced-ControlNet adds loading files in batches and controlling which latents should be affected by the ControlNet inputs (a work in progress that will later include more advanced workflows and features for AnimateDiff usage). With the arrival of Automatic1111 1.6 there are plenty of new opportunities for using ControlNets and sister models in A1111 too. Sytan's SDXL ComfyUI workflow is a very nice example of connecting the base model with the refiner and including an upscaler, "Part 3" adds an SDXL refiner for the full SDXL process, and the "[GUIDE] ComfyUI AnimateDiff Guide/Workflows Including Prompt Scheduling - An Inner-Reflections Guide" shows that AnimateDiff in ComfyUI is an amazing way to generate AI videos. Watch custom node changelogs, though: a feature update to the Impact Pack's RegionalSampler changed the parameter order, causing malfunctions in previously created RegionalSamplers, and its detailer sampler was split into two nodes, DetailedKSampler with denoise and DetailedKSamplerAdvanced with start_at_step. Upscaling also still has rough edges: I always get noticeable grid seams, and artifacts like faces being created all over the place, even at 2x upscale.

The easiest way to generate the conditioning image for a ControlNet or T2I-Adapter is to run a detector on an existing image using a preprocessor; among ComfyUI's ControlNet preprocessor nodes there is, for example, an OpenposePreprocessor, and the upstream T2I-Adapter demo runs at localhost:7860 by default.
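A control map can also be made outside the graph; here is a minimal stand-in for a canny preprocessor node using OpenCV (the thresholds are arbitrary example values):

```python
# Derive a canny edge map from an existing image to condition a canny
# ControlNet or T2I-Adapter. 100/200 are example hysteresis thresholds.
import cv2
import numpy as np

img = cv2.imread("input.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 100, 200)          # single-channel edge map
edges = np.stack([edges] * 3, axis=-1)     # expand to 3 channels for image loaders
cv2.imwrite("control_canny.png", edges)
```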
The Community Docs cover Area Composition, Noisy Latent Composition, ControlNets and T2I-Adapter, GLIGEN, unCLIP, SDXL, Model Merging, and LCM, and the Node Guide (WIP) documents what each node does; a node system, after all, is a way of designing and executing complex stable diffusion pipelines using a visual flowchart. The adapter files themselves are the TencentARC T2I-Adapters (see the T2I-Adapter research paper), converted to Safetensors, and ComfyUI has been updated to support this file format. Preprocessor nodes map cleanly onto their sd-webui-controlnet counterparts: for example, LineArtPreprocessor corresponds to lineart (or lineart_coarse if coarse is enabled), pairs with the control_v11p_sd15_lineart model, and lives in the preprocessors/edge_line category.

IP-Adapter implementations are spreading too: IP-Adapter for ComfyUI (IPAdapter-ComfyUI or ComfyUI_IPAdapter_plus), IP-Adapter for InvokeAI, IP-Adapter for AnimateDiff prompt travel, and Diffusers_IPAdapter with more features such as supporting multiple input images. The approach not only outperforms other methods in terms of image quality, but also produces images that better align with the reference image. Other integrations worth knowing: sd-webui-comfyui is an extension for Automatic1111's stable-diffusion-webui that embeds ComfyUI in its own tab, Advanced CLIP Text Encode contains two ComfyUI nodes that allow better control over how prompt weights are interpreted and let you mix different embedding methods, and with the SDXL Prompt Styler, generating images with different styles becomes much simpler. In ComfyUI-Manager the Fetch Updates menu retrieves updates, and in the Colab notebook you can run the cell again with the UPDATE_COMFY_UI or UPDATE_WAS_NS options selected to update. If you get a 403 error, it's your Firefox settings or an extension that's messing things up; if a download script fails, open the sh file in a text editor, copy the URL for the download file, download it manually, and move it to the models/Dreambooth_Lora folder. In part 1 (this post), we will implement the simplest SDXL base workflow and generate our first images.

On batching: for text-to-image you set the batch_size through the Empty Latent Image node, while for image-to-image you can use Repeat Latent Batch to expand the same latent to a batch size specified by its amount input. For AnimateDiff, the sliding window feature enables you to generate GIFs without a frame length limit: it divides the frames into smaller batches with a slight overlap, as sketched below, and the SlidingWindowOptions node lets you modify the trigger number and other settings.
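The windowing itself is simple; here is a sketch of the idea in plain Python (the window and overlap sizes are illustrative, not AnimateDiff's exact defaults):

```python
# Split a frame sequence into overlapping batches so animation length
# isn't capped by the model's context window.
def sliding_windows(num_frames: int, window: int = 16, overlap: int = 4):
    """Yield (start, end) index pairs covering all frames with overlap."""
    stride = window - overlap
    start = 0
    while start < num_frames:
        end = min(start + window, num_frames)
        yield start, end
        if end == num_frames:
            break
        start += stride

print(list(sliding_windows(40)))   # [(0, 16), (12, 28), (24, 40)]
```

The overlapping frames are typically blended between windows so the motion stays continuous at the seams.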
On Colab you can alternatively run ComfyUI in an iframe (use this only in case the previous way with localtunnel doesn't work); you should see the UI appear in an iframe, and the notebook exposes options such as USE_GOOGLE_DRIVE, UPDATE_COMFY_UI, and updating the WAS Node Suite. Hardware-wise, an NVIDIA-based graphics card with 4 GB or more of VRAM memory is the baseline. On the loading side, CheckpointLoader reads the Model (UNet), CLIP (text encoder), and VAE out of a checkpoint file. Packaged custom nodes drop into your ComfyUI_windows_portable\ComfyUI\custom_nodes folder, after which you select the node from the node list. Preprocessing keeps improving as well: anyone using DWPose yet? I was testing it out last night and it's far better than openpose.

On the research side, TencentARC collaborated with the diffusers team to bring support for T2I-Adapters for Stable Diffusion XL (SDXL) into diffusers, achieving impressive results in both performance and efficiency (🤗 Diffusers is the go-to library for state-of-the-art pretrained diffusion models for generating images, audio, and even 3D structures of molecules). T2I-Adapter aligns internal knowledge in T2I models with external control signals, and the key practical difference from a ControlNet is cost: for the T2I-Adapter the model runs once in total, rather than at every sampling step. These are optional files, producing similar results to the official ControlNet models, but with added style and color functions; the color variant lets you control the strength of the color transfer, and the CoAdapter fuser for SD 1.5 models has a completely new identity: coadapter-fuser-sd15v1.
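On the diffusers side, the integration looks roughly like this; the model ids below are the published TencentARC SDXL adapter checkpoints, but treat the argument names as something to verify against your installed diffusers version:

```python
import torch
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

# Load the canny adapter and attach it to the SDXL base pipeline.
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16
)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", adapter=adapter,
    torch_dtype=torch.float16
).to("cuda")

canny = load_image("control_canny.png")     # edge map from a preprocessor
image = pipe(
    prompt="a photo of a room, masterpiece",
    image=canny,
    adapter_conditioning_scale=0.8,          # strength of the adapter guidance
    num_inference_steps=30,
).images[0]
image.save("t2i_adapter_out.png")
```

The ComfyUI route needs no code at all: load the adapter with ControlNetLoader and apply it to your conditioning as usual.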
All of this is for anyone who wants to make complex workflows with SD or who wants to learn more about how SD works; you can also learn some advanced masking, compositing, and image manipulation skills directly inside ComfyUI. ComfyUI provides a browser UI for generating images from text prompts and images; some commonly used blocks are loading a checkpoint model, entering a prompt, and specifying a sampler. The interface follows closely how SD works, and the code should be much simpler to understand than that of other SD UIs. ComfyUI checks what your hardware is and determines what is best, but you can force it to do whatever you want by adding flags on the command line. When loading an image, if there is no alpha channel, an entirely unmasked MASK is outputted.

Many of the new models are related to SDXL, with several models for Stable Diffusion 1.5 as well; the canny checkpoint above, for instance, provides conditioning on canny for the Stable Diffusion XL checkpoint, and StabilityAI has published official ComfyUI results for T2I-Adapter. SDXL 1.0 even runs at 1024x1024 on a laptop with low VRAM (4 GB). Community workflows keep stacking features: SDXL (base + refiner) with ControlNet XL OpenPose and FaceDetailer (2x); a T2I workflow providing various built-in stylistic options, high-definition resolution, facial restoration, and easy ControlNet switching between canny and depth; packs containing multi-model / multi-LoRA support and multi-upscale options with img2img and the Ultimate SD Upscaler; and AnimateDiff workflows encompassing QR code, interpolation (2-step and 3-step), inpainting, IP-Adapter, Motion LoRAs, prompt scheduling (Fizz Nodes), ControlNet, and vid2vid. Your results may vary depending on your workflow, and ComfyUI is hard at first, but many are calling it the future of Stable Diffusion.

One mechanical detail to keep in mind: the ControlNet detectmap will be cropped and re-scaled to fit inside the height and width of the txt2img settings, which will alter the aspect ratio of the detectmap; both behaviors also work for T2I-Adapters.
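If you want to pre-fit control images yourself, that crop-and-resize step is easy to reproduce; a minimal PIL sketch, assuming a center crop to the target aspect ratio:

```python
# Center-crop a control image to the target aspect ratio, then scale it to
# the txt2img width/height so the detectmap isn't distorted.
from PIL import Image

def crop_and_resize(img: Image.Image, width: int, height: int) -> Image.Image:
    src_ratio, dst_ratio = img.width / img.height, width / height
    if src_ratio > dst_ratio:          # source too wide: crop left/right
        new_w = int(img.height * dst_ratio)
        left = (img.width - new_w) // 2
        img = img.crop((left, 0, left + new_w, img.height))
    else:                              # source too tall: crop top/bottom
        new_h = int(img.width / dst_ratio)
        top = (img.height - new_h) // 2
        img = img.crop((0, top, img.width, top + new_h))
    return img.resize((width, height), Image.LANCZOS)

detectmap = crop_and_resize(Image.open("control_canny.png"), 1024, 1024)
```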
A nice consequence of ComfyUI's node caching: if you fix the seed on the txt2img KSampler and generate repeatedly while adjusting the Hires-fix stage, processing restarts from the changed Hires-fix KSampler rather than from the top, so you can see it is running efficiently. Basic usage takes some getting used to, because the interface is quite different from other tools; it may be confusing at first, but it is very convenient once mastered, since ComfyUI breaks down a workflow into rearrangeable elements and makes workflows easy to share. In short, ComfyUI is an open-source interface for building and experimenting with Stable Diffusion workflows in a node-based UI with no coding required, applicable to domains such as art, design, entertainment, and education. Other extensions enhance it with features like autocomplete filenames, dynamic widgets, node management, and auto-updates, while openpose-editor offers an Openpose editor for AUTOMATIC1111's stable-diffusion-webui.

On adapters specifically: T2I-Adapters are plug-and-play tools that enhance text-to-image models without requiring full retraining, making them more efficient than alternatives like ControlNet. T2I-Adapter at this time has far fewer model types than ControlNet, but you can combine multiple T2I-Adapters with multiple ControlNets if you want, and I think the A1111 controlnet extension also supports them. The adapter files go in the same directory as ControlNets, ComfyUI/models/controlnet (the folder ships with a put_controlnets_and_t2i_here placeholder). The initial code to make T2I-Adapters work with SDXL landed in diffusers first; at the time it wasn't possible to use it in ComfyUI due to a mismatch with the LDM model, and users with research access to SDXL 0.9 were asking how to use ComfyUI's ControlNet and T2I-Adapter nodes with it, just as others wanted openpose ControlNets or T2I-Adapters with SD 2.1.

If your local machine is weak for batch diffusion work (mine is a MacBook with an Intel i9, which is not powerful for this), you can drive a remote instance: my best guess involves running ComfyUI in Colab, taking the address it provides at the end, and pasting it into the websockets_api script, which you'd run locally.
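A minimal sketch of that script's core, assuming ComfyUI's default local address (for a Colab-hosted instance, substitute the address or tunnel URL the notebook prints):

```python
# Queue an API-format workflow on a running ComfyUI server via its /prompt endpoint.
import json
import urllib.request

def queue_prompt(workflow: dict, server: str = "127.0.0.1:8188") -> dict:
    data = json.dumps({"prompt": workflow}).encode("utf-8")
    req = urllib.request.Request(f"http://{server}/prompt", data=data)
    return json.loads(urllib.request.urlopen(req).read())

# 'workflow' is an API-format graph such as the style-model dict shown earlier.
# The response includes a prompt_id you can use to poll for the finished images.
```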
These files are custom workflows for ComfyUI, a super powerful node-based, modular interface for Stable Diffusion; they originate all over the web, on Reddit, Twitter, Discord, Hugging Face, GitHub, and elsewhere. The workflows are meant as a learning exercise: they are by no means "the best" or the most optimized, but they should give you a good understanding of how ComfyUI works. Run against a Colab instance, the script above should connect to your ComfyUI on Colab and execute the generation. For AMD (Linux only) or Mac, check the beginner's guide to ComfyUI. The more advanced examples (early and not finished) include the "Hires Fix", aka 2-pass txt2img, and with area composition you can even overlap regions to ensure they blend together properly.

T2I-Adapter adoption started modestly. As the implementer put it: "A few days ago I implemented T2I-Adapter support in my ComfyUI and after testing them out a bit I'm very surprised how little attention they get compared to controlnets." One Japanese developer recalled finishing a ControlNet integration only for T2I-Adapter to be announced the very next day, which was thoroughly deflating; they later built an AI pose collection, searchable on Memeplex, to use as the base for poses and expressions with img2img or T2I-Adapter. These adapters work in ComfyUI now; just make sure you update (update/update_comfyui.bat on the standalone). Unlike unCLIP embeddings, ControlNets and T2I-Adapters work on any model. ControlNet works great in ComfyUI, though the preprocessors (the ones I use, at least) don't always have the same level of detail, and it may be worth updating older workflows with T2I-Adapters for better performance. For masking, right-click the image in a Load Image node and there should be an "Open in MaskEditor" option. In my case, the most confusing part initially was the conversions between latent images and normal images.
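That conversion is exactly what the VAEEncode and VAEDecode nodes do. Here is a hedged diffusers sketch of the round trip (the scaling factor is read from the VAE config rather than hard-coded, since SD 1.x and SDXL use different values; the checkpoint name is just a commonly used example VAE):

```python
import torch
import numpy as np
from diffusers import AutoencoderKL
from diffusers.utils import load_image

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()

img = load_image("input.png").resize((512, 512))
x = torch.from_numpy(np.array(img)).float() / 127.5 - 1.0   # map pixels to [-1, 1]
x = x.permute(2, 0, 1).unsqueeze(0)                          # HWC -> NCHW

with torch.no_grad():
    # Encode: image -> latent (what VAEEncode does).
    latents = vae.encode(x).latent_dist.sample() * vae.config.scaling_factor
    # Decode: latent -> image (what VAEDecode does).
    decoded = vae.decode(latents / vae.config.scaling_factor).sample

print(latents.shape)   # e.g. torch.Size([1, 4, 64, 64]): 8x spatial downscale
```

Latents are 8x smaller spatially and have 4 channels, which is why latent-space operations are cheap and why a decode step is always needed before you can actually view the result.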