Jun 17, 2025
Tags: easydiffusion, blog
Development update for Easy Diffusion - it’s chugging along in fits and starts. Broadly, there are three tracks:
- Maintenance: The past few months have seen increased support for AMD, Intel, and integrated GPUs, including AMD on Windows. Added support for the new AMD 9060/9070 cards last week, and for the new NVIDIA 50xx cards in March.
- Flux to the main branch / release v3.5 to stable: Right now, Flux / v3.5 still requires you to enable the ED beta first, and then install Forge. Using Forge was a temporary arrangement until Flux worked in our main engine. Last week I got Flux working in our main engine (with decent rendering speed); it still needs more work to support all the different model formats for Flux.
Mar 4, 2025
Tags: easydiffusion
Upgraded the default version of Easy Diffusion to Python 3.9. Newer versions of torch don’t support Python 3.8, so this became urgent after the release of NVIDIA’s 50xx series GPUs.
I chose 3.9 as a temporary fix (instead of a newer Python version), since it had the fewest package conflicts. The future direction of Easy Diffusion’s backend is unclear right now - there are a bunch of possible paths. So I didn’t want to spend too much time on this. I also wanted to minimize the risk to existing users.
Feb 10, 2025
Tags: easydiffusion, sdkit, amd, torchruntime, windows, intel, integrated, directml
Easy Diffusion (and sdkit) now also support AMD on Windows automatically (using DirectML), thanks to integrating with torchruntime. It also supports integrated GPUs (Intel and AMD) on Windows, making Easy Diffusion faster on PCs without dedicated graphics cards.
Feb 10, 2025
Tags: easydiffusion, torchruntime, sdkit
Spent the last week or two getting torchruntime fully integrated into Easy Diffusion, and making sure that it handles all the edge cases.
Easy Diffusion now uses torchruntime to automatically install the best-possible version of torch (on the user’s computer) and to support a wider variety of GPUs (as well as older GPUs). It also uses a GPU-agnostic device API, so Easy Diffusion will automatically support additional GPUs as they become supported by torchruntime.
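To illustrate what a GPU-agnostic device API buys: application code asks for the best available device instead of hardcoding torch.cuda. A minimal sketch - not torchruntime’s actual API; the function name and priority order are assumptions:

```python
# Hypothetical sketch of GPU-agnostic device selection - NOT
# torchruntime's real API. "privateuseone" is the device name that
# torch-directml registers; the priority order here is an assumption.

def pick_device(available):
    """Return the preferred torch device string, given the set of
    backend names available on this machine."""
    for dev in ("cuda", "privateuseone", "mps"):
        if dev in available:
            return dev
    return "cpu"
```

Call sites then use `pick_device(...)` rather than checking `torch.cuda.is_available()` themselves, so support for a new backend lands in one place.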
Jan 28, 2025
Tags: easydiffusion, sdkit, freebird, worklog
Continued to test and fix issues in sdkit, after the change to support DirectML. The change is fairly intrusive, since it replaces direct references to torch.cuda with a layer of abstraction.
Fixed a few regressions, and it now passes all the regression tests for CPU and CUDA support (i.e. existing users). Will test for DirectML next, although it will fail (with out-of-memory) for anything but the simplest tests (since DirectML is quirky with memory allocation).
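The shape of that abstraction layer can be sketched as below. This is illustrative only - the helper names are assumptions, not sdkit’s actual API:

```python
# Minimal sketch of a device-abstraction layer (names are
# illustrative, not sdkit's actual API): call sites use
# backend-neutral helpers instead of referencing torch.cuda directly.

_backends = {}
_active = "cpu"

def register_backend(name, empty_cache=lambda: None,
                     mem_allocated=lambda: 0):
    """Register the per-backend operations that call sites need."""
    _backends[name] = {"empty_cache": empty_cache,
                       "mem_allocated": mem_allocated}

def set_active_backend(name):
    global _active
    if name not in _backends:
        raise ValueError(f"unknown backend: {name}")
    _active = name

def empty_cache():
    """Backend-neutral replacement for torch.cuda.empty_cache()."""
    _backends[_active]["empty_cache"]()

def mem_allocated():
    """Backend-neutral replacement for torch.cuda.memory_allocated()."""
    return _backends[_active]["mem_allocated"]()

# CPU is always available and needs no cache management.
register_backend("cpu")
```

Each backend (CUDA, DirectML, CPU) registers its own implementations, so adding a backend doesn’t require touching every call site - which is also why the change touches so much code.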
Jan 27, 2025
Tags: easydiffusion, sdkit
Worked on adding support for DirectML in sdkit. This allows AMD GPUs and Integrated GPUs to generate images on Windows.
DirectML seems really inefficient with memory, though. So for now it only manages to generate images using SD 1.5. XL and larger models fail to generate, even though I have 12 GB of VRAM in my graphics card.
Jan 22, 2025
Tags: rocm, pytorch, easydiffusion, torchruntime
Continued from Part 1.
Spent a few days figuring out how to compile binary wheels of PyTorch and include all the necessary libraries (ROCm libs or CUDA libs).
tl;dr - In Part 2, the compiled PyTorch wheels now include the required libraries (including ROCm). But this isn’t over yet. Torch starts now, but adding two numbers with it produces garbage values (on the GPU). There’s probably a bug in the included ROCBLAS version; I might need to recompile ROCBLAS for gfx803 separately. Will tackle that in Part 3 (tbd).
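The failure mode above (garbage values from a trivial add) is the kind of thing a tiny smoke test catches early. A pure-Python stand-in for such a check, where `add_on_device` is a placeholder for the actual GPU op run via torch:

```python
# Minimal sanity check for broken GPU math. In practice,
# `add_on_device` would run the addition on the GPU via torch and
# return the result as a Python float; here it's just a callable.

def gpu_math_is_sane(add_on_device, a=2.0, b=3.0, tol=1e-6):
    """Return True if adding a and b on the device matches the CPU."""
    return abs(add_on_device(a, b) - (a + b)) <= tol
```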
Jan 17, 2025
Tags: rocm, pytorch, easydiffusion, torchruntime
Continued in Part 2, where I figured out how to include the required libraries in the wheel.
Spent all of yesterday trying to compile pytorch with the compile-time PYTORCH_ROCM_ARCH=gfx803 environment variable.
tl;dr - In Part 1, I compiled wheels for PyTorch with ROCm, in order to add support for older AMD cards like RX 480. I managed to compile the wheels, but the wheel doesn’t include the required ROCm libraries. I figured that out in Part 2.
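For reference, the build invocation looks roughly like this. A sketch only - exact steps vary by PyTorch and ROCm version:

```shell
# Rough sketch of building a ROCm wheel of PyTorch for gfx803
# (older cards like the RX 480). Run from a pytorch source checkout;
# exact steps vary by PyTorch/ROCm version.
export PYTORCH_ROCM_ARCH=gfx803
export USE_ROCM=1
python tools/amd_build/build_amd.py   # hipify the CUDA sources for ROCm
python setup.py bdist_wheel           # the wheel lands in dist/
```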
Jan 13, 2025
Tags: easydiffusion, torchruntime, torch, ml
Spent the last few days writing torchruntime, which will automatically install the correct torch distribution based on the user’s OS and graphics card. This package was written by extracting this logic out of Easy Diffusion, and refactoring it into a cleaner implementation (with tests).
It can be installed (on Win/Linux/Mac) using pip install torchruntime.
The main intention is to make it easier for developers to contribute updates (e.g. for newer or older GPUs). It wasn’t easy to find or modify this code previously, since it was buried deep inside Easy Diffusion’s internals.
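The selection logic that torchruntime encapsulates boils down to a mapping from hardware and OS to a torch distribution. A simplified, hypothetical sketch - the real package inspects the actual hardware and picks concrete wheels:

```python
# Hypothetical, simplified sketch of torch-distribution selection
# (NOT torchruntime's real code). Based on the setups described in
# this worklog: NVIDIA -> CUDA, AMD on Windows -> DirectML,
# AMD on Linux -> ROCm, Intel integrated on Windows -> DirectML.

def pick_torch_distribution(vendor, os_name):
    """Return a human-readable torch distribution choice."""
    if vendor == "nvidia":
        return "torch (CUDA build)"
    if vendor == "amd":
        return "torch-directml" if os_name == "windows" else "torch (ROCm build)"
    if vendor == "intel" and os_name == "windows":
        return "torch-directml"
    return "torch (CPU build)"
```

Keeping this table in a small standalone package (with tests) is what makes it easy for outside contributors to add a new GPU family.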
Jan 4, 2025
Tags: easydiffusion, amd, directml
Spent most of the day doing some support work for Easy Diffusion, and experimenting with torch-directml for AMD support on Windows.
From the initial experiments, torch-directml seems to work properly with Easy Diffusion. I ran it on my NVIDIA card, and another user ran it on their AMD Radeon RX 7700 XT.
It’s 7-10x faster than the CPU, so looks promising. It’s 2x slower than CUDA on my NVIDIA card, but users with NVIDIA cards are not the target audience of this change.
Jan 3, 2025
Tags: easydiffusion, ui, v4
Spent a few days prototyping a UI for Easy Diffusion v4. Files are at this repo.
The main focus was to get a simple but pluggable UI, backed by a reactive data model, that allows splitting the codebase into individual components (each with its own file), and that requires only a text editor and a browser to develop, i.e. no compilation or nodejs-based developer experience.
I really want something that is easy to understand - for an outside developer, and for myself (e.g. when returning to a portion of the codebase after a while). And with very little friction to start developing for it.
Dec 17, 2024
Tags: easydiffusion, v4, ui
Notes on two directions for ED4’s UI that I’m unlikely to continue on.
One is to start a desktop app with a full-screen webview (for the app UI). The other is writing the tabbed browser-like shell of ED4 in a compiled language (like Go or C++) and loading the contents of the tabs as regular webpages (using webviews). So it would load URLs like http://localhost:9000/ui/image_editor and http://localhost:9000/ui/settings etc.
In the first approach, we would start an empty full-screen webview, and let the webpage draw the entire UI, including the tabbed shell. The only purpose of this would be to start a desktop app instead of opening a browser tab, while being very lightweight (compared to Electron/Tauri style implementations).
Dec 14, 2024
Tags: easydiffusion, ui, design, v4
Worked on a few UI design ideas for Easy Diffusion v4. I’ve uploaded the work-in-progress mockups at https://github.com/easydiffusion/files.
So far, I’ve mocked out the design for the outer skeleton. That is, the new tabbed interface, the status bar, and the unified main menu. I also worked on how they would look on mobile devices.
It gives me a rough idea of the Vue components that would need to be written, and the surface area that plugins can impact. E.g. plugins can add a new menu entry only in the Plugins sub-menu.
Nov 21, 2024
Tags: easydiffusion, stable-diffusion, c++
Spent some more time on the v4 experiments for Easy Diffusion (i.e. C++ based, fast-startup, lightweight). stable-diffusion.cpp is missing a few features that will be necessary for Easy Diffusion’s typical workflow. I wasn’t keen on forking stable-diffusion.cpp, but it’s probably faster to work on a fork for now.
For now, I’ve added live preview and per-step progress callbacks (based on a few pending pull requests on sd.cpp), and protection from GGML_ASSERT killing the entire process. I’ve also been looking at the ability to load individual models (like the VAE) without needing to reload the entire SD model.
Nov 19, 2024
Tags: easydiffusion, stable-diffusion
Spent a few days getting a C++ based version of Easy Diffusion working, using stable-diffusion.cpp. I’m working with a fork of stable-diffusion.cpp here, to add a few changes like per-step callbacks, live image previews etc.
It doesn’t have a UI yet, and currently hardcodes a model path. It exposes a RESTful API server (written using the Crow C++ library), and uses a simple task manager that runs image generation tasks on a thread. The generated images are available at an API endpoint, which serves the binary JPEG/PNG image directly (instead of base64 encoding).
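The task-manager pattern described above (implemented there in C++ with Crow) can be sketched language-agnostically. A minimal Python version, with names that are illustrative only:

```python
# Python sketch of the pattern above (the actual server is C++/Crow):
# a worker thread drains a queue of generation tasks, and finished
# results are fetched by id. Single worker, so results arrive in order.
import queue
import threading

tasks = queue.Queue()
results = {}  # task_id -> result bytes; fine without a lock in CPython

def _worker():
    while True:
        task_id, run = tasks.get()
        if run is None:  # shutdown sentinel
            break
        results[task_id] = run()  # e.g. returns raw JPEG/PNG bytes
        tasks.task_done()

threading.Thread(target=_worker, daemon=True).start()

def submit(task_id, run):
    """Enqueue a generation task; returns immediately."""
    tasks.put((task_id, run))

def get_result(task_id):
    """Return the finished image bytes, or None if still pending."""
    return results.get(task_id)
```

An HTTP layer on top of this just maps POST requests to `submit(...)` and GET requests to `get_result(...)`, returning the bytes with an image content-type rather than base64.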