Tutorial: Run Stable Diffusion locally in Conda environment

Antanas Baltrušaitis
5 min read · Jan 12, 2024


My journey with AI continues. I’ve previously shared how I ran a Large Language Model (LLM) locally. Now, it’s time to explore image generation models. I’m eager to try running the Stable Diffusion (SD) model locally on Windows.

While there are many similar tutorials, my approach has a unique twist: I’ll be running SD inside a Conda environment. Technically, this isn’t necessary, as SD typically runs in a venv Python environment. However, I currently use Python 3.11, and SD requires Python 3.10. I also prefer not to install any libraries globally, since I’m working with other models as well. Conda lets me compartmentalize everything cleanly.

Requirements:

  • Windows
  • A reasonably powerful PC with a decent Nvidia GPU
  • Nvidia CUDA and Miniconda installed

This can also be done outside Windows and with a different GPU, but the steps may differ slightly.

Let’s start

If you still do not have Conda, install it and add it to the Windows Path variable; follow steps 1 and 2 from my first tutorial. Also do step 4: install Nvidia CUDA version 12.1 (the newest version, 12.3, does not work with Torch at the time of writing).

Step 1 — Create a new folder for all the Stable Diffusion files

In my case it will be C:\local_SD\

Using Command Prompt, enter this directory:

cd C:\local_SD

Step 2 — Clone stable-diffusion-webui

git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git

Enter the stable-diffusion-webui folder:

cd stable-diffusion-webui

Step 3 — Create a conda environment and activate it

conda env create -f ./environment-wsl2.yaml -n local_SD

local_SD is the name of the environment. The Python version and other required details are defined in the environment-wsl2.yaml file, so there’s no need to specify them separately.
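For reference, here is a trimmed sketch of what environment-wsl2.yaml typically contains (the exact package pins may differ in your checkout, so treat this as illustrative; note that the -n local_SD flag above overrides the name field):

```yaml
name: automatic          # overridden by the -n flag on conda env create
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.10          # the Python version SD requires, pinned here
  - pip
  - pytorch              # exact versions are pinned in the real file
  - torchvision
```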

Activate environment

conda activate local_SD

You should see (local_SD) at the beginning of the command prompt, which indicates that you successfully created and activated the Conda environment.

Step 4 — Edit webui-user.bat

Add this line at the top:

call conda activate local_SD

Save and close the file.

While you’re editing this file, you can also set the following arguments; they will improve your experience later:

set COMMANDLINE_ARGS= --xformers --autolaunch --theme dark
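After both edits, your webui-user.bat should look roughly like this (the empty PYTHON, GIT, and VENV_DIR lines ship with the stock file; this is a sketch, so keep whatever else your copy already contains):

```bat
@echo off

rem Activate the Conda environment before the web UI starts
call conda activate local_SD

set PYTHON=
set GIT=
set VENV_DIR=
rem --xformers speeds up attention, --autolaunch opens the browser,
rem --theme dark switches the UI to dark mode
set COMMANDLINE_ARGS=--xformers --autolaunch --theme dark

call webui.bat
```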

Step 5 — Start Stable-Diffusion-Webui

.\webui-user.bat

It will install the needed libraries, which might take a while.

It should also open the web UI automatically; if not, copy the local URL printed in the console (by default http://127.0.0.1:7860) and open it in your browser.

Step 6 — Download and load the model

In my case, it automatically downloaded this model: v1-5-pruned-emaonly.safetensors

If it’s not auto-downloaded, don’t worry; it’s a good idea to test different models anyway.

You can download models from two locations:

  1. Hugging Face
  2. Civitai

Hugging Face has more base models, which are good for training yourself, while Civitai has community-trained models that give impressive results for specific use cases.

Pay attention to the model type. I suggest starting with checkpoint or safetensors full models.

There are also smaller models called LoRAs, which target specific styles or subjects and are usually used together with a big base model to enhance results.

Place the files in C:\local_SD\stable-diffusion-webui\models\Stable-diffusion (or C:\local_SD\stable-diffusion-webui\models\Lora for LoRA models).
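For example, if your browser saved a checkpoint to the Downloads folder, you can move it into place from the Command Prompt (the filename below is a hypothetical placeholder; use the file you actually downloaded):

```bat
rem Hypothetical filename: replace with your downloaded model
move "%USERPROFILE%\Downloads\my-model.safetensors" "C:\local_SD\stable-diffusion-webui\models\Stable-diffusion\"
```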

When done, click the refresh button in stable-diffusion-webui.

Choose the model from the drop-down and you can start generating images.

Step 7 — Generate some images

If you don’t know where to start, head to a model’s page on Civitai and scroll down to the gallery. Find an image you like, click on it, and click ‘Copy generation data’.

Paste it into the first field in stable-diffusion-webui and click ‘Read generation parameters’.

Click ‘Generate’, wait a bit, and voilà!

Hope you have enjoyed this tutorial! Subscribe and follow for more.

Here are some of the generated images:
