---
title: Getting Stable Diffusion Running on NixOS
date: 2022-08-23
series: howto
tags:
- stablediffusion
- ai
- ml
- nixos
hero:
ai: Stable Diffusion
file: the-forbidden-shape
prompt: The Forbidden Shape by M.C. Escher, dystopian vibes, 8k uhd
---
Computers are awesome gestalts of sand and oil that let us do anything we
want, provided we can supply the correct incantations. One of the things you
can do with a computer is give it a plain text description of what an image
should contain and get back an approximation of that image. Tools like
[DALL-E 2](https://openai.com/dall-e-2/) let you do this on someone else's
computer with the power of the cloud, but until recently there hasn't been a
good option for running one of these models on your own hardware.
## Install dependencies

The part I was dreading about this process was the "installing all of the
goddamn dependencies" step. Most of this AI/ML stuff is done in Python, and
among more experienced Linux users, programs written in Python have a
reputation of being "the worst thing ever to try to package" and "hopefully
reliable, but don't sneeze at the setup once it works". I've had my share of
spending approximately way too long trying to bash things into shape with no
success, so I was kind of afraid that this would be more of the same.

It turns out all the AI/ML people have started using this weird thing called
[conda](https://docs.conda.io/en/latest/), which gives you a more reproducible
environment for AI/ML crap. It does mean that I'll have to have conda install
all of the dependencies and can't reuse the NixOS copies of things like CUDA,
but I'd rather deal with that than have to reinvent the state of the world for
this likely barely hacked together AI/ML thing.

Here is what I needed to do in order to get things installed on NixOS. First I
cloned the [optimized version of Stable Diffusion for GPUs with low amounts of
vram](https://github.com/basujindal/stable-diffusion), and then I ran these
commands:

```
$ nix shell nixpkgs#conda
$ conda-shell
conda-shell$ conda-install
conda-shell$ conda env create -f environment.yaml
conda-shell$ exit
$ conda-shell
conda-shell$ conda activate ldm
```

Then I could [download the
model](https://github.com/CompVis/stable-diffusion#weights) and put it in the
folder that the AI wanted.

## Make art

Then I was able to make art by running the `optimizedSD/optimized_txt2img.py`
tool. I personally prefer using these flags:

```
python optimizedSD/optimized_txt2img.py \
  --H 512 \
  --W 768 \
  --n_iter 1 \
  --n_samples 4 \
  --ddim_steps 50 \
  --prompt "The Forbidden Shape by M.C. Escher, pencil on lined paper, dystopian vibes, 8k uhd"
```

| Flag           | Meaning                                                                                |
| :------------- | :------------------------------------------------------------------------------------ |
| `--H`          | Image height                                                                           |
| `--W`          | Image width                                                                            |
| `--n_iter`     | Number of iterations/batches of images to generate                                    |
| `--n_samples`  | Number of images to generate per batch                                                 |
| `--ddim_steps` | Number of steps to take to diffuse/generate the image; more means it will take longer |
| `--prompt`     | Plain-text prompt to feed into the AI to generate images from                          |

I've found that a 512x512 image will render in about 30 seconds on my 2060,
and a 512x768 image takes barely longer than that.

## Gallery

Here are some images I've generated:

> The legend of zelda breath of the wild, windows xp wallpaper, vaporwave
> style, anime influences

> Cyberpunk style image of a Tesla car reflection in rain

> An impressionist painting of Richard Stallman at Starbucks

> Cyberpunk style image of a motorcycle reflection in rain, ukiyo-e, unreal
> engine, trending on artstation

> Cyberpunk style image of the breath of the wild link on a motorcycle

> A Tabaxi druid tending to her cannabis crop, weed, marijuana, digital art,
> trending on artstation
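One last note: the conda environment doesn't follow you into new shells.
Whenever you come back later to generate more images, you'll need to re-enter
it first. Given the setup steps above, that should look like this:

```
$ conda-shell
conda-shell$ conda activate ldm
```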
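If you end up generating images a lot, it may be worth wrapping the invocation
in a little script so you only have to type out the prompt. Here's a minimal
sketch of what that could look like; the `genart.sh` name and the baked-in
flag values are my own placeholders, not something the Stable Diffusion repo
ships:

```
#!/usr/bin/env bash
# genart.sh: hypothetical wrapper around optimized_txt2img.py.
# Takes the prompt as its only argument; every other flag is
# hardcoded to the values I described above. Run it from the
# repo root, inside the activated ldm environment.
set -euo pipefail

prompt="${1:?usage: genart.sh PROMPT}"

python optimizedSD/optimized_txt2img.py \
  --H 512 \
  --W 768 \
  --n_iter 1 \
  --n_samples 4 \
  --ddim_steps 50 \
  --prompt "$prompt"
```

Then generating a batch is just:

```
$ ./genart.sh "The Forbidden Shape by M.C. Escher, dystopian vibes, 8k uhd"
```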