It is possible to run AUTOMATIC1111’s Stable Diffusion WebUI locally on a Linux PC or deploy it to a server.
1. Environment
- OS: Ubuntu 22.04 LTS
- Kernel: 5.19.0
- CPU: Intel i5-7400
- GPU: Nvidia GTX 1050 Ti 4GB
- RAM: 16GB
- 20GB+ disk space
- Stable Diffusion WebUI version: 0cc0ee1 (2023/2/20)
2. Download Stable Diffusion models
Go to HuggingFace or Civitai to find a model, then use the wget command to download it.
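For example (the URL below is a placeholder; substitute the actual download link of the model you picked):

```bash
# Download a .ckpt checkpoint into the current directory
wget "https://huggingface.co/<user>/<model>/resolve/main/<model>.ckpt"
```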
Here are some good models for your reference.
2.1. Realistic style models
Stable Diffusion, the original model published by CompVis and Stability AI.
2.2. Anime style models
I would suggest starting with the “Anything” model if you want to draw anime artwork.
3. Install Stable Diffusion WebUI
3.1. Install dependencies
- Install the proprietary Nvidia driver in order to use CUDA. Then reboot.
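One way to do this on Ubuntu (the driver version apt picks may differ on your system):

```bash
# Install the recommended proprietary driver, then reboot
sudo ubuntu-drivers autoinstall
sudo reboot
```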
- Follow the instructions on the Nvidia Developer site to install CUDA. Reboot again.
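If you would rather use the Ubuntu repositories than Nvidia’s own installer, something like this works; the Nvidia Developer site lists the exact commands for each CUDA version:

```bash
# CUDA toolkit as packaged by Ubuntu
sudo apt install nvidia-cuda-toolkit
sudo reboot
```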
- Verify the installation
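Both commands should print version information without errors:

```bash
# Driver and GPU status
nvidia-smi
# CUDA compiler version
nvcc --version
```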
- Install Python, wget, git
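On Ubuntu these come from apt:

```bash
sudo apt install python3 python3-pip wget git
```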
- Because we need a Python 3.10.6 environment for SD WebUI, we have to install Anaconda
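Download the installer and run it (the file name below is one release from the Anaconda archive; pick the current one from anaconda.com):

```bash
wget https://repo.anaconda.com/archive/Anaconda3-2022.10-Linux-x86_64.sh
bash Anaconda3-2022.10-Linux-x86_64.sh
```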
- Create a Python 3.10.6 virtual environment
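The environment name sdwebui is just an example; any name works:

```bash
conda create --name sdwebui python=3.10.6
```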
3.2. Clone Stable Diffusion WebUI repository
- Clone the repository of Stable Diffusion WebUI
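The repository is hosted on GitHub:

```bash
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git
```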
- Move the downloaded .ckpt models into stable-diffusion-webui
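Assuming the checkpoint sits in your home directory (adjust the source path to wherever you downloaded it); the WebUI looks for checkpoints under models/Stable-diffusion:

```bash
# Adjust the source path/file name to the checkpoint you downloaded
mv ~/your-model.ckpt stable-diffusion-webui/models/Stable-diffusion/
```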
- Enter the virtual environment
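Using the environment name created earlier:

```bash
conda activate sdwebui
```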
- If you want to activate the virtual environment in a bash script, add these lines at the top of webui-user.sh
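A sketch, assuming Anaconda was installed to the default ~/anaconda3 location and the environment is named sdwebui:

```bash
# Make conda available inside the script, then activate the environment
source ~/anaconda3/etc/profile.d/conda.sh
conda activate sdwebui
```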
3.3. Set up command-line arguments
According to the Wiki, we have to change some command-line arguments in order to start SD WebUI. Edit webui-user.sh.
If the VRAM of your GPU is lower than 4GB, add: COMMANDLINE_ARGS=--medvram --opt-split-attention
If your PC has less than 8GB of RAM, add: COMMANDLINE_ARGS=--lowvram --opt-split-attention
You could also add --listen so you can access the WebUI from another PC on the same network, or add --share to generate a public Gradio link for accessing the WebUI when deploying SD WebUI to a server.
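For example, for the 4GB GTX 1050 Ti above with LAN access enabled, the relevant line in webui-user.sh could look like this (one reasonable flag combination, not the only one):

```bash
# webui-user.sh (excerpt)
export COMMANDLINE_ARGS="--medvram --opt-split-attention --listen"
```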
3.4. Launch Stable Diffusion WebUI
- Run webui.sh; it will install all the dependencies. Then a link should pop up: http://127.0.0.1:7860
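From inside the cloned repository:

```bash
cd stable-diffusion-webui
./webui.sh
```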
- To access the WebUI from another PC on the same network, enter http://<IP of the PC>:7860 in the address bar of your browser. Don’t forget to open the firewall port.
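With ufw, the default firewall front end on Ubuntu (skip this if you use a different firewall):

```bash
sudo ufw allow 7860/tcp
```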
3.5. How to update Stable Diffusion WebUI
- Get current branch
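One way to see it, which also prints the current commit hash so you can note it down for a possible rollback:

```bash
cd stable-diffusion-webui
git branch -v
```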
- Pull latest files
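Run this from the repository root:

```bash
git pull
```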
- If something is broken after updating, roll back to the previous commit
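Check out the commit hash you noted before updating (0cc0ee1 here is just the build listed in section 1, used as an example):

```bash
git checkout 0cc0ee1
```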
4. How to use Stable Diffusion WebUI
4.1. Prompts
Use “Prompts” and “Negative Prompts” to tell the AI what to draw.
See Voldy’s artist name list and Danbooru tags for choosing prompts.
For example, to draw Jeanne from Fate/Grand Order, we type the name of the character and her physical characteristics in the prompt field.
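An illustrative prompt built from Danbooru-style tags (my own example, not the exact prompt from the original screenshots):

```
masterpiece, best quality, 1girl, jeanne d'arc (fate), long braided hair, blue eyes, armor, smile
```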
Then type negative prompts.
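For example, a set of commonly used negative tags:

```
lowres, bad anatomy, bad hands, extra digits, worst quality, low quality, blurry
```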
4.2. Text to image
- Go to SD WebUI and type the prompts.
- Check Restore faces.
- Click the Generate button; it will start generating an image. You will see the result in the right panel.
- All generated images are stored in stable-diffusion-webui/outputs.
- You can also increase the Batch count value so it generates multiple images in one run.
4.3. Image to image
- Type the prompts.
- Upload an image. Check Restore faces. Click Generate.
- You can change the values of CFG Scale and Denoising strength. The lower the Denoising strength, the more similar the output is to the original image.
- Click Interrogate DeepBooru to generate prompts automatically according to the image you uploaded.
References
- Rombach, R., Blattmann, A., Lorenz, D., Esser, P., & Ommer, B. (2022). High-resolution image synthesis with latent diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10684-10695).
- Stable Diffusion web UI Wiki - GitHub
- Voldy Retard Guide - The definitive Stable Diffusion experience™