Commit 5ef669d

Merge branch 'release_candidate'
2 parents: c9c8485 + e7965a5

129 files changed (+7098, -3747 lines)

.eslintrc.js

+6

@@ -87,5 +87,11 @@ module.exports = {
         modalNextImage: "readonly",
         // token-counters.js
         setupTokenCounters: "readonly",
+        // localStorage.js
+        localSet: "readonly",
+        localGet: "readonly",
+        localRemove: "readonly",
+        // resizeHandle.js
+        setupResizeHandle: "writable"
     }
 };

.github/ISSUE_TEMPLATE/bug_report.yml

+7 -71

@@ -26,7 +26,7 @@ body:
     id: steps
     attributes:
       label: Steps to reproduce the problem
-      description: Please provide us with precise step by step information on how to reproduce the bug
+      description: Please provide us with precise step by step instructions on how to reproduce the bug
       value: |
         1. Go to ....
         2. Press ....
@@ -37,64 +37,14 @@ body:
     id: what-should
     attributes:
       label: What should have happened?
-      description: Tell what you think the normal behavior should be
+      description: Tell us what you think the normal behavior should be
     validations:
       required: true
-  - type: input
-    id: commit
-    attributes:
-      label: Version or Commit where the problem happens
-      description: "Which webui version or commit are you running ? (Do not write *Latest Version/repo/commit*, as this means nothing and will have changed by the time we read your issue. Rather, copy the **Version: v1.2.3** link at the bottom of the UI, or from the cmd/terminal if you can't launch it.)"
-    validations:
-      required: true
-  - type: dropdown
-    id: py-version
-    attributes:
-      label: What Python version are you running on ?
-      multiple: false
-      options:
-        - Python 3.10.x
-        - Python 3.11.x (above, no supported yet)
-        - Python 3.9.x (below, no recommended)
-  - type: dropdown
-    id: platforms
-    attributes:
-      label: What platforms do you use to access the UI ?
-      multiple: true
-      options:
-        - Windows
-        - Linux
-        - MacOS
-        - iOS
-        - Android
-        - Other/Cloud
-  - type: dropdown
-    id: device
-    attributes:
-      label: What device are you running WebUI on?
-      multiple: true
-      options:
-        - Nvidia GPUs (RTX 20 above)
-        - Nvidia GPUs (GTX 16 below)
-        - AMD GPUs (RX 6000 above)
-        - AMD GPUs (RX 5000 below)
-        - CPU
-        - Other GPUs
-  - type: dropdown
-    id: cross_attention_opt
+  - type: textarea
+    id: sysinfo
     attributes:
-      label: Cross attention optimization
-      description: What cross attention optimization are you using, Settings -> Optimizations -> Cross attention optimization
-      multiple: false
-      options:
-        - Automatic
-        - xformers
-        - sdp-no-mem
-        - sdp
-        - Doggettx
-        - V1
-        - InvokeAI
-        - "None "
+      label: Sysinfo
+      description: System info file, generated by WebUI. You can generate it in settings, on the Sysinfo page. Drag the file into the field to upload it. If you submit your report without including the sysinfo file, the report will be closed. If needed, review the report to make sure it includes no personal information you don't want to share. If you can't start WebUI, you can use --dump-sysinfo commandline argument to generate the file.
     validations:
       required: true
   - type: dropdown
@@ -108,21 +58,7 @@ body:
         - Brave
         - Apple Safari
         - Microsoft Edge
-  - type: textarea
-    id: cmdargs
-    attributes:
-      label: Command Line Arguments
-      description: Are you using any launching parameters/command line arguments (modified webui-user .bat/.sh) ? If yes, please write them below. Write "No" otherwise.
-      render: Shell
-    validations:
-      required: true
-  - type: textarea
-    id: extensions
-    attributes:
-      label: List of extensions
-      description: Are you using any extensions other than built-ins? If yes, provide a list, you can copy it at "Extensions" tab. Write "No" otherwise.
-    validations:
-      required: true
+        - Other
   - type: textarea
     id: logs
     attributes:

CHANGELOG.md

+148
Large diffs are not rendered by default.

CITATION.cff

+7

@@ -0,0 +1,7 @@
+cff-version: 1.2.0
+message: "If you use this software, please cite it as below."
+authors:
+  - given-names: AUTOMATIC1111
+title: "Stable Diffusion Web UI"
+date-released: 2022-08-22
+url: "https://github.com/AUTOMATIC1111/stable-diffusion-webui"

README.md

+9 -5

@@ -78,7 +78,7 @@ A browser interface based on Gradio library for Stable Diffusion.
 - Clip skip
 - Hypernetworks
 - Loras (same as Hypernetworks but more pretty)
-- A sparate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
+- A separate UI where you can choose, with preview, which embeddings, hypernetworks or Loras to add to your prompt
 - Can select to load a different VAE from settings screen
 - Estimated completion time in progress bar
 - API
@@ -88,12 +88,15 @@ A browser interface based on Gradio library for Stable Diffusion.
 - [Alt-Diffusion](https://arxiv.org/abs/2211.06679) support - see [wiki](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Features#alt-diffusion) for instructions
 - Now without any bad letters!
 - Load checkpoints in safetensors format
-- Eased resolution restriction: generated image's domension must be a multiple of 8 rather than 64
+- Eased resolution restriction: generated image's dimension must be a multiple of 8 rather than 64
 - Now with a license!
 - Reorder elements in the UI from settings screen
 
 ## Installation and Running
-Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for both [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended) and [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
+Make sure the required [dependencies](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Dependencies) are met and follow the instructions available for:
+- [NVidia](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-NVidia-GPUs) (recommended)
+- [AMD](https://github.com/AUTOMATIC1111/stable-diffusion-webui/wiki/Install-and-Run-on-AMD-GPUs) GPUs.
+- [Intel CPUs, Intel GPUs (both integrated and discrete)](https://github.com/openvinotoolkit/stable-diffusion-webui/wiki/Installation-on-Intel-Silicon) (external wiki page)
 
 Alternatively, use online services (like Google Colab):
 
@@ -115,15 +118,15 @@ Alternatively, use online services (like Google Colab):
 1. Install the dependencies:
 ```bash
 # Debian-based:
-sudo apt install wget git python3 python3-venv
+sudo apt install wget git python3 python3-venv libgl1 libglib2.0-0
 # Red Hat-based:
 sudo dnf install wget git python3
 # Arch-based:
 sudo pacman -S wget git python3
 ```
 2. Navigate to the directory you would like the webui to be installed and execute the following command:
 ```bash
-bash <(wget -qO- https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh)
+wget -q https://raw.githubusercontent.com/AUTOMATIC1111/stable-diffusion-webui/master/webui.sh
 ```
 3. Run `webui.sh`.
 4. Check `webui-user.sh` for options.
@@ -169,5 +172,6 @@ Licenses for borrowed code can be found in `Settings -> Licenses` screen, and al
 - UniPC sampler - Wenliang Zhao - https://github.com/wl-zhao/UniPC
 - TAESD - Ollin Boer Bohan - https://github.com/madebyollin/taesd
 - LyCORIS - KohakuBlueleaf
+- Restart sampling - lambertae - https://github.com/Newbeeer/diffusion_restart_sampling
 - Initial Gradio script - posted on 4chan by an Anonymous user. Thank you Anonymous user.
 - (You)

extensions-builtin/Lora/extra_networks_lora.py

+9 -1

@@ -6,9 +6,14 @@ class ExtraNetworkLora(extra_networks.ExtraNetwork):
     def __init__(self):
         super().__init__('lora')
 
+        self.errors = {}
+        """mapping of network names to the number of errors the network had during operation"""
+
     def activate(self, p, params_list):
         additional = shared.opts.sd_lora
 
+        self.errors.clear()
+
         if additional != "None" and additional in networks.available_networks and not any(x for x in params_list if x.items[0] == additional):
             p.all_prompts = [x + f"<lora:{additional}:{shared.opts.extra_networks_default_multiplier}>" for x in p.all_prompts]
             params_list.append(extra_networks.ExtraNetworkParams(items=[additional, shared.opts.extra_networks_default_multiplier]))
@@ -56,4 +61,7 @@ def activate(self, p, params_list):
         p.extra_generation_params["Lora hashes"] = ", ".join(network_hashes)
 
     def deactivate(self, p):
-        pass
+        if self.errors:
+            p.comment("Networks with errors: " + ", ".join(f"{k} ({v})" for k, v in self.errors.items()))
+
+            self.errors.clear()
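
The deactivate() hook above now surfaces accumulated per-network failures through p.comment(). The code that fills self.errors lives elsewhere in the extension and is not shown in this commit; as a hedged illustration of the counting convention that deactivate() formats as "name (count)", a minimal sketch:

```python
# Sketch only: record_network_error is an assumed helper, not the repository's code.
def record_network_error(errors: dict, network_name: str) -> None:
    """Increment the failure counter for one network."""
    errors[network_name] = errors.get(network_name, 0) + 1


errors = {}
record_network_error(errors, "my_lora")
record_network_error(errors, "my_lora")
assert errors == {"my_lora": 2}   # deactivate() would then report: "my_lora (2)"
```
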
New file (+31)

@@ -0,0 +1,31 @@
+import torch
+
+import networks
+from modules import patches
+
+
+class LoraPatches:
+    def __init__(self):
+        self.Linear_forward = patches.patch(__name__, torch.nn.Linear, 'forward', networks.network_Linear_forward)
+        self.Linear_load_state_dict = patches.patch(__name__, torch.nn.Linear, '_load_from_state_dict', networks.network_Linear_load_state_dict)
+        self.Conv2d_forward = patches.patch(__name__, torch.nn.Conv2d, 'forward', networks.network_Conv2d_forward)
+        self.Conv2d_load_state_dict = patches.patch(__name__, torch.nn.Conv2d, '_load_from_state_dict', networks.network_Conv2d_load_state_dict)
+        self.GroupNorm_forward = patches.patch(__name__, torch.nn.GroupNorm, 'forward', networks.network_GroupNorm_forward)
+        self.GroupNorm_load_state_dict = patches.patch(__name__, torch.nn.GroupNorm, '_load_from_state_dict', networks.network_GroupNorm_load_state_dict)
+        self.LayerNorm_forward = patches.patch(__name__, torch.nn.LayerNorm, 'forward', networks.network_LayerNorm_forward)
+        self.LayerNorm_load_state_dict = patches.patch(__name__, torch.nn.LayerNorm, '_load_from_state_dict', networks.network_LayerNorm_load_state_dict)
+        self.MultiheadAttention_forward = patches.patch(__name__, torch.nn.MultiheadAttention, 'forward', networks.network_MultiheadAttention_forward)
+        self.MultiheadAttention_load_state_dict = patches.patch(__name__, torch.nn.MultiheadAttention, '_load_from_state_dict', networks.network_MultiheadAttention_load_state_dict)
+
+    def undo(self):
+        self.Linear_forward = patches.undo(__name__, torch.nn.Linear, 'forward')
+        self.Linear_load_state_dict = patches.undo(__name__, torch.nn.Linear, '_load_from_state_dict')
+        self.Conv2d_forward = patches.undo(__name__, torch.nn.Conv2d, 'forward')
+        self.Conv2d_load_state_dict = patches.undo(__name__, torch.nn.Conv2d, '_load_from_state_dict')
+        self.GroupNorm_forward = patches.undo(__name__, torch.nn.GroupNorm, 'forward')
+        self.GroupNorm_load_state_dict = patches.undo(__name__, torch.nn.GroupNorm, '_load_from_state_dict')
+        self.LayerNorm_forward = patches.undo(__name__, torch.nn.LayerNorm, 'forward')
+        self.LayerNorm_load_state_dict = patches.undo(__name__, torch.nn.LayerNorm, '_load_from_state_dict')
+        self.MultiheadAttention_forward = patches.undo(__name__, torch.nn.MultiheadAttention, 'forward')
+        self.MultiheadAttention_load_state_dict = patches.undo(__name__, torch.nn.MultiheadAttention, '_load_from_state_dict')
+

extensions-builtin/Lora/network.py

+5 -2

@@ -133,7 +133,7 @@ def calc_scale(self):
 
         return 1.0
 
-    def finalize_updown(self, updown, orig_weight, output_shape):
+    def finalize_updown(self, updown, orig_weight, output_shape, ex_bias=None):
         if self.bias is not None:
             updown = updown.reshape(self.bias.shape)
             updown += self.bias.to(orig_weight.device, dtype=orig_weight.dtype)
@@ -145,7 +145,10 @@ def finalize_updown(self, updown, orig_weight, output_shape):
         if orig_weight.size().numel() == updown.size().numel():
             updown = updown.reshape(orig_weight.shape)
 
-        return updown * self.calc_scale() * self.multiplier()
+        if ex_bias is not None:
+            ex_bias = ex_bias * self.multiplier()
+
+        return updown * self.calc_scale() * self.multiplier(), ex_bias
 
     def calc_updown(self, target):
         raise NotImplementedError()
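
Because finalize_updown() now returns a pair, calc_updown() implementations (see the following diffs) hand back both a weight delta and an optional bias delta. A hedged sketch of how a caller might fold both into a torch layer; apply_module_update and its dtype handling are assumptions for illustration, not the extension's networks.py:

```python
import torch


def apply_module_update(layer: torch.nn.Linear, net_module) -> None:
    # net_module.calc_updown() is assumed to return (updown, ex_bias),
    # matching the finalize_updown() signature above.
    with torch.no_grad():
        updown, ex_bias = net_module.calc_updown(layer.weight)
        layer.weight.add_(updown.to(layer.weight.dtype))
        if ex_bias is not None and layer.bias is not None:
            layer.bias.add_(ex_bias.to(layer.bias.dtype))
```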

extensions-builtin/Lora/network_full.py

+6 -1

@@ -14,9 +14,14 @@ def __init__(self, net: network.Network, weights: network.NetworkWeights):
         super().__init__(net, weights)
 
         self.weight = weights.w.get("diff")
+        self.ex_bias = weights.w.get("diff_b")
 
     def calc_updown(self, orig_weight):
         output_shape = self.weight.shape
         updown = self.weight.to(orig_weight.device, dtype=orig_weight.dtype)
+        if self.ex_bias is not None:
+            ex_bias = self.ex_bias.to(orig_weight.device, dtype=orig_weight.dtype)
+        else:
+            ex_bias = None
 
-        return self.finalize_updown(updown, orig_weight, output_shape)
+        return self.finalize_updown(updown, orig_weight, output_shape, ex_bias)
New file (+28)

@@ -0,0 +1,28 @@
+import network
+
+
+class ModuleTypeNorm(network.ModuleType):
+    def create_module(self, net: network.Network, weights: network.NetworkWeights):
+        if all(x in weights.w for x in ["w_norm", "b_norm"]):
+            return NetworkModuleNorm(net, weights)
+
+        return None
+
+
+class NetworkModuleNorm(network.NetworkModule):
+    def __init__(self, net: network.Network, weights: network.NetworkWeights):
+        super().__init__(net, weights)
+
+        self.w_norm = weights.w.get("w_norm")
+        self.b_norm = weights.w.get("b_norm")
+
+    def calc_updown(self, orig_weight):
+        output_shape = self.w_norm.shape
+        updown = self.w_norm.to(orig_weight.device, dtype=orig_weight.dtype)
+
+        if self.b_norm is not None:
+            ex_bias = self.b_norm.to(orig_weight.device, dtype=orig_weight.dtype)
+        else:
+            ex_bias = None
+
+        return self.finalize_updown(updown, orig_weight, output_shape, ex_bias)
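
The new norm module follows the same create_module contract as the extension's other module types: given a network's weight bundle, each type either claims it or returns None. A hedged sketch of how such types are typically consulted when a network is loaded; module_types and create_network_module are illustrative names, not the extension's actual loader:

```python
# Assumes ModuleTypeNorm from the new file above is in scope.
module_types = [
    ModuleTypeNorm(),
    # ... the extension's other module types (lora, full, hada, ...) would follow
]


def create_network_module(net, weights):
    """Return the first module implementation that recognizes this weight bundle, or None."""
    for module_type in module_types:
        module = module_type.create_module(net, weights)
        if module is not None:
            return module
    return None
```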
