
Add bare bones web UI #74


Merged
merged 11 commits into from
Aug 25, 2022

Conversation

TesseractCat
Contributor

@TesseractCat TesseractCat commented Aug 24, 2022

This PR adds a simple Web UI that allows users to generate images remotely using the simplet2i interface.

TODO:

  • If the txt2img function gets a progress callback parameter, I could add a progress bar.
  • img2img
  • iterations
  • Configurable sampler
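The first TODO item could be sketched as follows. This is a hypothetical illustration, not the actual simplet2i API: the `progress_callback` parameter and the per-step loop are assumptions about how such a hook might look, so a web handler could forward step counts to the browser.

```python
# Hypothetical sketch: a txt2img-style function that reports progress
# through an optional callback, so a web UI could drive a progress bar.
# The signature and step loop are illustrative, not the simplet2i API.
def txt2img(prompt, steps=50, progress_callback=None):
    for step in range(steps):
        # ... one denoising step would run here ...
        if progress_callback is not None:
            progress_callback(step + 1, steps)
    return [f"image for {prompt!r}"]

# Usage: collect progress fractions that the UI could render as a bar.
updates = []
txt2img("a lighthouse at dusk", steps=4,
        progress_callback=lambda done, total: updates.append(done / total))
```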

@lstein
Collaborator

lstein commented Aug 24, 2022

Wow. This is fantastic. Thank you.
I am not going to merge your PR just yet because I am 90% of the way through a code refactoring, and would like to resolve conflicts just once. Will happen Thursday or Friday, depending on how busy I get.

@SergioDiazF

SergioDiazF commented Aug 25, 2022

Hi! I've been working on a UI design, so I'd love to contribute it if you like!

Edit
It is already programmed and functional.

[Screenshot: 2022-08-25 145010]

@vinyvince

Would it not be worth integrating https://github.com/hlky/stable-diffusion-webui, with its GFPGAN face-restoration option?

@lstein lstein merged commit d04518e into invoke-ai:main Aug 25, 2022
@lstein
Collaborator

lstein commented Aug 25, 2022

The web server is very impressive work. Thank you! A couple of changes I made during merging:

  1. The "batch" option uses a large amount of memory, scaling linearly with the number of images being produced. "iterations" is a little slower, but only uses the amount of memory that it takes to produce one image. So I changed batch to iterations throughout in order to avoid lots of user complaints on low-memory machines.
  2. I moved the static HTML file into a top-level directory named static. I've started using this directory to store other static assets, such as embedded images in the README. (Oops, I just saw that there are .js and .css files that need to move too; fixing...)
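The memory tradeoff in point 1 can be illustrated with a sketch (the function names and seed handling are hypothetical, not the actual simplet2i API): a batch holds all of its images in flight at once, so peak memory grows with the count, while iterations generates one image per pass and keeps peak memory constant.

```python
# Illustrative sketch of the batch vs. iterations tradeoff described
# above. Names are hypothetical, not the actual simplet2i API.
def generate_batch(prompt, n, seed=0):
    # Peak memory grows with n: all n images are produced together.
    return [f"{prompt} (seed {seed + i})" for i in range(n)]

def generate_iterations(prompt, n, seed=0):
    # Peak memory stays at one image per pass; slightly slower overall,
    # but the same set of images comes out for the same starting seed.
    images = []
    for i in range(n):
        images.extend(generate_batch(prompt, 1, seed=seed + i))
    return images
```

With seeds advanced per pass, both paths yield identical results; only the peak memory profile differs, which is why iterations is the safer default on low-memory machines.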

Some questions and suggestions:

  1. Does the server do garbage collection of the images that it generates and stores locally, or is this something that users should be aware of and clean up periodically?
  2. It would be nice to print the seed under each image.
  3. On my machine when I resize the web browser to full monitor width, the layout stays fixed. Can the width available for the images be increased?

@TesseractCat
Contributor Author

Thanks for merging! To answer your questions:

  1. No, it keeps the images around for now. I like it this way, as I can go back and look through the images I've generated, but maybe cleanup could be a command-line flag.
  2. Right now, the seed and prompt are put in the title attribute of each image, so you can hover over an image to see its seed. You can also click on an image to restore its generation settings (including the seed).
  3. Yeah, right now it's fixed at a max-width of 1000px, but this could be changed to be a percentage. I initially had it at 60% but it caused issues on mobile.

Also, maybe it would be possible to have both batches and iterations as settings? I like to set batches, as it's only a little slower but outputs multiple images at once. Maybe 'Iteration Count' and 'Batch Size (warning: high VRAM usage)'.
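Exposing both settings could be sketched like this (parameter names are hypothetical, not the actual simplet2i API): each of `iterations` passes produces `batch_size` images in one go, so peak memory tracks the batch size while the total output is the product of the two.

```python
# Hypothetical sketch of exposing both settings at once: total output
# is iterations * batch_size, while peak memory follows batch_size.
def generate(prompt, iterations=1, batch_size=1):
    images = []
    for it in range(iterations):
        # One pass: batch_size images share memory at the same time.
        images.extend(
            f"{prompt} [pass {it}, image {b}]" for b in range(batch_size)
        )
    return images
```

Users on low-VRAM machines would leave batch_size at 1 and raise iterations; users with headroom could trade memory for speed.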

@1blackbar

1blackbar commented Aug 25, 2022

This looks great already. Do you plan to include inpainting in the UI?
I'd say keep both iterations and batches; don't remove anything. It's always better to have all possible options available, as some people have different needs and specs than others, and we can test whether our machines are good enough for all the options.
Could you add code to support embedding fine-tuned checkpoints, like here? https://github.com/nicolai256/Stable-textual-inversion_win
That would let us fine-tune on our own images and test how good the training is.

@namion

namion commented Aug 25, 2022

Hi, I'm brand new to github and I don't have any coding experience so I hope it is ok for me to comment (not sure of the etiquette for noobs like myself chiming in). This is my first time installing and using python/anaconda/github, just to give you an idea of how fresh I am. I'm diving in because I'm so impressed with this technology and where it is heading.

I love the progress on this branch and the new web UI is great but I noticed it is missing a field for the strength parameter for img2img usage. I know how impactful that parameter can be so I was wondering if it would be added in the future? Or am I overlooking something? Thank you all for the amazing work you are doing.

edit: It just occurred to me that I can still use the strength parameter by passing it in the prompt box (appending -f0.7). I should have thought of that sooner. But for consistency's sake, it might be nice to have an actual strength input box when an img2img file is loaded. :)
