
Commit 03efa72

docs: enhance and condense few sections
Signed-off-by: Ettore Di Giacinto <[email protected]>
1 parent 987b7ad commit 03efa72

5 files changed: +124 −79 lines


docs/content/docs/reference/container-images.md renamed to docs/content/docs/getting-started/container-images.md

Lines changed: 61 additions & 5 deletions
@@ -1,13 +1,14 @@
-
 +++
 disableToc = false
-title = "Available Container images"
-weight = 25
+title = "Run with container images"
+weight = 6
+url = '/basics/container/'
+ico = "rocket_launch"
 +++

 LocalAI provides a variety of images to support different environments. These images are available on [quay.io](https://quay.io/repository/go-skynet/local-ai?tab=tags) and [Docker Hub](https://hub.docker.com/r/localai/localai).

-> _For All-in-One image with a pre-configured set of models and backends, see the [AIO Images]({{%relref "docs/reference/aio-images" %}})._
+All-in-One images come with a pre-configured set of models and backends; standard images do not have any model pre-configured or installed.

 For GPU Acceleration support for Nvidia video graphic cards, use the Nvidia/CUDA images, if you don't have a GPU, use the CPU images. If you have AMD or Mac Silicon, see the [build section]({{%relref "docs/getting-started/build" %}}).

@@ -22,6 +23,62 @@ For GPU Acceleration support for Nvidia video graphic cards, use the Nvidia/CUDA

 {{% /alert %}}

+## All-in-one images
+
+All-In-One images come pre-configured with a set of models and backends to fully leverage almost all of the LocalAI feature set. These images are available for both CPU and GPU environments. The AIO images are designed to be easy to use and require no configuration. The model configurations can be found [here](https://github.com/mudler/LocalAI/tree/master/aio), separated by size.
+
+In the AIO images, models are configured with the names of OpenAI models; however, they are actually backed by open-source models. The mapping is shown in the table below.
+
+| Category | Model name | Real model (CPU) | Real model (GPU) |
+| ---- | ---- | ---- | ---- |
+| Text Generation | `gpt-4` | `phi-2` | `hermes-2-pro-mistral` |
+| Multimodal Vision | `gpt-4-vision-preview` | `bakllava` | `llava-1.6-mistral` |
+| Image Generation | `stablediffusion` | `stablediffusion` | `dreamshaper-8` |
+| Speech to Text | `whisper-1` | `whisper` with `whisper-base` model | <= same |
+| Text to Speech | `tts-1` | `en-us-amy-low.onnx` from `rhasspy/piper` | <= same |
+| Embeddings | `text-embedding-ada-002` | `all-MiniLM-L6-v2` in Q4 | `all-MiniLM-L6-v2` |
+
+### Usage
+
+Select the image (CPU or GPU) and start the container with Docker:
+
+```bash
+# CPU example
+docker run -p 8080:8080 --name local-ai -ti localai/localai:latest-aio-cpu
+```
+
+LocalAI will automatically download all the required models, and the API will be available at [localhost:8080](http://localhost:8080/v1/models).
+
+### Available images
+
+| Description | Quay | Docker Hub |
+| --- | --- | --- |
+| Latest images for CPU | `quay.io/go-skynet/local-ai:latest-aio-cpu` | `localai/localai:latest-aio-cpu` |
+| Versioned image (e.g. for CPU) | `quay.io/go-skynet/local-ai:{{< version >}}-aio-cpu` | `localai/localai:{{< version >}}-aio-cpu` |
+| Latest images for Nvidia GPU (CUDA11) | `quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-11` | `localai/localai:latest-aio-gpu-nvidia-cuda-11` |
+| Latest images for Nvidia GPU (CUDA12) | `quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-12` | `localai/localai:latest-aio-gpu-nvidia-cuda-12` |
+| Latest images for AMD GPU | `quay.io/go-skynet/local-ai:latest-aio-gpu-hipblas` | `localai/localai:latest-aio-gpu-hipblas` |
+| Latest images for Intel GPU (sycl f16) | `quay.io/go-skynet/local-ai:latest-aio-gpu-intel-f16` | `localai/localai:latest-aio-gpu-intel-f16` |
+| Latest images for Intel GPU (sycl f32) | `quay.io/go-skynet/local-ai:latest-aio-gpu-intel-f32` | `localai/localai:latest-aio-gpu-intel-f32` |
+
+### Available environment variables
+
+The AIO images inherit the same environment variables as the base images and the LocalAI environment (which you can inspect by calling `--help`). In addition, they support the following variables, available only in the container image:
+
+| Variable | Default | Description |
+| --- | --- | --- |
+| `PROFILE` | Auto-detected | The size of the model to use. Available: `cpu`, `gpu-8g` |
+| `MODELS` | Auto-detected | A list of model YAML configuration file URIs/URLs (see also [running models]({{%relref "docs/getting-started/run-other-models" %}})) |
+
+
+## Standard container images
+
+Standard container images do not have pre-installed models.
+
+Images are available with and without Python dependencies. Note that images with Python dependencies are larger (on the order of 17GB).
+
+Images with `core` in the tag are smaller and do not contain any Python dependencies.
+
 {{< tabs tabTotal="6" >}}
 {{% tab tabName="Vanilla / CPU Images" %}}

@@ -100,4 +157,3 @@ For GPU Acceleration support for Nvidia video graphic cards, use the Nvidia/CUDA
 ## See Also

 - [GPU acceleration]({{%relref "docs/features/gpu-acceleration" %}})
-- [AIO Images]({{%relref "docs/reference/aio-images" %}})
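
As a quick, non-authoritative illustration of the AIO usage and the `PROFILE`/`MODELS` variables documented in the diff above, here is a minimal sketch; the model YAML URL is a placeholder, not a real asset:

```bash
# Sketch: run the CPU AIO image, overriding the documented PROFILE and MODELS variables.
# The YAML URL below is a placeholder for a real model configuration file.
docker run -p 8080:8080 --name local-ai -ti \
  -e PROFILE=cpu \
  -e MODELS="https://example.com/my-model.yaml" \
  localai/localai:latest-aio-cpu

# Once the container is up, list the configured models via the OpenAI-compatible API:
curl http://localhost:8080/v1/models
```

Leaving `PROFILE` and `MODELS` unset keeps the auto-detected defaults described in the table above.
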

docs/content/docs/getting-started/kubernetes.md (new file)

Lines changed: 30 additions & 0 deletions
@@ -0,0 +1,30 @@
++++
+disableToc = false
+title = "Run with Kubernetes"
+weight = 6
+url = '/basics/kubernetes/'
+ico = "rocket_launch"
++++
+
+For installing LocalAI in Kubernetes, you can use the `go-skynet` helm chart:
+
+```bash
+# Install the helm repository
+helm repo add go-skynet https://go-skynet.github.io/helm-charts/
+# Update the repositories
+helm repo update
+# Get the values
+helm show values go-skynet/local-ai > values.yaml
+
+# Edit the values if needed
+# vim values.yaml ...
+
+# Install the helm chart
+helm install local-ai go-skynet/local-ai -f values.yaml
+```
+
+If you prefer to install from a manifest file, you can apply the deployment file directly and customize it as you like:
+
+```bash
+kubectl apply -f https://raw.githubusercontent.com/mudler/LocalAI/master/examples/kubernetes/deployment.yaml
+```
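
To verify an installation done with the helm command above, a hedged sketch follows; it assumes the release was installed as `local-ai` and that the resulting Service is also named `local-ai` and listens on port 8080 (check `kubectl get svc` and your `values.yaml` if your setup differs):

```bash
# Check that the LocalAI pod(s) came up
kubectl get pods

# Find the Service created by the chart (the name may differ depending on values.yaml)
kubectl get svc

# Forward the assumed Service/port to localhost and query the API
kubectl port-forward svc/local-ai 8080:8080 &
curl http://localhost:8080/v1/models
```
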

docs/content/docs/getting-started/manual.md

Lines changed: 1 addition & 16 deletions
@@ -131,22 +131,7 @@ Note: If you are on Windows, please make sure the project is on the Linux Filesy

 {{% tab tabName="Kubernetes" %}}

-For installing LocalAI in Kubernetes, you can use the following helm chart:
-
-```bash
-# Install the helm repository
-helm repo add go-skynet https://go-skynet.github.io/helm-charts/
-# Update the repositories
-helm repo update
-# Get the values
-helm show values go-skynet/local-ai > values.yaml
-
-# Edit the values value if needed
-# vim values.yaml ...
-
-# Install the helm chart
-helm install local-ai go-skynet/local-ai -f values.yaml
-```
+See the [Kubernetes section]({{%relref "docs/getting-started/kubernetes" %}}).

 {{% /tab %}}
 {{% tab tabName="From binary" %}}

docs/content/docs/getting-started/quickstart.md

Lines changed: 32 additions & 5 deletions
@@ -30,7 +30,7 @@ Before you begin, ensure you have a container engine installed if you are not us

 > _Do you have already a model file? Skip to [Run models manually]({{%relref "docs/getting-started/manual" %}}) or [Run other models]({{%relref "docs/getting-started/run-other-models" %}}) to use an already-configured model_.

-LocalAI's All-in-One (AIO) images are pre-configured with a set of models and backends to fully leverage almost all the LocalAI featureset. If you don't need models pre-configured, you can use the standard [images]({{%relref "docs/reference/container-images" %}}).
+LocalAI's All-in-One (AIO) images are pre-configured with a set of models and backends to fully leverage almost all the LocalAI featureset. If you don't need models pre-configured, you can use the standard [images]({{%relref "docs/getting-started/container-images" %}}).

 These images are available for both CPU and GPU environments. The AIO images are designed to be easy to use and requires no configuration.

@@ -91,7 +91,7 @@ services:
 # capabilities: [gpu]
 ```

-For a list of all the container-images available, see [Container images]({{%relref "docs/reference/container-images" %}}). To learn more about All-in-one images instead, see [All-in-one Images]({{%relref "docs/reference/aio-images" %}}).
+For a list of all the container-images available, see [Container images]({{%relref "docs/getting-started/container-images" %}}). To learn more about All-in-one images instead, see [All-in-one Images]({{%relref "docs/getting-started/container-images" %}}).

 {{% alert icon="💡" %}}

@@ -114,9 +114,36 @@ docker run -p 8080:8080 --name local-ai -ti -v localai-models:/build/models loca

 {{% /alert %}}

+## From binary
+
+LocalAI is also available as a standalone binary. Binaries are compiled for Linux and macOS and are automatically uploaded to the GitHub releases. Windows is known to work with WSL.
+
+You can check out the releases at https://github.com/mudler/LocalAI/releases.
+
+{{< tabs tabTotal="2" >}}
+{{% tab tabName="Linux" %}}
+| CPU flagset | Link |
+| --- | --- |
+| avx2 | [Download](https://github.com/mudler/LocalAI/releases/download/{{< version >}}/local-ai-avx2-Linux-x86_64) |
+| avx512 | [Download](https://github.com/mudler/LocalAI/releases/download/{{< version >}}/local-ai-avx512-Linux-x86_64) |
+| avx | [Download](https://github.com/mudler/LocalAI/releases/download/{{< version >}}/local-ai-avx-Linux-x86_64) |
+{{% /tab %}}
+{{% tab tabName="MacOS" %}}
+| CPU flagset | Link |
+| --- | --- |
+| avx2 | [Download](https://github.com/mudler/LocalAI/releases/download/{{< version >}}/local-ai-avx2-Darwin-arm64) |
+| avx512 | [Download](https://github.com/mudler/LocalAI/releases/download/{{< version >}}/local-ai-avx512-Darwin-arm64) |
+| avx | [Download](https://github.com/mudler/LocalAI/releases/download/{{< version >}}/local-ai-avx-Darwin-arm64) |
+
+{{% /tab %}}
+
+{{< /tabs >}}
+
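
As a minimal sketch of using one of the binaries listed above, assuming the Linux x86_64 AVX2 asset (here `{{< version >}}` stands for the release version, as in the tables, and port 8080 is the default mentioned in the "Try it out" section below):

```bash
# Download the AVX2 Linux binary from the release assets listed above
curl -L -o local-ai \
  "https://github.com/mudler/LocalAI/releases/download/{{< version >}}/local-ai-avx2-Linux-x86_64"

# Make it executable and start it
chmod +x local-ai
./local-ai

# In another terminal, the API/WebUI should be reachable on localhost:8080
curl http://localhost:8080/v1/models
```
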
 ## Try it out

-LocalAI does not ship a webui by default, however you can use 3rd party projects to interact with it (see also [Integrations]({{%relref "docs/integrations" %}}) ). However, you can test out the API endpoints using `curl`, you can find few examples below.
+Connect to LocalAI: by default, the WebUI should be accessible at http://localhost:8080. You can also use third-party projects to interact with LocalAI as you would with OpenAI (see also [Integrations]({{%relref "docs/integrations" %}})).
+
+You can also test out the API endpoints using `curl`; see the examples below.

 ### Text Generation

300327
- [Build LocalAI and the container image]({{%relref "docs/getting-started/build" %}})
301328
- [Run models manually]({{%relref "docs/getting-started/manual" %}})
302329
- [Run other models]({{%relref "docs/getting-started/run-other-models" %}})
303-
- [Container images]({{%relref "docs/reference/container-images" %}})
304-
- [All-in-one Images]({{%relref "docs/reference/aio-images" %}})
330+
- [Container images]({{%relref "docs/getting-started/container-images" %}})
331+
- [All-in-one Images]({{%relref "docs/getting-started/container-images" %}})
305332
- [Examples](https://github.com/mudler/LocalAI/tree/master/examples#examples)

docs/content/docs/reference/aio-images.md

Lines changed: 0 additions & 53 deletions
This file was deleted.
