`docs/content/docs/getting-started/container-images.md` (61 additions, 5 deletions)
@@ -1,13 +1,14 @@
+++
disableToc = false
-title = "Available Container images"
-weight = 25
+title = "Run with container images"
+weight = 6
+url = '/basics/container/'
+ico = "rocket_launch"
+++

LocalAI provides a variety of images to support different environments. These images are available on [quay.io](https://quay.io/repository/go-skynet/local-ai?tab=tags) and [Docker Hub](https://hub.docker.com/r/localai/localai).

-> _For All-in-One image with a pre-configured set of models and backends, see the [AIO Images]({{%relref "docs/reference/aio-images" %}})._
+All-in-One images come with a pre-configured set of models and backends; standard images do not have any models pre-configured or installed.

For GPU acceleration on Nvidia graphics cards, use the Nvidia/CUDA images; if you don't have a GPU, use the CPU images. If you have an AMD GPU or Apple Silicon, see the [build section]({{%relref "docs/getting-started/build" %}}).

@@ -22,6 +23,62 @@ For GPU Acceleration support for Nvidia video graphic cards, use the Nvidia/CUDA
{{% /alert %}}

## All-in-one images

All-In-One images come pre-configured with a set of models and backends to leverage almost the full LocalAI feature set. These images are available for both CPU and GPU environments, are designed to be easy to use, and require no configuration. The model configurations can be found [here](https://github.com/mudler/LocalAI/tree/master/aio), separated by size.

In the AIO images, models are configured with the names of OpenAI models; however, they are actually backed by open-source models. You can find the mapping in the table below.

| Category | Model name | Real model (CPU) | Real model (GPU) |
| ---- | ---- | ---- | ---- |
| Text Generation |`gpt-4`|`phi-2`|`hermes-2-pro-mistral`|

…

| Description | Quay | Docker Hub |
| ---- | ---- | ---- |
| Latest images for CPU |`quay.io/go-skynet/local-ai:latest-aio-cpu`|`localai/localai:latest-aio-cpu`|
| Versioned image (e.g. for CPU) |`quay.io/go-skynet/local-ai:{{< version >}}-aio-cpu`|`localai/localai:{{< version >}}-aio-cpu`|
| Latest images for Nvidia GPU (CUDA11) |`quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-11`|`localai/localai:latest-aio-gpu-nvidia-cuda-11`|
| Latest images for Nvidia GPU (CUDA12) |`quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-12`|`localai/localai:latest-aio-gpu-nvidia-cuda-12`|
| Latest images for AMD GPU |`quay.io/go-skynet/local-ai:latest-aio-gpu-hipblas`|`localai/localai:latest-aio-gpu-hipblas`|
| Latest images for Intel GPU (sycl f16) |`quay.io/go-skynet/local-ai:latest-aio-gpu-intel-f16`|`localai/localai:latest-aio-gpu-intel-f16`|
| Latest images for Intel GPU (sycl f32) |`quay.io/go-skynet/local-ai:latest-aio-gpu-intel-f32`|`localai/localai:latest-aio-gpu-intel-f32`|
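
For instance, a minimal sketch of starting one of these images with Docker (assuming an Nvidia GPU, the NVIDIA Container Toolkit installed on the host, and the CUDA 12 tag from the table above):

```bash
# Start the CUDA 12 All-in-One image and expose the API on port 8080.
# --gpus all requires the NVIDIA Container Toolkit on the host.
docker run -ti --name local-ai -p 8080:8080 --gpus all \
  quay.io/go-skynet/local-ai:latest-aio-gpu-nvidia-cuda-12
```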
### Available environment variables

The AIO images inherit the same environment variables as the base images and the environment of LocalAI (which you can inspect by calling `--help`). In addition, they support the following environment variables, available only in the container image:

| Variable | Default | Description |
| ---------------------| ------- | ----------- |
|`PROFILE`| Auto-detected | The size of the model to use. Available: `cpu`, `gpu-8g`|
|`MODELS`| Auto-detected | A list of model YAML configuration file URIs/URLs (see also [running models]({{%relref "docs/getting-started/run-other-models" %}})) |
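
As an illustration, a hedged sketch of overriding these variables with `-e` when starting the container (the profile value comes from the table above; the `MODELS` URL is only a placeholder):

```bash
# Force the CPU profile instead of relying on auto-detection, and point
# MODELS at a custom model configuration file (placeholder URL).
docker run -ti --name local-ai -p 8080:8080 \
  -e PROFILE=cpu \
  -e MODELS=https://example.com/my-model.yaml \
  localai/localai:latest-aio-cpu
```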
## Standard container images

Standard container images do not have pre-installed models.

Images are available with and without Python dependencies. Note that images with Python dependencies are considerably bigger (on the order of 17 GB).

Images with `core` in the tag are smaller and do not contain any Python dependencies.
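
A minimal sketch of running a standard image (here the `latest` tag is assumed to be the plain CPU image; pick the tag matching your hardware from the tables below):

```bash
# Standard images start without any models installed; models are added
# afterwards (for example via the model gallery or a configuration file).
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest
```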

{{< tabs tabTotal="6" >}}
{{% tab tabName="Vanilla / CPU Images" %}}
@@ -100,4 +157,3 @@ For GPU Acceleration support for Nvidia video graphic cards, use the Nvidia/CUDA
`docs/content/docs/getting-started/quickstart.md` (32 additions, 5 deletions)
@@ -30,7 +30,7 @@ Before you begin, ensure you have a container engine installed if you are not us

> _Do you already have a model file? Skip to [Run models manually]({{%relref "docs/getting-started/manual" %}}) or [Run other models]({{%relref "docs/getting-started/run-other-models" %}}) to use an already-configured model_.

-LocalAI's All-in-One (AIO) images are pre-configured with a set of models and backends to fully leverage almost all the LocalAI featureset. If you don't need models pre-configured, you can use the standard [images]({{%relref "docs/reference/container-images" %}}).
+LocalAI's All-in-One (AIO) images are pre-configured with a set of models and backends to leverage almost the full LocalAI feature set. If you don't need models pre-configured, you can use the standard [images]({{%relref "docs/getting-started/container-images" %}}).

These images are available for both CPU and GPU environments. The AIO images are designed to be easy to use and require no configuration.
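
For example, a minimal sketch of starting the CPU AIO image with Docker (assuming the `latest-aio-cpu` tag listed on the container images page):

```bash
# Start the CPU All-in-One image; the API (and WebUI) listen on port 8080.
docker run -ti --name local-ai -p 8080:8080 localai/localai:latest-aio-cpu
```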
@@ -91,7 +91,7 @@ services:
        # capabilities: [gpu]
```

-For a list of all the container-images available, see [Container images]({{%relref "docs/reference/container-images" %}}). To learn more about All-in-one images instead, see [All-in-one Images]({{%relref "docs/reference/aio-images" %}}).
+For a list of all the container images available, see [Container images]({{%relref "docs/getting-started/container-images" %}}). To learn more about All-in-One images, see [All-in-One Images]({{%relref "docs/getting-started/container-images" %}}).
…

LocalAI is also available as a standalone binary. Binaries are compiled for Linux and macOS and automatically uploaded to the GitHub releases. Windows is known to work with WSL.

You can check out the releases at https://github.com/mudler/LocalAI/releases.

{{< tabs tabTotal="2" >}}
{{% tab tabName="Linux" %}}
| CPU flagset | Link |
| --- | --- |
| avx2 |[Download](https://github.com/mudler/LocalAI/releases/download/{{< version >}}/local-ai-avx2-Linux-x86_64) |
| avx512 |[Download](https://github.com/mudler/LocalAI/releases/download/{{< version >}}/local-ai-avx512-Linux-x86_64) |
| avx |[Download](https://github.com/mudler/LocalAI/releases/download/{{< version >}}/local-ai-avx-Linux-x86_64) |
{{% /tab %}}
{{% tab tabName="MacOS" %}}
| CPU flagset | Link |
| --- | --- |
| avx2 |[Download](https://github.com/mudler/LocalAI/releases/download/{{< version >}}/local-ai-avx2-Darwin-arm64) |
| avx512 |[Download](https://github.com/mudler/LocalAI/releases/download/{{< version >}}/local-ai-avx512-Darwin-arm64) |
| avx |[Download](https://github.com/mudler/LocalAI/releases/download/{{< version >}}/local-ai-avx-Darwin-arm64) |
{{% /tab %}}

{{< /tabs >}}
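
Once downloaded, a hedged sketch of running the binary on Linux (the file name assumes the avx2 build from the table above):

```bash
# Make the downloaded binary executable and start LocalAI
# (by default the API listens on port 8080).
chmod +x local-ai-avx2-Linux-x86_64
./local-ai-avx2-Linux-x86_64
```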
## Try it out

-LocalAI does not ship a webui by default, however you can use 3rd party projects to interact with it (see also [Integrations]({{%relref "docs/integrations" %}}) ). However, you can test out the API endpoints using `curl`, you can find few examples below.
+Connect to LocalAI: by default, the WebUI should be accessible at http://localhost:8080. You can also use third-party projects to interact with LocalAI as you would use OpenAI (see also [Integrations]({{%relref "docs/integrations" %}})).

+You can also test out the API endpoints using `curl`; a few examples follow.

### Text Generation
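
For instance, a minimal sketch of a chat completion request against the OpenAI-compatible endpoint (assuming an AIO image, where `gpt-4` is mapped to a local model as described on the container images page):

```bash
# Ask the locally served model for a chat completion via the OpenAI-compatible API.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4",
    "messages": [{"role": "user", "content": "How are you doing?"}],
    "temperature": 0.1
  }'
```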
@@ -300,6 +327,6 @@ Explore further resources and community contributions:
- [Build LocalAI and the container image]({{%relref "docs/getting-started/build" %}})