# Run Models Manually
Follow these steps to manually run models using LocalAI:
1. **Prepare Your Model and Configuration Files**:

   Ensure you have a model file and a configuration YAML file, if necessary. Customize model defaults and specific settings with a configuration file. For advanced configurations, refer to the [Advanced Documentation]({{% relref "docs/advanced" %}}).

2. **GPU Acceleration**:

   For instructions on GPU acceleration, visit the [GPU acceleration]({{% relref "docs/features/gpu-acceleration" %}}) page.
3. **Run LocalAI**:

   Choose one of the following methods to run LocalAI:

   - If running on Apple Silicon (ARM), it is **not** recommended to run on Docker due to emulation. Follow the [build instructions]({{% relref "docs/getting-started/build" %}}) to use Metal acceleration for full GPU support.
   - If you are running on Apple x86_64, you can use Docker; there is no additional gain from building it from source.
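The configuration YAML mentioned in step 1 can be a minimal sketch like the following (the model name and file name are placeholders; the fields follow LocalAI's model configuration format):

```yaml
# Hypothetical example: models/my-model.yaml
name: my-model              # name used in API requests (placeholder)
parameters:
  model: my-model.gguf      # model file in your models directory (placeholder)
  temperature: 0.7          # default sampling temperature
context_size: 2048          # tokens of context to allocate
```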
For other Docker images, please refer to the table in [Getting Started](https://localai.io/basics/getting_started/#container-images).
{{% /alert %}}
Note: If you are on Windows, ensure the project is on the Linux filesystem to avoid slow model loading. For more information, see the [Microsoft Docs](https://learn.microsoft.com/en-us/windows/wsl/filesystems).
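For Docker users, a Compose file is another way to start the container. A sketch, assuming the CPU all-in-one image tag and default port (check the image table referenced above for the right tag for your hardware):

```yaml
# docker-compose.yaml (sketch)
services:
  api:
    image: localai/localai:latest-aio-cpu   # assumed CPU AIO tag; pick one for your hardware
    ports:
      - "8080:8080"                         # LocalAI's default API port
    volumes:
      - ./models:/build/models              # persist models; path may differ by image version
```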
{{% /tab %}}
{{% tab tabName="Kubernetes" %}}
For Kubernetes deployment, see the [Kubernetes section]({{%relref "docs/getting-started/kubernetes" %}}).
{{% /tab %}}
{{% tab tabName="From Binary" %}}
LocalAI binary releases are available on [GitHub](https://github.com/go-skynet/LocalAI/releases).
{{% alert icon="⚠️" %}}
If installing on macOS, you might encounter a message saying:
> "local-ai-git-Darwin-arm64" (or the name you gave the binary) can't be opened because Apple cannot check it for malicious software.
Hit OK, then go to Settings > Privacy & Security > Security and look for the message:
> "local-ai-git-Darwin-arm64" was blocked from use because it is not from an identified developer.
Press "Allow Anyway."
{{% /alert %}}
{{% /tab %}}
{{% tab tabName="From Source" %}}
For instructions on building LocalAI from source, see the [Build Section]({{% relref "docs/getting-started/build" %}}).
{{% /tab %}}
{{< /tabs >}}
For more model configurations, visit the [Examples Section](https://github.com/mudler/LocalAI/tree/master/examples/configurations).
**LocalAI** is a free, open-source alternative to OpenAI (Anthropic, etc.), functioning as a drop-in replacement REST API for local inferencing. It allows you to run [LLMs]({{% relref "docs/features/text-generation" %}}), generate images, and produce audio, all locally or on-premises with consumer-grade hardware, supporting multiple model families and architectures.

## Installation
## Using the Bash Installer
Install LocalAI easily using the bash installer with the following command:
```sh
curl https://localai.io/install.sh | sh
```
For a full list of options, refer to the [Installer Options]({{%relref "docs/advanced/installer" %}}) documentation.
Binaries can also be [manually downloaded]({{%relref "docs/reference/binaries" %}}).
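However you install it, LocalAI serves its OpenAI-compatible REST API on port 8080 by default. A chat completion request body sent to `/v1/chat/completions` follows the familiar OpenAI shape (the model name here is a placeholder for whatever you have configured or downloaded):

```json
{
  "model": "my-model",
  "messages": [
    { "role": "user", "content": "How are you?" }
  ],
  "temperature": 0.7
}
```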
## Using Container Images or Kubernetes
LocalAI is available as a container image compatible with various container engines such as Docker, Podman, and Kubernetes. Container images are published on [quay.io](https://quay.io/repository/go-skynet/local-ai?tab=tags&tag=latest) and [Docker Hub](https://hub.docker.com/r/localai/localai).
For detailed instructions, see [Using container images]({{%relref "docs/getting-started/container-images" %}}). For Kubernetes deployment, see [Run with Kubernetes]({{% relref "docs/getting-started/kubernetes" %}}).
## Running LocalAI with All-in-One (AIO) Images
> _Already have a model file? Skip to [Run models manually]({{%relref "docs/getting-started/manual" %}})_.
LocalAI's All-in-One (AIO) images are pre-configured with a set of models and backends to fully leverage almost all the features of LocalAI. If pre-configured models are not required, you can use the standard [images]({{%relref "docs/getting-started/container-images" %}}).
These images are available for both CPU and GPU environments. AIO images are designed for ease of use and require no additional configuration.
It is recommended to use AIO images if you prefer not to configure the models manually or via the web interface. For running specific models, refer to the [manual method]({{%relref "docs/getting-started/manual" %}}).
The AIO images come pre-configured with the following features:
- Text to Speech (TTS)
- Speech to Text
- Function calling
- Large Language Models (LLM) for text generation
- Image generation
- Embedding server
For instructions on using AIO images, see [Using container images]({{% relref "docs/getting-started/container-images#all-in-one-images" %}}).
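As one concrete example of the features above, a text-to-speech request to LocalAI's `/tts` endpoint takes a small JSON body like this sketch (the model name `tts-1` is what the AIO images are assumed to preconfigure; adjust to your setup):

```json
{
  "model": "tts-1",
  "input": "Hello from LocalAI!"
}
```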
## What's Next?
There is much more to explore with LocalAI! You can run any model from Hugging Face, perform video generation, and also voice cloning. For a comprehensive overview, check out the [features]({{%relref "docs/features" %}}) section.
Explore additional resources and community contributions:
- [Try it out]({{% relref "docs/getting-started/try-it-out" %}})
- [Build LocalAI and the container image]({{% relref "docs/getting-started/build" %}})