
Commit e5c6428

docs: improvements
Signed-off-by: Ettore Di Giacinto <[email protected]>
1 parent: 8b812c0

File tree: 3 files changed (+74, −79 lines)


docs/content/docs/getting-started/container-images.md

Lines changed: 1 addition & 1 deletion
@@ -124,7 +124,7 @@ docker run -p 8080:8080 --name local-ai -ti -v localai-models:/build/models loca
 
 {{% /alert %}}
 
-### Available images
+### Available AIO images
 
 | Description | Quay | Docker Hub |
 | --- | --- |-----------------------------------------------|
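
For reference, a minimal sketch of running one of the AIO images listed in the table above (the `latest-aio-cpu` tag is an assumption based on the published image naming; verify against the table for the exact tags):

```bash
# Run a CPU-only All-in-One image (tag name assumed; verify against the table)
docker run -p 8080:8080 --name local-ai -ti \
  -v localai-models:/build/models \
  localai/localai:latest-aio-cpu
```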
Lines changed: 49 additions & 49 deletions
@@ -1,31 +1,39 @@
-
-+++
-disableToc = false
-title = "Run models manually"
-weight = 5
-icon = "rocket_launch"
+---
 
-+++
+disableToc: false
+title: "Run models manually"
+weight: 5
+icon: "rocket_launch"
 
+---
 
-1. Ensure you have a model file, a configuration YAML file, or both. Customize model defaults and specific settings with a configuration file. For advanced configurations, refer to the [Advanced Documentation](docs/advanced).
+# Run Models Manually
 
-2. For GPU Acceleration instructions, visit [GPU acceleration](docs/features/gpu-acceleration).
+Follow these steps to manually run models using LocalAI:
+
+1. **Prepare Your Model and Configuration Files**:
+   Ensure you have a model file and a configuration YAML file, if necessary. Customize model defaults and specific settings with a configuration file. For advanced configurations, refer to the [Advanced Documentation]({{% relref "docs/advanced" %}}).
+
+2. **GPU Acceleration**:
+   For instructions on GPU acceleration, visit the [GPU acceleration]({{% relref "docs/features/gpu-acceleration" %}}) page.
+
+3. **Run LocalAI**:
+   Choose one of the following methods to run LocalAI:
 
 {{< tabs tabTotal="5" >}}
 {{% tab tabName="Docker" %}}
 
 ```bash
-# Prepare the models into the `model` directory
+# Prepare the models into the `models` directory
 mkdir models
 
-# copy your models to it
+# Copy your models to the directory
 cp your-model.gguf models/
 
-# run the LocalAI container
+# Run the LocalAI container
 docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-ai:latest --models-path /models --context-size 700 --threads 4
-# You should see:
-#
+
+# Expected output:
 # ┌───────────────────────────────────────────────────┐
 # │ Fiber v2.42.0 │
 # │ http://127.0.0.1:8080 │
@@ -35,7 +43,7 @@ docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-
 # │ Prefork ....... Disabled PID ................. 1 │
 # └───────────────────────────────────────────────────┘
 
-# Try the endpoint with curl
+# Test the endpoint with curl
 curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d '{
      "model": "your-model.gguf",
      "prompt": "A long time ago in a galaxy far, far away",
@@ -44,28 +52,25 @@ curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d
 ```
 
 {{% alert icon="💡" %}}
-
 **Other Docker Images**:
 
-For other Docker images, please see the table in
-https://localai.io/basics/getting_started/#container-images.
-
+For other Docker images, please refer to the table in [the container images section]({{% relref "docs/getting-started/container-images" %}}).
 {{% /alert %}}
 
-Here is a more specific example:
+### Example:
 
 ```bash
 mkdir models
 
 # Download luna-ai-llama2 to models/
 wget https://huggingface.co/TheBloke/Luna-AI-Llama2-Uncensored-GGUF/resolve/main/luna-ai-llama2-uncensored.Q4_0.gguf -O models/luna-ai-llama2
 
-# Use a template from the examples
+# Use a template from the examples, if needed
 cp -rf prompt-templates/getting_started.tmpl models/luna-ai-llama2.tmpl
 
 docker run -p 8080:8080 -v $PWD/models:/models -ti --rm quay.io/go-skynet/local-ai:latest --models-path /models --context-size 700 --threads 4
 
-# Now API is accessible at localhost:8080
+# Now the API is accessible at localhost:8080
 curl http://localhost:8080/v1/models
 # {"object":"list","data":[{"id":"luna-ai-llama2","object":"model"}]}
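
Step 1 of the rewritten page mentions an optional configuration YAML. A minimal sketch of such a file for the luna-ai-llama2 example, assuming LocalAI's model config fields `name`, `parameters.model`, and `context_size` (the values are illustrative; see the Advanced Documentation for the full schema):

```bash
# Write a hypothetical model config next to the model file
# (field values are illustrative assumptions)
cat > models/luna-ai-llama2.yaml <<'EOF'
name: luna-ai-llama2
parameters:
  model: luna-ai-llama2
context_size: 700
EOF
```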

@@ -78,34 +83,34 @@ curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/jso
 ```
 
 {{% alert note %}}
-- If running on Apple Silicon (ARM) it is **not** suggested to run on Docker due to emulation. Follow the [build instructions]({{%relref "docs/getting-started/build" %}}) to use Metal acceleration for full GPU support.
-- If you are running Apple x86_64 you can use `docker`, there is no additional gain into building it from source.
+- If running on Apple Silicon (ARM), it is **not** recommended to run on Docker due to emulation. Follow the [build instructions]({{% relref "docs/getting-started/build" %}}) to use Metal acceleration for full GPU support.
+- If you are running on Apple x86_64, you can use Docker without additional gain from building it from source.
 {{% /alert %}}
 
 {{% /tab %}}
-{{% tab tabName="Docker compose" %}}
+{{% tab tabName="Docker Compose" %}}
 
 ```bash
 # Clone LocalAI
 git clone https://github.com/go-skynet/LocalAI
 
 cd LocalAI
 
-# (optional) Checkout a specific LocalAI tag
+# (Optional) Checkout a specific LocalAI tag
 # git checkout -b build <TAG>
 
-# copy your models to models/
+# Copy your models to the models directory
 cp your-model.gguf models/
 
-# (optional) Edit the .env file to set things like context size and threads
+# (Optional) Edit the .env file to set parameters like context size and threads
 # vim .env
 
-# start with docker compose
+# Start with Docker Compose
 docker compose up -d --pull always
-# or you can build the images with:
+# Or build the images with:
 # docker compose up -d --build
 
-# Now API is accessible at localhost:8080
+# Now the API is accessible at localhost:8080
 curl http://localhost:8080/v1/models
 # {"object":"list","data":[{"id":"your-model.gguf","object":"model"}]}
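
Once the compose stack is up, the same OpenAI-compatible endpoints shown in the Docker tab are available. A sketch of a chat request against it (the model name is assumed to match the file copied into `models/`):

```bash
# Send a chat completion request to the running stack
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "model": "your-model.gguf",
  "messages": [{"role": "user", "content": "How are you?"}],
  "temperature": 0.9
}'
```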

@@ -117,48 +122,43 @@ curl http://localhost:8080/v1/completions -H "Content-Type: application/json" -d
 ```
 
 {{% alert icon="💡" %}}
-
 **Other Docker Images**:
 
-For other Docker images, please see the table in
-https://localai.io/basics/getting_started/#container-images.
-
+For other Docker images, please refer to the table in [Getting Started](https://localai.io/basics/getting_started/#container-images).
 {{% /alert %}}
 
-Note: If you are on Windows, please make sure the project is on the Linux Filesystem, otherwise loading models might be slow. For more Info: [Microsoft Docs](https://learn.microsoft.com/en-us/windows/wsl/filesystems)
+Note: If you are on Windows, ensure the project is on the Linux filesystem to avoid slow model loading. For more information, see the [Microsoft Docs](https://learn.microsoft.com/en-us/windows/wsl/filesystems).
 
 {{% /tab %}}
-
 {{% tab tabName="Kubernetes" %}}
 
-See the [Kubernetes section]({{%relref "docs/getting-started/kubernetes" %}}).
+For Kubernetes deployment, see the [Kubernetes section]({{% relref "docs/getting-started/kubernetes" %}}).
 
 {{% /tab %}}
-{{% tab tabName="From binary" %}}
+{{% tab tabName="From Binary" %}}
 
-LocalAI binary releases are available in [Github](https://github.com/go-skynet/LocalAI/releases).
+LocalAI binary releases are available on [GitHub](https://github.com/go-skynet/LocalAI/releases).
 
 {{% alert icon="⚠️" %}}
-If you are installing on MacOS, when you excecute the binary, you will get a message saying:
+If installing on macOS, you might encounter a message saying:
 
-> "local-ai-git-Darwin-arm64" (or whatever name you gave to the binary) can't be opened because Apple cannot check it for malicious software.
+> "local-ai-git-Darwin-arm64" (or the name you gave the binary) can't be opened because Apple cannot check it for malicious software.
 
-Hit OK, and go to Settings > Privacy & Security > Security and look for the message:
+Hit OK, then go to Settings > Privacy & Security > Security and look for the message:
 
 > "local-ai-git-Darwin-arm64" was blocked from use because it is not from an identified developer.
 
-And press "Allow Anyway"
+Press "Allow Anyway."
 {{% /alert %}}
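
As a command-line alternative to the Gatekeeper click-through described in the alert, a sketch (the binary name follows the release naming shown above; `xattr -d com.apple.quarantine` clears macOS's quarantine flag):

```bash
# Clear the quarantine attribute instead of clicking through the dialogs
xattr -d com.apple.quarantine ./local-ai-git-Darwin-arm64

# Make the binary executable and start it, reusing flags from the Docker examples
chmod +x ./local-ai-git-Darwin-arm64
./local-ai-git-Darwin-arm64 --models-path ./models --context-size 700
```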
 
-
 {{% /tab %}}
+{{% tab tabName="From Source" %}}
 
-{{% tab tabName="From source" %}}
+For instructions on building LocalAI from source, see the [Build Section]({{% relref "docs/getting-started/build" %}}).
 
-See the [build section]({{%relref "docs/getting-started/build" %}}).
-
 {{% /tab %}}
-
 {{< /tabs >}}
 
 For more model configurations, visit the [Examples Section](https://github.com/mudler/LocalAI/tree/master/examples/configurations).
+
+---
Lines changed: 24 additions & 29 deletions
@@ -1,65 +1,60 @@
-
 +++
 disableToc = false
 title = "Quickstart"
 weight = 3
 url = '/basics/getting_started/'
 icon = "rocket_launch"
-
 +++
 
-**LocalAI** is the free, Open Source alternative to OpenAI (Anthropic, ...), acting as a drop-in replacement REST API for local inferencing. It allows you to run [LLMs]({{%relref "docs/features/text-generation" %}}), generate images, and audio, all locally or on-prem with consumer-grade hardware, supporting multiple model families and architectures.
-
-## Installation
+**LocalAI** is a free, open-source alternative to OpenAI (Anthropic, etc.), functioning as a drop-in replacement REST API for local inferencing. It allows you to run [LLMs]({{% relref "docs/features/text-generation" %}}), generate images, and produce audio, all locally or on-premises with consumer-grade hardware, supporting multiple model families and architectures.
 
-### Using the Bash Installer
+## Using the Bash Installer
 
-You can easily install LocalAI using the bash installer with the following command:
+Install LocalAI easily using the bash installer with the following command:
 
-```
+```sh
 curl https://localai.io/install.sh | sh
 ```
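
For readers wary of piping a script straight into `sh`, a download-and-inspect variant (generic shell practice, not specific to this installer):

```bash
# Fetch the installer, review it, then run it
curl -fsSL -o install.sh https://localai.io/install.sh
less install.sh
sh install.sh
```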

-See also the [Installer Options]({{%relref "docs/advanced/installer" %}}) for the full list of options.
+For a full list of options, refer to the [Installer Options]({{% relref "docs/advanced/installer" %}}) documentation.
 
-Binaries can be also [manually downloaded]({{%relref "docs/reference/binaries" %}}).
+Binaries can also be [manually downloaded]({{% relref "docs/reference/binaries" %}}).
 
-### Using Container Images
+## Using Container Images or Kubernetes
 
-LocalAI is available as a container image compatible with various container engines like Docker, Podman, and Kubernetes. Container images are published on [quay.io](https://quay.io/repository/go-skynet/local-ai?tab=tags&tag=latest) and [Docker Hub](https://hub.docker.com/r/localai/localai).
+LocalAI is available as a container image compatible with various container engines such as Docker, Podman, and Kubernetes. Container images are published on [quay.io](https://quay.io/repository/go-skynet/local-ai?tab=tags&tag=latest) and [Docker Hub](https://hub.docker.com/r/localai/localai).
 
-See: [Using container images]({{%relref "docs/getting-started/container-images" %}})
+For detailed instructions, see [Using container images]({{% relref "docs/getting-started/container-images" %}}). For Kubernetes deployment, see [Run with Kubernetes]({{% relref "docs/getting-started/kubernetes" %}}).
 
-### Running LocalAI with All-in-One (AIO) Images
+## Running LocalAI with All-in-One (AIO) Images
 
-> _Do you have already a model file? Skip to [Run models manually]({{%relref "docs/getting-started/manual" %}})_.
+> _Already have a model file? Skip to [Run models manually]({{% relref "docs/getting-started/manual" %}})_.
 
-LocalAI's All-in-One (AIO) images are pre-configured with a set of models and backends to fully leverage almost all the LocalAI featureset. If you don't need models pre-configured, you can use the standard [images]({{%relref "docs/getting-started/container-images" %}}).
+LocalAI's All-in-One (AIO) images are pre-configured with a set of models and backends to fully leverage almost all the features of LocalAI. If pre-configured models are not required, you can use the standard [images]({{% relref "docs/getting-started/container-images" %}}).
 
-These images are available for both CPU and GPU environments. The AIO images are designed to be easy to use and requires no configuration.
+These images are available for both CPU and GPU environments. AIO images are designed for ease of use and require no additional configuration.
 
-It suggested to use the AIO images if you don't want to configure the models to run on LocalAI. If you want to run specific models, you can use the [manual method]({{%relref "docs/getting-started/manual" %}}).
+It is recommended to use AIO images if you prefer not to configure the models manually or via the web interface. For running specific models, refer to the [manual method]({{% relref "docs/getting-started/manual" %}}).
 
-The AIO Images comes pre-configured with the following features:
+The AIO images come pre-configured with the following features:
 - Text to Speech (TTS)
 - Speech to Text
 - Function calling
 - Large Language Models (LLM) for text generation
 - Image generation
 - Embedding server
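
To illustrate the feature list, a hypothetical smoke test against a running AIO container. The `text-embedding-ada-002` alias is an assumption based on the OpenAI-compatible model names the AIO images ship with; see the container images page for the actual list:

```bash
# Query the bundled embedding server (model alias assumed)
curl http://localhost:8080/v1/embeddings -H "Content-Type: application/json" \
  -d '{"model": "text-embedding-ada-002", "input": "A test sentence"}'
```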

-See: [Using container images]({{%relref "docs/getting-started/container-images" %}}) for instructions on how to use AIO images.
-
+For instructions on using AIO images, see [Using container images]({{% relref "docs/getting-started/container-images#all-in-one-images" %}}).
 
-## What's next?
+## What's Next?
 
-There is much more to explore! run any model from huggingface, video generation, and voice cloning with LocalAI, check out the [features]({{%relref "docs/features" %}}) section for a full overview.
+There is much more to explore with LocalAI! You can run any model from Hugging Face, perform video generation, and also voice cloning. For a comprehensive overview, check out the [features]({{% relref "docs/features" %}}) section.
 
-Explore further resources and community contributions:
+Explore additional resources and community contributions:
 
-- [Try it out]({{%relref "docs/getting-started/try-it-out" %}})
-- [Build LocalAI and the container image]({{%relref "docs/getting-started/build" %}})
-- [Run models manually]({{%relref "docs/getting-started/manual" %}})
-- [Installer Options]({{%relref "docs/advanced/installer" %}})
-- [Run from Container images]({{%relref "docs/getting-started/container-images" %}})
+- [Installer Options]({{% relref "docs/advanced/installer" %}})
+- [Run from Container images]({{% relref "docs/getting-started/container-images" %}})
+- [Examples to try from the CLI]({{% relref "docs/getting-started/try-it-out" %}})
+- [Build LocalAI and the container image]({{% relref "docs/getting-started/build" %}})
+- [Run models manually]({{% relref "docs/getting-started/manual" %}})
 - [Examples](https://github.com/mudler/LocalAI/tree/master/examples#examples)
