Commit 32464c7

Merge pull request #120 from SystemKeeper/add-performance-considerations
Add doc about performance considerations
2 parents 681702e + 4d09a79 commit 32464c7

2 files changed (+85, -1 lines)

README.md

Lines changed: 3 additions & 1 deletion
````diff
@@ -120,7 +120,9 @@ Once you've configured your database, providers and github credentials, you'll n
 
 At this point, you should be done. Have a look at the [running garm document](/doc/running_garm.md) for usage instructions and available features.
 
-If you would like to use ```garm``` with a different IaaS than the ones already available, have a loot at the [writing an external provider](/doc/external_provider.md) page.
+If you would like to use ```garm``` with a different IaaS than the ones already available, have a look at the [writing an external provider](/doc/external_provider.md) page.
+
+If you would like to optimize the startup time of new instances, take a look at the [performance considerations](/doc/performance_considerations.md) page.
 
 ## Security considerations
````

doc/performance_considerations.md

Lines changed: 82 additions & 0 deletions
# Performance considerations

Performance is often important when running GitHub action runners with garm. This document shows some ways to reduce the creation time of a GitHub action runner.

## garm specific performance considerations

### Bundle the GitHub action runner

When a new instance is created by garm, it usually downloads the latest available GitHub action runner binary, installs its requirements and then starts it. This can be a time consuming task that quickly adds up when a lot of instances are created by garm throughout the day. It is therefore recommended to bundle the GitHub action runner binary in the image you use.

There are two ways to do that:

1. Add the extracted runner to `/opt/cache/actions-runner/latest`, in which case garm won't do any version checking and will blindly trust that whatever you put there is indeed the latest. This is useful if you want to run a pre-release of the runner or have your own patches applied to it. Note that GitHub runners have an auto-update mechanism: when a runner detects that a new version is available, it updates itself to the latest version.

2. Add the extracted runner to `/opt/cache/actions-runner/$VERSION`, where `$VERSION` is the version of the runner. In this case, if what garm fetches from GitHub is different from what you bundled in the image, it will download and install the version indicated by GitHub (see the sketch after the example steps below).

Note: when bundling the runner with your image, you will have to download it, extract it to one of the locations mentioned above and run `./bin/installdependencies.sh` inside the extracted folder. All dependencies needed to run the runner must be pre-installed when bundling.

Example steps:

```bash
# Create a temporary instance from your base image
lxc launch <BASE_IMAGE> temp

# Enter bash inside the container
lxc exec temp -- bash

# Get and install the runner
mkdir -p /opt/cache/actions-runner/latest
cd /opt/cache/actions-runner/latest
curl -o actions-runner-linux-x64-2.305.0.tar.gz -L https://github.com/actions/runner/releases/download/v2.305.0/actions-runner-linux-x64-2.305.0.tar.gz
tar xzf ./actions-runner-linux-x64-2.305.0.tar.gz
./bin/installdependencies.sh

# Exit the container
exit

# Stop the instance and publish it as a new image
lxc stop temp
lxc publish temp --alias BASE_IMAGE-2.305.0

# Delete the temporary instance
lxc delete temp

# Update garm to use the new image
garm-cli pool update <POOL_ID> \
    --image=BASE_IMAGE-2.305.0
```
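If you prefer the versioned layout from option 2, only the target directory inside the container changes. A minimal sketch, assuming the directory is named after the plain runner version string (e.g. `2.305.0`), which is how `$VERSION` is described above:

```bash
# Versioned layout sketch: install the runner under a directory named after
# its version instead of "latest" (the exact directory name garm expects for
# $VERSION is an assumption here).
RUNNER_VERSION=2.305.0
mkdir -p /opt/cache/actions-runner/${RUNNER_VERSION}
cd /opt/cache/actions-runner/${RUNNER_VERSION}

# Download and extract the same release tarball as in the example above,
# then install its dependencies
curl -o actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz -L \
    "https://github.com/actions/runner/releases/download/v${RUNNER_VERSION}/actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz"
tar xzf ./actions-runner-linux-x64-${RUNNER_VERSION}.tar.gz
./bin/installdependencies.sh
```

If GitHub later reports a newer version than the one bundled here, garm simply downloads that version instead, as described above, so the cache stops saving time rather than breaking anything.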
### Disable updates

By default, garm configures the `cloud-init` process of a new instance to update packages on startup. To skip this step (and thereby reduce the time needed to start an instance), garm can be configured accordingly.

Example of disabling this on the LXD provider:

```bash
garm-cli pool update <POOL_ID> \
    --extra-specs='{"disable_updates": true}'
```
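Both optimizations can also be applied in a single call. A sketch, assuming `garm-cli pool update` accepts the two flags shown individually above when passed together:

```bash
# Point the pool at the pre-baked image and skip package updates on boot
garm-cli pool update <POOL_ID> \
    --image=BASE_IMAGE-2.305.0 \
    --extra-specs='{"disable_updates": true}'
```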
## LXD specific performance considerations

### Storage driver

LXD supports various [storage drivers](https://linuxcontainers.org/lxd/docs/latest/reference/storage_drivers/) out of the box. These storage drivers support different features which influence the creation time of a new instance. Most notably, check whether the driver supports `Optimized image storage` and `Optimized instance creation`, as these have the biggest impact on instance creation time.

If you're not sure which storage driver is currently used, check your storage pools with `lxc storage list`.
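As a quick check, and as a possible follow-up if your current pool uses a driver without those optimizations, something along these lines can be used. The pool name `fastpool` and the choice of `zfs` are just examples; any driver with optimized image storage works:

```bash
# Show existing storage pools and the driver each one uses
lxc storage list

# Example only: create an additional pool backed by a driver that supports
# optimized image storage / instance creation (requires ZFS tools installed)
lxc storage create fastpool zfs

# New instances can then be placed on that pool explicitly
lxc launch <BASE_IMAGE> temp --storage fastpool
```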
### Use shiftfs/idmapped mounts

Whenever a new unprivileged instance is started on LXD, its filesystem gets remapped. This is a time consuming task whose duration depends on the size of the image being used. For large images this can easily take over a minute to complete. There are two ways to get around this: `shiftfs` or `idmapped mounts`. While the latter is the preferred one, not all filesystems currently support it, so in most cases enabling `shiftfs` shows a significant performance improvement.

Example of how to enable it on a snap-installed LXD:

```bash
snap set lxd shiftfs.enable=true
systemctl reload snap.lxd.daemon
```
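To verify whether LXD has picked up either mechanism, its server info can be inspected. A quick check; the exact key names come from LXD's kernel feature report and may vary between versions:

```bash
# LXD reports detected kernel features in its server info; look for
# "idmapped_mounts" and "shiftfs" entries set to "true"
lxc info | grep -E 'idmapped_mounts|shiftfs'
```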
Some details and discussions around `shiftfs` can be found [here](https://discuss.linuxcontainers.org/t/trying-out-shiftfs/5155).

Note: when `shiftfs` is used, mounting folders between host and container might need some extra steps to remain secure. See [here](https://discuss.linuxcontainers.org/t/share-folders-and-volumes-between-host-and-containers/7735) for details.
