Releases: av/harbor
v0.3.10 - Modular MAX
Modular MAX
MAX is an inference platform from Modular (the creators of the Mojo language). You will need an Nvidia GPU to use this service.
```bash
# Pull the image, start the service, tail logs
# ⚠️ Service will download the configured model automatically
harbor up modularmax --tail
```
Misc
docs
- dev pipeline improvements, enabling normalisation of local MD links in the future (to fix dead local links in the Wiki/repo)
- normalising formatting for the service metadata
- modernised `searxng` wiki entry
`harbor help` / `harbor how`
- CLI help updates
Full Changelog: v0.3.9...v0.3.10
v0.3.9 - Agent Zero
Agent Zero
General-purpose personal assistant with: Web RAG, persistent memory, tools, Browser Use and more.
```bash
# --open is optional, opens the app in the browser
harbor up agentzero --open
```
Misc
metamcp
- fix broken build, point to v0.4.3 (see #156)
mistralrs
- fixing wrong service tag
Full Changelog: v0.3.8...v0.3.9
v0.3.8 - LocalAI, Local Deep Research
LocalAI
All-in-one service to download and run LLMs and TTS/STT/image generation models.
```bash
harbor up localai
```
Local Deep Research
Deep Research functionality with local LLMs
```bash
harbor up ldr searxng
```
Harbor Boost - new workflows
dot
- Draft of Thoughts
grug
- Grug make plan with simple word-think. Grug find new path in head-picture that smart-talk no find.
polyglot
- Model generates answers in multiple languages, then synthesises a final solution
tri
- Generates three aspects, then three pitfalls, and then three alternative answers before synthesising a final solution
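For illustration, a tri-style workflow could be sketched like this (a hypothetical sketch, not Boost's actual code; `llm` stands for any callable that sends a prompt to a model and returns its reply as text):

```python
def tri(llm, question: str) -> str:
    """Hypothetical sketch of a tri-style workflow:
    aspects -> pitfalls -> alternatives -> synthesis."""
    aspects = llm(f"List three key aspects of this problem:\n{question}")
    pitfalls = llm(
        f"Given these aspects:\n{aspects}\n"
        f"List three likely pitfalls when answering:\n{question}"
    )
    alts = llm(
        f"Considering these pitfalls:\n{pitfalls}\n"
        f"Give three alternative answers to:\n{question}"
    )
    # Final pass: collapse the three alternatives into one answer
    return llm(
        f"Synthesise a single final answer to '{question}' "
        f"from these alternatives:\n{alts}"
    )
```

Each step feeds the previous step's output back into the prompt, so the model critiques its own candidate answers before committing to one.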
Misc
boost
- Switching to av/tools, use `uv`
- `llm.stream_completion` allows skipping emit to the client
- querying downstream models is cached for 1 minute
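The one-minute cache for downstream model queries could look roughly like this (a hypothetical sketch, not Boost's implementation; `TTLCache` and `cached_query` are illustrative names):

```python
import time

class TTLCache:
    """Minimal time-based cache: entries expire after ttl seconds."""
    def __init__(self, ttl: float = 60.0):
        self.ttl = ttl
        self._store = {}  # key -> (timestamp, value)

    def get(self, key):
        hit = self._store.get(key)
        if hit is None:
            return None
        ts, value = hit
        if time.monotonic() - ts > self.ttl:
            del self._store[key]  # expired, drop it
            return None
        return value

    def put(self, key, value):
        self._store[key] = (time.monotonic(), value)

def cached_query(cache, llm, prompt):
    """Return a fresh cached completion if present, otherwise query the model."""
    result = cache.get(prompt)
    if result is None:
        result = llm(prompt)
        cache.put(prompt, result)
    return result
```

Repeated identical prompts within the TTL window then hit the cache instead of the downstream model.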
ollama
- disabling automatic restarts to align with other Harbor services
searxng
- aligning configuration with other services, configurable image/version
- fixed healthcheck
- `harbor config set searxng.internal_url` to point all Harbor searxng use to an external SearXNG instance
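For example, to point Harbor at an already-running external SearXNG instance (the URL below is illustrative):

```shell
# Use an external SearXNG instead of the bundled one (example URL)
harbor config set searxng.internal_url http://192.168.1.10:8080
```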
librechat
- connect to `boost`
ollama
- modelfile for Gemma 3 QAT w/ tool use in the template
Full Changelog: v0.3.7...v0.3.8
v0.3.7 - Tools
In this release, Harbor gets access to MCP and OpenAPI tools ecosystems with the following new services:
```bash
# Bring your MCP tools to Open WebUI
harbor up mcpo metamcp
```
More details in the Harbor Tools guide
Additionally, I'm launching av/tools to simplify containerized tool use in general.
Full Changelog: v0.3.6...v0.3.7
v0.3.6 - LibreTranslate
Free and Open Source Machine Translation API, entirely self-hosted.
```bash
# [Optional] pre-pull the image
harbor pull libretranslate

# Start the service
harbor up libretranslate
```
Pulls translation models on the first start, which may take a while; monitor with `harbor logs libretranslate`.
Misc
app
- bundle docs in GH action
- attempt to fix #64
boost
- basic support for proxying tool calls
- logging revamp
- request ID tracking
n8n
- fix typo in the docs preventing initial workflow import
Full Changelog: v0.3.5...v0.3.6
v0.3.5 - in-app docs
Harbor App now has dedicated service pages. There are plenty of plans for them, but we're starting simple: you'll be able to perform the same actions as on the Home page, plus see the service wiki in the app.
Misc
- `harbor dev app` helper
- App: bumping Tauri deps
Full Changelog: v0.3.4...v0.3.5
v0.3.4 - llama-swap
llama-swap
llama-swap is a lightweight, transparent proxy server that provides automatic model swapping to llama.cpp's server.
```bash
# [Optional] pre-pull the image
harbor pull llamaswap

# Edit the swap config
open $(harbor home)/llamaswap/config.yaml

# Run the service
harbor up llamaswap
```
Misc
boost
- docs revamp
- fixing plain proxy without modules
tgi
- HF cache normalised
raglite
- adding missing traefik config
Full Changelog: v0.3.3...v0.3.4
v0.3.3 - oterm, RAGLite
oterm
The text-based terminal client for Ollama.
```bash
harbor up ollama
harbor oterm
```
RAGLite
⚠️ Unfortunately, the current integration is not fully compatible with Ollama. See the docs for more info and this issue for details.
RAGLite is a Python toolkit for Retrieval-Augmented Generation (RAG) with PostgreSQL or SQLite. Harbor's integration is centered around the provided chainlit WebUI.
ROCm Ollama
Automatic AMD detection and capability by @cedstrom in #143
DND Module for Boost
dnd-skill-check.mp4
This module makes the LLM pass a skill check before generating a reply to your last message.
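Conceptually, that is a d20 roll against a difficulty class. A hypothetical sketch (not the module's actual code; `skill_check` and `guarded_reply` are illustrative names):

```python
def skill_check(rng, dc: int = 12) -> bool:
    """Roll a d20; the check passes if the roll meets the difficulty class."""
    return rng.randint(1, 20) >= dc

def guarded_reply(rng, llm, message: str, dc: int = 12) -> str:
    """Only produce a real reply when the model passes its skill check."""
    if skill_check(rng, dc):
        return llm(message)
    return "*fumbles the roll and mumbles something incoherent*"
```

In practice `rng` would be Python's `random.Random`; any object with a compatible `randint(a, b)` works, which also makes the behaviour easy to test.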
App - service names, tooltips
App now displays actual Service names and tooltips with service info. Huge thanks to @cedstrom for the inspiration 🙌
Misc
boost
- There's now a starter repo for standalone use of Harbor Boost
- Removed incompatible `num_ctx` from default LLM params
- Fixed streaming for incomplete/merged chunks (seen in Azure/Groq APIs)
n8n
- fixed persistence for custom module installation (broken workspace path)
docs
- service index now generated from app service metadata (now has tags)
ollama
- configure default ctx length with `harbor ctx`
webtop
- restored Harbor App functionality
New Contributors
Full Changelog: v0.3.2...v0.3.3
v0.3.2
This is a very minor bugfix release
boost
- now correctly handles incomplete chunks from downstream APIs by attempting buffering and then parsing (tested with Groq API)
Full Changelog: v0.3.1...v0.3.2
v0.3.1
This is a maintenance release with a few fixes, nothing exciting
harbor dev docs
- fixes relative URLs so that Boost README links now finally work
- README - revamp, supporters
boost
- fixed mismatch between docs and actual env vars
r0
- workflow for R1-like reasoning chains for any LLM (including older ones, like Llama 2)
markov
- Open WebUI-only, serves an artifact showing a token graph for the current completion
docs
- numerous tweaks and adjustments
n8n
- fixed missing EOF preventing `harbor env n8n` from working as expected
txtai
- restored functionality with a monkey patch until this PR is merged
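The r0-style two-pass approach mentioned above can be sketched like this (hypothetical; `llm` is any prompt-to-text callable, not Boost's actual API):

```python
def r0(llm, question: str) -> str:
    """Two-pass sketch: elicit an explicit reasoning chain first,
    then ask for a final answer grounded in that chain."""
    thoughts = llm(
        "Think step by step about the question below. "
        "Write out your reasoning, but no final answer yet.\n"
        f"Question: {question}"
    )
    # Second pass: the model sees its own reasoning and answers
    return llm(
        f"Question: {question}\n"
        f"Reasoning so far:\n{thoughts}\n"
        "Now give only the final answer."
    )
```

Because the reasoning is produced by an ordinary prompt rather than special training, this pattern works with any chat model, including older ones.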
Full Changelog: v0.3.0...v0.3.1