Releases · SciSharp/LLamaSharp
v0.4.2-preview: new backends
What's Changed
- update webapi example by @xbotter in #39
- MacOS metal support by @SignalRT in #47
- Basic ASP.NET Core website example by @saddam213 in #48
- fix breaking change in llama.cpp; bind to latest version llama.cpp to… by @fwaris in #51
- Documentation Spelling/Grammar by @martindevans in #52
- XML docs fixes by @martindevans in #53
- Cleaned up unnecessary extension methods by @martindevans in #55
- Memory Mapped LoadState/SaveState by @martindevans in #56
- Larger states by @martindevans in #57
- Instruct & Stateless web example implemented by @saddam213 in #59
- Fixed Multiple Enumeration by @martindevans in #54
- Fixed More Multiple Enumeration by @martindevans in #63
- Low level new loading system by @martindevans in #64
- Fixed Memory pinning in Sampling API by @martindevans in #68
- Fixed Spelling Mirostate -> Mirostat by @martindevans in #69
- Fixed Mirostate Sampling by @martindevans in #72
- GitHub actions by @martindevans in #74
- Update llama.cpp binaries to 5f631c2 and align the LlamaContext by @SignalRT in #77
- Expose some native classes by @saddam213 in #80
- feat: update the llama backends. by @AsakusaRinne in #78
New Contributors
- @xbotter made their first contribution in #39
- @saddam213 made their first contribution in #48
- @fwaris made their first contribution in #51
- @martindevans made their first contribution in #52
Full Changelog: v0.4.1-preview...v0.4.2-preview
v0.4.1-preview - follow up llama.cpp latest commit
This is a preview version that follows up the latest changes in llama.cpp.
The CUDA backend is not working correctly yet; we'll release the full v0.4.1 after fixing that.
v0.4.0 - Executor and ChatSession
Version 0.4.0 introduces many breaking changes. However, we strongly recommend upgrading to 0.4.0 because the refactored framework provides better abstractions and stability. The backend packages v0.3.0 and v0.3.1 still work with LLamaSharp v0.4.0.
The main changes:
- Add three-level abstractions: `LLamaModel`, `LLamaExecutor` and `ChatSession` (see the sketch after this list).
- Fix the bug in saving and loading state.
- Support saving/loading a chat session directly.
- Add more flexible APIs in the chat session.
- Add detailed documentation: https://scisharp.github.io/LLamaSharp/0.4/
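For reference, here is a minimal sketch of the three-level API, modeled on the 0.4 examples; the model path, prompt, and parameter values are placeholders, and the `SaveSession`/`LoadSession` calls follow the 0.4 docs:

```csharp
using System;
using System.Collections.Generic;
using LLama;
using LLama.Common;

// Load the weights once (LLamaModel), pick an inference style via an
// executor, then drive the conversation through a ChatSession.
var model = new LLamaModel(new ModelParams("<your model path>", contextSize: 1024, seed: 1337));
var executor = new InteractiveExecutor(model);
var session = new ChatSession(executor);

foreach (var text in session.Chat("Hello, how are you today?",
    new InferenceParams { Temperature = 0.6f, AntiPrompts = new List<string> { "User:" } }))
{
    Console.Write(text);
}

// Sessions can be persisted and restored directly (the path is a placeholder).
session.SaveSession("./chat-session");
session.LoadSession("./chat-session");
```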
Acknowledgements
Many thanks to @TheTerrasque for the help during development! Their fork gave us much inspiration. Many thanks as well to the following contributors!
- MacOS Arm64 support by @SignalRT in #24
- Fixed a typo in FixedSizeQueue by @mlof in #25
- Document interfaces by @mlof in #26
New Contributors
- @SignalRT made their first contribution in #24
- @mlof made their first contribution in #25
v0.3.0 - Load and save state
- Support loading and saving state (see the sketch after this list).
- Support tokenization and detokenization.
- Fix bugs in instruct mode.
- Breaking change: the `n_parts` param is removed.
- Breaking change: `LLamaModelV1` is dropped.
- Remove dependencies on third-party loggers.
- Add a verified model repo on Hugging Face.
- Optimize the examples.
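A rough illustration of how the new state and tokenization features are used; the method names (`Tokenize`, `DeTokenize`, `SaveState`, `LoadState`) and `LLamaParams` arguments are assumptions taken from these notes rather than verified 0.3.0 signatures:

```csharp
using System;
using LLama;

// Assumed API shape; names follow the release notes above.
var model = new LLamaModel(new LLamaParams(model: "<your model path>", n_ctx: 512));

// Round-trip a string through the tokenizer.
var tokens = model.Tokenize("The quick brown fox");
var text = model.DeTokenize(tokens);
Console.WriteLine(text);

// Save the evaluated context so a long prompt does not need to be
// re-processed on the next run, then restore it.
model.SaveState("./model.state");
model.LoadState("./model.state");
```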
v0.2.3 - Inference BUG Fix
Fix some strange behaviors in model inference.
v0.2.2 - Embedder
- Sync with the latest llama.cpp master branch.
- Add `LLamaEmbedder` to support getting the embeddings only (see the sketch after this list).
- Add the `n_gpu_layers` and `prompt_cache_all` params.
- Split the package into a main package + backend package.
v0.2.1 - Chat session, quantization and Web API
- Add basic APIs and chat session.
- Support quantization (see the sketch after this list).
- Add Web API support.
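A hedged sketch of the quantization helper; the `Quantizer.Quantize` name, its signature, and the `q4_0` format string are assumptions drawn from the project's examples of this period:

```csharp
using System;
using LLama;

// Assumed API shape: convert an fp16 GGML model file to a 4-bit one.
string srcPath = "<your fp16 model path>";
string dstPath = "<output quantized model path>";
if (Quantizer.Quantize(srcPath, dstPath, "q4_0"))
{
    Console.WriteLine("Quantization succeeded.");
}
else
{
    Console.WriteLine("Quantization failed.");
}
```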