
Commit 53092eb

Merge pull request #25 from AsakusaRinne/doc_ci
docs: update the information to v0.11.0.
2 parents 976f7fc + 36202be commit 53092eb

File tree

5 files changed (+91, -35 lines)

CONTRIBUTING.md

Lines changed: 58 additions & 14 deletions
@@ -2,21 +2,65 @@

Hi, welcome to develop LLamaSharp with us together! We are always open to every contributor and any form of contribution! If you want to actively help maintain this library, please contact us to get write access after some PRs. (Email: [email protected])

- In this page, we'd like to introduce how to make contributions here easily. 😊
+ In this page, we introduce how to make contributions here easily. 😊

- ## Compile the native library from source
+ ## The goal of LLamaSharp

- Firstly, please clone the [llama.cpp](https://github.com/ggerganov/llama.cpp) repository and following the instructions in [llama.cpp readme](https://github.com/ggerganov/llama.cpp#build) to configure your local environment.
+ In the beginning, LLamaSharp was a C# binding of [llama.cpp](https://github.com/ggerganov/llama.cpp). It provided only some wrappers for llama.cpp so that C#/.NET users could run LLM models efficiently on their local devices, even without any C++ experience. After around a year of development, more tools and integrations have been added, significantly expanding the range of applications of LLamaSharp. Though llama.cpp is still the only backend of LLamaSharp, the goal of this repository is to be an efficient and easy-to-use library for LLM inference, rather than just a binding of llama.cpp.

- If you want to support cublas in the compilation, please make sure that you've installed the cuda.
+ Accordingly, the development of LLamaSharp is divided into two main directions:

- When building from source, please add `-DBUILD_SHARED_LIBS=ON` to the cmake instruction. For example, when building with cublas but without openblas, use the following instruction:
+ 1. Making LLamaSharp more efficient. For example, `BatchedExecutor` accepts multiple queries and generates responses for them at the same time, which significantly improves throughput. This part mostly involves the native APIs and the executors in LLamaSharp.
+ 2. Making LLamaSharp easier to use. We believe the best library lets users build powerful functionality with simple code. Higher-level APIs and integrations with other libraries are the key points here.
+
+ ## How to compile the native library from source
+
+ If you want to contribute to the first direction of our goal, you may need to compile the native library yourself.
+
+ Firstly, please follow the instructions in the [llama.cpp readme](https://github.com/ggerganov/llama.cpp#build) to configure your local environment. Most importantly, CMake 3.14 or higher should be installed on your device.
+
+ Secondly, clone the llama.cpp repository. You could clone it manually and check out the right commit according to the [Map of LLamaSharp and llama.cpp versions](https://github.com/SciSharp/LLamaSharp?tab=readme-ov-file#map-of-llamasharp-and-llama.cpp-versions), or clone it as a submodule when cloning LLamaSharp:
+
+ ```shell
+ git clone --recursive https://github.com/SciSharp/LLamaSharp.git
+ ```
+
+ If you want cuBLAS support in the compilation, please make sure that you've installed CUDA. If you are using an Intel CPU, please check the highest AVX ([Advanced Vector Extensions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions)) level that your device supports.
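If you are unsure which AVX level your CPU supports, one quick way to check from .NET is to query the hardware-intrinsic flags. This is only an illustrative sketch added for this guide (not part of the commit above); `Avx512F.IsSupported` requires .NET 8 or later, while `Avx` and `Avx2` have been available since .NET Core 3.0.

```cs
using System;
using System.Runtime.Intrinsics.X86;

class AvxCheck
{
    static void Main()
    {
        // Each flag reports whether the current CPU (and runtime) supports that instruction set.
        Console.WriteLine($"AVX:    {Avx.IsSupported}");
        Console.WriteLine($"AVX2:   {Avx2.IsSupported}");
        Console.WriteLine($"AVX512: {Avx512F.IsSupported}"); // .NET 8+ only
    }
}
```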
+ As shown in the [llama.cpp cmake file](https://github.com/ggerganov/llama.cpp/blob/master/CMakeLists.txt), there are many options that could be enabled or disabled when building the library. The following ones are commonly used when building it as a native library for LLamaSharp.
+
+ ```cpp
+ option(BUILD_SHARED_LIBS "build shared libraries") // Please always enable it
+ option(LLAMA_NATIVE "llama: enable -march=native flag") // Could be disabled
+ option(LLAMA_AVX "llama: enable AVX") // Enable it if the highest supported AVX level is AVX
+ option(LLAMA_AVX2 "llama: enable AVX2") // Enable it if the highest supported AVX level is AVX2
+ option(LLAMA_AVX512 "llama: enable AVX512") // Enable it if the highest supported AVX level is AVX512
+ option(LLAMA_BLAS "llama: use BLAS") // Enable it if you want to use a BLAS library to accelerate computation on the CPU
+ option(LLAMA_CUDA "llama: use CUDA") // Enable it if you have a CUDA device
+ option(LLAMA_CLBLAST "llama: use CLBlast") // Enable it if you have a device with CLBlast or OpenCL support, for example, some AMD GPUs
+ option(LLAMA_VULKAN "llama: use Vulkan") // Enable it if you have a device with Vulkan support
+ option(LLAMA_METAL "llama: use Metal") // Enable it if you are using a Mac with a Metal device
+ option(LLAMA_BUILD_TESTS "llama: build tests") // Please disable it
+ option(LLAMA_BUILD_EXAMPLES "llama: build examples") // Please disable it
+ option(LLAMA_BUILD_SERVER "llama: build server example") // Please disable it
+ ```
+
+ Most importantly, `-DBUILD_SHARED_LIBS=ON` must be added to the cmake instruction; the other options depend on your needs. For example, when building with cuBLAS but without OpenBLAS, use the following instructions:

```bash
+ mkdir build && cd build
cmake .. -DLLAMA_CUBLAS=ON -DBUILD_SHARED_LIBS=ON
+ cmake --build . --config Release
```

- After running `cmake --build . --config Release`, you could find the `llama.dll`, `llama.so` or `llama.dylib` in your build directory. After pasting it to `LLamaSharp/LLama/runtimes` you can use it as the native library in LLamaSharp.
+ Now you could find `llama.dll`, `libllama.so` or `llama.dylib` in your build directory (or in `build/bin`).
+
+ To load the compiled native library, please add the following code to the very beginning of your program:
+
+ ```cs
+ NativeLibraryConfig.Instance.WithLibrary("<Your native library path>");
+ ```
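For reference, here is a minimal sketch of how the configured native library is then used to load a model. It is illustrative only and not part of this commit; the model path and parameters are placeholders, and the high-level API details may differ slightly between LLamaSharp versions.

```cs
using LLama;
using LLama.Common;
using LLama.Native;

// Must run before any other LLamaSharp call, otherwise the default bundled backend is loaded.
NativeLibraryConfig.Instance.WithLibrary("<Your native library path>");

// Placeholder path to any local GGUF model file.
var parameters = new ModelParams("<Your model path>") { ContextSize = 1024 };
using var model = LLamaWeights.LoadFromFile(parameters);
using var context = model.CreateContext(parameters);
```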
## Add a new feature to LLamaSharp
@@ -39,19 +83,19 @@ You could use exactly the same prompt, the same model and the same parameters to

If the experiment shows that it works well in llama.cpp but not in LLamaSharp, you can start searching for the problem. The cause could vary, but a good approach is to add log prints in the llama.cpp code and use that build in LLamaSharp after compilation. That way, when running LLamaSharp, you can see what happens inside the native library.

- After finding out the reason, a painful but happy process comes. When working on the BUG fix, there's only one rule to follow, that is keeping the examples working well. If the modification fixed the BUG but impact on other functions, it would not be a good fix.
-
- During the BUG fix process, please don't hesitate to discuss together when you stuck on something.
+ During the BUG fix process, please don't hesitate to start a discussion when you are blocked.

## Add integrations

- All kinds of integration are welcomed here! Currently the following integrations are under work or on our schedule:
+ All kinds of integrations are welcome here! Currently the following integrations have been added but still need improvement:
+
+ 1. semantic-kernel
+ 2. kernel-memory
+ 3. BotSharp (maintained in the SciSharp/BotSharp repo)
+ 4. Langchain (maintained in the tryAGI/LangChain repo)

- 1. BotSharp
- 2. semantic-kernel
- 3. Unity
+ If you find another library that would be good to integrate, please open an issue to let us know!

- Besides, for some other integrations, like `ASP.NET core`, `SQL`, `Blazor` and so on, we'll appreciate it if you could help with that. If the time is limited for you, providing an example for it also means a lot!

## Add examples

LLama.KernelMemory/LLamaSharp.KernelMemory.csproj

Lines changed: 2 additions & 2 deletions
@@ -4,7 +4,7 @@
<TargetFrameworks>net6.0;net7.0;net8.0</TargetFrameworks>
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>
- <Version>0.8.0</Version>
+ <Version>0.11.0</Version>
<Authors>Xbotter</Authors>
<Company>SciSharp STACK</Company>
<GeneratePackageOnBuild>true</GeneratePackageOnBuild>
@@ -17,7 +17,7 @@
The integration of LLamaSharp and Microsoft kernel-memory. It could make it easy to support document search for LLamaSharp model inference.
</Description>
<PackageReleaseNotes>
- Support integration with kernel-memory
+ v0.11.0 updated the kernel-memory package and Fixed System.ArgumentException: EmbeddingMode must be true.
</PackageReleaseNotes>
<PackageLicenseExpression>MIT</PackageLicenseExpression>
<PackageOutputPath>packages</PackageOutputPath>

LLama.SemanticKernel/LLamaSharp.SemanticKernel.csproj

Lines changed: 2 additions & 2 deletions
@@ -10,7 +10,7 @@
<ImplicitUsings>enable</ImplicitUsings>
<Nullable>enable</Nullable>

- <Version>0.8.0</Version>
+ <Version>0.11.0</Version>
<Authors>Tim Miller, Xbotter</Authors>
<Company>SciSharp STACK</Company>
<GeneratePackageOnBuild>true</GeneratePackageOnBuild>
@@ -23,7 +23,7 @@
The integration of LLamaSharp and Microsoft semantic-kernel.
</Description>
<PackageReleaseNotes>
- Support integration with semantic-kernel
+ v0.11.0 updates the semantic-kernel package.
</PackageReleaseNotes>
<PackageLicenseExpression>MIT</PackageLicenseExpression>
<PackageOutputPath>packages</PackageOutputPath>

LLama/LLamaSharp.csproj

Lines changed: 6 additions & 5 deletions
@@ -7,8 +7,8 @@
<Platforms>AnyCPU;x64;Arm64</Platforms>
<AllowUnsafeBlocks>True</AllowUnsafeBlocks>

- <Version>0.10.0</Version>
- <Authors>Yaohui Liu, Martin Evans, Haiping Chen</Authors>
+ <Version>0.11.0</Version>
+ <Authors>Rinne, Martin Evans, jlsantiago and all the other contributors in https://github.com/SciSharp/LLamaSharp/graphs/contributors.</Authors>
<Company>SciSharp STACK</Company>
<GeneratePackageOnBuild>true</GeneratePackageOnBuild>
<Copyright>MIT, SciSharp STACK $([System.DateTime]::UtcNow.ToString(yyyy))</Copyright>
@@ -17,11 +17,12 @@
<PackageIconUrl>https://avatars3.githubusercontent.com/u/44989469?s=200&amp;v=4</PackageIconUrl>
<PackageTags>LLama, LLM, GPT, ChatGPT, NLP, AI, Chat Bot, SciSharp</PackageTags>
<Description>
- The .NET binding of LLama.cpp, making LLM inference and deployment easy and fast. For model
- weights to run, please go to https://github.com/SciSharp/LLamaSharp for more information.
+ LLamaSharp is a cross-platform library to run 🦙LLaMA/LLaVA model (and others) in your local device.
+ Based on [llama.cpp](https://github.com/ggerganov/llama.cpp), inference with LLamaSharp is efficient on both CPU and GPU.
+ With the higher-level APIs and RAG support, it's convenient to deploy LLM (Large Language Model) in your application with LLamaSharp.
</Description>
<PackageReleaseNotes>
- LLamaSharp 0.10.0 supports automatically device feature detection, adds integration with kernel-memory and fixes some performance issues.
+ LLamaSharp 0.11.0 added support for multi-modal (LLaVA), improved the BatchedExecutor and added state management of `ChatSession`.
</PackageReleaseNotes>
<PackageLicenseExpression>MIT</PackageLicenseExpression>
<PackageOutputPath>packages</PackageOutputPath>

README.md

Lines changed: 23 additions & 12 deletions
@@ -11,37 +11,39 @@
[![LLamaSharp Badge](https://img.shields.io/nuget/v/LLamaSharp.Backend.OpenCL?label=LLamaSharp.Backend.OpenCL)](https://www.nuget.org/packages/LLamaSharp.Backend.OpenCL)


- **LLamaSharp is a cross-platform library to run 🦙LLaMA/LLaVA model (and others) in local device. Based on [llama.cpp](https://github.com/ggerganov/llama.cpp), inference with LLamaSharp is efficient on both CPU and GPU. With the higher-level APIs and RAG support, it's convenient to deploy LLM (Large Language Model) in your application with LLamaSharp.**
+ **LLamaSharp is a cross-platform library to run 🦙LLaMA/LLaVA model (and others) in your local device. Based on [llama.cpp](https://github.com/ggerganov/llama.cpp), inference with LLamaSharp is efficient on both CPU and GPU. With the higher-level APIs and RAG support, it's convenient to deploy LLM (Large Language Model) in your application with LLamaSharp.**

**Please star the repo to show your support for this project!🤗**

---

<details>
<summary>Table of Contents</summary>
<ul>
<li><a href="#Documentation">Documentation</a></li>
<li><a href="#Console Demo">Console Demo</a></li>
- <li><a href="#Toolkits & Examples">Toolkits & Examples</a></li>
+ <li><a href="#Integrations & Examples">Integrations & Examples</a></li>
<li><a href="#Get started">Get started</a></li>
<li><a href="#FAQ">FAQ</a></li>
<li><a href="#Contributing">Contributing</a></li>
<li><a href="#Join the community">Join the community</a></li>
+ <li><a href="#Star history">Star history</a></li>
+ <li><a href="#Contributor wall of fame">Contributor wall of fame</a></li>
<li><a href="#Map of LLamaSharp and llama.cpp versions">Map of LLamaSharp and llama.cpp versions</a></li>
</ul>
</details>

- ## Documentation
+ ## 📖Documentation

- - [Quick start](https://scisharp.github.io/LLamaSharp/latest/GetStarted/)
- - [Tricks for FAQ](https://scisharp.github.io/LLamaSharp/latest/Tricks/)
+ - [Quick start](https://scisharp.github.io/LLamaSharp/latest/QuickStart/)
+ - [FAQ](https://scisharp.github.io/LLamaSharp/latest/FAQ/)
+ - [Tutorial](https://scisharp.github.io/LLamaSharp/latest/Tutorial/)
- [Full documentation](https://scisharp.github.io/LLamaSharp/latest/)
- [API reference](https://scisharp.github.io/LLamaSharp/latest/xmldocs/)


- ## Console Demo
+ ## 📌Console Demo

<table class="center">
<tr style="line-height: 0">
@@ -55,7 +57,7 @@
</table>


- ## Toolkits & Examples
+ ## 🔗Integrations & Examples

There are integrations for the following libraries, making it easier to develop your APP. Integrations for semantic-kernel and kernel-memory are developed in the LLamaSharp repository, while others are developed in their own repositories.

@@ -76,7 +78,7 @@ The following examples show how to build APPs with LLamaSharp.
![LLamaShrp-Integrations](./Assets/LLamaSharp-Integrations.png)


- ## Get started
+ ## 🚀Get started

### Installation

@@ -168,7 +170,7 @@ while (userInput != "exit")
For more examples, please refer to [LLamaSharp.Examples](./LLama.Examples).


- ## FAQ
+ ## 💡FAQ

#### Why is the GPU not used when I have installed CUDA

@@ -197,9 +199,9 @@ Generally, there are two possible cases for this problem:
Please set anti-prompt or max-length when executing the inference.
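As a sketch of what that looks like in code (illustrative only, not part of this commit; `executor` and `prompt` are assumed to come from the quick-start example, and the anti-prompt string is a placeholder):

```cs
using System;
using System.Collections.Generic;
using LLama.Common;

// Stop generation either when the anti-prompt appears or after a bounded number of tokens.
var inferenceParams = new InferenceParams
{
    AntiPrompts = new List<string> { "User:" }, // placeholder anti-prompt
    MaxTokens = 256                             // upper bound on the generated length
};

await foreach (var text in executor.InferAsync(prompt, inferenceParams))
{
    Console.Write(text);
}
```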


- ## Contributing
+ ## 🙌Contributing

- Any contribution is welcomed! There's a TODO list in [LLamaSharp Dev Project](https://github.com/orgs/SciSharp/projects/5) and you could pick an interesting one to start. Please read the [contributing guide](https://scisharp.github.io/LLamaSharp/latest/ContributingGuide/) for more information.
+ Any contribution is welcome! There's a TODO list in the [LLamaSharp Dev Project](https://github.com/orgs/SciSharp/projects/5) and you could pick an interesting task to start with. Please read the [contributing guide](./CONTRIBUTING.md) for more information.

You can also do one of the following to help us make LLamaSharp better:

@@ -215,6 +217,14 @@ Join our chat on [Discord](https://discord.gg/7wNVU65ZDY) (please contact Rinne

Join [QQ group](http://qm.qq.com/cgi-bin/qm/qr?_wv=1027&k=sN9VVMwbWjs5L0ATpizKKxOcZdEPMrp8&authKey=RLDw41bLTrEyEgZZi%2FzT4pYk%2BwmEFgFcrhs8ZbkiVY7a4JFckzJefaYNW6Lk4yPX&noverify=0&group_code=985366726)

+ ## Star history
+
+ [![Star History Chart](https://api.star-history.com/svg?repos=SciSharp/LLamaSharp)](https://star-history.com/#SciSharp/LLamaSharp&Date)
+
+ ## Contributor wall of fame
+
+ [![LLamaSharp Contributors](https://contrib.rocks/image?repo=SciSharp/LLamaSharp)](https://github.com/SciSharp/LLamaSharp/graphs/contributors)
+
## Map of LLamaSharp and llama.cpp versions
If you want to compile llama.cpp yourself you **must** use the exact commit ID listed for each version.

@@ -232,6 +242,7 @@ If you want to compile llama.cpp yourself you **must** use the exact commit ID listed for each version.
| v0.8.1 | | [`e937066`](https://github.com/ggerganov/llama.cpp/commit/e937066420b79a757bf80e9836eb12b88420a218) |
| v0.9.0, v0.9.1 | [Mixtral-8x7B](https://huggingface.co/TheBloke/Mixtral-8x7B-v0.1-GGUF) | [`9fb13f9`](https://github.com/ggerganov/llama.cpp/blob/9fb13f95840c722ad419f390dc8a9c86080a3700) |
| v0.10.0 | [Phi2](https://huggingface.co/TheBloke/phi-2-GGUF) | [`d71ac90`](https://github.com/ggerganov/llama.cpp/tree/d71ac90985854b0905e1abba778e407e17f9f887) |
+ | v0.11.0 | [LLaVA-v1.6](https://huggingface.co/ShadowBeast/llava-v1.6-mistral-7b-Q5_K_S-GGUF), [Phi2](https://huggingface.co/TheBloke/phi-2-GGUF) | [`3ab8b3a`](https://github.com/ggerganov/llama.cpp/tree/3ab8b3a92ede46df88bc5a2dfca3777de4a2b2b6) |

## License
