
Commit 81b022b

Merge pull request #1196 from omahs/patch-1
Fix typos
2 parents 0339b03 + 6fd2eec commit 81b022b

File tree

10 files changed: +14 −14 lines changed


LLama.Web/wwwroot/lib/jquery-validation/dist/additional-methods.js

Lines changed: 1 addition & 1 deletion
@@ -359,7 +359,7 @@ $.validator.addMethod( "creditcard", function( value, element ) {
 }, "Please enter a valid credit card number." );
 
 /* NOTICE: Modified version of Castle.Components.Validator.CreditCardValidator
-* Redistributed under the the Apache License 2.0 at http://www.apache.org/licenses/LICENSE-2.0
+* Redistributed under the Apache License 2.0 at http://www.apache.org/licenses/LICENSE-2.0
 * Valid Types: mastercard, visa, amex, dinersclub, enroute, discover, jcb, unknown, all (overrides all other settings)
 */
 $.validator.addMethod( "creditcardtypes", function( value, element, param ) {

LLama/ChatSession.cs

Lines changed: 1 addition & 1 deletion
@@ -637,7 +637,7 @@ public record SessionState
 public IHistoryTransform HistoryTransform { get; set; } = new LLamaTransforms.DefaultHistoryTransform();
 
 /// <summary>
-/// The the chat history messages for this session.
+/// The chat history messages for this session.
 /// </summary>
 public ChatHistory.Message[] History { get; set; } = [ ];
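Editorial aside: since `SessionState` is what gets persisted, here is a minimal sketch of saving and restoring a session. The path-based `SaveSession`/`LoadSession` helpers match recent LLamaSharp releases but should be treated as assumptions if your version differs; `session` and `executor` are assumed to be an existing `ChatSession` and its executor, and the directory name is a placeholder.

```csharp
// Persist the session (history, transforms, executor state) to a directory,
// then restore it into a fresh ChatSession built on the same executor.
// API names are assumptions based on recent LLamaSharp releases.
session.SaveSession("./saved-session");

var restored = new ChatSession(executor);
restored.LoadSession("./saved-session");
```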

LLama/Native/SafeLlamaModelHandle.cs

Lines changed: 1 addition & 1 deletion
@@ -702,7 +702,7 @@ public int Count
 }
 
 /// <summary>
-/// Get the the type of this vocabulary
+/// Get the type of this vocabulary
 /// </summary>
 public LLamaVocabType Type
 {

docs/Architecture.md

Lines changed: 1 addition & 1 deletion
@@ -6,7 +6,7 @@ The figure below shows the core framework structure of LLamaSharp.
 
 - **Native APIs**: LLamaSharp calls the exported C APIs to load and run the model. The APIs defined in LLamaSharp specially for calling C APIs are named `Native APIs`. We have made all the native APIs public under namespace `LLama.Native`. However, it's strongly recommended not to use them unless you know what you are doing.
 - **LLamaWeights**: The holder of the model weight.
-- **LLamaContext**: A context which directly interact with the native library and provide some basic APIs such as tokenization and embedding. It takes use of `LLamaWeights`.
+- **LLamaContext**: A context which directly interacts with the native library and provides some basic APIs such as tokenization and embedding. It takes use of `LLamaWeights`.
 - **LLamaExecutors**: Executors which define the way to run the LLama model. It provides text-to-text and image-to-text APIs to make it easy to use. Currently we provide four kinds of executors: `InteractiveExecutor`, `InstructExecutor`, `StatelessExecutor` and `BatchedExecutor`.
 - **ChatSession**: A wrapping for `InteractiveExecutor` and `LLamaContext`, which supports interactive tasks and saving/re-loading sessions. It also provides a flexible way to customize the text process by `IHistoryTransform`, `ITextTransform` and `ITextStreamTransform`.
 - **Integrations**: Integrations with other libraries to expand the application of LLamaSharp. For example, if you want to do RAG ([Retrieval Augmented Generation](https://en.wikipedia.org/wiki/Prompt_engineering#Retrieval-augmented_generation)), kernel-memory integration is a good option for you.
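Editorial aside: a minimal sketch (not part of this commit) of how those layers stack in code, assuming a local GGUF file whose path is a placeholder:

```csharp
using LLama;
using LLama.Common;

// LLamaWeights holds the model weights; a LLamaContext is derived from them.
var parameters = new ModelParams("./models/model.gguf") { ContextSize = 1024 };
using var weights = LLamaWeights.LoadFromFile(parameters);
using var context = weights.CreateContext(parameters);

// An executor defines how the model is run; ChatSession wraps the executor
// and adds chat history plus text transforms on top.
var executor = new InteractiveExecutor(context);
var session = new ChatSession(executor);
```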

docs/FAQ.md

Lines changed: 3 additions & 3 deletions
@@ -29,7 +29,7 @@ Generally, there are two possible cases for this problem:
 
 Please set anti-prompt or max-length when executing the inference.
 
-Anti-prompt can also be called as "Stop-keyword", which decides when to stop the response generation. Under interactive mode, the maximum tokens count is always not set, which makes the LLM generates responses infinitively. Therefore, setting anti-prompt correctly helps a lot to avoid the strange behaviours. For example, the prompt file `chat-with-bob.txt` has the following content:
+Anti-prompt can also be called as "Stop-keyword", which decides when to stop the response generation. Under interactive mode, the maximum tokens count is always not set, which makes the LLM generate responses infinitively. Therefore, setting anti-prompt correctly helps a lot to avoid the strange behaviours. For example, the prompt file `chat-with-bob.txt` has the following content:
 
 ```
 Transcript of a dialog, where the User interacts with an Assistant named Bob. Bob is helpful, kind, honest, good at writing, and never fails to answer the User's requests immediately and with precision.
@@ -43,7 +43,7 @@ User:
 
 Therefore, the anti-prompt should be set as "User:". If the last line of the prompt is removed, LLM will automatically generate a question (user) and a response (bob) for one time when running the chat session. Therefore, the antiprompt is suggested to be appended to the prompt when starting a chat session.
 
-What if an extra line is appended? The string "User:" in the prompt will be followed with a char "\n". Thus when running the model, the automatic generation of a pair of question and response may appear because the anti-prompt is "User:" but the last token is "User:\n". As for whether it will appear, it's an undefined behaviour, which depends on the implementation inside the `LLamaExecutor`. Anyway, since it may leads to unexpected behaviors, it's recommended to trim your prompt or carefully keep consistent with your anti-prompt.
+What if an extra line is appended? The string "User:" in the prompt will be followed with a char "\n". Thus when running the model, the automatic generation of a pair of question and response may appear because the anti-prompt is "User:" but the last token is "User:\n". As for whether it will appear, it's an undefined behaviour, which depends on the implementation inside the `LLamaExecutor`. Anyway, since it may lead to unexpected behaviors, it's recommended to trim your prompt or carefully keep consistent with your anti-prompt.
 
 ## How to run LLM with non-English languages
 
@@ -59,6 +59,6 @@ $$ len(prompt) + len(response) < len(context) $$
 
 In this inequality, `len(response)` refers to the expected tokens for LLM to generate.
 
-## Choose models weight depending on you task
+## Choose models weight depending on your task
 
 The differences between modes may lead to much different behaviours under the same task. For example, if you're building a chat bot with non-English, a fine-tuned model specially for the language you want to use will have huge effect on the performance.
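Editorial aside: the anti-prompt advice above translates to code roughly as follows. This is a hedged sketch, not part of the commit; `session` is assumed to be an existing `ChatSession`, and the prompt text is a placeholder. The `ChatAsync` overload shown matches recent LLamaSharp releases.

```csharp
using LLama.Common;

// Stop generating as soon as the model emits "User:", and cap the response
// length so interactive mode cannot run on indefinitely.
var inferenceParams = new InferenceParams
{
    AntiPrompts = new List<string> { "User:" },
    MaxTokens = 256
};

await foreach (var token in session.ChatAsync(
    new ChatHistory.Message(AuthorRole.User, "Hello, Bob."), inferenceParams))
{
    Console.Write(token);
}
```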

docs/QuickStart.md

Lines changed: 1 addition & 1 deletion
@@ -24,7 +24,7 @@ PM> Install-Package LLamaSharp
 
 ## Model preparation
 
-There are two popular format of model file of LLM now, which are PyTorch format (.pth) and Huggingface format (.bin). LLamaSharp uses `GGUF` format file, which could be converted from these two formats. To get `GGUF` file, there are two options:
+There are two popular formats of model file of LLM now, which are PyTorch format (.pth) and Huggingface format (.bin). LLamaSharp uses `GGUF` format file, which could be converted from these two formats. To get `GGUF` file, there are two options:
 
 1. Search model name + 'gguf' in [Huggingface](https://huggingface.co), you will find lots of model files that have already been converted to GGUF format. Please take care of the publishing time of them because some old ones could only work with old version of LLamaSharp.
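Editorial aside: once you have a `GGUF` file, loading and smoke-testing it looks roughly like this (a sketch, not part of this commit; the path and prompt are placeholders):

```csharp
using LLama;
using LLama.Common;

// Substitute the GGUF file you downloaded or converted.
var parameters = new ModelParams(@"./models/model.gguf");
using var weights = LLamaWeights.LoadFromFile(parameters);

// A StatelessExecutor is the quickest way to verify the file loads and runs.
var executor = new StatelessExecutor(weights, parameters);
await foreach (var text in executor.InferAsync("What is GGUF?"))
    Console.Write(text);
```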

docs/index.md

Lines changed: 2 additions & 2 deletions
@@ -17,7 +17,7 @@ If you are new to LLM, here're some tips for you to help you to get start with `
 
 ## Integrations
 
-There are integarions for the following libraries, which help to expand the application of LLamaSharp. Integrations for semantic-kernel and kernel-memory are developed in LLamaSharp repository, while others are developed in their own repositories.
+There are integrations for the following libraries, which help to expand the application of LLamaSharp. Integrations for semantic-kernel and kernel-memory are developed in LLamaSharp repository, while others are developed in their own repositories.
 
 - [semantic-kernel](https://github.com/microsoft/semantic-kernel): an SDK that integrates LLM like OpenAI, Azure OpenAI, and Hugging Face.
 - [kernel-memory](https://github.com/microsoft/kernel-memory): a multi-modal AI Service specialized in the efficient indexing of datasets through custom continuous data hybrid pipelines, with support for RAG ([Retrieval Augmented Generation](https://en.wikipedia.org/wiki/Prompt_engineering#Retrieval-augmented_generation)), synthetic memory, prompt engineering, and custom semantic memory processing.
@@ -32,7 +32,7 @@ There are integarions for the following libraries, which help to expand the appl
 Community effort is always one of the most important things in open-source projects. Any contribution in any way is welcomed here. For example, the following things mean a lot for LLamaSharp:
 
 1. Open an issue when you find something wrong.
-2. Open an PR if you've fixed something. Even if just correcting a typo, it also makes great sense.
+2. Open a PR if you've fixed something. Even if just correcting a typo, it also makes great sense.
 3. Help to optimize the documentation.
 4. Write an example or blog about how to integrate LLamaSharp with your APPs.
 5. Ask for a missing feature and discuss with us.

docs/xmldocs/llama.abstractions.metadataoverride.md

Lines changed: 1 addition & 1 deletion
@@ -15,7 +15,7 @@ Implements [IEquatable&lt;MetadataOverride&gt;](https://docs.microsoft.com/en-us
 
 ### **Key**
 
-Get the key being overriden by this override
+Get the key being overridden by this override
 
 ```csharp
 public string Key { get; }

docs/xmldocs/llama.native.nativeapi.md

Lines changed: 2 additions & 2 deletions
@@ -340,7 +340,7 @@ Number of threads
 Binary image in jpeg format
 
 `image_bytes_length` [Int32](https://docs.microsoft.com/en-us/dotnet/api/system.int32)<br>
-Bytes lenght of the image
+Bytes length of the image
 
 #### Returns
 
@@ -671,7 +671,7 @@ public static Span<float> llama_get_embeddings(SafeLLamaContextHandle ctx)
 
 Apply chat template. Inspired by hf apply_chat_template() on python.
 Both "model" and "custom_template" are optional, but at least one is required. "custom_template" has higher precedence than "model"
-NOTE: This function does not use a jinja parser. It only support a pre-defined list of template. See more: https://github.com/ggerganov/llama.cpp/wiki/Templates-supported-by-llama_chat_apply_template
+NOTE: This function does not use a jinja parser. It only supports a pre-defined list of template. See more: https://github.com/ggerganov/llama.cpp/wiki/Templates-supported-by-llama_chat_apply_template
 
 ```csharp
 public static int llama_chat_apply_template(SafeLlamaModelHandle model, Char* tmpl, LLamaChatMessage* chat, IntPtr n_msg, bool add_ass, Char* buf, int length)

docs/xmldocs/llama.sessionstate.md

Lines changed: 1 addition & 1 deletion
@@ -75,7 +75,7 @@ public IHistoryTransform HistoryTransform { get; set; }
 
 ### **History**
 
-The the chat history messages for this session.
+The chat history messages for this session.
 
 ```csharp
 public Message[] History { get; set; }
