
Python: Add Google PaLM connector with chat completion and text embedding #2258


Merged
merged 73 commits into from
Aug 22, 2023
Commits
097c8a8
g palm class and example in progress
am831 Jul 14, 2023
f35e643
Merge branch 'palm' of https://github.com/am831/semantic-kernel into …
am831 Jul 14, 2023
b12a25b
palm class object gets API response
am831 Jul 15, 2023
27ad03f
Merge branch 'main' of https://github.com/microsoft/semantic-kernel i…
am831 Jul 17, 2023
393ce3b
finished example file
am831 Jul 17, 2023
52135dd
Merge branch 'microsoft:main' into palm
am831 Jul 17, 2023
7d0e9df
unit test
am831 Jul 18, 2023
b0cb26b
Merge branch 'palm' of https://github.com/am831/semantic-kernel into …
am831 Jul 18, 2023
749ad0e
integration and unit tests in progress
am831 Jul 18, 2023
0bd165f
Merge branch 'main' of https://github.com/microsoft/semantic-kernel i…
am831 Jul 18, 2023
cef0f28
finished unit and integration tests
am831 Jul 18, 2023
30dcee9
added complete_stream_async and all tests pass
am831 Jul 19, 2023
c6823f1
Merge branch 'main' of https://github.com/microsoft/semantic-kernel i…
am831 Jul 19, 2023
fabd791
Merge branch 'main' of https://github.com/microsoft/semantic-kernel i…
am831 Jul 19, 2023
b0de8b0
Merge branch 'microsoft:main' into palm
am831 Jul 20, 2023
876712a
Merge branch 'microsoft:main' into palm
am831 Jul 20, 2023
1fc8fa1
Merge branch 'microsoft:main' into palm
am831 Jul 20, 2023
9b5c5e3
Merge branch 'microsoft:main' into palm
am831 Jul 21, 2023
fadcc21
add delay and ouput_words to stream function
am831 Jul 21, 2023
1e7326a
Merge branch 'palm' of https://github.com/am831/semantic-kernel into …
am831 Jul 21, 2023
4809a05
fix gp class mistake
am831 Jul 21, 2023
7305d42
chat completion class
am831 Jul 21, 2023
e9627a2
logprobs exception not necessary
am831 Jul 21, 2023
b54ba2a
Merge branch 'palm' of https://github.com/am831/semantic-kernel into …
am831 Jul 21, 2023
0ead766
finished 2 example files
am831 Jul 26, 2023
dde8835
Merge branch 'microsoft:main' into palm2
am831 Jul 26, 2023
3b9fe83
unit tests
am831 Jul 27, 2023
de8b798
Merge branch 'microsoft:main' into palm2
am831 Jul 27, 2023
5e2c454
integration test
am831 Jul 27, 2023
3c7ba0f
text embedding
am831 Jul 28, 2023
9829103
embedding progress
am831 Jul 28, 2023
6e57f54
palm chat working with memories
am831 Jul 30, 2023
cfe09a1
Merge branch 'main' of https://github.com/microsoft/semantic-kernel i…
am831 Jul 30, 2023
b10736a
env example
am831 Jul 30, 2023
8854d84
finished chat and embedding
am831 Aug 1, 2023
c4ea2b4
Merge branch 'microsoft:main' into palm2
am831 Aug 1, 2023
dde2365
remove all text completion work from branch
am831 Aug 1, 2023
883f94a
Merge branch 'palm2' of https://github.com/am831/semantic-kernel into…
am831 Aug 1, 2023
86be2d1
fix function names
am831 Aug 1, 2023
b233094
remove whitespace
am831 Aug 1, 2023
0b72908
fix typo
am831 Aug 1, 2023
fe5e389
Merge branch 'microsoft:main' into palm2
am831 Aug 1, 2023
03a8360
Merge branch 'microsoft:main' into palm2
am831 Aug 2, 2023
fad92d0
remove import
am831 Aug 3, 2023
47c3354
Merge branch 'main' of https://github.com/microsoft/semantic-kernel i…
am831 Aug 4, 2023
f60159c
Merge branch 'microsoft:main' into palm2
am831 Aug 4, 2023
f16166a
Merge branch 'main' of https://github.com/microsoft/semantic-kernel i…
am831 Aug 7, 2023
5b467a5
poetry add grpcio-status==1.53.0
am831 Aug 7, 2023
191c57e
document streaming behavior
am831 Aug 7, 2023
7cc5aab
Merge branch 'microsoft:main' into palm2
am831 Aug 7, 2023
9c98d4f
Merge branch 'microsoft:main' into palm2
am831 Aug 8, 2023
75e042f
Merge branch 'microsoft:main' into palm2
am831 Aug 8, 2023
941243e
Merge branch 'main' into palm2
awharrison-28 Aug 8, 2023
2e989d4
group dependencies, skip integration
am831 Aug 8, 2023
95b7cc3
remove streaming
am831 Aug 9, 2023
0d159f2
Merge branch 'main' of https://github.com/microsoft/semantic-kernel i…
am831 Aug 10, 2023
5e3b0d8
Merge branch 'microsoft:main' into palm2
am831 Aug 14, 2023
5820ad9
Merge branch 'microsoft:main' into palm2
am831 Aug 15, 2023
72df1a1
Merge branch 'microsoft:main' into palm2
am831 Aug 16, 2023
e5283d3
Merge branch 'main' into palm2
awharrison-28 Aug 17, 2023
ff060fe
Merge branch 'main' into palm2
awharrison-28 Aug 17, 2023
c6235a0
merged with main, resolved conflicts, fixed spelling error in google …
awharrison-28 Aug 21, 2023
acde623
ran precommit checks, added skipifs if python <3.9 for google palm tests
awharrison-28 Aug 21, 2023
c72514e
Merge branch 'main' into palm2
awharrison-28 Aug 21, 2023
3778552
additional checks to make sure the tests run under the correct enviro…
awharrison-28 Aug 21, 2023
973221f
Merge branch 'palm2' of https://github.com/am831/semantic-kernel into…
awharrison-28 Aug 21, 2023
dd1fff3
added some missing files from bad merge
awharrison-28 Aug 21, 2023
59371fb
another precommit check run
awharrison-28 Aug 21, 2023
e6cef90
missed some logic for running on python version less than 3.9
awharrison-28 Aug 21, 2023
113d919
correctly formatted multiple skipifs
awharrison-28 Aug 21, 2023
1581a94
updated lock file
awharrison-28 Aug 21, 2023
6e8bc80
Merge branch 'main' into palm2
awharrison-28 Aug 21, 2023
de711db
Merge branch 'main' into palm2
awharrison-28 Aug 21, 2023
284 changes: 244 additions & 40 deletions python/poetry.lock

Large diffs are not rendered by default.

48 changes: 48 additions & 0 deletions python/samples/kernel-syntax-examples/google_palm_chat.py
@@ -0,0 +1,48 @@
# Copyright (c) Microsoft. All rights reserved.

import asyncio

import semantic_kernel as sk
import semantic_kernel.connectors.ai.google_palm as sk_gp
from semantic_kernel.connectors.ai.chat_request_settings import ChatRequestSettings


async def chat_request_example(api_key):
palm_chat_completion = sk_gp.GooglePalmChatCompletion(
"models/chat-bison-001", api_key
)
settings = ChatRequestSettings()
settings.temperature = 1

    chat_messages = []
user_mssg = "I'm planning a vacation. Which are some must-visit places in Europe?"
chat_messages.append(("user", user_mssg))
answer = await palm_chat_completion.complete_chat_async(chat_messages, settings)
chat_messages.append(("assistant", str(answer)))
user_mssg = "Where should I go in France?"
chat_messages.append(("user", user_mssg))
answer = await palm_chat_completion.complete_chat_async(chat_messages, settings)
chat_messages.append(("assistant", str(answer)))

context_vars = sk.ContextVariables()
context_vars["chat_history"] = ""
context_vars["chat_bot_ans"] = ""
for role, mssg in chat_messages:
if role == "user":
context_vars["chat_history"] += f"User:> {mssg}\n"
elif role == "assistant":
context_vars["chat_history"] += f"ChatBot:> {mssg}\n"
context_vars["chat_bot_ans"] += f"{mssg}\n"

return context_vars


async def main() -> None:
api_key = sk.google_palm_settings_from_dot_env()
chat = await chat_request_example(api_key)
print(chat["chat_history"])
return


if __name__ == "__main__":
asyncio.run(main())
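The history-formatting loop in `chat_request_example` above can be factored into a small pure helper, which makes it easy to unit test without touching the PaLM service. This helper is hypothetical (not part of this PR) and simply restates the loop's logic:

```python
# Hypothetical helper, not part of this PR: formats (role, message)
# tuples into the same transcript that chat_request_example builds
# in its context variables.
def format_chat_history(chat_messages):
    history = ""
    for role, mssg in chat_messages:
        if role == "user":
            history += f"User:> {mssg}\n"
        elif role == "assistant":
            history += f"ChatBot:> {mssg}\n"
    return history
```

Because it takes plain tuples and returns a string, it can be exercised with canned messages in a unit test, independent of any API key.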
@@ -0,0 +1,142 @@
# Copyright (c) Microsoft. All rights reserved.

import asyncio
from typing import Tuple

import semantic_kernel as sk
import semantic_kernel.connectors.ai.google_palm as sk_gp

kernel = sk.Kernel()
apikey = sk.google_palm_settings_from_dot_env()
palm_text_embed = sk_gp.GooglePalmTextEmbedding("models/embedding-gecko-001", apikey)
kernel.add_text_embedding_generation_service("gecko", palm_text_embed)
palm_chat_completion = sk_gp.GooglePalmChatCompletion("models/chat-bison-001", apikey)
kernel.add_chat_service("models/chat-bison-001", palm_chat_completion)
kernel.register_memory_store(memory_store=sk.memory.VolatileMemoryStore())
kernel.import_skill(sk.core_skills.TextMemorySkill())


async def populate_memory(kernel: sk.Kernel) -> None:
# Add some documents to the semantic memory
await kernel.memory.save_information_async(
"aboutMe", id="info1", text="My name is Andrea"
)
await kernel.memory.save_information_async(
"aboutMe", id="info2", text="I currently work as a tour guide"
)
await kernel.memory.save_information_async(
"aboutMe", id="info3", text="My favorite hobby is hiking"
)
await kernel.memory.save_information_async(
        "aboutMe", id="info4", text="I visited Iceland last year."
)
await kernel.memory.save_information_async(
"aboutMe", id="info5", text="My family is from New York"
)


async def search_memory_examples(kernel: sk.Kernel) -> None:
questions = [
"what's my name",
"what is my favorite hobby?",
"where's my family from?",
"where did I travel last year?",
"what do I do for work",
]

for question in questions:
print(f"Question: {question}")
result = await kernel.memory.search_async("aboutMe", question)
print(f"Answer: {result}\n")


async def setup_chat_with_memory(
kernel: sk.Kernel,
) -> Tuple[sk.SKFunctionBase, sk.SKContext]:
"""
When using Google PaLM to chat with memories, a chat prompt template is
essential; otherwise, the kernel will send text prompts to the Google PaLM
chat service. Unfortunately, when a text prompt includes memory, chat
history, and the user's current message, PaLM often struggles to comprehend
the user's message. To address this issue, the prompt containing memory is
incorporated into the chat prompt template as a system message.
Note that this is only an issue for the chat service; the text service
does not require a chat prompt template.
"""
sk_prompt = """
ChatBot can have a conversation with you about any topic.
It can give explicit instructions or say 'I don't know' if
it does not have an answer.

Information about me, from previous conversations:
- {{$fact1}} {{recall $fact1}}
- {{$fact2}} {{recall $fact2}}
- {{$fact3}} {{recall $fact3}}
- {{$fact4}} {{recall $fact4}}
- {{$fact5}} {{recall $fact5}}

""".strip()

prompt_config = sk.PromptTemplateConfig.from_completion_parameters(
max_tokens=2000, temperature=0.7, top_p=0.8
)
prompt_template = sk.ChatPromptTemplate( # Create the chat prompt template
"{{$user_input}}", kernel.prompt_template_engine, prompt_config
)
prompt_template.add_system_message(sk_prompt) # Add the memory as a system message
function_config = sk.SemanticFunctionConfig(prompt_config, prompt_template)
chat_func = kernel.register_semantic_function(
None, "ChatWithMemory", function_config
)

context = kernel.create_new_context()
context["fact1"] = "what is my name?"
context["fact2"] = "what is my favorite hobby?"
context["fact3"] = "where's my family from?"
context["fact4"] = "where did I travel last year?"
context["fact5"] = "what do I do for work?"

context[sk.core_skills.TextMemorySkill.COLLECTION_PARAM] = "aboutMe"
context[sk.core_skills.TextMemorySkill.RELEVANCE_PARAM] = 0.6

context["chat_history"] = ""

return chat_func, context


async def chat(
kernel: sk.Kernel, chat_func: sk.SKFunctionBase, context: sk.SKContext
) -> bool:
try:
user_input = input("User:> ")
context["user_input"] = user_input
except KeyboardInterrupt:
print("\n\nExiting chat...")
return False
except EOFError:
print("\n\nExiting chat...")
return False

if user_input == "exit":
print("\n\nExiting chat...")
return False

answer = await kernel.run_async(chat_func, input_vars=context.variables)
context["chat_history"] += f"\nUser:> {user_input}\nChatBot:> {answer}\n"

print(f"ChatBot:> {answer}")
return True


async def main() -> None:
await populate_memory(kernel)
await search_memory_examples(kernel)
chat_func, context = await setup_chat_with_memory(kernel)
print("Begin chatting (type 'exit' to exit):\n")
chatting = True
while chatting:
chatting = await chat(kernel, chat_func, context)


if __name__ == "__main__":
asyncio.run(main())
@@ -0,0 +1,79 @@
# Copyright (c) Microsoft. All rights reserved.

import asyncio

import semantic_kernel as sk
import semantic_kernel.connectors.ai.google_palm as sk_gp

"""
System messages prime the assistant with different personalities or behaviors.
The system message is added to the prompt template, and a chat history can be
added as well to provide further context.
A system message can only be used once at the start of the conversation, and
conversation history persists with the instance of GooglePalmChatCompletion. To
overwrite the system message and start a new conversation, you must create a new
instance of GooglePalmChatCompletion.
Sometimes, PaLM struggles to use the information in the prompt template. In this
case, it is recommended to experiment with the messages in the prompt template
or ask different questions.
"""

system_message = """
You are a chat bot. Your name is Blackbeard
and you speak in the style of a swashbuckling
pirate. You reply with brief, to-the-point answers
with no elaboration. Your full name is Captain
Bartholomew "Blackbeard" Thorne.
"""

kernel = sk.Kernel()
api_key = sk.google_palm_settings_from_dot_env()
palm_chat_completion = sk_gp.GooglePalmChatCompletion("models/chat-bison-001", api_key)
kernel.add_chat_service("models/chat-bison-001", palm_chat_completion)
prompt_config = sk.PromptTemplateConfig.from_completion_parameters(
max_tokens=2000, temperature=0.7, top_p=0.8
)
prompt_template = sk.ChatPromptTemplate(
"{{$user_input}}", kernel.prompt_template_engine, prompt_config
)
prompt_template.add_system_message(system_message) # Add the system message for context
prompt_template.add_user_message(
"Hi there, my name is Andrea, who are you?"
) # Include a chat history
prompt_template.add_assistant_message("I am Blackbeard.")
function_config = sk.SemanticFunctionConfig(prompt_config, prompt_template)
chat_function = kernel.register_semantic_function(
"PirateSkill", "Chat", function_config
)


async def chat() -> bool:
context_vars = sk.ContextVariables()

try:
user_input = input("User:> ")
context_vars["user_input"] = user_input
except KeyboardInterrupt:
print("\n\nExiting chat...")
return False
except EOFError:
print("\n\nExiting chat...")
return False

if user_input == "exit":
print("\n\nExiting chat...")
return False

answer = await kernel.run_async(chat_function, input_vars=context_vars)
print(f"Blackbeard:> {answer}")
return True


async def main() -> None:
chatting = True
while chatting:
chatting = await chat()


if __name__ == "__main__":
asyncio.run(main())
12 changes: 11 additions & 1 deletion python/semantic_kernel/connectors/ai/google_palm/__init__.py
@@ -1,7 +1,17 @@
# Copyright (c) Microsoft. All rights reserved.

from semantic_kernel.connectors.ai.google_palm.services.gp_chat_completion import (
GooglePalmChatCompletion,
)
from semantic_kernel.connectors.ai.google_palm.services.gp_text_completion import (
GooglePalmTextCompletion,
)
from semantic_kernel.connectors.ai.google_palm.services.gp_text_embedding import (
GooglePalmTextEmbedding,
)

__all__ = ["GooglePalmTextCompletion"]
__all__ = [
"GooglePalmTextCompletion",
"GooglePalmChatCompletion",
"GooglePalmTextEmbedding",
]
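The `__init__.py` diff above exports the new `GooglePalmTextEmbedding` service, but the samples in this PR only use it indirectly through the kernel's memory store. A minimal standalone sketch of comparing two texts with it might look like the following; this is an assumption-laden illustration (the `generate_embeddings_async` call mirrors the interface used by the other embedding connectors, and `compare_texts` is a hypothetical helper, not part of the PR):

```python
def cosine_similarity(a, b):
    """Cosine similarity in plain Python, so the comparison logic is
    testable without calling the PaLM API."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(y * y for y in b) ** 0.5
    return dot / (norm_a * norm_b)


async def compare_texts(text1: str, text2: str) -> float:
    # Imported lazily: requires the semantic-kernel package, a PaLM
    # API key in .env, and a network call to the embedding service.
    import semantic_kernel as sk
    import semantic_kernel.connectors.ai.google_palm as sk_gp

    api_key = sk.google_palm_settings_from_dot_env()
    embedder = sk_gp.GooglePalmTextEmbedding(
        "models/embedding-gecko-001", api_key
    )
    vectors = await embedder.generate_embeddings_async([text1, text2])
    return cosine_similarity(list(vectors[0]), list(vectors[1]))
```

Semantically related texts should score close to 1.0, unrelated ones closer to 0; the pure `cosine_similarity` part can be verified offline with hand-built vectors.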