hook

__all__ = ['VertexAiHook', 'GeminiApiHook'] (module attribute)
GeminiApiHook
Bases: LLMHook
Hook for interacting with the Google Gemini API.
This hook provides methods to generate content using various Gemini models, including plain text prompts and prompts with PDF file context. It handles communication with the Gemini API and extracts the relevant information from API responses.
Note
Requires the following environment variables to be set:
- GEMINI_API_KEY: The Gemini API key string.
__init__() -> None
Initializes the GeminiApiHook.
Note
Requires the following environment variables to be set:
- GEMINI_API_KEY: The Gemini API key string.
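A minimal sketch of the initialization contract described in the note above. The environment variable name comes from the documentation; the `load_gemini_api_key` helper is hypothetical, shown only to illustrate the fail-fast behavior a hook like this typically has:

```python
import os

def load_gemini_api_key() -> str:
    """Fail fast when GEMINI_API_KEY is missing, mirroring the hook's requirement."""
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise OSError("GEMINI_API_KEY environment variable is not set")
    return key

os.environ["GEMINI_API_KEY"] = "demo-key"  # for illustration only
print(load_gemini_api_key())  # demo-key
```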
extract_text_from_response(response_json: dict) -> str
Extracts text content from a Gemini API JSON response.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `response_json` | `dict` | The JSON response from the Gemini API as a dictionary. | *required* |
Returns:

| Type | Description |
|---|---|
| `str` | The extracted text content if found. Raises `ValueError` if extraction fails or the prompt was blocked. |
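For intuition, the extraction can be sketched against the Gemini response shape (candidates → content → parts). The standalone `extract_text` function below is an illustrative approximation under that assumption, not the hook's actual implementation:

```python
def extract_text(response_json: dict) -> str:
    """Pull the text parts out of a Gemini-style response dictionary."""
    try:
        parts = response_json["candidates"][0]["content"]["parts"]
        return "".join(part.get("text", "") for part in parts)
    except (KeyError, IndexError) as exc:
        # Blocked prompts carry promptFeedback instead of candidates.
        feedback = response_json.get("promptFeedback", {})
        raise ValueError(f"Could not extract text from response: {feedback}") from exc

sample = {"candidates": [{"content": {"parts": [{"text": "Hello, "}, {"text": "world"}]}}]}
print(extract_text(sample))  # Hello, world
```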
generate_completion(content, **kwargs)
Generates a text completion using the LLM.
This method should be implemented by subclasses to interact with a specific LLM API.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `content` | `str` | The prompt or content to generate a completion for. | *required* |
| `**kwargs` | | Additional keyword arguments for the LLM API. | `{}` |
Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | If the method is not implemented by a subclass. |
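The abstract-method contract described here can be sketched as follows; `EchoHook` is a hypothetical toy subclass used only to show the override pattern, not a real hook:

```python
class LLMHook:
    """Base class: subclasses must provide generate_completion."""
    def generate_completion(self, content, **kwargs):
        raise NotImplementedError("Subclasses must implement generate_completion")

class EchoHook(LLMHook):
    """Toy subclass that satisfies the contract without calling any API."""
    def generate_completion(self, content, **kwargs):
        return f"completion for: {content}"

print(EchoHook().generate_completion("hello"))  # completion for: hello
```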
generate_content(model: str, prompt: str = None, **kwargs: dict[str, Any]) -> dict
Generates content using the Gemini API via a POST request.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `str` | The name of the Gemini model to use. | *required* |
| `prompt` | `str` | Text prompt for generation. | `None` |
| `**kwargs` | `dict[str, Any]` | Additional parameters to include in the request payload, such as `systemInstruction` or `generationConfig`. | `{}` |
Examples:
>>> response = gemini_hook.generate_content(
... model="gemini-2.0-flash-lite",
... prompt="Summarize this article about climate change...",
... systemInstruction={"parts": [{"text": "You are an expert summarizer. Provide a concise summary."}]},
... generationConfig={"responseMimeType": "text/plain", "temperature": 0.2}
... )
Example with a custom `contents` structure:
>>> response = gemini_hook.generate_content(
... model="gemini-2.0-flash-lite",
... systemInstruction={
... "parts": [{"text": "You are an expert summarizer. Provide a concise summary in Portuguese."}]
... },
... contents=[{
... "role": "user",
... "parts": [
... {"text": "First part of the article..."},
... {"text": "Second part with more details..."}
... ]
... }],
... generationConfig={"responseMimeType": "text/plain", "temperature": 0.2}
... )
Returns:

| Type | Description |
|---|---|
| `dict` | The full JSON response from the Gemini API as a dictionary. |
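How the keyword arguments might merge with the prompt into a request payload can be sketched like this. The `build_payload` helper is hypothetical (the hook's internals are not shown in this reference); the `contents` shape follows the examples above:

```python
def build_payload(prompt=None, **kwargs):
    """Merge extra keyword arguments with a default contents entry built from prompt."""
    payload = dict(kwargs)
    # Only synthesize contents from the prompt when the caller did not pass their own.
    if prompt is not None and "contents" not in payload:
        payload["contents"] = [{"role": "user", "parts": [{"text": prompt}]}]
    return payload

payload = build_payload(
    prompt="Summarize this article...",
    generationConfig={"responseMimeType": "text/plain", "temperature": 0.2},
)
print(sorted(payload))  # ['contents', 'generationConfig']
```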
generate_content_with_pdf(model: str, prompt: str = None, pdf_files: list[str] = None, **kwargs: dict[str, Any]) -> dict
Generates content using the Gemini API with PDF context via a POST request.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model` | `str` | The name of the Gemini model to use. | *required* |
| `prompt` | `str` | The text prompt for generation. | `None` |
| `pdf_files` | `list[str]` | A list of base64 encoded strings, each representing a PDF file. | `None` |
| `**kwargs` | `dict[str, Any]` | Additional parameters to include in the request payload, such as `systemInstruction` or `generationConfig`. | `{}` |
Examples:
>>> response = gemini_hook.generate_content_with_pdf(
... model="gemini-2.5-pro",
... prompt="Summarize this article about climate change...",
... pdf_files=[base64_pdf1, base64_pdf2],
... systemInstruction={"parts": [{"text": "You are an expert summarizer. Provide a concise summary."}]},
... generationConfig={"responseMimeType": "text/plain", "temperature": 0.2}
... )
Example with a custom `contents` structure:
>>> response = gemini_hook.generate_content_with_pdf(
... model="gemini-2.5-pro",
... pdf_files=[base64_pdf1, base64_pdf2],
... systemInstruction={"parts": [{"text": "You are an expert summarizer. Provide a concise summary."}]},
... generationConfig={"responseMimeType": "text/plain", "temperature": 0.2}
... )
Returns:

| Type | Description |
|---|---|
| `dict` | The full JSON response from the Gemini API as a dictionary. |
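A plausible shape for the request body of the PDF variant: each base64-encoded file becomes an `inline_data` part with MIME type `application/pdf` alongside the text prompt. The `pdf_contents` helper below is an illustrative sketch of that structure, not the hook's actual code:

```python
import base64

def pdf_contents(prompt, pdf_files):
    """Build a Gemini-style contents list from a prompt and base64-encoded PDFs."""
    parts = [
        {"inline_data": {"mime_type": "application/pdf", "data": encoded}}
        for encoded in pdf_files
    ]
    parts.append({"text": prompt})
    return [{"role": "user", "parts": parts}]

fake_pdf = base64.b64encode(b"%PDF-1.4 minimal").decode()
contents = pdf_contents("Summarize the attached file.", [fake_pdf])
print(len(contents[0]["parts"]))  # 2
```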
historic_append(text, actor)
Appends text to the conversation history.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `text` | `str` | The text to append. | *required* |
| `actor` | `str` | The actor who produced the text (e.g., 'user', 'model'). | *required* |
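A minimal sketch of how such a history might accumulate turns. The `ConversationHistory` class is hypothetical; the role/parts shape mirrors the Gemini `contents` format used elsewhere on this page:

```python
class ConversationHistory:
    """Accumulate (actor, text) turns in a Gemini-style contents list."""
    def __init__(self):
        self.turns = []

    def historic_append(self, text, actor):
        self.turns.append({"role": actor, "parts": [{"text": text}]})

history = ConversationHistory()
history.historic_append("What is 2 + 2?", "user")
history.historic_append("4", "model")
print([turn["role"] for turn in history.turns])  # ['user', 'model']
```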
VertexAiHook
Bases: LLMHook
Hook for interacting with Vertex AI Generative Models.
__init__(model_name: str, **kwargs: dict[str, Any]) -> None
Initializes the VertexAiHook.
Note
Requires the following environment variables to be set:
- GCP_PROJECT: The Google Cloud project ID.
- GCP_REGION: The Google Cloud region.
These are needed to initialize the Vertex AI client with the correct context.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `model_name` | `str` | The name of the model to use. | *required* |
| `**kwargs` | `Any` | Additional arguments for model initialization. | `{}` |
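A sketch of the environment check described in the note above. The variable names `GCP_PROJECT` and `GCP_REGION` come from the documentation; the `vertex_init_context` helper is hypothetical and stands in for the context the Vertex AI client is initialized with:

```python
import os

def vertex_init_context() -> tuple:
    """Read the project/region context the Vertex AI client needs."""
    project = os.environ.get("GCP_PROJECT")
    region = os.environ.get("GCP_REGION")
    if not project or not region:
        raise OSError("GCP_PROJECT and GCP_REGION must both be set")
    return project, region

# For illustration only; real values come from your deployment environment.
os.environ.update({"GCP_PROJECT": "demo-project", "GCP_REGION": "us-central1"})
print(vertex_init_context())  # ('demo-project', 'us-central1')
```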
generate_completion(content, **kwargs)
Generates a text completion using the LLM.
This method should be implemented by subclasses to interact with a specific LLM API.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `content` | `str` | The prompt or content to generate a completion for. | *required* |
| `**kwargs` | | Additional keyword arguments for the LLM API. | `{}` |
|
Raises:

| Type | Description |
|---|---|
| `NotImplementedError` | If the method is not implemented by a subclass. |
generate_content(content: str, **kwargs: dict[str, Any]) -> Any
Generates content for the given input.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `content` | `str` | The input content to generate from. | *required* |
| `**kwargs` | `Any` | Additional arguments for the generation. | `{}` |
|
Returns:

| Type | Description |
|---|---|
| `Any` | The generated content. |
historic_append(text, actor)
Appends text to the conversation history.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `text` | `str` | The text to append. | *required* |
| `actor` | `str` | The actor who produced the text (e.g., 'user', 'model'). | *required* |