LiteLLM¶
pip install beekeeper-llms-litellm
- class LiteLLM¶
A wrapper class for interacting with a LiteLLM-compatible large language model (LLM). For more information, see: https://docs.litellm.ai/.
- Parameters:
model (str) – The identifier of the LLM model to use (e.g., “gpt-4”, “llama-3”).
temperature (float, optional) – Sampling temperature to use. Must be between 0.0 and 1.0. Higher values result in more random outputs, while lower values make the output more deterministic. Default is 1.0.
max_tokens (int, optional) – The maximum number of tokens to generate in the completion.
api_key (str) – API key used for authenticating with the LLM provider.
additional_kwargs (Dict[str, Any], optional) – A dictionary of additional parameters passed to the LLM during completion. This allows customization of the request beyond the standard parameters.
callback_manager (ModelMonitor, optional) – The callback manager used for observability.
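To illustrate how `additional_kwargs` might interact with the standard parameters and per-call overrides, here is a minimal sketch. The helper name and the exact merge order are assumptions for illustration, not the library's actual implementation:

```python
def build_request_params(model, temperature=1.0, max_tokens=None,
                         additional_kwargs=None, **call_kwargs):
    """Hypothetical sketch: merge standard params, constructor-level
    extras, and per-call overrides into one request dict."""
    # Start from the standard parameters.
    params = {"model": model, "temperature": temperature}
    if max_tokens is not None:
        params["max_tokens"] = max_tokens
    # Layer on constructor-level extras (additional_kwargs)...
    params.update(additional_kwargs or {})
    # ...then let per-call keyword arguments take final precedence.
    params.update(call_kwargs)
    return params
```

Usage: `build_request_params("gpt-4", additional_kwargs={"top_p": 0.9})` yields a request dict containing both the standard parameters and the extra `top_p`.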
- chat_completion(*args, **kwargs)¶
Generates a chat completion from the LLM.
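The signature above is loose (`*args, **kwargs`), but chat-style APIs generally take an OpenAI-format list of role/content message dicts. A small helper for building one, shown purely for illustration (the helper is not part of this class):

```python
def build_messages(user_prompt, system_prompt=None):
    """Assemble an OpenAI-style chat message list from plain strings."""
    messages = []
    if system_prompt:
        # System messages set model behavior and come first.
        messages.append({"role": "system", "content": system_prompt})
    messages.append({"role": "user", "content": user_prompt})
    return messages
```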
- completion(prompt, **kwargs)¶
Generates a completion from the LLM using OpenAI’s standard completions endpoint (/completions).
- Parameters:
prompt (str) – The input prompt to generate a completion for.
**kwargs (Any) – Additional keyword arguments to customize the LLM completion request.
- text_completion(prompt, **kwargs)¶
Generates a completion from the LLM using OpenAI’s standard completions endpoint (/completions).
- Parameters:
prompt (str) – The input prompt to generate a completion for.
**kwargs (Any) – Additional keyword arguments to customize the LLM completion request.
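To tie the three methods together, here is a purely illustrative stub showing the expected call shapes: `chat_completion` takes a message list, while `completion` and `text_completion` take a plain prompt string routed to the /completions-style endpoint. This is a sketch of the documented interface, not the library's implementation, and the return shape is invented for the example:

```python
class StubLiteLLM:
    """Illustrative stand-in (NOT the real LiteLLM class) that mirrors
    the call shapes documented above and echoes its inputs back."""

    def __init__(self, model, temperature=1.0, max_tokens=None,
                 api_key=None, additional_kwargs=None):
        self.model = model
        self.temperature = temperature
        self.max_tokens = max_tokens
        self.api_key = api_key
        self.additional_kwargs = additional_kwargs or {}

    def chat_completion(self, messages, **kwargs):
        # Chat style: OpenAI-format role/content message dicts.
        return {"model": self.model, "echo": messages[-1]["content"]}

    def completion(self, prompt, **kwargs):
        # Legacy /completions style: a plain prompt string.
        return {"model": self.model, "echo": prompt}

    def text_completion(self, prompt, **kwargs):
        # Documented identically to completion(); delegate for the sketch.
        return self.completion(prompt, **kwargs)
```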