LiteLLM can treat Orchid as an OpenAI-compatible endpoint, which is useful for teams that already route multiple LLM providers through LiteLLM.
```python
import litellm

response = litellm.completion(
    model="openai/orchid01",
    api_base="https://llm.orchid.ac/v1",
    api_key="orchid-your-key-here",
    messages=[{"role": "user", "content": "Analyse this filing..."}],
)
print(response.choices[0].message.content)
```
The `openai/` prefix tells LiteLLM to use the OpenAI-compatible path. The model name after the prefix (`orchid01`) is forwarded to the Orchid API.
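The split is easy to picture. This is an illustrative sketch, not LiteLLM's internal code: the string decomposes into a provider route plus the model name that gets forwarded upstream.

```python
# Illustrative only: how "openai/orchid01" decomposes.
model = "openai/orchid01"
provider, model_name = model.split("/", 1)
print(provider)    # openai   -> selects LiteLLM's OpenAI-compatible client
print(model_name)  # orchid01 -> sent as-is to the Orchid API
```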

LiteLLM Proxy config

If you’re running a LiteLLM proxy server, add Orchid as a model in your config:
```yaml
model_list:
  - model_name: orchid01
    litellm_params:
      model: openai/orchid01
      api_base: https://llm.orchid.ac/v1
      api_key: os.environ/ORCHID_API_KEY
```
Then call it via your LiteLLM proxy as usual with `model: orchid01`.
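Since the proxy speaks the OpenAI chat-completions protocol, the call needs nothing beyond the standard library. A minimal sketch, assuming the proxy runs locally on LiteLLM's default port 4000 and `sk-your-proxy-key` is a placeholder for one of your proxy's virtual keys:

```python
import json
import urllib.request

# Request body the proxy expects; "orchid01" is the model_name alias
# from the config above, not the upstream "openai/orchid01" string.
payload = {
    "model": "orchid01",
    "messages": [{"role": "user", "content": "Analyse this filing..."}],
}
req = urllib.request.Request(
    "http://localhost:4000/v1/chat/completions",  # assumed local proxy URL
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer sk-your-proxy-key",  # placeholder key
    },
)
# Uncomment with a running proxy:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client pointed at the proxy's base URL works the same way; the proxy resolves the `orchid01` alias and forwards the request to Orchid.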