Exception Mapping

LiteLLM maps the 4 most common exception types across all providers:

  • Rate Limit Errors
  • Context Window Errors
  • Invalid Request Errors
  • InvalidAuth Errors (incorrect key, etc.)

Base case - we return the original exception.

All our exceptions inherit from OpenAI's exception types, so any error handling you have for those should work out of the box with LiteLLM.
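
For example, an except clause written against the OpenAI SDK's exception classes will also catch errors raised by a LiteLLM call to a different provider. A minimal sketch, assuming the pre-1.0 OpenAI SDK where the exception classes live under openai.error:

import os
import openai
from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "bad-key"

try:
    # calling Anthropic, but handling the error with OpenAI's exception type
    completion(model="claude-instant-1", messages=[{"role": "user", "content": "Hi"}])
except openai.error.AuthenticationError as e:
    print("caught by an OpenAI-style handler:", e)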

For all 4 cases, the exception returned inherits from the original OpenAI Exception but contains 3 additional attributes:

  • status_code - the HTTP status code of the exception
  • message - the error message
  • llm_provider - the provider raising the exception

usage

import os
from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "bad-key"

try:
    # some code
    completion(model="claude-instant-1", messages=[{"role": "user", "content": "Hey, how's it going?"}])
except Exception as e:
    print(e.llm_provider)
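
You can also catch the mapped exception classes directly and read all 3 attributes. A sketch, assuming the mapped classes are exported from the top-level litellm module (e.g. litellm.AuthenticationError):

import os
import litellm
from litellm import completion

os.environ["ANTHROPIC_API_KEY"] = "bad-key"

try:
    completion(model="claude-instant-1", messages=[{"role": "user", "content": "Hey, how's it going?"}])
except litellm.AuthenticationError as e:
    print(e.status_code)   # HTTP status code of the exception, e.g. 401
    print(e.message)       # the error message
    print(e.llm_provider)  # the provider raising the exception, e.g. "anthropic"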

details

To see how it's implemented, check out the code.

Create an issue or make a PR if you want to improve the exception mapping.

Note: For OpenAI and Azure we return the original exception (since it's already of the OpenAI error type), but we add the llm_provider attribute to it. See code

custom mapping list

Base case - we return the original exception.

LLM Provider    Returned Status Code
Anthropic       400
Anthropic       401
Anthropic       429
OpenAI          400
Replicate       400
Replicate       401
Replicate       429
Replicate       500
Cohere          400
Cohere          401
Cohere          429
Huggingface     400
Huggingface     401
Huggingface     429
Openrouter      400
Openrouter      401
Openrouter      429
AI21            400
AI21            401
AI21            429
TogetherAI      400
TogetherAI      401
TogetherAI      429

For a deeper understanding of these exceptions, check out this implementation.

The ContextWindowExceededError is a subclass of InvalidRequestError. It was introduced to provide more granularity in exception handling. Please refer to this issue to learn more.
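
Because of that inheritance, put the ContextWindowExceededError handler before the InvalidRequestError handler, otherwise the parent class will catch it first. A sketch, assuming both classes are exported from the top-level litellm module on a release that still uses the InvalidRequestError name:

import litellm
from litellm import completion

# deliberately oversized prompt (a hypothetical way to trigger the error)
messages = [{"role": "user", "content": "A" * 1_000_000}]

try:
    completion(model="claude-instant-1", messages=messages)
except litellm.ContextWindowExceededError as e:
    # handle the specific case first, e.g. truncate the prompt and retry
    print("context window exceeded for", e.llm_provider)
except litellm.InvalidRequestError as e:
    # any other bad request falls through to the parent class
    print("invalid request:", e.message)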