### What happened?

When calling LiteLLM API functions with missing or invalid parameters, error handling is inconsistent in certain cases:

- **Core Library (Direct Usage):** When required parameters are completely omitted (not passed at all), `TypeError` is raised instead of `BadRequestError`.
- **Proxy Server (Wrong Parameter Names):** When using incorrect parameter names (e.g., `messages` instead of `input` for the responses API), the proxy returns HTTP 500 Internal Server Error instead of HTTP 400 Bad Request.

Note: When the `model` parameter is simply missing (FastAPI provides `model=None`), the proxy correctly returns HTTP 400 with `ProxyModelNotFoundError`. This case works fine.
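For contrast, the working missing-`model` case can be confirmed like this (a minimal sketch using `requests`; the endpoint and payload shape are taken from the reproduction below):

```python
import requests

# Omitting "model" entirely: per the note above, the proxy correctly
# returns HTTP 400 with ProxyModelNotFoundError
resp = requests.post(
    "http://localhost:4000/chat/completions",
    json={"messages": [{"role": "user", "content": "test"}]},
)
print(resp.status_code)  # expected: 400
```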
### Expected Behavior

- Missing or invalid parameters should raise `BadRequestError` with an HTTP 400 status code.
- Error messages should clearly indicate what went wrong.
- No HTTP 500 errors for client-side parameter mistakes.
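Concretely, the core-library case shown below under Current Behavior should behave like this (a sketch of the desired behavior, not the current output):

```python
import litellm

try:
    # 'model' omitted on purpose
    litellm.completion(messages=[{"role": "user", "content": "test"}])
except litellm.BadRequestError as e:
    # Desired: a clear client error instead of a raw TypeError
    print(f"BadRequestError: {e}")
```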
### Current Behavior

**Case 1: Core Library - Completely omitted required parameter**

```python
import litellm

# Omitting 'model' parameter completely
try:
    litellm.completion(messages=[{"role": "user", "content": "test"}])
except TypeError as e:  # Should be BadRequestError
    print(f"TypeError: {e}")
# Output: TypeError: completion() missing 1 required positional argument: 'model'
```

**Case 2: Proxy - Wrong parameter name causes HTTP 500**
```bash
# Using 'input' instead of 'messages' for the chat/completions endpoint
curl -X POST http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "input": [{"role": "user", "content": "test"}]
  }'
# Returns: HTTP 500 Internal Server Error
# Should return: HTTP 400 Bad Request
```

### Relevant log output
Core Library:

```
TypeError: completion() missing 1 required positional argument: 'model'
```

Proxy Server (wrong parameter name):

```
litellm.proxy.proxy_server._handle_llm_api_exception(): Exception occured - Router.acompletion() missing 1 required positional argument: 'messages'
TypeError: Router.acompletion() missing 1 required positional argument: 'messages'
```
### Are you a ML Ops Team?

No
### What LiteLLM version are you on?

v1.80.5
### Impact

- API consumers cannot distinguish between client errors and server errors.
- Violates HTTP status code conventions (4xx for client errors, 5xx for server errors).
- Makes debugging harder for users who pass wrong parameter names.
- The core library raises a raw `TypeError` instead of the user-friendly `BadRequestError`.
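To make the first point concrete, a typical client retry policy treats 5xx as transient and retries, while failing fast on 4xx (a hypothetical client sketch, not LiteLLM code):

```python
import time
import requests

def call_with_retries(url: str, payload: dict, attempts: int = 3) -> requests.Response:
    """Retry on 5xx (assumed transient); fail fast on 4xx (caller error)."""
    resp = None
    for attempt in range(attempts):
        resp = requests.post(url, json=payload)
        if resp.status_code < 500:
            return resp  # success or a caller bug: do not retry
        time.sleep(2 ** attempt)  # back off before retrying a server error
    return resp

# With the bug above, a typo like "input" instead of "messages" is reported
# as HTTP 500, so this client retries a request that can never succeed.
```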
### Solution

I have a fix ready that adds `TypeError` to `BadRequestError` conversion in:

- the `@client` decorator in `litellm/utils.py` (for the core library)
- the `route_request` function in `litellm/proxy/route_llm_request.py` (for the proxy layer)

This ensures parameter validation errors return HTTP 400 with clear error messages.
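For illustration, the conversion could look roughly like this (a minimal sketch; the wrapper name and the exact `BadRequestError` arguments are assumptions, not the actual patch):

```python
import functools
import litellm

def typeerror_to_bad_request(func):
    """Hypothetical decorator: surface call-signature mistakes as client errors."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        try:
            return func(*args, **kwargs)
        except TypeError as e:
            # Assumption: map missing/unexpected parameters to a 400-style error
            raise litellm.BadRequestError(
                message=f"Invalid or missing parameter: {e}",
                model=kwargs.get("model", ""),
                llm_provider="",
            ) from e
    return wrapper
```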