
[Bug]: TypeError instead of BadRequestError for missing/invalid parameters (HTTP 500 in proxy) #16993

@hula-la

Description


What happened?

When calling LiteLLM API functions with missing or invalid parameters, error handling is inconsistent in two cases:

  1. Core Library (Direct Usage): When a required parameter is omitted entirely (not passed at all), a TypeError is raised instead of a BadRequestError
  2. Proxy Server (Wrong Parameter Names): When an incorrect parameter name is used (e.g., messages instead of input for the responses API), the proxy returns HTTP 500 Internal Server Error instead of HTTP 400 Bad Request

Note: When the model parameter is simply missing (FastAPI supplies model=None), the proxy correctly returns HTTP 400 with a ProxyModelNotFoundError, so that case already works.
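
For contrast, the working case can be reproduced like this (a minimal sketch using the requests library, assuming a proxy running at http://localhost:4000 with no auth key required, as in the curl example further below):

import requests

# Omit 'model' entirely. FastAPI fills in model=None, and the proxy
# correctly rejects the request with HTTP 400 (ProxyModelNotFoundError).
resp = requests.post(
    "http://localhost:4000/chat/completions",
    json={"messages": [{"role": "user", "content": "test"}]},
)
print(resp.status_code)  # 400, as expected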

Expected Behavior

  • Missing or invalid parameters should raise BadRequestError with an HTTP 400 status code (see the sketch below)
  • The error message should clearly indicate what went wrong
  • Client-side parameter mistakes should never produce HTTP 500
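
For example, the core-library call from Case 1 would ideally behave like this (a sketch of the desired behavior, not the current one; BadRequestError is litellm's existing exception class, and the message shown is illustrative):

import litellm

try:
    litellm.completion(messages=[{"role": "user", "content": "test"}])
except litellm.BadRequestError as e:
    # Desired: a 400-mapped error naming the missing parameter,
    # e.g. "completion() missing required parameter: 'model'"
    print(e)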

Current Behavior

Case 1: Core Library - Completely omitted required parameter

import litellm

# Omitting 'model' parameter completely
try:
    litellm.completion(messages=[{"role": "user", "content": "test"}])
except TypeError as e:  # Should be BadRequestError
    print(f"TypeError: {e}")
    # Output: TypeError: completion() missing 1 required positional argument: 'model'

Case 2: Proxy - Wrong parameter name causes HTTP 500

# Using 'input' instead of 'messages' for chat/completions endpoint
curl -X POST http://localhost:4000/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-3.5-turbo",
    "input": [{"role": "user", "content": "test"}]
  }'

# Returns: HTTP 500 Internal Server Error
# Should return: HTTP 400 Bad Request
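
The same request expressed in Python (a sketch using the requests library; assumes the proxy above is reachable without an API key):

import requests

# 'input' is not a valid chat/completions field, so the proxy ends up
# calling Router.acompletion() without 'messages' and the resulting
# TypeError escapes as HTTP 500.
resp = requests.post(
    "http://localhost:4000/chat/completions",
    json={
        "model": "gpt-3.5-turbo",
        "input": [{"role": "user", "content": "test"}],
    },
)
print(resp.status_code)  # currently 500; should be 400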

Relevant log output

Core Library:

TypeError: completion() missing 1 required positional argument: 'model'

Proxy Server (Wrong parameter name):

litellm.proxy.proxy_server._handle_llm_api_exception(): Exception occured - Router.acompletion() missing 1 required positional argument: 'messages'
TypeError: Router.acompletion() missing 1 required positional argument: 'messages'

Are you a ML Ops Team?

No

What LiteLLM version are you on?

v1.80.5

Impact

  • API consumers cannot distinguish client errors from server errors
  • Violates HTTP status code conventions (4xx for client errors, 5xx for server errors)
  • Makes debugging harder for users who pass wrong parameter names
  • The core library surfaces a raw TypeError instead of a user-friendly BadRequestError

Solution

I have a fix ready that adds a TypeError-to-BadRequestError conversion in:

  1. The @client decorator in litellm/utils.py (for the core library)
  2. The route_request function in litellm/proxy/route_llm_request.py (for the proxy layer)

This ensures parameter-validation errors return HTTP 400 with clear error messages.
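
Roughly, the core-library half of the fix could look like the following (a minimal sketch, not the actual patch; the wrapper shape is illustrative, and the BadRequestError constructor arguments follow litellm's message/model/llm_provider exception signature):

import functools
import litellm

def client(original_function):
    @functools.wraps(original_function)
    def wrapper(*args, **kwargs):
        try:
            return original_function(*args, **kwargs)
        except TypeError as e:
            # Argument-binding failures ("missing 1 required positional
            # argument") are client mistakes, so map them to HTTP 400.
            raise litellm.BadRequestError(
                message=f"{original_function.__name__}(): {e}",
                model="",
                llm_provider="",
            ) from e
    return wrapper

A real implementation would want to distinguish signature errors from TypeErrors raised deeper inside the wrapped call (for example by first binding the arguments with inspect.signature), so genuine server-side bugs still surface as 500s.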
