
I have some Temporal workflows running on a Kubernetes pod. One of the workflows starts an activity that makes a large number of BatchWriteItem calls to DynamoDB (close to 40k requests per activity execution). A snippet of the activity's BatchWriteItem logic is below:

        av, err := attributevalue.MarshalMap(msg)
        if err != nil {
            return err
        }

        writeRequests = append(writeRequests, types.WriteRequest{
            PutRequest: &types.PutRequest{
                Item: av,
            },
        })

        if len(writeRequests) == maxBatchSize {
            input := &dynamodb.BatchWriteItemInput{
                RequestItems: map[string][]types.WriteRequest{
                    repo.tableName: writeRequests,
                },
            }
            if _, err := repo.store.BatchWriteItem(ctx, input); err != nil {
                return err
            }
            // reset the batch before accumulating the next maxBatchSize items
            writeRequests = writeRequests[:0]
        }
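For reference, the accumulate-and-flush logic above amounts to chunking the items into groups of at most `maxBatchSize` (25 is the BatchWriteItem limit). A minimal stdlib-only sketch of that pattern, with the helper name `chunk` being my own:

```go
package main

import "fmt"

// chunk splits items into batches of at most size elements,
// mirroring how writeRequests is flushed every maxBatchSize items.
func chunk(items []int, size int) [][]int {
	var batches [][]int
	for len(items) > 0 {
		n := size
		if len(items) < n {
			n = len(items)
		}
		batches = append(batches, items[:n])
		items = items[n:]
	}
	return batches
}

func main() {
	items := make([]int, 60)
	batches := chunk(items, 25)
	fmt.Println(len(batches), len(batches[0]), len(batches[2])) // 3 25 10
}
```

With ~40k items per execution and 25 items per request, that is roughly 1,600 sequential BatchWriteItem calls, which is why the activity runs for several minutes.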

When executing a workflow, this activity runs for around 10 minutes without any issues, and then suddenly I see this error log in the pod:

2025/03/17 00:02:40 INFO Task processing failed with error Namespace <namespace> TaskQueue <queue-name> WorkerID 1@kubernetes-worker-pod-name@ WorkerType ActivityWorker Error context deadline exceeded

I set the Temporal activity's StartToClose timeout to 2 hours, so that is not the reason for the error. I checked the Temporal worker pod's CPU and memory usage and they are within limits. I also see no throttled write requests in the DynamoDB dashboard. I am not sure what else could be causing this issue. Can I get some pointers on the cause of this error?

Update:

I see this in the Temporal activity's call stack:

coroutine root [blocked on chan-2.Receive]:
go.temporal.io/sdk/internal.(*decodeFutureImpl).Get(0xc05ecce318, {0x18f01f8, 0xc0007f0300}, {0x13f11c0, 0xc05ecce180})
/go/src/app/vendor/go.temporal.io/sdk/internal/internal_workflow.go:1588 +0x3e
github.com/twilio-internal/comms-api-broadcast-internal-api/internal/temporal/workflows.IngressWorkflow({0x18f01f8?, 0xc0007f0240?}, {{0xc0001b8150, 0x22}, {0xc0001b8180, 0x2a}, {0xc0007a0060, 0x11}, {0xc00035c557, 0x5}})
/go/src/app/internal/temporal/workflows/ingress.go:44 +0x452
reflect.Value.call({0x1460c60?, 0x1746018?, 0x4158c5?}, {0x16d5a9a, 0x4}, {0xc0007f0270, 0x2, 0xc0007f0270?})
/usr/local/go/src/reflect/value.go:581 +0xca6
reflect.Value.Call({0x1460c60?, 0x1746018?, 0x7fefb9f2c208?}, {0xc0007f0270?, 0x46f49d?, 0x7ff000dfb878?})
/usr/local/go/src/reflect/value.go:365 +0xb9
go.temporal.io/sdk/internal.executeFunction({0x1460c60, 0x1746018}, {0xc0006b0480, 0x2, 0x1466620?})
/go/src/app/vendor/go.temporal.io/sdk/internal/internal_worker.go:1940 +0x26b
go.temporal.io/sdk/internal.(*workflowEnvironmentInterceptor).ExecuteWorkflow(0xc0006c8190, {0x18f01f8, 0xc0007f0210}, 0xc0006a24f8)
/go/src/app/vendor/go.temporal.io/sdk/internal/workflow.go:619 +0x150
go.temporal.io/sdk/interceptor.(*tracingWorkflowInboundInterceptor).ExecuteWorkflow(0xc000517860, {0x18f03f0, 0xc00050a600}, 0xc0006a24f8)
/go/src/app/vendor/go.temporal.io/sdk/interceptor/tracing_interceptor.go:449 +0x2ca
go.temporal.io/sdk/internal.(*workflowExecutor).Execute(0xc0007e2180, {0x18f03f0, 0xc00050a600}, 0xc000802440)
/go/src/app/vendor/go.temporal.io/sdk/internal/internal_worker.go:835 +0x28b
go.temporal.io/sdk/internal.(*syncWorkflowDefinition).Execute.func1({0x18f01f8, 0xc000517920})
/go/src/app/vendor/go.temporal.io/sdk/internal/internal_workflow.go:556 +0xc6

I'm not really sure what this means in my context.

  • Do you have any other timeouts on this activity? Like a Schedule-to-Close or a heartbeat timeout? Commented Jun 12 at 16:52
