When an API endpoint takes too long to run, consumers can face timeouts or blocking scenarios. While consumers can work around this with background workers of their own, it’s far more convenient if the service itself supports asynchronous operation. A very common approach is to queue up the work on the service side and return a task-id to the consumer. The consumer then periodically polls the API with that task-id, and the response indicates whether the task has completed. Once it has, the consumer makes a final call to retrieve the completed result. In some implementations these two operations are combined: the same endpoint returns either the pending status or the completed result.
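The consumer side of this pattern can be sketched as a simple submit-then-poll loop. This is a minimal, self-contained illustration: `FakeTaskService` is a hypothetical in-memory stand-in for the real API (its method names and the `"pending"`/`"completed"` statuses are assumptions, not a real client library), and it completes after a few status checks so the loop terminates.

```python
import itertools

# Hypothetical in-memory stand-in for the remote service; it reports
# "completed" after the third status check so the example is deterministic.
class FakeTaskService:
    def __init__(self):
        self._polls = itertools.count()

    def submit(self, payload):
        # Returns a task-id immediately instead of blocking on the work.
        return "task-123"

    def status(self, task_id):
        # Reports "pending" until the work finishes.
        return "completed" if next(self._polls) >= 3 else "pending"

    def result(self, task_id):
        # Called only once status() reports completion.
        return {"task_id": task_id, "answer": 42}

def poll_until_done(service, payload, max_attempts=10):
    # Submit the work, then periodically poll with the returned task-id.
    task_id = service.submit(payload)
    for _ in range(max_attempts):
        if service.status(task_id) == "completed":
            return service.result(task_id)
        # A real consumer would sleep here, ideally with exponential backoff.
    raise TimeoutError(f"task {task_id} did not complete")

print(poll_until_done(FakeTaskService(), {"work": "resize-image"}))
```

Against a real HTTP API, `submit`, `status`, and `result` would each be a request to the service’s endpoints; the loop structure stays the same.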
On the service side, the implementation typically relies on a queue. The first call generates and persists a task-id, returns it, and enqueues the task on a queue such as AWS SQS or Azure Queue Storage. The queue can then trigger a worker, such as an AWS Lambda or Azure Function, which performs the work and marks the task as completed.
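The service side described above can be sketched in-process with Python’s standard library, as an assumption-laden miniature of the cloud setup: a dict stands in for the persisted task store, `queue.Queue` stands in for SQS or Azure Queue Storage, and a background thread plays the role of the queue-triggered Lambda or Function.

```python
import queue
import threading
import uuid

tasks = {}                  # task store; a real service would persist this durably
work_queue = queue.Queue()  # stand-in for AWS SQS / Azure Queue Storage

def submit(payload):
    # First call: persist a pending record, enqueue the work, return the task-id.
    task_id = str(uuid.uuid4())
    tasks[task_id] = {"status": "pending", "result": None}
    work_queue.put((task_id, payload))
    return task_id

def worker():
    # Stand-in for the queue-triggered AWS Lambda / Azure Function.
    while True:
        task_id, payload = work_queue.get()
        result = payload["x"] * 2  # placeholder for the actual long-running work
        tasks[task_id] = {"status": "completed", "result": result}
        work_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

tid = submit({"x": 21})
work_queue.join()  # wait for the worker; a real consumer would poll instead
print(tasks[tid])
```

The status-polling endpoint then reduces to a lookup of `tasks[task_id]`, and the result endpoint returns the stored result once the status is completed.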