Unlike Kafka or Kinesis, EventBridge has no concept of a persisted topic or stream that multiple consumers can read from. Instead, you configure rules and targets, so delivery is more of a conscious push than a pull. Arguably this adds a little coupling, in the sense that the producing team is aware of its consumers, but in many scenarios that may actually be preferred.
Continuing with the example from the previous post, you'd need to add a new rule with the following event pattern:
{
  "source": ["demo-event"]
}
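For reference, a rule with this pattern can also be created programmatically. Here's a minimal sketch using the AWS SDK for .NET (the AWSSDK.EventBridge package); the rule name `demo-event-rule` is just a placeholder:

```csharp
using Amazon.EventBridge;
using Amazon.EventBridge.Model;

var client = new AmazonEventBridgeClient();

// Create (or update) a rule on the custom bus that matches the pattern above.
await client.PutRuleAsync(new PutRuleRequest
{
    Name = "demo-event-rule",       // placeholder rule name
    EventBusName = "dev-event-bus",
    EventPattern = "{ \"source\": [\"demo-event\"] }"
});
```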
If you are wondering where that came from, here's how the event entries were constructed. The pattern above is basically saying: any time an event with that source is pushed to the bus, fire the rule.
PutEventsRequestEntry requestEntry1 = new PutEventsRequestEntry
{
    Source = "demo-event",
    EventBusName = "dev-event-bus",
    DetailType = "mock.client",
    Detail = payload1
};
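Once the entry is built, it still has to be pushed to the bus. A minimal sketch of that call, assuming the same AWSSDK.EventBridge client:

```csharp
using System.Collections.Generic;
using Amazon.EventBridge;
using Amazon.EventBridge.Model;

var client = new AmazonEventBridgeClient();

// Send the entry constructed above to the custom event bus.
var response = await client.PutEventsAsync(new PutEventsRequest
{
    Entries = new List<PutEventsRequestEntry> { requestEntry1 }
});

// A FailedEntryCount of 0 means the bus accepted every entry.
```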
You can now add a target to the rule. To test it out, the simplest approach is to set a CloudWatch log group as the target; you can then use the AWS console to confirm that your events are reaching the bus and that the rule is firing. Once that works, you can try a Lambda target by creating a very simple Lambda in C#.
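For completeness, attaching the log-group target to the rule can also be done through the SDK; here the rule name and the ARN are placeholders you'd swap for your own:

```csharp
using System.Collections.Generic;
using Amazon.EventBridge;
using Amazon.EventBridge.Model;

var client = new AmazonEventBridgeClient();

// Point the rule at a CloudWatch log group (placeholder ARN).
await client.PutTargetsAsync(new PutTargetsRequest
{
    Rule = "demo-event-rule",       // placeholder rule name
    EventBusName = "dev-event-bus",
    Targets = new List<Target>
    {
        new Target
        {
            Id = "cw-log-target",
            Arn = "arn:aws:logs:region:account-id:log-group:/aws/events/demo"
        }
    }
});
```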
public class Function
{
    public Task FunctionHandler(
        CloudWatchEvent<object> input,
        ILambdaContext context)
    {
        context.Logger.LogLine($"Lambda: {input.Source}");
        return Task.CompletedTask;
    }
}
The main thing to note is the signature of the FunctionHandler method: the incoming events are of type CloudWatchEvent&lt;object&gt;, and the single line of code just logs the event source to CloudWatch.