
AWS EventBridge Pipes

03 Dec, 2022

My last session at re:Invent was about AWS EventBridge Pipes. I am flying back home this evening, so I still have some time to write a blog about this new feature that Werner Vogels announced.

Why

Coding service integrations between event producers and consumers can be quite hard for application development teams because of the complex details of the resources involved. The resulting code can contain bugs or be inefficient, it always makes the overall application more complex, and it increases the total cost of ownership of the application.

What

The new Pipes are managed point-to-point integrations between services. The source can be a queue or stream: SQS, Kinesis, DynamoDB Streams, Kafka or MQ. The range of targets is much wider and includes SNS, SQS, Step Functions, Lambda, Kinesis, API Gateway, EventBridge and SageMaker.
The configuration of a pipe lets you filter the events (non-matching events are dropped), optionally enrich them with other data by calling a Lambda function, and then send the result to the target of the pipe. The pipe will try to deliver the message to your target using the specified retry policy, concurrency and batching settings. If the message was not successfully delivered, you can route the event to a dead letter queue (DLQ) so that further processing of events is not blocked. If no DLQ is specified, the pipe will stop processing new events and you risk losing events based on the “event horizon” of your source resource. For SQS the DLQ is specified on the queue resource; for other sources you can specify one on the Pipe resource.
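To make this concrete, here is a minimal sketch of creating such a pipe with boto3. All names and ARNs are hypothetical placeholders, and the enrichment and batching settings are just examples:

    import boto3

    pipes = boto3.client("pipes")

    # Sketch only: every ARN and name below is a hypothetical placeholder.
    pipes.create_pipe(
        Name="orders-pipe",
        RoleArn="arn:aws:iam::123456789012:role/orders-pipe-role",
        # Source: an SQS queue, read in batches of up to 10 messages.
        Source="arn:aws:sqs:eu-west-1:123456789012:orders-queue",
        SourceParameters={
            "SqsQueueParameters": {
                "BatchSize": 10,
                "MaximumBatchingWindowInSeconds": 5,
            }
        },
        # Optional enrichment step: a Lambda function that transforms the events.
        Enrichment="arn:aws:lambda:eu-west-1:123456789012:function:enrich-orders",
        # Target: a Lambda function that receives the (enriched) events.
        Target="arn:aws:lambda:eu-west-1:123456789012:function:process-orders",
    )
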
The filtering feature is quite powerful and you do not have to pay for dropped events.
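A filter is expressed as an event pattern in the pipe's source parameters. As a sketch (the message field and value are made up), a filter that only lets through SQS messages whose JSON body has a certain type could look like this:

    import json

    # Hypothetical pattern: only messages whose JSON body contains
    # {"type": "order_created"} are passed on; all other events are
    # dropped, and dropped events are not billed.
    filter_criteria = {
        "Filters": [
            {"Pattern": json.dumps({"body": {"type": ["order_created"]}})}
        ]
    }

    # Passed as part of SourceParameters when creating or updating the pipe:
    # SourceParameters={"SqsQueueParameters": {...}, "FilterCriteria": filter_criteria}
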
The integration is fully managed and there is no data plane that can be accessed.
Before Pipes, event source mappings were the preferred way of making these integrations, so the configuration of EventBridge Pipes closely matches what could be specified on those.
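To illustrate the resemblance, the same batch size, batching window and filter settings appear on a classic Lambda event source mapping (again a sketch with placeholder names):

    import boto3

    lambda_client = boto3.client("lambda")

    # The event source mapping equivalent of the pipe sketched above:
    # the BatchSize, MaximumBatchingWindowInSeconds and FilterCriteria
    # knobs carry over almost one-to-one.
    lambda_client.create_event_source_mapping(
        EventSourceArn="arn:aws:sqs:eu-west-1:123456789012:orders-queue",
        FunctionName="process-orders",
        BatchSize=10,
        MaximumBatchingWindowInSeconds=5,
        FilterCriteria={
            "Filters": [{"Pattern": '{"body": {"type": ["order_created"]}}'}]
        },
    )
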

What not

Pipes are not meant to be used in many-producers-to-many-consumers situations. For those cases you would target an EventBridge bus from the sources and use rules to get the messages to the targets.
For DynamoDB the maximum number of pollers is 2, so if you need more consumers processing the DynamoDB stream you should pipe the stream events to an EventBridge bus first and then use rules on that bus again.
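As a sketch of that fan-out pattern (again with hypothetical ARNs), a pipe can forward the DynamoDB stream events to an event bus; note that for stream sources the retry and dead letter settings are configured on the pipe itself:

    import boto3

    pipes = boto3.client("pipes")

    # Sketch only: route DynamoDB stream events onto an EventBridge bus,
    # so any number of rules on that bus can fan the events out to targets.
    pipes.create_pipe(
        Name="orders-table-to-bus",
        RoleArn="arn:aws:iam::123456789012:role/orders-table-pipe-role",
        Source="arn:aws:dynamodb:eu-west-1:123456789012:table/orders/stream/2022-12-03T00:00:00.000",
        SourceParameters={
            "DynamoDBStreamParameters": {
                "StartingPosition": "LATEST",
                "MaximumRetryAttempts": 3,
                # For stream sources the DLQ lives on the pipe, not the source.
                "DeadLetterConfig": {
                    "Arn": "arn:aws:sqs:eu-west-1:123456789012:orders-pipe-dlq"
                },
            }
        },
        Target="arn:aws:events:eu-west-1:123456789012:event-bus/orders-bus",
        TargetParameters={
            "EventBridgeEventBusParameters": {
                "DetailType": "OrderChanged",
                "Source": "orders-table",
            }
        },
    )
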
EventBridge can’t currently be used as a source for Pipes. I guess the implementation of rules is good enough there.

Current features

  • batching and concurrency can be specified
  • filtering events
  • enriching events
  • dead letter queue
  • retries

Future features

  • chain pipes
  • dead letter queues (DLQ)
  • cross-region or cross-account pipes
  • a VPC endpoint so traffic does not have to go through a NAT when the source is in a VPC
  • support for AppFlow
  • support for CDK, SAM and AWS Application Composer
  • improved logging

Conclusion

Anywhere you have currently implemented event source mappings, you should look at using the EventBridge Pipes feature if possible, to reduce the cost of ownership of your application. It is probably wise to at least wait for CloudFormation support before doing that (I could not find support for Pipes in the documentation).

Jacco Kulman
Jacco is a Cloud Consultant at Binx.io. As an experienced development team lead he coded for the banking, hospitality and media industries. He is a big fan of serverless architectures. In his free time he reads science fiction, contributes to open source projects and enjoys being a life-long learner.