Asynchronous integrations can now be platform-independent, just like REST APIs

If you ask a developer for an opinion about asynchronous communication, you will probably hear that it is more complex than synchronous communication. Of course, there are valid reasons to use it anyway. One of them is that indirect communication via a message broker decreases coupling: the senders do not need to know the receiver(s) of the messages, so the receiver(s) can be replaced without impacting the senders. That makes the evolution of interconnected systems much more flexible.

However, that also means adding a dependency on a central component: the message broker. In the world of "simple" synchronous REST APIs, the communicating parties just need to comply with the standard protocol (HTTP), and clients and servers can talk to each other without caring whether their counterpart is based on Java, .NET, TypeScript... and with or without any particular framework or infrastructure product. Asynchronous APIs, on the other hand, typically make the components very aware of, and dependent on, the fact that they communicate via Kafka, JMS, WebSocket, Amazon SQS or something else.

But do we have to sacrifice platform independence in order to get the flexibility of asynchronous communication? Let's look at some modern approaches that let us avoid that trade-off.

Specification languages

Synchronous integrations have a history of well-accepted standards. The SOAP Web Services are considered old-fashioned today, but their WSDL already set the bar high in terms of rigorous and machine-readable interface specification. (Actually, WSDL does not even depend on HTTP or just synchronous communication, but its fate is already sealed due to its association with the hated world of XML standards.) Swagger, which later became OpenAPI, is a solid standard for REST interfaces.

However, for asynchronous integrations the standardisation road has been bumpier so far. Let's take Kafka as an important representative of the async world. The decoupling of applications (microservices) can become a source of total mess, as the message formats in a Kafka topic that are expected and supported by the various communicating parties can diverge very easily. The Kafka ecosystem has its own tool to fight the mess: Avro schemas and a schema registry that can automatically check the compatibility of the schema versions in use. The downside is that hardly anybody uses Avro schemas for anything other than Kafka.
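For illustration, an Avro schema is a JSON document that producers and consumers register with the schema registry. A minimal sketch of such a schema (the record and field names here are hypothetical, not taken from any real system):

```json
{
  "type": "record",
  "name": "OrderEvent",
  "namespace": "com.example.orders",
  "fields": [
    { "name": "orderId", "type": "string" },
    { "name": "amount", "type": "double" },
    { "name": "note", "type": ["null", "string"], "default": null }
  ]
}
```

The union type with a `null` default on the last field is the typical way to add an optional field while staying backward-compatible, which is exactly what the registry's compatibility checks enforce.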

Inspired by OpenAPI, we got AsyncAPI. Its recently released version 3 may be a good opportunity to start using a platform-independent standard. The creators of the standard used the feedback from the users to make the new major version more suitable for different possible usages of asynchronous APIs.

Some of the changes compared to AsyncAPI v2:

- Operations are now first-class, reusable objects, decoupled from channels.
- The ambiguous publish/subscribe keywords were replaced by send and receive, defined from the perspective of the application itself.
- The request/reply pattern is supported natively.

You can learn more in this podcast interview with the AsyncAPI founder Fran Méndez.
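To give a feel for the format, here is a minimal sketch of an AsyncAPI v3 document describing a service that receives plain-text messages (the titles, channel and operation names are made up for the example):

```yaml
asyncapi: 3.0.0
info:
  title: Uppercase Service
  version: 1.0.0
channels:
  incoming:
    address: incoming-topic
    messages:
      textMessage:
        payload:
          type: string
operations:
  receiveText:
    action: receive
    channel:
      $ref: '#/channels/incoming'
```

Note that the document describes the messages and channels without committing to Kafka, AMQP or any other transport; protocol-specific details can be layered on via separate server and binding definitions.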

Implementation abstractions

When implementing microservice applications or data streaming pipelines, you ideally want to concentrate on the business logic and data processing algorithms, not on the asynchronous communication platform that glues your services together.

If you use Java or Kotlin with Spring Boot, adding Spring Cloud Stream as an abstraction layer to support your asynchronous integration may be a good idea. It uses Spring Cloud Function to nicely decouple the core functionality represented by "functions" from the communication glue. Which messaging platform is used is just a matter of a "binding" defined in configuration files and does not affect the functional code.

import java.util.function.Function;

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.annotation.Bean;

@SpringBootApplication
public class SampleApplication {
    // A plain function bean: Spring Cloud Stream wires its input and
    // output to message destinations based on the bean name.
    @Bean
    public Function<String, String> uppercase() {
        return value -> value.toUpperCase();
    }
}

As you can see, the code is a pure function working with data objects, totally messaging-platform-agnostic.

The configuration is connected to the code by simple naming conventions.

spring:
  cloud:
    stream:
      kafka:
        binder:
          brokers: ...
          consumer-properties:
            ...
          producer-properties:
            ...
          replication-factor: 3
          required-acks: all
      bindings:
        uppercase-in-0:
          binder: kafka
          destination: incoming-topic
          group: kafka-consumer-group
        uppercase-out-0:
          destination: outgoing-topic

By changing the configuration you could switch the existing application from Kafka to RabbitMQ. Cute!
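A minimal sketch of what the switched bindings might look like, assuming the RabbitMQ binder dependency is on the classpath (the exchange and group names here are hypothetical):

```yaml
spring:
  cloud:
    stream:
      bindings:
        uppercase-in-0:
          binder: rabbit
          destination: incoming-exchange
          group: rabbit-consumer-group
        uppercase-out-0:
          binder: rabbit
          destination: outgoing-exchange
```

The function bean itself stays untouched; only the binder and destination configuration changes.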

Do you have anything to add? Comment on LinkedIn.
