

Performance is a measure of how responsively a system executes a task within a given time interval. In simple words, it describes how quickly a resource can be accessed by another resource or user.

It can be measured in terms of latency or throughput.
  • Latency is the time taken to respond to an event.
  • Throughput is the number of events that take place within a given amount of time.
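The two measures can be illustrated with a small, self-contained sketch. The `handle_request` function below is a placeholder standing in for a real service call; the timings depend on the machine running it.

```python
import time

def handle_request() -> None:
    """Simulated unit of work (stands in for a real service call)."""
    time.sleep(0.01)  # pretend processing takes ~10 ms

# Latency: time taken to respond to a single event.
start = time.perf_counter()
handle_request()
latency = time.perf_counter() - start
print(f"latency: {latency * 1000:.1f} ms")

# Throughput: number of events completed within a fixed time window.
window_s = 0.5
count = 0
deadline = time.perf_counter() + window_s
while time.perf_counter() < deadline:
    handle_request()
    count += 1
print(f"throughput: {count / window_s:.0f} requests/second")
```

Note that the two are related but distinct: a system can have low per-request latency yet poor throughput if it cannot handle requests concurrently, and vice versa.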

Common causes:

  • Unnecessary round-trips to the server.
  • Poorly defined microservice boundaries, resulting in chattiness.
  • Little or no asynchronous communication.
  • No caching.
  • No load balancing.
  • Unnecessary data sent over the network.
  • Database or software design issues.
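The first two causes, unnecessary round-trips and chattiness, often show up together: a consumer fetches N items with N separate calls to a fine-grained endpoint instead of one call to a batch endpoint. A minimal sketch, using an in-memory dictionary and a call counter to stand in for a real network service (all names here are illustrative):

```python
calls = {"count": 0}  # stands in for network round-trips

PRICES = {1: 9.99, 2: 4.50, 3: 12.00}

def fetch_price(item_id: int) -> float:
    """Simulated single-item endpoint: one round-trip per call."""
    calls["count"] += 1
    return PRICES[item_id]

def fetch_prices(item_ids: list[int]) -> dict[int, float]:
    """Simulated batch endpoint: one round-trip for any number of items."""
    calls["count"] += 1
    return {i: PRICES[i] for i in item_ids}

# Chatty client: N round-trips for N items.
calls["count"] = 0
chatty = [fetch_price(i) for i in (1, 2, 3)]
chatty_trips = calls["count"]   # 3 round-trips

# Batched client: a single round-trip for the same data.
calls["count"] = 0
batched = fetch_prices([1, 2, 3])
batched_trips = calls["count"]  # 1 round-trip
```

With real network latency of even a few milliseconds per call, the chatty version's cost grows linearly with the number of items, which is why boundary design directly affects performance.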

Points to be considered:

  • Reduce unnecessary round-trips to the server.
  • Define an efficient and appropriate caching strategy.
  • Use efficient queries to minimize performance impact, and avoid fetching all of the data when only a portion is displayed.
  • Reduce the number of transitions across boundaries, and minimize the amount of data sent over the network.
  • Prefer asynchronous communication over synchronous communication.
  • Monitor performance.
  • Scale up or scale out.
  • Use event-based communication.
  • Common Patterns:
    • Cache-Aside Pattern
    • Choreography over Orchestration
    • CQRS
    • Event Sourcing
    • Materialized View
    • Priority Queue
    • Queue-Based Load Leveling
    • Sharding
    • Static Content Hosting
    • Throttling
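Of the patterns above, Cache-Aside is often the simplest to adopt. The idea: check the cache first; on a miss, load from the backing store and populate the cache for subsequent reads. A minimal sketch, assuming an in-memory dictionary as the cache and a fake database loader (in production this would typically be Redis or Memcached plus a real data store):

```python
import time

cache: dict[str, tuple[float, str]] = {}  # key -> (expiry time, value)
TTL_S = 60.0                              # illustrative time-to-live
db_reads = 0                              # counts backing-store hits

def load_from_db(key: str) -> str:
    """Stands in for an expensive database or remote-service read."""
    global db_reads
    db_reads += 1
    return f"value-for-{key}"

def get(key: str) -> str:
    """Cache-aside read: try the cache, fall back to the store on a miss."""
    entry = cache.get(key)
    if entry is not None and entry[0] > time.monotonic():
        return entry[1]                   # cache hit
    value = load_from_db(key)             # cache miss: load from the store
    cache[key] = (time.monotonic() + TTL_S, value)
    return value

first = get("user:42")    # miss: reads the database once
second = get("user:42")   # hit: served from the cache
```

The TTL bounds staleness; writes would also need to invalidate or update the cached entry, which is where most of the real-world complexity of this pattern lives.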