What are some of the challenges that distributed systems introduce?
When you implement a microservices architecture, there are challenges that you need to deal with for every single microservice. Moreover, the interaction of services with each other creates challenges of its own. If you plan ahead to overcome some of them and standardize the solutions across all microservices, it also becomes easier for developers to maintain the services.
Some of the most challenging areas are testing, debugging, security, version management, communication (sync or async), and state maintenance. Cross-cutting concerns that should be standardized include monitoring, logging, performance improvement, deployment, and security.
On what basis should microservices be defined?
It should
be based on the following criteria.
- Business functionalities that change together in a bounded context
- The service should be testable independently.
- Changes can be made without affecting clients or dependent services.
- It should be small enough that it can be maintained by 2-5 developers.
- Reusability of a service
How to tackle service failures when there are dependent services?
In practice, it can happen that a particular service causes downtime while the other services keep functioning as intended. Under such conditions, that service and its dependent services are affected by the downtime.
To solve this issue, the microservices architecture pattern offers the concept of a circuit breaker. Any service calling a remote service can go through a proxy layer which acts like an electric circuit breaker. If the remote service is slow or down for 'n' attempts, the proxy layer should fail fast and keep checking the remote service for availability. The calling services should also handle the errors and provide retry logic. Once the remote service resumes, the calls start working again and the circuit is closed.
This way, all other functionalities work as expected; only the failing service and its dependent services are affected.
This is related to automation of cross-cutting concerns. We can standardize some of the concerns, like monitoring strategy, deployment strategy, review and commit strategy, branching and merging strategy, testing strategy, code structure strategies, etc.
For standards, we can follow the 12-factor application guidelines. If we follow them, we can achieve great productivity from day one. We can also containerize our application to utilize the latest DevOps practices like Dockerization. We can use Mesos, Marathon, or Kubernetes for orchestrating Docker images. Once we have Dockerized the source code, we can use a CI/CD pipeline to deploy our newly created codebase. Within that, we can add mechanisms to test the application and make sure we measure the required metrics in order to deploy the code.
We can use strategies like blue-green deployment or canary deployment to deploy our code, so that we understand the impact of code that might otherwise go live on all of the servers at the same time. We can do A/B testing and make sure that things are not broken when live. To reduce the burden on the IT team, we can use AWS or Google Cloud to deploy our solutions and keep them on autoscale to make sure that we have enough resources available to serve the traffic we are receiving.
This is a very interesting question. In a monolith, where an HTTP request waits for a response, the processing happens in memory, which ensures that the transaction across all modules works at its best and that everything is done according to expectation. But this becomes challenging in the case of microservices, because all services are running independently, their datastores can be independent, and their REST APIs can be deployed on different endpoints. Each service does its bit without knowing the context of the other microservices.
In this case, we can use the following measures to make sure we are able to trace errors easily.
- Services should log, and aggregators should push the logs to a centralized logging server. For example, use the ELK stack for analysis.
- A unique value per client request (correlation-id) should be logged in all the microservices so that errors can be traced on the central logging server.
- One should have good monitoring in place for each and every microservice in the ecosystem, which can record application metrics and health checks of the services, traffic patterns, and service failures.
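The correlation-id idea above can be sketched in plain Java. This is an illustrative sketch, not any particular framework's API: the id is reused if the incoming request already carries it, and copied into the headers of every outgoing call so all services log the same value.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

// Sketch of correlation-id propagation across services.
public class CorrelationId {
    public static final String HEADER = "X-Correlation-Id";

    // Reuse the incoming id if present, otherwise start a new trace.
    public static String resolve(Map<String, String> incomingHeaders) {
        return incomingHeaders.getOrDefault(HEADER, UUID.randomUUID().toString());
    }

    // Copy the id into the headers of an outgoing call to another service.
    public static Map<String, String> propagate(String correlationId) {
        Map<String, String> outgoing = new HashMap<>();
        outgoing.put(HEADER, correlationId);
        return outgoing;
    }

    public static void main(String[] args) {
        Map<String, String> incoming = new HashMap<>();
        incoming.put(HEADER, "req-42");
        String id = resolve(incoming);
        // The same id travels with the outgoing call and appears in every log line.
        System.out.println(id + " -> " + propagate(id).get(HEADER));
    }
}
```

In a real system a servlet filter or a tracing library (such as Spring Cloud Sleuth, discussed later) would do this transparently.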
Microservices
are often integrated using a simple protocol like REST over HTTP. Other
communication protocols can also be used for integration like AMQP, JMS, Kafka,
etc.
The communication protocols can be broadly divided into two categories: synchronous communication and asynchronous communication.
- Synchronous
Communication
RestTemplate, WebClient, or FeignClient can be used for synchronous communication between two microservices. Ideally, we should minimize the number of synchronous calls between microservices, because networks are brittle and they introduce latency. Ribbon, a client-side load balancer, can be used on top of RestTemplate for better utilization of resources. The Hystrix circuit breaker can be used to handle partial failures gracefully without a cascading effect on the entire ecosystem. Distributed commits should be avoided at any cost; instead, we should opt for eventual consistency using asynchronous communication.
- Asynchronous
Communication
In this type of communication, the client does not wait for a response; instead, it just sends the message to the message broker. An AMQP broker (like RabbitMQ) or Kafka can be used for asynchronous communication across microservices to achieve eventual consistency.
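The fire-and-forget behavior described above can be sketched in plain Java with an in-memory queue standing in for the broker. This is only an illustration of the control flow; a real system would use RabbitMQ or Kafka.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Sketch of asynchronous messaging: the producer hands the message to a
// "broker" (here an in-memory queue) and moves on without waiting.
public class AsyncMessaging {
    private final BlockingQueue<String> broker = new LinkedBlockingQueue<>();

    // Fire-and-forget: returns immediately after enqueueing.
    public void publish(String event) {
        broker.offer(event);
    }

    // The consumer picks events up later, achieving eventual consistency.
    // Returns null when no event is pending.
    public String consume() {
        return broker.poll();
    }

    public static void main(String[] args) {
        AsyncMessaging m = new AsyncMessaging();
        m.publish("order-created:1001"); // producer is not blocked here
        System.out.println(m.consume());
    }
}
```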
- Centralized sessions
All the microservices can use a central session store, and user authentication can be achieved through it. This approach works but has many drawbacks: the centralized session store must be protected, and services must connect to it securely. Because the application needs to manage the state of the user, this is called a stateful session.
- Token-based
authentication/authorization
In this approach, unlike the traditional way, information in the form of a token is held by the clients, and the token is passed along with each request. A server can check the token and verify its validity, such as its expiry. Once the token is validated, the identity of the user can be obtained from it. However, the token must be signed (and optionally encrypted) for security reasons. JWT (JSON Web Token) is the open standard for this and is widely used, mainly in stateless applications. Alternatively, you can use OAuth-based authentication mechanisms.
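To make the signed-token idea concrete, here is a deliberately simplified sketch that loosely mirrors the JWT layout (payload + "." + HMAC signature). It is illustrative only; a real application should use a proper JWT library rather than hand-rolling token handling.

```java
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

// Simplified token sketch: the server signs the claims with a secret key and
// later verifies the signature without any server-side session state.
public class SignedToken {
    private static String sign(String payload, String secret) {
        try {
            Mac mac = Mac.getInstance("HmacSHA256");
            mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
            byte[] sig = mac.doFinal(payload.getBytes(StandardCharsets.UTF_8));
            return Base64.getUrlEncoder().withoutPadding().encodeToString(sig);
        } catch (Exception e) {
            throw new IllegalStateException(e);
        }
    }

    // Token = base64url(payload) + "." + signature, loosely like a JWT.
    public static String issue(String payload, String secret) {
        String body = Base64.getUrlEncoder().withoutPadding()
                .encodeToString(payload.getBytes(StandardCharsets.UTF_8));
        return body + "." + sign(payload, secret);
    }

    // Verify by recomputing the signature from the payload.
    public static boolean verify(String token, String secret) {
        String[] parts = token.split("\\.");
        String payload = new String(Base64.getUrlDecoder().decode(parts[0]), StandardCharsets.UTF_8);
        return parts[1].equals(sign(payload, secret));
    }

    public static void main(String[] args) {
        String token = issue("user=alice", "server-secret");
        System.out.println(verify(token, "server-secret")); // valid signature
        System.out.println(verify(token, "wrong-secret"));  // forged/tampered
    }
}
```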
Ideally,
you should follow the following practices for logging.
- In a microservice architecture, each request should have a unique value (correlation-id), and this value should be passed to each and every microservice so the correlation-id can be logged across the services. Thus the requests can be traced.
- Logs generated by all the services should be aggregated in a single location so that searching becomes easier. Generally, people use the ELK stack for this, so that it becomes easy for support engineers to debug an issue.
To resolve these issues, we make use of Spring Cloud Sleuth and Zipkin.
- Spring Cloud Sleuth is used to generate and attach the trace ID and span ID to the logs so that these can then be used by tools like Zipkin and ELK for storage and analysis.
- Zipkin is a distributed tracing system. It helps gather the timing data needed to troubleshoot latency problems in service architectures. Its features include both the collection and lookup of this data.
- In a microservice architecture, there can be many different services written in different languages. So a developer might have to set up a few services along with their dependencies and platform requirements. This becomes difficult with a growing number of services in an ecosystem. However, it becomes very easy if these services run inside Docker containers.
- Running services inside containers also gives a similar setup across all the environments, i.e. development, staging, and production.
- Docker also helps in scaling, along with container orchestration.
- Docker helps to upgrade the underlying language version very easily, saving many man-hours.
- Docker helps to onboard engineers fast.
- Docker also helps to reduce the dependency on IT teams to set up and manage the different kinds of environments required.
This approach is not at all scalable, because we might have multiple environments and also geographically distributed deployments with different configurations.
Also, when an application and a cron application are part of the same codebase, additional care might be needed on production, as it might have repercussions for how the crons are architected.
To solve this, we can put all our configuration in a centralized config service, which can be queried by the application for all its configuration at runtime. Spring Cloud Config is one example of a service which provides this facility.
It also helps to secure the information, as the configuration might contain passwords or access to reports or database access controls. Only trusted parties should be allowed to access these details for security reasons.
In a modern microservice architecture, where each microservice runs in a separate container, deploying and managing these containers is challenging and can be error-prone.
Container orchestration solves this problem by managing the life cycle of a container and allowing us to automate container deployments.
It also helps in scaling the application: it can easily bring up a few more containers whenever there is a high load on the application, and once the load goes down, it can scale down as well by bringing down containers. This is helpful for adjusting cost based on requirements.
In some cases, it also takes care of internal networking between services, so that you need not make any extra effort to do so. It also helps us to replicate or deploy Docker images at runtime without worrying about the resources. If you need more resources, you can configure that in the orchestration service, and it will be available/deployed on production servers within minutes.
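As an illustration of how an orchestrator expresses this declaratively, below is a sketch of a Kubernetes HorizontalPodAutoscaler for a hypothetical orders-service deployment. The service name and thresholds are assumptions for the example, not taken from this document.

```yaml
# Illustrative only: scale orders-service between 2 and 10 replicas,
# targeting 70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: orders-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: orders-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```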
- A gateway can also authenticate requests by verifying the identity of a user, routing each and every request to an authentication service before routing it to the microservice with the authorization details in the token.
- Gateways are also responsible for load balancing requests.
- API Gateways are responsible for rate limiting certain types of requests to protect themselves from several kinds of attacks.
- API Gateways can whitelist or blacklist the source IP addresses or the domains which can initiate the call.
- API Gateways can also provide plugins to cache certain types of API responses to boost the performance of the application.
If there is any dependency between microservices, then the service holding the data should publish messages for any change in the data, which the other services can consume to update their local state.
If strong consistency is required, then microservices should not maintain local state and should instead pull the data whenever required from the source of truth by making an API call.
You start your account with some initial money. Then all of the credit and debit events happen, and the latest state is generated by replaying all of the events one by one. When there are too many events, the application can create a periodic snapshot of the events so that there isn't any need to replay all of them again and again.
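The replay-plus-snapshot idea above can be sketched in a few lines of plain Java. This is an illustrative sketch of event sourcing, not a production event store:

```java
import java.util.List;

// Event-sourcing sketch: the balance is not stored directly; it is derived by
// replaying credit/debit events on top of the latest snapshot, so older
// events never need to be replayed again.
public class EventSourcedAccount {
    // Positive amounts are credits, negative amounts are debits.
    public static long replay(long snapshotBalance, List<Long> eventsSinceSnapshot) {
        long balance = snapshotBalance;
        for (long event : eventsSinceSnapshot) {
            balance += event;
        }
        return balance;
    }

    public static void main(String[] args) {
        // Snapshot taken at balance 100; since then: credit 50, debit 30, credit 10.
        System.out.println(replay(100, List.of(50L, -30L, 10L))); // 130
    }
}
```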
Spring Cloud solves this problem by providing a few ready-made solutions for this challenge. There are mainly two options available for service discovery - Netflix Eureka Server and Consul. Let's discuss both briefly:
Netflix
Eureka Server
Eureka is
a REST (Representational State Transfer) based service that is primarily used
in the AWS cloud for locating services for the purpose of load balancing and
failover of middle-tier servers. The main features of Netflix Eureka are:
- It provides a service registry.
- Zone-aware service lookup is possible.
- The eureka-client (used by microservices) can cache the registry locally for faster lookup. The client also has a built-in load balancer that does basic round-robin load balancing.
Spring Cloud provides two dependencies - eureka-server and eureka-client. The eureka-server dependency is only required in the Eureka server's build.gradle:
build.gradle - Eureka Server
compile('org.springframework.cloud:spring-cloud-starter-netflix-eureka-server')
On the other hand, each microservice needs to include the eureka-client dependency to enable Eureka discovery:
build.gradle - Eureka Client (to be included in all microservices)
compile('org.springframework.cloud:spring-cloud-starter-netflix-eureka-client')
The Eureka server provides a basic dashboard for monitoring the various instances and their health in the service registry. The UI is written in FreeMarker and provided out of the box without any extra configuration.
Consul
Server
It is a REST-based tool for dynamic service registry. It can be used for registering a new service, locating a service, and checking the health of a service.
You have
the option to choose any one of the above in your spring cloud-based
distributed application.
- config-server
It is the config-server that can be deployed in each environment. It is the Java code without the configuration storage.
- config-dev
It is the git storage for your development configuration. All microservices in the development environment will fetch their config from this storage. This project has no Java code, and it is meant to be used with config-server.
- config-qa
Same as config-dev, but it is meant to be used only in the QA environment.
- config-prod
Same as config-dev, but meant for the production environment.
- So depending upon the environment, we will use config-server with either config-dev, config-qa, or config-prod.
- Eureka Server
The
central server (one per zone) that acts as a service registry. All
microservices register with this eureka server during app bootstrap.
- Eureka Client
Eureka also comes with a Java-based client component, the eureka-client, which makes interactions with the service much easier. The client also has a built-in load balancer that does basic round-robin load balancing. Each microservice in the distributed ecosystem must include this client to communicate and register with the eureka-server.
- Typical use case for
Eureka
There is usually one Eureka server cluster per region (US, Asia, Europe, Australia) which knows only about instances in its region. Services register with Eureka and then send heartbeats to renew their leases every 30 seconds. If a service cannot renew its lease a few times, it is taken out of the server registry in about 90 seconds. The registration information and the renewals are replicated to all the Eureka nodes in the cluster. Clients from any zone can look up the registry information (this happens every 30 seconds) to locate their services (which could be in any zone) and make remote calls.
Eureka clients are built to handle the failure of one or more Eureka servers. Since Eureka clients cache the registry information locally, they can operate reasonably well even when all of the Eureka servers go down.
- Brittle nature of the network itself
- The remote process is hung
- More traffic on the target microservice than it can handle
This can lead to cascading failures in the calling service due to threads being blocked in the hung remote calls. A circuit breaker is a piece of software that is used to solve this problem. The basic idea is very simple: wrap a potentially failing remote call in a circuit breaker object that will monitor for failures/timeouts. Once the failures reach a certain threshold, the circuit breaker trips, and all further calls to the circuit breaker return with an error, without the protected call being made at all. This mechanism can protect against the cascading effects of a single component failure in the system and provide the option to gracefully degrade functionality.
Typical
Circuit Breaker Implementation
Here a REST client calls the Recommendation Service, which further communicates with the Books Service using a circuit breaker call wrapper. As soon as the books-service API calls start to fail, the circuit breaker will trip (open) the circuit and will not make any further calls to the Books Service until the circuit is closed again.
If the
failure count exceeds a specified threshold within a specified time period, the
circuit trips into the Open State. In the Open State, calls always fail
immediately without even invoking the actual remote call. The following factors
are considered for tripping the circuit to Open State -
- An exception is thrown (HTTP 500 error, cannot connect)
- The call takes longer than the configured timeout (default 1 second)
- The internal thread pool (or semaphore, depending on configuration) used by Hystrix for the command execution rejects the execution due to an exhausted resource pool.
After a
predetermined period of time (by default 5 seconds), the circuit transitions
into a half-open state. In this state, calls are again attempted to the remote
dependency. Thereafter the successful calls transition the circuit breaker back
into the closed state, while the failed calls return the circuit breaker into
the open state.
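The Closed/Open/Half-Open state machine described above can be sketched in plain Java. This is a minimal illustration of the pattern, with time passed in explicitly for clarity; production code would use a library such as Hystrix or Resilience4j instead.

```java
// Minimal circuit breaker sketch: CLOSED -> OPEN after a failure threshold,
// OPEN -> HALF_OPEN after a cooldown, then HALF_OPEN -> CLOSED on success
// or back to OPEN on failure.
public class CircuitBreaker {
    public enum State { CLOSED, OPEN, HALF_OPEN }

    private final int failureThreshold;
    private final long openMillis; // how long to stay open before a trial call
    private int failures = 0;
    private long openedAt = 0;
    private State state = State.CLOSED;

    public CircuitBreaker(int failureThreshold, long openMillis) {
        this.failureThreshold = failureThreshold;
        this.openMillis = openMillis;
    }

    // Ask before making the remote call; fail fast while the circuit is open.
    public synchronized boolean allowRequest(long now) {
        if (state == State.OPEN && now - openedAt >= openMillis) {
            state = State.HALF_OPEN; // let one trial call through
        }
        return state != State.OPEN;
    }

    public synchronized void recordSuccess() {
        failures = 0;
        state = State.CLOSED;
    }

    public synchronized void recordFailure(long now) {
        failures++;
        if (state == State.HALF_OPEN || failures >= failureThreshold) {
            state = State.OPEN;
            openedAt = now;
        }
    }

    public synchronized State state() { return state; }

    public static void main(String[] args) {
        CircuitBreaker cb = new CircuitBreaker(2, 5000);
        cb.recordFailure(0);
        cb.recordFailure(1);                    // threshold reached -> OPEN
        System.out.println(cb.allowRequest(1000)); // fail fast while open
        System.out.println(cb.allowRequest(6001)); // cooldown over -> trial call
    }
}
```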
- A circuit breaker is a valuable place for monitoring; any change in the breaker state should be logged so as to enable deep monitoring of the microservices. This makes it easy to troubleshoot the root cause of a failure.
- It should be used in all places where degraded functionality is acceptable to the caller if the actual server is struggling/down.
Benefits:
- The circuit breaker can prevent a single service from failing the entire system by tripping off the circuit to the faulty microservice.
- The circuit breaker can help to offload requests from a struggling server by tripping the circuit, thereby giving it time to recover.
- It provides a fallback mechanism whereby stale data can be served if the real service is down.
Normally one endpoint is strangled at a time, slowly replacing all of them with the newer implementation. Zuul Proxy (API Gateway) is a useful tool for this because we can use it to handle all traffic from clients of the old endpoints, but redirect only selected requests to the new ones.
Let’s take
an example use-case:
/src/main/resources/application.yml
zuul:
  routes:
    first:
      path: /first/**
      url: http://first.example.com   # (1)
    legacy:
      path: /**
      url: http://legacy.example.com  # (2)
- Paths in /first/** have been extracted into a new service with the external URL http://first.example.com
- The legacy app is mapped to handle all requests that do not match any other pattern (/first/**).
This configuration is for the API Gateway (Zuul reverse proxy): we are slowly strangling selected endpoints (/first/**) out of the legacy app hosted at http://legacy.example.com into the newly created microservice with the external URL http://first.example.com.
- The circuit breaker does not even attempt calls once the failure threshold is reached; doing so reduces the number of network calls. Also, the threads that would be consumed in making faulty calls are freed up.
- The circuit breaker provides fallback method execution for gracefully degrading the behavior. A try/catch approach will not do this out of the box without additional boilerplate code.
- The circuit breaker can be configured to use a limited number of threads for a particular host/API; doing so brings all the benefits of the bulkhead design pattern.
So instead of wrapping service-to-service calls with try/catch clauses, we must use the circuit breaker pattern to make our system resilient to failures.
Different
mechanisms of versioning are:
- Add
version in the URL itself
- Add
version in API request header
The most common approach to versioning is URL versioning itself. A versioned URL looks like the following:
Versioned
URL
- https://<host>:<port>/api/v1/...
- https://<host>:<port>/api/v2/...
As an API developer, you must ensure that only backward-compatible changes are accommodated in a single version of the URL. Consumer-Driven Contract tests can help identify potential issues with API upgrades at an early stage.
Practically, we have three approaches -
- Database server per microservice - Each microservice has its own database server instance. This approach has the overhead of maintaining the database instance and its replication/backup, hence it's rarely used in practice.
- Schema per microservice - Each microservice owns a private database schema which is not accessible to other services. It's the most preferred approach for RDBMS databases (MySQL, Postgres, etc.).
- Private tables per microservice - Each microservice owns a set of tables that must only be accessed by that service. It's a logical separation of data. This approach is mostly used for hosted database-as-a-service solutions (Amazon RDS).
- Partition correctly
Get to know the domain of your business; that's very important. Only then will you be able to define the bounded contexts and partition your microservices correctly based on business capabilities.
- DevOps culture
Typically, everything from continuous integration all the way to continuous delivery and deployment should be automated. Otherwise, managing a large fleet of microservices becomes a big pain.
- Design for stateless
operations
We never know where a new instance of a particular microservice will be spun up for scaling out or for handling failure, so maintaining state inside a service instance is a very bad idea.
- Design for failures
Failures are inevitable in distributed systems, so we must design our system to handle failures gracefully. Failures can be of different types and must be dealt with accordingly, for example -
- Failure
could be transient due to inherent brittle nature of the network, and the
next retry may succeed. Such failures must be protected using retry
operations.
- Failure
may be due to a hung service which can have cascading effects on the
calling service. Such failures must be protected using Circuit Breaker
Patterns. A fallback mechanism can be used to provide degraded
functionality in this case.
- A
single component may fail and affect the health of the entire system,
bulkhead pattern must be used to prevent the entire system from
failing.
- Design for versioning
We should try to make our services backward compatible, and explicit versioning must be used to cater to different versions of the REST endpoints.
- Design for asynchronous communication between services
Asynchronous
communication should be preferred over synchronous communication in inter
microservice communication. One of the biggest advantages of using asynchronous
messaging is that the service does not block while waiting for a response from
another service.
- Design for eventual
consistency
Eventual
consistency is a consistency model used in distributed computing to achieve
high availability that informally guarantees that, if no new updates are made
to a given data item, eventually all accesses to that item will return the last
updated value.
- Design for idempotent
operations
Since networks are brittle, we should always design our services to accept repeated calls without any side effects. We can add a unique identifier to each request so that the service can ignore duplicate requests sent over the network due to network failure/retry logic.
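The unique-identifier technique can be sketched in plain Java. This is an illustration only, with an in-memory map standing in for whatever store the service uses to remember processed request ids:

```java
import java.util.HashMap;
import java.util.Map;

// Idempotency sketch: each request carries a unique id, and the service
// remembers the result per id, so a retry does not apply the side effect twice.
public class IdempotentHandler {
    private final Map<String, String> processed = new HashMap<>();
    private int charges = 0; // the side effect we must not repeat

    public synchronized String charge(String requestId, int amount) {
        // Duplicate request: return the stored result, do not charge again.
        if (processed.containsKey(requestId)) {
            return processed.get(requestId);
        }
        charges++;
        String result = "charged " + amount;
        processed.put(requestId, result);
        return result;
    }

    public int totalCharges() { return charges; }

    public static void main(String[] args) {
        IdempotentHandler h = new IdempotentHandler();
        h.charge("req-1", 100);
        h.charge("req-1", 100); // retry of the same request after a timeout
        System.out.println(h.totalCharges()); // side effect applied only once
    }
}
```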
- Share as little as
possible
In monolithic applications, sharing is considered to be a best practice, but that's not the case with microservices. Sharing results in a violation of the Bounded Context principle, so we should refrain from creating any single unified shared model that works across microservices. For example, if different services need a common Customer model, then we should create one for each microservice with just the required fields for the given bounded context, rather than creating a big model class that is shared by all services.
The more
dependencies we have between services, the harder it is to isolate the service
changes, making it difficult to make a change in a single service without
affecting other services. Also, creating a unified model that works in all
services brings complexity and ambiguity to the model itself, making it hard
for anyone to understand the model.
In a way, we want to relax the DRY principle in a microservices architecture when it comes to domain models.
- Server-Side Caching - Distributed caching software like Redis or Memcached is used to cache the results of business operations. The cache is distributed, so all instances of a microservice can see the values from the shared cache. This type of caching is opaque to clients.
- Gateway Cache - A central API gateway can cache query results as per business needs and provide improved performance. This way we can achieve caching for multiple services in one place. Distributed caching software like Redis or Memcached can be used in this case as well.
- Client-Side Caching - We can set cache headers in the HTTP response and allow clients to cache the results for a pre-defined time. This drastically reduces the load on servers, since clients will not make repeated calls to the same resource. ETags can be used for cache validation, so that any change in the query result is also handled. If the end client is a microservice itself, then Spring Cache support can be used to cache the results locally.
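The time-based expiry behind Cache-Control max-age style caching can be sketched in plain Java. This is an illustrative sketch, with the clock passed in explicitly for clarity; real caches (Redis, Spring Cache) handle expiry internally:

```java
import java.util.HashMap;
import java.util.Map;

// TTL cache sketch: a cached result is reused until its time-to-live expires,
// after which the next lookup misses and must recompute/refetch.
public class TtlCache<K, V> {
    private static final class Entry<V> {
        final V value;
        final long expiresAt;
        Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Entry<V>> store = new HashMap<>();
    private final long ttlMillis;

    public TtlCache(long ttlMillis) { this.ttlMillis = ttlMillis; }

    public void put(K key, V value, long now) {
        store.put(key, new Entry<>(value, now + ttlMillis));
    }

    // Returns null when the entry is missing or expired.
    public V get(K key, long now) {
        Entry<V> e = store.get(key);
        if (e == null || now >= e.expiresAt) {
            store.remove(key); // drop stale entry
            return null;
        }
        return e.value;
    }

    public static void main(String[] args) {
        TtlCache<String, String> cache = new TtlCache<>(1000);
        cache.put("catalog", "cached response", 0);
        System.out.println(cache.get("catalog", 500));  // still fresh
        System.out.println(cache.get("catalog", 2000)); // expired -> null
    }
}
```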
By using Swagger annotations on REST endpoints, API documentation can be auto-generated and exposed over a web interface. Internal and external teams can use the web interface to see the list of APIs, their inputs, and error codes. They can even invoke the endpoints directly from the web interface to get results.
Swagger UI
is a very powerful tool for your microservices consumers to help them
understand the set of endpoints provided by a given microservice.
- JUnit - the standard test runner
- TestNG - the next-generation test runner
- Hamcrest - declarative matchers and assertions
- Rest-assured - for writing REST API driven end-to-end tests
- Mockito - for mocking dependencies
- Wiremock - for stubbing third-party services
- Hoverfly - creates API simulations for end-to-end tests
- Spring Test and Spring Boot Test - for writing Spring integration tests; includes MockMvc, TestRestTemplate, and WebClient-like features
- JSONassert - an assertion library for JSON
- Pact - the Pact family of frameworks provides support for Consumer Driven Contracts testing
- Selenium - automates browsers; used for end-to-end automated UI testing
- Gradle - helps build, automate, and deliver software, faster
- IntelliJ IDEA - IDE for Java development
- Using spring-boot-starter-test
- We can just add the below dependency in the project's build.gradle:
- testCompile('org.springframework.boot:spring-boot-starter-test')
This starter will import two Spring Boot test modules - spring-boot-test and spring-boot-test-autoconfigure - as well as JUnit, AssertJ, Hamcrest, Mockito, JSONassert, Spring Test, Spring Boot Test, and a number of other useful libraries.
Why Microservices?
In the case of monolithic applications, there are several problems:
1. The same code base holds the presentation, business, and data access layers, and the application is deployed as a single unit.
2. It is complex to maintain, and scalability is an issue.
Microservices solve the above problems.
Microservices are ideal when a monolith or a legacy application needs to be modernized.
For new software development, if the key business drivers are reduced time to market, better scalability, lower costs, faster development, or cloud-native development, microservices are ideal.
Each service is independent and gives the flexibility to choose the programming language, database, and/or architecture.
Distinct services can be developed, deployed, and maintained independently.
What is an API gateway in microservices?
The API Gateway is a microservices architecture pattern: a server that is the single entry point into the system. The API Gateway is responsible for request routing, composition, and protocol translation. All requests from clients first come to the API Gateway, which routes each request to the correct microservice.
API Gateway can also aggregate the results from the microservices back to the client. API Gateway can also translate between web protocols like HTTP, web socket, etc.
API Gateway can provide every client with a custom API as well.
An example of an API Gateway is Netflix API Gateway.
How to deploy microservices?
Microservices are developed and deployed quickly and in most cases automatically as part of the CI/CD pipeline. Microservices could be deployed in Virtual Machines or Containers. The virtual machines or containers can be On-premise or in the cloud as well.
There are different deployment approaches available for Microservices. Some of the possible deployment approaches for microservices are mentioned below.
· Multiple service instances per host
· Service instance per host
· Service instance per VM
· Service instance per Container
· Serverless deployment
· Service deployment platform
How to handle exceptions in microservices?
In the case of microservices, exception handling is important. If any exception/error is not handled, it will be propagated to all the downstream services creating an impact on the user experience. To make the services more resilient, handling exceptions becomes very important.
In the case of a '500 - Internal Server Error', Spring Boot will respond like below.
{
  "timestamp": "2020-04-02T01:31:08.501+00:00",
  "path": "/shop/action",
  "status": 500,
  "error": "Internal Server Error",
  "message": "",
  "requestId": "a8c4c6d4-3"
}
Spring provides ControllerAdvice for exception handling in Spring Boot microservices. @ControllerAdvice informs Spring Boot that a class will act like an interceptor in case of any exceptions.
We can have any number of exception handlers to handle each exception.
E.g. for handling a generic Exception and a RuntimeException, we can have 2 exception handlers.
@ControllerAdvice
public class ApplicationExceptionHandler {
    @ExceptionHandler(Exception.class)
    public ResponseEntity handleGenericException(Exception e) {
        ShopException shopException = new ShopException(100, "Items are not found");
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(shopException);
    }

    @ExceptionHandler(RuntimeException.class)
    public ResponseEntity handleRunTimeException(RuntimeException e) {
        ShopException shopException = new ShopException(101, "Item is not found");
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR).body(shopException);
    }
}
What is Spring Cloud?
Spring Cloud is an open-source library that provides tools for quickly deploying JVM-based applications on the cloud. It provides a better user experience and an extensible mechanism through features like distributed configuration, circuit breakers, global locks, service registration, load balancing, cluster state, and routing. It is capable of working with Spring and with applications in various other languages.
Features of Spring Cloud
Major features are as below:
· Distributed configuration
· Distributed messaging
· service-to-service calls
· Circuit breakers
· Global locks
· Service registration
· Service Discovery
· Load balancing
· Cluster state
· Routing
How Do You Override A Spring Boot Project’s Default Properties?
A Spring application loads properties from application.properties files in the following locations and adds them to the Spring Environment:
1. A /config subdirectory of the current directory
2. The current directory
3. A classpath /config package
4. The classpath root
How Is Spring Security Implemented In A Spring Boot Application?
Spring Security is a framework that majorly focuses on providing both authentication and authorization to Java EE-based enterprise software applications.
Adding Spring Security:
Maven:
To include Spring Security, add the below dependencies:
<dependencies>
  <dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-config</artifactId>
    <version>5.5.0</version>
  </dependency>
  <dependency>
    <groupId>org.springframework.security</groupId>
    <artifactId>spring-security-web</artifactId>
    <version>5.5.0</version>
  </dependency>
</dependencies>
What is Hystrix and how can it be implemented in the Spring Boot Application?
Netflix’s Hystrix is a library that provides an implementation of the Circuit Breaker pattern for Microservices based applications. A circuit breaker is a pattern that monitors for failures and once the failures reach a certain threshold, the circuit breaker trips, and all further calls will return with an error, without the external call being made at all.
On applying a Hystrix circuit breaker to a method, it watches for failing calls to that method, and if failures build up to a threshold, Hystrix opens the circuit so that subsequent calls automatically fail.
While the circuit is open, Hystrix redirects calls to a specified fallback method. This creates a time buffer for the related service to recover from its failing state.
Below are the annotations used to enable Hystrix in a Spring Boot application:
@EnableCircuitBreaker: It is added to the main Application class to enable Hystrix as a circuit breaker and to enable hystrix-javanica, which is a wrapper around native Hystrix required for using the annotations.
@HystrixCommand: This is a method annotation that notifies Spring to wrap a particular method in a proxy connected to a circuit breaker so that Hystrix can monitor it. We also need to define a fallback method with the backup logic that needs to be executed in the failure scenario. Hystrix passes control to this fallback method when the circuit is broken.
This annotation can also be used for asynchronous requests. Currently, it works only with classes marked with @Component or @Service.
What is Service Discovery and how can it be enabled in Spring Boot?
In a typical microservice architecture, multiple services collaborate to provide the overall functionality. These service instances may have dynamically assigned network locations. Also, the services scale up and down as per the load. In a cloud environment, it can get tricky to resolve the services that a given piece of functionality requires.
Consequently, in order for a client to make a request to a service, it must use a service-discovery mechanism. This is the process where services register with a central registry and other services query this registry for resolving dependencies.
A service registry is a highly available and up-to-date database containing the network locations of service instances. The two main service-discovery patterns are client-side discovery and server-side discovery.
Netflix Eureka is one of the popular Service Discovery Server and Client tools. Spring Cloud supports several annotations for enabling service discovery. @EnableDiscoveryClient annotation allows the applications to query the Discovery server to find required services.
In Kubernetes environments, service discovery is built-in, and it performs service instance registration and deregistration.