How does a Spring Boot application
handle runtime exceptions internally, and which annotations are used to
declare user-defined exceptions? Is there any real-time scenario where Spring Boot
fits in to handle exceptions?
Spring Boot checks whether any class is
annotated with @ControllerAdvice and @ExceptionHandler; when an exception occurs
in the REST endpoint layer, Spring Boot calls the corresponding
annotated class to handle it and build the error message.
- Spring Boot provides the below annotations to
create a custom handler that eventually catches exceptions thrown from the REST
endpoints.
- Spring Boot handles exceptions generated by the REST layer
as a cross-cutting concern.
- It lets us use error message codes comprehensively.
- @ControllerAdvice is an annotation used to
handle exceptions globally.
- @ExceptionHandler is an annotation used
to handle specific exceptions and send custom responses to the
client.
A real-time scenario: most exception
messages are system generated and carry terse technical
information, which is sometimes difficult for the user interface to interpret
and for a layman user to understand. To solve this, Spring Boot
intercepts the error and converts it into a meaningful, comprehensive
message that is easy to understand and interpret.
What are the important
annotations used to create an interceptor, and what does each of them do?
An interceptor is one of the prominent
features of Spring Boot. The class that supports it must be annotated with
@Component and must implement the HandlerInterceptor interface.
There are three methods used to
implement an interceptor:
- preHandle() method − This method runs before the
controller handles the request; it is predominantly used to intercept the call
and read information present in the request.
- postHandle() method − This method runs after the
controller handles the request; it is used to intercept the call and work with
information present in the response.
- afterCompletion() method − This method is used to
perform operations after the request and response cycle has completed.
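The three methods above can be sketched as follows (a minimal illustration; the class name and log messages are hypothetical, and the interceptor still needs to be registered via a WebMvcConfigurer's addInterceptors method):

```java
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import org.springframework.stereotype.Component;
import org.springframework.web.servlet.HandlerInterceptor;
import org.springframework.web.servlet.ModelAndView;

@Component
public class RequestLoggingInterceptor implements HandlerInterceptor {

    @Override
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response,
                             Object handler) {
        // Runs before the controller; returning false aborts the request.
        System.out.println("Incoming request: " + request.getRequestURI());
        return true;
    }

    @Override
    public void postHandle(HttpServletRequest request, HttpServletResponse response,
                           Object handler, ModelAndView modelAndView) {
        // Runs after the controller method, before the view is rendered.
        System.out.println("Response status: " + response.getStatus());
    }

    @Override
    public void afterCompletion(HttpServletRequest request, HttpServletResponse response,
                                Object handler, Exception ex) {
        // Runs after the complete request/response cycle, including on errors.
        System.out.println("Request completed");
    }
}
```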
Can you give a Spring Boot example of
developing an exception handler?
The code below represents a
ControllerAdvice class developed using the Spring Boot framework to handle
an exception.
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
// @ControllerAdvice marks this class as a global exception handler for the controllers
@ControllerAdvice
public class ProductExceptionController {
   // The method below handles ProductNotfoundException (a user-defined exception)
   // thrown by the REST endpoint methods.
   @ExceptionHandler(value = ProductNotfoundException.class)
   public ResponseEntity<Object> exception(ProductNotfoundException exception) {
      return new ResponseEntity<>("Product not found", HttpStatus.NOT_FOUND);
   }
}
- The REST controller class which generates the exception:
import java.util.HashMap;
import java.util.Map;
// The imports below are required, especially those from the org.springframework package
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;
import com.tutorialspoint.demo.exception.ProductNotfoundException;
import com.tutorialspoint.demo.model.Product;
// This class shows how the exception handler class above gets invoked.
@RestController
public class ProductServiceController {
   private static Map<String, Product> productRepo = new HashMap<>();
static {
Product honey = new Product();
honey.setId("1");
honey.setName("Honey");
productRepo.put(honey.getId(), honey);
Product almond = new Product();
almond.setId("2");
almond.setName("Almond");
productRepo.put(almond.getId(), almond);
}
   // The endpoint below throws ProductNotfoundException if the id is not found in the
   // repository; instead of surfacing a raw runtime exception, the handler class
   // catches it and produces an appropriate message.
   @RequestMapping(value = "/products/{id}", method = RequestMethod.PUT)
   public ResponseEntity<Object> updateProduct(@PathVariable("id") String id, @RequestBody Product product) {
      if (!productRepo.containsKey(id))
         throw new ProductNotfoundException();
      productRepo.remove(id);
      product.setId(id);
      productRepo.put(id, product);
      return new ResponseEntity<>("Product is updated successfully", HttpStatus.OK);
}
}
What are Spring Profiles? How
do you implement them using Spring Boot?
A profile is a feature of the Spring framework that
allows us to map beans and components to certain profiles. A profile can be
thought of as a group or an environment (dev, test, prod, etc.) that needs
a certain kind of behavior and/or maintains distinct functionality
across profiles. So, when the application runs with the 'dev'
(Development) profile only certain beans are loaded, and when it runs in 'prod'
(Production) certain other beans are loaded.
In Spring Boot, we use the @Profile annotation to map a bean to
a particular profile; it takes the name of one (or multiple) profiles.
Let’s say we have a Component class that is used to record and
mock the REST requests and responses. However, we want to activate this
component only in dev profile and disable in all other profiles. We annotate
the bean with “dev” profile so that it will only be present in the container
during development.
@Component
@Profile("dev")
public class DevMockUtility { ... }
Profiles are activated in the application.properties
(or the equivalent application.yml) of the Spring project:
spring.profiles.active=dev
To set profiles programmatically, we
can also use the SpringApplication instance (Application.class below stands for the application's main class):
SpringApplication application = new SpringApplication(Application.class);
application.setAdditionalProfiles("dev");
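A minimal sketch (the interface and bean names are hypothetical) showing how two profile-specific beans coexist in the same configuration, with only one loaded per active profile:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Profile;

// Hypothetical service interface used to demonstrate profile-specific beans.
interface GreetingService {
    String greet();
}

@Configuration
public class ProfileConfig {

    @Bean
    @Profile("dev")
    public GreetingService devGreeting() {
        // Loaded only when the 'dev' profile is active
        return () -> "Hello from dev";
    }

    @Bean
    @Profile("prod")
    public GreetingService prodGreeting() {
        // Loaded only when the 'prod' profile is active
        return () -> "Hello from prod";
    }
}
```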
What Is the Difference Between
Hibernate and Spring Data JPA?
Hibernate is a JPA (Java Persistence API)
implementation providing ORM (object-relational mapping) for mapping, storing,
updating and retrieving application data between relational databases and Java
objects. Hibernate maps Java classes to database tables and
Java data types to SQL data types, so the programmer is relieved from writing
traditional data-persistence code like SQL.
Spring Data JPA, on the other hand, is a data
access abstraction on top of JPA used to significantly reduce the amount of boilerplate code
required to implement data access layers for various persistence stores. With
Spring Data, we still need to use Hibernate, EclipseLink, or another JPA
provider. One of the key benefits is that we can control transaction boundaries
with the @Transactional annotation.
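To illustrate the boilerplate reduction, a repository in Spring Data JPA is just an interface (the entity and method names here are hypothetical); Spring generates the implementation at runtime, delegating to the underlying JPA provider:

```java
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

// Assumes a JPA entity 'Product' with a String primary key. No implementation
// class is written; Spring Data derives the query from the method name.
public interface ProductRepository extends JpaRepository<Product, String> {
    List<Product> findByName(String name);
}
```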
How to implement Caching in
Spring Boot?
Caching is a mechanism that helps in reducing roundtrip calls to
Database, REST service, files, etc. Performance under heavy load is a key
feature expected from any modern web and mobile application, hence caching is
really vital to enhance the speed of fetching data.
Spring Boot provides a starter project for caching
“spring-boot-starter-cache”, adding this to an application brings in all the
dependencies to enable JSR-107 (JCACHE - Java Temporary Caching API) and Spring
caching annotations.
In order to enable caching in a Spring Boot application, we need
to add @EnableCaching to the required configuration class. This will
automatically configure a suitable CacheManager to serve as a provider for the
cache.
Example:
@Configuration
@EnableCaching
public class CachingConfig {
@Bean
public CacheManager cacheManager() {
return new ConcurrentMapCacheManager("addresses");
}
}
Then, to cache the results of a method, we add
the @Cacheable annotation to it:
@Cacheable("addresses")
public String getAddress(Customer customer) {...}
What is HATEOAS in RESTful
applications?
HATEOAS (Hypermedia as the Engine of
Application State) is a principle for REST APIs, according to which the API
should guide the client through the application by returning relevant
information about potential subsequent steps, along with the response.
This information is in the form of
hypermedia links included with responses, which helps the client to navigate
the site's REST interfaces. It essentially tells the clients what they can do
next, and what is the URI of the resource. If a service consumer can use the
links from the response to perform transactions, then it would not need to
hardcode all links.
According to the Richardson Maturity
Model, HATEOAS is considered the final level of REST.
To support HATEOAS, each resource in
the application should contain a "links" property which defines
hyperlinks to related resources.
Each link object typically includes
the following properties:
- "rel": relation with the target resource.
- "href": URI of the resource.
For example:
{
   "contractId": 10067,
   "description": "Contract details for the requested orderId",
   "status": "created",
   "links": [
      {
         "rel": "self",
         "href": "http://demoApplication.com/contracts/10067"
      }
   ]
}
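With the Spring HATEOAS library, a response like this can be produced by wrapping the resource in an EntityModel and attaching links (a sketch; the controller, entity, and path are hypothetical):

```java
import org.springframework.hateoas.EntityModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RestController;

import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.linkTo;
import static org.springframework.hateoas.server.mvc.WebMvcLinkBuilder.methodOn;

@RestController
public class ContractController {

    @GetMapping("/contracts/{id}")
    public EntityModel<Contract> getContract(@PathVariable long id) {
        // In practice the Contract would be fetched from a repository.
        Contract contract = new Contract(id);
        // EntityModel wraps the payload and serializes the links as a
        // "links" array alongside the resource fields.
        return EntityModel.of(contract,
                linkTo(methodOn(ContractController.class).getContract(id)).withSelfRel());
    }
}
```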
Explain the difference between
the @Controller and @RestController annotations.
Traditional Spring controllers are
created by adding a class with @Controller annotation. It is actually a
specialization of the @Component annotation that allows the implementation
classes to be autodetected by Spring context through the classpath scanning.
Generally, @Controller annotation is
used in combination with @RequestMapping and @ResponseBody added to the request
handling methods to define the REST APIs.
@RestController is a convenient
annotation that combines both the features of @Controller and @ResponseBody
annotations.
The key difference between typical
Spring @Controller and the RESTful web service @RestController is the way the
HTTP response body is created. While the traditional MVC controller relies on
the View technology, the RESTful web service controller returns the object and
the object data is written directly to the HTTP response as JSON.
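The equivalence can be sketched as follows (paths and class names are illustrative): both controllers below return the string body directly, but the first needs @ResponseBody on the handler method while the second gets it implicitly.

```java
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.RestController;

// With @Controller, a return value is treated as a view name unless the
// handler method is annotated with @ResponseBody.
@Controller
class GreetingController {
    @GetMapping("/greet")
    @ResponseBody
    public String greet() {
        return "hello";
    }
}

// @RestController = @Controller + @ResponseBody: every handler method writes
// its return value directly to the HTTP response body.
@RestController
class GreetingRestController {
    @GetMapping("/greet2")
    public String greet() {
        return "hello";
    }
}
```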
What is @Autowired annotation?
How is @Qualifier used with it?
The @Autowired annotation is used to autowire, i.e. inject, a dependent
bean into a constructor, setter method, or field/property. When @Autowired is
used on a dependency, the application context searches for a matching bean
and injects it as required. This helps us avoid writing explicit injection
logic.
However, by default, all autowired dependencies
are required. So, in scenarios where a required dependency
is not available, or where multiple candidate beans conflict, it results in an
exception such as NoSuchBeanDefinitionException or NoUniqueBeanDefinitionException.
There are a few options available to
turn off the default behavior:
- By using the
(required=false) option with @Autowired to make it non-mandatory for a
specific bean property:
@Autowired(required = false)
private Contract contractBean;
- By using @Qualifier,
we can further qualify autowiring in scenarios where two beans of the same
type are present:
@Autowired
@Qualifier("design")
private Contract contractBean;
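A fuller sketch of @Qualifier (the Contract interface and bean names are hypothetical): two beans implement the same type, and the qualifier selects which one is injected.

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Qualifier;
import org.springframework.stereotype.Component;

// Hypothetical interface with two competing implementations.
interface Contract {
    String type();
}

@Component("design")
class DesignContract implements Contract {
    public String type() { return "design"; }
}

@Component("support")
class SupportContract implements Contract {
    public String type() { return "support"; }
}

@Component
class ContractConsumer {
    // Without @Qualifier, injection would fail with
    // NoUniqueBeanDefinitionException because two Contract beans exist.
    @Autowired
    @Qualifier("design")
    private Contract contractBean;
}
```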
What are Microservices? How are they
different from monolithic applications?
Microservices (MS) is an architecture
pattern that prescribes dividing an application based on business
functionality instead of technical boundaries. These smaller,
interconnected services constitute the complete application. As opposed to
a monolithic architecture, it recommends breaking the application into smaller
atomic units, each performing a single function.
Typically, an application provides a
set of distinct features or functionality, such as order management, billing,
customer service, etc. Each microservice works as a mini-application that has
its own hexagonal architecture. It is often compared to Honeycombs (nests) that
are a combination of multiple hexagonal structures.
Below are some of the key features of
Microservices that distinguish them from monoliths:
- High
Cohesion: Single responsibility per service, i.e. the
code performs a single, well-defined task only.
- Loose
Coupling: Microservices are autonomous, i.e. the
effect of a change is isolated to that particular MS only.
- Interoperability: One of the key foci of microservices is communication
between systems using diverse technologies.
- Stateless: An ideal microservice does not have a state, i.e. it does not
store any information between requests. All the information needed to
create a response is present in the request.
- DevOps: It is highly recommended to implement an automated build and
release process using suitable CI/CD infrastructure.
- Developing products instead of projects.
What is Hystrix and how can it
be implemented in a Spring Boot application?
Netflix’s Hystrix is a library that provides an implementation
of the Circuit Breaker pattern for Microservices based applications. A circuit
breaker is a pattern that monitors for failures and once the failures reach a
certain threshold, the circuit breaker trips, and all further calls will return
with an error, without the external call being made at all.
On applying Hystrix circuit breaker to a method, it watches for
failing calls to that method, and if failures build up to a threshold; Hystrix
opens the circuit so that subsequent calls automatically fail.
While the circuit is open, Hystrix redirects call to a specified
method called a fallback method. This creates a time buffer for the related
service to recover from its failing state.
Below are the annotations used to enable Hystrix in a Spring
Boot application:
@EnableCircuitBreaker: It is
added to the main Application class for enabling Hystrix as a circuit breaker
and to enable hystrix-javanica; which is a wrapper around native Hystrix
required for using the annotations.
@HystrixCommand: This is a
method-level annotation that tells Spring to wrap a particular method in a proxy
connected to a circuit breaker so that Hystrix can monitor it. We also need to
define a fallback method containing the backup logic to be executed in
the failure scenario. Hystrix passes control to this fallback method when
the circuit is open.
This annotation can also be used for
asynchronous requests. Currently, it works only with classes marked with
@Component or @Service.
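The two annotations can be sketched together as follows (the service name, URL, and fallback value are hypothetical): a remote call is wrapped with a circuit breaker, and failures are routed to a fallback with the same signature.

```java
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class RatingService {

    private final RestTemplate restTemplate = new RestTemplate();

    @HystrixCommand(fallbackMethod = "defaultRating")
    public String getRating(String productId) {
        // If calls to this method keep failing, Hystrix opens the circuit and
        // routes subsequent calls directly to the fallback below.
        return restTemplate.getForObject(
                "http://rating-service/ratings/" + productId, String.class);
    }

    // The fallback must match the signature of the wrapped method.
    public String defaultRating(String productId) {
        return "N/A";
    }
}
```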
What is Service Discovery and
how can it be enabled in Spring Boot?
In a typical Microservices architecture, multiple services
collaborate to provide the overall functionality. These service instances
may have dynamically assigned network locations, and the services scale up
and down as per the load. In a cloud environment, resolving the services
required for a given operation can get tricky.
Consequently, in order for a client to make a request to a
service, it must use a service-discovery mechanism. It is the process where
services register with a central registry and other services query this
registry for resolving dependencies.
A service registry is a highly available and up-to-date database
containing the network locations of service instances. The two main
service-discovery patterns are client-side discovery and server-side
discovery.
Netflix Eureka is one of the popular Service Discovery Server
and Client tools. Spring Cloud supports several annotations for enabling
service discovery. @EnableDiscoveryClient annotation allows the applications
to query Discovery server to find required services.
In Kubernetes environments, service
discovery is built in, and the platform performs service instance registration and
deregistration.
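Enabling discovery is typically a one-annotation change on the main class (a sketch; the application class name is hypothetical, and a discovery server such as Eureka plus the matching Spring Cloud starter are assumed to be configured):

```java
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// @EnableDiscoveryClient registers this service with the configured discovery
// server and lets it query the registry to resolve other services.
@SpringBootApplication
@EnableDiscoveryClient
public class CatalogApplication {
    public static void main(String[] args) {
        SpringApplication.run(CatalogApplication.class, args);
    }
}
```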
What is Cloud Foundry?
Cloud Foundry is an open-source cloud PaaS (platform as a
service) where developers and organizations can build, deploy, run and scale
their applications. It originated from the same company that manages Spring
(Pivotal), hence it has great support for running Spring-based cloud applications.
For deploying an application in Cloud Foundry, we need to
configure the application with a target and a space to deploy the application
to.
It is increasingly gaining popularity as an open-source platform and lets
us use our own tools and code. Organizations can deploy Cloud Foundry PaaS on
their own internal infrastructure, or on cloud providers' infrastructure such as
Amazon Web Services (AWS) or Microsoft Azure.
It also provides a few out of the box components:
- Authentication: Contains an OAuth2 server and login server for user
identity management.
- Application Lifecycle: Provides application deployment and
management services.
- Application Storage and Execution: It can control when an application
starts and stops, as well as the VM's containers.
- Service Brokers: Helps connecting applications to services like
databases.
- Messaging: Enables VMs to communicate via HTTP or HTTPS protocols,
and can also store data like application status.
- Metrics and Logging: It provides the Loggregator tool, which helps
organizations monitor their Cloud Foundry environment.
It is very easy to set up an application on Cloud Foundry:
- Create a Pivotal Cloud Foundry account.
- Create an organization and space to deploy the application.
- Add the plugin with the configuration of the Cloud Foundry org and
space in the application's pom.xml/build.gradle.
What is ELK stack?
Popular logging frameworks such as
Log4j, Logback, and SLF4J provide logging functionality for an
individual microservice application. However, when a group of services run together to
provide complete business functionality, it becomes really challenging to trace
a request across all the services, especially in the case of failures.
Hence it is highly recommended to
have a centralized logging solution in place, to have all the log messages
stored in a central location rather than on local machine/container of each
microservice. This eliminates dependency on the local disk space (volumes) and
can help retain the logs for a long time for analysis in the future.
The Elasticsearch, Logstash, and
Kibana tools, collectively known as the ELK stack, provide an end-to-end
logging solution in the distributed application; providing a centralized
logging solution. ELK stack is one of the most commonly used architectures for
custom logging management in cloud-based Microservices applications.
Elasticsearch is a NoSQL database
used to store the logs as documents. Logstash is a log-pipeline tool that
accepts logs as input from the various microservice applications, executes
transformations if required, and stores the data into the target (the Elasticsearch
database).
Kibana is a UI that works on top of
Elasticsearch, providing a visual representation of the logs and ability to
search them as required. All three tools are typically installed on a
single server, known as the ELK server.
In a centralized logging approach,
applications should follow a standard for log messages, with each message
carrying a context, a message text, and a correlation ID. The context
information can ideally include the IP address, user information, process
details, timestamp, etc. The message is a simple text description of the
scenario. The correlation ID is dynamically generated and is shared across all
the services involved, so it can be used for end-to-end tracking of a request/task.