Spring Framework Topics and Interview Questions:
Hibernate Topics and Interview Questions:
Apache Camel Topics:
This blog has information on Microservices Architecture, Spring Boot, Spring Cloud, Java, Interview Questions, and Tutorials. It also covers some articles on other technologies.
How does microservices architecture work?
A
microservices architecture breaks your monolithic application into a collection
of loosely coupled services. Each service focuses on a business capability or
domain.
They then
communicate through well-defined APIs using RESTful or other lightweight
protocols, sometimes using centralized API gateways.
Container technologies such as Docker package these microservices, and container
orchestration tools like Kubernetes deploy and manage them.
This
decentralization allows independent development and maintenance by different
teams using the programming languages and technologies that
best suit their needs.
Microservices also ensure fault tolerance through techniques such as circuit breakers, retries, distributed tracing, and monitoring and logging for issue detection.
How to implement microservices architecture?
To
implement the microservice architecture, consider the four guiding
principles:
1. Follow Conway’s law when structuring your application
Conway’s
law suggests
that your software system’s structure will mirror your development team’s
communication patterns. It means organizing your services around your team’s
natural boundaries and communication paths in microservices.
For
example, if you have a large cross-functional team, you might structure your
microservices architecture pattern to align with the responsibilities of these
teams. It can lead to better communication, collaboration, and shared
understanding among team members, which, in turn, will result in more efficient
and effective development.
2. Avoid ending up with accidental monoliths
One common
pitfall in a microservices architecture is inadvertently recreating a monolith
by tightly coupling your services. To prevent this, maintain loose coupling
between services and resist the temptation to share too much logic or data.
Each service should be independently deployable and maintainable.
3. Refactor your monolithic application with service objects
It’s often
wise to incrementally refactor your existing codebase if transitioning from a
monolithic architecture to microservices. Service objects can help you with
this. Break down monolithic modules into smaller, reusable service objects that
encapsulate specific functionality. It makes it easier to replace monolithic
components with microservices gradually.
4. Design smart endpoints and dumb pipes
Microservices
should communicate efficiently, but communication should be simple and
transparent. So, design smart endpoints that are responsible for processing
requests and responses. At the same time, the communication between them (the
“pipes”) should be as straightforward as possible, such as using HTTP/REST or
lightweight message queues.
Moreover,
prioritize continuous monitoring, automate testing, embrace containerization
for scalability, and foster a culture of decentralization. Additionally, stay
updated on emerging technologies and best practices in microservices to ensure
your architecture evolves effectively.
How to deploy a microservice architecture?
You can
deploy a microservice architecture with single machines, orchestrators,
containers, serverless functions, and more. No matter the way, follow a
structured approach that ensures reliability, scalability, and agility, such as
these six essential steps:
Step 1: Use cloud services for production infrastructure
Consider
utilizing cloud platforms like AWS, Azure, or Google Cloud for
your production environment. These services provide scalable, reliable
infrastructure, eliminating the need to manage physical servers. You can
quickly provision resources and leverage cloud-native tools for seamless
deployment and management.
Step 2: Design for failure
In a
microservices architecture, failures are inevitable. Design your services to be
resilient rather than trying to prevent failures entirely. Implement redundancy and
failover mechanisms so that when a service goes down, it doesn’t bring down the
entire system. This approach ensures uninterrupted service availability.
Step 3: Decentralized data management
Each
microservice should have its own data store, preferably a database tailored to
its needs. Avoid monolithic databases that can create dependencies between
services. Decentralized data management ensures that changes to one service’s
data structure won’t impact others, enhancing agility and scalability.
Step 4: Distribute governance
Distribute
governance responsibilities across your development teams. Empower each team
to make decisions regarding their microservices, including technology
stack, API design, and scaling strategies. This approach fosters autonomy,
accelerates development, and ensures decisions align with service-specific
requirements.
Step 5: Automate infrastructure deployment and embrace CI/CD
Leverage
automation tools like Kubernetes, Docker, and Jenkins to streamline your
deployment pipeline. Implement continuous integration and continuous
deployment (CI/CD) processes to automate testing, building, and
deploying microservices. This automation accelerates the release cycle and
minimizes the risk of human error.
Step 6: Monitor, log, and troubleshoot from the start
Start
monitoring and logging your microservices right from the beginning. Use
monitoring and observability tools like Prometheus, Grafana,
and ELK Stack to collect and analyze your services’ performance data. Effective
monitoring lets you identify issues early, troubleshoot efficiently, and
continuously optimize your microservices.
Ways to deploy a microservices architecture
When
you’re planning to deploy a microservices architecture, you have several
options at your disposal. Each option offers a unique approach to hosting and
managing your microservices, and choosing the right one depends on your
specific needs and the nature of your application.
Option 1: Single machine, multiple processes
Imagine
your microservices running on a single machine, each as a separate process.
This option is a straightforward approach to deploying microservices. It’s
suitable for small-scale applications and can be cost-effective. However, it
has limitations in terms of scalability and resilience. The entire system can
halt when the machine experiences issues or becomes overwhelmed.
Option 2: Multiple machines and processes
To address
the limitations of the single-machine approach, you can deploy your
microservices across multiple machines. Each microservice runs as a separate
process on its server. This approach improves scalability and reliability,
making it suitable for larger applications. Implementing load balancing is
essential to distribute traffic evenly across the microservices and maintain
high availability.
Option 3: Deploy microservices with containers
Containers,
such as Docker, have revolutionized microservices deployment. With
containerization, you can package each microservice and its dependencies
into a lightweight, consistent environment. This approach ensures that your
microservices are isolated from each other, making it easier to manage, scale,
and deploy them across various environments. Containers are also portable so
that you can run them on different cloud providers or on-premises servers with
minimal changes.
Option 4: Deploy microservices with orchestrators
Container
orchestration platforms
like Kubernetes offer a powerful solution for deploying and managing
microservices at scale. Orchestrators help automate the deployment, scaling,
and load balancing of microservices. They provide advanced features like
self-healing, rolling updates, and service discovery, making managing many
microservices more manageable. Kubernetes, in particular, has become the de
facto standard for container orchestration.
Option 5: Deploy microservices as serverless functions
Popularized
by platforms like AWS Lambda, serverless computing allows you to
deploy microservices as individual functions that automatically scale with
demand. This approach eliminates the need for managing infrastructure and
ensures cost efficiency by only charging you for the resources used. While
serverless is an attractive option for specific workloads, it’s unsuitable for
all applications. Consider latency, execution limits, and the stateless nature
of serverless functions when deciding whether to go serverless.
And, as
you deploy your microservices architecture, monitor your system’s ongoing
health and performance to ensure seamless operations.
How to monitor microservices?
Start with
containers, then look for service performance, APIs, multi-location services,
and other parts. To effectively monitor microservices, adhere to the
below-mentioned fundamental principles:
1. Monitor containers and what’s inside them
In
microservices, organizations often containerize applications using technologies
like Docker or Kubernetes. Monitoring these containers is fundamental. So, keep
track of resource utilization (CPU, memory, network), container health, and the
processes running inside. It allows you to spot potential issues early on.
2. Alert on service performance, not container performance
While
monitoring containers is essential, the ultimate goal is ensuring that the
services hosted within these containers perform as expected. Instead of being
overwhelmed with alerts from individual containers, focus on high-level
service-level indicators such as response times, error rates, and throughput.
It provides a more accurate reflection of the user experience.
3. Monitor elastic and multi-location services
Microservices
architecture enables services to scale dynamically and distribute across
multiple locations. Therefore, ensure your monitoring solution can track service
instances wherever they may be, whether in a data center, in the cloud, or at the
edge. Also measure elasticity (auto-scaling events) and use location-aware
monitoring to verify uniform performance across regions.
4. Monitor APIs
In
microservices, communication often happens through APIs. So, monitor the
performance and reliability of these APIs. Track response times, error rates,
and usage patterns to identify bottlenecks, misbehaving services, or any
external dependencies causing slowdowns or failures in your microservices
ecosystem.
5. Map your monitoring to your organizational structure
Different
teams often manage microservices in larger organizations. Each team may have
ownership of specific microservices or service clusters. So, create a
monitoring strategy that reflects your organizational structure.
Implement role-based access controls so each team can monitor
and troubleshoot their services without impacting others.
Challenges (and best practices) to implement microservice architecture
Implementing
a microservice architecture requires effective communication, complexity
management, and orchestration of service interactions. Here’s how you can
overcome these challenges.
Challenge 1: Service coordination
Coordinating
the services in a microservices architecture can be complex due to the system’s
distributed nature. Each microservice operates independently, with its codebase
and database, making it essential to establish effective communication between
them.
Solution: Use API gateways
API
gateways provide a central entry point for clients, simplifying service
communication. They handle request routing and can perform tasks like load
balancing and authentication. This practice centralizes the routing logic,
easing developers’ service discovery burden. API gateways can also help with
versioning and rate limiting, enhancing the user experience.
Challenge 2: Data management
Each
microservice often maintains its database, which can lead to data consistency
and synchronization issues. Ensuring that data is accurate and up to date
across all services can be complex. The need to manage transactions and
maintain data integrity between services becomes critical.
Solution: Use event sourcing and CQRS
Event
sourcing involves capturing all changes to an application’s state as a sequence
of immutable events. Each event represents a change to the system’s state and
can be used to reconstruct the state at any point in time. By storing these
events and using them for data reconstruction, you can maintain data
consistency and simplify synchronization.
Command
Query Responsibility Segregation (CQRS) complements this approach by separating
the read and write data models. This allows for specialized optimizations and
improved data consistency.
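The event-sourcing idea above can be sketched in a few lines of Java. This is an illustrative toy (the account/event names are my own, not from any framework): state is never stored directly, only derived by replaying immutable events.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal event-sourcing sketch: the balance is never stored; it is
// reconstructed by replaying an append-only sequence of immutable events.
public class EventSourcedAccount {

    // Each event records exactly one state change.
    record Event(String type, long amount) {}

    private final List<Event> events = new ArrayList<>();

    public void deposit(long amount)  { events.add(new Event("DEPOSIT", amount)); }
    public void withdraw(long amount) { events.add(new Event("WITHDRAW", amount)); }

    // Replay every event in order to rebuild the current state.
    public long balance() {
        long balance = 0;
        for (Event e : events) {
            balance += e.type().equals("DEPOSIT") ? e.amount() : -e.amount();
        }
        return balance;
    }
}
```

In a CQRS setup, the write side would append events like these, while separate read models would be projected from the same event stream.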
Challenge 3: Scalability
While the
architecture promotes horizontal scaling of individual services, ensuring
dynamic scaling, load balancing, and resource allocation to meet changing
demands without overprovisioning resources becomes challenging.
Solution: Utilize containerization and orchestration
Containerization,
facilitated by technologies like Docker, packages each microservice and its
dependencies into a standardized container. Orchestration tools, such as
Kubernetes, manage these containers, automatically scaling them up or down in
response to varying workloads. This combination simplifies deployment and
scaling, making it easier to adapt to changing demands.
Challenge 4: Monitoring and debugging
With
numerous independent services communicating, it’s challenging to monitor
individual services’ health, performance, and logs and to trace the flow of
requests across the entire system. Debugging issues that span multiple
services, identifying bottlenecks, and diagnosing performance problems become
more complex in such a distributed environment.
Solution: Incorporate centralized logging and distributed tracing
Centralized
logging tools
collect log data from various services into a single location. This allows for
easier monitoring and debugging, as developers can access a unified log stream
for all services.
Distributed
tracing tools enable the tracking of requests across services, offering
insights into the flow of data and the ability to identify bottlenecks or
errors. These tools provide an effective way to diagnose issues, optimize
performance, and ensure reliability.
Challenge 5: Security
Each
service may expose APIs for interaction, making it essential to ensure the
security of both the services themselves and the communication between them. As
services interact across a network, potential vulnerabilities, including data
breaches, unauthorized access, and denial-of-service attacks, must be addressed
effectively.
Solution: Implement OAuth 2.0 and JWT
OAuth 2.0
is an industry-standard protocol for secure authentication and authorization,
ensuring that only authenticated users and services can access sensitive data.
JWTs, on the other hand, are compact, self-contained tokens that transmit
information between services securely. These technologies enhance security by
enabling controlled access and secure data transmission.
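To make the JWT idea concrete, here is a hedged, JDK-only sketch of how a token's HMAC-SHA256 signature is produced and verified. This is for illustration only; a production service should use a vetted JWT/OAuth 2.0 library rather than hand-rolled token handling.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

// Educational sketch of the JWT "header.payload.signature" structure.
public class JwtSketch {

    private static String b64(byte[] bytes) {
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Build header.payload.signature, signing with HMAC-SHA256 (HS256).
    public static String sign(String payloadJson, String secret) throws Exception {
        String header = b64("{\"alg\":\"HS256\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = b64(payloadJson.getBytes(StandardCharsets.UTF_8));
        Mac mac = Mac.getInstance("HmacSHA256");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA256"));
        String signature = b64(mac.doFinal((header + "." + payload).getBytes(StandardCharsets.UTF_8)));
        return header + "." + payload + "." + signature;
    }

    // A receiving service verifies by recomputing the signature with the shared secret.
    public static boolean verify(String token, String secret) throws Exception {
        String[] parts = token.split("\\.");
        String payloadJson = new String(Base64.getUrlDecoder().decode(parts[1]), StandardCharsets.UTF_8);
        return token.equals(sign(payloadJson, secret));
    }
}
```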
Till now,
you have learned how microservices work and how to tackle their implementation
challenges, but should you use microservice architecture for your
project? Find your answer below.
When should you and when should you not use microservice architecture?
Use a microservice architecture for large, complex systems with diverse requirements and multiple development teams that need to develop and deploy independently. Do not use it for small, simple applications, where the added operational complexity and communication overhead outweigh the benefits.
Below are example answers for the interview questions related to solution architect design, with a focus on Java:
Design Patterns and Architecture:
Answer: The Singleton pattern ensures that
a class has only one instance and provides a global point of access to it. In
Java, a common use case is creating a logging service. By having a single
instance of the logger, we can centralize log management and avoid unnecessary
resource consumption.
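The logger example above can be implemented with the initialization-on-demand holder idiom, a thread-safe lazy Singleton that needs no locks (an enum singleton is another common choice):

```java
// Singleton via the holder idiom: the JVM loads Holder (and creates the
// instance) only on first access, and class loading is thread-safe.
public class Logger {

    private Logger() {}  // prevent outside instantiation

    private static class Holder {
        private static final Logger INSTANCE = new Logger();
    }

    public static Logger getInstance() {
        return Holder.INSTANCE;
    }

    public void log(String message) {
        System.out.println("[LOG] " + message);
    }
}
```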
Answer: The Observer pattern is used for
implementing distributed event handling systems. In Java, it was traditionally
implemented with the java.util.Observer interface and Observable class (both
deprecated since Java 9); today custom listener interfaces are preferred. For
example, in a stock market application, stock prices (the observable) notify
registered investors (the observers) when there is a change.
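Since java.util.Observable has been deprecated since Java 9, the stock example is usually written with a small custom listener interface. A minimal sketch (class names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Observer pattern: a Stock notifies registered listeners of price changes.
public class Stock {

    public interface PriceListener {
        void onPriceChange(String symbol, double price);
    }

    private final String symbol;
    private final List<PriceListener> listeners = new ArrayList<>();

    public Stock(String symbol) { this.symbol = symbol; }

    public void addListener(PriceListener listener) { listeners.add(listener); }

    // Notify every registered investor when the price changes.
    public void setPrice(double price) {
        for (PriceListener l : listeners) {
            l.onPriceChange(symbol, price);
        }
    }
}
```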
Answer: The Builder pattern separates the
construction of a complex object from its representation, allowing the same
construction process to create different representations. In Java, the StringBuilder
class is a good example. It allows for efficient construction of strings by
appending characters or other strings.
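Beyond StringBuilder, the Builder pattern is commonly hand-rolled for your own types. A small sketch with a hypothetical Pizza class (the name and fields are illustrative): the builder collects required and optional parts, then produces an immutable object.

```java
// Builder pattern: construction logic lives in a nested Builder, and the
// product (Pizza) is immutable once built.
public class Pizza {

    private final String size;
    private final boolean cheese;
    private final boolean mushrooms;

    private Pizza(Builder b) {
        this.size = b.size;
        this.cheese = b.cheese;
        this.mushrooms = b.mushrooms;
    }

    @Override
    public String toString() {
        return size + (cheese ? "+cheese" : "") + (mushrooms ? "+mushrooms" : "");
    }

    public static class Builder {
        private final String size;   // required
        private boolean cheese;      // optional
        private boolean mushrooms;   // optional

        public Builder(String size) { this.size = size; }
        public Builder cheese()     { this.cheese = true; return this; }
        public Builder mushrooms()  { this.mushrooms = true; return this; }
        public Pizza build()        { return new Pizza(this); }
    }
}
```

Usage reads fluently: `new Pizza.Builder("large").cheese().build()`.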
Answer: MVC separates the application into
three components: Model (data), View (presentation), and Controller (user
input). MVVM introduces ViewModel, which abstracts the View's state and
behavior. MVVM is often preferred for client-side development, especially in
frameworks like JavaFX or Android, where data binding is crucial.
Answer: Caching in Java can be implemented
using libraries like Ehcache or directly using ConcurrentHashMap. You
can cache the results of expensive operations, such as database queries, and
set expiration policies to keep the cache up-to-date.
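The ConcurrentHashMap approach mentioned above can be as simple as memoizing with computeIfAbsent. A sketch with a hypothetical UserCache (no expiration policy shown; Ehcache or Caffeine would add that):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Thread-safe memoization: computeIfAbsent runs the expensive lookup
// only on a cache miss.
public class UserCache {

    private final Map<Integer, String> cache = new ConcurrentHashMap<>();
    int databaseCalls = 0;  // exposed only to illustrate cache hits vs. misses

    // Stand-in for an expensive operation such as a database query.
    private String loadFromDatabase(int id) {
        databaseCalls++;
        return "user-" + id;
    }

    public String getUser(int id) {
        return cache.computeIfAbsent(id, this::loadFromDatabase);
    }
}
```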
Answer: Microservices offer scalability,
flexibility, and the ability to develop and deploy independently. However,
challenges include increased complexity, potential communication overhead, and
the need for effective service orchestration. It is suitable for large, complex
systems with diverse requirements and development teams.
Java Programming:
Answer: Abstract classes can have both
abstract and concrete methods and can hold state, while interfaces primarily
define a contract (though since Java 8 they can also contain default and static
methods). Use abstract classes when you want to share code among closely related
classes, and interfaces when you want to enforce a contract on unrelated classes.
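A compact sketch of that distinction (the Shape/Printable names are my own, chosen for illustration): the abstract class shares state and code among related shapes, while the interface imposes a contract.

```java
// Interface: pure contract that unrelated types could also implement.
interface Printable {
    String describe();
}

// Abstract class: shared state and shared concrete behavior for related types.
abstract class Shape implements Printable {
    private final String name;

    protected Shape(String name) { this.name = name; }

    abstract double area();           // each subclass must provide this

    @Override
    public String describe() {        // code shared by all subclasses
        return name + " area=" + area();
    }
}

class Square extends Shape {
    private final double side;
    Square(double side) { super("square"); this.side = side; }
    @Override double area() { return side * side; }
}
```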
Answer: Generics in Java allow you to create classes, interfaces, and methods with parameters that can work with any data type. For example, a generic class Box<T> can hold objects of any type, providing type safety.
public class Box<T> {
    private T value;
    public void setValue(T value) { this.value = value; }
    public T getValue() { return value; }
}
Answer: Lambdas in Java introduce a concise syntax for writing anonymous methods (functional interfaces). They enhance code readability by allowing developers to express functionality more succinctly. For example:
List<String> names = Arrays.asList("John", "Jane", "Alice");
names.forEach(name -> System.out.println(name));
Answer: The volatile keyword in
Java indicates that a variable may be read and written by multiple threads. It
guarantees that a write made by one thread is visible to subsequent reads by
other threads, preventing stale values, though it does not make compound
operations (such as count++) atomic. It is commonly used for flags or state
variables shared among threads.
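The classic use is a shutdown flag shared between threads; without volatile, the worker loop might never observe the other thread's write. A minimal sketch:

```java
// volatile guarantees that the worker thread sees the latest value of
// `running` written by another thread.
public class Worker implements Runnable {

    private volatile boolean running = true;
    public long iterations = 0;

    public void stop() { running = false; }   // typically called from another thread

    @Override
    public void run() {
        while (running) {      // each check is a fresh volatile read
            iterations++;
        }
    }
}
```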
System Design and Scalability:
Answer: To handle a large number of
concurrent users, I would focus on distributed architecture, load balancing,
and horizontal scaling. Consider using microservices, caching strategies, and
optimizing database queries. Implementing a content delivery network (CDN) and
utilizing cloud services can also enhance scalability.
Answer: Relational databases (e.g., MySQL,
PostgreSQL) are suitable for structured data with complex relationships. NoSQL
databases (e.g., MongoDB, Cassandra) excel in handling large amounts of
unstructured or semi-structured data. The choice depends on the nature of the
data, scalability requirements, and the need for ACID compliance.
Answer: A well-designed RESTful API
follows principles like statelessness, resource-based URI, uniform interface
(e.g., HTTP verbs for actions), and hypermedia as the engine of application
state (HATEOAS). It should be easy to understand, discoverable, and support
versioning for backward compatibility.
Best Practices and Code Quality:
Answer: I would follow secure coding
practices such as input validation, parameterized queries to prevent SQL
injection, and using secure communication (HTTPS). Regularly updating
dependencies, implementing proper authentication and authorization, and
performing security audits are essential.
Answer: SOLID is an acronym representing
five design principles (Single Responsibility, Open/Closed, Liskov
Substitution, Interface Segregation, Dependency Inversion). Applying these
principles in Java leads to modular, maintainable, and extensible code. For instance,
adhering to the Single Responsibility Principle involves designing classes that
have only one reason to change, promoting maintainability.
Answer: Exception handling is crucial for
identifying and handling errors gracefully. In a distributed system, I would
implement a consistent error-handling strategy using a combination of proper
logging, standardized error codes, and returning meaningful error messages to
clients. It's essential to communicate errors effectively across services and
maintain traceability.
Project and Team Collaboration:
Answer: I believe in fostering a
collaborative and communicative environment. Regular code reviews,
knowledge-sharing sessions, and promoting coding standards contribute to code
quality. Encouraging a culture of continuous improvement, embracing feedback,
and aligning the team with the project goals are key aspects of successful
collaboration.
Answer: In an Agile environment, a
solution architect collaborates closely with the team, providing architectural
guidance while adapting to changing requirements. I emphasize iterative design,
evolving architectures, and maintaining a balance between flexibility and
adherence to architectural guidelines. Regular communication and feedback loops
are essential for alignment.
Answer: In a previous project, we faced a
tight deadline to implement a new feature with significant impact. I conducted
a quick risk assessment, prioritized critical components, and involved key
stakeholders in decision-making. By focusing on essential functionality and
leveraging existing components, we delivered the feature on time.
Post-implementation, we iteratively refined the design based on feedback for
continuous improvement.
Problem
When services are developed by decomposing business capabilities/subdomains, the services responsible for user experience have to pull data from several microservices. In the monolithic world, there used to be only one call from the UI to a backend service to retrieve all data and refresh/submit the UI page. However, now it won't be the same. We need to understand how to do it.
Solution
With microservices, the UI has to be designed as a skeleton with multiple sections/regions of the screen/page. Each section makes a call to an individual backend microservice to pull its data; this is called composing UI components specific to each service. Frameworks like AngularJS and ReactJS make this easy. Such screens are known as Single Page Applications (SPAs), which enables the app to refresh a particular region of the screen instead of the whole page.
Problem
When a
microservice architecture has been implemented, there is a chance that a
service might be up but unable to handle transactions. In that case, how do
you ensure a request doesn't go to those failed instances? The load balancer
or service registry needs a way to detect unhealthy instances.
Solution
Each
service needs to have an endpoint that can be used to check the health of the
application, such as /health. This API should check the status of the
host, the connections to other services/infrastructure, and any
application-specific logic.
Spring
Boot Actuator implements a /health endpoint out of the box, and the
implementation can be customized as well.
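To show the idea without any framework, here is a hand-rolled /health endpoint using the JDK's bundled com.sun.net.httpserver. This is a sketch only (the static "UP" status is a placeholder; a real check would probe the database and downstream services, as the text describes):

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

// Minimal /health endpoint, mimicking what Spring Boot Actuator provides.
public class HealthEndpoint {

    public static HttpServer start(int port) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/health", exchange -> {
            // Placeholder check; real logic would test connections and host status.
            byte[] body = "{\"status\":\"UP\"}".getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });
        server.start();
        return server;
    }
}
```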
Problem
In
microservice architecture, requests often span multiple services. Each service
handles a request by performing one or more operations across multiple
services. Then, how do we trace a request end-to-end to troubleshoot the
problem?
Solution
We need a
service that assigns each external request a unique ID, propagates that ID to
all the services involved in handling the request, and records timing
information about the work performed, so the request can be traced end-to-end.
Spring Cloud Sleuth, along with a Zipkin server, is a common implementation.
Problem
When the
service portfolio increases due to microservice architecture, it becomes
critical to keep a watch on the transactions so that patterns can be monitored
and alerts sent when an issue happens. How should we collect metrics to
monitor application performance?
Solution
A metrics
service is required to gather statistics about individual operations. It should
aggregate the metrics of an application service and provide reporting and
alerting. There are two models for aggregating metrics: push, where the
service pushes its metrics to the metrics service, and pull, where the metrics
service pulls metrics from the service.
Problem
Consider a
use case where an application consists of multiple service instances that are
running on multiple machines. Requests often span multiple service instances.
Each service instance generates a log file in a standardized format. How can we
understand the application behavior through logs for a particular request?
Solution
We need a
centralized logging service that aggregates logs from each service instance.
Users can search and analyze the logs, and can configure alerts that are
triggered when certain messages appear in the logs. For example, PCF has
Loggregator, which collects logs from each component (router, controller,
Diego, etc.) of the PCF platform along with applications. AWS CloudWatch
does the same.
Problem
When
microservices come into the picture, we need to address a few issues when
calling services: service instances have dynamically assigned network
locations, and the set of instances changes because of autoscaling, failures,
and upgrades. So how does the consumer or router know all the available
service instances and their locations?
Solution
A service
registry needs to be created which will keep the metadata of each producer
service. A service instance should register to the registry when starting and
should de-register when shutting down. The consumer or router should query
the registry and find out the location of the service. The registry also
needs to do a health check of the producer service to ensure that only
working instances of the services are available to be consumed through it.
There are two types of service discovery: client-side and server-side. An
example of client-side discovery is Netflix Eureka and an example
of server-side discovery is AWS ALB.
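The register/deregister/lookup flow can be illustrated with a toy in-memory registry. This is not how Eureka is implemented (real registries work over HTTP and evict instances via health checks); it only sketches the data flow described above:

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CopyOnWriteArrayList;

// Toy service registry: maps a service name to its live instance locations.
public class ServiceRegistry {

    private final Map<String, List<String>> instances = new ConcurrentHashMap<>();

    // Called by a service instance on startup.
    public void register(String service, String location) {
        instances.computeIfAbsent(service, k -> new CopyOnWriteArrayList<>()).add(location);
    }

    // Called on graceful shutdown (or by a health checker on failure).
    public void deregister(String service, String location) {
        List<String> list = instances.get(service);
        if (list != null) {
            list.remove(location);
        }
    }

    // Consumers or routers query the current set of live locations.
    public List<String> lookup(String service) {
        return instances.getOrDefault(service, List.of());
    }
}
```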
Problem
A service
typically calls other services and databases as well. For each environment like
dev, QA, UAT, prod, the endpoint URL or some configuration properties might be
different. A change in any of those properties might require a re-build and
re-deploy of the service. How do we avoid code modification for configuration
changes?
Solution
Externalize
all the configuration, including endpoint URLs and credentials. The application
should load them either at startup or on the fly.
Spring
Cloud config server provides the option to externalize the properties to GitHub
and load them as environment properties. These can be accessed by the
application on startup or can be refreshed without a server restart.
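Even without Spring Cloud Config, the core idea of externalized configuration is just resolving values from outside the code at startup. A minimal sketch (the PAYMENT_SERVICE_URL variable name and default are illustrative, not from the original text):

```java
// Externalized configuration: the endpoint URL comes from the environment,
// so changing it for dev/QA/prod needs no rebuild or code change.
public class Config {

    public static String serviceUrl() {
        return get("PAYMENT_SERVICE_URL", "http://localhost:8080");
    }

    static String get(String name, String defaultValue) {
        String value = System.getenv(name);
        return (value == null || value.isBlank()) ? defaultValue : value;
    }
}
```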
Problem
A service
generally calls other services to retrieve data, and there is the chance that
the downstream service may be down. There are two problems with this: first,
the request will keep going to the down service, exhausting network resources
and slowing performance. Second, the user experience will be bad and
unpredictable. How do we avoid cascading service failures and handle failures
gracefully?
Solution
The
consumer should invoke a remote service via a proxy that behaves in a similar
fashion to an electrical circuit breaker. When the number of consecutive
failures crosses a threshold, the circuit breaker trips, and for the duration
of a timeout period, all attempts to invoke the remote service will fail
immediately. After the timeout expires the circuit breaker allows a limited
number of test requests to pass through. If those requests succeed, the circuit
breaker resumes normal operation. Otherwise, if there is a failure, the timeout
period begins again.
Netflix
Hystrix is a good implementation of the circuit breaker pattern. It also helps
you to define a fallback mechanism which can be used when the circuit breaker
trips. That provides a better user experience.
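The state machine just described can be sketched in plain Java. This is a simplified model, not Hystrix's API: it fails fast while open, lets a test request through after the timeout, and closes on success.

```java
import java.util.function.Supplier;

// Minimal circuit breaker: opens after `threshold` consecutive failures,
// fails fast (via fallback) during the timeout, then allows a test request.
public class CircuitBreaker {

    private final int threshold;
    private final long timeoutMillis;
    private int consecutiveFailures = 0;
    private long openedAt = 0;

    public CircuitBreaker(int threshold, long timeoutMillis) {
        this.threshold = threshold;
        this.timeoutMillis = timeoutMillis;
    }

    public <T> T call(Supplier<T> remote, Supplier<T> fallback) {
        boolean open = consecutiveFailures >= threshold
                && System.currentTimeMillis() - openedAt < timeoutMillis;
        if (open) {
            return fallback.get();          // fail fast while the circuit is open
        }
        try {
            T result = remote.get();        // normal (or half-open test) request
            consecutiveFailures = 0;        // success closes the circuit
            return result;
        } catch (RuntimeException e) {
            consecutiveFailures++;
            openedAt = System.currentTimeMillis();
            return fallback.get();          // failure: serve the fallback response
        }
    }
}
```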
Problem
When an
application is broken down into smaller microservices, a few concerns need to
be addressed, such as how clients reach the individual services and how
cross-cutting concerns (authentication, rate limiting, protocol translation)
are handled.
Solution
An API
Gateway helps to address many concerns raised by microservice implementation,
not limited to the ones above.
Problem
How do you
prevent a legacy monolith’s domain model from polluting the domain model of a
new service?
Solution
Define an
anti-corruption layer, which translates between the two domain models.
Problem
How do you
migrate a legacy monolithic application to a microservice architecture?
Solution
Modernize
an application by incrementally developing a new (strangler) application around
the legacy application. In this scenario, the strangler application has a microservice
architecture.
The strangler application consists of two types of services. First, there are services that implement functionality that previously resided in the monolith. Second, there are services that implement new features. The latter are particularly useful since they demonstrate to the business the value of using microservices. Eventually, the newly refactored application “strangles” or replaces the original application until finally you can shut off the monolithic application.
DRY (Don’t Repeat Yourself)
One of the most important OOP design principles is DRY. As the name suggests, DRY (don’t repeat yourself) means don’t write duplicate code; instead, use abstraction to put common things in one place. If you are using JDK 8 or a later version, you can implement methods in interfaces as well. If you have the same block of code in more than two places, consider making it a separate method; and if you use a hard-coded value more than once, make it a public final constant.
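As a small illustration of DRY (the OrderValidator class and its rule are hypothetical), the duplicated check and the magic number are pulled into one named constant and one shared method, so a rule change touches a single line:

```java
// DRY: one constant, one shared validation method, reused by every entry point.
public class OrderValidator {

    public static final int MAX_ITEMS = 100;   // named constant instead of a repeated literal

    private static void requireValidCount(int count) {
        if (count <= 0 || count > MAX_ITEMS) {
            throw new IllegalArgumentException("item count out of range: " + count);
        }
    }

    public static void placeOrder(int itemCount)  { requireValidCount(itemCount); /* ... */ }
    public static void updateOrder(int itemCount) { requireValidCount(itemCount); /* ... */ }
}
```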
Composition Over Inheritance (COI)
COI is an acronym for Composition Over Inheritance. As the name implies, this principle emphasizes using Composition instead of Inheritance to achieve code reusability. Inheritance allows a subclass to inherit its superclass’s properties and behavior, but this approach can lead to a rigid class hierarchy that is difficult to modify and maintain. In contrast, Composition enables greater flexibility and modularity in class design by constructing objects from other objects and combining their behaviors. Additionally, the fact that Java doesn’t support multiple inheritances can be another reason to favor Composition over Inheritance.
Composition
allows changing the behavior of a class at run-time by setting property during
run-time, and by using Interfaces to compose a class, we use polymorphism,
which provides flexibility to replace with better implementation at any time.
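A sketch of that idea, using the Authenticator example from later in this section (class names are illustrative): the service is composed from an interface, so the behavior can be swapped at run-time and mocked in tests.

```java
// Composition over inheritance: UserService holds an Authenticator rather
// than extending a concrete class, so implementations are interchangeable.
interface Authenticator {
    boolean authenticate(String user, String password);
}

class LdapAuthenticator implements Authenticator {
    @Override
    public boolean authenticate(String user, String password) {
        // a real LDAP lookup would go here
        return false;
    }
}

class UserService {
    private final Authenticator authenticator;  // composed, not inherited

    UserService(Authenticator authenticator) {
        this.authenticator = authenticator;
    }

    boolean login(String user, String password) {
        return authenticator.authenticate(user, password);
    }
}
```

In a unit test you can pass a lambda or mock Authenticator, which is exactly the testability advantage discussed below.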
Difference between Composition and Inheritance
Now let’s
understand the difference between Inheritance and Composition in a little bit
more detail.
Static vs Dynamic
The first
difference between Inheritance and Composition comes from a flexibility point
of view. When we use Inheritance, we have to define at compile time which class
we are extending; it cannot be changed at runtime. With Composition, we just
define a type (usually an interface) we want to use, which can hold different
implementations. In this sense, Composition is much more flexible than
Inheritance.
Limited code reuse with Inheritance
As
mentioned earlier, with Inheritance you can only extend one class, which means
your code can reuse just one class, not more. If you want to leverage
functionality from multiple classes, you must use Composition. For example,
if your code needs authentication functionality you can use an Authenticator,
for authorization an Authorizer, and so on. With Inheritance you are stuck
with a single class, because Java doesn’t support multiple inheritance. This
difference highlights a severe reusability limitation of Inheritance.
Unit Testing
This is, in my opinion, the most important difference between Inheritance and Composition in OOP, and probably the deciding factor in choosing between them. Classes designed with Composition are easier to test because you can supply a mock implementation of the classes they use. When you design a class using Inheritance, you need the real parent class in order to test the child class; there is no way to provide a mock implementation of the parent.
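To sketch this point, assume a hypothetical PriceCalculator that depends on a RateProvider interface; in a test, a hand-written mock replaces the real (possibly slow or remote) provider:

```java
// RateProvider and PriceCalculator are hypothetical names for illustration.
interface RateProvider {
    double taxRate();
}

class PriceCalculator {
    private final RateProvider rates; // composed dependency, easy to mock

    PriceCalculator(RateProvider rates) { this.rates = rates; }

    double gross(double net) { return net * (1 + rates.taxRate()); }
}

// A test can inject this mock instead of a real rate service.
class FixedRateMock implements RateProvider {
    public double taxRate() { return 0.10; }
}
```

Had PriceCalculator instead *extended* a concrete RateProvider class, the test would need that real parent on the classpath just to instantiate the subclass.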
Final Classes
This difference also highlights another limitation of Inheritance. Composition allows code reuse even from final classes, which is not possible with Inheritance because you cannot extend a final class in Java, and extending is exactly what Inheritance requires in order to reuse code.
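For instance, java.lang.String is final, so `class RichString extends String` would not compile, but composition still lets us reuse its behavior (RichString is a made-up wrapper name):

```java
// String is final, so it cannot be extended -- but it can be composed.
class RichString {
    private final String value;

    RichString(String value) { this.value = value; }

    // Reuses the final class's methods through delegation.
    RichString shout() { return new RichString(value.toUpperCase() + "!"); }

    String value() { return value; }
}
```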
Encapsulation
The last difference between Composition and Inheritance in Java on this list comes from an encapsulation and robustness point of view. Though both Inheritance and Composition allow code reuse, Inheritance breaks encapsulation because the subclass depends on its superclass’s behavior. If the parent class changes its behavior, the child class is affected as well; and if the classes are not properly documented and the child class has not used the superclass the way it should be used, any change in the superclass can break functionality in the subclass.
Composition provides a better way to reuse code while at the same time protecting the class you are reusing from its clients; Inheritance doesn’t offer that guarantee. However, sometimes Inheritance is necessary, mainly when you are creating classes within the same family.
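The classic illustration of Inheritance breaking encapsulation (popularized by Effective Java) is a subclass that silently depends on an internal detail of HashSet, namely that addAll() calls add() for each element:

```java
import java.util.Collection;
import java.util.HashSet;

// The subclass tries to count insertions, but its correctness hinges on
// whether the superclass's addAll() internally calls add().
class CountingSet<E> extends HashSet<E> {
    int addCount = 0;

    @Override public boolean add(E e) {
        addCount++;
        return super.add(e);
    }

    @Override public boolean addAll(Collection<? extends E> c) {
        addCount += c.size();   // counts the elements once here...
        return super.addAll(c); // ...and again inside add(), double counting
    }
}
```

After addAll() with three elements, addCount is 6 rather than 3. A composed wrapper holding a private Set would not depend on that hidden implementation detail at all.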
Programming for Interface, not for Implementation
This OOP design principle says: always program to the interface, not to the implementation. This leads to flexible code that can work with any new implementation of the interface. But hold on for a minute and go through the lines below!
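A quick sketch of the principle (the Report class and join method are made-up names): by accepting the List interface rather than a concrete ArrayList, the method works with every List implementation, present and future.

```java
import java.util.List;

class Report {
    // Programmed to the List interface, not to ArrayList.
    static String join(List<String> lines) {
        return String.join("\n", lines);
    }
}
```

Declaring the parameter as ArrayList instead would shut out LinkedList, List.of(...), and any other implementation a caller might prefer.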
An interface might be a language keyword, and an interface might also be a design principle. Don’t confuse the two! There are two rules to think of:
Minimize Coupling
Coupling between modules/components is their degree of mutual interdependence; lower coupling is better. In other words, coupling is the probability that code unit “B” will “break” after an unknown change to code unit “A”.
Coupling refers to the degree of direct knowledge that one element has of another. In other words, how often do changes in class A force related changes in class B.
What is Tight Coupling?
In general, tight coupling means two classes often change together. In other words, if A knows more than it should about the way B was implemented, then A and B are tightly coupled. For example, if you wanted to change your skin, you would also have to change the design of your body, because the two are joined together; they are tightly coupled. A classic example of tight coupling is RMI (Remote Method Invocation).
What is Loose Coupling?
In simple words, loose coupling means the classes are mostly independent. If the only knowledge class A has about class B is what B exposes through its interface, then A and B are said to be loosely coupled. To overcome the problems of tight coupling between objects, the Spring framework uses a dependency injection mechanism with the help of a POJO/POJI model. Needless to say, dependency injection makes it possible to achieve loose coupling.
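A plain-Java sketch of the constructor injection that Spring automates for you; the MessageService, EmailService, and NotificationClient names are assumptions for illustration:

```java
// The client will depend only on this interface.
interface MessageService {
    String send(String to, String body);
}

class EmailService implements MessageService {
    public String send(String to, String body) {
        return "email to " + to + ": " + body;
    }
}

// Loosely coupled: NotificationClient knows nothing about EmailService;
// the concrete implementation is injected through the constructor.
class NotificationClient {
    private final MessageService service;

    NotificationClient(MessageService service) { this.service = service; }

    String notifyUser(String user) { return service.send(user, "hello"); }
}
```

Swapping EmailService for, say, an SMS-based implementation requires no change to NotificationClient, only a different object supplied at wiring time.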
Maximize Cohesion
The cohesion of a single module/component is the degree to which its responsibilities form a meaningful unit; higher cohesion is better. We should group related functionality so that it shares a single responsibility (e.g. within one class).
In general, cohesion is most closely associated with making sure that a class is designed with a single, well-focused purpose. The more focused a class is, the higher its cohesion. The advantage of high cohesion is that such classes are much easier to maintain (and change less frequently) than classes with low cohesion. Another benefit is that classes with a well-focused purpose tend to be more reusable than other classes.
Suppose we have a class that multiplies two numbers, but the same class also creates a pop-up window displaying the result. This is an example of a low-cohesion class, because the window and the multiplication operation don’t have much in common. To make it highly cohesive, we would create a Display class and a Multiply class: Display calls Multiply’s method to get the result and displays it. That would be an example of a high-cohesion solution within the OOP design principles.
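The refactoring just described can be sketched like this (the result is rendered as a string here rather than an actual pop-up window):

```java
// Multiply does only arithmetic -- one well-focused purpose.
class Multiply {
    int multiply(int a, int b) { return a * b; }
}

// Display does only presentation, delegating the math to Multiply.
class Display {
    private final Multiply multiply = new Multiply();

    String show(int a, int b) {
        return a + " x " + b + " = " + multiply.multiply(a, b);
    }
}
```

Each class now changes for exactly one reason: Multiply when the arithmetic changes, Display when the presentation changes.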
KISS (Keep It Simple, Stupid)
The Keep It Simple, Stupid (KISS) principle states that most systems work best if they are kept simple rather than made complex. Therefore, we should treat simplicity as a key design goal and avoid unnecessary complication.
The KISS principle is a reminder to keep your code simple and readable for humans. If your method handles multiple use cases, split them into smaller methods. If it performs multiple functionalities, write multiple methods instead.
Furthermore, if a single method handles multiple functionalities, it becomes long and bulky, and a long method is very hard for programmers to maintain. Consequently, bugs become harder to find, and we may end up violating other design principles as well. If a method does two things, you can’t call it to do just one of them, so you’ll obviously end up writing another method.
Also, you should keep your code simple so that other developers can easily understand it. Learning the OOP design principles can help you achieve this. For example, if a simple for loop does the job efficiently, you should not use the Stream API unnecessarily.
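A hypothetical sketch of the splitting advice above: instead of one bulky method that trims, validates, and formats an identifier, each step becomes its own small, individually callable method (the OrderText class and its methods are invented for this example):

```java
class OrderText {
    // Each helper does exactly one thing and can be called on its own.
    static String normalize(String raw) {
        return raw.trim().toLowerCase();
    }

    static boolean isValid(String id) {
        return !id.isEmpty() && id.matches("[a-z0-9-]+");
    }

    // The top-level method is now just a short composition of the steps.
    static String format(String raw) {
        String id = normalize(raw);
        return isValid(id) ? "order:" + id : "invalid";
    }
}
```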
Delegation Principle
Don’t do all the work yourself; delegate it to the respective classes. A classic example of the delegation design principle is the equals() and hashCode() methods in Java. To compare two objects for equality, we ask the class itself to make the comparison instead of having the client class do the check.
The key benefits of this OOP design principle are no duplication of code and easy modification of behavior. Event delegation is another example of this principle, where an event is delegated to handlers for handling.
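The equals()/hashCode() example can be sketched with a simple value class (Money is an invented name): the class itself performs the equality check, and every client just calls equals() instead of re-implementing the comparison.

```java
import java.util.Objects;

class Money {
    private final String currency;
    private final long cents;

    Money(String currency, long cents) {
        this.currency = currency;
        this.cents = cents;
    }

    // The comparison logic lives here, once, inside the class itself.
    @Override public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof Money)) return false;
        Money other = (Money) o;
        return cents == other.cents && currency.equals(other.currency);
    }

    @Override public int hashCode() { return Objects.hash(currency, cents); }
}
```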
Encapsulate What Changes
Only one thing is constant in the software field, and that is “change.” So encapsulate the code you expect or suspect will change in the future. The benefit of this OOP design principle is that properly encapsulated code is easy to test and maintain.
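One common way to apply this principle is a simple factory that hides the part expected to change; the Parser, CsvParser, and ParserFactory names below are assumptions for illustration:

```java
// Callers depend only on this interface.
interface Parser {
    String parse(String input);
}

// Today's implementation; may be replaced tomorrow.
class CsvParser implements Parser {
    public String parse(String input) { return "csv:" + input; }
}

// The choice of implementation -- the thing that changes -- is
// encapsulated here, so callers stay untouched when it changes.
class ParserFactory {
    static Parser create() { return new CsvParser(); }
}
```

If the parsing strategy later switches to JSON, only ParserFactory.create() is edited; every call site keeps working unchanged.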
YAGNI (You Aren’t Gonna Need It)
YAGNI stands for “you aren’t gonna need it”: don’t implement something until it is necessary. Always implement things when you actually need them, never when you merely foresee needing them. Speculative implementation leads to code bloat; the software becomes larger and more complicated. The YAGNI principle suggests that developers should avoid adding functionality or code that is not currently needed. By focusing on the current requirements and keeping the code simple, developers can improve the overall quality of the software.
The YAGNI principle helps developers avoid wasting time on features that may never be used. Instead, developers should focus on delivering software that meets the current requirements and can be easily maintained and extended in the future if necessary.