
Wednesday, December 20, 2023

Important Key Concepts and Terminologies

System Design is the core concept behind the design of any distributed system. It is defined as the process of creating an architecture for the different components, interfaces, and modules of a system, and providing the corresponding data needed to implement those elements.

Let us go over the standard terms and key concepts of system design and performance, such as:

·       Latency

·       Throughput

·       Availability

·       Redundancy

·       Time

·       CAP Theorem

·       Lamport’s Logical Clock Theorem

Latency:

Latency is defined as the amount of time required for a single piece of data to be delivered successfully. It is measured in milliseconds (ms).

When a user interacts with a website, the input takes a certain amount of time to reach the web application, and the response takes a certain amount of time to travel back to the user. The delay between the user's input and the web application's response to that input is known as latency.

Reasons for high Latency

Now you must be wondering which factors are responsible for these delays. High latency mainly depends on 2 factors:

  1. Network Delays
  2. Computational Delays

In a monolithic architecture, there is only a single block and all calls are local within it, so the network delay is effectively zero and latency reduces to the computational delay alone.

Latency = Computational Delays

In distributed systems, signals pass to and fro over a network, so there will always be some network delay.

Latency = Computational Delays + Network Delays
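
To make the formula concrete, here is a minimal, illustrative Java sketch of measuring request latency (the URL https://example.com is a placeholder, and the measured time bundles network delay and server-side computation together):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Measures end-to-end latency of one request. The URL is a placeholder.
public class LatencyProbe {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com")).build();

        long start = System.nanoTime();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        long elapsedMs = (System.nanoTime() - start) / 1_000_000;

        // The elapsed time bundles network delay (round trip) and the
        // server's computational delay together, as in the formula above.
        System.out.println("status " + response.statusCode() + ", latency ~" + elapsedMs + " ms");
    }
}
```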

How to Reduce Latency:

  1. Use a content delivery network (CDN): CDNs help cut down on latency. CDN servers are situated at various geographic locations so that data travels a shorter distance to reach users.
  2. Upgrade computer hardware/software: improving or fine-tuning mechanical, software, or hardware components helps cut down on computational lag, which in turn helps reduce latency.
  3. Cache: a cache is a high-speed data storage layer that temporarily stores a subset of data. By caching this data, subsequent requests for it can be fulfilled more quickly than if the data were requested from its original storage location. This lessens latency as well.

 Throughput:

Throughput is defined as the measure of the amount of data transmitted successfully through a system in a certain amount of time. In simple terms, throughput is how much data is transmitted successfully over a period of time. It is measured in bits per second (bps).

 Availability:

Availability is the percentage of time the system is up and able to serve requests.
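
As a quick illustration of what these percentages mean in practice, here is a small Java sketch (the targets chosen are just examples) converting availability targets into a rough yearly downtime budget:

```java
// Converts availability targets into a rough yearly downtime budget.
public class AvailabilityBudget {
    public static void main(String[] args) {
        double[] targets = {0.99, 0.999, 0.9999};   // "two nines" to "four nines"
        double hoursPerYear = 365.25 * 24;
        for (double a : targets) {
            double downtimeHours = (1 - a) * hoursPerYear;
            System.out.printf("%.2f%% availability -> ~%.1f hours of downtime per year%n",
                    a * 100, downtimeHours);
        }
    }
}
```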

How to increase Availability?

  1. Eliminate single points of failure, or SPOF (the most important step).
  2. Verify automatic failover.
  3. Use geographic redundancy.
  4. Keep upgrading and improving.

 

From the above understanding, we can draw two conclusions:

  1. Availability is low in monolithic architecture due to SPOF.
  2. Availability is high in distributed architecture due to redundancy.

 Redundancy:

Redundancy is defined as a concept where certain entities are duplicated with the aim of scaling up the system and reducing overall downtime.

 Consistency:

Consistency refers to data uniformity across a system.

When a user requests data, the system always returns the same data, regardless of the user’s location, time, etc. Before a user can read data from any node, the server on which the data was updated should have successfully replicated the new data to all the other nodes.

Time:

Time is a measure of the sequence of events, measured here in seconds, its SI unit.

It is measured using a clock which is of two types:

  1. Physical Clock: responsible for keeping time between systems.
  2. Logical Clock: responsible for ordering events within a system.

 CAP Theorem:

CAP refers to three desirable characteristics of distributed systems with replicated data: Consistency, Availability, and Partition tolerance. The theorem states that during a network partition a system can guarantee only one of consistency and availability, so at most two of the three properties can hold at once.

 Lamport’s Logical Clock Theorem:

Lamport’s Logical Clock is a procedure to determine the order in which events occur in a distributed system. It acts as the foundation for the more advanced Vector Clock algorithm. A logical clock is required because a distributed system lacks a single global clock.
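
A minimal sketch of Lamport's two clock rules in Java, assuming one clock object per process (illustrative, not production code):

```java
// One logical clock per process; timestamps order events across processes.
public class LamportClock {
    private long time = 0;

    // Rule 1: before each local event (including sending a message), tick the clock.
    public synchronized long tick() {
        return ++time;
    }

    // Rule 2: on receiving a message stamped senderTime,
    // advance to max(local, received) + 1.
    public synchronized long onReceive(long senderTime) {
        time = Math.max(time, senderTime) + 1;
        return time;
    }
}
```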

 

Analysis of Monolithic and Distributed Systems

System analysis is the process of gathering the requirements of a system prior to designing it. Studying the design up front lets us decompose the components so that they work efficiently and interact well, which is crucial for our systems.

System design is a systematic process involving planning, analysis, design, deployment, and testing phases. The first question would be: why analyze the system at all when we are well versed in designing systems?

Generally, the systems can be categorized into two broad categories:

  1. Monolithic Systems
  2. Distributed Systems built with the help of Microservices

 Monolithic Systems

If all the functionalities of a project exist in a single codebase, then that application is known as a monolithic application.

As the name suggests, monolithic means formed of a single large block. Hence, by monolithic systems we refer to systems where the whole web application is deployed as a single unit.

Architecture of Monolithic systems

The Monolithic system architecture can be visualized by considering three sections or three layers:

  1. Client Tier or User Layer or Presentation Layer: the layer closest to the user; it can be a webpage or a web application where the user gets things done. It takes input from the user, interacts with the server, and displays the result to the user. Hence we call it the front-end layer.
  2. Middle Tier or Service Layer: comprises all the logic behind the application and lives in the application server. The application server includes the business logic; it receives requests from the client, acts on them, and stores the corresponding data.
  3. Data Tier or Persistence Layer: includes the data persistence mechanism (database) and communication with other applications, such as databases and message queues. The database server is used by the application server for the persistence of data.

 Benefits of Monolithic Architecture

  1. Easy development: it follows the traditional model and does not require deep architectural expertise, since the design stays relatively simple.
  2. Easy deployment: as seen above, all components live in a single block/repository, so we only need to take care of that single repository when deploying.
  3. Easy testing: because everything is enclosed in one unit, end-to-end testing with tools is fairly easy.

 Disadvantages of Monolithic Architecture

  1. The entire codebase lives in a single repository, which makes it difficult to understand.
  2. Any change to the codebase requires complete redeployment of the software application.
  3. Components are less reusable.
  4. Less scalable, because different components have different scalability requirements yet must be scaled together.

 Microservices

Microservices is an architectural development style in which the application is made up of smaller services that handle a small portion of the functionality and data by communicating with each other directly using lightweight protocols like HTTP. According to Sam Newman, “Microservices are the small services that work together.” 

Microservices Architecture

The Microservice architecture has a significant impact on the relationship between the application and the database. 

  • Instead of sharing a single database with other microservices, each microservice has its own database. 
  • It often results in duplication of some data, but having a database per microservice is essential if you want to benefit from this architecture, as it ensures loose coupling. 
  • Another advantage of having a separate database per microservice is that each microservice can use the type of database best suited for its needs. 
  • Each service offers a secure module boundary so that different services can be written in different programming languages. 
  • There are many patterns involved in microservice architecture like service discovery & registry, caching, API gateway & communication, observability, security, etc.

Advantages

  • Scalability: Microservices architecture allows individual services to scale independently of each other, which helps in optimizing resource utilization and reducing infrastructure costs.
  • Flexibility: Since each service is a separate component, it can be developed, deployed, and updated independently of the others. This provides greater flexibility to the development team, allowing them to work on different parts of the application simultaneously.
  • Resilience: In case of failure of one service, other services can continue to operate without interruption, making the system more resilient.
  • Technology Heterogeneity: Microservices architecture allows for the use of different technologies and programming languages for different services, as long as they can communicate with each other.
  • Easy maintenance: As each service is independent, it is easier to maintain and update the system without affecting other services.

Disadvantages

  • Complexity: Microservices architecture involves the creation of a large number of small services, which can result in increased complexity in terms of development, deployment, and maintenance.
  • Increased overhead: Since each service is a separate component, there is an increased overhead in terms of network communication and data consistency.
  • Testing complexity: With multiple independent services, testing can become more complex, and there is a need to ensure that all services work together seamlessly.
  • Distributed systems: Microservices architecture creates a distributed system, which can lead to additional challenges in terms of monitoring and management.
  • Skillset: Developing microservices requires a different skill set than traditional monolithic application development, which can be a challenge for some development teams. 

Distributed Systems vs Microservices

If you are adopting a Microservices architecture, or migrating from a Monolithic to a Microservices architecture, you cannot do all the work on a single machine (that would defeat the modular nature of Microservices). This is where Distributed Systems come in.

Distributed Systems not only provide modularity to the architecture but also make it easy to adopt a Microservices architecture and utilize all its benefits.

Distributed Systems

A distributed system is a collection of multiple individual systems connected through a network, sharing resources so as to achieve common goals.

It is more common in the real world because it has huge advantages over monolithic architecture: it is highly scalable, and having multiple systems share the same responsibilities solves the single point of failure (SPOF) problem.

Advantages of Distributed Systems

  1. Scalable: since it is a collection of independent machines, horizontal scaling can be used to achieve scalability.
  2. Reliable: a distributed system avoids SPOF, unlike a monolith; even if an individual unit fails, the rest remain operational, so the system keeps working most of the time and is therefore reliable.
  3. Low latency: with multiple servers, often spread geographically to be closer to users, a user query can be resolved in much less time.

Disadvantages of Distributed Systems

  1. Complexity: high scalability means many network points and much hardware, which makes the system quite complex and challenging.
  2. Consistency: a higher number of devices makes it difficult to keep data integrated, as synchronizing application state becomes complex.
  3. Network failure: communication and coordination between systems in a distributed system happen via network calls. During a network failure, conflicting information may be passed, or communication may fail altogether, degrading overall system performance.

Note: Management is also a disadvantage here, because load balancing (the process of distributing load across nodes), logging, caching, and monitoring are all required to manage the system and prevent failures.

 

Tuesday, December 19, 2023

What is Scalability and How to achieve it?

In system design, Scalability is the capacity of a system to adapt its performance and cost to changes in application and system processing demands.

The architecture used to build services, networks, and processes is scalable under these 2 conditions: 

  1. Resources can be added easily when demand/workload increases.
  2. Resources can be removed easily when demand/workload decreases.

 How to achieve Scalability?

Now scalability is achieved via two methods in systems: 

  1. Vertical scaling
  2. Horizontal scaling

 What is Vertical Scaling?

Vertical scaling expands the capacity of a system by adding more hardware or better configuration for computing or storage. In practice, this means upgrading the processors, adding RAM, or making other power-increasing changes. Scaling here relies on a single machine's resources, distributing the load among its CPU cores and RAM.

Pros Of Scaling Up Vertically

  1. It uses less energy than maintaining multiple servers.
  2. Requires less administrative work because only one machine must be managed.
  3. Has lower cooling costs.
  4. Lower software costs.
  5. Simpler to implement.
  6. Preserves application compatibility.

Cons Of Scaling Up Vertically

  1. There is a high chance of hardware failure, which could result in more serious issues.
  2. There is little room for upgrades, and the single machine may become a single point of failure (SPOF).
  3. There is a limit to how much RAM and storage can be added to a single machine at once.

 What is Horizontal Scaling?

Horizontal scaling grows a system by adding new machines. Several devices are gathered and connected in order to handle more system requests.

Pros Of Scaling Out Horizontally

  1. It is less expensive than scaling up, since it makes use of smaller systems.
  2. It is simple to upgrade.
  3. The existence of multiple discrete systems improves resilience.
  4. Fault tolerance is simple to manage.
  5. Capacity can be increased nearly linearly by adding machines.

Cons Of Scaling Out Horizontally

  1. The license costs are higher.
  2. It has a larger footprint inside the data center which increases the cost of utilities like cooling and energy.
  3. It necessitates more networking hardware. 

Vertical Scaling vs. Horizontal Scaling

Now that we have looked into the details of each type of scaling, let us compare them with respect to different parameters:

Parameter | Horizontal Scaling | Vertical Scaling
--------- | ------------------ | ----------------
Database | Data is partitioned across machines. | Data resides on a single machine, and scaling is done across multiple cores, dividing the load between CPU and RAM.
Downtime | Adding machines to the pool results in less downtime. | Relying on a single machine increases downtime.
Data Sharing | With a distributed network structure, sharing data via message passing becomes quite complex. | Working on a single machine makes message passing and data sharing much easier.
Example(s) | Cassandra, MongoDB | SQL databases such as MySQL

How to avoid failure during Scalability?

As studied above, while designing the architecture of a system we cannot design at either extreme: we should neither over-provision (use more resources than needed) nor under-provision (use fewer resources than needed) relative to the requirements gathered and analyzed.

There is a catch here: even if we could design a seemingly perfect system, failures would still arise (as discussed above). Failures do occur even in the best-designed systems, but we can prevent them from hampering the system globally. This is achieved by keeping the system redundant and the data replicated, so that it is retained.

Let us now understand these terms in greater depth which are as follows:

  1. Redundancy
  2. Replication

 What is Redundancy?

Redundancy is nothing more than the duplication of nodes or components so that, in the event of a node or component failure, a backup node can continue to provide services to consumers. Redundancy helps sustain availability, failure recovery, and failure management. The goal of redundancy is to create fast, effective, and accessible backup channels.

It is of two types:

  1. Active redundancy 
  2. Standby or Passive redundancy

 What is Replication?

Replication is the management of multiple data stores in which each piece of data is kept in numerous copies hosted on different servers. It is simply the copying of data between many devices, which involves keeping those machines synchronized. Replication contributes to increased fault tolerance and reliability by ensuring consistency among redundant resources.

Also, it is of two types:

  1. Active replication
  2. Passive replication 

Scalability Design Principles

Whenever a system is designed, the following principles should be kept in mind to tackle scalability issues:

  1. Scalability vs Performance: while building a scalable system, performance should remain proportional to scale. When the system is scaled up, performance should improve; when performance requirements are low, the system should scale down.
  2. Asynchronous Communication: communication between the various components of the system should be asynchronous wherever possible, to avoid cascading failures.
  3. Concurrency: the same concept as in programming; if our controller needs to send multiple queries, they are launched concurrently, which drastically reduces response time (see the sketch after this list).
  4. Databases: if queries are fired one after another, overall latency should not grow and the database should not become a bottleneck.
  5. Eventual Consistency: a consistency model used in distributed computing to achieve high availability; it informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value.
  6. Denormalization: in third normal form, the joins needed to reassemble data make computation more expensive than disk space, which costs not only electricity but also higher latency; denormalizing trades storage for speed.
  7. Caching: an important pillar; design for cache hits/misses and eviction policies such as LRU.
  8. Failures: not everything in a system can be kept under control, and failures occur especially when the system is pushed to its threshold. The practice is to isolate issues and prevent them from spreading globally.
  9. Monitoring: some bugs are hard to reproduce, which is the worst situation because we lack evidence about the underlying cause; with monitoring we are constantly, if indirectly, retrospecting incidents.
  10. Capacity balancing: suppose the load increases tremendously and we receive 1,000 requests that were earlier managed by 20 workers with an average request time of 100 ms. With a circuit-breaker timeout of 500 ms, each worker can complete only 5 requests in time, so 20 × 5 = 100 requests succeed and the remaining 900 fail. That is why circuit-breaker settings are adjusted to balance capacity in the real world.
  11. Servers: small-capacity servers are good for smooth capacity curves, whereas big servers suit heavy computations, at the cost of more involved monitoring, latency management, and load balancing.
  12. Deployment: older code should always be kept available for any massive, irreversible change that could result in downtime; if that is not possible, break the change into smaller parts. These practices must be followed when deploying while the system architecture is scaling.
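
Here is the concurrency sketch referenced in item 3: two independent, simulated queries launched in parallel with Java's CompletableFuture, so the total time is roughly that of the slower query rather than their sum (the query names and delays are made up for illustration):

```java
import java.util.concurrent.CompletableFuture;

// Two simulated queries run in parallel; total time is roughly the slower one.
public class ConcurrentQueries {
    static String slowQuery(String name, long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return name + "-result";
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        CompletableFuture<String> q1 = CompletableFuture.supplyAsync(() -> slowQuery("users", 100));
        CompletableFuture<String> q2 = CompletableFuture.supplyAsync(() -> slowQuery("orders", 120));
        String combined = q1.join() + " + " + q2.join();   // wait for both
        System.out.println(combined + " in ~" + (System.currentTimeMillis() - start) + " ms");
    }
}
```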

How to handle SPOF during Scalability?

To make systems efficiently scalable, building on the idea of replication (multiple copies stored across servers, which handles SPOF well), we need to learn the two concepts listed below. They help us build efficient, globally scalable systems, even across huge distributed architectures.

  1. Load Balancing
  2. Caching

Let us now cover load balancing in depth, followed by caching, to understand scalability more completely:

What is Load Balancing?

Load balancing is a technique for effectively distributing application or network traffic among all nodes in a distributed system. Load balancers are the tools used to perform load balancing.

Load Balancer roles

  1. Each node should receive an equal share of the workload.
  2. Keep track of which nodes are unavailable or not in use.
  3. Effectively manage and distribute work to ensure it is finished on time.
  4. Distribute work so as to maximize speed and use all available capacity.
  5. Guarantee high scalability, high throughput, and high availability.

 

Let us also make clear the ideal conditions under which to use a load balancer. They are as follows:

  • Load balancers can be used for load management when the application has several instances or servers.
  • Application traffic is split amongst several servers or nodes.
  • Load balancers are crucial for maintaining scalability, availability, and latency in a heavy-traffic environment. 

Benefits of Load Balancing

  1. Optimization: In a heavy traffic environment, load balancers help to better utilize resources and reduce response times, which optimizes the system.
  2. Improved User Experience: Load balancers assist in lowering latency and raising availability, resulting in a smooth and error-free user experience.
  3. Prevents Downtime: By keeping track of servers that aren’t working and allocating traffic properly, load balancers provide security and avoid downtime, which also boosts revenue and productivity.
  4. Flexibility: To ensure efficiency, load balancers can reroute traffic in the event of a failure and work on server maintenance.
  5. Scalability: Load balancers can use real or virtual servers to deliver responses without any interruption when a web application’s traffic suddenly surges. 

Challenges to Load Balancing

As we have already discussed, SPOF is a constraint when developing systems, and the same applies here. A load balancer failure or breakdown may suspend the entire system and make it unavailable for a while, which negatively affects user experience; client and server communication would be disrupted in the event of a load balancer malfunction. We can employ redundancy to resolve this problem: the system can contain both an active and a passive load balancer, and the passive load balancer takes over if the active one fails.

For better understanding, let us dive into the load balancing algorithms, which are as follows:

Load Balancing Algorithms

For the effective distribution of load over various nodes or servers, various algorithms can be used. The algorithm should be chosen depending on the kind of application the load balancer will serve.

A few load-balancing algorithms are listed below:

  • Round Robin Algorithm
  • Weighted Round Robin Algorithm
  • IP Hash Algorithm
  • Least Connection Algorithm
  • Least Response Time Algorithm
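
As an illustration of the first algorithm, a minimal round-robin sketch in Java (server names are placeholders): each incoming request is handed to the next server in turn.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hands each incoming request to the next server in turn.
public class RoundRobinBalancer {
    private final List<String> servers;                 // backend node names (placeholders)
    private final AtomicInteger counter = new AtomicInteger();

    public RoundRobinBalancer(List<String> servers) {
        this.servers = List.copyOf(servers);
    }

    public String nextServer() {
        // floorMod keeps the index non-negative even if the counter wraps around
        int index = Math.floorMod(counter.getAndIncrement(), servers.size());
        return servers.get(index);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(List.of("node-1", "node-2", "node-3"));
        for (int i = 1; i <= 6; i++) {
            System.out.println("request " + i + " -> " + lb.nextServer());
        }
    }
}
```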

What is Caching?

A cache is a high-speed data storage layer that temporarily stores a subset of data, allowing subsequent requests for that data to be fulfilled more quickly than if the data were accessed directly from its original storage location.

Caching is a process by which we reuse data that has already been fetched or computed. By keeping a local copy of such data, caching reduces the number of read calls, API calls, and network I/O calls.

Types Of Cache:

There are basically three types of caches, as follows:

  1. Local cache: used within a single system, when the cache must be kept in local memory. It is also known as an L1 cache.
    • Example: Memcache and Google Guava Cache
  2. External cache: shared across multiple systems, also known as a distributed cache or L2 cache.
    • It is employed when the cache must be shared by several systems, so we store the cache in a distributed manner that all servers may access.
    • Example: Redis
  3. Specialized cache: a special type of cache developed to improve the performance of the local and external caches above. It is also known as an L3 cache.

How does Caching work?

The information stored in a cache is typically kept in hardware that provides fast access, such as RAM (random-access memory), but a cache can also be implemented in software. The main objective is to increase data-retrieval performance by avoiding trips to the slower storage layer beneath.

Note: Applications of caching are:

  1. CDN (Content Delivery Network)
  2. Application Server Cache

Benefits of Caching

  1. Improves application performance
  2. Lowers database expenses
  3. Lessens the load on the backend
  4. Gives dependable results
  5. Eliminates database hotspots
  6. Boosts read throughput (IOPS)

Disadvantages of Caching

  1. Cache memory is costly and has a finite amount of space.
  2. The page becomes hefty as information is stored in the cache.
  3. Sometimes updated information is not displayed as the cache is not updated. 

Application of Caching

  1. Caching could help reduce latency and increase IOPS for many read-intensive application workloads, including gaming, media sharing, social networking, and Q&A portals.
  2. Examples of cached data include database searches, computationally challenging calculations, API calls, and responses, and web artifacts like HTML, JavaScript, and image files.
  3. Compute-intensive applications that change data sets, such as recommendation engines and simulations for high-performance computing, benefit from an in-memory data layer acting as a cache.
  4. In these applications, massive data sets must be retrieved in real-time across clusters of servers that can include hundreds of nodes. Due to the speed of the underlying hardware, many programs are severely constrained in their ability to manipulate this data in a disk-based store.

Remember: When and where to use caching?

Case 1: Static data: if the data does not change too often, caching is beneficial; we can save the data and serve it right away. Caching would not do much good if the data changed quickly.

Case 2: Application type: applications can be read-intensive or write-intensive. A read-heavy application benefits more from caching; in a write-intensive application the data changes quickly, so caching should generally not be used.

Lastly, let us discuss caching strategies to wrap up the concept of caching:

Caching Strategies 

Caching patterns are what designers use to integrate a cache into a system. Cache-aside is a common read strategy, and the main write strategies are:

  1. write-around
  2. write-through
  3. write-back
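
To make the difference between the write policies concrete, here is a hedged Java sketch (the in-memory maps stand in for a real cache and database): write-through updates both layers immediately, while write-back defers the database write until a flush.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Plain maps stand in for a real cache and database.
public class WritePolicies {
    private final Map<String, String> cache = new HashMap<>();
    private final Map<String, String> database = new HashMap<>();
    private final Set<String> dirty = new HashSet<>();

    // Write-through: every write goes to both layers, keeping them consistent.
    public void writeThrough(String key, String value) {
        cache.put(key, value);
        database.put(key, value);
    }

    // Write-back: write only the cache now and remember the key as dirty.
    public void writeBack(String key, String value) {
        cache.put(key, value);
        dirty.add(key);
    }

    // Flush dirty entries to the database later (e.g. on eviction or a timer).
    public void flush() {
        for (String key : dirty) {
            database.put(key, cache.get(key));
        }
        dirty.clear();
    }

    public static void main(String[] args) {
        WritePolicies store = new WritePolicies();
        store.writeThrough("a", "1");   // database sees "a" immediately
        store.writeBack("b", "2");      // database does not see "b" yet
        store.flush();                  // now it does
        System.out.println(store.database);
    }
}
```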

Cache Eviction Strategies

The eviction policy of the cache determines the order in which items are removed from a full cache. It’s done to clear some room so that more entries can be added. These policies are listed below as follows:

  • LRU (Least Recently Used)
  • LFU (Least Frequently Used)
  • FIFO (First In First Out)
  • LIFO (Last In First Out)
  • MRU (Most Recently Used)
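
A minimal LRU sketch in Java, using LinkedHashMap's access-order mode (one common idiom, shown for illustration): the least recently used entry is evicted once capacity is exceeded.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Evicts the least recently used entry once capacity is exceeded.
public class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LruCache(int capacity) {
        super(capacity, 0.75f, true);   // true = order entries by access, not insertion
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;       // drop the eldest (least recently used) entry
    }

    public static void main(String[] args) {
        LruCache<String, String> cache = new LruCache<>(2);
        cache.put("a", "1");
        cache.put("b", "2");
        cache.get("a");                  // touch "a" so "b" becomes least recently used
        cache.put("c", "3");             // evicts "b"
        System.out.println(cache.keySet());  // prints [a, c]
    }
}
```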

 



Monday, December 18, 2023

What is Low Level Design or LLD?

LLD stands for low-level design. It is a component-level design process that follows a step-by-step refinement approach. The input to LLD is the HLD (high-level design).

LLD describes class diagrams with the help of methods and relations between classes, along with program specifications. It describes the modules in enough detail that a programmer can code the program directly from the document. It gives us the structure and behavior of each class, as different entities have different characteristics. From this design, it is easy for a developer to write the logic and, subsequently, the actual code.

Low-level design is also known as object-level design, micro-level design, or detailed design.

 Roadmap to Low-level Designing

In order to bridge the concepts of LLD with real code, let us walk through the steps of designing any low-level system:

1. Object-oriented Principles

The user requirement is processed using concepts of OOPS programming, so it is recommended to have a strong grip on OOPS concepts before moving on to designing any low-level system.

The 4 pillars of object-oriented programming are must-haves before starting to learn low-level design, and the programmer should be very well versed in these 4 pillars, namely:

Inheritance, encapsulation, polymorphism, and abstraction. Within polymorphism, we should clearly distinguish compile-time from run-time polymorphism. Programmers should understand the OOPS concepts in depth, right down to classes and objects, because OOPS is the foundation on which the low-level design of any system is built.

Acing low-level design is ‘extremely subjective’, because we have to use these concepts optimally while coding, building a low-level system by implementing software entities (classes, functions, modules, etc.).
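
To ground the four pillars, here is a small illustrative Java sketch (the Shape/Circle/Square names are invented for this example): an abstract type for abstraction, private fields for encapsulation, subclassing for inheritance, and a run-time-dispatched method for polymorphism.

```java
// Abstraction: an abstract type; encapsulation: private state;
// inheritance: subclasses; polymorphism: area() dispatched at run time.
abstract class Shape {
    private final String name;                 // hidden state (encapsulation)
    protected Shape(String name) { this.name = name; }
    public String name() { return name; }
    public abstract double area();             // overridden by subclasses (abstraction)
}

class Circle extends Shape {                   // inheritance
    private final double r;
    Circle(double r) { super("circle"); this.r = r; }
    @Override public double area() { return Math.PI * r * r; }
}

class Square extends Shape {
    private final double side;
    Square(double side) { super("square"); this.side = side; }
    @Override public double area() { return side * side; }
}

public class Shapes {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1), new Square(2) };
        for (Shape s : shapes) {               // run-time polymorphism picks the right area()
            System.out.println(s.name() + " area = " + s.area());
        }
    }
}
```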

2. Process of Analysis and Design

This is the analysis phase, our first step, where we mold real-world problems into object-world problems using OOPS concepts and SOLID principles.

3. Design Patterns

The implementation of the object-world problem above is carried out with the help of design patterns. These are sets of practices, indeed solutions, that have already been proposed and checked against test cases; we just need to map them to our circumstances and implement them as suited.

With the use of objects, classes, SOLID principles, and UML diagrams, we can model complex problems into code while designing low-level systems. At the same time, certain scenarios recur again and again in low-level design, and for these specific sets of problems, solutions have already been worked out. Hence the classic definition:

Each pattern describes a problem that occurs over and over again in our environment, together with the core of a solution to that problem, in such a way that the solution can be used a million times over without ever being done the same way twice.

Why there is a need for design patterns?

These problems have occurred over and over again, and corresponding solutions have been laid out. They have been faced and solved by expert designers in the programming world, and the solutions have proven robust over time, saving a lot of time and energy. Hence the complex, classic problems of the software world are solved by tried and tested solutions; we do not need to invent new ones. These reusable solutions are the so-called design patterns.

Tip: It is strongly recommended to have a good understanding of common design patterns to get a hold on low-level design.

 Different Types of Design Patterns

There are many widely varying types of design patterns.

Let us discuss 4 types of design patterns that are extensively used globally.

  • Factory Design Pattern
  • Abstract Factory Pattern
  • Singleton Pattern
  • Observer Pattern
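
As a taste of these patterns, here is a minimal thread-safe Singleton sketch in Java using the static-holder idiom (one of several common variants; the AppConfig name is illustrative):

```java
// Lazily initialized, thread-safe Singleton via the static-holder idiom.
public final class AppConfig {
    private AppConfig() { }                          // block outside instantiation

    private static final class Holder {
        private static final AppConfig INSTANCE = new AppConfig();
    }

    public static AppConfig getInstance() {          // class loading guarantees thread safety
        return Holder.INSTANCE;
    }
}
```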

 4. UML Diagram

There are 2 types of UML diagrams:

  1. Structural UML diagram: these diagrams define how the different entities and objects are structured, and the relationships between them. They are helpful in representing how components appear with respect to structure.
  2. Behavioral UML diagram: these diagrams define the different operations the system supports; different behavioral UML diagrams showcase different runtime behaviors of the system.

Tip: Important UML diagrams used by developers frequently are as follows:

  • Class diagram from Structural UML Diagram
  • Sequence, Use case and Activity from Behavioral UML Diagram

5. SOLID Principles

These are a set of 5 principles (rules) that are strictly followed, as per the requirements of the system, for optimal design.

In order to write scalable, flexible, maintainable, and reusable code:

  1. Single Responsibility Principle (SRP)
  2. Open-Closed Principle (OCP)
  3. Liskov Substitution Principle (LSP)
  4. Interface Segregation Principle (ISP)
  5. Dependency Inversion Principle (DIP)

While coding and implementing our design classes, we need to keep checking these principles against our needs: if our classes must be extendable, we apply the open-closed principle; if a class should have exactly one job, the single responsibility principle applies. A sketch of the open-closed principle follows.
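
A hedged Java sketch of the open-closed principle (the discount classes are invented for illustration): new discount rules are added as new classes, while the checkout code stays unchanged.

```java
import java.util.List;

// New discount rules are added as new classes; the checkout code never changes.
interface DiscountRule {
    double apply(double price);
}

class FestiveDiscount implements DiscountRule {
    @Override public double apply(double price) { return price * 0.90; }  // 10% off
}

class CouponDiscount implements DiscountRule {
    @Override public double apply(double price) { return price - 5.0; }   // flat 5 off
}

public class Checkout {
    // Closed for modification: adding a rule never touches this method.
    public static double total(double price, List<DiscountRule> rules) {
        for (DiscountRule rule : rules) {
            price = rule.apply(price);
        }
        return Math.max(price, 0);
    }

    public static void main(String[] args) {
        System.out.println(total(100, List.of(new FestiveDiscount(), new CouponDiscount())));  // 85.0
    }
}
```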