Latency in System Design
Latency is the time it takes for a unit of data (such as a packet) to travel from its source to its destination and be delivered successfully. It is typically measured in milliseconds (ms).
We have all encountered websites or web applications that take a long time to respond, or videos that buffer despite good network connectivity. Such systems are said to have comparatively high latency.
When a user sends input to a website, a certain amount of time passes before the web application's response reaches the user. This delay between the user's input and the application's response to it is the latency.
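The request-to-response delay described above can be measured directly. The following is a minimal sketch: `measure_latency_ms` is a hypothetical helper (not from the original text) that times one request/response cycle, with `time.sleep` standing in for a real service call.

```python
import time

def measure_latency_ms(operation) -> float:
    """Time a single request/response cycle and return the latency in milliseconds."""
    start = time.perf_counter()
    operation()                      # e.g. an HTTP request or a database query
    return (time.perf_counter() - start) * 1000.0

# Simulate a service that takes about 50 ms to respond.
latency = measure_latency_ms(lambda: time.sleep(0.05))
print(f"{latency:.1f} ms")
```

In practice the `operation` would be a real network call, so the measured value would include both network and processing delays.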
Reasons for High Latency
You may be wondering which factors are responsible for these delays. High latency mainly depends on two factors:
- Network Delays
- Computational (processing) Delays
In a monolithic architecture, the application is a single block and all calls are local within it, so the network delay is effectively zero. Latency therefore consists of computational delay alone:
Latency = Computational Delay
In a distributed system, signals travel back and forth over a network, so there will always be some network delay:
Latency = Computational Delay + Network Delay
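The two formulas above can be expressed as one small function. This is an illustrative sketch (the function name and the millisecond figures are assumptions, not from the text): a monolithic call passes only a computational delay, while a distributed call adds a network delay.

```python
def total_latency_ms(computation_ms: float, network_ms: float = 0.0) -> float:
    """Latency model from the text: computational delay plus network delay.
    In a monolithic system network_ms is ~0; in a distributed system it is > 0."""
    return computation_ms + network_ms

# Monolithic: all calls are local, so latency is the computation alone.
print(total_latency_ms(20.0))        # → 20.0
# Distributed: the same work plus a 35 ms network round trip.
print(total_latency_ms(20.0, 35.0))  # → 55.0
```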
Finally, let us discuss the components affecting latency:
Components Affecting Latency:
- Packet Size: The smaller the packet, the faster the transmission and the lower the latency.
- Packet Loss: Lost packets must be retransmitted, so a lossy medium increases latency.
- Medium of Transmission: The transmission medium matters; optical fiber is the fastest common medium.
- Distance between Nodes: The farther apart the nodes, the longer signals take to travel; good connectivity over short distances greatly reduces latency.
- Signal Strength: Good signal strength reduces latency, while weak signals cause errors and retransmissions.
- Storage Delays: Storing information in a database and fetching it back takes time, which adds to latency.
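Two of the components above (packet size and distance) can be quantified with the standard transmission-delay and propagation-delay formulas. The sketch below is an illustration, not from the original text; the ~200,000 km/s figure is the approximate signal speed in optical fiber, and the packet and link values are assumed examples.

```python
def transmission_delay_ms(packet_bytes: int, bandwidth_bps: float) -> float:
    """Time to push all bits of a packet onto the link (depends on packet size)."""
    return packet_bytes * 8 / bandwidth_bps * 1000.0

def propagation_delay_ms(distance_km: float, speed_km_s: float = 200_000.0) -> float:
    """Time for the signal to travel the link (depends on distance and medium)."""
    return distance_km / speed_km_s * 1000.0

# A 1500-byte packet over a 100 Mbit/s link spanning 1000 km of fiber:
t = transmission_delay_ms(1500, 100e6)   # ≈ 0.12 ms
p = propagation_delay_ms(1000)           # ≈ 5 ms
print(t + p)
```

The numbers show why distance dominates for small packets: the 1000 km of fiber contributes far more delay than serializing the packet itself.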
How to Reduce Latency:
- Use a content delivery network (CDN): CDNs help cut down on latency. CDN servers are placed at many geographic locations, shortening the distance between users and content and reducing the time data must travel.
- Upgrade computer hardware/software: Improving or fine-tuning hardware, software, or mechanical components can reduce computational delay, which in turn reduces latency.
- Cache: A cache is a high-speed data storage layer that temporarily stores frequently accessed data. By caching this data, subsequent requests for it can be served more quickly than if they went to the original storage location, which also reduces latency.
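The caching idea above can be shown in a few lines. This is a minimal sketch, not a production cache: `_db` and the 50 ms `DB_DELAY_S` are assumed stand-ins for a slow storage layer, and the cache is a plain in-memory dictionary.

```python
import time

DB_DELAY_S = 0.05                   # pretend each database read takes 50 ms
_db = {"user:1": "Alice"}           # the slow "original storage location"
_cache = {}                         # the high-speed cache layer

def get(key: str) -> str:
    """Serve from the in-memory cache when possible; fall back to the slow store."""
    if key in _cache:
        return _cache[key]          # fast path: no storage delay
    time.sleep(DB_DELAY_S)          # simulated slow storage fetch
    _cache[key] = _db[key]
    return _cache[key]

start = time.perf_counter(); get("user:1")
cold = time.perf_counter() - start  # first read pays the storage delay
start = time.perf_counter(); get("user:1")
warm = time.perf_counter() - start  # cached read avoids it
print(warm < cold)                  # → True
```

The first (cold) read pays the full storage delay; every subsequent (warm) read is served from memory, which is the latency reduction the bullet describes.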
Applications of Latency
- Vehicles: real-time control and safety systems in modern vehicles depend on low latency.
- Capital Market: trading systems, especially high-frequency trading, require minimal latency.
Important Key Concepts and Terminologies – Learn System Design
System Design is the core concept behind the design of any distributed system. It is the process of creating an architecture for the different components, interfaces, and modules of a system, and providing the data needed to implement those elements.
In this article, we’ll cover the standard terms and key concepts of system design and performance, such as:
- Latency
- Throughput
- Availability
- Redundancy
- Time
- CAP Theorem
- Lamport's Logical Clock Theorem
Let us see them one by one.