Fundamental Model
The fundamental models of a distributed computing system are broad conceptual frameworks that help in understanding its key aspects. They give a more formal description of the properties that are common to all of the architectural models, and represent the essential components required to understand a distributed system's behaviour. The three fundamental models are as follows:
1. Interaction Model
Distributed computing systems consist of many processes that interact with each other in complex ways. The interaction model provides a framework for understanding the mechanisms and patterns used for communication and coordination among these processes. The important components of this model are –
- Message Passing – It deals with passing messages that may contain data, instructions, a service request, or process-synchronisation information between different computing nodes. It may be synchronous or asynchronous depending on the type of task and process.
- Publish/Subscribe Systems – Also known as pub/sub systems. Here a publishing process publishes a message on a topic, and every process subscribed to that topic receives it and acts on it independently. This pattern is especially important in event-driven architectures.
- Remote Procedure Call (RPC) – A communication paradigm that lets a client invoke a procedure on a remote process as if it were a local procedure call. The client makes a procedure call using RPC; the message is passed to the required server process via communication protocols that RPC abstracts away, and the result obtained from the server process is sent back to the client so that it can continue execution.
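The publish/subscribe pattern described above can be sketched with a minimal in-process broker. This is an illustration only (the `Broker` class, topic name, and message contents are invented for the example); real systems use dedicated brokers such as Kafka or RabbitMQ:

```python
import queue
import threading
from collections import defaultdict


class Broker:
    """Minimal in-process pub/sub broker: each topic maps to the
    queues of its subscribers."""

    def __init__(self):
        self._topics = defaultdict(list)
        self._lock = threading.Lock()

    def subscribe(self, topic):
        # Each subscriber gets its own queue for the topic.
        q = queue.Queue()
        with self._lock:
            self._topics[topic].append(q)
        return q

    def publish(self, topic, message):
        # Deliver a copy of the message to every subscriber of the topic.
        with self._lock:
            subscribers = list(self._topics[topic])
        for q in subscribers:
            q.put(message)


broker = Broker()
inbox = broker.subscribe("orders")
broker.publish("orders", {"id": 1, "item": "disk"})
print(inbox.get(timeout=1))  # the subscriber receives the published message
```

Note that the publisher never references its subscribers directly; it only names a topic, which is what decouples producers from consumers in event-driven designs.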
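The RPC flow can be demonstrated with Python's standard-library `xmlrpc` modules. This is a minimal sketch (the `add` procedure and the loopback address are chosen for the example); production systems typically use frameworks such as gRPC:

```python
import threading
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

# Server side: expose a local function as a remotely callable procedure.
# Port 0 lets the OS pick a free port.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False)
server.register_function(lambda a, b: a + b, "add")
host, port = server.server_address
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the remote procedure is invoked as if it were local;
# message marshalling and transport are abstracted away by the proxy.
client = ServerProxy(f"http://{host}:{port}")
print(client.add(2, 3))  # 5
```

The client code contains no explicit message passing: the `ServerProxy` object serialises the call, sends it over HTTP, and unpacks the reply, which is exactly the abstraction RPC provides.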
2. Failure Model
This model addresses the faults and failures that occur in a distributed computing system. It provides a framework for identifying the faults that occur, or may occur, in the system. Fault-tolerance mechanisms such as replication, error detection, and recovery are implemented to handle these failures. The different failures that may occur are:
- Crash failures – A process or node unexpectedly stops functioning.
- Omission failures – A message is lost, resulting in the absence of a required communication.
- Timing failures – A process deviates from its expected timing bounds, which may lead to delays or unsynchronised response times.
- Byzantine failures – The process may send malicious or unexpected messages that conflict with the set protocols.
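A common way to detect crash and omission failures in practice is a timeout-based heartbeat monitor. The sketch below is illustrative (the `HeartbeatMonitor` class and node names are invented); note that a timeout alone cannot distinguish a crashed node from one whose messages are merely delayed or lost, which is why such detectors only *suspect* failures:

```python
import time


class HeartbeatMonitor:
    """Suspects a node has crashed (or its heartbeats were omitted)
    when no heartbeat arrives within the timeout window."""

    def __init__(self, timeout):
        self.timeout = timeout
        self.last_seen = {}

    def heartbeat(self, node, now=None):
        # Record the time of the most recent heartbeat from this node.
        self.last_seen[node] = now if now is not None else time.monotonic()

    def suspected(self, node, now=None):
        # A node is suspected if it has never reported, or if its last
        # heartbeat is older than the timeout.
        now = now if now is not None else time.monotonic()
        last = self.last_seen.get(node)
        return last is None or (now - last) > self.timeout


monitor = HeartbeatMonitor(timeout=2.0)
monitor.heartbeat("node-a", now=100.0)
print(monitor.suspected("node-a", now=101.0))  # False: heartbeat is recent
print(monitor.suspected("node-a", now=103.5))  # True: timeout exceeded
```

Byzantine failures are much harder: a malicious node can keep sending timely heartbeats while lying in its other messages, so detecting them requires agreement protocols rather than timeouts.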
3. Security Model
Distributed computing systems may suffer malicious attacks, unauthorised access, and data breaches. The security model provides a framework for understanding the security requirements, threats, vulnerabilities, and mechanisms that safeguard the system and its resources. The vital aspects of the security model are:
- Authentication: It verifies the identity of users accessing the system, ensuring that only authorised and trusted entities get access. It involves –
- Password-based authentication: Users provide a unique password to prove their identity.
- Public-key cryptography: Entities possess a private key and a corresponding public key, allowing verification of their authenticity.
- Multi-factor authentication: Multiple factors, such as passwords, biometrics, or security tokens, are used to validate identity.
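Password-based authentication is usually implemented by storing a salted key-derivation hash rather than the password itself, so a leaked database does not directly reveal credentials. A minimal sketch using Python's standard library (the function names and iteration count here are illustrative choices, not a specific system's API):

```python
import hashlib
import hmac
import os


def hash_password(password, salt=None):
    """Derive a salted hash; only the (salt, digest) pair is stored."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest


def verify_password(password, salt, stored_digest):
    # Re-derive with the stored salt and compare in constant time,
    # so timing differences do not leak how many bytes matched.
    _, digest = hash_password(password, salt)
    return hmac.compare_digest(digest, stored_digest)


salt, stored = hash_password("s3cret")
print(verify_password("s3cret", salt, stored))  # True
print(verify_password("wrong", salt, stored))   # False
```

Public-key and multi-factor schemes build on the same idea: the server verifies evidence of identity without ever needing to store the secret in recoverable form.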
- Encryption: It is the process of transforming data into a format that is unreadable without a decryption key, protecting sensitive information from unauthorised access or disclosure.
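The encrypt/decrypt round trip can be illustrated with a one-time-pad style XOR over a random key of the same length as the message. This is a teaching toy only (the key here is as long as the data and must never be reused); real systems use vetted ciphers such as AES-GCM from an audited library:

```python
import secrets


def xor_bytes(data, key):
    # XOR each byte of the data with the corresponding key byte.
    return bytes(d ^ k for d, k in zip(data, key))


plaintext = b"transfer $100 to account 42"
key = secrets.token_bytes(len(plaintext))  # one-time random key
ciphertext = xor_bytes(plaintext, key)     # unreadable without the key
recovered = xor_bytes(ciphertext, key)     # XOR with the same key reverses it
print(recovered == plaintext)  # True
```

The example shows the essential property of encryption: without the key, `ciphertext` carries no readable information, while anyone holding the key recovers the plaintext exactly.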
- Data Integrity: Data integrity mechanisms protect against unauthorised modification or tampering of data, ensuring that data remains unchanged during storage, transmission, or processing. They include:
- Hash functions – Generating a hash value or checksum from data to verify its integrity.
- Digital signatures – Using cryptographic techniques to sign data and verify its authenticity and integrity.
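Hash-based integrity checking can be shown with Python's standard library. A plain hash detects accidental corruption, while a keyed HMAC also resists deliberate tampering, since an attacker without the shared key cannot forge a matching tag (true digital signatures use asymmetric keys, which are outside the standard library, so HMAC stands in here as the keyed analogue; the message and key below are invented for the example):

```python
import hashlib
import hmac

message = b"ship 10 units"
checksum = hashlib.sha256(message).hexdigest()

# The receiver recomputes the hash; any change to the data changes the digest.
print(hashlib.sha256(b"ship 10 units").hexdigest() == checksum)  # True
print(hashlib.sha256(b"ship 99 units").hexdigest() == checksum)  # False

# An HMAC binds the digest to a shared secret key, so a tampered message
# cannot be given a valid tag by someone who lacks the key.
key = b"shared-secret"
tag = hmac.new(key, message, hashlib.sha256).digest()
print(hmac.compare_digest(tag, hmac.new(key, message, hashlib.sha256).digest()))  # True
```

The difference matters in distributed systems: a checksum alone protects against transmission errors, but only a keyed or signed digest protects against a Byzantine sender or a man-in-the-middle.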