Practice Questions - Operating Systems

Q. Define an Operating System.

An Operating System (OS) is system software that manages hardware resources and provides services to applications. It acts as a bridge between users and hardware by handling tasks such as memory management, process scheduling, file handling, and device control. Without an OS, applications cannot interact with the system’s CPU, memory, and I/O devices. The OS ensures efficient utilization of resources and provides a stable environment for running programs.
Example: Windows, Linux, and macOS manage processes, allocate CPU time, handle file systems, and provide user interfaces, enabling users to run applications like browsers, games, or compilers.


Q. Difference between multiprogramming and time-sharing systems.

Multiprogramming allows multiple programs to reside in memory simultaneously so the CPU always has a job to execute. It improves CPU utilization by switching to another job when one waits for I/O. Time-sharing extends multiprogramming by rapidly switching among users, giving each the illusion of owning the CPU. It focuses on responsiveness rather than utilization.
Example:

  • Multiprogramming: A batch system keeping jobs such as payroll, sorting, and backup in memory concurrently, switching the CPU among them.

  • Time-sharing: Unix systems where multiple remote users edit files, compile programs, and run commands interactively.


Q. Define a Process. What are process states?

A process is an executing program along with its code, data, registers, and resources. The OS manages processes using the Process Control Block (PCB). A process moves through various states as it executes.
Process States:

  • New: Process is created.

  • Ready: Waiting for CPU.

  • Running: Currently executing.

  • Waiting/Blocked: Waiting for I/O or an event.

  • Terminated: Execution finished.
    Example: When running a C program, the OS loads it into memory (new), schedules it (ready), executes instructions (running), waits for user input (blocked), and ends once completed (terminated).


Q. What is Interprocess Communication (IPC)?

IPC allows processes to exchange data and synchronize actions. It is essential in modern OS environments where multiple processes run simultaneously and may need to communicate. IPC can be message-based (message queues, pipes) or shared-memory based.
Example: The Chrome browser runs multiple processes for tabs, plugins, etc. These processes communicate through IPC to share information like history, cookies, or rendering instructions. IPC ensures efficient data transfer without processes interfering directly with each other’s memory.
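As a minimal sketch, the pipe primitive behind much message-based IPC can be shown in Python; here both ends live in one process for brevity, whereas after fork() the parent and child would each keep one end:

```python
import os

# Minimal sketch of the pipe primitive used for message-based IPC.
# Both ends are kept in one process for brevity; after fork(),
# the parent and child would each hold one end.
read_fd, write_fd = os.pipe()
os.write(write_fd, b"hello from producer")   # sender writes bytes
message = os.read(read_fd, 1024)             # receiver reads them
print(message.decode())                      # hello from producer
os.close(read_fd)
os.close(write_fd)
```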


Q. Define a Thread and mention any one threading model.

A thread is the smallest unit of execution within a process. Multiple threads in the same process share code, data, and open files but maintain separate registers and stacks. Threads improve performance in applications requiring parallelism.
Threading Model Example:
Many-to-One Model: Many user-level threads map to a single kernel thread. It is simple but does not support true parallelism: only one thread can run at a time, and a blocking system call by any thread blocks the entire process.
Example: Early Java Virtual Machines used this model where multiple Java threads were managed by the JVM but executed as a single OS thread.


Q. What is preemptive scheduling? Give an example.

Preemptive scheduling allows the OS to interrupt a running process and allocate the CPU to another process. This ensures better responsiveness, especially in time-sharing or real-time systems. Preemptive scheduling prevents long-running processes from blocking others.
Example: Round Robin Scheduling is preemptive. If the time quantum is 10 ms, the OS will forcibly switch processes after 10 ms. This ensures fairness among users in systems like Linux servers where multiple users execute commands simultaneously.
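A small Python simulation (with hypothetical burst times) illustrates how the quantum forces preemption and requeuing:

```python
from collections import deque

# Sketch of Round Robin with a 10 ms quantum; burst times are illustrative.
def round_robin(bursts, quantum=10):
    """Return (process, start, end) time slices in execution order."""
    queue = deque(bursts.items())
    timeline, clock = [], 0
    while queue:
        name, remaining = queue.popleft()
        run = min(quantum, remaining)
        timeline.append((name, clock, clock + run))
        clock += run
        if remaining > run:                  # preempted: back of the queue
            queue.append((name, remaining - run))
    return timeline

print(round_robin({"P1": 25, "P2": 10, "P3": 15}))
```

Each process runs at most 10 ms before being forced to yield, so P1 (25 ms) is preempted twice while P2 and P3 interleave.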


Q. What is mutual exclusion?

Mutual exclusion ensures that when one process is executing in a critical section accessing shared data, no other process can enter its critical section. Without mutual exclusion, race conditions occur where concurrent operations cause unpredictable results.
Example: If two processes try to update a shared bank account balance simultaneously, incorrect updates may occur. Using semaphores or locks, only one process accesses the balance at a time, ensuring data consistency.
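A sketch in Python, using a lock as the mutual-exclusion primitive on a shared balance (thread and iteration counts are illustrative):

```python
import threading

# Sketch: protecting a shared bank balance with a lock (mutex).
balance = 0
lock = threading.Lock()

def deposit(times):
    global balance
    for _ in range(times):
        with lock:                 # only one thread updates at a time
            balance += 1           # critical section

threads = [threading.Thread(target=deposit, args=(100_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(balance)  # 200000 — without the lock, lost updates could occur
```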


Q. Define Paging in memory management.

Paging divides physical memory into fixed-size frames and logical memory into pages of the same size. When a process executes, its pages are loaded into available frames, eliminating the need for contiguous allocation and reducing external fragmentation. The OS maintains a page table to translate logical addresses to physical addresses.
Example: Suppose a program uses logical address 5,320. The OS checks the page table to map this address to the corresponding physical frame, allowing efficient memory access without sequential storage.
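The translation is simple arithmetic plus a table lookup; the sketch below assumes a 1 KB page size and an illustrative page table:

```python
# Sketch of logical-to-physical translation (page size and table are assumed).
PAGE_SIZE = 1024
page_table = {0: 7, 1: 3, 2: 11, 3: 5, 4: 2, 5: 9}  # page number -> frame number

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)  # split address into page + offset
    frame = page_table[page]                   # page-table lookup
    return frame * PAGE_SIZE + offset

# Logical address 5320 lies on page 5 (offset 200), which maps to frame 9.
print(translate(5320))  # 9*1024 + 200 = 9416
```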


Q. What is a deadlock? State two necessary conditions.

A deadlock occurs when a set of processes is permanently blocked because each process waits for a resource held by another process in the set.
Necessary Conditions:

  1. Mutual Exclusion: At least one resource must be held in a non-sharable mode.

  2. Hold and Wait: A process holds one resource while waiting for another.
    (Other two: No Preemption, Circular Wait.)
    Example: Process A holds a printer and waits for a scanner; Process B holds the scanner and waits for the printer → both remain stuck forever.


Q. What is disk scheduling? Name any two algorithms.

Disk scheduling decides the order in which disk I/O requests are serviced, reducing seek time and improving throughput. Since disk heads move mechanically, ordering requests properly increases system performance.
Two Algorithms:

  • FCFS (First-Come, First-Served): Simple but inefficient for heavy loads.

  • SCAN: Disk head moves like an elevator, reducing average seek time.
    Example: If requests arrive at tracks 55, 58, 18, 90, SCAN arranges them in an optimal sweeping order rather than servicing randomly.
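A sketch of the SCAN ordering for these requests, assuming the head starts at track 50 and sweeps toward higher tracks first:

```python
# Sketch of SCAN ("elevator") ordering; head position and direction are assumed.
def scan(requests, head, direction="up"):
    up = sorted(r for r in requests if r >= head)
    down = sorted((r for r in requests if r < head), reverse=True)
    return up + down if direction == "up" else down + up

print(scan([55, 58, 18, 90], head=50))  # [55, 58, 90, 18]
```

The head services 55, 58, and 90 on the upward sweep, then reverses to pick up 18, instead of jumping back and forth.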

Q. Explain simple batch, multiprogrammed batch, and time-sharing systems.

Simple batch systems execute jobs in batches without user interaction. Jobs are submitted to a queue, and the OS runs them sequentially. They increase throughput but lack interactivity.
Example: Early punch-card systems.

Multiprogrammed batch systems load multiple jobs into memory, switching the CPU to another job when one waits for I/O. This improves CPU utilization.
Example: A server running backup, sorting, and billing jobs together.

Time-sharing systems allow multiple users to interact with the system simultaneously. The CPU rapidly switches between tasks, giving users the illusion of dedicated access.
Example: Modern UNIX terminals where users run editors, compilers, or commands concurrently.


Q. (a) Process Control Block (PCB)

(b) Operations during process creation and termination

(a) Process Control Block
PCB is a data structure containing process information: process ID, registers, program counter, memory limits, open files, and scheduling details. The OS uses the PCB during context switching.
Example: When CPU switches from Process A to B, their PCBs store the saved states.

(b) Process Creation & Termination
During creation, OS allocates memory, initializes PCB, assigns PID, loads program code, and sets the process to “ready.”
During termination, OS releases resources, closes files, removes PCB, and notifies parent process.
Example: fork() in Linux creates a child process; exit() terminates it.
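A portable sketch of the lifecycle (subprocess stands in for fork()/exec() here, and the exit code plays the role of the status that exit() reports to the parent):

```python
import subprocess
import sys

# Sketch: create a child process, let it terminate, and collect its status.
# subprocess stands in for fork()/exec(); run() waits like wait() would.
child = subprocess.run([sys.executable, "-c", "import sys; sys.exit(3)"])
print(child.returncode)  # 3 — the parent is notified of the exit status
```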


Q. Explain scheduling criteria and four scheduling algorithms.

Scheduling Criteria: CPU utilization, throughput, waiting time, response time, turnaround time, and fairness.

Four Scheduling Algorithms:

  1. FCFS: Non-preemptive; simple but long waiting times.

  2. SJF: Selects shortest job; optimal for waiting time but may starve long processes.

  3. Round Robin: Preemptive with time quantum; used in time-sharing systems.

  4. Priority Scheduling: Executes based on priority; may starve low-priority jobs.

Example: In Round Robin with 5 ms quantum, processes get equal CPU slices, making it suitable for interactive systems like Linux terminals.
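A quick Python comparison of FCFS and SJF average waiting times on illustrative burst times (all processes assumed to arrive at time 0):

```python
# Sketch: average waiting time for a given service order (bursts are assumed).
def avg_wait(bursts):
    clock, total = 0, 0
    for b in bursts:
        total += clock   # each process waits until the previous ones finish
        clock += b
    return total / len(bursts)

bursts = [24, 3, 3]                  # illustrative burst times
print(avg_wait(bursts))              # FCFS order: (0 + 24 + 27) / 3 = 17.0
print(avg_wait(sorted(bursts)))      # SJF order:  (0 + 3 + 6) / 3  = 3.0
```

Running the shortest jobs first cuts the average wait from 17.0 ms to 3.0 ms on the same workload, which is why SJF is optimal for waiting time.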


Q. (a) Peterson’s Solution

(b) Producer–Consumer using semaphores

(a) Peterson’s Solution
Peterson’s solution ensures mutual exclusion for two processes by using a “turn” variable and two flags. A process sets its flag and gives turn to the other process, only entering the critical section when the other’s flag is false or turn belongs to it.
Example: Prevents two bank transactions from updating the same record concurrently.
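A sketch of the algorithm with two Python threads; real hardware would need memory barriers, and the shared counter here simply verifies that mutual exclusion held:

```python
import threading

# Sketch of Peterson's algorithm for two threads (indices 0 and 1).
# Note: real hardware needs memory barriers; CPython's interpreter lock
# makes this sketch behave sequentially consistently.
flag = [False, False]
turn = 0
counter = 0

def worker(i):
    global turn, counter
    other = 1 - i
    for _ in range(5_000):
        flag[i] = True                       # declare interest
        turn = other                         # yield priority to the other thread
        while flag[other] and turn == other:
            pass                             # busy-wait outside the critical section
        counter += 1                         # critical section
        flag[i] = False                      # exit section

t0 = threading.Thread(target=worker, args=(0,))
t1 = threading.Thread(target=worker, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()
print(counter)  # 5000 + 5000 = 10000, so no increment was lost
```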

(b) Producer–Consumer Using Semaphores
Semaphores used:

  • Empty (buffer slots),

  • Full (occupied slots),

  • Mutex (critical section).
    The producer waits if the buffer is full; the consumer waits if empty.
    Example: In a print queue, producers create print jobs and consumers (printers) execute them safely.
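The three-semaphore solution above can be sketched in Python (buffer size and item count are illustrative):

```python
import threading
from collections import deque

# Sketch of the bounded-buffer solution with empty/full/mutex semaphores.
N = 5                            # assumed buffer capacity
buffer = deque()
empty = threading.Semaphore(N)   # counts free slots
full = threading.Semaphore(0)    # counts occupied slots
mutex = threading.Lock()         # protects the buffer itself
consumed = []

def producer():
    for item in range(20):
        empty.acquire()          # wait for a free slot
        with mutex:
            buffer.append(item)
        full.release()           # signal one occupied slot

def consumer():
    for _ in range(20):
        full.acquire()           # wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()          # signal one free slot

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed == list(range(20)))  # True — items arrive in order, none lost
```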


Q. Explain Swapping, Paging, Segmentation, and Segmentation with Paging.

Swapping: Moves whole processes in and out of memory to free space.
Example: When RAM is full, OS swaps processes to disk.

Paging: Splits memory into fixed-size pages and frames, avoiding external fragmentation.
Example: Page table maps logical to physical addresses.

Segmentation: Divides memory into variable-sized logical units like code, stack, data.
Example: Compiler-generated segments.

Segmentation with Paging: Segments are divided into pages, combining benefits of both.
Example: Used in the Multics system, reducing fragmentation while supporting logical structure.


Q. (a) Demand Paging & Performance

(b) LRU and Optimal algorithms

(a) Demand Paging
Pages are loaded only when needed, reducing memory usage. Page faults occur when referenced pages are not in memory, triggering OS to load them. Performance depends on fault rate, memory size, and replacement strategy.
Example: Running a large program in limited RAM.

(b) Page Replacement Algorithms

  • LRU: Replaces the least recently used page; good approximation of optimal.

  • Optimal: Replaces page that will not be used for the longest future time; theoretical best.
    Example: Optimal gives minimum faults but requires future knowledge.
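A sketch that counts faults for both policies on an illustrative reference string with 3 frames:

```python
# Sketch: page-fault counts for LRU and Optimal (reference string is assumed).
def lru_faults(refs, frames):
    memory, faults = [], 0
    for p in refs:
        if p in memory:
            memory.remove(p)
            memory.append(p)          # mark as most recently used
        else:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)         # evict the least recently used page
            memory.append(p)
    return faults

def optimal_faults(refs, frames):
    memory, faults = [], 0
    for i, p in enumerate(refs):
        if p in memory:
            continue
        faults += 1
        if len(memory) == frames:
            # evict the page whose next use is farthest in the future
            future = refs[i + 1:]
            victim = max(memory,
                         key=lambda q: future.index(q) if q in future
                         else len(future) + 1)
            memory.remove(victim)
        memory.append(p)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(lru_faults(refs, 3), optimal_faults(refs, 3))  # 9 7
```

Optimal needs the future portion of the reference string, so it is a benchmark rather than a practical policy; LRU comes close without that knowledge.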


Q. (a) Banker's Algorithm

(b) Deadlock detection & recovery

(a) Banker’s Algorithm
A deadlock avoidance algorithm ensuring resource allocation leaves the system in a safe state. Before granting a request, it checks if the system can still allocate maximum resources and finish all processes safely.
Example: A bank does not loan money if doing so risks inability to satisfy future customers.
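The safety check at the heart of the algorithm can be sketched in Python; the resource instance below is illustrative:

```python
# Sketch of the Banker's safety check (the instance below is illustrative).
def is_safe(available, allocation, need):
    """Return (safe?, order in which processes can finish)."""
    work = available[:]
    finish = [False] * len(allocation)
    order = []
    while len(order) < len(allocation):
        progressed = False
        for i, (alloc, nd) in enumerate(zip(allocation, need)):
            if not finish[i] and all(n <= w for n, w in zip(nd, work)):
                # process i can run to completion and return its resources
                work = [w + a for w, a in zip(work, alloc)]
                finish[i] = True
                order.append(i)
                progressed = True
        if not progressed:
            return False, order          # no safe sequence exists
    return True, order

available  = [3, 3, 2]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
need       = [[7, 4, 3], [1, 2, 2], [6, 0, 0], [0, 1, 1], [4, 3, 1]]
print(is_safe(available, allocation, need))
```

A request is granted only if the state after the allocation still passes this check; otherwise the process must wait.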

(b) Deadlock Detection & Recovery
Detection uses resource allocation graphs to identify cycles.
Recovery methods:

  • Terminate processes,

  • Preempt resources,

  • Rollback processes.
    Example: OS may kill lowest-priority process to break deadlock.


Q. Short notes (any two)

(a) Disk Scheduling Strategies

Algorithms like FCFS, SSTF, SCAN, C-SCAN minimize seek time.
Example: SCAN behaves like an elevator, servicing requests along a direction.

(b) Buffering & Caching

Buffering smooths data transfer differences; caching stores frequently accessed data for fast retrieval.
Example: Browser cache speeds up loading.

(c) Real-Time OS

Guarantees deadlines for tasks.
Example: Airbag control system.

(d) Distributed OS

Manages multiple networked computers as a single system.
Example: Google File System.

Q. Process Synchronization Case Study

(a) Dining Philosopher Problem
(b) Barber Shop Problem

(a) Dining Philosopher Problem

The Dining Philosopher problem models resource sharing among multiple processes. Five philosophers sit around a table, each needing two chopsticks (shared resources) to eat. If each philosopher picks up one chopstick simultaneously, a deadlock occurs.
Solutions include:

  • Semaphore Solution: Use one semaphore per chopstick; philosophers wait if chopsticks are unavailable.

  • Resource Hierarchy Solution: Number chopsticks and always pick the lower-numbered one first, preventing circular wait.
    Example: Prevents deadlock in systems where multiple processes need two or more shared resources, such as allocating printers and scanners.
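The resource-hierarchy solution can be sketched with Python locks as chopsticks (round counts are illustrative):

```python
import threading

# Sketch of the resource-hierarchy solution: each philosopher always picks
# up the lower-numbered chopstick first, so no circular wait can form.
N = 5
chopsticks = [threading.Lock() for _ in range(N)]
meals = [0] * N

def philosopher(i, rounds=50):
    left, right = i, (i + 1) % N
    first, second = min(left, right), max(left, right)   # hierarchy order
    for _ in range(rounds):
        with chopsticks[first]:
            with chopsticks[second]:
                meals[i] += 1                            # eating

threads = [threading.Thread(target=philosopher, args=(i,)) for i in range(N)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(meals)  # every philosopher finishes all 50 rounds — no deadlock
```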

(b) Barber Shop Problem

This models synchronization between a barber (consumer) and customers (producers). There is one barber, one barber chair, and several waiting chairs.
Semaphore-based solution:

  • Customers semaphore: Counts waiting customers.

  • Barber semaphore: Signals barber availability.

  • Mutex: Protects shared counters.
    Example: Used in server request handling where a limited number of worker threads (barbers) serve client requests (customers) efficiently.


Q. Virtual Memory Case Study

(a) Thrashing
(b) Demand Segmentation & Overlay
(c) Role of replacement policies

(a) Thrashing

Thrashing occurs when excessive page faults cause the CPU to spend most of its time swapping pages instead of executing instructions. It arises due to insufficient memory, high multiprogramming levels, or poor locality of reference.
Example: Running many heavy applications on low RAM laptops leads to constant disk swapping.

(b) Demand Segmentation & Overlay

Demand Segmentation: Loads required segments only when needed. This reduces memory usage for large programs.
Overlay: Divides program into mutually exclusive parts loaded as needed, used in early systems without virtual memory.
Example: Older compilers used overlays to fit into limited memory.

(c) Replacement Policies & Performance

Efficient page replacement (LRU, Optimal, FIFO) reduces page faults and improves performance. Poor algorithms cause more faults and may trigger thrashing.
Example: LRU approximates optimal behavior in modern OS.


Q. File System Case Study

(a) Logical vs Physical File System
(b) File Allocation Methods
(c) FAT32 vs NTFS vs Ext3

(a) Logical vs Physical File System

The logical file system manages user-level operations: file naming, directories, access control, and metadata.
The physical file system handles disk blocks, allocation, free space, and storage structure.
Example: Logical layer interprets “report.docx,” while physical layer stores it as blocks on disk.

(b) File Allocation Methods

  • Contiguous: Fast access but suffers from external fragmentation.

  • Linked: Flexible but slow for random access.

  • Indexed: Uses an index block; efficient for large files.
    Example: Indexing used in UNIX file systems.

(c) FAT32 vs NTFS vs Ext3

  • FAT32: Simple, portable, but lacks security and journaling.

  • NTFS: Supports permissions, encryption, journaling; used in Windows.

  • Ext3: Journaling, stable, used in Linux; faster recovery.
Example: NTFS preferred for enterprise servers due to reliability.
