Operating System Concepts Interview Questions and Answers for Placements
Last Updated: March 2026
Introduction to Operating Systems
Operating Systems form the foundational layer of computer systems, managing hardware resources and providing services for computer programs. Understanding OS concepts is crucial for technical interviews as it demonstrates your knowledge of how software interacts with hardware and how systems manage resources efficiently.
Why OS is Essential for Placements
- Core CS Subject: Fundamental to computer science education
- System Design Foundation: Essential for backend and infrastructure roles
- Performance Optimization: Understanding helps write efficient code
- Every Company Asks: From startups to FAANG, OS questions are standard
Key Concepts and Theory
Process vs Thread
| Aspect | Process | Thread |
|---|---|---|
| Definition | Independent executing program | Lightweight unit within process |
| Memory Space | Separate address space | Shares address space with other threads in the same process |
| Overhead | High (separate resources) | Low (shared resources) |
| Communication | IPC (pipes, sockets, shared memory) | Direct shared memory |
| Creation | Slow (fork, exec) | Fast |
| Isolation | Strong | Weak |
Process States
┌─────┐  admitted  ┌───────┐  dispatch  ┌─────────┐   exit    ┌────────────┐
│ NEW │───────────►│ READY │───────────►│ RUNNING │──────────►│ TERMINATED │
└─────┘            └───────┘◄───────────└────┬────┘           └────────────┘
                       ▲    (interrupt)      │
                       │                     │ (I/O or event wait)
                       │                     ▼
                       │                ┌─────────┐
                       └────────────────│ WAITING │
                 (I/O or event          └─────────┘
                  completion)
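The diagram above can be encoded as a small state machine, which is a handy way to check transitions in an interview. This is an illustrative sketch (the `State` enum and `TRANSITIONS` table are names chosen here, not any real OS API):

```python
from enum import Enum, auto

class State(Enum):
    NEW = auto()
    READY = auto()
    RUNNING = auto()
    WAITING = auto()
    TERMINATED = auto()

# Legal transitions from the diagram above
TRANSITIONS = {
    State.NEW:        {State.READY},                                   # admitted
    State.READY:      {State.RUNNING},                                 # scheduler dispatch
    State.RUNNING:    {State.READY, State.WAITING, State.TERMINATED},  # interrupt / wait / exit
    State.WAITING:    {State.READY},                                   # I/O or event completion
    State.TERMINATED: set(),
}

def can_transition(src, dst):
    return dst in TRANSITIONS[src]

print(can_transition(State.WAITING, State.RUNNING))  # False: must go through READY first
```

Note that a waiting process never jumps straight back to RUNNING; it becomes READY and competes for the CPU again.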
Practice Questions with Solutions
Question 1: What is the difference between a Process and a Thread? ⭐ [Easy]
Process:
- Independent program in execution with its own memory space
- Contains code, data, heap, and stack segments
- Requires separate resources (CPU, memory, files)
- Communicates via Inter-Process Communication (IPC)
- Heavyweight context switch
Thread:
- Smallest unit of execution within a process
- Shares code, data, and heap with other threads
- Has private stack and registers
- Direct memory sharing with threads in same process
- Lightweight context switch
Analogy:
- Process = Factory (separate building, own resources)
- Thread = Workers in factory (share building, have own tools)
Python Example:
import threading
import multiprocessing
import os

def worker():
    print(f"Process ID: {os.getpid()}")
    print(f"Thread ID: {threading.current_thread().name}")

if __name__ == "__main__":
    # Process - separate memory space (guard required on spawn-based platforms)
    p = multiprocessing.Process(target=worker)
    p.start()  # New process created
    p.join()

    # Thread - shared memory space
    t = threading.Thread(target=worker)
    t.start()  # New thread in same process
    t.join()
Question 2: What is Deadlock? Explain necessary conditions and prevention. ⭐⭐⭐ [Hard]
Deadlock is a situation where two or more processes are blocked forever, waiting for each other to release resources.
Four Necessary Conditions (All must hold):
- Mutual Exclusion: Resources are non-shareable
- Hold and Wait: Process holds resources while waiting for more
- No Preemption: Resources cannot be forcibly taken
- Circular Wait: Circular chain of processes waiting
Example:
Process A holds Resource 1, waits for Resource 2
Process B holds Resource 2, waits for Resource 1
DEADLOCK!
Deadlock Prevention (Break at least one condition):
1. Eliminate Mutual Exclusion:
- Make resources shareable (not always possible)
- Example: Read-only files
2. Eliminate Hold and Wait:
- Request all resources at once
- Release all before requesting new
// Acquire both locks upfront, before doing any work
synchronized(resource1) {
    synchronized(resource2) {
        // Do work
    }
}
3. Allow Preemption:
- Forcefully take resources from processes
- Save state, rollback, restart
4. Eliminate Circular Wait:
- Impose ordering on resource types
- Always acquire resources in fixed order
# Always lock lower ID first
def safe_lock(r1, r2):
    first, second = sorted([r1, r2], key=id)
    with first:
        with second:
            # Safe - no circular wait possible
            pass
Deadlock Avoidance (Banker's Algorithm):
- Analyze resource allocation state
- Only grant requests that keep system in "safe state"
- Requires advance knowledge of maximum needs
Deadlock Detection and Recovery:
- Build resource allocation graph
- Detect cycles
- Recovery: Process termination or resource preemption
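The detection step boils down to finding a cycle in the wait-for graph, which can be sketched with a depth-first search. The `has_cycle` function and the dict-of-lists graph encoding are illustrative choices here:

```python
def has_cycle(graph):
    """DFS cycle detection on a wait-for graph: process -> processes it waits on."""
    visiting, done = set(), set()

    def dfs(node):
        if node in visiting:
            return True          # back edge -> cycle -> deadlock
        if node in done:
            return False
        visiting.add(node)
        for nxt in graph.get(node, []):
            if dfs(nxt):
                return True
        visiting.discard(node)
        done.add(node)
        return False

    return any(dfs(n) for n in graph)

# A waits on B, B waits on A -> deadlock
print(has_cycle({"A": ["B"], "B": ["A"]}))  # True
print(has_cycle({"A": ["B"], "B": []}))     # False
```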
Question 3: Explain Paging and Segmentation. ⭐⭐⭐ [Hard]
Paging: Fixed-size blocks
- Physical memory divided into fixed-size frames
- Logical memory divided into same-size pages
- Page table maps pages to frames
- Eliminates external fragmentation
Logical Address: [ Page Number | Offset ]
                    10 bits      12 bits (4KB page)

Page Table:
  Page 0 → Frame 5
  Page 1 → Frame 2
  Page 2 → Frame 8
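The translation above can be sketched in a few lines of Python, using the example page table. `PAGE_SIZE`, `OFFSET_BITS`, and `translate` are illustrative names, not from any real MMU API:

```python
PAGE_SIZE = 4096                   # 4 KB pages -> 12-bit offset
OFFSET_BITS = 12
page_table = {0: 5, 1: 2, 2: 8}    # page -> frame, from the example above

def translate(logical_addr):
    page = logical_addr >> OFFSET_BITS       # high bits select the page
    offset = logical_addr & (PAGE_SIZE - 1)  # low bits stay unchanged
    frame = page_table[page]                 # a missing key here ~ page fault
    return (frame << OFFSET_BITS) | offset

# Page 1, offset 100 -> frame 2, offset 100
print(translate(1 * PAGE_SIZE + 100))  # 2*4096 + 100 = 8292
```

The offset passes through untouched; only the page number is rewritten to a frame number.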
Segmentation: Variable-size blocks
- Logical address space divided into segments
- Each segment has name and length
- Segment table maps segment to base address
- Matches programmer's view (code, data, stack)
Logical Address: [Segment | Offset]
Segment Table:
Code: Base=1000, Limit=500
Data: Base=2000, Limit=1000
Stack: Base=3500, Limit=500
Comparison:
| Feature | Paging | Segmentation |
|---|---|---|
| Size | Fixed (pages) | Variable (segments) |
| Fragmentation | Internal | External |
| User awareness | Invisible | Visible |
| Table size | Large (one entry per page) | Small (one entry per segment) |
| Sharing | Difficult | Easy (share segments) |
Segmented Paging (Best of both):
- Divide segments into pages
- Reduces external fragmentation
- Maintains logical view
Question 4: What is Virtual Memory? Explain page faults. ⭐⭐⭐ [Hard]
Virtual Memory: A technique that allows processes to execute without being completely loaded in physical memory
- Provides illusion of large memory
- Separates logical and physical memory
- Enables memory protection and sharing
How it works:
- Process uses virtual addresses
- MMU (Memory Management Unit) translates to physical addresses
- If page not in memory → Page Fault
Page Fault Handling:
1. Process accesses page not in memory
2. Hardware traps to OS
3. OS checks: Valid page reference?
4. If invalid → Terminate (segmentation fault)
5. If valid → Find free frame
6. Schedule disk read to load page
7. Update page table
8. Restart instruction
Page Replacement Algorithms:
FIFO (First-In-First-Out):
- Replace oldest page
- Simple but poor performance
- Belady's anomaly possible
LRU (Least Recently Used):
- Replace page unused for longest time
- Good approximation of optimal
- Requires tracking access times
Optimal (Belady's):
- Replace page not used for longest in future
- Theoretical best, impossible to implement
- Used as benchmark
Clock (Second Chance):
- Pages arranged in circular buffer
- Reference bit checked
- If 0, replace; if 1, give second chance
Python Simulation:
from collections import deque

def lru_page_faults(pages, capacity):
    cache = deque()
    faults = 0
    for page in pages:
        if page not in cache:
            faults += 1
            if len(cache) == capacity:
                cache.popleft()  # Remove LRU
            cache.append(page)
        else:
            # Move to end (most recently used)
            cache.remove(page)
            cache.append(page)
    return faults

# Example
pages = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2]
print(f"LRU Page Faults: {lru_page_faults(pages, 3)}")
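A FIFO counter is even simpler, and it lets you demonstrate Belady's anomaly mentioned above: with the classic reference string below, giving FIFO *more* frames produces *more* faults. This is a small sketch (`fifo_page_faults` is a name chosen here):

```python
def fifo_page_faults(pages, capacity):
    frames = []
    faults = 0
    for page in pages:
        if page not in frames:
            faults += 1
            if len(frames) == capacity:
                frames.pop(0)      # evict the oldest page, regardless of use
            frames.append(page)
    return faults

# Classic reference string used to show Belady's anomaly
ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(ref, 3))  # 9 faults
print(fifo_page_faults(ref, 4))  # 10 faults - more frames, MORE faults!
```

LRU (a "stack algorithm") can never exhibit this anomaly, which is a common follow-up question.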
Question 5: Explain CPU Scheduling Algorithms. ⭐⭐⭐ [Hard]
Scheduling Criteria:
- CPU utilization (maximize)
- Throughput (maximize)
- Turnaround time (minimize)
- Waiting time (minimize)
- Response time (minimize)
Scheduling Algorithms:
1. First-Come-First-Served (FCFS):
- Simple queue
- Convoy effect possible
- Non-preemptive
Process   Burst Time
P1        24
P2        3
P3        3
Gantt: | P1 (24) | P2 (3) | P3 (3) |
Waiting: P1=0, P2=24, P3=27
Avg Waiting = (0+24+27)/3 = 17
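The waiting-time arithmetic above can be reproduced with a tiny helper. `fcfs_metrics` is an illustrative name, and all processes are assumed to arrive at time 0:

```python
def fcfs_metrics(bursts):
    """Waiting time of each job under FCFS (all arrivals at t=0)."""
    waiting, elapsed = [], 0
    for burst in bursts:
        waiting.append(elapsed)   # each job waits for all earlier jobs
        elapsed += burst
    return waiting, sum(waiting) / len(waiting)

waiting, avg = fcfs_metrics([24, 3, 3])
print(waiting, avg)  # [0, 24, 27] 17.0
```

Reordering the same jobs shortest-first ([3, 3, 24]) drops the average to 3.0, which is exactly the argument for SJF below.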
2. Shortest Job First (SJF):
- Minimizes average waiting time
- Optimal but requires knowing burst times
- Can cause starvation
3. Shortest Remaining Time First (SRTF):
- Preemptive version of SJF
- If new process arrives with shorter time, preempt
4. Round Robin (RR):
- Each process gets small time unit (quantum)
- After quantum, moved to end of queue
- Good for time-sharing
Quantum = 4, bursts: P1=10, P2=3, P3=2
Gantt: | P1(4) | P2(3) | P3(2) | P1(4) | P1(2) |
5. Priority Scheduling:
- Each process has priority
- Can be preemptive or non-preemptive
- Starvation possible (solution: aging)
6. Multilevel Queue:
- Multiple queues with different priorities
- Processes permanently assigned to one queue
- Each queue can have own algorithm
7. Multilevel Feedback Queue:
- Processes can move between queues
- I/O bound vs CPU bound separation
- Most flexible, used in real systems
Python Simulation (Round Robin):
from collections import deque

def round_robin(processes, quantum):
    """
    processes: list of (name, burst_time); all assumed to arrive at time 0
    """
    queue = deque(processes)
    time = 0
    gantt = []
    while queue:
        name, burst = queue.popleft()
        if burst <= quantum:
            gantt.append((name, time, time + burst))
            time += burst
        else:
            gantt.append((name, time, time + quantum))
            time += quantum
            queue.append((name, burst - quantum))
    return gantt
processes = [("P1", 10), ("P2", 3), ("P3", 5)]
result = round_robin(processes, 4)
print("Gantt Chart:", result)
Question 6: What is Semaphore and Mutex? ⭐⭐ [Medium]
Mutex (Mutual Exclusion):
- Binary semaphore (0 or 1)
- Only one thread can access resource
- Thread that locks must unlock
- Ownership concept
import threading

mutex = threading.Lock()
shared_resource = 0

def increment():
    global shared_resource
    mutex.acquire()
    try:
        # Critical section
        shared_resource += 1
    finally:
        mutex.release()

# Pythonic way (context manager)
def increment_safe():
    global shared_resource
    with mutex:
        shared_resource += 1
Semaphore:
- Counting synchronization primitive
- Can allow N threads to access resource
- No ownership (any thread can signal)
import threading

# Allow 3 threads concurrently
semaphore = threading.Semaphore(3)

def limited_resource():
    with semaphore:
        # Only 3 threads can be here at once
        print("Accessing limited resource")
Comparison:
| Feature | Mutex | Semaphore |
|---|---|---|
| Type | Binary (0/1) | Counting (0 to N) |
| Ownership | Yes | No |
| Release | Only by owner | Any thread |
| Use case | Single resource access | Resource pools |
Classic Synchronization Problems:
Producer-Consumer:
import threading
import queue

buffer = queue.Queue(maxsize=10)
mutex = threading.Lock()
empty = threading.Semaphore(10)  # Empty slots
full = threading.Semaphore(0)    # Filled slots

def producer():
    while True:
        item = produce_item()    # application-defined
        empty.acquire()
        mutex.acquire()
        buffer.put(item)
        mutex.release()
        full.release()

def consumer():
    while True:
        full.acquire()
        mutex.acquire()
        item = buffer.get()
        mutex.release()
        empty.release()
        consume_item(item)       # application-defined
Readers-Writers Problem:
- Multiple readers can access simultaneously
- Writers need exclusive access
- Variants differ: readers-preference risks starving writers, writers-preference risks starving readers
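A readers-preference variant can be sketched with two locks: the first reader in locks writers out, and the last reader out lets them back in. This is an illustrative sketch, not a production pattern (names like `rw_lock` and `count_lock` are chosen here):

```python
import threading

reader_count = 0
count_lock = threading.Lock()   # protects reader_count
rw_lock = threading.Lock()      # held by a writer, or by the reader group

def read(shared, out):
    global reader_count
    with count_lock:
        reader_count += 1
        if reader_count == 1:
            rw_lock.acquire()        # first reader blocks writers
    out.append(shared["value"])      # many readers may be here at once
    with count_lock:
        reader_count -= 1
        if reader_count == 0:
            rw_lock.release()        # last reader readmits writers

def write(shared):
    with rw_lock:                    # exclusive access
        shared["value"] += 1

shared, results = {"value": 0}, []
threads = [threading.Thread(target=write, args=(shared,)) for _ in range(5)]
threads.append(threading.Thread(target=read, args=(shared, results)))
for t in threads: t.start()
for t in threads: t.join()
print(shared["value"])  # 5
```

Because readers keep re-entering ahead of a waiting writer, this variant can starve writers, which is exactly the trade-off named above.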
Question 7: Explain Inter-Process Communication (IPC). ⭐⭐ [Medium]
IPC Mechanisms:
1. Pipes:
- Unidirectional communication
- Parent-child processes
- Finite buffer
import os

r, w = os.pipe()
pid = os.fork()          # Unix-only
if pid > 0:              # Parent
    os.close(r)
    os.write(w, b"Hello from parent")
    os.close(w)
else:                    # Child
    os.close(w)
    message = os.read(r, 100)
    print(message)
    os.close(r)
2. Message Queues:
- Structured message passing
- Persistent until read
- Can have multiple readers/writers
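Python's `multiprocessing.Queue` gives a portable way to demonstrate structured message passing between processes (the `worker` function and message shape here are illustrative):

```python
from multiprocessing import Process, Queue

def worker(q):
    # Send a structured message, not just raw bytes
    q.put({"type": "result", "value": 42})

if __name__ == "__main__":
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    msg = q.get()        # blocks until a message arrives
    p.join()
    print(msg["value"])  # 42
```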
3. Shared Memory:
- Fastest IPC (no copying)
- Multiple processes access same memory
- Requires synchronization (semaphores)
from multiprocessing import Process, Value, Array

def worker(n, arr):
    n.value = 3.14159
    for i in range(len(arr)):
        arr[i] = -arr[i]

if __name__ == "__main__":
    num = Value('d', 0.0)
    arr = Array('i', range(10))
    p = Process(target=worker, args=(num, arr))
    p.start()
    p.join()
    print(num.value)  # 3.14159
4. Sockets:
- Network-based IPC
- Same or different machines
- Flexible but overhead
5. Signals:
- Asynchronous notification
- Limited information (just signal number)
- Used for interrupts
Comparison:
| Method | Speed | Complexity | Use Case |
|---|---|---|---|
| Pipes | Medium | Low | Parent-child |
| Message Queues | Medium | Medium | Structured data |
| Shared Memory | Fast | High | Large data, frequent access |
| Sockets | Slow | Medium | Network, different machines |
| Signals | - | Low | Events, interrupts |
Question 8: What is Thrashing? How to handle it? ⭐⭐⭐ [Hard]
Thrashing: Situation where system spends more time handling page faults than executing processes
Symptoms:
- High page fault rate
- Low CPU utilization
- High disk I/O
- System appears frozen
Why it happens:
- Too many processes in memory
- Each process has too few frames
- Constant page swapping in and out
- No useful work done
Causes:
- Insufficient RAM
- Too many active processes
- Poor locality of reference
- Working set larger than available frames
Solutions:
1. Working Set Model:
- Track pages accessed in recent window (Δ)
- Ensure process has working set in memory
- If not enough frames, suspend process
2. Page Fault Frequency (PFF):
- Monitor page fault rate
- If too high: allocate more frames or suspend process
- If too low: can steal frames
3. Increase Physical Memory:
- Simplest solution
- Add more RAM
4. Reduce Multiprogramming:
- Suspend low-priority processes
- Free up frames for active processes
5. Improve Locality:
- Program restructuring
- Compiler optimizations
- Better data structures
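The working-set idea from solution 1 is easy to make concrete: the working set at time t is simply the set of distinct pages touched in the last Δ references. A minimal sketch (`working_set` is a name chosen here):

```python
def working_set(refs, delta, t):
    """Distinct pages referenced in the window of `delta` references ending at time t."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 3, 4, 4, 4, 2]
print(working_set(refs, 4, 6))  # pages in refs[3:7] -> {3, 4}
```

If the sum of working-set sizes across processes exceeds the available frames, the model says to suspend a process rather than let everyone thrash.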
Question 9: Explain File System and Directory Structures. ⭐⭐ [Medium]
File System Functions:
- File creation, deletion, modification
- Mapping logical to physical blocks
- Free space management
- Directory management
- Access control
File Allocation Methods:
1. Contiguous:
- Each file occupies contiguous blocks
- Fast access (direct)
- External fragmentation
- File growth problem
2. Linked (Chained):
- Each block points to next
- No external fragmentation
- Sequential access only (slow random)
- Pointer overhead
3. Indexed:
- Index block contains all pointers
- Supports direct access
- Index block overhead
- Used in UNIX (i-nodes)
UNIX I-node Structure:
┌─────────────────┐
│ File Metadata │
│ (size, perms) │
├─────────────────┤
│ Direct Blocks │ → 12 direct data blocks
│ (0-11) │
├─────────────────┤
│ Single Indirect │ → Points to block of pointers
├─────────────────┤
│ Double Indirect │ → Points to block of blocks of pointers
├─────────────────┤
│ Triple Indirect │ → For very large files
└─────────────────┘
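A classic follow-up is computing the maximum file size this structure supports. Assuming 4 KB blocks and 4-byte block pointers (values chosen for illustration, not mandated by the diagram), the arithmetic is:

```python
BLOCK = 4096                 # assumed 4 KB blocks
PTR = 4                      # assumed 4-byte block pointers
PPB = BLOCK // PTR           # pointers per block = 1024

direct = 12 * BLOCK          # 48 KB
single = PPB * BLOCK         # 4 MB
double = PPB**2 * BLOCK      # 4 GB
triple = PPB**3 * BLOCK      # 4 TB

max_file = direct + single + double + triple
print(f"Max file size: {max_file / 2**40:.2f} TiB")
```

The triple-indirect level dominates; each extra level of indirection multiplies capacity by another factor of 1024.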
Directory Structures:
1. Single-Level: All files in one directory
2. Two-Level: Separate directories per user
3. Tree-Structured: Hierarchical (standard)
4. Acyclic Graph: Allows sharing (symbolic links)
5. General Graph: With links, allows cycles (needs garbage collection)
Free Space Management:
- Bitmap: One bit per block
- Linked List: Link free blocks
- Grouping: Store addresses of n free blocks in one block
- Counting: Track contiguous free blocks
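The bitmap approach can be sketched with an integer used as a bit vector. The convention here (1 = free, 0 = used) and the `BitmapAllocator` class are illustrative choices:

```python
class BitmapAllocator:
    """One bit per disk block: 1 = free, 0 = used (an assumed convention)."""

    def __init__(self, n_blocks):
        self.bits = (1 << n_blocks) - 1    # all blocks start free

    def allocate(self):
        if self.bits == 0:
            return None                    # disk full
        # isolate lowest set bit -> first free block number
        block = (self.bits & -self.bits).bit_length() - 1
        self.bits &= ~(1 << block)         # mark as used
        return block

    def free(self, block):
        self.bits |= 1 << block            # mark as free again

alloc = BitmapAllocator(8)
print(alloc.allocate(), alloc.allocate())  # 0 1
alloc.free(0)
print(alloc.allocate())                    # 0 again (lowest free block)
```

Real file systems scan the bitmap a word at a time; the lowest-set-bit trick above is the same idea in miniature.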
Question 10: What is Banker's Algorithm? ⭐⭐⭐ [Hard]
Banker's Algorithm: Deadlock avoidance algorithm that simulates resource allocation to check safe state
Data Structures:
- Available: Vector of available resources of each type
- Max: Matrix of maximum demand of each process
- Allocation: Matrix of currently allocated resources
- Need: Matrix of remaining needs (Max - Allocation)
Safety Algorithm:
1. Work = Available, Finish[i] = false for all i
2. Find i such that:
- Finish[i] == false
- Need[i] <= Work
If no such i, go to step 4
3. Work = Work + Allocation[i]
Finish[i] = true
Go to step 2
4. If Finish[i] == true for all i, system is in SAFE state
Resource Request Algorithm:
When process Pi requests resources Request[i]:
1. If Request[i] <= Need[i], go to step 2
Else, error (exceeded maximum claim)
2. If Request[i] <= Available, go to step 3
Else, Pi must wait
3. Pretend to allocate:
Available = Available - Request[i]
Allocation[i] = Allocation[i] + Request[i]
Need[i] = Need[i] - Request[i]
4. Check if resulting state is SAFE
If safe → grant request
If unsafe → restore old state, make Pi wait
Example:
Processes: P0, P1, P2, P3, P4
Resources: A(10 instances), B(5 instances), C(7 instances)

      Allocation    Max       Need     Available
      A  B  C     A  B  C   A  B  C     A  B  C
P0:   0  1  0     7  5  3   7  4  3     3  3  2
P1:   2  0  0     3  2  2   1  2  2
P2:   3  0  2     9  0  2   6  0  0
P3:   2  1  1     2  2  2   0  1  1
P4:   0  0  2     4  3  3   4  3  1

Safe sequence: P1 → P3 → P0 → P2 → P4
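The safety algorithm above can be sketched directly in Python. `is_safe` is a name chosen here, and the data is the classic five-process textbook instance (the Allocation/Max rows for P3 and P4 follow the standard version of this example):

```python
# Allocation and Max matrices for processes P0..P4
ALLOC = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
MAX   = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]

def is_safe(available, max_need, allocation):
    n, m = len(max_need), len(available)
    work = list(available)                       # step 1: Work = Available
    need = [[max_need[i][j] - allocation[i][j] for j in range(m)]
            for i in range(n)]                   # Need = Max - Allocation
    finish = [False] * n
    sequence = []
    while len(sequence) < n:
        progressed = False
        for i in range(n):                       # step 2: find runnable Pi
            if not finish[i] and all(need[i][j] <= work[j] for j in range(m)):
                # step 3: Pi finishes and releases its allocation
                work = [work[j] + allocation[i][j] for j in range(m)]
                finish[i] = True
                sequence.append(i)
                progressed = True
        if not progressed:
            return False, []                     # step 4 fails: UNSAFE
    return True, sequence                        # step 4: all finished: SAFE

safe, seq = is_safe([3, 3, 2], MAX, ALLOC)
print(safe, [f"P{i}" for i in seq])  # True, with one valid safe sequence
```

Note that several safe sequences can exist for the same state; the algorithm only needs to find one.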
Limitations:
- Requires advance knowledge of maximum needs
- Process count must be static
- Number of resource instances must remain fixed
- Conservative: may delay requests that would not actually cause deadlock
Common Interview Follow-Up Questions
- "Difference between preemptive and non-preemptive scheduling?" → Preemptive can interrupt running process, non-preemptive runs to completion
- "What happens during context switch?" → Save current process state, restore new process state, switch page tables, flush TLB
- "Why is LRU hard to implement in hardware?" → Requires tracking every access, expensive counters
- "Copy-on-write in fork()?" → Parent and child share pages until write occurs, then copy
- "Inverted page table?" → One entry per physical frame instead of per virtual page, saves memory
Companies Asking OS Questions
| Company | Frequency | Focus Areas |
|---|---|---|
| Google | Very High | Memory management, Scheduling |
| Microsoft | Very High | Windows internals, Threads |
| Amazon | High | Linux internals, Performance |
| Oracle | Very High | Process management, Deadlocks |
| VMware | Very High | Virtualization, Memory |
| Apple | High | macOS/iOS internals |
| NVIDIA | High | GPU scheduling, Parallel processing |
Preparation Tips
- Understand Concepts Deeply: Don't just memorize, understand why
- Practice Calculations: Page faults, scheduling metrics
- Know Trade-offs: Every solution has costs
- Real-world Examples: How Linux/Windows implement these
- Code Semaphores: Write producer-consumer, readers-writers
- Memory Layout: Stack, heap, code, data segments
- Modern Extensions: Cgroups, namespaces (containers)
FAQ
Q: What's the difference between program and process?
A: Program is passive (code on disk), process is active (program in execution with state, memory, resources).
Q: Why does fork() return twice?
A: Returns PID to parent, 0 to child. This lets them differentiate and execute different code paths.
Q: What's the difference between internal and external fragmentation?
A: Internal: wasted space inside allocated block. External: wasted space between allocated blocks.
Q: Explain TLB (Translation Lookaside Buffer).
A: Cache for page table entries. Speeds up virtual to physical address translation.
Q: What is demand paging?
A: Only load pages when needed (on page fault). Reduces initial startup time and memory usage.
Understanding OS makes you a better programmer and system designer! 💻