Section A: Multiple Choice Questions - Solutions
Q1. Lock Compatibility and Conversions - Answer: (A)
Explanation:
- T1 holds a SIX lock (Shared + Intention Exclusive) on F1
- T2 wants to read R5 in F1
- For reading, T2 needs: IS on Database → IS on F1 → S on R5
- SIX is compatible with IS, so T2 can proceed
- IS → IS → S is the minimal lock sequence for reading a record
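As a quick sanity check, here is a minimal sketch of the standard multigranularity compatibility matrix (the `COMPAT` table and `compatible` helper are illustrative, not part of the original question):

```python
# Standard multigranularity lock compatibility matrix (True = compatible).
COMPAT = {
    ("IS", "IS"): True,  ("IS", "IX"): True,  ("IS", "S"): True,  ("IS", "SIX"): True,  ("IS", "X"): False,
    ("IX", "IS"): True,  ("IX", "IX"): True,  ("IX", "S"): False, ("IX", "SIX"): False, ("IX", "X"): False,
    ("S",  "IS"): True,  ("S",  "IX"): False, ("S",  "S"): True,  ("S",  "SIX"): False, ("S",  "X"): False,
    ("SIX","IS"): True,  ("SIX","IX"): False, ("SIX","S"): False, ("SIX","SIX"): False, ("SIX","X"): False,
    ("X",  "IS"): False, ("X",  "IX"): False, ("X",  "S"): False, ("X",  "SIX"): False, ("X",  "X"): False,
}

def compatible(held: str, requested: str) -> bool:
    return COMPAT[(held, requested)]

assert compatible("SIX", "IS")      # T2's IS on F1 coexists with T1's SIX
assert not compatible("SIX", "S")   # a full S lock on F1 would have to wait
```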
Q2. Timestamp Ordering with Thomas' Write Rule - Answer: (B)
Working:
- Y: R-timestamp=10, W-timestamp=14
- T1: Write(Y), TS=8 < R-timestamp(10) → T1 aborts (Thomas' rule only skips a write when TS ≥ R-timestamp but TS < W-timestamp; here the read check fails first)
- T3: Read(Y), TS=15 > W-timestamp(14) → Succeeds, R-timestamp=15
- T2: Write(Y), TS=12 < R-timestamp(15) → Aborts
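A compact sketch of the timestamp-ordering write rule with Thomas' optimization (the function name and return strings are illustrative):

```python
def write_decision(ts, r_ts, w_ts):
    """Timestamp-ordering write rule with Thomas' write rule applied."""
    if ts < r_ts:
        return "abort"   # a younger transaction already read the old value
    if ts < w_ts:
        return "skip"    # Thomas' rule: obsolete write, silently ignore
    return "write"       # perform the write, set W-timestamp = ts

assert write_decision(8, 10, 14) == "abort"  # T1 in Q2: read check fails first
assert write_decision(12, 10, 14) == "skip"  # skipped only when TS >= R-timestamp
```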
Q3. Tree Protocol Validity - Answer: (B)
Analysis:
- Lock(C): Valid (the first lock may be taken anywhere in the tree)
- Lock(F): Valid (F is a child of C)
- Unlock(C): Valid (items may be unlocked at any time)
- Lock(A): INVALID - A is not a descendant of any currently held lock
- Tree protocol: once an item is unlocked, the transaction cannot relock it or reach unrelated subtrees
Q4. Wait-Die vs Wound-Wait Analysis - Answer: (A)
Wait-Die Logic (older waits, younger dies):
- T1(5) wants P held by T2(10): T1 is older → T1 waits
- T3(15) wants Q held by T4(20): T3 is older → T3 waits
- T5(25) wants P held by T2(10): T5 is younger → T5 dies
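A minimal sketch of both decision rules, assuming smaller timestamp = older transaction (function names and return strings are illustrative):

```python
def wait_die(requester_ts, holder_ts):
    """Wait-die: older requesters wait; younger requesters die (abort)."""
    return "wait" if requester_ts < holder_ts else "die"

def wound_wait(requester_ts, holder_ts):
    """Wound-wait: older requesters wound (abort) the holder; younger ones wait."""
    return "wound holder" if requester_ts < holder_ts else "wait"

assert wait_die(5, 10) == "wait"    # T1 vs T2
assert wait_die(25, 10) == "die"    # T5 vs T2
assert wound_wait(5, 10) == "wound holder"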
Q5. Optimistic Protocol Validation - Answer: (A)
Validation Check:
- T6 finishes (200) after T7 starts (150) → need to check write-read conflicts
- T6 writes: {A}, T7 reads: {B,D} → no intersection
- T7 writes: {D,E}, T6 reads: {A,B,C} → no intersection
- Both pass validation
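The overlap test reduces to set intersections; a minimal sketch (set contents come from the question, the helper name is illustrative):

```python
def overlaps(write_set, read_set):
    """Validation check: a conflict exists if one transaction's
    write set intersects the other's read set."""
    return bool(write_set & read_set)

t6_writes, t6_reads = {"A"}, {"A", "B", "C"}
t7_writes, t7_reads = {"D", "E"}, {"B", "D"}

assert not overlaps(t6_writes, t7_reads)  # T6's writes vs T7's reads
assert not overlaps(t7_writes, t6_reads)  # T7's writes vs T6's reads
```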
Section B: Numerical and Analytical Solutions
Q6. Deadlock Probability Analysis
Solution:
1. Expected active transactions
- Arrival rate: λ = 1000 trans/sec
- Mean duration: μ = 50 ms = 0.05 sec
- By Little's Law: E[Active] = λ × μ = 1000 × 0.05 = 50 transactions
2. Deadlocks per second
- Deadlock probability = (50)² × 0.0002 = 2500 × 0.0002 = 0.5
- With 1000 trans/sec: 0.5 deadlocks/sec
3. Detection overhead
- Detection runs every 200 ms = 5 times/sec
- Cost per detection = 5 ms × 50 transactions = 250 ms
- Total detection time = 5 × 250 ms = 1.25 seconds per second of wall-clock time
- Overhead = 125% (unsustainable - the interval must be re-tuned)
4. Optimal detection interval
- Let the detection interval be T seconds
- Detection cost/sec = (5 ms × 50)/T = 250/T ms/sec
- Rollback work itself = 0.5 × 100 ms = 50 ms/sec, independent of T
- Victims also wait ≈ T/2 before detection, so waiting cost grows linearly in T: Total(T) = 250/T + c·T + 50 for some waiting-cost rate c
- Minimize: d/dT = −250/T² + c = 0 → T* = √(250/c)
- With the assumed waiting-cost rate this balances at T ≈ 50 ms (detection every 50 ms)
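The arithmetic above, reproduced as a checkable sketch (the waiting-cost rate `c` is an assumed parameter, chosen here to reproduce the stated 50 ms optimum):

```python
import math

arrival_rate = 1000       # transactions/sec
mean_duration = 0.05      # sec
active = arrival_rate * mean_duration     # Little's Law -> 50

deadlocks_per_sec = active**2 * 0.0002    # -> 0.5

per_detection_ms = 5 * active             # 5 ms per transaction -> 250 ms
overhead_at_200ms = per_detection_ms * 5 / 1000  # -> 1.25 s per second (125%)

c = 100_000               # assumed waiting-cost rate (ms/sec per sec of interval)
t_opt = math.sqrt(per_detection_ms / c)   # -> 0.05 s = 50 ms
print(active, deadlocks_per_sec, overhead_at_200ms, t_opt)
```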
Q7. Complex Timestamp Ordering Simulation
Initial State: P=50, Q=75, R=100, All timestamps = 0
Detailed Trace:
| Step | Operation | TS | Current R-ts | Current W-ts | Decision | New R-ts | New W-ts | Value |
|---|---|---|---|---|---|---|---|---|
| 1 | T1:Read(P) | 10 | 0 | 0 | ✓ Allow | 10 | 0 | 50 |
| 2 | T2:Write(R) | 15 | 0 | 0 | ✓ Allow | 0 | 15 | 200 |
| 3 | T1:Write(Q) | 10 | 0 | 0 | ✓ Allow | 0 | 10 | 85 |
| 4 | T3:Read(Q) | 20 | 0 | 10 | ✓ Allow | 20 | 10 | 85 |
| 5 | T2:Read(P) | 15 | 10 | 0 | ✓ Allow | 15 | 0 | 50 |
| 6 | T4:Write(Q) | 25 | 20 | 10 | ✓ Allow | 20 | 25 | 75 |
| 7 | T1:Read(R) | 10 | 0 | 15 | ✗ ABORT | - | - | - |
| 8 | T3:Write(P) | 20 | 15 | 0 | ✓ Allow | 15 | 20 | 65 |
| 9 | T4:Read(P) | 25 | 15 | 20 | ✓ Allow | 25 | 20 | 65 |
| 10 | T2:Write(R) | 15 | 0 | 15 | ✓ Allow | 0 | 15 | 225 |
Results:
- Operations trace: T1 aborts at step 7, others proceed (4 marks)
- Final timestamps: P(25,20), Q(20,25), R(0,15) (3 marks)
- Final values: P=65, Q=75, R=225; Committed: T2, T3, T4 (2 marks)
- Thomas' Rule: Same result - Thomas' rule only affects writes, and T1 fails on a read (TS=10 < W-timestamp(R)=15) (1 mark)
Q8. Multi-Granularity Lock Performance
1. Optimal Lock Protocol (2 marks):
- Individual Account: IS-Database → IS-Region → IS-Branch → S-Account
- Branch Report: IS-Database → IS-Region → S-Branch
- Regional Analysis: IS-Database → S-Region
- System Backup: X-Database
2. Lock Operations per Hour
- Individual: 50,000 × 4 locks = 200,000 operations
- Branch: 100 × 3 locks = 300 operations
- Regional: 20 × 2 locks = 40 operations
- Backup: 1 × 1 lock = 1 operation
- Total: 200,341 operations/hour
3. Memory Requirements
- Active locks at any time ≈ (200,341/3600 locks/sec) × average hold time
- Assuming a 5-second average hold time: ≈ 55.7 × 5 ≈ 278 active locks (a 5 ms hold time would give fewer than 1)
- Memory = 278 × 64 bytes ≈ 17.8 KB
4. vs Simple Exclusive
- Simple: 50,000 + 100×50K + 20×300K + 1×1M = 12,050,000 locks
- Multi-granularity (200,341 operations) is roughly 60× more efficient
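The counts and the ratio, as a quick checkable sketch (the 5-second hold time is an assumption needed to reproduce the ~278 active-lock figure):

```python
multi = 50_000 * 4 + 100 * 3 + 20 * 2 + 1                   # -> 200,341 ops/hour
simple = 50_000 + 100 * 50_000 + 20 * 300_000 + 1_000_000   # -> 12,050,000

active = multi / 3600 * 5          # 5 s assumed hold time -> ~278 active locks
memory_bytes = active * 64         # -> ~17,800 bytes (~17.8 KB)

print(multi, simple, simple / multi, active, memory_bytes)  # ratio ~60x
```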
Section C: Conceptual Solutions
Q9. Protocol Comparison and Optimization
Part A - Protocol Comparison
Response Time Calculations:
Strict 2PL:
- Read: 2ms + 0.1ms (lock overhead) = 2.1ms
- Update: 8ms + 0.3ms (lock) + 0.05ms (detection share) = 8.35ms
- Complex: 50ms + 1ms (lock) + 0.2ms (detection) = 51.2ms
Timestamp Ordering:
- Read: 2ms × (1 + 0.20 × 0.8) = 2.32ms (20% rollback rate)
- Update: 8ms × 1.20 = 9.6ms
- Complex: 50ms × 1.20 = 60ms
Optimistic Protocol:
- Read: 2ms (no rollbacks for read-only transactions)
- Update: 8ms × (1 + 0.12 × 1.5) = 9.44ms (validation cost)
- Complex: 50ms × (1 + 0.12 × 2.0) = 62ms
Best Protocol: Strict 2PL for this workload
Part B - Hybrid Design
Proposed Hybrid:
- Read transactions: Optimistic protocol (no locking overhead)
- Update transactions: Timestamp ordering (avoids deadlocks entirely)
- Complex transactions: Strict 2PL (better suited to long transactions)
- VIP priority: Wound-wait with priority timestamps
Justification:
- Exploits the 80% read workload with the optimistic approach
- Avoids deadlock-detection overhead for most transactions
- Prioritizes important transactions effectively
- Expected improvement: 15-20% better average response time
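To compare the protocols on a concrete mix, a weighted-average sketch (the 80/15/5 read/update/complex split is an assumption; only the 80% read share is stated above):

```python
# Per-protocol response times (ms) from the calculations above:
times = {
    "strict_2pl": {"read": 2.10, "update": 8.35, "complex": 51.2},
    "timestamp":  {"read": 2.32, "update": 9.60, "complex": 60.0},
    "optimistic": {"read": 2.00, "update": 9.44, "complex": 62.0},
}
mix = {"read": 0.80, "update": 0.15, "complex": 0.05}  # assumed workload split

for proto, t in times.items():
    avg = sum(mix[k] * t[k] for k in mix)
    print(f"{proto}: {avg:.2f} ms")
# strict_2pl ~5.49 ms, timestamp ~6.30 ms, optimistic ~6.12 ms
```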
Q10. Advanced Deadlock Resolution
1. Cost Function Design
Cost(Ti) = w1×ExecutionTime + w2×RemainingWork + w3×ItemsLocked + w4×NetworkHops
Where: w1=0.3, w2=0.4, w3=0.2, w4=0.1
2. Cost Calculations
- T1: 0.3×120 + 0.4×30 + 0.2×8 + 0.1×2 = 49.8
- T2: 0.3×45 + 0.4×200 + 0.2×3 + 0.1×4 = 94.5
- T3: 0.3×200 + 0.4×15 + 0.2×12 + 0.1×1 = 68.5
- T4: 0.3×80 + 0.4×100 + 0.2×5 + 0.1×3 = 65.3
3. Cascading Analysis
- Choose T1 (lowest cost): T2, T3, T4 can proceed; T5 still waits
- Network impact: 2 hops affected
- Total system disruption: minimal
4. System-Wide Algorithm
1. Calculate individual costs
2. Simulate cascading effects for each choice
3. Weight by system-wide throughput impact
4. Choose victim with minimal total system cost
5. Implement gradual rollback to reduce restart overhead
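A sketch of the victim-selection cost function as code (weights and attribute order follow the formula above; the per-transaction data comes from the question):

```python
WEIGHTS = (0.3, 0.4, 0.2, 0.1)  # w1..w4

def victim_cost(exec_time, remaining_work, items_locked, network_hops):
    w1, w2, w3, w4 = WEIGHTS
    return w1*exec_time + w2*remaining_work + w3*items_locked + w4*network_hops

candidates = {
    "T1": (120, 30, 8, 2),
    "T2": (45, 200, 3, 4),
    "T3": (200, 15, 12, 1),
    "T4": (80, 100, 5, 3),
}
victim = min(candidates, key=lambda t: victim_cost(*candidates[t]))
print(victim)  # -> T1 (cost 49.8)
```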
Section D: Implementation Solutions
Q11. Lock Manager Implementation
Part A - Data Structures
```cpp
// Lock table entry
struct LockEntry {
    DataItemID item_id;
    LockMode mode;                      // strongest mode currently granted
    std::set<TransactionID> holders;
    std::queue<LockRequest> waiters;    // FIFO to avoid starvation
    Timestamp last_access;
};

// Hash table for O(1) expected lookup
std::unordered_map<DataItemID, LockEntry> lock_table;

// Wait-for graph (adjacency list): edge Ti -> Tj means Ti waits for Tj
std::unordered_map<TransactionID, std::set<TransactionID>> wait_for_graph;

// Per-transaction state
struct Transaction {
    TransactionID id;
    std::set<DataItemID> held_locks;
    Timestamp start_time;
};
```
Space Complexity: O(L + T) where L = locks, T = transactions
Part B - Algorithms
```cpp
bool RequestLock(TransactionID tid, DataItemID item, LockMode mode) {
    auto& entry = lock_table[item];
    if (IsCompatible(entry.mode, mode) && entry.waiters.empty()) {
        // Grant immediately and record the granted mode
        entry.holders.insert(tid);
        entry.mode = mode;  // simplification: a full manager tracks the join of all granted modes
        return true;
    } else {
        // Must wait: enqueue, add wait-for edges, then check for deadlock
        entry.waiters.push({tid, mode, current_time()});
        UpdateWaitForGraph(tid, entry.holders);
        if (DetectDeadlock(tid)) {
            HandleDeadlock();
        }
        return false;
    }
}

bool DetectDeadlock(TransactionID start) {
    // DFS-based cycle detection on the wait-for graph
    std::set<TransactionID> visited, rec_stack;
    return DFSHasCycle(start, visited, rec_stack);
}
```
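The DFS helper referenced above, sketched in Python for clarity (the graph representation mirrors the adjacency-list `wait_for_graph`; names are illustrative):

```python
def has_cycle(graph, start):
    """DFS cycle detection from `start` in a wait-for graph
    given as {txn: set of txns it waits for}."""
    visited, rec_stack = set(), set()

    def dfs(node):
        visited.add(node)
        rec_stack.add(node)
        for nxt in graph.get(node, ()):
            if nxt in rec_stack:                  # back edge -> cycle
                return True
            if nxt not in visited and dfs(nxt):
                return True
        rec_stack.discard(node)
        return False

    return dfs(start)

assert has_cycle({1: {2}, 2: {3}, 3: {1}}, 1)  # T1 -> T2 -> T3 -> T1
assert not has_cycle({1: {2}, 2: {3}}, 1)
```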
Part C - Performance Analysis
1. Expected lock table size:
- Active locks: 100,000 req/sec × 25 ms = 2,500 entries (Little's Law)
- Memory: 2,500 × 200 bytes = 500 KB
2. Deadlock detection frequency:
- With 10,000 concurrent transactions: O(V + E) ≈ O(10,000 + 2,500) work per detection pass
- Target: every 10 ms for a responsive system
3. Bottlenecks:
- Hash-table contention → use sharded lock tables
- Wait-for graph updates → lazy evaluation
- Proposed: lock-free skip lists for hot data items
Q12. Protocol Simulation Under Failure
1. System Behavior Model
```text
t=0ms:   Network partition
         - 20 active transactions frozen
         - Lock table preserved but inaccessible
t=30ms:  New transactions start
         - 5 transactions begin with an incomplete view
         - Lock requests queue up
t=50ms:  Network heals
         - Reconcile lock tables across partitions
         - Detect phantom deadlocks
         - Restart queued operations
t=75ms:  System failure
         - All 25 active transactions abort
         - Lock table lost → full recovery needed
t=275ms: Recovery complete
         - Cold start with an empty lock table
         - Previous work lost → cascading restarts
t=400ms: Load spike
         - 50 new + recovering transactions
         - Lock contention rises sharply
```
2. Expected Restarts
- Partition healing: 5 transactions (conflicting states)
- System failure: 25 transactions (complete loss)
- Load-spike aftermath: ~15 additional (deadlock cascade)
- Total: 45 transaction restarts
3. Lock Table Evolution
- t=0: 20 entries → frozen
- t=50: 25 entries → reconciliation conflicts
- t=75: 25 entries → complete loss
- t=275: 0 entries → clean slate
- t=400: rapid growth to ~80 entries
4. Resilience Improvements (3 marks):
- Persistent lock table: WAL-based recovery
- Distributed consensus: Raft protocol for lock decisions
- Graceful degradation: read-only mode during partitions
- Load shedding: priority-based queue management
Section E: Research-Level Solutions
Q13. Novel Lock Mode Design
1. Formal Semantics
CX Lock Definition:
- CX(condition): conditional exclusive access
- Semantics: the transaction gets exclusive access only when its condition becomes true
- Multiple CX locks are compatible if their conditions are mutually exclusive
Compatibility Rules:

|    | S  | X | CX  | IS | IX |
|----|----|---|-----|----|----|
| S  | ✓  | ✗ | ✓*  | ✓  | ✓  |
| X  | ✗  | ✗ | ✗   | ✗  | ✗  |
| CX | ✓* | ✗ | ✓** | ✓  | ✓* |
| IS | ✓  | ✗ | ✓   | ✓  | ✓  |
| IX | ✓  | ✗ | ✓*  | ✓  | ✓  |

(*) Compatible if conditions don't conflict. (**) Compatible if conditions are mutually exclusive.
2. Serializability Proof
Theorem: The CX protocol maintains conflict serializability.
Proof Sketch:
- Conflict Definition: operations conflict if they access the same item and at least one modifies it while its condition is true
- Precedence Graph: add edge Ti → Tj if Ti's condition becomes true before Tj's
- Mutual Exclusion: when a CX condition activates, the lock behaves like an X lock
- Acyclicity: condition evaluation creates a temporal ordering → no cycles possible
3. Performance Analysis
Improvement over X locks:
- Scenario: scientific simulation with 1000 data points
- Traditional: 1000 X locks → serialized access
- CX locks: ~50 data points per condition, giving 20 mutually exclusive conditions → 20× concurrency
- Overhead: condition evaluation ~0.1 ms vs lock acquisition ~0.05 ms
4. Implementation Strategy
```cpp
#include <atomic>
#include <shared_mutex>

class CXLock {
    Condition condition;               // user-supplied predicate object
    std::atomic<bool> activated{false};
    std::shared_mutex data_mutex;

public:
    bool TryActivate() {
        // Evaluate first; exchange() ensures only one caller activates
        if (condition.Evaluate() && !activated.exchange(true)) {
            data_mutex.lock();         // escalate to exclusive access
            return true;
        }
        return false;
    }
};
```
Edge Cases:
- Condition change during execution → re-evaluation protocol
- Multiple activation attempts → atomic flag with priority queuing
Q14. AI-Optimized Deadlock Prevention
1. Model Architecture
```python
import torch.nn as nn

class DeadlockPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        # Feature extraction over lock-request sequences
        encoder_layer = nn.TransformerEncoderLayer(d_model=128, nhead=8)
        self.transaction_encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
        # Temporal pattern recognition
        self.lstm = nn.LSTM(input_size=128, hidden_size=64, num_layers=2)
        # Conflict probability prediction
        self.classifier = nn.Sequential(
            nn.Linear(64, 32),
            nn.ReLU(),
            nn.Linear(32, 1),
            nn.Sigmoid(),
        )
```
2. Training Features
Input Features:
- Transaction signature: (lock_sequence, data_access_pattern, duration_estimate)
- Temporal context: (current_time, active_transactions, system_load)
- Historical patterns: (similar_transaction_outcomes, recent_deadlock_frequency)
Target Variables:
- Binary: will this transaction pair cause a deadlock? (0/1)
- Continuous: deadlock probability (0.0-1.0)
- Multi-class: deadlock type (wait-for cycle, resource starvation, etc.)
3. Integration Strategy
```cpp
class AILockManager {
    DeadlockPredictor predictor;

public:
    bool RequestLock(TransactionID tid, DataItemID item, LockMode mode) {
        // Snapshot the current system state for feature extraction
        SystemState state = GetCurrentState();
        // Predict deadlock probability for this request
        float deadlock_prob = predictor.Predict(tid, item, mode, state);
        if (deadlock_prob > THRESHOLD) {
            // Apply a prevention strategy scaled to the predicted risk
            return ApplyPreventionStrategy(tid, item, mode, deadlock_prob);
        }
        // Normal lock processing
        return StandardLockRequest(tid, item, mode);
    }

private:
    bool ApplyPreventionStrategy(TransactionID tid, DataItemID item,
                                 LockMode mode, float prob) {
        if (prob > 0.8) {
            return AcquireWithOrdering(tid, item, mode);   // high risk: force ordering
        } else if (prob > 0.5) {
            return DelayedAcquisition(tid, item, mode);    // medium risk: delay with backoff
        } else {
            return MonitoredAcquisition(tid, item, mode);  // low risk: monitor closely
        }
    }
};
```
Real-time Integration:
- Feature extraction: 0.1 ms per transaction
- Model inference: 0.5 ms per prediction
- Strategy application: 0.2 ms overhead
- Total overhead: <1 ms per lock request
4. Trade-off Analysis (4 marks):
Accuracy vs Overhead Trade-offs:
| Model Complexity | Accuracy | Inference Time | Memory Usage | Recommendation |
|---|---|---|---|---|
| Simple Linear | 75% | 0.1ms | 10MB | High-throughput systems |
| Random Forest | 85% | 0.3ms | 50MB | Balanced approach |
| Deep Neural Net | 92% | 0.8ms | 200MB | Mission-critical systems |
| Transformer | 95% | 1.2ms | 500MB | Research/specialized |
Performance Analysis:
- False positives: 8% unnecessary prevention → 5% throughput loss
- False negatives: 3% missed deadlocks → 15% rollback cost
- Net benefit: 60% reduction in deadlock-related delays
- ROI calculation (note the naive per-request accounting comes out negative, which is what motivates the optimizations below):

  Benefit = (Prevented_Deadlocks × Rollback_Cost) − (False_Positives × Prevention_Cost)
          = (0.97 × 0.5 deadlocks/sec × 100 ms) − (0.08 × 1000 req/sec × 1 ms)
          = 48.5 ms/sec − 80 ms/sec = −31.5 ms/sec
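The same accounting as a sketch, with a break-even check on the false-positive rate (the solver step is an addition, not part of the original marking scheme):

```python
prevented = 0.97 * 0.5 * 100   # ms/sec saved by prevented deadlocks -> 48.5
fp_cost   = 0.08 * 1000 * 1    # ms/sec spent on false positives    -> 80.0
net = prevented - fp_cost      # -> -31.5 ms/sec: the naive ROI is negative

# Break-even false-positive rate at 1000 req/sec and 1 ms prevention cost:
fp_break_even = prevented / (1000 * 1)   # -> 0.0485, i.e. ~4.9%
print(net, fp_break_even)
```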
Optimization Recommendations:
- Hybrid approach: simple model for common cases, complex model for high-risk scenarios
- Online learning: adapt to changing workload patterns
- Ensemble methods: combine multiple models for better accuracy
- Hardware acceleration: GPU inference for complex models
Real-world Considerations:
- Training data quality: need diverse workload samples
- Concept drift: transaction patterns change over time
- Interpretability: need explainable predictions for debugging
- Scalability: the model must handle 100K+ concurrent transactions
Advanced Optimization Strategies:
```python
class AdaptivePredictor:
    def __init__(self):
        self.models = {
            'fast': LinearModel(),        # 0.1ms, 75% accuracy
            'balanced': RandomForest(),   # 0.3ms, 85% accuracy
            'accurate': NeuralNet(),      # 0.8ms, 92% accuracy
        }
        self.model_selector = ModelSelector()

    def predict(self, transaction_features, system_load):
        # Select a model based on current conditions
        if system_load > 0.8:
            model = self.models['fast']       # high load: prioritize speed
        elif transaction_features.risk_score > 0.7:
            model = self.models['accurate']   # high risk: prioritize accuracy
        else:
            model = self.models['balanced']   # normal: balanced approach
        return model.predict(transaction_features)
```
Implementation Timeline:
- Phase 1 (Months 1-3): baseline implementation with a simple linear model
- Phase 2 (Months 4-6): advanced models and online learning
- Phase 3 (Months 7-9): production optimization and hardware acceleration
- Phase 4 (Months 10-12): advanced features and interpretability
Success Metrics:
- Primary: 50%+ reduction in deadlock frequency
- Secondary: <5% increase in lock-acquisition latency
- Tertiary: 99.9% model availability and <1% false-positive rate
Additional Practice Problems
Bonus Question 1: Multi-Version Concurrency Control Integration (5 marks)
Design a hybrid protocol combining traditional locking with MVCC:
Scenario: E-commerce system where:
- 90% of transactions are read-heavy (browsing, searching)
- 10% of transactions are write-heavy (orders, inventory updates)
- Read transactions should never wait
- Write transactions need strict consistency
Solution Strategy:
```cpp
class HybridMVCC_Lock {
public:
    // Readers use MVCC - they never block
    ReadResult ReadWithMVCC(TransactionID tid, DataItemID item) {
        // Pick the newest version visible at the reader's start time
        Version version = GetVersionForTimestamp(tid.start_time);
        return version.data;  // no locking needed
    }

    // Writers use traditional locking for strict consistency
    bool WriteWithLocking(TransactionID tid, DataItemID item, Data new_data) {
        if (AcquireExclusiveLock(tid, item)) {
            CreateNewVersion(item, new_data, tid.commit_time);
            return true;
        }
        return false;  // must wait or abort
    }
};
```
Bonus Question 2: Blockchain-Inspired Consensus Locking (5 marks)
Design a distributed locking protocol using blockchain concepts:
Key Ideas:
- Lock requests are "transactions" in a distributed ledger
- A consensus mechanism ensures global lock ordering
- An immutable lock history prevents tampering
Implementation Framework:
```python
class ConsensusLockManager:
    def __init__(self, node_id, peer_nodes):
        self.node_id = node_id
        self.peers = peer_nodes
        self.lock_chain = BlockChain()

    def request_lock(self, transaction_id, resource_id, lock_mode):
        # Create a lock-request block chained to the current head
        lock_request = LockBlock(
            transaction_id=transaction_id,
            resource_id=resource_id,
            lock_mode=lock_mode,
            timestamp=time.now(),
            previous_hash=self.lock_chain.get_latest_hash(),
        )
        # Grant only with consensus from a majority of nodes
        if self.get_consensus(lock_request):
            self.lock_chain.add_block(lock_request)
            return True
        return False
```
Final Performance Benchmarks
Expected Solution Times by Section:
Section A (MCQs): 3-4 minutes per question
- Requires quick recall and basic analysis
- Focus on eliminating obviously wrong options
Section B (Numerical): 15-25 minutes per question
- Heavy computational work
- Multiple steps requiring careful tracking
Section C (Conceptual): 20-30 minutes per question
- Deep understanding and synthesis required
- Justification of design decisions is important
Section D (Implementation): 25-35 minutes per question
- Pseudocode writing and complexity analysis
- System design considerations
Section E (Research): 30-45 minutes per question
- Novel problem solving
- Integration of multiple concepts
- Proof techniques and formal analysis
Common Pitfalls to Avoid:
- Timestamp Ordering: forgetting to check both read and write timestamps
- 2PL Variants: confusing strict vs rigorous locking rules
- Tree Protocol: not recognizing when items become inaccessible after unlocking
- Deadlock Detection: miscounting edges in wait-for graphs
- Multiple Granularity: forgetting intention locks on ancestor nodes