# Parallel, Concurrent, and Distributed Programming in Java

## Parallel Programming in Java
<b><u>Week 1 : Task Parallelism</u></b>
- [ ] Demonstrate task parallelism using Async/Finish constructs
- [ ] Create task-parallel programs using Java's Fork/Join Framework
- [ ] Interpret Computation Graph abstraction for task-parallel programs
- [ ] Evaluate the Multiprocessor Scheduling problem using Computation Graphs
- [ ] Assess sequential bottlenecks using Amdahl's Law

✅ <i>Mini project 1 : Reciprocal-Array-Sum using the Java Fork/Join Framework</i>
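
A minimal Fork/Join sketch in the spirit of the mini project (class name and threshold are illustrative, not the assignment's API): fork the left half, compute the right half directly, then join.

```java
import java.util.concurrent.ForkJoinPool;
import java.util.concurrent.RecursiveTask;

// Divide-and-conquer reciprocal sum: split until the range is small,
// then sum sequentially.
class ReciprocalSumTask extends RecursiveTask<Double> {
    private static final int THRESHOLD = 1 << 14; // illustrative cutoff
    private final double[] a;
    private final int lo, hi;

    ReciprocalSumTask(double[] a, int lo, int hi) {
        this.a = a; this.lo = lo; this.hi = hi;
    }

    @Override
    protected Double compute() {
        if (hi - lo <= THRESHOLD) {
            double sum = 0;
            for (int i = lo; i < hi; i++) sum += 1 / a[i];
            return sum;
        }
        int mid = (lo + hi) >>> 1;
        ReciprocalSumTask left = new ReciprocalSumTask(a, lo, mid);
        left.fork();                                        // async
        double right = new ReciprocalSumTask(a, mid, hi).compute();
        return right + left.join();                         // finish
    }

    public static void main(String[] args) {
        double[] a = new double[1 << 20];
        java.util.Arrays.fill(a, 2.0);
        double sum = ForkJoinPool.commonPool()
                .invoke(new ReciprocalSumTask(a, 0, a.length));
        System.out.println(sum); // 524288.0
    }
}
```
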
<b><u>Week 2 : Functional Parallelism</u></b>
- [ ] Demonstrate functional parallelism using the Future construct
- [ ] Create functional-parallel programs using Java's Fork/Join Framework
- [ ] Apply the principle of memoization to optimize functional parallelism
- [ ] Create functional-parallel programs using Java Streams
- [ ] Explain the concepts of data races and functional/structural determinism

✅ <i>Mini project 2 : Analyzing Student Statistics Using Java Parallel Streams</i>
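
Two of the Week 2 constructs in one hedged sketch (the grades list is made-up data, not the mini project's dataset): a `CompletableFuture` as the future construct, and a parallel stream computing an average.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class FunctionalDemo {
    public static void main(String[] args) {
        // Future construct: the value is computed asynchronously and
        // retrieved with join(), which blocks until it is ready.
        CompletableFuture<Integer> f =
                CompletableFuture.supplyAsync(() -> 6 * 7);
        System.out.println(f.join()); // 42

        // Parallel stream: a side-effect-free functional pipeline,
        // here averaging grades (toy data).
        List<Double> grades = List.of(71.0, 88.5, 93.0, 64.5);
        double avg = grades.parallelStream()
                .mapToDouble(Double::doubleValue)
                .average()
                .orElse(0.0);
        System.out.println(avg); // 79.25
    }
}
```
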
<b><u>Week 3 : Loop Parallelism</u></b>
- [ ] Create programs with loop-level parallelism using the Forall and Java Stream constructs
- [ ] Evaluate loop-level parallelism in a matrix-multiplication example
- [ ] Examine the barrier construct for parallel loops
- [ ] Evaluate parallel loops with barriers in an iterative-averaging example
- [ ] Apply the concept of iteration grouping/chunking to improve the performance of parallel loops

✅ <i>Mini project 3 : Parallelizing Matrix-Matrix Multiply Using Loop Parallelism</i>
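
A loop-parallel matrix multiply sketch, with a parallel `IntStream` over rows standing in for a forall loop (dimensions and test data are illustrative):

```java
import java.util.stream.IntStream;

public class MatMul {
    // Parallel outer loop over rows; each iteration writes a distinct
    // row of c, so no synchronization is needed.
    static double[][] multiply(double[][] a, double[][] b) {
        int n = a.length;
        double[][] c = new double[n][n];
        IntStream.range(0, n).parallel().forEach(i -> {
            for (int j = 0; j < n; j++) {
                double sum = 0;
                for (int k = 0; k < n; k++) sum += a[i][k] * b[k][j];
                c[i][j] = sum;
            }
        });
        return c;
    }

    public static void main(String[] args) {
        double[][] id = {{1, 0}, {0, 1}}, m = {{2, 3}, {4, 5}};
        // Identity times m gives m back: [[2.0, 3.0], [4.0, 5.0]]
        System.out.println(java.util.Arrays.deepToString(multiply(id, m)));
    }
}
```
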
<b><u>Week 4 : Data Flow Synchronization and Pipelining</u></b>
- [ ] Create split-phase barriers using Java's Phaser construct
- [ ] Create point-to-point synchronization patterns using Java's Phaser construct
- [ ] Evaluate parallel loops with point-to-point synchronization in an iterative-averaging example
- [ ] Analyze pipeline parallelism using the principles of point-to-point synchronization
- [ ] Interpret data flow parallelism using the data-driven-task construct

✅ <i>Mini project 4 : Using Phasers to Optimize Data-Parallel Applications</i>
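
A split-phase barrier sketch with `java.util.concurrent.Phaser` (worker count and the printed "work" are placeholders): `arrive()` signals without blocking, so barrier-independent local work can overlap the wait.

```java
import java.util.concurrent.Phaser;

public class SplitPhaseDemo {
    public static void main(String[] args) {
        final int WORKERS = 4;
        Phaser ph = new Phaser(WORKERS); // all parties registered up front
        for (int w = 0; w < WORKERS; w++) {
            final int id = w;
            new Thread(() -> {
                System.out.println("worker " + id + ": phase-0 work");
                int phase = ph.arrive();  // signal arrival, don't block yet
                // ... local work independent of the barrier overlaps here ...
                ph.awaitAdvance(phase);   // now wait for all other workers
                System.out.println("worker " + id + ": phase-1 work");
            }).start();
        }
    }
}
```
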
## Concurrent Programming in Java
<b><u>Week 1 : Threads and Locks</u></b>
- [ ] Understand the role of Java threads in building concurrent programs
- [ ] Create concurrent programs using Java threads and the synchronized statement (structured locks)
- [ ] Create concurrent programs using Java threads and lock primitives in the java.util.concurrent library (unstructured locks)
- [ ] Analyze programs with threads and locks to identify liveness and related concurrency bugs
- [ ] Evaluate different approaches to solving the classical Dining Philosophers Problem

✅ <i>Mini project 1 : Locking and Synchronization</i>
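
A side-by-side sketch of structured vs. unstructured locking (counter fields and thread counts are arbitrary):

```java
import java.util.concurrent.locks.ReentrantLock;

public class Counter {
    private int structured, unstructured;
    private final ReentrantLock lock = new ReentrantLock();

    // Structured lock: the synchronized statement acquires and
    // releases the monitor automatically around the block.
    void incStructured() {
        synchronized (this) { structured++; }
    }

    // Unstructured lock: acquire/release are explicit library calls,
    // so release must be guaranteed with try/finally.
    void incUnstructured() {
        lock.lock();
        try { unstructured++; } finally { lock.unlock(); }
    }

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int k = 0; k < 100_000; k++) {
                    c.incStructured();
                    c.incUnstructured();
                }
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(c.structured + " " + c.unstructured); // 400000 400000
    }
}
```
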
<b><u>Week 2 : Critical Sections and Isolation</u></b>
- [ ] Create concurrent programs with critical sections to coordinate accesses to shared resources
- [ ] Create concurrent programs with object-based isolation to coordinate accesses to shared resources with more overlap than critical sections
- [ ] Evaluate different approaches to implementing the Concurrent Spanning Tree algorithm
- [ ] Create concurrent programs using Java's atomic variables
- [ ] Evaluate the impact of read vs. write operations on concurrent accesses to shared resources

✅ <i>Mini project 2 : Global and Object-Based Isolation</i>
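
A small sketch of the atomic-variables bullet (thread and iteration counts are arbitrary): `AtomicInteger.incrementAndGet()` performs the read-modify-write atomically, so the counter needs no explicit critical section.

```java
import java.util.concurrent.atomic.AtomicInteger;

public class AtomicDemo {
    public static void main(String[] args) throws InterruptedException {
        AtomicInteger hits = new AtomicInteger();
        Thread[] ts = new Thread[8];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                for (int k = 0; k < 50_000; k++) hits.incrementAndGet();
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(hits.get()); // 400000, with no locks
    }
}
```
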
<b><u>Week 3 : Actors</u></b>
- [ ] Understand the Actor model for building concurrent programs
- [ ] Create simple concurrent programs using the Actor model
- [ ] Analyze an Actor-based implementation of the Sieve of Eratosthenes program
- [ ] Create Actor-based implementations of the Producer-Consumer pattern
- [ ] Create Actor-based implementations of concurrent accesses on a bounded resource

✅ <i>Mini project 3 : Sieve of Eratosthenes Using Actor Parallelism</i>
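
The course exercises use the PCDP actor library; since that API isn't reproduced here, below is a generic mailbox-style sketch of the actor idea (class names are made up): one queue plus one thread per actor means message processing inside the actor needs no locks.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Minimal actor sketch: messages go into a mailbox, a single thread
// drains it, so process() runs one message at a time.
abstract class Actor<T> {
    private final BlockingQueue<T> mailbox = new LinkedBlockingQueue<>();

    Actor() {
        Thread t = new Thread(() -> {
            try {
                while (true) process(mailbox.take());
            } catch (InterruptedException ignored) { }
        });
        t.setDaemon(true);
        t.start();
    }

    final void send(T msg) { mailbox.add(msg); }

    abstract void process(T msg);
}

public class ActorDemo {
    public static void main(String[] args) throws InterruptedException {
        Actor<Integer> printer = new Actor<Integer>() {
            @Override void process(Integer msg) {
                System.out.println("got " + msg);
            }
        };
        for (int i = 0; i < 3; i++) printer.send(i);
        Thread.sleep(200); // let the daemon thread drain the mailbox
    }
}
```
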
<b><u>Week 4 : Concurrent Data Structures</u></b>
- [ ] Understand the principle of optimistic concurrency in concurrent algorithms
- [ ] Understand implementation of concurrent queues based on optimistic concurrency
- [ ] Understand linearizability as a correctness condition for concurrent data structures
- [ ] Create concurrent Java programs that use the java.util.concurrent.ConcurrentHashMap library
- [ ] Analyze a concurrent algorithm for computing a Minimum Spanning Tree of an undirected graph

✅ <i>Mini project 4 : Parallelization of Boruvka's Minimum Spanning Tree Algorithm</i>
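
A `ConcurrentHashMap` sketch (the word list is toy data): `merge()` gives an atomic read-modify-write per key, so concurrent counting needs no external lock.

```java
import java.util.concurrent.ConcurrentHashMap;

public class WordCount {
    public static void main(String[] args) throws InterruptedException {
        ConcurrentHashMap<String, Integer> counts = new ConcurrentHashMap<>();
        String[] words = {"spanning", "tree", "tree", "graph"};
        Thread[] ts = new Thread[4];
        for (int i = 0; i < ts.length; i++) {
            ts[i] = new Thread(() -> {
                // merge() atomically inserts 1 or adds 1 to the old count
                for (String w : words) counts.merge(w, 1, Integer::sum);
            });
            ts[i].start();
        }
        for (Thread t : ts) t.join();
        System.out.println(counts); // tree=8, spanning=4, graph=4 (order may vary)
    }
}
```
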
## Distributed Programming in Java
<b><u>Week 1 : Distributed Map Reduce</u></b>
- [ ] Explain the MapReduce paradigm for analyzing data represented as key-value pairs
- [ ] Apply the MapReduce paradigm to programs written using the Apache Hadoop framework
- [ ] Create MapReduce programs using the Apache Spark framework
- [ ] Explain the TF-IDF statistic used in data mining, and how it can be computed using the MapReduce paradigm
- [ ] Create an implementation of the PageRank algorithm using the Apache Spark framework

✅ <i>Mini project 1 : PageRank with Spark</i>
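
The classic MapReduce example in Spark's Java API, word count, as a hedged sketch (the input path `input.txt` is a placeholder; assumes spark-core on the classpath): map each line to (word, 1) pairs, then reduce by key.

```java
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import scala.Tuple2;

public class SparkWordCount {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setAppName("wordcount").setMaster("local[*]");
        try (JavaSparkContext sc = new JavaSparkContext(conf)) {
            JavaPairRDD<String, Integer> counts = sc.textFile("input.txt")
                    // map phase: line -> words -> (word, 1) pairs
                    .flatMap(line -> Arrays.asList(line.split("\\s+")).iterator())
                    .mapToPair(word -> new Tuple2<>(word, 1))
                    // reduce phase: sum the counts for each key
                    .reduceByKey(Integer::sum);
            counts.collect().forEach(t ->
                    System.out.println(t._1() + " " + t._2()));
        }
    }
}
```
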
<b><u>Week 2 : Client-Server Programming</u></b>
- [ ] Generate distributed client-server applications using sockets
- [ ] Demonstrate different approaches to serialization and deserialization of data structures for distributed programming
- [ ] Recall the use of remote method invocations as a higher-level primitive for distributed programming (compared to sockets)
- [ ] Evaluate the use of multicast sockets as a generalization of sockets
- [ ] Employ distributed publish-subscribe applications using the Apache Kafka framework

✅ <i>Mini project 2 : File Server</i>
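
A bare-bones sockets sketch in the client-server spirit of the mini project (the port number is arbitrary; a real file server would parse requests and stream file contents): accept one connection and echo one line back.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoServer {
    public static void main(String[] args) throws IOException {
        // One-shot server: accept a client, read one line, echo it back.
        try (ServerSocket server = new ServerSocket(8080);
             Socket client = server.accept();
             BufferedReader in = new BufferedReader(
                     new InputStreamReader(client.getInputStream()));
             PrintWriter out = new PrintWriter(client.getOutputStream(), true)) {
            out.println(in.readLine());
        }
    }
}
```
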
<b><u>Week 3 : Message Passing</u></b>
- [ ] Create distributed applications using the Single Program Multiple Data (SPMD) model
- [ ] Create message-passing programs using point-to-point communication primitives in MPI
- [ ] Identify message ordering and deadlock properties of MPI programs
- [ ] Evaluate the advantages of non-blocking communication relative to standard blocking communication primitives
- [ ] Explain collective communication as a generalization of point-to-point communication

✅ <i>Mini project 3 : Matrix Multiply in MPI</i>
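
A point-to-point MPI ping sketch, assuming the Open MPI Java bindings (method names differ in other Java MPI implementations such as MPJ Express, which spells them `Rank()`/`Send()`/`Recv()`): rank 0 sends one int to rank 1.

```java
import mpi.MPI;

// SPMD: every rank runs this same main(); behavior branches on rank.
public class Ping {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);
        int rank = MPI.COMM_WORLD.getRank();
        int[] buf = new int[1];
        if (rank == 0) {
            buf[0] = 42;
            MPI.COMM_WORLD.send(buf, 1, MPI.INT, 1, 0); // dest=1, tag=0
        } else if (rank == 1) {
            MPI.COMM_WORLD.recv(buf, 1, MPI.INT, 0, 0); // src=0, tag=0
            System.out.println("rank 1 received " + buf[0]);
        }
        MPI.Finalize();
    }
}
```
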
<b><u>Week 4 : Combining Distribution and Multithreading</u></b>
- [ ] Distinguish processes and threads as basic building blocks of parallel, concurrent, and distributed Java programs
- [ ] Create multithreaded servers in Java using threads and processes
- [ ] Demonstrate how multithreading can be combined with message-passing programming models like MPI
- [ ] Analyze how the actor model can be used for distributed programming
- [ ] Assess how the reactive programming model can be used for distributed programming

✅ <i>Mini project 4 : Multi-Threaded File Server</i>
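
A thread-per-connection server sketch in the spirit of the mini project (port and response text are placeholders; a real file server would read the requested file): the accept loop stays responsive while each request runs on its own thread.

```java
import java.io.IOException;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class MultiThreadedServer {
    public static void main(String[] args) throws IOException {
        try (ServerSocket server = new ServerSocket(9000)) {
            while (true) {
                Socket client = server.accept();
                // Hand the connection to a worker thread and keep accepting.
                new Thread(() -> handle(client)).start();
            }
        }
    }

    static void handle(Socket client) {
        try (Socket c = client;
             PrintWriter out = new PrintWriter(c.getOutputStream(), true)) {
            out.println("served by " + Thread.currentThread().getName());
        } catch (IOException e) {
            e.printStackTrace();
        }
    }
}
```
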