Merge pull request preslavmihaylov#9 from gavvvr/preslavmihaylov#2-JCIP-summary-typos

preslavmihaylov#2 fix typos in JCIP notes
preslavmihaylov authored Dec 18, 2022
2 parents 68c4ed6 + fab6d9d commit 9de9c99
Showing 15 changed files with 43 additions and 43 deletions.
2 changes: 1 addition & 1 deletion java/java-concurrency-in-practice/chapter-02/README.md
@@ -220,7 +220,7 @@ The problem with `SlowCountingFactorizer` is that the synchronization was excess

This can make users quite frustrated, especially when the servlet is under high load.

Instead, strive for making the synchronized blocks are small as possible, but not too small.
Instead, strive for making the synchronized blocks as small as possible, but not too small.

An example of a too small synchronized block would be not synchronizing compound actions together.
An example of a too big synchronized block is synchronizing the whole method and/or unnecessarily synchronizing long-running I/O.
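
For illustration, a minimal sketch of the difference, using a hypothetical `VisitTracker` class (not code from the notes): the broken variant synchronizes each access but not the compound check-then-act, while the right-sized variant covers the whole compound action and nothing more.

```java
import java.util.HashMap;
import java.util.Map;

public class VisitTracker {
    private final Map<String, Integer> visits = new HashMap<>();

    // Too small: each access is synchronized, but the check-then-act sequence
    // as a whole is not, so two threads can interleave between the get and
    // the put and lose an update.
    public void recordVisitBroken(String page) {
        Integer current;
        synchronized (this) { current = visits.get(page); }
        int next = (current == null) ? 1 : current + 1;
        synchronized (this) { visits.put(page, next); }
    }

    // Right-sized: the whole compound action is one synchronized block, and
    // nothing slow (I/O, network calls) happens while the lock is held.
    public void recordVisit(String page) {
        synchronized (this) {
            Integer current = visits.get(page);
            visits.put(page, (current == null) ? 1 : current + 1);
        }
    }
}
```
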
2 changes: 1 addition & 1 deletion java/java-concurrency-in-practice/chapter-03/README.md
@@ -273,7 +273,7 @@ Final fields are a less-strict form of the `const` keyword in C++.
They don't let an object reference change after construction, although they don't prevent the object's internal state to change.

However, they also have special "initialization safety" guarantee by the Java memory model.
It is this guarantee that allows immutable objects to be freely accesssed and shared without synchronization.
It is this guarantee that allows immutable objects to be freely accessed and shared without synchronization.

Even if an object is mutable, you should still consider making all fields that can be made final - final.
A mostly thread-safe object is still better than a not-at-all thread-safe object.
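
A minimal sketch of the idea, using a hypothetical `Range` class whose final fields make it safe to share freely after construction:

```java
public final class Range {
    // Both fields are final: once the constructor finishes, the Java memory
    // model's initialization-safety guarantee lets any thread read them
    // without synchronization.
    private final int low;
    private final int high;

    public Range(int low, int high) {
        if (low > high) throw new IllegalArgumentException("low > high");
        this.low = low;
        this.high = high;
    }

    public boolean contains(int value) {
        return value >= low && value <= high;
    }

    // "Mutation" returns a new immutable instance instead of changing state.
    public Range withHigh(int newHigh) {
        return new Range(low, newHigh);
    }
}
```
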
2 changes: 1 addition & 1 deletion java/java-concurrency-in-practice/chapter-05/README.md
@@ -20,7 +20,7 @@
- [Barrier](#barrier)
- [Building an efficient, scalable result cache](#building-an-efficient-scalable-result-cache)

This chapter covers the most useful concurrenct libraries & collections available for you to use in order to leverage thread-safety delegation.
This chapter covers the most useful concurrent libraries & collections available for you to use in order to leverage thread-safety delegation.

# Synchronized Collections

4 changes: 2 additions & 2 deletions java/java-concurrency-in-practice/chapter-06/README.md
@@ -76,8 +76,8 @@ This approach is fine for small to medium traffic. As long as the incoming reque
## Disadvantages of unbounded thread creation
For production use, creating threads unboundedly has some drawbacks:
* Thread lifecycle overhead - creating & managing threads has some overhead, it is not free. If the threads are too many, the multi-threaded application might become slower than the single-threaded one.
* Resouce consumption - active threads consume system resources, especially memory.
* Stability - there is a limit on how many threads one can create. This variaes by platform, but once you hit it, you would get an `OutOfMemoryException`.
* Resource consumption - active threads consume system resources, especially memory.
* Stability - there is a limit on how many threads one can create. This varies by platform, but once you hit it, you would get an `OutOfMemoryException`.

Up to a certain point, creating threads improve your application's throughput, but beyond it, more threads start getting in the way.
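
A rough sketch of the bounded alternative, assuming a hypothetical `handle` method and an arbitrary pool size of 100:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class BoundedPoolServer {
    public static void main(String[] args) {
        // A fixed-size pool caps thread creation, avoiding the lifecycle
        // overhead, memory consumption and out-of-memory failures of
        // spawning one thread per incoming task.
        ExecutorService pool = Executors.newFixedThreadPool(100);

        for (int i = 0; i < 10_000; i++) {
            final int taskId = i;
            pool.execute(() -> handle(taskId)); // queued if all 100 threads are busy
        }
        pool.shutdown();
    }

    private static void handle(int taskId) {
        // placeholder for real request handling
        System.out.println("handled task " + taskId);
    }
}
```
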

12 changes: 6 additions & 6 deletions java/java-concurrency-in-practice/chapter-07/README.md
@@ -10,7 +10,7 @@
- [Encapsulating non-standard interruption with newTaskFor](#encapsulating-non-standard-interruption-with-newtaskfor)
- [Stopping a thread-based service](#stopping-a-thread-based-service)
- [Example: a logging service](#example-a-logging-service)
- [ExecutorService shutdownn](#executorservice-shutdownn)
- [ExecutorService shutdown](#executorservice-shutdown)
- [Poison pills](#poison-pills)
- [Example: A one-shot execution service](#example-a-one-shot-execution-service)
- [Limitations of shutdownNow](#limitations-of-shutdownnow)
@@ -242,7 +242,7 @@ public static void timedRun(Runnable r, long timeout, TimeUnit unit) throws Inte

## Dealing with non-interruptible blocking
Sometimes, you might be doing some work which is non-interruptible & needs special care:
* Synchronous socker I/O in java.io - The read and write methods are not responsive to interruption, but closing the underlying socket makes blocked threads throw a `SocketException`
* Synchronous socket I/O in java.io - The read and write methods are not responsive to interruption, but closing the underlying socket makes blocked threads throw a `SocketException`
* Synchronous I/O in java.nio
* Asynchronous I/O with Selector - If a thread is blocked on Selector.select, you have to close the underlying channel to unblock
* Lock acquisition - If a thread is blocked waiting for an intrinsic lock, there is nothing for you to do to stop it from acquiring the lock
@@ -334,7 +334,7 @@ public abstract class SocketUsingTask<T> implements CancellableTask<T> {
# Stopping a thread-based service
Threads have owners & the owner is the one who created the thread.

In many occassions, that's the thread pool implementation you are using. Since it is the owner, you shouldn't attempt to stop its threads yourself.
In many occasions, that's the thread pool implementation you are using. Since it is the owner, you shouldn't attempt to stop its threads yourself.

Instead, the service should provide lifecycle methods for managing the threads. The `ExecutorService` has the `shutdown` and `shutdownNow` methods for dealing with that.

@@ -431,7 +431,7 @@ public class LogService {
}
```

## ExecutorService shutdownn
## ExecutorService shutdown
The executor service has the `shutdown` and `shutdownNow` utilities for graceful & not so graceful shutdown.

Simple programs can get away with using a global executor service, initialized from main.
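
A minimal sketch of the usual shutdown sequence (pool size and timeout are illustrative values):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ShutdownDemo {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService exec = Executors.newFixedThreadPool(4);
        exec.execute(() -> System.out.println("working"));

        exec.shutdown(); // graceful: stop accepting new tasks, finish the queued ones
        if (!exec.awaitTermination(5, TimeUnit.SECONDS)) {
            // abrupt: interrupt running tasks and drain the queue
            List<Runnable> dropped = exec.shutdownNow();
            System.out.println("gave up on " + dropped.size() + " queued tasks");
        }
    }
}
```

Waiting with `awaitTermination` after `shutdown` gives queued tasks a chance to finish before falling back to the abrupt path.
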
@@ -695,12 +695,12 @@ The JVM can shutdown in an orderly or an abrupt manner. An orderly shutdown is w

## Shutdown hooks
In an orderly shutdown, the JVM starts all registered shutdown hooks. You can use this to make any clean up logic on exit.
Shutdown hooks can be registed with `Runtime.addShutdownHook`.
Shutdown hooks can be registered with `Runtime.addShutdownHook`.

The JVM makes no guarantees on the order of execution of shutdown hooks.
If the shutdown hooks or finalizers don't exit, than the JVM hangs and has to be stopped abruptly.

Shutdown hooks should be thread-safe & exit as quickly as possible. Sine shutdown hooks run in parallel, so they shouldn't depend on services that could be shutdown in another hook.
Shutdown hooks should be thread-safe & exit as quickly as possible. Since shutdown hooks run in parallel, so they shouldn't depend on services that could be shutdown in another hook.
To avoid this problem, you could encapsulate your entire shutdown mechanism in a single hook so that everything is executed synchronously in a single thread.

Example usage:
6 changes: 3 additions & 3 deletions java/java-concurrency-in-practice/chapter-08/README.md
@@ -70,7 +70,7 @@ Sizing a threadpool depends on:
* What are the hardware resources of the deployment system
* Do tasks perform mostly computation, I/O or some combination?

Consider using multiple thread pools if you have heterogenous tasks as this way, the pools can be adjusted based on the nature of the executing tasks.
Consider using multiple thread pools if you have heterogeneous tasks as this way, the pools can be adjusted based on the nature of the executing tasks.

For compute-intensive tasks, a rule of thumb is using N+1 threads, where N = processor count.
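
A short sketch of that rule of thumb using `Runtime.availableProcessors()`:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class ComputePool {
    public static void main(String[] args) {
        // N = available processors; N + 1 threads keeps the CPUs busy even
        // when one thread occasionally stalls (e.g. on a page fault).
        int n = Runtime.getRuntime().availableProcessors();
        ExecutorService computePool = Executors.newFixedThreadPool(n + 1);

        computePool.execute(() -> System.out.println("crunching on " + n + " cores"));
        computePool.shutdown();
    }
}
```
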

@@ -136,7 +136,7 @@ Additionally, using synchronous hand-off is more efficient than using a normal q
The `newCachedThreadPool` factory uses synchronous hand-off. With it, tasks can't be rejected as the work queue is unbounded.

In addition to these mechanisms, you could use a `PriorityBlockingQueue` if you'd like certain tasks to have more priority than others.
Priority can be defined by natural order or if the tasks implement `Comparable` - via a `Comparator`.
Priority can be defined by natural order or if the task implements `Comparable` - via a `Comparator`.
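
A rough sketch of a priority-ordered executor, with a hypothetical `PrioritizedTask` that defines its natural order via `Comparable`. Note that tasks are submitted with `execute()` so the queue compares the tasks themselves:

```java
import java.util.concurrent.PriorityBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PriorityExecutorDemo {
    // Tasks carry a priority and define their natural order via Comparable.
    static class PrioritizedTask implements Runnable, Comparable<PrioritizedTask> {
        final int priority;
        final String name;

        PrioritizedTask(int priority, String name) {
            this.priority = priority;
            this.name = name;
        }

        public int compareTo(PrioritizedTask other) {
            return Integer.compare(other.priority, priority); // higher priority first
        }

        public void run() {
            System.out.println("running " + name);
        }
    }

    public static void main(String[] args) {
        // The executor drains its PriorityBlockingQueue in priority order
        // rather than FIFO.
        ThreadPoolExecutor exec = new ThreadPoolExecutor(
                1, 1, 0L, TimeUnit.MILLISECONDS, new PriorityBlockingQueue<Runnable>());
        exec.execute(new PrioritizedTask(1, "low"));
        exec.execute(new PrioritizedTask(10, "high"));
        exec.shutdown();
    }
}
```
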

Bounding the thread pool or work queue is only suitable when tasks are independent.

@@ -364,7 +364,7 @@ public<T> Collection<T> getParallelResults(List<Node<T>> nodes) throws Interrupt

## Example: a puzzle framework
An appealing application of this technique is solving puzzles that involve finding a sequence of transformations from some initial state to reach a goal state,
such as the familiar “sliding block puzzles”,7 “Hi-Q”, “Instant Insanity”, and other solitaire puzzles
such as the familiar “sliding block puzzles”, “Hi-Q”, “Instant Insanity”, and other solitaire puzzles

Puzzle definition:
```java
4 changes: 2 additions & 2 deletions java/java-concurrency-in-practice/chapter-09/README.md
@@ -7,7 +7,7 @@ Swing data structures are not thread-safe either.
Nearly all GUI toolkits are implemented this way - exploiting thread-confinement.

# Why are GUIs single-threaded?
All modern GUI frameworks are single-threaded subsystems. They work by having a dedicated event threda, called EDT (Event Dispatch Thread).
All modern GUI frameworks are single-threaded subsystems. They work by having a dedicated event thread, called EDT (Event Dispatch Thread).

The reason why GUI frameworks are implemented like this is because of all sorts of problems with race conditions & deadlocks if they were implemented in a multi-threaded way.

@@ -24,7 +24,7 @@ Tasks in the event queue process sequentially. This makes development easier, bu
This is why, if you need to run a long-running task, you should process it in a different thread than the event thread & return the result to the event thread once the processing is complete.
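
A minimal Swing sketch of that pattern (the `slowComputation` method is a hypothetical stand-in for real work): the heavy work runs on a background executor, and only the UI update is posted back to the event thread with `SwingUtilities.invokeLater`.

```java
import java.awt.BorderLayout;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.SwingUtilities;

public class LongTaskDemo {
    private static final ExecutorService background = Executors.newSingleThreadExecutor();

    public static void main(String[] args) {
        SwingUtilities.invokeLater(LongTaskDemo::createUi); // build the GUI on the event thread
    }

    private static void createUi() {
        JFrame frame = new JFrame("demo");
        JLabel status = new JLabel("idle");
        JButton start = new JButton("start");

        start.addActionListener(e -> {
            status.setText("working...");
            background.execute(() -> {              // heavy work off the event thread
                String result = slowComputation();
                SwingUtilities.invokeLater(         // only the UI update returns to the EDT
                        () -> status.setText(result));
            });
        });

        frame.add(status, BorderLayout.NORTH);
        frame.add(start, BorderLayout.SOUTH);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.pack();
        frame.setVisible(true);
    }

    private static String slowComputation() {
        try { Thread.sleep(2000); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        return "done";
    }
}
```
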

## Thread confinement in Swing
All swing objects are thread confined to the event thread. Any component should be created, queries & modified in the event thread only.
All swing objects are thread confined to the event thread. Any component should be created, queried & modified in the event thread only.

There are only a few exceptions with methods which are thread-safe and meant to be used outside of the event dispatch thread:
* `SwingUtilities.isEventDispatchThread` - checks if the current thread is the event thread
2 changes: 1 addition & 1 deletion java/java-concurrency-in-practice/chapter-10/README.md
@@ -225,7 +225,7 @@ The reason is that the `setLocation` acquires the `Taxi` lock and `Dispatcher` l
This should not be the case as these classes are independent & such a constraint breaks encapsulation.

The root cause of the problem was that an alien method was called while holding a lock. An alien method is a method we don't own.
That could be a method of another class or a method which can be overriden in a subclass.
That could be a method of another class or a method which can be overridden in a subclass.

> Calling an alien method with a lock held is difficult to analyze and should be avoided.
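
The usual remedy is an "open call": copy what you need while holding the lock, then invoke the alien method with no locks held. A minimal sketch with a hypothetical `Listener` interface:

```java
import java.util.ArrayList;
import java.util.List;

public class EventSource {
    public interface Listener { void onEvent(String event); }  // hypothetical alien interface

    private final List<Listener> listeners = new ArrayList<>();

    public synchronized void register(Listener l) { listeners.add(l); }

    // Risky: onEvent is an alien method, invoked while holding this object's lock.
    public synchronized void notifyListenersLocked(String event) {
        for (Listener l : listeners) l.onEvent(event);
    }

    // Open call: copy the shared state under the lock, then invoke the
    // alien method with no locks held.
    public void notifyListeners(String event) {
        List<Listener> snapshot;
        synchronized (this) {
            snapshot = new ArrayList<>(listeners);
        }
        for (Listener l : snapshot) l.onEvent(event);
    }
}
```
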
18 changes: 9 additions & 9 deletions java/java-concurrency-in-practice/chapter-11/README.md
@@ -44,7 +44,7 @@ For most web applications, scalability is often more important than performance.
## Evaluating performance tradeoffs
Nearly all engineering decisions are tradeoffs.

Often times, one has to make traceoffs based on limited information. E.g. quicksort is better for large datasets, but bubblesort is best for small ones.
Often times, one has to make tradeoffs based on limited information. E.g. quicksort is better for large datasets, but bubblesort is best for small ones.
One has to know in advance the amount of data in order to process it most effectively.

This is what most often leads to premature optimizations - making trade offs with limited requirements.
@@ -98,7 +98,7 @@ public class WorkerThread extends Thread {
}
```

This piece of code is limited by the serialized portion, which is the blokcking take on the blocking queue.
This piece of code is limited by the serialized portion, which is the blocking take on the blocking queue.

## Example: serialization hidden in frameworks
![Blocking queue comparison](images/blocking-queue-comparison.png)
@@ -198,9 +198,9 @@ public class BetterAttributeStore {
}
```

In can even be further improved by delegating thread-safety to a thread-safe collection entirely (such as `ConcurrentHashMap`).
It can even be further improved by delegating thread-safety to a thread-safe collection entirely (such as `ConcurrentHashMap`).

A synchronized block needs to be shrank only if a significant computation is performed inside of it.
A synchronized block needs to be shrunk only if a significant computation is performed inside of it.
Shrinking a synchronized block too much can lead to safety issues - e.g. not synchronizing compound actions.

## Reducing lock granularity
@@ -255,19 +255,19 @@ But this will increase the threshold at which performance starts to suffer.

For highly contended locks, this might not improve performance significantly.

## Lock stripping
## Lock striping
Lock splitting on a heavily contended lock can lead to two heavily contended locks. This will not improve matters greatly.

This technique can be extended to partitioning a variable-set of independent objects into multiple locks.

For example, `ConcurrentHashMap` uses 16 different locks to synchronize access to different parts of the underlying hash buckets.

This technique is called lock stripping & improves performance greatly on objects susceptible to such partitioning.
This technique is called lock striping & improves performance greatly on objects susceptible to such partitioning.

However, it makes dealing with synchronization much more complex.
For example, when the hashmap needs to expand, it has to acquire all the locks before doing so, which is more complex than acquiring a single object lock.

Example implementation of a hash map using lock stripping:
Example implementation of a hash map using lock striping:
```java
@ThreadSafe
public class StripedMap {
@@ -312,7 +312,7 @@ public class StripedMap {
```

## Avoiding hot fields
Using lock stripping & lock splitting can improve matters when two threads are accessing independent objects.
Using lock striping & lock splitting can improve matters when two threads are accessing independent objects.

However, if the implementation has some kind of a "hot field", which is used in all synchronized operations, regardless of the objects, this hinders scalability.
For example, in the hashmap implementation, one has to provide a `size()` method.
@@ -373,6 +373,6 @@ Many logging libraries are thin wrappers around `println` utilities. This is an
An alternative is to have a dedicated logging thread which solely logs incoming requests. All other threads append their messages to a blocking queue & continue their work.
This is often more performant than the former approach as it involves less context switching.

When multiple threads are trying to write to stdin, they are all blocked on I/O & context switching occurs, waiting for the bounded resource.
When multiple threads are trying to write to stdout, they are all blocked on I/O & context switching occurs, waiting for the bounded resource.
By having a dedicated background logging thread, it is only that thread which is blocked on the I/O. All other threads can continue its work.
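
A minimal sketch of such a logger (shutdown handling omitted; the queue capacity of 1000 is an arbitrary choice):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class BackgroundLogger {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>(1000);
    private final Thread writer = new Thread(() -> {
        try {
            while (true) {
                // Only this thread ever blocks on the console I/O.
                System.out.println(queue.take());
            }
        } catch (InterruptedException e) {
            // interrupted: stop logging
        }
    });

    public void start() { writer.start(); }

    // Producers just enqueue the message and move on (blocking only if the
    // bounded queue is full), instead of contending for the output stream.
    public void log(String message) throws InterruptedException {
        queue.put(message);
    }
}
```
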

@@ -45,8 +45,8 @@ public boolean userLocationMatches(String user, String regexp) {
}
}

public static class LockStrippingExample implements LockScopeExample {
// ConcurrentHashMap implements lock stripping. No need to implement your own :))
public static class LockStripingExample implements LockScopeExample {
// ConcurrentHashMap implements lock striping. No need to implement your own :))
private final Map<String, String> locations = new ConcurrentHashMap<>();

public void addUserLocation(String user, String loc) {
@@ -9,7 +9,7 @@ public class Main extends Thread {
public static void main(String[] args) throws InterruptedException, BrokenBarrierException {
measureLockScopeExample(new Examples.BigLockScopeExample(), 16);
measureLockScopeExample(new Examples.SmallLockScopeExample(), 16);
measureLockScopeExample(new Examples.LockStrippingExample(), 16);
measureLockScopeExample(new Examples.LockStripingExample(), 16);
}

public static void measureLockScopeExample(Examples.LockScopeExample example, int threadsCnt)
2 changes: 1 addition & 1 deletion java/java-concurrency-in-practice/chapter-12/README.md
@@ -397,7 +397,7 @@ When writing performance tests, you should watch out for some common pitfalls wh
## Garbage collection
Garbage collection is unpredictable. You don't know when it will run.

If you have a run which measures N invocations & garbage collection runs on invocation N+1, a small deviance in trials cnt can change the test results drastically.
If you have a run which measures N invocations & garbage collection runs on invocation N+1, a small deviance in trials count can change the test results drastically.

This can be solved by:
* Disabling garbage collection while running your test
10 changes: 5 additions & 5 deletions java/java-concurrency-in-practice/chapter-14/README.md
@@ -11,7 +11,7 @@ In a multi-threaded one, preconditions that aren't met can change due to the act

Hence, a precondition that fails might be coded to block, rather than fail in such an environment. Otherwise, usage of the class might be clunky & error-prone.

One way to implement such behavior is to go throgh the painful route of using standard means of synchronization.
One way to implement such behavior is to go through the painful route of using standard means of synchronization.

In that case, the code would generally look like this:
```
@@ -109,7 +109,7 @@ They are called so as they queue up threads waiting for a condition to become tru

**In order to use a condition queue on object X, you must hold object X's intrinsic lock.**

When you call `Object.wait`, the lock you're holding is atomically released & reaquired once the thread is woken up.
When you call `Object.wait`, the lock you're holding is atomically released & reacquired once the thread is woken up.
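
A minimal sketch of the canonical wait-loop idiom that follows from this (independent of the bounded-buffer example below):

```java
public class ConditionQueueIdiom {
    private final Object lock = new Object();
    private boolean ready = false;  // the condition predicate, guarded by lock

    public void awaitReady() throws InterruptedException {
        synchronized (lock) {        // must hold the lock to call wait()
            while (!ready) {         // always re-test the predicate in a loop
                lock.wait();         // releases the lock while waiting, reacquires before returning
            }
        }
    }

    public void markReady() {
        synchronized (lock) {
            ready = true;
            lock.notifyAll();        // wake waiters; they re-check the predicate
        }
    }
}
```
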

Example implementation of bounded buffer using condition queues:
```java
@@ -269,7 +269,7 @@ Alternatively, you can lock on a private object explicitly. This, however, subve
Just as explicit locks are a generalization of intrinsic locks, `Condition` is a generalization of intrinsic condition queues.

Intrinsic queues have several drawbacks:
* Each intrinsic lock can have only one associated condition queue. This means that multople threads might wait on the same condition queue for different condition predicates
* Each intrinsic lock can have only one associated condition queue. This means that multiple threads might wait on the same condition queue for different condition predicates
* The most common pattern for using intrinsic queues involves making the queue publicly available.

If you want to have a concurrent object \w multiple condition predicates or exercise more control over the visibility of a condition queue, the explicit `Condition` object can help.
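
A rough sketch of a bounded buffer using two `Condition` objects on one `ReentrantLock`, giving one wait-set per condition predicate, which a single intrinsic condition queue cannot offer:

```java
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class ConditionBuffer<T> {
    private final ReentrantLock lock = new ReentrantLock();
    // Two separate wait-sets on the same lock, one per condition predicate.
    private final Condition notFull  = lock.newCondition();
    private final Condition notEmpty = lock.newCondition();

    private final Object[] items = new Object[16];
    private int head, tail, count;

    public void put(T item) throws InterruptedException {
        lock.lock();
        try {
            while (count == items.length) notFull.await(); // producers wait only on "not full"
            items[tail] = item;
            tail = (tail + 1) % items.length;
            count++;
            notEmpty.signal();                             // wake exactly the consumers
        } finally {
            lock.unlock();
        }
    }

    @SuppressWarnings("unchecked")
    public T take() throws InterruptedException {
        lock.lock();
        try {
            while (count == 0) notEmpty.await();           // consumers wait only on "not empty"
            T item = (T) items[head];
            items[head] = null;
            head = (head + 1) % items.length;
            count--;
            notFull.signal();                              // wake exactly the producers
            return item;
        } finally {
            lock.unlock();
        }
    }
}
```
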
@@ -387,7 +387,7 @@ public class OneShotLatch {

In the above scenario, `tryAcquireShared` indicates to the AQS what condition means that the threads should block, while `tryReleaseShared` sets the state to the correct value in order to unblock the other threads.

`acquireSharedInterruptibly` is like waiting for the condition to hold in a conditino queue and `releaseShared` invokes `tryReleaseShared` which unblocks the waiting threads.
`acquireSharedInterruptibly` is like waiting for the condition to hold in a condition queue and `releaseShared` invokes `tryReleaseShared` which unblocks the waiting threads.

`OneShotLatch` could have extended `AQS` rather than delegating to it, but that is not recommended (composition over inheritance). Neither of the standard library classes using AQS extend it directly.

@@ -414,7 +414,7 @@
}
```

## Semaphone and CountDownLatch
## Semaphore and CountDownLatch
Example usage of AQS in `Semaphore`:
```java
protected int tryAcquireShared(int acquires) {