I’m not sure there is any task in software engineering that is easier to retrofit into an existing application than it is to implement from the start in a project designed for it. If the task, e.g. introducing parallelism, takes more time than the alternative, e.g. writing sequential code, it may be wise from a cost/benefit perspective to go back and apply the more costly approach only in the spots that start to cause you pain, e.g. excessive slowness, or once you cross the threshold where the cost of delaying the costly approach overtakes the cost of implementing it (I’m trying to set a world record for the most uses of the word “cost” in one sentence), which I believe is called “the last responsible moment” in the Lean methodology.
The big advantage of using parallel libraries is that very smart people have spent a very long time solving very hard problems for you. The disadvantage is that the APIs a library presents are necessarily generic and may not be the ideal solution to your exact problem. Still, it is usually worth taking the minor hit of a somewhat suboptimal solution in order to leverage the knowledge of the experts and avoid making costly mistakes yourself.
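As a minimal sketch of the trade-off, here is what leaning on a library can look like in Python, using the standard `concurrent.futures` module. The generic `executor.map` API mirrors the shape of the sequential loop, so the library handles thread pooling, scheduling, and result ordering; `fetch_length` is just a hypothetical stand-in for a slow, independent task.

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_length(word):
    # Stand-in for a slow, independent task (e.g. an I/O-bound request).
    return len(word)

words = ["parallel", "libraries", "save", "effort"]

# Sequential version: simple, and often fast enough.
sequential = [fetch_length(w) for w in words]

# Parallel version: executor.map keeps the same shape as the loop,
# so the hard parts (pooling, scheduling, preserving result order)
# are solved by the library, not by you.
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(fetch_length, words))

assert sequential == parallel  # same results, same order
print(parallel)
```

The generic API is not tuned to any one workload, which is exactly the suboptimality mentioned above, but you get correct pooling and ordering for free.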
The disadvantage of having these kinds of refactorings done completely automatically is that there will always be some edge cases where, for one reason or another, you do not want to follow the usual pattern. Though I can’t think of any specific examples against automating the three refactorings presented, there’s *always* some exception to the rule. The advantage is that it saves you time & effort.