Yeah, parallelization isn't synonymous with asynchrony. Identify which parts of your code are computationally demanding; in my experience, processing pipelines are the most efficient, but also more memory intensive.
However, make sure you take deep copies of the objects, or you risk race conditions.
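A minimal sketch of the deep-copy idea in Python (the job dict and `worker` are hypothetical stand-ins): each thread gets its own `copy.deepcopy` of the shared template, so no two threads ever mutate the same object.

```python
import copy
import threading

# Hypothetical shared task object; sharing one mutable dict across
# threads risks lost updates, so each worker gets a private deep copy.
job_template = {"params": {"scale": 2}, "results": []}

def worker(job, value):
    # `job` is this thread's own deep copy; mutating it is safe
    job["results"].append(value * job["params"]["scale"])

jobs = [copy.deepcopy(job_template) for _ in range(4)]
threads = [threading.Thread(target=worker, args=(job, i))
           for i, job in enumerate(jobs)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Each copy holds exactly its own result; the template is untouched.
all_results = sorted(r for job in jobs for r in job["results"])
print(all_results)
```

A shallow copy wouldn't be enough here: `dict.copy()` would still share the inner `"results"` list between workers.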
I've run into this before: if each task runs quickly, you can end up spending more time managing the parallel code than the single-threaded algorithm took.
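Easy to demonstrate on your own machine. A quick Python sketch (`tiny_task` is a made-up stand-in for a task that finishes almost instantly): in CPython the thread pool's dispatch overhead, plus the GIL, usually makes the "parallel" version slower for work this small.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def tiny_task(x):
    # finishes in nanoseconds; not worth the cost of handing it off
    return x + 1

data = list(range(100_000))

t0 = time.perf_counter()
serial = [tiny_task(x) for x in data]
t_serial = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    threaded = list(pool.map(tiny_task, data))
t_threaded = time.perf_counter() - t0

# Same answers either way; only the bookkeeping cost differs.
assert serial == threaded
print(f"serial: {t_serial:.4f}s  threaded: {t_threaded:.4f}s")
```

Batching many tiny tasks into fewer, chunkier ones is the usual fix when the numbers come out this way.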
In that case I would consider it a success. After all, everyone knows that the measure of a well optimized application is seeing 100% CPU on all cores whenever it's running.
We never escape global variables. They just get hidden in singletons and then rooted out by static analysis and then hidden somewhere else. Global variables are useful, they just need to be treated carefully.
Why have a super efficient serial process when you can add more CPU overhead and introduce IO race conditions you have to build a ton of exception handling for 🤷🏻‍♂️
Oracle used to have an issue where code run on large amounts of data in parallel gave different results than when run serially. Since only we had databases this big, only we hit the issue, so they sent people out to try to figure it out, lol. That was a rough... year? to be a data analyst.
For a job interview, I was asked to write some multithreaded code. But the workload was IO bound, and the best performance came from using only one thread. I reported that to them.
Good luck, and Godspeed!
*exits with error*
"Oops, typo"
*exits with 27 errors*
This reply needs more up doots
Perfect
Punchline: I didn't get the job.