Before answering this question, we need to talk briefly about how processors work, because it is the processor that does all the work on a computer. The processor runs continuously from the moment the computer is powered on, but it can only do one thing at a time: while it is receiving input from the keyboard, for example, it cannot do anything else. Some operations are very short; an addition takes on the order of nanoseconds. Others are not: copying a file takes milliseconds, and of course the exact time depends on what is being copied.
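As a rough illustration, the Python sketch below times a single addition against an in-memory copy of a 10 MB buffer. The exact numbers depend on the machine (and, in an interpreted language, include interpreter overhead), but the gap of several orders of magnitude between the two operations is the point.

```python
import time

# One addition versus copying megabytes of data: a rough timing sketch.
a, b = 3, 4
start = time.perf_counter_ns()
c = a + b
print("addition: about", time.perf_counter_ns() - start, "ns")

data = bytes(10 * 1024 * 1024)       # a 10 MB buffer standing in for a file
start = time.perf_counter_ns()
copy = bytearray(data)               # copy the whole buffer byte for byte
print("copy:     about", (time.perf_counter_ns() - start) / 1e6, "ms")
```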

While moving a file somewhere else, we can listen to music, write an article and do research on the internet. How is that possible? The first thing that comes to mind is the concept of a core. But a core is really a processor in its own right: a 4-core processor is essentially four processors placed on a single chip. Today we call that chip the processor and call each processor on it a core. With four cores we can indeed carry out four different tasks at the same time. Yet it is also possible to do several things at once on a single-core machine, and the number of programs we can run simultaneously is not limited by the number of cores, so cores alone cannot be the answer. The real answer is multitasking. The processor does a small part of one job and then switches to another; it does a bit of that job, then another, then another… It does this so quickly that the loop creates an illusion, and users believe they are doing several things at once. If the computer is given too much work, the illusion starts to break down and users notice their jobs slowing down.
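To make the idea concrete, here is a minimal sketch in Python, not how a real operating-system scheduler is implemented: three made-up jobs are written as generators, and a simple loop plays the part of the processor, running one small slice of each job before moving on to the next.

```python
# Three "jobs" expressed as generators. Each yield hands control back,
# like a task whose time slice has just run out.
def job(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield

jobs = [job("music", 3), job("file copy", 3), job("browser", 3)]

# The "processor": take the next job, run one slice, put it back in line.
while jobs:
    current = jobs.pop(0)
    try:
        next(current)          # run one small slice of the job
        jobs.append(current)   # not finished yet, back to the end of the queue
    except StopIteration:
        pass                   # finished, drop it
```

Run fast enough, this interleaving is what makes the jobs appear simultaneous.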

How does the processor determine the time it will allocate?

This immediately raises another question: how does the processor decide how much time each job gets? How long does it run one task before moving on to another? It is impossible to give an exact figure, because it differs from computer to computer and from operating system to operating system, but we can reason about it. Suppose we perform one addition and then switch to the next task. Whenever a task is paused, we have to record where it left off, and whenever we return to it, we have to look that record up and continue from there. Say we have two variables A and B, and we want to add their values and write the result into a third variable C. At the start we first have to find the record of where this task left off and load the values of A and B. The addition itself is extremely short, and afterwards we have to save where we stopped along with the latest values of our variables. Reading those records and saving the results at the end amounts to a great many instructions, easily into the thousands. Doing thousands of instructions of bookkeeping for a single instruction of useful work would be unreasonable, so instead each task is given a suitable slice of time before the processor switches away.
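The sketch below is a simplified illustration of that bookkeeping; the task structure and field names are invented for the example. The useful work is a single addition, but it sits between a restore beforehand and a save afterwards, which is exactly the cost that a sensible time slice spreads out over many instructions.

```python
# Hypothetical, simplified "context" for a task: where it left off plus
# the values of its variables.
def save_context(task, **state):
    task["saved"] = dict(state)        # overhead: store the new state

def restore_context(task):
    return dict(task["saved"])         # overhead: bring the old state back

task = {"name": "adder", "saved": {"A": 2, "B": 5, "C": None, "pc": 0}}

regs = restore_context(task)           # bookkeeping before the work
regs["C"] = regs["A"] + regs["B"]      # the actual work: one addition
regs["pc"] += 1
save_context(task, **regs)             # bookkeeping after the work

print(task["saved"])                   # {'A': 2, 'B': 5, 'C': 7, 'pc': 1}
```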

Who keeps track of the time the processor devotes to each process?

Each process gets its own slice of time, the processor does a little of each before passing to the next, and this cycle convinces us that we are doing several things at once. So who keeps the time? Every computer has a timer circuit for exactly this job. When a process's time slice expires, the circuit sends an interrupt to the processor, so that the processor can switch to another job.
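As a loose software analogy (the real mechanism is a hardware timer circuit, not a Python signal), the sketch below asks a Unix-like system to deliver a periodic SIGALRM that "interrupts" a busy loop at a fixed interval; the handler is the point where an operating system would switch to another process. The 50 ms quantum is an assumed value, purely for illustration.

```python
import signal
import time

QUANTUM = 0.05  # an assumed 50 ms time slice

def on_tick(signum, frame):
    # In a real OS this is where the scheduler would pick the next task.
    print("tick: time slice expired, switch to another job here")

signal.signal(signal.SIGALRM, on_tick)
signal.setitimer(signal.ITIMER_REAL, QUANTUM, QUANTUM)  # fire every QUANTUM s

deadline = time.time() + 0.3
while time.time() < deadline:
    pass  # the "running process": busy work that keeps getting interrupted

signal.setitimer(signal.ITIMER_REAL, 0)  # stop the timer
```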

Who does I/O operations?

We said that reading and writing take a long time, yet we can do other things while a read or write is in progress. Suppose a running program has to read text from a file: while the read is carried out, the program is suspended and the processor turns to other tasks. So the processor is not the one doing the reading and writing? Exactly. Our computers have a mechanism called direct memory access (DMA), which lets data be read and written without tying up the processor. The processor tells the DMA controller where the data lives in memory and how much of it to transfer; the DMA controller then carries out the transfer in the background while the processor handles other tasks. This saves a great deal of time.
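Here is a loose software analogy, not real DMA hardware: the main thread plays the processor and a background thread plays the DMA controller. The "processor" only describes the transfer (where the data is and how much to move), goes off to do other work, and is notified when the copy is done.

```python
import threading

def dma_transfer(source, length, destination, done):
    destination[:length] = source[:length]  # the copy happens off the "CPU"
    done.set()                              # the "interrupt": transfer complete

source = bytearray(b"file contents" * 1000)
destination = bytearray(len(source))
done = threading.Event()

# The processor programs the "DMA controller" and immediately moves on.
threading.Thread(target=dma_transfer,
                 args=(source, len(source), destination, done)).start()

other_work = sum(range(100_000))   # the processor handles unrelated tasks
done.wait()                        # later, it learns the transfer finished
print("copy complete:", destination[:13].decode())
```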