Pulling the power causes everything to stop in flight, with no warning. kill -9 has the same effect on a single process, forcefully terminating it with a SIGKILL.
If a process is killed by the kernel or a power outage, it doesn't get to do any clean-up. That means you could end up with half-written files, inconsistent state, or lost caches. You usually don't have to worry about any of this, thanks to journaling, exit statuses, and battery backup.
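As a quick illustration, here is a minimal Python sketch (on a Unix system) of the difference: a plain SIGTERM can be trapped so cleanup code runs, while SIGKILL can never be caught, and even installing a handler for it fails outright:

```python
import os
import signal
import sys
import time

def cleanup(signum, frame):
    # A chance to flush buffers, remove lock files, etc.
    print("caught SIGTERM, cleaning up")
    sys.exit(0)

# A plain `kill` (SIGTERM) can be trapped, so cleanup runs:
signal.signal(signal.SIGTERM, cleanup)

# SIGKILL can never be caught, blocked, or ignored -- trying to
# install a handler for it fails immediately:
try:
    signal.signal(signal.SIGKILL, cleanup)
except OSError as err:
    print(f"cannot trap SIGKILL: {err}")

pid = os.getpid()
print(f"try `kill {pid}` (clean exit) vs `kill -9 {pid}` (no cleanup)")
time.sleep(300)
```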
Temporary files in /tmp are automatically gone if it is mounted on tmpfs, but you may still have application-specific lock files lying around to remove, like the lock and .parentlock files for Firefox.
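For example, a small Python sketch for clearing those Firefox lock files after an unclean shutdown (the profile directory name here is hypothetical, and you should only run this when Firefox is definitely not running):

```python
from pathlib import Path

# Hypothetical profile directory -- substitute your own profile folder.
profile = Path.home() / ".mozilla" / "firefox" / "abc123.default"

# "lock" is usually a (possibly dangling) symlink and ".parentlock" a
# regular file; both can be left behind after an unclean shutdown.
for name in ("lock", ".parentlock"):
    stale = profile / name
    if stale.is_symlink() or stale.exists():
        stale.unlink()
        print(f"removed stale lock file: {stale}")
```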
Most software is smart enough to retry a transaction if it doesn't record a successful exit status. A good example of this is a typical mail system. If a message is being delivered, but gets cut off in the middle, the sender will retry later until it gets a success.
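The same retry-until-success idea looks roughly like this in Python (a sketch; `deliver` is a stand-in for a real delivery attempt, such as one SMTP transaction):

```python
import random
import time

def deliver(message: str) -> bool:
    """Stand-in for a real delivery attempt (e.g. one SMTP transaction)."""
    return random.random() < 0.5  # simulate a flaky remote server

def deliver_with_retry(message: str, max_attempts: int = 5) -> bool:
    # Keep retrying until a successful status is recorded, backing off
    # between attempts the way a mail server's retry queue does.
    for attempt in range(max_attempts):
        if deliver(message):
            return True
        time.sleep(2 ** attempt)  # back off: 1s, 2s, 4s, ...
    return False  # give up and defer/bounce the message

print("delivered" if deliver_with_retry("hello") else "deferred")
```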
Your filesystem is probably journaled. If the machine dies while you are moving or writing a file, the journaled filesystem will still reference the original. It makes changes non-destructively: the old copy is left in place, the new copy is referenced only as the last step, and only then is the space the old copy occupied on disk reclaimed.
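Applications can follow the same write-the-new-copy-then-swap-the-reference pattern themselves. A minimal Python sketch (the filename is arbitrary):

```python
import os

def atomic_write(path: str, data: bytes) -> None:
    # Write the new copy non-destructively, next to the old one.
    tmp = path + ".tmp"
    with open(tmp, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # make sure the bytes actually reach the disk
    # Swap the reference as the very last step. os.replace() is atomic
    # on POSIX: a reader sees the old file or the new one, never a mix.
    os.replace(tmp, path)

atomic_write("settings.conf", b"new contents\n")
```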
Now, if you have a RAID array, it has all kinds of memory buffers to increase performance and provide reliability during a power failure. Most likely your filesystem does not know about the device's caches and their state, so it thinks a change has been committed to disk while it is actually still sitting in the RAID cache somewhere. So what happens when the power dies? Hopefully you have a functional battery in your RAID enclosure, and you monitor it. Otherwise, you have a corrupt filesystem to fsck.
Yes, a few bits can become corrupted in a binary, but I would not worry much about that on modern hardware. If you are really paranoid, you can monitor the health of your disks and RAID with the appropriate tools, but you should be doing that anyway. Do regular backups and get an Uninterruptible Power Supply.
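For instance, on Linux you could wrap smartmontools in a small Python check (assuming `smartctl` is installed, and with `/dev/sda` as a placeholder for your actual disk):

```python
import subprocess

# -H asks smartctl for the drive's overall SMART health verdict.
# /dev/sda is an assumption -- point it at your actual disk.
result = subprocess.run(
    ["smartctl", "-H", "/dev/sda"],
    capture_output=True,
    text=True,
)
print(result.stdout)
if result.returncode != 0:
    print("smartctl flagged something -- investigate before it gets worse")
```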
Sure...
Imagine that we are talking about an HTTP API, OK?
The total payload of a request is all the resources used to complete that specific request. So, just for illustration, think of the payload of one request that inserts only ONE row into the database, following this flow (a runnable sketch comes after the list):
Step 1: The request starts (curl or any similar HTTP client)
Step 2: The server receives the request and starts a new thread (depending on the webserver software)
Step 3: The webserver does the HTTP handshake and starts a new thread for the script interpreter (PHP, .NET, or whatever)
Step 4: The script interpreter opens a connection to the database (again over the network, or through a local socket if the database is on the same server)
Step 5: The single SQL INSERT command is forwarded to the database server process
Step 6: The database executes the statement and returns the execution status to the script interpreter (for a SELECT, the query's result set would also be transferred here)
Step 7: The script interpreter returns the data to the webserver software (Apache, Nginx, or whatever)
Step 8: The webserver writes the result back onto the HTTP connection, transmitting it over the network to the client again...
Step 9: Finally, the thread's life cycle ends and the system resources are freed!
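To make the flow concrete, here is a minimal runnable sketch in Python (standard library only, with a local SQLite file standing in for a separate database server; the table name `notes` and port are just illustrative):

```python
import sqlite3
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

DB = "example.db"  # SQLite file standing in for a separate database server

class InsertHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Steps 2-3: the request was accepted and handed to its own thread.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        # Step 4: open a database connection just for this request.
        conn = sqlite3.connect(DB)
        # Steps 5-6: forward the single INSERT and get its status back.
        conn.execute("INSERT INTO notes (text) VALUES (?)", (body.decode(),))
        conn.commit()
        conn.close()
        # Steps 7-8: hand the result back over the HTTP connection.
        self.send_response(201)
        self.end_headers()
        self.wfile.write(b"inserted\n")
        # Step 9: returning ends the handler; the thread's resources are freed.

if __name__ == "__main__":
    setup = sqlite3.connect(DB)
    setup.execute("CREATE TABLE IF NOT EXISTS notes (text TEXT)")
    setup.commit()
    setup.close()
    ThreadingHTTPServer(("127.0.0.1", 8000), InsertHandler).serve_forever()
```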
So now think about repeating this flow a thousand times within just a few seconds: the demand on server resources will be high, because even though each request inserts only one small row, much more is happening on the server than we can see. And this is a simplified example; in other scenarios the real payload can have many more steps...
If we talk about a batch insert operation for a thousand rows, all these steps are repeated, but only a single time. The only difference is that the SQL insert payload is now larger, so more of the resources go into useful processing and I/O work...
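A rough way to see the difference, using SQLite from Python (an in-memory toy, not a real database server across a network, so in practice the gap is even larger):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n INTEGER)")
rows = [(i,) for i in range(1000)]

# A thousand separate statements: the fixed per-statement cost repeats 1000x.
start = time.perf_counter()
for row in rows:
    conn.execute("INSERT INTO t (n) VALUES (?)", row)
conn.commit()
print(f"row-by-row: {time.perf_counter() - start:.4f}s")

# One batch: the fixed cost is paid once and only the payload gets bigger.
conn.execute("DELETE FROM t")
start = time.perf_counter()
conn.executemany("INSERT INTO t (n) VALUES (?)", rows)
conn.commit()
print(f"batch:      {time.perf_counter() - start:.4f}s")
```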
Obviously, every scenario has its own characteristics, and this logic needs to be validated for each case...
Best Answer
No.
If copying around files causes filesystem corruption, you have other issues.
Of course, make sure you have a good backup system, and that you test it regularly.