I know this is ages old, but in case someone runs into this. IMHO the way to go about it is like this:
1) Open the original file (e.g. original.txt) using file_get_contents('original.txt').
2) Make your changes/edits.
3) Write the changed content to a temp file, e.g. file_put_contents('original.txt.tmp', $contents).
4) Then move the tmp file to the original file, replacing the original file. For this you use rename('original.txt.tmp', 'original.txt').
Advantages: While the new content is being processed and written, the original file is not locked, so others can still read the old content. At least on Linux/Unix boxes, rename() is an atomic operation, so any interruption during the write doesn't touch the original file; only once the temp file has been fully written to disk is it moved into place. There is more interesting reading on this in the comments to http://php.net/manual/en/function.rename.php
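Putting the four steps together, a minimal PHP sketch could look like this (the file names and the edit in step 2 are placeholders for whatever your script actually does):

```php
<?php
// 1) Read the original file
$contents = file_get_contents('original.txt');

// 2) Make your changes/edits (placeholder transformation)
$contents = str_replace('foo', 'bar', $contents);

// 3) Write the result to a temp file on the same filesystem
file_put_contents('original.txt.tmp', $contents);

// 4) Atomically move the temp file over the original
rename('original.txt.tmp', 'original.txt');
```

Keeping the temp file in the same directory as the target is the easy way to guarantee both are on the same filesystem, which is what makes the rename atomic.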
Edit to address comments (too long for a comment):
https://stackoverflow.com/questions/7054844/is-rename-atomic has further references to what you might need to do if you are operating across filesystems.
On the shared lock for the reading, I am not sure why that would be needed, as in this implementation nothing writes to the original file directly. PHP's flock() (which is used to get the lock) is advisory and a little unreliable: other processes are free to ignore it. That's why I am suggesting the rename approach.
The temp file should ideally be named uniquely per process doing the renaming, to make sure no two processes write to the same temp file. This of course does not prevent more than one person editing the same file at the same time, but at least the file will be left intact (last edit wins).
Step 3) & 4) would then become this:
$tempFile = 'original.txt.' . uniqid('', true); // make sure we have a name unique to this process
file_put_contents($tempFile, $contents);        // write the new content to the temp file
rename($tempFile, 'original.txt');              // ideally on the same filesystem
It sounds to me like you will simply be handing these files to another piece of software to actually read them in. If that's the case, use python or ruby or whatever easy to use high level language you have on hand because this program isn't actually IO intensive.
Now, if you are actually reading in the contents of the files yourself to process them on your own, then I would say: if the language you normally use doesn't have any interoperability for making OS-level calls, don't use that language.
Ideally in this case you'd want a high level language with low level facilities.
C# for instance does this well: it allows high-level handling of the simple stuff like processing user input from the command line and organizing the steps of your task, while also allowing OS calls and direct memory management for the highest possible performance (if necessary!). Java may do similarly? Not certain. Haskell is very high level and also has facilities for direct memory manipulation, though Haskell has an extremely high barrier to entry if you don't already know it. C++ is probably the language most commonly used for this type of task for this exact reason: it is a high-level language, and with C at its roots it has complete low-level facilities available.
That said, beware of premature optimization. A program written in a language that isn't your strongest will likely underperform the one you could write in your native language, since you wouldn't be aware of the available optimizations or be using the language appropriately. Furthermore, the only way to know whether the quickest, most robust route of using your main language won't work is to give it a go, which is far quicker than going all in on a language you don't know. So prototype something and see how it performs; if you don't think it does well enough, then whip up a quick prototype in other languages and compare.
Prototyping as a part of your technical solution analysis is an ever important skill for all programmers, so take advantage of this task to practice it. In the scientific method we don't try to prove our hypothesis as that leads to bias, rather we try to disprove them. Your hypothesis is your normal language won't perform well enough for this task, so start by trying to disprove that.
Edit:
Based on your edit, your program is absolutely not IO intensive, so write it in the language you are most comfortable with.
Best Answer
Your understanding is wrong. Several Linux processes can write to the same file at once (and what happens then might be unspecified, except when the processes overwrite different segments of that file). And some other Linux process (e.g. your browser) can read a file which is written to.
You could adopt a convention to avoid that. For example, you could use advisory file locking (adopting the convention that every relevant program should do that), e.g. with flock(2), lockf(3), etc.... But that won't forbid other processes (not following your conventions) to access (or even write to) that file. You might use permission machinery (e.g. by dedicating a system user to your setuid program).
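For completeness, here is what advisory locking looks like from PHP (the file name is a placeholder). Note that it only protects you against other processes that also call flock(); a process that skips the lock can still read or write the file at any time:

```php
<?php
$fp = fopen('shared.txt', 'c+');   // open for read/write, create if missing, don't truncate
if (flock($fp, LOCK_EX)) {         // advisory exclusive lock; blocks until acquired
    // Safe region, but only with respect to cooperating processes
    fwrite($fp, "updated\n");
    fflush($fp);                   // push buffered output to the OS before unlocking
    flock($fp, LOCK_UN);           // release the lock
}
fclose($fp);
```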
See also this & that answers on StackOverflow (some of them mention the deprecated, Linux-specific mandatory locking mechanisms). Read about ACID properties, and consider using some database (SQLite, PostgreSQL, MongoDB, etc...) or some indexed file (gdbm).
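As a sketch of the database route: SQLite via PDO gives you ACID transactions, so concurrent writers are serialized by the database engine instead of by a locking convention you have to enforce yourself (the database file, table, and column names below are made up for illustration):

```php
<?php
$db = new PDO('sqlite:data.db');
$db->setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION);
$db->exec('CREATE TABLE IF NOT EXISTS kv (k TEXT PRIMARY KEY, v TEXT)');

$db->beginTransaction();           // the write below is atomic and isolated
$stmt = $db->prepare('REPLACE INTO kv (k, v) VALUES (?, ?)');
$stmt->execute(['greeting', 'hello']);
$db->commit();                     // readers never see a half-finished update
```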
I don't know Windows, but heard the rumor that it does not allow concurrent access or writes to the same file.