The best way to manage document control is to have a clear and well documented configuration management policy. This policy should address versioning of everything related to the project, from external libraries to code to documentation. In the same location, you can also address change management - how you deal with changes to any aspect of the project, from requirements through your code.
The key is to have some kind of system to track changes and revisions. At work, we use SharePoint, but you could probably get away with any version control system.
You probably already keep code in a version control system. The idea is to do the same with documents, although you will probably want to enable locking for documents, since merging binary files (Word documents and so on) is generally not possible.
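As a sketch of why check-out locking matters for binary documents, here is a minimal advisory lock in Python. The `DocumentLock` class and the `.lock` file convention are illustrative inventions for this example, not any real tool's API; real systems (SharePoint check-out, locking in a VCS) implement the same idea more robustly:

```python
import os
import tempfile

class DocumentLock:
    """Advisory check-out lock: only one editor at a time, since
    binary documents (e.g. .docx) cannot be merged after the fact."""

    def __init__(self, document_path):
        self.lock_path = document_path + ".lock"

    def acquire(self):
        try:
            # O_CREAT | O_EXCL fails atomically if the lock file already exists,
            # so two editors cannot both check the document out.
            fd = os.open(self.lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return True
        except FileExistsError:
            return False

    def release(self):
        os.remove(self.lock_path)

doc = os.path.join(tempfile.mkdtemp(), "design_spec.docx")
lock = DocumentLock(doc)
print(lock.acquire())   # first editor checks the document out: True
print(lock.acquire())   # a second check-out attempt is refused: False
lock.release()
```

The point of the design is that conflicts are prevented up front (you wait for the lock) rather than resolved after the fact (you merge), which is the only workable policy for opaque binary formats.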
The official versioned document should be kept on the corporate intranet or someplace with appropriate security and access control. At work, we use SharePoint, which handles versioning as well. If you use some other technology or tools, you might be able to leverage that (perhaps with plugins or extensions). Just make sure everyone knows where to go to find the latest and greatest official documentation.
It's also necessary to keep track of not only the current version identifier but also a revision history, inside the document itself. That way, given any two printed copies, it is easy to "diff" them and figure out which sections were modified between them.
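For electronic copies, the same comparison can be automated. Here is a minimal sketch using Python's standard-library `difflib`; the requirement text is made up purely for illustration:

```python
import difflib

# Two revisions of the same (plain-text) document section.
rev_a = """1. Introduction
The system shall support 100 concurrent users.
2. Scope
""".splitlines()

rev_b = """1. Introduction
The system shall support 500 concurrent users.
2. Scope
""".splitlines()

# unified_diff shows exactly which lines changed between revision A and B.
for line in difflib.unified_diff(rev_a, rev_b,
                                 fromfile="Rev A", tofile="Rev B",
                                 lineterm=""):
    print(line)
```

The unchanged lines appear as context, removed lines are prefixed with `-`, and added lines with `+`, which is exactly the information a revision-record table captures by hand for printed copies.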
Just to describe what, specifically, our documents look like:
The first page is a cover page. The corporate or project logo is in the top left corner. The document ID, last modified date, and revision ID are in the top right corner. Classification markings are centered in the header and footer. The document title is roughly centered on the page, and below the title come all applicable copyright and distribution notices.
On all other pages, the header contains classification markings in the center and the document name in the right corner. The footer contains the document ID and revision ID in the left corner, the classification markings in the center, and the page number in the right.
The next page is the approval sign off page. It contains the document title and ID number, along with the signatures and dates of the preparer and approvers. On official versioned copies, the document actually has an image with the signatures in it.
The third page is the revision record, which again has the document title and ID, followed by a table listing the revision ID (usually a letter); any change requests associated with the revision, for tracing back to defect reports (yes - we file defects against documents); the date of the revision; and the pages/sections modified.
Short answer:
IEEE 830-1998 is not a standard; it is a recommended practice on how to write an SRS, in the style of 1998.
I can't find what it was superseded by (even with IEEE's advanced search :( )
But I guess it's because the whole method on how we specify requirements has changed drastically in recent years.
So, from here on, I'll try to answer a slightly modified question:
What is the industrial best practice / What are the recommended best practices on writing SRSs in the style of 2012?
On classical methods:
Usually I follow the IEEE 1471 recommendations for software documentation, although that too was recently superseded, by ISO/IEC 42010. This is a very heavyweight kind of documentation, mainly used for handovers, although it does mostly contain the requirements (it's chapter 7 in the new ISO-style document).
A moderately good book on formal documentation is Documenting Software Architectures, a surprisingly good one is the old ICONIX book, and an old classic is Cockburn's Writing Effective Use Cases.
On how it is actually done in the industry today:
Truth be told, formal project documentation, especially requirements documentation, was mostly killed off in the age of Agile, since the Agile Manifesto discourages formal documentation. There is no single, large formal specification; instead there are so-called user stories, product backlogs and the like. Because development is iterative, only a handful of features are specified, informally, for each cycle of 2-4 weeks. A renowned book here is User Stories Applied.
There are also so-called "executable" specifications, which are formal, since they are essentially domain-specific languages (DSLs) for testing. They are no better or worse than UML's OCL: perhaps easier to grasp, but also less scientific. Most of them are called BDD frameworks; examples include FitNesse, Cucumber and Jasmine - you'll find a big bunch of these. There are also renowned books on BDD and TDD in general.
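To make the idea concrete, here is a framework-free sketch of an executable specification in plain Python; the Given/When/Then comments mirror the structure that Cucumber-style tools formalize into a DSL. The `Account` example and its methods are invented for illustration:

```python
# User story: "As a customer, I can withdraw cash if my balance covers it."

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount

def scenario_withdrawal_within_balance():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the customer withdraws 30
    account.withdraw(30)
    # Then the remaining balance is 70
    assert account.balance == 70

def scenario_withdrawal_over_balance_is_refused():
    # Given an account with a balance of 100
    account = Account(balance=100)
    # When the customer tries to withdraw 150, Then it is refused
    try:
        account.withdraw(150)
        assert False, "expected the withdrawal to be refused"
    except ValueError:
        pass

scenario_withdrawal_within_balance()
scenario_withdrawal_over_balance_is_refused()
print("all scenarios pass")
```

The scenarios double as requirements (each reads as a user-visible behavior) and as tests (each fails loudly if the behavior regresses), which is exactly what makes the specification "executable".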
Also, specification by software engineers has largely been superseded by UX design, including information architecture and interaction design, so nowadays it is often not done by people who can actually code, which can sometimes lead to conflict. This is a not-so-bad example of what one looks like (it's not a standard!), but you'll find a lot more inside the UX / interaction community - there's even a whole separate Stack Exchange site for them. They have their own standards, recommended best practices, and so on.
But what if you want to stick with the old methods, eg. for university work?
In general, try to adhere to IEEE 830 (I can't find on their webpage what it was superseded by - IEEE was never good at this, and I guess it's because it no longer matters, unfortunately), and make sure you record genuinely useful information (e.g., I don't think a single actor stick figure pointing at a single bubble with a verb-subject label counts as useful) - information from which the overall goals of the users, the overall range of users, and the overall methods of usage can be reconstructed at any time.
Why do you recommend books? Why don't you show me standards instead?
Again, I guess this document was "superseded" because today we have a bit of chaos around requirements specification: there are many, many viewpoints on how it should be done.
There is no single authority who is able to tell you: "this is how specifications should be made". There are best practices, and I tried to provide you with a representative list of documents and directions, albeit by no means complete, and perhaps personally biased.
At the end of the day, what matters is whether the document you create can fulfill the goals of everyone who will ever read it. What people want to see, and what they need to know in order to understand the requirements, are pretty well described in these books, and those descriptions are best practices in their own right - albeit within much smaller communities than the single, undivided IT community we perhaps had in 1998.
I'm more of a CMMI fan, but that might be because I've gone through the pain of getting to level 3 -- on what was originally a research project. "If we knew what we were doing we wouldn't call it research." That's a bit counter to the concepts behind any of those software quality / process improvement efforts. I've also been with organizations that became ISO 9001 certified.
Both CMMI and ISO can be a bit (more than a bit!) burdensome. Getting certified at CMMI-DEV level 3 is costly, in dollars and in time. Quality is not free. (At least that silly management mantra went out the door.) IMO, CMMI level 2 is a reasonable target for most organizations; CMMI 3 is where you start to need to be very sure the product is right. CMMI 4 and beyond: I wouldn't want to work there. The stuff I work on, if done wrong, could lead to hundreds-of-millions-of-dollars catastrophes. Research-project quality, or even CMMI 2, was not good enough. CMMI 4 was (thankfully) deemed too counterproductive.