Yes.
All it takes is a single mistake and you'll be kicking yourself for it. You're also in the position to choose which version control system (VCS) is used. If there is any possibility that you'll work in a development team in the future, this is a great time to give yourself hands-on experience with a VCS. SVN and Git (or Mercurial) would be great starting points, and it should only take a couple of hours to grasp the basic commands of each.
Now to debunk the negative points...
1) Additional resources required
The only resource required is disk space. Since history takes up only a small percentage of your total code's size (smaller in Git than in SVN), I don't think this will be an issue. It doesn't cost any money either.
2) Time to setup, get used to it, etc.
There will be time required to learn it, but as mentioned above it's only a few hours per tool. In the long term, it has the potential to save you an enormous amount of time (and a great deal of frustration). Once you've mastered the basics of a VCS, it will be far less finicky than the local backup scheme you have in mind.
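For example, the entire day-to-day Git routine boils down to a handful of commands. Here is a minimal, self-contained sketch (the project name and file are made up, and the demo identity is set inline just so the commit succeeds):

```shell
# Create a throwaway repository so this runs as-is; in practice you
# would run "git init" once at the root of your existing project.
set -e
cd "$(mktemp -d)"
git init -q myproject && cd myproject

# Make a change and record it as a commit -- this is the "backup".
echo 'print("hello")' > main.py
git add main.py
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "Add main script"

# Every commit is a full, recoverable snapshot of the tree.
git log --oneline
```

Each commit can be diffed against, reverted to, or branched from later, which is exactly what ad-hoc "backup copies" of the source tree can't give you.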
@jbc1785: I'm going to take a stab at this based on the following assumptions about your code and your team -
- You currently do not have any code in a version control system and you are new to version control
- Every developer can access and change any file in the code base
- You have multiple developers working on multiple projects in parallel
- The code is not very modular and there are some key files in your code base that need to be touched almost every time you make any change.
In your situation, I would recommend that you start off with a centralized version control system like Subversion (SVN) instead of a distributed one like Git. I have worked in places without version control, and (as you have correctly identified) it is vital to put in place a tool that is simple and provides a straightforward workflow that teaches good version control habits. Since there is a high chance of multiple developers changing the same file at the same time, a centralized tool like Subversion is better placed to reduce the errors and confusion that merge conflicts can cause.
Subversion provides a "lock" feature that allows users to lock the files they are editing. I recommend you use this in the initial stages so that your team gets used to the concepts of checking out and editing files. The lock status reduces the initial confusion by making it clear that a file is currently being worked on by someone, and that conflicts will occur if it is changed by anyone else.
As the team gets used to the concepts of version control you can disable the "lock" functionality and introduce the concept of merging files, "merge conflicts" and their resolution. Usually once a team gets comfortable with the concept of locked files, the necessary caution and communication needed for working on files in parallel gets developed.
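As a concrete sketch of that lock-based workflow, here is a self-contained run against a throwaway local repository (the file name, messages, and `file://` URL are illustrative; a real team would point at a shared server URL instead):

```shell
# Build a scratch SVN repository and working copy so the lock
# commands below can be run as-is.
set -e
cd "$(mktemp -d)"
svnadmin create repo
svn checkout -q "file://$PWD/repo" wc
cd wc
echo "config" > settings.ini
svn add -q settings.ini
svn commit -q -m "Add settings file"

# Take the lock before editing; teammates now see the file as locked.
svn lock -m "Editing settings" settings.ini

# "K" in the status output means this working copy holds the lock.
svn status settings.ini

# Committing releases the lock by default
# (or release it manually with: svn unlock settings.ini).
echo "timeout = 30" >> settings.ini
svn commit -q -m "Tune timeout"
```

The explicit lock/edit/commit cycle is what makes the "one person per file at a time" discipline visible to the whole team.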
Once the team gets used to working with Subversion, a DVCS like Git is the logical next step. It's very easy to move from Subversion over to Git (there are multiple tools); in fact, you can run Subversion and Git in parallel while you migrate. Git scores over Subversion in performance, in its ability to scale, and in the fact that, being distributed, it has no single point of failure. I think the only problem with Git is that its command syntax is a bit cryptic and takes some getting used to.
Update: Since the comments on this post are getting numerous and hidden, I thought I'd fold them into the post itself to make them more visible.
According to @Tamás Szelei and @JanHudec, a DVCS is the way to go, since locking is a bad feature of centralized version control systems and should be avoided. Their contention is that Subversion doesn't have a good merge feature and therefore uses locking to work around that deficiency.
While I accept that merging in Subversion is not as good as in a DVCS like Git, I disagree that using locks and the workflow Subversion encourages is necessarily an evil to be avoided at all costs. I think Subversion has its place, and in certain situations (such as with binary files, mentioned in the other answer to this question) it might even be better suited than a DVCS.
This is a matter of opinion, and I still believe that, given the situation described in the question (listed at the beginning of this answer), Subversion is a good way to instill version control practices and workflows in a team that has never used version control before.
Best Answer
I would keep each library in its own repository. Start keeping track of library versions, for example with `git tag`. A big problem with simply checking each library into each application's repository is that you've essentially done copy and paste, and you therefore gain all the disadvantages that implies: bugs fixed in the copy of the library in one application don't necessarily make it to the copies in other applications.
By having a single place for the library to officially live, you can more easily make sure all your applications can take advantage of bug fixes and new features.
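Tagging a release in the library's own repository might look like this (the library name and version number are made up, and a throwaway repository is created so the sketch runs as-is):

```shell
# Scratch repository standing in for the library's real repo.
set -e
cd "$(mktemp -d)"
git init -q mathlib && cd mathlib
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "Initial library code"

# Annotated tag marking the release; applications pin to this name.
git tag -a v1.2.0 -m "Release 1.2.0"

# List the tags that applications can now check out.
git tag
```

With a shared remote, you would also publish the tag (e.g. `git push origin v1.2.0`) so every application's clone can see it.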
For managing how each application gets a copy for actual build purposes, I'd recommend something similar to what the Cargo package manager does. Include a config file (XML, JSON, TOML, etc.) in each application's repository. In that config file, specify which libraries it needs and which versions of those libraries it requires. You may also want to specify the locations of those libraries.
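As a rough sketch of the idea, here is a plain-text stand-in for such a manifest, plus a dry run of the commands a simple fetch script might derive from it (the file name, library names, URLs, and tags are all invented for illustration):

```shell
# Write a toy manifest: one "name url tag" entry per line. A real
# project would more likely use XML/JSON/TOML as described above.
set -e
cd "$(mktemp -d)"
cat > libs.txt <<'EOF'
mathlib https://example.com/git/mathlib.git v1.2.0
netlib https://example.com/git/netlib.git v0.9.1
EOF

# Dry run: print the clone command for each pinned library version.
# A real tool would execute these (and could cache shared copies).
while read -r name url tag; do
  echo "git clone --branch $tag --depth 1 $url libs/$name"
done < libs.txt
```

Because `git clone --branch` accepts a tag name, each application reproducibly gets exactly the library version its manifest pins.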
Then, either the developer or, preferably, a package manager can read the file, clone the appropriate libraries, check out the correct tag, and build them. You could clone them into a `.gitignore`d subdirectory of the application. A dedicated library directory may be better, since multiple applications that use the same version of the same library can then share one copy.