When dealing with multiple projects, I usually switch common library code from directly editable source to versioned binaries. This way, I can have releases of these libraries, and each dependent project can choose whether or not to update. You generally don't want to force an update on a project if you can avoid it, and breaking a build is certainly the worst way to force an update. Stability comes at the price of slower edits, but at the end of the day, the time you lose managing your library is far less than the time and energy you would lose dealing with unhappy programmers whose builds you broke.
This method implies that the common code is relatively stable, and that's where unit testing and even TDD really pay off. Their return on investment in small projects is debatable, but in my opinion they are really worth it when you're dealing with code that is used across multiple projects, because you cannot just go around and compile/test every project that uses it.
Regarding SVN: if you go the binary way, each library should become a distinct project, and each release implies a tag, and optionally a branch if maintaining the old release is preferable to updating. Avoid the monster library that rules them all: you would have to release it every time there's a minor change, making it impossible for projects to follow the versions. It takes a bit of architecture to organize your libraries if they have dependencies on each other.
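A minimal sketch of that release workflow, using a throwaway local file:// repository so the commands are runnable; in real use the URLs would point at your SVN server, and the library name, versions, and layout here are purely illustrative:

```shell
set -e
# Create a scratch repository to demonstrate against.
WORK=$(mktemp -d)
svnadmin create "$WORK/repo"
URL="file://$WORK/repo"

# One library = one project, with the usual trunk/tags/branches layout.
svn mkdir -q -m "layout" --parents \
    "$URL/libfoo/trunk" "$URL/libfoo/tags" "$URL/libfoo/branches"

# A release is a cheap server-side copy into tags/.
svn copy -q -m "Release libfoo 1.2.0" \
    "$URL/libfoo/trunk" "$URL/libfoo/tags/1.2.0"

# If the release line needs maintenance, branch it as well.
svn copy -q -m "1.2 maintenance" \
    "$URL/libfoo/tags/1.2.0" "$URL/libfoo/branches/1.2.x"

svn ls "$URL/libfoo/tags"
```

Because tags are just copies, they cost almost nothing server-side, so tagging every library release is cheap even with many small libraries.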
As for the binaries, some people commit them to SVN, some people use scripts to download them from a network repository. It really depends on the binaries' size, the update frequency, your network architecture, and your personal preferences.
That's my experience with this specific issue, but I have mainly worked in C/C++ and .NET, so there might be other ways to combine editability and stability that I'm not aware of :)
NuGet would likely be my answer. After working on the precursor (Nu), which just sat on top of RubyGems, I can say that having resources behind something helps. At one job I ended up with a patched version of the executable because we had a local repository alongside the remote ones. I'm not sure if they have fixed that problem yet, but since they are accepting patches, it should be easy to correct. On all the open-source projects in the .NET arena I work on, we now do everything through NuGet. It's much easier, though publishing up the chain does take a little while longer.
OpenWrap is an option, but everything needs to be built, which is kind of a pain. If you can't get a project building right, it takes a while to deal with. OpenWrap has been trying to solve this for a long time, and it's still really awkward. Binary-only distribution is a lot easier (sometimes)...
The other two I'm not familiar with. So I can't comment on them a ton.
Best Answer
Externals can be used for what you are trying to do, but first you should check whether you really need them. For our team, it is mostly sufficient to have our shared modules in a folder named shared_libs, and let all consuming projects reference that folder by a relative path. However, you will need externals if you want the libs/shared modules in different local folders below your project folders. That way, you can handle situations where you need

Project A to reference shared module L version/revision 1.0
Project B to reference shared module L version/revision 1.1

both at the same time in your local development environment. With externals, you not only control where the different copies are placed in your development environment, you also get fine-grained control over which revision, branch, or tagged version of the lib/module is placed there.
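That two-projects-two-versions setup can be sketched as follows, again against a throwaway local repository (module and project names are illustrative; real externals would reference your server's tag URLs):

```shell
set -e
# Scratch repository with two tagged versions of a shared module.
WORK=$(mktemp -d)
svnadmin create "$WORK/repo"
URL="file://$WORK/repo"

svn mkdir -q -m "layout" --parents \
    "$URL/libs/module_l/tags/1.0" \
    "$URL/libs/module_l/tags/1.1" \
    "$URL/project_a" "$URL/project_b"

# Project A pins module L 1.0; "^/" means "relative to the repo root".
svn checkout -q "$URL/project_a" "$WORK/project_a"
svn propset -q svn:externals "^/libs/module_l/tags/1.0 module_l" "$WORK/project_a"
svn commit -q -m "Pin module L 1.0" "$WORK/project_a"

# Project B pins module L 1.1 -- both coexist on the same machine.
svn checkout -q "$URL/project_b" "$WORK/project_b"
svn propset -q svn:externals "^/libs/module_l/tags/1.1 module_l" "$WORK/project_b"
svn commit -q -m "Pin module L 1.1" "$WORK/project_b"

# Updating pulls each pinned external into its working copy.
svn update -q "$WORK/project_a" "$WORK/project_b"
```

Pinning to a tag (rather than to trunk) is what gives each project a stable, explicitly chosen version of the shared module.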
For us, however, it is mostly sufficient to have different versions of the same lib in our production environment only (we use some externals, but not for the majority of shared libs). For testing and deployment, most of our build scripts pull the compiled results into a test or deployment folder, without the help of SVN.
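The "build scripts pull the compiled results, without SVN" step amounts to something like this sketch (paths and the library name are illustrative stand-ins):

```shell
set -e
# The build script, not SVN, assembles the test/deployment folder
# from compiled outputs.
WORK=$(mktemp -d)
mkdir -p "$WORK/build/Release" "$WORK/deploy"
touch "$WORK/build/Release/SharedLib.dll"   # stand-in for a compiled binary
cp "$WORK"/build/Release/*.dll "$WORK/deploy/"
ls "$WORK/deploy"
```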
The drawback of using externals is increased administrative effort: you may need to explicitly update to a newer version of a lib in each project which uses it. To edit and commit changes to your libraries' source code directly in the external's checkout folder, you will have to use SVN's peg-revision mechanism (or maybe that is exactly what you want to prohibit). And if your shared libs depend on other shared libs, perhaps recursively, you have to make sure the relative paths of all those libs in your local file system stay consistent with each other. You will have to decide whether this is what you need and/or want.
Concerning your second question: SVN is not really the right tool for tracking dependencies. The only reliable place where the dependencies are stored is your make files / project files / build scripts / linker files (whatever you use for your build process), so my first suggestion would be to scan those files with a script to collect dependency information. Nevertheless, if you use SVN externals consistently for every shared-module dependency, I am sure you can write a script which iterates over the projects in your repo(s) and collects which externals are defined there. And I do not think you will need a hook for that; just use

svn propget svn:externals

for this, as shown here.
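A self-contained sketch of that collection step, using a recursive propget over a throwaway local repository (names are illustrative; point ROOT at your real repository URL):

```shell
set -e
# Scratch repository with one project that declares an external.
WORK=$(mktemp -d)
svnadmin create "$WORK/repo"
ROOT="file://$WORK/repo"

svn mkdir -q -m "layout" --parents "$ROOT/project_a" "$ROOT/libs/module_l/trunk"
svn checkout -q "$ROOT/project_a" "$WORK/wc"
svn propset -q svn:externals "^/libs/module_l/trunk module_l" "$WORK/wc"
svn commit -q -m "Add external" "$WORK/wc"

# -R recurses over the whole tree, printing each path together with its
# svn:externals definition -- easy for a script to parse into a
# dependency list, with no server-side hook required.
svn propget svn:externals -R "$ROOT"
```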