I'm not 100% sure on the positives, but here are a few negatives:
1. You often end up adding dependencies on 3rd party servers/endpoints that might not be stable. I've had it happen with bower that the repo of some dependency was deleted or moved. So a new dev comes along, clones my repo, types `bower install`, and gets errors for inaccessible repos. If instead I had checked the 3rd party code into my repo, that problem disappears. This is solved, as the OP suggests, if you're pulling deps from copies kept on a server you run.
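For example, a `bower.json` can point each dependency at a mirror you control instead of the upstream repo; bower accepts git URLs with a `#<ref>` suffix. The names and URLs below are made up for illustration:

```json
{
  "name": "my-app",
  "dependencies": {
    "jquery": "https://git.example.internal/mirrors/jquery.git#2.1.4",
    "three.js": "https://git.example.internal/mirrors/three.js.git#r68"
  }
}
```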
2. Harder for noobs. I work with art students with very little command line experience. They make art with Processing, Arduino, and Unity3D, and get by with very little tech knowledge. They wanted to use some HTML5/JavaScript I wrote. The steps required because of bower:

1. Download a ZIP of the repo from GitHub (notice that option is on the right of every repo on GitHub; they use the ZIP because they don't know git)
2. Download and install Node (so we can run npm to install bower)
3. Install git or msysgit (because bower requires it and it's not installed on many students' machines)
4. Install bower (`npm install -g bower`)
5. Run `bower install` (finally, to get our dependencies)

Steps 2-5 can all be deleted if we just check the files into our GitHub repo. Those steps likely sound super easy to you and me. To the students they were very confusing; they wanted to know what all the steps were and what they were for, which might be good learning, but it was entirely orthogonal to the class topic and so likely quickly forgotten.
3. It adds another step when pulling. It's happened many times that I do a `git pull origin master`, then test my code, and it takes 5 to 10 minutes to remember I needed to type `bower install` to get the latest deps. I'm sure that's easily solved with some pull hook script (see the sketch below).
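For example, a minimal `post-merge` hook (the file name and the `bower.json` check are my assumptions; adapt to your setup) could re-run bower after every pull:

```sh
#!/bin/sh
# .git/hooks/post-merge -- runs after a `git pull` that results in a merge.
# Only reinstall when bower.json actually changed, so ordinary pulls stay fast.
if git diff --name-only ORIG_HEAD HEAD | grep -q '^bower\.json$'; then
  bower install
fi
```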
4. It makes git branching harder. If 2 branches have different deps, you're kind of screwed. I suppose you can type `bower install` after every `git checkout`. So much for speed.
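Again, a hook can paper over this. A rough sketch of a `post-checkout` hook, assuming bower is on the PATH and deps are listed in `bower.json`:

```sh
#!/bin/sh
# .git/hooks/post-checkout -- git passes: $1 previous HEAD, $2 new HEAD,
# $3 flag (1 = branch checkout, 0 = file checkout).
# Reinstall deps only when switching branches changed bower.json.
if [ "$3" = "1" ] && ! git diff --quiet "$1" "$2" -- bower.json; then
  bower install
fi
```

It works, but every branch switch that touches deps now costs a network round trip.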
As for your positives, I think there are counterexamples to each of them:
> Eases the process of distributing and importing shared modules, especially version upgrades.
Vs what? It's certainly not easier to distribute. Pulling one repo instead of 20 is easier and less likely to fail. See #1 above.
> Removes shared modules from source control, speeding and simplifying checkouts/check ins (when you have applications with 20+ libraries this is a real factor).
Conversely, it means you're dependent on others for fixes. If your deps are pulled from a 3rd party source and you need a bug fixed, you have to wait for them to apply your patch. Worse, you probably can't just take the version you want plus your patch; you'd have to take the latest, which might not be backward compatible with your project.

You can solve that by cloning their repos separately and pointing your project deps at your copies, then applying any fixes to your copies. Of course, you could also do that if you just copied the source into your repo.
> Allows more control or awareness of what third party libs are used in your organization.
That seems arguable. Just require devs to put 3rd party libraries in their own folder under `<ProjectRoot>/3rdparty/<nameOfDep>`. It's just as easy to see what 3rd party libs are used.
I'm not saying there are no positives. The last team I was on had > 100 3rd party deps. I'm just pointing out it's not all roses; I'm evaluating whether I should get rid of bower for my own needs, for example.
If I understand your situation correctly, you have many A type Maven artifacts, and these mostly all declare dependencies on a few B type artifacts.

The A type artifacts are the product of your development group. The B type artifacts that A depends on come from an external group, and your runtime environment already has the B artifacts bundled into it.

The problem is that when the version of a B artifact changes on the server, you do not want the hassle of manually updating many POM files with all of the dependency changes that happened in B.
Your first suggestion I think is the best approach:
> I just thought about creating some kind of artifact, that contains all the dependencies that plugins like A could have.
You can absolutely do this, and it is an accepted practice. What you do is define a parent POM file that declares all of the dependencies an A type artifact could need. This parent POM can be deployed to your Maven repository as its own unique group/artifact/version module, and then referenced in your A type projects' POM files using the `<parent>` element. The benefit is that you can update all of the B type plugin versions in the parent project and re-version just the parent POM. Your A type POMs then only need to update the parent version number when they should pick up the new dependencies. For more information on parent POM projects see below.
https://stackoverflow.com/questions/14400642/maven-parent-pom
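A rough sketch of what that could look like (the group and artifact IDs here are made up for illustration):

```xml
<!-- Parent POM, deployed as its own module, e.g. com.example:a-plugins-parent:1.0.0 -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>a-plugins-parent</artifactId>
  <version>1.0.0</version>
  <packaging>pom</packaging>

  <dependencies>
    <!-- Every B type dependency an A type plugin could need -->
    <dependency>
      <groupId>com.external</groupId>
      <artifactId>b-artifact</artifactId>
      <version>2.3.1</version>
      <scope>provided</scope>
    </dependency>
  </dependencies>
</project>
```

An A type plugin's POM then just inherits from it:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <parent>
    <groupId>com.example</groupId>
    <artifactId>a-plugins-parent</artifactId>
    <version>1.0.0</version>
  </parent>
  <artifactId>a-plugin-one</artifactId>
  <version>0.1.0</version>
</project>
```

When the B versions change, you bump them once in the parent, release a new parent version, and each A project picks that up by changing one `<parent><version>` line.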
> The really bad thing (even if this would work) is that plugin A would have dependencies it does not really need.
But you see this isn't actually a bad thing!
<dependency>
    ...
    <scope>provided</scope>
</dependency>
Using the provided scope on a dependency on a B plugin basically tells Maven that it will ONLY use this artifact for compiling sources and for unit testing. It will ignore the dependency when packaging the build artifact, on the assumption that the JDK or the application container will provide it for the plugin at runtime.
http://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html
If some of the dependencies aren't being used, it won't matter. They won't be bundled; they are only downloaded and used to compile and to run test cases.
Best Answer
Moving third party build dependencies into your repository is perfectly fine, and even has some advantages (e.g. no version mismatches, tracked upgrades). But doing so should not require touching your code.
C and C++ use an include path to determine where to find the headers for libraries that are included like so (the header name below is just an example):
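```cpp
// Illustrative only -- "somelib" stands in for whatever third-party
// dependency you have vendored into the repository.
#include <somelib/somelib.h>           // resolved by searching the include path
#include "third_party/somelib/api.h"   // or addressed relative to a configured root
```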
Depending on your toolchain and build system, you'll have to follow different steps to configure this. Regardless, the best approach is to create a "third_party" folder with subfolders containing each dependency, then add each of those folders to your include path so that your existing include directives keep working.
In CMake, you would use `target_include_directories` to accomplish this. In a plain Makefile with GCC or Clang, you would add a `-I` flag for each folder.
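A rough sketch of the CMake side, assuming the dependency was vendored under `third_party/somelib` (the target name and paths are illustrative):

```cmake
# CMakeLists.txt (sketch): point the compiler at the checked-in headers
add_executable(my_app src/main.cpp)

# Makes `#include <somelib/somelib.h>` resolve to the vendored copy.
target_include_directories(my_app PRIVATE
    ${CMAKE_CURRENT_SOURCE_DIR}/third_party/somelib/include
)
```

The plain-Makefile equivalent is just an extra include flag on the compile line, e.g. `CXXFLAGS += -Ithird_party/somelib/include`.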