I'm using Azure DevOps (formerly VSTS), and while this question is old, it may have value for others.
My understanding of best practice is:
- Use semantic versioning
- Separate package groups into distinct repositories where possible
- Don't update a package version if the package isn't changed
  - This is tricky with version data coming from a CI build and *n* package projects in a .sln
- Organize NuGet feeds such that any feed can be used by any number of projects, but any one project uses only one feed (i.e. employ upstream feeds; a quick sketch of registering such a feed follows this list)
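As a minimal sketch of that last point, this is roughly how a consuming machine would register a single Azure Artifacts feed (with upstream sources enabled) as its package source. The organization and feed names here are placeholders, not real values:

```powershell
# Sketch: point consumers at one internal feed that proxies nuget.org via upstream sources.
# "yourorg" and "YourFeed" are hypothetical placeholders for your organization and feed names.
dotnet nuget add source "https://pkgs.dev.azure.com/yourorg/_packaging/YourFeed/nuget/v3/index.json" `
    --name InternalFeed
```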
Only Version Changed Packages
There are legitimate situations for having many projects in a single solution. This has almost always been true.
Avoid the build collection balloon
For those situations, you CAN do as you have stated in your question and create a build for each `.csproj` file in the `.sln` and trigger on the path containing that `.csproj`, but I might not recommend it.
Though I like the idea of discrete identification of why a build exists, I don't like the idea of creating a new build pipeline any time a developer adds a project to the `.sln`. As I'm wearing 2 hats - one as the primary DevOps Engineer and another as a Sr. Developer - I don't like setting myself up for loads of new build pipeline requests from my team if I can help it.
PowerShell to the rescue
It's a common (and accurate) saying:
Just because you CAN do something in PowerShell doesn't mean you should.
However, I don't believe I'm abusing this tool here.
I'm opting to let the build chew on the `.sln` file and build everything, and even create a new package version for all the projects based on the build number.
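For instance, the pack step can look something like the following. This is only a sketch: the solution path, output directory, and the idea that the build number is a valid package version are assumptions matching the example paths later in this answer, not a prescription:

```powershell
# Sketch: pack every project in the solution, stamping the CI build number as the package version.
# Assumes $env:BUILD_BUILDNUMBER looks like a package version (e.g. 1.0.1907.302) and that the
# solution lives under src/sln; MySolution.sln is a placeholder name.
dotnet pack "$env:BUILD_SOURCESDIRECTORY\src\sln\MySolution.sln" `
    --configuration Release `
    --output "$env:BUILD_BINARIESDIRECTORY\packages" `
    -p:PackageVersion=$env:BUILD_BUILDNUMBER
```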
But...
When it comes time to move the packages into the Build Artifact Staging area, I want PowerShell to take the generic Copy Files task to school and only copy packages produced by a `.csproj` that has actually been changed.
How?
By iterating over the commits linked to the build, first using `git cat-file -p $commit` to make sure that the commit isn't a merge (non-merge commits have only 1 parent listed), and then using `git diff-tree --no-commit-id --name-only -r $commit` to get the files changed by those commits. With that data in hand I can index the direct sub-directories of my build trigger directory (read: the directory containing the `.sln`), which should be the project directories, and copy the packages whose names contain the project directory name.
Because our convention for where projects live under a solution is well developed, I can make certain assumptions about what should be done based on the output of the git commands.
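Here is a stripped-down sketch of that script. It assumes the project folders sit directly under the solution directory, that package IDs contain the project folder name, and that walking back a fixed number of commits from `$env:BUILD_SOURCEVERSION` is a good-enough stand-in for "the commits linked to the build" (in a real pipeline you would scope that range more carefully). The paths and the commit depth are placeholders:

```powershell
# Sketch: copy only the packages produced by project directories that actually changed.
# Directory layout, package naming, and the commit range are assumptions for illustration.
$slnDir     = "$env:BUILD_SOURCESDIRECTORY\src\sln"       # the build trigger directory
$packageDir = "$env:BUILD_BINARIESDIRECTORY\packages"     # where the pack step dropped .nupkg files
$staging    = $env:BUILD_ARTIFACTSTAGINGDIRECTORY

# Approximate "commits linked to the build" as the last few commits reachable from the built commit.
$commits = git log --format=%H -n 20 $env:BUILD_SOURCEVERSION

$changedFiles = foreach ($commit in $commits) {
    # A merge commit's cat-file output lists more than one 'parent' line; skip those.
    $parentCount = (git cat-file -p $commit | Select-String '^parent ').Count
    if ($parentCount -le 1) {
        git diff-tree --no-commit-id --name-only -r $commit
    }
}

# The direct sub-directories of the solution directory should be the project directories.
$projectDirs = Get-ChildItem -Path $slnDir -Directory | Select-Object -ExpandProperty Name

foreach ($project in $projectDirs) {
    if ($changedFiles | Where-Object { $_ -like "*/$project/*" }) {
        # Copy only packages whose file name contains the changed project's directory name,
        # e.g. Some.Lib -> Some.Lib.1.0.1907.302.nupkg
        Copy-Item -Path (Join-Path $packageDir "$project.*.nupkg") -Destination $staging
    }
}
```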
Why?
Doing this "Dynamic Artifact Composition" allows the `.sln` to grow or shrink naturally, and no additional changes to the CI are required. New projects are detected and their packages included, while removed projects are simply no longer there to produce a package for the script to copy.
This approach also gives a direct relationship between the artifact produced, which only includes `Some.Lib.1.0.1907.302.nupkg`, and the commits shown in the build summary, which show changes to `src/sln/Some.Lib/logic/ChangedClass.cs`, for example.
Pump the brakes
I mentioned our convention. You need to make sure you can actually do this based on what your project and solution directory structure looks like. Solution authoring gives you a lot of freedom to add projects that live outside the solution directory. While that doesn't rule out this approach, you need to make sure your script is flexible enough for your environment.
Best Answer
If it's a pure function, then you want the code to execute on the same CPU as the app; a NuGet package is a good way of getting that code into your project.
If the function has side effects that you want to be global (e.g. every time this function runs we MUST generate an audit log; or this function must only have a single instance running at a time and must stop when we reach 1000 calls per day; or, more commonly, this function accesses a common database and we need to stop people deleting it), then you need to control the way the function is used and host it yourself, exposing only an API for consumers to call.
The downside of a hosted API is that you have to host the server, probably in a fail-over cluster, so there is a cost involved. Plus, sending the call over the network is slow.
A note on shared code solutions:
It's been suggested that you use shared code instead of NuGet packages for internal projects.
While this is a solution, the downside compared to NuGet is that each application that uses the code will build its own version of the package, so you lose the benefits of versioning, signing, multi-platform builds, etc. that you get from NuGet or other package managers.
I would suggest it's best to treat your internal 'customers' the same as external ones and let them pull all their packages, internal and external, from NuGet.