Version Control – Should Minified CSS be Stored in Git?

css, deployment, git, version-control

I use Gulp to generate minified CSS from my Sass code for a project I'm working on.
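For reference, the build step looks roughly like this (the plugin choices here are illustrative rather than my exact setup):

    // gulpfile.js: a rough sketch of the Sass -> minified CSS pipeline
    // (gulp-sass and gulp-clean-css are illustrative plugin choices)
    const { src, dest } = require('gulp');
    const sass = require('gulp-sass')(require('sass')); // gulp-sass v5 requires an explicit compiler
    const cleanCSS = require('gulp-clean-css');

    function css() {
      return src('src/scss/**/*.scss')            // every Sass source file
        .pipe(sass().on('error', sass.logError))  // compile Sass to CSS
        .pipe(cleanCSS())                         // minify the compiled CSS
        .pipe(dest('dist/css'));                  // write the build output
    }

    exports.css = css;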

I'm wondering whether it's considered best practice to regenerate this minified CSS when pushing live from Git…

or

to store the minified CSS files in Git, so they're automatically pushed live to production without further work on the server's part?

I'd appreciate people's ideas on this. Thanks!

Best Answer

"It depends." For normal development tracking, no. For cloud and DevOps deployments, however, it's often convenient, or even required.

Most of the time, @ptyx is correct. Indeed, his "no" could be stated somewhat more emphatically. Something like "No. No! OMG NO!"

Why not store minified or compressed assets in a source control system like Git?

  1. They can be regenerated almost trivially, on the fly, by your build process from the source code. Storing compressed assets means storing the same logical content twice, which violates the "don't repeat yourself" (DRY) principle.

  2. A less philosophical but more practical reason is that minified / optimized assets delta-compress very poorly in Git. Source control systems work by recognizing the changes ("deltas") between different versions of each file stored. To do that, they "diff" the latest file against the previous version and use these deltas to avoid storing a complete copy of every version of the file. But the transformations made in the minify/optimize step often remove the similarities and waypoints the diff/delta algorithms rely on. The most trivial example is removing line breaks and other whitespace; the resulting asset is often just one long line. Many parts of the Web build process (tools like Babel, UglifyJS, Browserify, Less, and Sass/SCSS) aggressively transform assets, and their output is highly sensitive to its input: small input changes can lead to major changes in output. As a result, the diff algorithm will often believe it sees an almost entirely different file every time, and your repositories will grow more quickly as a result. Your disks may be large enough and your networks fast enough that this isn't a massive concern, and it might even be tolerable if there were real value in storing the minified/optimized assets twice; but given point 1, the extra copies may be just 100% pointless bloat. (A concrete illustration follows this list.)
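To make point 2 concrete, consider a contrived example. Suppose only one link color changes in the Sass source; in the source that's a one-line diff, but in the minified output the entire (and only) line changes, so a line-oriented diff reports the whole file as different:

    /* minified output, version 1: everything on one line */
    body{margin:0;font:16px/1.5 sans-serif}a{color:#06c;text-decoration:none}

    /* minified output, version 2: only the link color changed in the source,
       yet a line-based diff flags the single line, i.e. the whole file, as changed */
    body{margin:0;font:16px/1.5 sans-serif}a{color:#0a58ca;text-decoration:none}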

There is a major exception to this, however: DevOps / cloud deployments. A number of cloud vendors and DevOps teams use Git and similar tools not just to track development updates, but also to actively deploy their applications and assets to test and production servers. In this role, Git's ability to efficiently determine "which files changed?" is just as important as its more granular ability to determine "what changed within each file?" If Git has to store a nearly full copy of each minified/optimized asset on every change, that takes a little longer than it otherwise would, but it's no big deal: Git is still doing excellent work by avoiding a copy of every file in the project on each deploy cycle.

If you're using Git as a deployment engine, storing minified/optimized assets in Git may switch from "no!" to desirable. Indeed, it may even be required, for example if you lack robust build / post-processing opportunities on the servers or services to which you deploy. (How to segment development and deployment assets in that case is a separate can of worms. For now, it suffices to know it can be managed in several ways, including with a single unified repository, multiple branches, subrepositories, or even multiple overlapping repositories; one common branch-based pattern is sketched below.)
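For instance, the multiple-branches approach might look roughly like this sketch (the branch and task names are hypothetical, and it assumes dist/ is normally git-ignored):

    # Sketch: keep built assets off the main branch, but commit them on a deploy branch.
    git checkout -b deploy          # branch dedicated to deployable artifacts
    npx gulp css                    # rebuild the minified CSS (hypothetical task name)
    git add -f dist/                # -f overrides the .gitignore entry for dist/
    git commit -m "Build assets for deployment"
    git push origin deploy          # the deployment target pulls/receives this branch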
