I've previously worked in an environment where we've had SSO and local developer environments.
The key problem with combining SSO and local developer environments is that the domain cookie must be retrievable when you hit the local dev environment.
Admittedly, part of this has to do with how we set up the environment (and it was many years ago that I was involved with this). It was a polyglot environment: a mixture of static HTML, old Perl CGIs, a WebLogic server for Java EE, an IIS server for some apps that needed to run ASP, and something that engineering ran. The way SSO communicated this information was that a reverse proxy injected the authenticated user name and all of their associated access into the HTTP headers. That way, no matter which server got the request, it could look at the headers and continue from there.
First off, DNS was set up so that each developer had a host name in the 'dev.company.com' domain - thus 'sxu.dev.company.com' (for you) and 'jsmith.dev.company.com' for John Smith. All of these names (CNAMEs) pointed to a single common dev reverse proxy (the reverse proxy had very little on it) that looked at the incoming virtual host name and forwarded the request to the appropriate developer's machine. Note that the interaction with the SSO code was completely contained within the reverse proxy, so no additional libraries needed to be installed anywhere else.
The interaction with infrastructure was simply to add another CNAME to common.dev.company.com whenever we had a new hire; we would then add their name to a file that was run through some m4 macros to generate the proper Apache httpd.conf for the reverse proxy (that did take some work the first time we did it).
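The same generate-from-a-list idea can be sketched in a few lines of Node (standing in for the m4 macros; the developer names, hostnames, and backend IPs below are made up):

```javascript
// Sketch: generate per-developer reverse-proxy vhost entries from a
// simple name -> backend map. All names and addresses are hypothetical.
const developers = {
  sxu: "10.1.2.3",
  jsmith: "10.1.2.4",
};

function vhostFor(name, backend) {
  return [
    "<VirtualHost *:443>",
    `    ServerName ${name}.dev.company.com`,
    "    ProxyPreserveHost On",
    `    ProxyPass        / http://${backend}/`,
    `    ProxyPassReverse / http://${backend}/`,
    "</VirtualHost>",
  ].join("\n");
}

const conf = Object.entries(developers)
  .map(([name, backend]) => vhostFor(name, backend))
  .join("\n\n");

console.log(conf);
```

Adding a new hire is then a one-line change to the map followed by regenerating and reloading the proxy config.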
With this idea in place, you may wish to consider setting up a reverse proxy that just forwards everything to the appropriate developer's machine over http (no matter what type of connection it received). A request to https://sxu.ci.company.com/xyz goes to the reverse proxy, which handles the https and forwards it to your dev box at http://10.1.2.3/xyz. The gotcha to watch for with this approach is that if you have any absolute paths in the code, or the dev's server tries to be aware of where it's installed and which protocol is in use in order to generate links in the same format, things might go a bit wonky (that's a technical term).
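A single developer's vhost on that proxy might look roughly like the following Apache fragment (hostnames, IPs, and certificate paths are all placeholders):

```apache
<VirtualHost *:443>
    ServerName sxu.ci.company.com
    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/ci.company.com.crt
    SSLCertificateKeyFile /etc/ssl/private/ci.company.com.key

    # Terminate TLS here and forward plain http to the dev box.
    ProxyPreserveHost On
    ProxyPass        / http://10.1.2.3/
    ProxyPassReverse / http://10.1.2.3/
</VirtualHost>
```

`ProxyPassReverse` rewrites `Location` headers in redirects back to the external name, which helps with the absolute-URL wonkiness mentioned above (though it does not rewrite URLs embedded in page bodies).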
This avoids the problem of setting up https locally (only one server is running https) or modifying the SSO server to go to a non-https URL; the environment-specific setup lives in one place, the proxy.
I am unsure whether your local dev boxes accept connections from anything other than localhost (restricting to localhost is a valid configuration); if they don't, they will need to be modified, because in this setup the request comes from the proxy rather than the local browser.
I do want to point out a side benefit: other devs can hit your environment. When you want someone to reproduce a bug you don't know how to trigger while you watch the log files, have them hit your server.
The second option is not a non-standard approach: I recall several production SSO systems being configured to allow a limited set of appUrl-type parameters, in part so that an external user could use SSO and land on a specific page once logged in. Adding a mapping of 'dev -> localhost:8000' would allow for this; however, you will need to reconsider the problem that localhost doesn't get company.com cookies. You will likely need your local boxes to identify as localhost.company.com for that to work.
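One way to make a local box answer as localhost.company.com is a hosts-file alias (a sketch; the exact name must match whatever your SSO configuration expects):

```
# /etc/hosts (or C:\Windows\System32\drivers\etc\hosts)
127.0.0.1   localhost localhost.company.com
```

With that alias in place the browser visits http://localhost.company.com:8000, which resolves locally while still being inside the company.com cookie domain.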
One basic rule of source control is that you only need to put manually written artifacts into the repo (the original source files); everything that can be "compiled" or "generated" does not need to be stored there, because it would be redundant. One can optionally store intermediate outputs of a build process in a repo (sometimes also called artifacts) when the steps to reproduce them are not fully automatic, or for caching purposes when the build steps that reproduce the output are slow.
So if you have a fully automated process to generate the production files from your dev source files, you only need to put the dev files into source control (together with the scripts for creating the production files). If not, establish such a process. Make sure no one has to fiddle around with the production files manually after they have been generated from the source. If you want to deploy "directly" from your VCS, make sure you have a deploy script which pulls the dev source files out of source control, does the "gulpification", and transfers the resulting files over to production in one step.
Of course, if you want to use source control also as a "poor man's backup" or as a cache for your production files, you can set up a second repo for this purpose and put a copy of the production file structure there after generation. This repo will not serve for development, and after it is set up, you should not have to maintain it manually. So make sure there are no manual steps involved in making backups into this "prod repo" - include the necessary steps in the deploy script so the backup happens automatically. Think about adding a separate backup script if you cannot prohibit manual changes to production after deployment. That way, you can keep the process maintainable even if you have limited resources.
And yes, this should be a strictly separated, second repo, because it has a completely different purpose and a different life cycle than your dev repo. You use it only for backups, not for source control, which is a different process.
You are completely right that the same artifacts should be deployed to all environments. Especially with npm, dependencies tend to be loosely defined, so an updated dependency could easily break the build just because it was built at a different time. Even if you don't use caret or tilde ranges in package.json, transitive dependencies may be less strict about versioning.
What is suggested in the other answer regarding dotenv will not work unless you combine it with #2 or #3 from this answer.
Basically you have a few options.
1. Switch on the URL to determine the current environment. This is a complete hack. Do not do this!
2. Have some server-side code replace global vars in index.html on each request. The simplest solution depends on what you have installed on your server; it could be PHP, or even something as simple as SSI if you are on IIS. You will want to keep your index.html valid HTML so you can still do local development. SSI is nice in this regard since it works inside comments. If you use PHP, keep your index.html and add an index.php which reads index.html and replaces your vars; an HTML parser works well for this replacement.
3. Set up your release to generate a transformed index.html from the index.html created during build. This transformed index contains a snapshot of the variables for a given environment at the time the release is run on that environment.
If your needs are simply configuration variables, I would recommend #3. If you need something a little more advanced, go for #2.