Ah. The easiest thing to do would be to avoid deleting the services before every deployment. In my experience, services tend to be very long lived; they provide a nice, fixed way to refer to things without having to worry about dynamic values for ports, IPs, DNS, etc.
In the Kibana service spec, remove the nodePort entry from the port configuration so that Kubernetes can assign the port itself; that's one less thing to think about. Don't set values for loadBalancerIP or externalIPs either. The same rules apply to the other services.
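As a sketch, a minimal Kibana Service along those lines might look like this (the names, port, and LoadBalancer type are assumptions based on a typical setup, not your actual config):

```yaml
# Hypothetical Kibana Service: no nodePort, loadBalancerIP, or externalIPs,
# so Kubernetes picks and retains those values on its own.
apiVersion: v1
kind: Service
metadata:
  name: kibana
  namespace: logging
spec:
  type: LoadBalancer
  selector:
    app: kibana
  ports:
    - port: 5601
      targetPort: 5601
```

Because nothing here is pinned to a dynamic value, the Service can sit untouched while the Deployments behind it come and go.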
For the ELK stack config files (I don't recall off the top of my head what they look like), refer to the other components by their service names: there's no need for hardcoded IPs or anything like that. (No idea if you were doing this, but just in case.)
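For instance, pointing Kibana at Elasticsearch by service name might look like this (a sketch; the env var name assumes a recent Kibana image — older versions used ELASTICSEARCH_URL instead):

```yaml
# Hypothetical container snippet from the Kibana Deployment: the hostname
# "elasticsearch" resolves through the Service's cluster DNS name, so no
# pod IPs ever appear in the config.
containers:
  - name: kibana
    image: docker.elastic.co/kibana/kibana:7.17.0
    env:
      - name: ELASTICSEARCH_HOSTS
        value: http://elasticsearch:9200
```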
Allow the services to be created; get the load balancer's external IP and plug it into your DNS config.
You can continue using namespaces if that's how you prefer to do things, but don't delete the whole namespace to clear out the Deployments for ELK components.
Split your ELK stack spec into separate files for Deployments and Services (technically, I'm not sure if this is required; you might be able to get away with keeping everything in one file), so that you can use:
kubectl delete -f logging-deployments.yaml
kubectl apply -f logging-deployments.yaml
or a similar command to update the deployments without bothering the services.
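The deployments file might be laid out something like this (heavily abbreviated sketch; file and resource names are just examples):

```yaml
# logging-deployments.yaml (sketch): Deployments only, separated by ---,
# so `kubectl delete -f` / `kubectl apply -f` on this file never touch Services.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: elasticsearch
  namespace: logging
spec:
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:7.17.0
---
# ...kibana and logstash Deployments follow the same pattern...
```

A matching logging-services.yaml would hold only the Service objects.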
If you need (or prefer) to delete the ELK stack in another manner before creating a new one, you can also use:
kubectl -n logging delete deployments --all
to delete all of the deployments within the logging namespace. To me, this option seems a little more dangerous than it needs to be.
A second option would be:
kubectl delete deployments kibana
kubectl delete deployments elasticsearch
kubectl delete deployments logstash
if you don't mind the extra typing.
Another option would be to add a new label, something like:
role: application
or
stack: ELK
to each of the Deployment specs. Then you can use:
kubectl delete deployments -l stack=ELK
to limit the scope of the deletion... but again, this seems dangerous.
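The label would go in each Deployment's own metadata, something like this (sketch; names are assumptions):

```yaml
# Adding a shared label to each ELK Deployment so that
# `kubectl delete deployments -l stack=ELK` matches all three.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  namespace: logging
  labels:
    app: kibana
    stack: ELK
```

Note that the selector matches on the Deployment's own labels, so the label has to be on the Deployment metadata, not just the pod template.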
My preference would be, unless there is some overriding reason not to, to split the config into two or more files and use:
kubectl create -f svc-logging.yaml
kubectl create -f deploy-logging.yaml
kubectl delete -f deploy-logging.yaml
kubectl apply -f deploy-logging.yaml
...
etc
in order to help prevent any nasty typo-induced accidents.
I break things down a little bit further, with a separate folder for each component containing its Deployment and Service, nested together as makes sense (easier to keep in a repo, and easier if more than one person needs to make changes to related but separate components), and usually with bash create/destroy scripts that double as a kind of documentation... but you get the idea.
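As a sketch of that layout (folder and file names here are just examples):

```
logging/
  elasticsearch/
    deployment.yaml
    service.yaml
  kibana/
    deployment.yaml
    service.yaml
  logstash/
    deployment.yaml
    service.yaml
  create.sh    # kubectl create -f each service.yaml, then each deployment.yaml
  destroy.sh   # kubectl delete -f each deployment.yaml; services left alone
```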
Set up this way, you should be able to update any or all deployment components without breaking your DNS/loadbalancing configuration.
(Of course, this all sort of assumes that having everything in one file is not some kind of hard requirement... in that case, I don't have a good answer for you off the top of my head...)
I do not think this is possible from the Portal. I tested modifying the URL in the portal, both before and after validation.
If you modify the URL, the portal redirects to the Marketplace.
https://portal.azure.com/#create/test/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fvm-disk-performance-meter%2Fazuredeploy.json
After the template was validated, I could modify the URL to https://portal.azure.com/#create/test and the deployment still gets created with the name Microsoft.Template.
The template itself also does not affect the deployment name. If you need to change the deployment name, you will need to deploy from PowerShell or the API.
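With PowerShell, the deployment name is simply whatever you pass to -Name (a sketch; the resource group name is a placeholder, and the template URL is the quickstart template from above):

```powershell
# Hypothetical: deploy with an explicit name instead of "Microsoft.Template"
New-AzResourceGroupDeployment `
  -Name "my-custom-deployment-name" `
  -ResourceGroupName "my-rg" `
  -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/vm-disk-performance-meter/azuredeploy.json"
```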