Bash commands run at terminal but not in Jenkins/Bash Script

bash · documentation · Jenkins · scripting · web-server

I'm using Jenkins to build and deploy HTML documentation to a local Apache web server for our devs to use. When I run the commands in a terminal, everything installs properly (proving the server is set up properly). However, when I run the same commands from within Jenkins, they get called but nothing changes: the script doesn't delete html.zip (line 18), doesn't move the files into /var/www/html/subdir, and doesn't report any errors apart from the curl request failing. I'm a bit lost as to what I'm doing wrong.

I should note that I'm calling this entire script with sudo. I know this is insecure, but I figured I would get the script working first and change that later. To make sure the user doesn't run into permission issues installing the documentation, I've temporarily allowed it to run any command as sudo without a password. Again, I know this is insecure, but in the spirit of eliminating variables, I added it.

Jenkins calls this script like so:
sudo ./documentation-publisher.sh

Permissions on the script are the least restrictive for now, 777. Calling ls -l on the script reports:
-rwxrwxrwx 1 devop developers 1144 Dec 3 10:29 documentation-publisher.sh

I tried a suggestion from this post about explicitly setting the path in the script, but noticed no difference. Using an explicit path to each of the commands doesn't change the behavior either.

#!/bin/sh -x

echo "Archiving generated HTML for transfer..."
cd Example/docs/html/
zip -r html.zip ./
scp -i ~/.ssh/id_rsa html.zip user@my.host.example.com:/home/user 
ssh -i ~/.ssh/id_rsa user@my.host.example.com 

echo "Extracting generated HTML into www directory..."
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
unzip -o html.zip -d ./subdir
rm -r /var/www/html/subdir
mkdir /var/www/html/subdir
cp -r ./subdir/* /var/www/html/subdir/

echo "Cleaning up after file transfer..."
rm -rf ./subdir 
rm ./html.zip 

echo "Testing install..."
curl -f my.host.example.com/subdir/index.html 
exit 

What could I be doing wrong?

Best Answer

It looks like you want to ssh into my.host.example.com and then have the rest of the script run on that host. If that's the case, you need to pass the rest of the script as input to the ssh command. As it is now, ssh takes its input from the script's stdin, which is probably empty: ssh opens a remote shell session, sends it an end-of-file (which closes the session), and then the rest of the script executes locally. To run those commands remotely, pass them as input to ssh via a here-document, something like this:

ssh -i ~/.ssh/id_rsa user@my.host.example.com <<EOF

echo "Extracting generated HTML into www directory..."
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
[...]
exit
EOF
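One thing to watch with here-documents: with an unquoted delimiter (`<<EOF`), the local shell expands `$variables` before the text is ever sent to ssh; quoting the delimiter (`<<'EOF'`) sends the lines verbatim, so expansion happens on the remote side. Here's a small illustration, with plain `sh` standing in for the ssh session:

```shell
# With <<'EOF' (quoted), the inner shell does the expansion;
# with <<EOF (unquoted), the outer shell expands $NAME first.
# 'sh' stands in for 'ssh user@host' purely for demonstration.
NAME=local

sh <<'EOF'
NAME=remote
echo "$NAME"        # prints "remote": expanded by the inner shell
EOF

sh <<EOF
NAME=remote
echo "$NAME"        # prints "local": expanded before sh ever ran
EOF
```

If the commands you send over ssh reference variables that should be evaluated on the remote host, quote the delimiter.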

Secondly, the script has no error checking. In general, it's a good idea to look at each command in a script, and ask yourself what would happen if it failed. Should the rest of the script continue, or would it "run off the rails" and do something silly? For example, if the scp command were to fail (for whatever reason), there wouldn't be any point in running the rest of the script (and it might be destructive, wiping out /var/www/html/subdir and then replacing it with... oops, nothing). You can either run an error check on each individual command's exit status, something like:

scp -i ~/.ssh/id_rsa html.zip user@my.host.example.com:/home/user || {
    echo "Failed to scp the html files to my.host.example.com." >&2
    exit 1
}

... or use the shell's -e option to make it exit the script if any command fails. This option saves you from having to error-check each command individually, but doesn't give informative error messages, and can cause trouble by exiting the script if something that doesn't matter returns an error status for some reason (see BashFAQ #105 for some examples of why -e can cause unexpected behavior). Also, if you do go with this option, make sure you use set -e both at the beginning of the script (or use -xe on the shebang line), and add set -e as the first command sent to the remote computer.
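As a quick local illustration of what `-e` buys you (here `false` stands in for any failing command, such as the scp):

```shell
# Under -e, the shell stops at the first failing command, so the
# potentially destructive steps that follow are never executed.
sh -e <<'EOF'
echo "before the failure"
false                       # stands in for a failing scp, ssh, etc.
echo "after the failure"    # never reached: -e aborted the script
EOF
echo "inner script exited with status $?"
```

Without `-e`, the inner script would shrug off the failure and print both lines; with it, execution stops at `false` and the non-zero status propagates out.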

BTW, the cd command at line 4 is particularly likely to fail, since it uses a relative path. The directory it tries to cd into therefore depends on the working directory the script was started from. Note that this is not necessarily the directory the script is in; it's inherited from the process that started the script, and hence could be almost anything. Jenkins may be starting the script with a different working directory than your interactive shell does, causing it to fail right out of the gate. Well, not actually fail, just run all the remaining commands in the wrong directory (and, because of the ssh input problem, on the wrong host).