Get a URL with variables via Linux Command Line

Tags: curl, url, wget

Sorry if this seems like a very simple question, but I have been searching for hours without finding a solution and have exhausted my very limited knowledge of Linux. 🙂

I need to use some command to request the following URL from a CentOS Linux command line.

http://www.example.com/?sm_command=build&sm_key=kdfs7kj6dgo3sigj34df1

I have attempted to use wget and curl with various options enabled, with both single and double quotes around the URL. Every attempt pulled only the home page and dropped everything from the question mark onward. The URL runs successfully in a browser and simply displays "DONE."

Ultimately, this will end up in a cron job that runs every so often.
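For the cron part, once the quoting is sorted out, the same quoted command can go straight into a crontab. This is only a sketch: the every-15-minutes schedule is an example, and the -q / -O /dev/null flags (quiet mode, discard the downloaded body) are my additions so cron doesn't accumulate index.html files or send mail on every run.

```
# m h dom mon dow  command   -- example: run every 15 minutes
*/15 * * * * wget -q -O /dev/null 'http://www.example.com/?sm_command=build&sm_key=kdfs7kj6dgo3sigj34df1'
```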

Please advise me on the proper syntax to accomplish this.

Thanks so much in advance for the assistance.

Best Answer

It should work with single quotes; if not, escape the & with \&.

The behavior depends on the shell you are using, so YMMV
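To see why the quotes matter: an unquoted & is a shell control operator, so the shell splits the command line at the ampersand, backgrounds the wget with only the first parameter, and runs the rest (sm_key=...) as a separate variable assignment. A quick demonstration, assuming a Bourne-style shell such as bash:

```shell
#!/bin/sh
# The full URL, held in a variable for clarity.
url='http://www.example.com/?sm_command=build&sm_key=kdfs7kj6dgo3sigj34df1'

# Quoted: the URL stays one word, which is exactly what wget/curl receive.
printf 'quoted: %s\n' "$url"

# Unquoted on a command line, i.e.
#   wget http://www.example.com/?sm_command=build&sm_key=kdfs7kj6dgo3sigj34df1
# the shell stops the wget command at '&', so wget only ever sees
# '...?sm_command=build', and 'sm_key=...' runs as its own (no-op) command.
```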

Without quotes, bad request:

david@atl:~$ wget http://www.example.com/?sm_command=build&sm_key=kdfs7kj6dgo3sigj34df1
2013-04-07 01:54:10 (68.1 MB/s) - `index.html?sm_command=build' saved [1111]

With single quotes, successful request:

david@atl:~$ wget 'http://www.example.com/?sm_command=build&sm_key=kdfs7kj6dgo3sigj34df1'
2013-04-07 01:54:24 (99.7 MB/s) - `index.html?sm_command=build&sm_key=kdfs7kj6dgo3sigj34df1' saved [1111]

Without quotes, escaped ampersand, successful request:

david@atl:~$ wget http://www.example.com/?sm_command=build\&sm_key=kdfs7kj6dgo3sigj34df1
2013-04-07 01:57:31 (102 MB/s) - `index.html?sm_command=build&sm_key=kdfs7kj6dgo3sigj34df1.2' saved [1111]