There are at least three things you could call a "loader" in Linux. One is the "ld" step of a compile, which links various object (.o) and archive (.a) files into a single executable. "ld" also checks shared object (.so) files to see whether the dynamic linker will be able to handle them correctly.
The dynamic linker (see man ld.so for details) is run by the Linux kernel as part of starting up an ELF-format executable. It reads linking information from the ELF file and then, at minimum, maps in the shared object files (.so suffix) the executable depends on. The details are rather involved, and at least sometimes involve updating the GOT (Global Offset Table), a section in memory that maps a compiled-in branch destination to the actual, as-loaded address of the library code. See http://netwinder.osuosl.org/users/p/patb/public_html/elf_relocs.html for a lot of details.
There's also a piece of the Linux kernel that reads in executable files, and it too is sometimes called a "loader". When you configure a Linux kernel, you can choose to include or exclude the loaders for some of these executable file formats (a.out, mainly). I had the source of Linux 2.6.20.9 lying about, and I found loaders for different executable formats in linux-2.6.20.9/fs/: binfmt_aout.c, binfmt_elf.c, binfmt_script.c, and a few others.
I know next to nothing of how Windows does this same process, but it must do most or all of the same things.
I don't know about Python, but I've moved Java applications from Windows to Linux and vice versa. Java makes the "write once, run anywhere" claim, which may not be 100% true, but with very little work I was able to make it true enough (basically everything works great on Linux, with a few issues on Windows).
I'll use W and L for Windows and Linux:
W: file and folder names are case-insensitive. L: case-sensitive. Test file-name capitalization carefully on Linux, because Windows hides these issues.
Windows has a more granular file-permissions system that lets you combine various groups and permissions. Linux has a simpler system: each file or folder has one owner and one group, with read, write, and execute bits for the owner, the group, and everyone else. There are some other wrinkles; for example, the setgid bit on a folder makes new files inherit the folder's group (closer to the way permissions cascade in Windows), rather than being set to the primary group of the user that created each file, as happens by default on Linux. These issues mostly come into play when zipping and unzipping files, for instance during an install.
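To see the Linux model from Java, here's a minimal sketch (the class and method names are illustrative, not from any standard) using the java.nio.file POSIX-permissions API. It works on Linux but throws UnsupportedOperationException on a plain Windows filesystem, which is itself a good reminder of the difference:

```java
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Set;
import java.nio.file.attribute.PosixFilePermission;
import java.nio.file.attribute.PosixFilePermissions;

public class PermDemo {
    // Apply Linux-style user/group/other permissions to a path and
    // return them back in the familiar ls-style string form.
    static String setAndReadPerms(Path p, String mode) throws Exception {
        Set<PosixFilePermission> perms = PosixFilePermissions.fromString(mode);
        Files.setPosixFilePermissions(p, perms);   // roughly: chmod
        return PosixFilePermissions.toString(Files.getPosixFilePermissions(p));
    }

    public static void main(String[] args) throws Exception {
        Path p = Files.createTempFile("perm-demo", ".txt");
        // One owner, one group, rwx bits for user/group/other:
        System.out.println(setAndReadPerms(p, "rwxr-x---")); // prints rwxr-x---
        Files.delete(p);
    }
}
```

This is also a reasonable way to make an install script set sane permissions after unzipping, instead of relying on whatever the archive preserved.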
W: drives are mounted in the root folder as letters. L: drives can be mounted anywhere, as any path, and a single file can appear in multiple places in your file system (symlinks).
Folder separator: W: \ L: /
Path separator: W: ; L: :
End of line in a text file: W: \r\n L: \n
Default character set: W: windows-1252 (often loosely called ISO-8859-1) on Western-locale systems L: UTF-8
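In Java, the last four differences are exposed as platform constants, so code that queries them instead of hard-coding "\", ";", or "\r\n" ports cleanly. A small sketch (class name is illustrative):

```java
import java.io.File;
import java.nio.charset.Charset;

public class PlatformDemo {
    public static void main(String[] args) {
        // Folder separator: "\" on Windows, "/" on Linux
        System.out.println("separator:     " + File.separator);
        // Path-list separator (PATH, classpath): ";" on Windows, ":" on Linux
        System.out.println("pathSeparator: " + File.pathSeparator);
        // End of line: "\r\n" on Windows, "\n" on Linux (shown escaped)
        String eol = System.lineSeparator().replace("\r", "\\r").replace("\n", "\\n");
        System.out.println("lineSeparator: " + eol);
        // Default charset: locale-dependent on Windows, UTF-8 on typical Linux
        System.out.println("charset:       " + Charset.defaultCharset());
    }
}
```

Better still, build paths with java.nio.file.Paths.get(...) and pass an explicit Charset to your readers and writers, and the first and last differences stop mattering at all.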
You need to know which Linux distribution you are targeting. Two areas of difference are how System V init scripts are handled and how super-user tasks are performed (sudo vs. su). You also mentioned an install script: apt and yum are popular package managers, but you need to work with the tool your distribution uses: yum on Red Hat, apt on Debian, etc.
This is why you need a Linux machine for testing, whether virtual or physical, and it must run the exact distribution you are targeting. Have someone set up a dual-boot on an old server or something. I also strongly recommend Cygwin for every developer. Its file permissions aren't quite the same as Linux's, and although you can set it to be case-sensitive (it's more useful case-insensitive on Windows), it makes a pretty reasonable test bed.
It doesn't hurt you to know both (Windows and Linux) and once you do, you can make an informed choice about what works best for you. I was a Windows-only developer for the first 10 years of my career. I've been almost purely Linux for the last 4-6 years, so some of my Windows information might be old. I still run Windows in a virtual machine to do testing on Internet Explorer.
One thing you will get used to quickly on Linux is that you can solve most problems by Googling the error message. 90% of command-line tools tell you how they work if you type "man <command>". If you really need it, most source code is easily available, depending on the distribution. When I solve a problem on Linux, I feel like I learned something about how computers really work. On Windows, I feel like I just keep blindly trying things until something works, and when I find the solution, I'm lucky if I remember everything I tried, let alone know what it all means.
So I'd encourage you to spend some of your own time learning Linux and this job might be a way to get paid for some portion of that learning. But don't mistake temptation for opportunity. If the time frame is truly short, or money is tight, you may have to say that you have to deploy to what you know (Windows) or not take the job.
Best Answer
It depends heavily on your approach.
My preferred way is to use CMake to create project files or makefiles based on the current platform. For example, under Linux this would create classic makefiles, while under Windows it would create a VS project. You still only have to maintain the CMake source file(s).
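As a minimal sketch of that setup (the project and file names are illustrative, not from the original post), a single CMakeLists.txt can drive both platforms, with any platform-specific bits guarded inside it:

```cmake
cmake_minimum_required(VERSION 3.10)
project(gpio_app C)

add_executable(gpio_app src/main.c)

# Platform differences live in the build description, not in
# separately maintained project files.
if(UNIX)
    target_compile_definitions(gpio_app PRIVATE USE_SYSFS_GPIO=1)
endif()
```

On Linux, `cmake -G "Unix Makefiles" .` generates classic makefiles; on Windows, a Visual Studio generator (e.g. `cmake -G "Visual Studio 17 2022" .`) produces a solution you can open in the IDE.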
In your case I'd have a look at the Linux Tools for VS extension, considering you can't really debug the GPIO pins on your Windows machine. Further details are in the linked blog post, since this is a bit lengthy to explain and setup. The usage is rather trivial though.
This essentially turns Visual Studio into a remote IDE. When you issue a build command or start a debug run, it uploads modified files over SSH, runs the build commands, and then connects to the GDB server to perform debugging, all with the familiar tools and features of Visual Studio (e.g. visual debugging, watch lists, etc.).