Static
With a statically linked application, everything that you need to run the application is part of the application itself. It depends on nothing else.
If there is a fatal bug in some dynamically linked library, the application doesn't 'care'. If the shared version of a library it would otherwise use isn't there (someone accidentally removed /lib on a Unix system?), it doesn't care.
It requires no additional installs of libraries to run on any binary compatible platform. You download the app and you can run it.
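You can see the difference for yourself. A sketch, assuming a Linux system (`/bin/ls` is just an arbitrary example binary, and `hello.c` is a hypothetical source file):

```shell
# List the shared libraries a binary needs at run time.
# A statically linked binary would instead report "not a dynamic executable".
ldd /bin/ls

# Building both flavors yourself (requires gcc; -static also needs the
# static versions of the libraries installed, e.g. a glibc-static package):
#   gcc hello.c -o hello-dynamic          # dynamic: small binary, needs libs
#   gcc -static hello.c -o hello-static   # static: large binary, needs nothing
```

The statically linked binary is typically much larger, because it carries its own copy of everything `ldd` would otherwise list.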
There is an entire directory of statically linked applications on every Unix system, /sbin, that are guaranteed to work even if significant parts of the system are otherwise broken or missing. reboot? halt? ifconfig? mount? fsck? ping? Those are likely all in /sbin, so that even if the libraries are missing or corrupt (or not mounted because they're on a network filesystem...) you can still run those commands to get the system working again (and mount that networked filesystem with the libraries you were looking for).
True story: many years ago, back when an 80-megabyte hard disk was big, a sysadmin was looking for some space. Standard compiles included debugging/symbol information, and that took a few kilobytes in each binary. So he found every executable on the system and ran strip on it. Did you know that shared libraries are 'executables' on many systems? Removing the symbol information from them prevents anything from dynamically linking to them... oops. The only things still working were those in /sbin. Fortunately, there was enough there that he was able to mount a disk from another system of the same type and copy the shared libraries back over to the messed-up system...
Dynamic
You want to update a library? No problem. Just put the library in the proper spot and all is well. Yes, that's a bit simplistic, but that's the idea.
```shell
$ ldd /bin/date
    linux-vdso.so.1 => (0x00007fff6ffff000)
    librt.so.1 => /lib64/librt.so.1 (0x00007f54ba710000)
    libc.so.6 => /lib64/libc.so.6 (0x00007f54ba384000)
    libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f54ba167000)
    /lib64/ld-linux-x86-64.so.2 (0x00007f54ba919000)
```
Those are all libraries that are already present on the system, so you don't have to include them in your application. There can be some savings in disk space, because you don't need to ship a full copy of libc or any other library as part of your application.
System patching becomes easier. Instead of needing to push out a patch for everything in /bin because something changed (you need to do this for /sbin), you just push out a new shared library object and update the symbolic link chain in /lib.
Drop-in replacements become possible. Just provide the same API and symbols, drop in the new library, update the symbolic-link chain, and all should be good.
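The symbolic-link chain is easy to demonstrate. A sketch with a hypothetical library "libfoo" (on real systems these links are managed by `ldconfig`, and the files would be actual shared objects rather than empty placeholders):

```shell
# Two versions of a (hypothetical, empty placeholder) library
mkdir -p /tmp/libdemo && cd /tmp/libdemo
touch libfoo.so.1.0.0 libfoo.so.1.0.1

# Applications resolve the soname link at load time
ln -sf libfoo.so.1.0.0 libfoo.so.1
readlink libfoo.so.1                 # currently points at 1.0.0

# The "drop-in" upgrade: repoint one symlink, every app gets the new code
ln -sf libfoo.so.1.0.1 libfoo.so.1
readlink libfoo.so.1                 # now points at 1.0.1
```

No application binary changed; only the link did.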
On a system that runs multiple processes (almost all of them these days... but it was a significant distinction back in the old times), static libraries meant that each running application had a full copy of its libraries taking up memory. Shared libraries mean that they can all just point to the same spot in already-loaded memory.
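You can see this sharing on Linux through the /proc interface: every process that uses a shared object has it mapped into its address space, and the kernel keeps a single copy of those read-only pages in memory.

```shell
# Show the shared objects mapped into this very process's address space
# (Linux-specific; /proc/self refers to the process reading it, i.e. grep).
grep '\.so' /proc/self/maps
```

Running the same command from several processes shows them all mapping the same library files.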
Statically linking an application can be considered to create a derivative work of the libraries. This becomes an issue with GPL software. There are differing opinions on whether dynamically linking a GPL-licensed library creates a derivative work.
Dynamic linking allows for plugins. The application loads the plugins and dynamically calls the functions available to it.
Multiple languages can use the same calling conventions and call into the same dynamic library. Look at the JVM for an example: dynamic linking means that Java, Clojure, Scala, Groovy, etc. can all use the same shared libraries. Alternatively, look at all the different languages that can invoke a .DLL on Windows.
We have implemented several push servers, all of them following the old "socket + certificate" approach, in Java (1.5 - 1.7).
Working with certificates has some disadvantages. For instance, you need one for each environment (test, production, etc.). You have to be methodical about managing them, or it's quite easy to end up with the wrong cert in production, or to forget a renewal (they also expire).
As for the socket, this approach requires opening a specific range of ports in the firewall.
As for the communication protocol itself: you get very little information back after pushing messages, so it's hard to figure out what happened to them. The only way is to retrieve messages from the queue of responses, a queue whose ordering is not guaranteed. Nor is there any guarantee of when APNS will put the responses onto it; it might not happen at all.
Compared to GCM (Google Cloud Messaging, which runs over HTTP), the "socket + cert" approach of APNS is a pain in the ...
My suggestion is to focus on the HTTP/2 + JWT protocol. This is a very common implementation of security in client-server communications; you will find many more references about HTTP/2 and JWT than you will looking for APNS sockets and certs.
Security via JWT is commonly implemented these days, and there is full support from the community.
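To see why JWTs are so much easier to handle than certificates, it helps to look at how little there is to one. A minimal sketch of the structure using openssl, with HS256 for illustration only: APNS provider tokens must actually be signed with ES256 using the .p8 key Apple issues, and the "TEAM_ID" claim and secret here are placeholders.

```shell
# base64url: standard base64, '+/' swapped for '-_', padding stripped
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

# A JWT is just three base64url parts joined with dots:
header=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
claims=$(printf '{"iss":"TEAM_ID","iat":1700000000}' | b64url)   # placeholder claims
sig=$(printf '%s.%s' "$header" "$claims" \
      | openssl dgst -sha256 -hmac "demo-secret" -binary | b64url)

jwt="$header.$claims.$sig"
echo "$jwt"    # this string goes in the Authorization header of each push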
Moreover, if they have planned to drop support for the current implementation, why even dare to try it? Why spend time and money twice?
Be preventive. Implementing the HTTP/2 + JWT approach will save you from further code reviews and refactors. In any case, it's work that has to be done, so better to have it done sooner rather than later.
As for the CleverTap library: well, nobody is stopping you from implementing your own client, suited to your needs and requirements!
This has been our case with our current engine. We discarded all the third-party implementations and built our own. As far as I know, it keeps working perfectly... until Apple drops the service.
(If we haven't yet moved to HTTP/2 + JWT, it's due to time and money.)
There are perhaps alternatives. Google Firebase Cloud Messaging is multi-platform, so you can push messages to Android and iOS devices from the same service. It works over HTTP and API keys (tokens). I suggest you take a look.
Best Answer
One of Apple's criteria for accepting a program is whether or not it makes calls to unsupported Apple APIs (or does other bad stuff). By requiring static linking, they can verify that the software does not make such calls. Allowing dynamic linking would allow any kind of behavior to be added later, which would pretty much invalidate their approval process.
Apple allows dynamic linking in OS X because, well, Macintoshes are real computers, not tablet devices, and the users of real computers expect them to be programmable in this fashion. The market for tablets and phones is quite different from that for desktop and laptop computers. Computers are production devices; users expect to be able to produce products on them, including writing programs that do what they want, how they want. This was never the expectation for tablet devices, which are consumption devices.
The whole point of tablets and the Apple Store was to create a closed environment to protect consumers from pedestrian viruses, and the like (well, and to allow Apple to collect 30% of all software sales made through their store).