Unit tests are just one level of the hierarchy of automated tests. Unit tests exist to verify that the code you (the developer) have actually written behaves the way you thought it should when you wrote it.
There are two caveats inherent in unit testing. First, coverage is not verification. You may execute every line of code in your codebase via one or more unit tests, but if you never assert, somewhere, that code which needed to do something actually did it, then the test passes whether or not the code does that key thing, as long as no exception is thrown.
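To illustrate, here is a minimal, purely hypothetical pytest-style sketch. Both tests give `add` 100% line coverage, but only the second one asserts anything, so only it catches the bug:

```python
def add(a, b):
    return a - b  # bug: should be a + b

def test_add_executes():
    add(2, 3)  # exercises every line, asserts nothing: passes despite the bug

def test_add_is_correct():
    assert add(2, 3) == 5  # fails, exposing the bug
```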
Second, unit tests by definition exercise small, isolated pieces of your code (units), making sure each piece behaves the way the developer thinks it should. They don't test that these units play nicely with each other (that's an "integration" test), nor do they assert that the code, at any level, behaves the way the client thinks it should (that's an "acceptance" and/or an "end-to-end" test).
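Here is a hypothetical sketch of that distinction (the functions and the dollars-versus-cents mismatch are invented for illustration). Each unit test passes because each piece does what its author intended; only a test that combines the pieces reveals that they don't fit together:

```python
def parse_price(text):
    return float(text)  # author's intent: a price in dollars

def apply_discount(amount, pct):
    return amount * (100 - pct) // 100  # author's intent: an amount in cents

def test_parse_price():       # unit test: passes
    assert parse_price("19.99") == 19.99

def test_apply_discount():    # unit test: passes
    assert apply_discount(2000, 10) == 1800

def test_discounted_price():  # integration test: fails on the unit mismatch
    assert apply_discount(parse_price("19.99"), 10) == 17.99
```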
This second problem is your main issue in the case in point. You have 100% unit test coverage but no integration testing, which would prove that the little pieces are put together the right way to do the larger job you expect. You also seem to have no automated acceptance testing, which approaches the entire program from the top down, from the perspective of an end user. These are the levels of automated testing that will identify your "missing" code by its failure to satisfy acceptance criteria.
No, it's not, for two reasons:
Speed
Commits should be fast. A commit which takes 500 ms, for example, is too slow and will encourage developers to commit less often. Given that any project larger than a Hello World will have dozens or hundreds of tests, running them all during pre-commit takes too much time.
Of course, things get worse for larger projects with thousands of tests which run for minutes on a distributed architecture, or weeks or months on a single machine.
The worst part is that there is not much you can do to make it faster. A small Python project with, say, a hundred unit tests takes at least a second to run on an average server, and often much longer. A C# application will average four to five seconds, because of the compile time.
From that point, you can either pay an extra $10 000 for a better server, which will reduce the time, but not by much, or run the tests across multiple servers, which for a small project will only slow things down.
Both approaches pay off when you have thousands of tests (as well as functional, system and integration tests), allowing you to run them in a matter of minutes instead of weeks, but they won't help you on small-scale projects.
What you can do instead is to:
Encourage developers to run the tests strongly related to the code they modified locally before committing. They can't realistically run thousands of unit tests, but they can run five to ten of them (a sketch of this appears after the list).
Make sure that finding relevant tests and running them is actually easy (and fast). Visual Studio, for example, is able to detect which tests may be affected by changes done since the last run. Other IDEs/platforms/languages/frameworks may have similar functionality.
Keep the commit as fast as possible. Enforcing style rules is OK, because this is often the only place to do it, and because such checks are usually amazingly fast. Doing static analysis is OK as long as it stays fast, which is rarely the case. Running unit tests is not OK.
Run unit tests on your Continuous Integration server.
Make sure developers are informed automatically when they break the build (or when unit tests fail, which is practically the same thing if you consider the compiler a tool which checks for some of the possible mistakes you can introduce into your code).
For example, going to a web page to check the last builds is not a solution. They should be informed automatically. Showing a popup or sending an SMS are two examples of how they may be informed.
Make sure developers understand that breaking the build (or failing regression tests) is not OK, and that as soon as it happens, their top priority is to fix it. It doesn't matter whether they are working on a high-priority feature their boss wants shipped tomorrow: they broke the build, so they fix it.
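As a sketch of the first point in this list, the script below maps locally modified files to their test files and hands just those to pytest. The src/tests layout and the test_<module>.py naming are assumptions, not a standard; plugins such as pytest-testmon automate this kind of selection:

```python
#!/usr/bin/env python3
"""Run only the tests related to locally modified files.

Assumes the convention that src/foo.py is covered by tests/test_foo.py;
adjust the mapping to your own project layout.
"""
import subprocess
import sys
from pathlib import Path

def changed_python_files():
    # Python files modified since the last commit (staged and unstaged).
    out = subprocess.run(
        ["git", "diff", "--name-only", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [Path(p) for p in out.splitlines() if p.endswith(".py")]

def related_tests(files):
    tests = set()
    for f in files:
        candidate = Path("tests") / f"test_{f.stem}.py"
        if candidate.exists():
            tests.add(str(candidate))
    return sorted(tests)

if __name__ == "__main__":
    tests = related_tests(changed_python_files())
    if not tests:
        print("No related tests found; nothing to run.")
        sys.exit(0)
    # pytest exits non-zero if any selected test fails.
    sys.exit(subprocess.run(["pytest", *tests]).returncode)
```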
Security
The server which hosts the repository shouldn't run custom code such as unit tests, especially for security reasons. Those reasons were already explained in "CI runner on same server of GitLab?"
If, on the other hand, your idea is to launch a process on the build server from the pre-commit hook, then it will slow the commits down even more.
Best Answer
Run your unit tests on every branch. How else do you know your code is actually working as expected? The same goes for any other tests you may have.
You say it yourself right here, "If they only run on the develop branch and a feature branch going on for long, I am afraid that the developer is pushing changes to his feature branch which would actually fail the build".
The latter problem you're hinting at is up to the developer to address. It is their responsibility to update the tests alongside the features they are developing.
On top of all that, investing in actual continuous integration, as suggested by your reference model, will alleviate some of the pain of feature branches straying too far from the development branch or from other feature branches.