If you ever need to move the application independently of the user database, then you need a separate database for the application (in whatever form that takes), so that it can travel with the application while the user data stays intact in its original location.
It therefore follows that, if the application database is updated periodically by the vendor (that's you), it needs to be kept separate from the user's database, so that you can distribute changes to the application database without affecting the user database.
Now, if you need to add fields or tables to the user database, that's a different story. For that, you need a module that can accept as input a table of changes from the application database, to be applied to the user database. Some programs do this by "converting" the user database to the new format.
Data conversion can be done by using SQL DDL to apply the field and table updates to the user's database, in a way that doesn't negatively affect the user's data. In some advanced scenarios, data transformations might actually take place: normalization or denormalization, for example.
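As an illustration, here is a minimal sketch of such a migration module in Python with sqlite3. The schema_changes table, the applied_changes bookkeeping table, and the file names are all hypothetical; the idea is simply that the application database ships a versioned list of DDL statements, each applied once to the user database:

```python
import sqlite3

def apply_schema_updates(app_db_path: str, user_db_path: str) -> None:
    """Apply pending DDL changes shipped in the application database
    to the user database, without touching existing user data."""
    app_db = sqlite3.connect(app_db_path)
    user_db = sqlite3.connect(user_db_path)
    try:
        # Track which change versions have already been applied.
        user_db.execute(
            "CREATE TABLE IF NOT EXISTS applied_changes (version INTEGER PRIMARY KEY)"
        )
        applied = {row[0] for row in user_db.execute("SELECT version FROM applied_changes")}

        # 'schema_changes' is a hypothetical table in the application
        # database: one row per versioned DDL statement.
        for version, ddl in app_db.execute(
            "SELECT version, ddl FROM schema_changes ORDER BY version"
        ):
            if version in applied:
                continue
            user_db.execute(ddl)  # e.g. "ALTER TABLE orders ADD COLUMN notes TEXT"
            user_db.execute("INSERT INTO applied_changes (version) VALUES (?)", (version,))
        user_db.commit()
    finally:
        app_db.close()
        user_db.close()
```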
If you need to provide users with the ability to do a data transfer, you should use some other mechanism such as a communications conduit, or an import/export file containing the data to be transferred (perhaps in XML).
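For the import/export route, a hedged sketch in Python: dump the rows to be transferred into an XML file that the receiving side can import. The table name and element names are illustrative only.

```python
import sqlite3
import xml.etree.ElementTree as ET

def export_table_to_xml(db_path: str, table: str, out_path: str) -> None:
    """Export one table's rows to an XML file for transfer."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    root = ET.Element("export", table=table)
    # Table name is interpolated directly; only pass trusted names here.
    for row in conn.execute(f"SELECT * FROM {table}"):
        record = ET.SubElement(root, "record")
        for key in row.keys():
            field = ET.SubElement(record, key)
            field.text = "" if row[key] is None else str(row[key])
    ET.ElementTree(root).write(out_path, encoding="utf-8", xml_declaration=True)
    conn.close()
```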
First of all, having the FTP push/pull mechanism separated from the core processing sounds like a good design to me, since it allows you to test the core processing in isolation and to plug the parts together differently if needed. This is a good example of separation of concerns.
"Every once in a while we will have problems with these processes not finding the file needed"
Before reaching for a solution that may well cause more problems than it solves, make sure you know the root cause of the problem. Is it because job A (pulling the data) puts the file in a wrong folder where job B (pushing the data) does not expect it? Then you need a reliable way to pass the file path from job A to job B.
Or is it because job B sometimes starts too early, before the output of job A has arrived completely? Then you need a better mechanism to trigger the start of job B. Can you put A and B in a command script which makes sure B only starts when A is complete? Maybe you have to implement a polling mechanism in job B which makes sure it does not start its processing until the output of job A is available. Maybe you have to implement a retry loop around job A to make sure it tries to download the file again when the first attempt has failed.

It may be a good idea to let the FTP process download all data into a temporary file first, and rename that file as a final step when it is complete. Renaming is an atomic operation on most file systems, so the file only becomes visible to the following processes when it is ready for further processing. Another possible technique is to work with "lock files", prohibiting shared access to a file "X" as long as "X.lock" exists. A sketch of the rename-plus-polling combination follows below.
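A minimal sketch of those two techniques in Python. The exchange folder and the download callable are placeholders for whatever your FTP job actually does; the point is the download-to-temp-then-rename step on the producer side and the wait loop on the consumer side.

```python
import os
import time
from pathlib import Path

INBOX = Path("/data/inbox")  # hypothetical exchange folder

def publish(tmp_name: str, final_name: str, download) -> None:
    """Producer side: download into a temp file, then rename.
    The consumer never sees a half-written file."""
    tmp_path = INBOX / tmp_name
    final_path = INBOX / final_name
    with open(tmp_path, "wb") as f:
        download(f)  # e.g. ftplib's retrbinary writing into f
    os.replace(tmp_path, final_path)  # atomic on the same filesystem

def wait_for_file(final_name: str, timeout: float = 600.0, poll: float = 5.0) -> Path:
    """Consumer side: poll until the file exists (i.e. has been renamed)."""
    final_path = INBOX / final_name
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if final_path.exists():
            return final_path
        time.sleep(poll)
    raise TimeoutError(f"{final_path} did not appear within {timeout} seconds")
```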
So, IMHO the architecture you described is not brittle per se, but you have to provide a reasonable amount of synchronization and failure tolerance around your processes.
The simple way to determine whether the data has been altered is to put a timestamp column on each relevant table, and a trigger on insert and update that sets this column to the current time (NOW() in many SQL dialects). If you want to make searching for new data fast, put an index on this column. Then store, in another table, the last time you checked for updates.

When you check for updates, read the last update time, then run a query on each table for all rows where the timestamp > the last update time. The tricky part here is when you reset the last update time. If you reset it immediately after you read it, you could get notified of some changes twice, whereas if you reset it after you're done processing updates, you could miss a few. You'll have to decide which is more acceptable based on your business requirements. (Or, if you want to make sure not to miss any, you'll need one row in the table that records the last update time for each table you're monitoring this way.)
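A hedged sketch of this bookkeeping in Python with SQLite; the table and column names are made up, and SQLite's CURRENT_TIMESTAMP stands in for NOW():

```python
import sqlite3

conn = sqlite3.connect("example.db")
conn.executescript("""
    CREATE TABLE IF NOT EXISTS orders (
        id INTEGER PRIMARY KEY,
        payload TEXT,
        updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
    );
    CREATE INDEX IF NOT EXISTS idx_orders_updated ON orders(updated_at);

    -- Keep updated_at current on every UPDATE (INSERT is covered by the DEFAULT).
    CREATE TRIGGER IF NOT EXISTS orders_touch AFTER UPDATE ON orders
    BEGIN
        UPDATE orders SET updated_at = CURRENT_TIMESTAMP WHERE id = NEW.id;
    END;

    -- One row per monitored table, recording when we last checked it.
    CREATE TABLE IF NOT EXISTS last_checked (
        table_name TEXT PRIMARY KEY,
        checked_at TIMESTAMP
    );
""")

def changed_rows_since_last_check(conn: sqlite3.Connection):
    """Fetch rows changed since the last check, then reset the marker."""
    row = conn.execute(
        "SELECT checked_at FROM last_checked WHERE table_name = 'orders'"
    ).fetchone()
    since = row[0] if row else "1970-01-01 00:00:00"
    changed = conn.execute(
        "SELECT * FROM orders WHERE updated_at > ?", (since,)
    ).fetchall()
    # Resetting before processing may report some changes twice;
    # resetting after processing may miss some. Pick per your requirements.
    conn.execute(
        "INSERT INTO last_checked(table_name, checked_at) "
        "VALUES ('orders', CURRENT_TIMESTAMP) "
        "ON CONFLICT(table_name) DO UPDATE SET checked_at = excluded.checked_at"
    )
    conn.commit()
    return changed
```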
This isn't exactly the database notifying you of what's changed, but it's getting the DB to do the bookkeeping for you, which greatly simplifies the work you have to do.
Doing it on a file system follows the same basic principles, but it's a bit simpler because the file system has a timestamp built in. You just need a recursive scan of the file system, filtering out anything whose last modification time is before the time you're looking for.
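For example, a minimal recursive scan in Python (the root path is a placeholder):

```python
import time
from pathlib import Path

def files_changed_since(root: str, since_epoch: float) -> list[Path]:
    """Return all files under 'root' modified after 'since_epoch'
    (seconds since the Unix epoch, as returned by time.time())."""
    return [
        path
        for path in Path(root).rglob("*")
        if path.is_file() and path.stat().st_mtime > since_epoch
    ]

# Example: everything changed in the last hour.
recent = files_changed_since("/data/user_files", time.time() - 3600)
```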