You should never use multiple columns to hold more than one of the same item. This is bad database design and will be a major pain to use (try to write a query that will fetch and display all of a user's photos, or delete one photo, and you will see what I mean).
Another issue is that you might want to store other info about each photo, such as when it was uploaded and what format it is. Instead of 20 columns, suddenly you might have 60 or 80 columns in your table. Ugh!
Instead, have a table of photos that is separate from the table of users, linked to the user table by user ID. You can enforce the 20-photo limit with a table constraint, or simply by checking the number of photos programmatically before you allow the upload (see the count query below).
Table user:

    USER_ID (primary key) | FIRST | LAST  | ETC.
    ----------------------+-------+-------+-------------------------------------
    12345                 | John  | Smith | ... any other user-specific columns
Table photo:

    PHOTO_ID (primary key) | USER_ID (foreign key) | PHOTO  | FORMAT | DATE       | ETC.
    -----------------------+-----------------------+--------+--------+------------+----------------------------------
    13516                  | 12345                 | <data> | JPG    | 2013-05-01 | ... other photo-specific columns
If you set the foreign key on photo to on delete cascade, a user's photos will be deleted automatically when you delete the user.
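As a sketch in SQL (the types are illustrative; adjust them for your database, and note that user and date are reserved words in some systems):

    create table user (
        user_id int primary key,
        first   varchar(50),
        last    varchar(50)
        -- any other user-specific columns
    );

    create table photo (
        photo_id int primary key,
        user_id  int not null,
        photo    blob,           -- or a file name; see the update below
        format   varchar(10),
        date     date,
        -- any other photo-specific columns
        foreign key (user_id) references user(user_id) on delete cascade
    );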
See this Stack Overflow question for info on a table constraint to limit the number of photos per user.
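The programmatic check mentioned above is just a count before the insert, for example:

    select count(*) from photo
    where user_id = 12345;

Refuse the upload if the result is already 20. (Without a real constraint, two simultaneous uploads can race past the limit, so wrap the check and the insert in a transaction if that matters to you.)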
Getting the photos for a user uses a simple query:
    select * from photo
    where user_id = 12345;
If you need photos and user information like the name, use a join:
    select first, last, photo from photo p
    inner join user u on p.user_id = u.user_id;
Update: I do agree with Pieter B that, in most cases, you are probably better off storing the images separately in the file system and having the photo table store just the file name of each image. This will make it a lot easier to, say, display the images on a web page. But you will also need extra code to keep the directory in sync with the table: if you delete a user, you have to go delete all of their image files yourself.
The key to connecting to an MS SQL Server from arbitrary places is ODBC. You are looking for a language that can easily read a CSV and insert the data back across ODBC.
There are several ways to make that connection. If you are a Java shop, you can go get the JDBC driver for SQL Server and connect to the database that way. Have that .jar on your classpath and connect away. This is probably the option to take if you have people on high dictating technologies (Java is a fairly safe one, and you can always find or hire a Java programmer).
Me? I'm a Perl guy at heart. Crunching one data file and exporting it into another is a classic Perl problem domain.
With Perl, one would get DBI (docs) for the interface to a database layer, then DBD::CSV (tutorial) to read the data from the CSV file as if it were a database itself, and then DBD::ODBC to write out to an ODBC connection. Installing Perl modules through CPAN is fairly easy (though I admit I say that after having done it many, many times).
This could be done with Text::CSV instead. However, there is a certain elegance in the DBD approach of writing a program that reads from one arbitrary database, does any appropriate transforms on the data, and writes to another arbitrary database. If done right, when you find yourself wishing to copy data from some other database at some point, it becomes a fairly minor change to the code (install the appropriate DBD driver for that database and read the data).
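A minimal sketch of that approach; the file name (data.csv), the DSN (mssql), the credentials, and the column names are all made up for illustration:

    #!/usr/bin/perl
    use strict;
    use warnings;
    use DBI;

    # Read the CSV as if it were a database table; the first line of
    # data.csv supplies the column names (DBD::CSV's default).
    my $csv = DBI->connect('dbi:CSV:', undef, undef,
        { f_dir => '.', f_ext => '.csv/r', RaiseError => 1 });

    # Write across ODBC; "mssql" is a DSN you have configured beforehand.
    my $db = DBI->connect('dbi:ODBC:DSN=mssql', 'username', 'password',
        { RaiseError => 1, AutoCommit => 0 });

    my $read  = $csv->prepare('select id, name, amount from data');
    my $write = $db->prepare(
        'insert into imported_data (id, name, amount) values (?, ?, ?)');

    $read->execute;
    while (my @row = $read->fetchrow_array) {
        # Any appropriate transforms on the data go here.
        $write->execute(@row);
    }
    $db->commit;

    $csv->disconnect;
    $db->disconnect;

Swap the dbi:CSV: connection for, say, dbi:Pg: or dbi:mysql: and the same program copies from a live database instead of a file.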
Taking a completely different approach, one that bends more toward the sysadmin world than the programmer's domain, you could move the data from the Linux machine to the Windows machine on a regular basis.
- You could put the files on a Samba share and mount it from Windows.
- Similar to the option above: make the file available on a web server, then fetch and process it on Windows.
- With the appropriate packages, one could mount a Windows share on Linux, something along the lines of

      mount -t cifs -o guest,uid=client_user,gid=users //192.168.1.100/share /path_to/mount

  or, as a line in /etc/fstab:

      //192.168.44.100/share /path_to/mount cifs guest,_netdev 0 0
- Instead of sharing Linux to Windows or Windows to Linux, a NAS (network-attached storage) device is an easy way to have a single server (neither Linux nor Windows) that serves files to both.
- Set up an FTP server on Linux and have a scheduled job on Windows pull the file.
- Have an automated job on Linux (cron) FTP the file to Windows (this assumes you've set up an FTP server on some Windows machine).
- Create a web app on Windows that can insert a CSV file into a database, and call this web app via curl from Linux (see the one-liner below).
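That curl call can be a one-liner; the host, path, and form field name here are all hypothetical:

    curl -F "csv=@/path/to/data.csv" http://windows-host/import

The -F flag posts the file as a multipart form upload, which most web frameworks can read as an ordinary file upload.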
These options take a greater degree of synchronization between the machines. Making the file available requires that both machines can reach each other, which may be difficult if one is, say, in the DMZ and the other is not. Having two sets of programs do the work on different machines (copy the file, process the file) requires that the clocks be reasonably in sync and that multiple sets of scheduled processes run reliably.
CQRS is a good angle to follow for lots of things, but I'm not sure it is the right one here.
What they are really proposing is creating a service bus of some sort. The general concept is that the live transaction gets written to a persistent message queue, which then gets processed and pushed to the appropriate services. You might want to look at one of the established service bus frameworks [presuming .NET b/c SQL Server tag] to start with. There are lots of options out there, but the proposed model is solid. There are lots of landmines in this space; rolling your own is definitely not recommended.
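To make the concept concrete, here is a toy sketch of a durable queue in T-SQL; it only illustrates the shape of the idea, and a real service bus adds delivery guarantees, retries, and fan-out that this lacks:

    create table message_queue (
        message_id   bigint identity primary key,
        payload      nvarchar(max) not null,
        enqueued_at  datetime2 not null default sysutcdatetime(),
        processed_at datetime2 null
    );

    -- the live transaction appends its event and commits immediately
    insert into message_queue (payload)
    values ('{"order_id": 42, "event": "created"}');

    -- a separate worker claims and handles unprocessed messages later
    update top (1) message_queue
    set processed_at = sysutcdatetime()
    output inserted.message_id, inserted.payload
    where processed_at is null;

The live write stays cheap and durable; the slow or failure-prone processing happens out of band, which is exactly the decoupling the bus buys you.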
The other angle here is measurability and instrumentation. You can waste a lot of time guessing why things crashed; knowing why they crashed is invaluable.