Database Design – How to Store Large Amounts of Session Data

data, database, database-design, MySQL

The current project I'm working on has a requirement to capture data relevant to a session and store it. The purpose is to use this data for analytics: clients would use it to generate graphs, charts, etc.

For each client, 10 data points are captured per minute (10 rows per minute). As per the project requirements, the table would exceed millions of rows within one week.

The current solution is to store the data in a MySQL database. Would this be the proper solution?

Best Answer

Databases can store many (hundreds of) millions of rows, and even dozens of billions of them. You just need to create an appropriate database schema, and you should take care of good indexing.
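As a minimal sketch of what that could look like (the table name, columns, and types below are hypothetical, since the question doesn't describe the actual data points), a composite index on (client, timestamp) typically covers the range scans that charting queries need:

```sql
-- Hypothetical schema for per-session metric samples; the column names
-- and types are assumptions, not taken from the question.
CREATE TABLE session_data (
    id          BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    client_id   INT UNSIGNED    NOT NULL,
    session_id  BIGINT UNSIGNED NOT NULL,
    recorded_at DATETIME        NOT NULL,
    metric      VARCHAR(32)     NOT NULL,
    value       DOUBLE          NOT NULL,
    PRIMARY KEY (id),
    -- Composite index matching the typical analytics access pattern:
    -- "all points for one client over a time range".
    KEY idx_client_time (client_id, recorded_at)
) ENGINE=InnoDB;

-- The kind of aggregate a charting client might run; it is served
-- efficiently by the index above (client_id equality, then time range).
SELECT DATE_FORMAT(recorded_at, '%Y-%m-%d %H:%i') AS minute,
       AVG(value) AS avg_value
FROM session_data
WHERE client_id = 42
  AND recorded_at >= NOW() - INTERVAL 7 DAY
GROUP BY minute
ORDER BY minute;
```

The exact indexes should follow the real query patterns; the point is that the schema and indexing are what decide whether hundreds of millions of rows stay queryable.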

Of course you need enough disk space. This is the most important issue (and you need a careful estimate of the required disk capacity).

BTW (without knowing your actual requirements), guessing a kilobyte per row, your database will almost certainly fit on a single terabyte disk (and more probably in only a few hundred gigabytes). You probably want the indexes to fit in RAM, so you might want a server with perhaps a few dozen gigabytes of it.
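To put rough numbers on that guess (the 100-client count, one-year retention, and the 1 KB/row figure are all assumptions for illustration), you can do the arithmetic right in MySQL:

```sql
-- Back-of-envelope capacity estimate; every input figure here is an
-- assumption: 10 rows/minute/client, 100 clients, 1 year, ~1 KB/row.
SELECT
    10 * 60 * 24                AS rows_per_client_per_day,   -- 14,400
    10 * 60 * 24 * 365          AS rows_per_client_per_year,  -- ~5.3 million
    10 * 60 * 24 * 365 * 100    AS rows_total_per_year,       -- ~526 million
    10 * 60 * 24 * 365 * 100 / POW(1024, 3)
                                AS terabytes_at_1kb_per_row;  -- ~0.49 TB
```

Even under these generous assumptions, a year of data lands around half a terabyte, consistent with the single-disk estimate above.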

You could use MySQL or PostgreSQL, but you could even use SQLite (see this answer). However, if you have multiple clients accessing the same database concurrently, you'd better use a real RDBMS server (e.g. PostgreSQL or MySQL), not SQLite. If you want a lot of simultaneous connections (e.g. hundreds), you may want to dimension your server appropriately (e.g. several disks, or SSDs to improve I/O bandwidth, or more RAM).

(I don't feel that your case involves a large database; I'd call it a medium-sized one, but I am not a DB expert. Notice, however, that PostgreSQL, MySQL, and even SQLite all claim (see this for MySQL, this for PostgreSQL, this for SQLite) to be able to deal with multi-terabyte databases.)
