I don't know enough about your system, but you need to look at the following:
1-How do you obtain the data, and in what format? Answering this will give you options for how to store it and how to load it initially, if you end up using a database.
2-How do you process this raw data? Answering this will help you figure out the size of the 'active' set, which in turn helps in deciding how to store and load the data. You may find that you don't need the entire input record, only a few of its fields. If most of the fields are unused, you can keep them in separate archived storage.
3-How do you query this data (online/batch, and which criteria are most likely to be used)? Answering this will be the key factor in deciding how to store the data and which parts to keep online versus offline. Oracle, for example, allows you to run SQL against text files without loading them first. This can be a huge time saver, but of course it depends on your scenario.
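To make points 2 and 3 concrete, here is a minimal sketch in Python (file name, field names, and the filter are all hypothetical) of scanning a raw text feed in place, keeping only the handful of 'active' fields, without loading everything into a database first:

```python
import csv

# Hypothetical raw feed: a wide CSV with many columns, of which
# only a few belong to the "active" set; the rest can be archived.
ACTIVE_FIELDS = ("sample_id", "chromosome", "position", "genotype")

def scan_active(path, predicate):
    """Stream the file row by row, apply the query predicate,
    and keep only the needed columns."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if predicate(row):
                yield {k: row[k] for k in ACTIVE_FIELDS}
```

The same idea is what Oracle external tables give you declaratively: the query runs against the flat file, and only the rows and columns you actually need ever reach the active store.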
As to your point:
single individuals variant data could feasibly some day require a million rows of data in a table
I really don't understand how this is possible. If it is accurate, I am not sure how that data would be used. Maybe you need to separate the concept of raw data storage from the question of which parts of the data will actually be used. Once you understand more about how the data will be used, you may be able to cut down the number of rows by aggregation or a similar technique.
In short, much analysis is required before a solution can be found. The guiding principles are:
1-Know your data well
2-Cut down on row size by keeping only the needed columns and linking to offline storage when possible
3-Cut down on total row numbers by aggregation when possible
4-Use table partitioning and avoid excessive indexing
5-Know how the users need to use this data
6-Consider loading data as it arrives
7-You will probably need a star schema (fact and dimension tables) to speed up queries, but we can't tell from the information provided alone
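As a rough illustration of principles 3 and 7, here is a sketch using Python's built-in sqlite3 module. All table and column names are invented for this example, and the fact table stores pre-aggregated counts per region rather than one row per variant, which is how aggregation cuts total row numbers:

```python
import sqlite3

# Minimal star-schema sketch: a fact table keyed by small dimension
# tables, with fact rows pre-aggregated (hypothetical names throughout).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_sample (sample_id INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE dim_region (region_id INTEGER PRIMARY KEY, chromosome TEXT);
CREATE TABLE fact_variant_counts (
    sample_id  INTEGER REFERENCES dim_sample(sample_id),
    region_id  INTEGER REFERENCES dim_region(region_id),
    n_variants INTEGER   -- an aggregate, not one row per variant
);
""")
con.execute("INSERT INTO dim_sample VALUES (1, 'patient-1')")
con.execute("INSERT INTO dim_region VALUES (1, 'chr1'), (2, 'chr2')")
con.executemany("INSERT INTO fact_variant_counts VALUES (?, ?, ?)",
                [(1, 1, 120000), (1, 2, 95000)])

# Typical star-schema query: join the fact to a dimension and aggregate.
total = con.execute("""
    SELECT SUM(n_variants)
    FROM fact_variant_counts f
    JOIN dim_sample s ON s.sample_id = f.sample_id
    WHERE s.label = 'patient-1'
""").fetchone()[0]
```

A "million rows per individual" problem often shrinks dramatically once the fact table holds aggregates at the grain the queries actually need.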
This is an optional one-to-one relationship, i.e. 1 to (0,1). A true identity relationship would be 1 to (1,1).
I find this notation useful for understanding the scale of a relationship. A one-to-many relationship could be 1 to (0,10), 1 to (1,5), or 1 to (1,*). The first number is always 0 (optional) or 1 (mandatory), while the second specifies an upper bound, or * for unlimited/unspecified.
A many-to-many relationship resolves into two one-to-many relationships through the required join table.
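For example, a hypothetical many-to-many between samples and studies, sketched in SQLite (names invented here):

```python
import sqlite3

# The join table carries one one-to-many relationship from each side.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE sample (sample_id INTEGER PRIMARY KEY);
CREATE TABLE study  (study_id  INTEGER PRIMARY KEY);
CREATE TABLE sample_study (          -- the join table
    sample_id INTEGER REFERENCES sample(sample_id),
    study_id  INTEGER REFERENCES study(study_id),
    PRIMARY KEY (sample_id, study_id)
);
""")
con.execute("INSERT INTO sample VALUES (1), (2)")
con.execute("INSERT INTO study VALUES (1), (2)")
con.executemany("INSERT INTO sample_study VALUES (?, ?)",
                [(1, 1), (1, 2), (2, 2)])
n_studies = con.execute(
    "SELECT COUNT(*) FROM sample_study WHERE sample_id = 1").fetchone()[0]
```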
I generally find it is not useful to move optional columns out of the table. In this case, all 10 columns are required, but it is possible to create a record without having the full set of data.
The one case where I have seen identity relationships make sense is inventory-like tables, where there is 'tombstone' data that doesn't change much alongside frequently changing counts. The access rights to the tombstone data are often different as well. As the system scales out, this often becomes a one-to-many relationship, with inventory counts for each location. When the required data lives in only one of the tables sharing the identity, joins are made to that table alone.
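A minimal sketch of that inventory pattern, again with invented names, using Python's sqlite3: the two tables share the same primary key, and a query that only needs the count never touches the tombstone table.

```python
import sqlite3

# Identity (1 to (1,1)) split: slow-changing tombstone data and a
# fast-changing count share the same key (hypothetical names).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE item (
    item_id INTEGER PRIMARY KEY,
    sku TEXT,
    description TEXT            -- tombstone data, rarely updated
);
CREATE TABLE item_count (
    item_id INTEGER PRIMARY KEY REFERENCES item(item_id),
    on_hand INTEGER             -- updated constantly
);
""")
con.execute("INSERT INTO item VALUES (1, 'SKU-1', 'widget')")
con.execute("INSERT INTO item_count VALUES (1, 42)")

# The hot-path query joins to neither table it doesn't need:
on_hand = con.execute(
    "SELECT on_hand FROM item_count WHERE item_id = 1").fetchone()[0]
```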
SQL was not designed for this scenario, but document and key-value stores were. Have you considered using one of those?
For example, MongoDB has a C# driver with LINQ support (although I doubt you will actually need that). You could simply store all the "dynamic data" in one MongoDB document per SQL entity.
Another alternative would be Redis, simply mapping unique entity identifiers to hashes that contain the corresponding key-value data.
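To illustrate the shape of that data model, here is a sketch where a plain Python dict stands in for the Redis server (this is not the Redis client API; the entity id and field names are made up). With real Redis the same operations are the HSET and HGETALL commands:

```python
# Stand-in for a Redis instance: entity id -> hash of dynamic fields.
store = {}

def hset(entity_id, field, value):
    """Set one field in the entity's hash (like Redis HSET)."""
    store.setdefault(entity_id, {})[field] = value

def hgetall(entity_id):
    """Return the whole hash for an entity (like Redis HGETALL)."""
    return store.get(entity_id, {})

# Each SQL entity keeps its fixed columns in SQL; the open-ended
# attributes go into the hash keyed by the entity's identifier.
hset("entity:42", "eye_color", "brown")
hset("entity:42", "height_cm", "180")
```

The point is the mapping, not the implementation: one hash per entity means adding a new dynamic attribute never requires a schema change.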