To elaborate on RichardOD's answer, you generally have three options when dealing with subtyping, and which you choose depends on what you need to do with the data in question.
The first option is the one you're currently using: keep all columns related to the different types in one table, with flags and nulls indicating which type a given record is. It's the simplest way to manage subtyping, and it generally works well when you have only a few types or when the types aren't very different. In your case, though, it sounds like the types vary quite a bit.
The second option is to keep a central table that contains all of the columns common to the subtypes, with one-to-one relationships to other tables that contain the type-specific details of each subtype.
The third option is to not think of the different types as subtypes at all and just keep all the types' records in separate tables. So you'd have no common table between the types that keeps the common data, and each table would have some columns that are repeated across tables.
Now, each option has its place. You'd use the first option when there aren't many differences between the types. You'd use the second option if you need to manipulate the common fields independently of the type-specific fields; for example, if you wanted to list all sports games in a big grid with general information, and then let users click through to see the type-specific details of a game. You'd use the third option when the types aren't really related at all and you're only storing them together out of convenience; dissimilar schemas shouldn't be merged just because they happen to share a few fields.
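To make the second option concrete, here's a minimal sketch of the common-table-plus-detail-tables layout. All table and column names (game, football_game, tennis_game, and their columns) are invented for illustration:

```sql
-- Columns common to every subtype live in one central table.
CREATE TABLE game (
    game_id   integer PRIMARY KEY,
    game_date date NOT NULL,
    venue     varchar(100),
    game_type varchar(20) NOT NULL  -- discriminator: 'football', 'tennis', ...
);

-- Type-specific columns live in 1:1 detail tables.
-- Sharing the primary key with game enforces at most one detail row per game.
CREATE TABLE football_game (
    game_id    integer PRIMARY KEY REFERENCES game (game_id),
    home_score integer,
    away_score integer
);

CREATE TABLE tennis_game (
    game_id     integer PRIMARY KEY REFERENCES game (game_id),
    sets_played integer,
    tiebreak    boolean
);
```

With this layout, listing all games in a grid touches only the game table, while drilling into one game joins its single detail table.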
So think about what you need to do with the data and how it fits into the three options and decide for yourself which is best. If you can't decide, update your question with the details about how you plan to use the data and I or someone else should be able to help you more.
Best Answer
There are a few things to consider here: does the set of properties change often, or do users need to define new properties on the fly? Are most of the properties inapplicable (i.e. NULL) for most of the rows?

If any of these are true, you might think about a properties-store approach like EAV, hstore, json fields, xml fields, etc.
If not - if you have a fairly static list of properties where most of them make sense for most of the rows - then there's not really a problem with having them as 60 individual columns. It'll be easier to add indexes for commonly searched-for sets of attributes, including partial and composite indexes, and searches - particularly those that filter on many different attributes - will be much faster.
See also: Database design - should I use 30 columns or 1 column with all data in form of JSON/XML?
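If you do go the properties-store route, one common shape - sketched here for PostgreSQL with jsonb, and with invented table and column names - is a fixed set of core columns plus a single document column for the sparse, varying attributes:

```sql
-- Core, always-present columns stay as ordinary relational columns;
-- sparse or user-defined attributes go into one jsonb document.
CREATE TABLE item (
    item_id    integer PRIMARY KEY,
    name       varchar(100) NOT NULL,
    properties jsonb NOT NULL DEFAULT '{}'
);

-- A GIN index lets you search inside the document, e.g.
--   SELECT * FROM item WHERE properties @> '{"colour": "red"}';
CREATE INDEX item_properties_idx ON item USING gin (properties);
```

Note the trade-off the answer describes: searches inside the document are generally less index-friendly and slower than searches on dedicated, individually indexed columns.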
There's also a compromise option available to you: a main table for the most important details you look up a lot, plus side-tables for logical groupings of attributes. Giving each side-table an integer primary key that's also a foreign key to the main table means you have an enforced 1:1 (optional) relationship between the two tables. This approach can be useful if you have a few logical groupings of attributes that you can cluster into side-tables.

I'd also be surprised if a little more thought didn't reveal things that do make sense to normalize. Do you have year7_blah, year8_blah, year9_blah etc. columns? If so: great candidates for normalization.
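The compromise above can be sketched roughly as follows; every name here (student, student_medical, student_year_blah, and their columns) is invented for illustration:

```sql
-- Main table: the details you look up a lot.
CREATE TABLE student (
    student_id integer PRIMARY KEY,
    name       varchar(100) NOT NULL
);

-- Side-table: its primary key is also a foreign key to the main table,
-- which enforces an optional 1:1 relationship.
CREATE TABLE student_medical (
    student_id integer PRIMARY KEY REFERENCES student (student_id),
    allergies  varchar(200),
    notes      text
);

-- Instead of year7_blah, year8_blah, year9_blah columns,
-- normalize the repeating group into one row per year:
CREATE TABLE student_year_blah (
    student_id integer REFERENCES student (student_id),
    year_no    integer,
    blah       varchar(100),
    PRIMARY KEY (student_id, year_no)
);
```

The normalized student_year_blah table also means adding a year 10 later is just new rows, not a schema change.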