Firstly, command classes are decoupled from the medium and protocol. That means you can design the command classes for your own programming convenience, rather than designing them to match the specifics of each protocol exactly (which would be impossible anyway, since different protocols may use different bit widths for the same command and field).
By convenience, I mean that you can use the maximum bit width you'll ever need for each command's fields.
However, you may still need device- or protocol-specific validation code, since each device or protocol imposes its own limits on what values those fields may hold (unless you don't plan to implement any validation at all).
When it comes to validation, there are several choices:
- Not doing it at all, if you will be doing all of the programming yourself and it is a hobby project where mistakes do not result in damage.
- Validating eagerly, i.e. in the command class. This may be difficult, since a command class might not know which device or protocol it will be sent to.
- Validating late, i.e. in the protocol class where the command values are converted into bytes.
For example, even if a validation rule says that a particular field can only have a value in the range 0 - 100, nothing stops you from using a `uint32_t` or `int32_t` for that field in the command class.
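To make the "validate late" option concrete, here is a minimal sketch. The command type, protocol class, and the 0 - 100 limit are invented for illustration; the point is that the command stores its field at full width, and the protocol class enforces its own limit at serialization time, because only it knows the device-specific constraints:

```cpp
#include <cstdint>
#include <stdexcept>

// Hypothetical command: stores the field at full width for convenience.
struct SetSpeedCommand {
    uint32_t speed;
};

// Hypothetical protocol: validates late, when converting to bytes,
// because only the protocol knows its own limit (0 - 100 assumed here).
class ProtocolA {
public:
    static constexpr uint32_t kMaxSpeed = 100;

    void serialize(const SetSpeedCommand& cmd) const {
        if (cmd.speed > kMaxSpeed)
            throw std::out_of_range("speed exceeds protocol limit");
        // ... convert cmd.speed into protocol-specific bytes here ...
    }
};
```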
On the second question, about an overloaded method that takes various built-in number types and appends the bytes to an internal byte vector: do note the caveats.
In my opinion, if you only need to work with the fundamental integer types, you don't need templates. Instead, you simply provide a function overload for each type and call the function with a value of the appropriate type.
```cpp
void CPacket::addData(uint32_t data) { ... }
void CPacket::addData(int32_t data)  { ... }
void CPacket::addData(uint16_t data) { ... }
void CPacket::addData(int16_t data)  { ... }
...
```
Regarding the code inside, there are several choices:
Type punning with a `union`. This assumes your code will only ever work with one byte endianness, so you don't need to consider porting to a different one. (Also note that reading from a union member other than the one last written is technically undefined behavior in C++, although most compilers support it as an extension.)
```cpp
union
{
    uint32_t value;
    uint8_t bytes[4];
} pun = { data };
// after that, add the bytes to the vector one by one,
// according to the byte endianness of the communication
```
Explicitly extracting the bytes with endian-agnostic bitwise arithmetic (see the note on casting):

```cpp
// only if value is unsigned; a signed value must first be cast to unsigned
uint8_t byte0 = (uint8_t)value;
uint8_t byte1 = (uint8_t)(value >> 8);
uint8_t byte2 = (uint8_t)(value >> 16);
uint8_t byte3 = (uint8_t)(value >> 24);
```
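Putting the pieces together, a sketch of what `addData` could look like with the bitwise approach, including the signed case mentioned above (the class layout and little-endian wire order are assumptions for illustration):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical CPacket sketch using the bitwise extraction technique.
// Little-endian wire order is assumed; reverse the pushes for big-endian.
class CPacket {
    std::vector<uint8_t> m_data;
public:
    void addData(uint32_t value) {
        m_data.push_back(static_cast<uint8_t>(value));
        m_data.push_back(static_cast<uint8_t>(value >> 8));
        m_data.push_back(static_cast<uint8_t>(value >> 16));
        m_data.push_back(static_cast<uint8_t>(value >> 24));
    }
    void addData(int32_t value) {
        // conversion to unsigned is well-defined (modulo 2^32)
        addData(static_cast<uint32_t>(value));
    }
    const std::vector<uint8_t>& data() const { return m_data; }
};
```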
Because the type of a column is a template parameter, you are modelling the column type within the C++ type system. This is good. A `Column<int>` and a `Column<std::string>` are different types. If there are some properties that are common to all column types (e.g. that a column has a name), you could extract these into a base class so that these common operations can be accessed via a common type. However, no type-specific operations like `get()` or `sum()` can exist in this base; they must be part of the templated `Column<T>`.
If you have a table type that has columns of different types, it is clearly not sensible to force these to have the same type, since you would necessarily lose access to the template parameter ("type erasure"). Instead, embrace the different types and make your `Table` strongly typed as well. A container like `std::tuple<T...>` can help here.
If you need access to the column-type-independent parts, you can always get a pointer to the column that can be used as the base type.
A sketch using C++14 (C++11 would require you to implement a couple of convenience functions yourself, but has `std::tuple` and template parameter packs):
```cpp
class ColumnBase {
    ...
public:
    std::string name() { … }
};

template<class T>
class Column : public ColumnBase {
    std::vector<T> m_items;
    ...
};

template<class... T>
class Table {
    std::tuple<Column<T>...> m_columns;

    template<std::size_t... index>
    std::vector<ColumnBase*> columns_vec_helper(std::index_sequence<index...>) {
        return { (&std::get<index>(m_columns))... };
    }

public:
    std::vector<ColumnBase*> columns_vec() {
        return columns_vec_helper(std::make_index_sequence<sizeof...(T)>{});
    }
};
```
We could then print out the name of all columns:
```cpp
for (const auto& colBase : table.columns_vec())
    std::cout << "column " << colBase->name() << "\n";
```
without having to handle each column type separately.
(runnable demo on ideone)
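As a hypothetical extension of the sketch (the `column<I>()` accessor below is not part of the original code), a typed accessor shows that `std::get` preserves each column's element type, so callers never need a cast:

```cpp
#include <cstddef>
#include <string>
#include <tuple>
#include <vector>

// Simplified Column without the base class, for a self-contained demo.
template<class T>
class Column {
    std::vector<T> m_items;
public:
    void add(T v) { m_items.push_back(std::move(v)); }
    const T& get(std::size_t i) const { return m_items[i]; }
};

template<class... T>
class Table {
    std::tuple<Column<T>...> m_columns;
public:
    // Hypothetical typed accessor: std::get keeps the element type.
    template<std::size_t I>
    auto& column() { return std::get<I>(m_columns); }  // C++14 return deduction
};
```

Usage then stays fully type-checked:

```cpp
Table<int, std::string> table;
table.column<0>().add(42);        // int column
table.column<1>().add("hello");   // std::string column
int n = table.column<0>().get(0); // statically an int: no cast needed
```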
Only templates will give you the type safety that you get an `int` out of an integer column. In contrast, unions/variant types require the using code to remember all possible types (with templates, the type checker enforces that we handle everything). With subtyping, we can't have column-type-specific operations that share an interface. I.e. a method `int IntColumn::get(std::size_t i)` and a related method `const std::string& StringColumn::get(std::size_t i)` might look like they have a common interface, but that would be only accidental and cannot be enforced. In particular, any combination of virtual methods and templates in C++ gets very ugly, very fast.
The disadvantage of templates is that you will be required to write generic code carefully, and will have to do template metaprogramming. When done correctly, the results can have amazing usability, but the implementation would be advanced C++. If your design is intended to be maintained by less advanced programmers (who will be as baffled as I will be when I look back at this code in a couple of months), then it might be more sensible to avoid such a "clever" solution despite its benefits and use more traditional OOP patterns that give you a similar structure, but might require a couple of `static_cast`s to work.
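For comparison, here is a minimal sketch of that more traditional alternative (class and method names are invented): columns live behind a base pointer in an ordinary `std::vector`, and the caller downcasts with `static_cast` when it knows the concrete column type:

```cpp
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

class ColumnBase {
public:
    virtual ~ColumnBase() = default;
};

template<class T>
class Column : public ColumnBase {
    std::vector<T> m_items;
public:
    void add(T v) { m_items.push_back(std::move(v)); }
    const T& get(std::size_t i) const { return m_items[i]; }
};

class Table {
    std::vector<std::unique_ptr<ColumnBase>> m_columns;
public:
    template<class T>
    Column<T>& addColumn() {
        m_columns.push_back(std::make_unique<Column<T>>());
        return static_cast<Column<T>&>(*m_columns.back());
    }
    // The caller must know (or record elsewhere) the true type of column i;
    // the type checker cannot verify this for us, unlike in the tuple version.
    template<class T>
    Column<T>& column(std::size_t i) {
        return static_cast<Column<T>&>(*m_columns[i]);
    }
};
```

The trade-off is visible in `column<T>(i)`: it compiles for any `T`, and a wrong choice is undefined behavior rather than a compile error.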
Best Answer
The first thing I would check is whether an existing CAD format would be sufficient. That would not only keep you from reinventing the wheel, but also improve interoperability with other CAD programs. Moreover, by adopting the terms of such a format, like "Layers" and "Entities", instead of terms like "layouts" and "patterns" (which typically have different meanings in other CAD systems), you might be able to reduce communication issues.
If you really feel you need to create your own file format, make sure you can keep the format backwards compatible with each new version of your CAD system; otherwise you will run into problems sooner or later. This is much simpler when your format provides a lot of metadata. In my experience, formats like XML or JSON make it easier to fulfill this requirement than proprietary binary formats. The decision between these two is surely pretty opinionated. XML is still more widely used and is supported by a lot more third-party tools, and it will allow you easier integration of other existing XML-based standards. On the other hand, it has the drawback that the files tend to become very big, much bigger than a typical JSON file carrying the same information. This is a tradeoff you need to consider for your use case.