Firstly, command classes are decoupled from the medium and protocol. That means you can design the command classes for your own programming convenience, rather than having to design them to match the specifics of each protocol exactly (which would be impossible anyway, since different protocols may use different bit widths for the same command and field).
When I mention convenience, what I mean is that you can use the maximum bit width you'll ever need for each command's fields.
However, you may still need device- or protocol-specific validation code, since each device or protocol imposes its own limits on what values those fields may hold (unless you don't plan to implement any validation at all).
When it comes to validation, there are several choices:
- Not doing it at all, if you will be doing all of the programming yourself, and if it is a hobby project where mistakes do not result in damage.
- Validating it eagerly, i.e. in the command class. This may be difficult, since a command class might not know which device or protocol it will be sent to.
- Validating it late, i.e. in the protocol class where the command values are being converted into bytes.
For example, even if a validation rule says that a particular field can only hold a value in the range 0 - 100, nothing stops you from using a uint32_t or int32_t for that field in the command class.
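To make the "late validation" option concrete, here is a minimal sketch of a protocol class that checks the range at encoding time. All names (BrightnessCommand, SomeProtocol, encode) are hypothetical; the point is only that the wide field in the command class is range-checked where the bytes are produced:

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

// Hypothetical command class: uses a comfortably wide type for the field.
struct BrightnessCommand {
    uint32_t level;
};

// Hypothetical protocol class: validates late, at byte-conversion time.
class SomeProtocol {
public:
    std::vector<uint8_t> encode(const BrightnessCommand& cmd) {
        // This particular protocol only allows 0 - 100 for this field.
        if (cmd.level > 100)
            throw std::out_of_range("brightness level must be in 0 - 100");
        return { static_cast<uint8_t>(cmd.level) };  // one byte on the wire
    }
};
```

A different protocol class could accept a wider range for the same command without the command class changing at all.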
On the second question, about an overloaded method that takes various built-in number types and appends their bytes to an internal byte vector, do note the caveats.
In my opinion, if you only need to work with the fundamental integer types, you don't need templates. Instead, simply provide a function overload for each of the types, and call the function with a value of the appropriate type.
void CPacket::addData(uint32_t data) { ... }
void CPacket::addData(int32_t data) { ... }
void CPacket::addData(uint16_t data) { ... }
void CPacket::addData(int16_t data) { ... }
...
Regarding the code inside, there are several choices:
Type punning with a union. This assumes that your code will only ever run on one byte endianness, so you don't need to consider porting to a platform with a different byte endianness.
union
{
    uint32_t value;
    uint8_t bytes[4];
} pun = { data };
// after that, add the bytes to the vector one-by-one, according to the byte endianness of the communication.
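Putting the pieces together, a sketch of this approach might look like the following free function (names are illustrative). It assumes the host and the wire share the same byte endianness, so the bytes can simply be appended in array order. Also note that reading a union member other than the one last written is technically undefined behavior in C++, although most compilers support it as an extension:

```cpp
#include <cstdint>
#include <vector>

// Assumes host byte order matches the wire byte order, so the bytes
// can be appended exactly as they sit in memory.
void addData(std::vector<uint8_t>& packet, uint32_t data) {
    union
    {
        uint32_t value;
        uint8_t bytes[4];
    } pun = { data };

    for (uint8_t b : pun.bytes)   // append the bytes one-by-one
        packet.push_back(b);
}
```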
Explicitly extracting the bytes with endian-agnostic bitwise arithmetic (see the note on casting in the comment below):
// only if value is unsigned. A signed value must first be cast
// to the unsigned type of the same width.
uint8_t byte0 = (uint8_t)value;
uint8_t byte1 = (uint8_t)(value >> 8);
uint8_t byte2 = (uint8_t)(value >> 16);
uint8_t byte3 = (uint8_t)(value >> 24);
Best Answer
std::vector stores its elements in a contiguous block of memory as you described, but that doesn't mean that the whole vector object resides in one contiguous piece of memory. While it's implementation-dependent, it's safe to assume that the vector object itself contains only "management overhead" such as the size of the vector and a pointer to the payload data, rather than the payload data itself. That makes efficient move assignments and reallocations much easier. For example, if you look at the vector implementation in Visual C++, you'll find that its data members are actually just a few pointers.
So in your scenario of a vector of vectors, the data stored in the "inner" vector simply contains the management data but not the payload for the nested vector. Thus, extending or shrinking the nested vector will not cause elements in the outer vector to move around in memory.
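This is easy to observe directly. The following small check (a sketch; the function name is mine) grows one inner vector far past any initial capacity and confirms that the outer vector's storage never moves:

```cpp
#include <vector>

// Returns true if growing an inner vector moved the outer vector's
// element storage. It should always return false: the outer vector
// only holds each inner vector's management data, not its payload.
bool innerGrowthMovesOuter() {
    std::vector<std::vector<int>> outer(3);
    const std::vector<int>* before = outer.data();

    for (int i = 0; i < 100000; ++i)
        outer[1].push_back(i);  // inner vector reallocates many times

    return outer.data() != before;
}
```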