This is a classic OOP issue. It does not signal that anything is wrong with the object-oriented way of thinking, but it does require careful consideration of the design.
When you wish to iterate over a container of objects and alter some property that only some of the objects have (and, more importantly, that only makes sense for some of the objects), it is in my opinion the design that is the problem. The question is: why do you want to alter a class-specific property through a generic list? I am sure it makes good sense in your situation, but think about it for a while.
You have a list of figures: these can be squares, rectangles, triangles, circles, polygons, etc. Now you wish to perform some action on them, which is fine if they all support the action. But it does not make sense to alter a property on an object that clearly does not support it. It is counter-intuitive to iterate over a list of figures setting the radius; it is not, however, counter-intuitive to iterate over a list of circles setting the radius.
That is not to say that some complicated, or let's just say clever, design choice cannot enable you to achieve this, but it goes against the concept of object orientation by trying to circumvent encapsulation. I would argue that it also goes against the idea of polymorphism.
How to do it instead, however, is a different matter, and I am afraid I cannot produce an answer for that at this moment. I do believe, however, that you should do your utmost to avoid putting yourself in that situation.
One possible alternative you could look into is using a visitor pattern, where you have a visitor that accepts an object type that allows changing the radius. For instance:
class Visitor {
public:
    void Visit(Circle& circle) { /* Do something circle-specific */ }
    void Visit(Square& square) { /* Do something square-specific */ }
    // ...
};
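To make the visitor work, each figure typically exposes an Accept method that performs the double dispatch. Here is a minimal sketch; the Shape base class, the Accept method, and the SetRadius visitor are illustrative names I made up, not anything from your code:

```cpp
class Circle;
class Square;

// The visitor interface: one overload per concrete figure type.
class Visitor {
public:
    virtual ~Visitor() = default;
    virtual void Visit(Circle& circle) = 0;
    virtual void Visit(Square& square) = 0;
};

// Common base class; Accept() performs the double dispatch.
class Shape {
public:
    virtual ~Shape() = default;
    virtual void Accept(Visitor& v) = 0;
};

class Circle : public Shape {
public:
    double radius = 1.0;
    void Accept(Visitor& v) override { v.Visit(*this); }
};

class Square : public Shape {
public:
    double side = 1.0;
    void Accept(Visitor& v) override { v.Visit(*this); }
};

// A visitor that only touches circles; squares are deliberately left alone.
class SetRadius : public Visitor {
public:
    explicit SetRadius(double r) : r_(r) {}
    void Visit(Circle& circle) override { circle.radius = r_; }
    void Visit(Square&) override { /* radius makes no sense for a square */ }
private:
    double r_;
};
```

Iterating a list of Shape pointers and calling Accept on each element then updates only the circles, without the loop itself ever asking what each object is.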
Firstly, command classes are decoupled from the medium and the protocol. That means you can design the command classes for your own programming convenience, rather than having to match them exactly to the specifics of each protocol (which would be impossible anyway, since different protocols may use different bit widths for the same command and field).
When I mention convenience, what I mean is that you can use the maximum bit width you'll ever need for each command's fields.
However, you may still need device- or protocol-specific validation code, since each device or protocol imposes its own limits on the values those fields may hold (unless you don't plan to implement any validation at all).
When it comes to validation, there are several choices:
- Not doing it at all, if you will be doing all of the programming yourself and it is a hobby project where mistakes do not result in damage.
- Validating it eagerly, i.e. in the command class. This may be difficult, since a command class might not know which device or protocol it will be sent to.
- Validating it late, i.e. in the protocol class where the command values are being converted into bytes.
For example, even if a validation rule says that a particular field can only hold a value in the range 0 - 100, nothing stops you from using a uint32_t or an int32_t for that field in the command class.
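As an illustration of the late-validation option, here is a rough sketch; SetSpeedCommand, ProtocolA, and the limit of 100 are made-up examples, not part of your design:

```cpp
#include <cstdint>
#include <stdexcept>
#include <vector>

// Hypothetical command: the field uses the widest type we'll ever need,
// even though a given protocol may allow only a narrower range.
struct SetSpeedCommand {
    uint32_t speed;  // protocol A might cap this at 100, protocol B at 1000
};

class ProtocolA {
public:
    // Late validation: the range check happens here, where the command is
    // serialized, because only the protocol knows its own limits.
    std::vector<uint8_t> Encode(const SetSpeedCommand& cmd) {
        if (cmd.speed > 100)
            throw std::out_of_range("speed exceeds protocol A's limit of 100");
        return { static_cast<uint8_t>(cmd.speed) };  // protocol A uses 1 byte
    }
};
```

The command class stays protocol-agnostic, and each protocol class rejects values it cannot represent at the moment of encoding.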
On the second question, about having an overloaded method that takes various built-in number types and appends the bytes to an internal byte vector, do notice the caveats.
In my opinion, if you only need to work with the fundamental integer types, you don't need templates. Instead, you simply provide a function overload for each type and call the function with a value of the appropriate type.
void CPacket::addData(uint32_t data) { ... }
void CPacket::addData(int32_t data) { ... }
void CPacket::addData(uint16_t data) { ... }
void CPacket::addData(int16_t data) { ... }
...
Regarding the code inside, there are several choices:
Type punning with a union. This assumes that your code will work exclusively with one byte endianness, so you don't need to consider the possibility of porting to a different one. (Be aware that reading a union member other than the one last written is well-defined in C but technically undefined behavior in C++, although most compilers support it.)
union {
    uint32_t value;
    uint8_t bytes[4];
} pun = { data };
// after that, add the bytes to the vector one by one, according to the byte endianness of the communication.
Explicitly extracting the bytes with endian-agnostic bitwise arithmetic (see the note on casting in the comment):
// only if value is unsigned; a signed value must first be cast to unsigned
uint8_t byte0 = (uint8_t)value;
uint8_t byte1 = (uint8_t)(value >> 8);
uint8_t byte2 = (uint8_t)(value >> 16);
uint8_t byte3 = (uint8_t)(value >> 24);
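Putting the pieces together, an addData overload built on this shift-and-mask extraction might look like the following sketch (the little-endian wire order and the bytes() accessor are my assumptions, not part of your interface):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical CPacket fragment: appends a 32-bit value to the internal
// byte vector in little-endian order using shift-and-mask extraction.
class CPacket {
public:
    void addData(uint32_t data) {
        m_bytes.push_back(static_cast<uint8_t>(data));
        m_bytes.push_back(static_cast<uint8_t>(data >> 8));
        m_bytes.push_back(static_cast<uint8_t>(data >> 16));
        m_bytes.push_back(static_cast<uint8_t>(data >> 24));
    }
    // Signed overload: cast to unsigned first, then reuse the same logic.
    void addData(int32_t data) {
        addData(static_cast<uint32_t>(data));
    }
    const std::vector<uint8_t>& bytes() const { return m_bytes; }
private:
    std::vector<uint8_t> m_bytes;
};
```

Because the shifts operate on values rather than on memory, this compiles to the same wire order regardless of the host machine's endianness.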
It would be a reasonable choice to make World implement IHittable, because it gives you the option to add an instance of World to another World (or to generalize the idea of World and rename it Container). This would be an example of the Composite pattern (https://en.wikipedia.org/wiki/Composite_pattern).
If you never intend to put a world inside a world, there is no harm at all in not implementing IHittable. After all, design is not about getting it "right" in the abstract, but about what serves your use case best. If your code will never treat a World object as an IHittable, then World shouldn't implement that interface.
This ends up being a question about your object boundaries and abstractions. There is no easy answer to these kinds of questions, since it takes some experience to arrive at the right abstractions for a specific use case. But here are some things to consider:
- If intersects is an instance method, you can make World implement IHittable.
- If you want to make intersects a static method that takes both a Ray and a World, then you need to be able to access the list of IHittable objects the world contains.
- When you make objects public to allow access from a static method, your World ends up being just a data structure. That works, but you lose the encapsulation aspect of object-oriented programming (whether that is a good thing or a bad thing).
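For the composite route, a minimal sketch of World implementing IHittable could look like this; the Ray fields and the exact method signatures are assumptions based on the question, not a definitive design:

```cpp
#include <memory>
#include <vector>

// Hypothetical ray type; your actual fields will differ.
struct Ray {
    double ox, oy, dx, dy;
};

class IHittable {
public:
    virtual ~IHittable() = default;
    virtual bool intersects(const Ray& ray) const = 0;
};

// Composite: a World is itself hittable, so a World can contain other Worlds.
class World : public IHittable {
public:
    void add(std::unique_ptr<IHittable> obj) {
        m_objects.push_back(std::move(obj));
    }
    // A ray hits the world if it hits any contained object; the object
    // list stays private, so no static accessor is needed.
    bool intersects(const Ray& ray) const override {
        for (const auto& obj : m_objects)
            if (obj->intersects(ray))
                return true;
        return false;
    }
private:
    std::vector<std::unique_ptr<IHittable>> m_objects;
};
```

Keeping intersects as an instance method means the list of objects never has to be exposed, which is the encapsulation trade-off described above.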