It'd be defined by the architecture you're using. On a Zilog Z80 (a common embedded chip) they'd be one size, while they could be entirely different sizes on an x86 chipset. However, the sizes are ordered relative to each other. Essentially, short and long aren't separate types but qualifiers for the int type. A short int will never be larger than a (regular) int, and a long int never smaller. So say your int is bounded to 4 bytes: the short qualifier may leave it at 4 bytes, though 2 bytes is very common, and the long qualifier may boost it to 8 bytes, though it can stay as low as 4. Keep in mind that this tends to track the machine's word length as well, so on a 32-bit system you'd often max out at 4 bytes per int anyway, making long the same size as a regular int. Thus, short ≤ int ≤ long.
However, if you lengthen it again, you can push the int into the next word, giving you 8 whole bytes of storage. That is the word size on 64-bit machines, so they don't have to worry about such things and can just use one word for long ints, letting them sit a step above standard ints, while long long ints get really big.
As far as which to choose, it boils down to a question that Java programmers, for instance, don't have to worry about: "What is your architecture?" Since it all depends on the word size of the machine in question, you have to understand that up front before you decide which to use. You then pick the smallest reasonable size to save as much memory as you can, because that memory will be allocated whether you use all of its bits or not. So you save where you can: pick shorts when you can, ints when you can't, and if you need something bigger than what regular ints give you, lengthen as needed until you hit the word ceiling. Past that, you'd need to supply big-number routines or get them from a library.
C may well be "portable assembly", but you still have to know thy hardware.
"looking for a specific use case where both a subclass and a class within the same package needs to access a protected field or method..."
Well, to me such a use case is general rather than specific, and it stems from my preferences to:
- Start with the strictest access modifier possible, resorting to weaker ones only later, as deemed necessary.
- Have unit tests reside in the same package as tested code.
From the above, I can start designing my objects with default (package-private) access modifiers (I would start with private, but that would complicate unit testing):
public class Example {
    public static void main(String[] args) {
        new UnitTest().testDoSomething(new Unit1(), new Unit2());
    }

    static class Unit1 {
        void doSomething() {} // default access
    }

    static class Unit2 {
        void doSomething() {} // default access
    }

    static class UnitTest {
        void testDoSomething(Unit1 unit1, Unit2 unit2) {
            unit1.doSomething();
            unit2.doSomething();
        }
    }
}
Side note: in the above snippet, Unit1, Unit2 and UnitTest are nested within Example for simplicity of presentation, but in a real project I would likely have these classes in separate files (and UnitTest even in a separate directory).
Then, when the necessity arises, I would weaken access control from default to protected:
public class ExampleEvolved {
    public static void main(String[] args) {
        new UnitTest().testDoSomething(new Unit1(), new Unit2());
    }

    static class Unit1 {
        protected void doSomething() {} // made protected
    }

    static class Unit2 {
        protected void doSomething() {} // made protected
    }

    static class UnitTest {
        // ---> no changes needed, although UnitTest doesn't subclass anything
        // ...and, hey, if I had to subclass... which one of Unit1, Unit2?
        void testDoSomething(Unit1 unit1, Unit2 unit2) {
            unit1.doSomething();
            unit2.doSomething();
        }
    }
}
You see, I can keep the unit test code in ExampleEvolved unchanged, because protected methods are accessible from the same package even though the accessing object is not a subclass. Fewer changes needed means safer modification: after all, I changed only access modifiers and did not modify what Unit1.doSomething() and Unit2.doSomething() do, so it is only natural to expect the unit test code to continue to run without modifications.
I've heard legends varying from "it should be optional so that some small compilers can be C11-compliant without VLAs" to "it was a mistake in the first place". I've never gotten one true and definite answer to this, though. Ultimately, I don't believe anyone really has one, as the reason (assuming, and hoping, there is one) was never disclosed (as far as my old searches went).
From Chapter 4 (page 13) of Rationale for International Standard - Programming Languages - C 5.10 (2003)
Emphasis mine. Notice that this decision goes against their own rationale. Yet another thing made optional: now you either get __STDC_NO_VLA__ or VLA support. It is a very odd decision.