The short answer is that you switch when the effort of not switching exceeds the effort of switching, or when you can foresee that it soon will. That's a very subjective assessment which requires experience, but it's relatively easy to see the extreme cases.
For example, say you estimate one month to switch, but not switching means you can't use a module that's only available in the upgraded version, so you'll have to take two months to implement one from scratch. The choice is easy.
For another example, if you are spending all your time fixing security vulnerabilities in software that is no longer supported by the vendor, it's a good time to switch.
If you never have meetings where someone says, "This would be a lot better/easier with the new version," then you probably don't need to switch.
The in-between cases are harder to recognize, but it usually feels like building pressure, where sticking with the old version feels more and more limiting.
As for your test of whether software is "well done", your bug tracker provides a good rule of thumb. When you have zero critical defects and your list of non-critical defects is at a reasonable level and steadily shrinking, you can call it 'done.' You can get a sense of your software architecture's quality from the turnaround time for adding new features or fixing bugs. There's no cut-off point that says, "this is professional," but the lower the better.
It depends on the platform and implementation.
C++ guarantees that char is exactly one byte and at least 8 bits wide. A short int is at least 16 bits and no smaller than char. An int is at least as big as a short int, and a long int is at least 32 bits and no smaller than an int. In other words:

sizeof(char) == 1, and sizeof(char) <= sizeof(short int) <= sizeof(int) <= sizeof(long int).

(The size of bool is implementation-defined; it is only guaranteed to occupy at least one byte.)
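These guarantees can be checked directly with static_assert; here is a minimal sketch, assuming a C++11 (or later) compiler:

    #include <climits>
    #include <cstdio>

    int main() {
        // Guarantees from the standard: char is exactly one byte (CHAR_BIT >= 8 bits),
        // and each wider integer type is at least as large as the narrower one.
        static_assert(sizeof(char) == 1, "char is always one byte");
        static_assert(sizeof(short int) >= sizeof(char), "short >= char");
        static_assert(sizeof(int) >= sizeof(short int), "int >= short");
        static_assert(sizeof(long int) >= sizeof(int), "long >= int");
        static_assert(CHAR_BIT >= 8, "a byte is at least 8 bits wide");

        // The concrete sizes are implementation-defined; print what this platform uses.
        std::printf("char=%zu short=%zu int=%zu long=%zu bool=%zu\n",
                    sizeof(char), sizeof(short int), sizeof(int),
                    sizeof(long int), sizeof(bool));
    }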
The actual memory model of C++ is very compact and predictable, though. For example, there is no metadata in objects, arrays or pointers. Structures and classes are laid out contiguously, just like arrays, although padding may be inserted where needed.
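For instance, a struct that mixes narrow and wide members will typically be padded so the wider member stays aligned. The exact padding is implementation-defined; the sizes in the comments below are just what most common ABIs produce:

    #include <cstdio>

    struct Packed { char a; char b; };   // two chars: usually 2 bytes, no padding
    struct Padded { char a; int b; };    // char + int: usually 8 bytes, with
                                         // 3 bytes of padding inserted after 'a'

    int main() {
        std::printf("Packed=%zu Padded=%zu\n", sizeof(Packed), sizeof(Padded));
    }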
Frankly, though, such a comparison is silly at best, as Java memory usage depends more on the Java implementation than on the code it runs.
Correct.
Java supports 64 bit in a way that introduces no behavioural differences. A Java program will run on 32-bit or 64-bit platforms without change. All of the primitive types are identical, the class library APIs are identical, and the bytecode format is the same. As far as the program is concerned, the only difference is that you can allocate more things before your heap fills up.
Inaccurate. What you see is a value that may or may not be related to the object's memory address at some point in the object's lifetime. Even if it is a memory address, it can't be used as one: there's no way to turn the int value back into a reference in pure Java. (Even with JNI / JNA it would be highly unreliable, so don't think about doing it!) An identity hashcode is a "uniquish" number. That's all.
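A short sketch of what that looks like in practice (a hypothetical demo class, not part of the original answer):

    public class IdentityHashDemo {
        public static void main(String[] args) {
            Object a = new Object();
            Object b = new Object();

            // Both calls return plain int values: "uniquish" identifiers,
            // not usable as addresses and not convertible back to references.
            int ha = System.identityHashCode(a);
            int hb = System.identityHashCode(b);
            System.out.println("a: " + ha + ", b: " + hb);

            // The value is stable for the lifetime of the object...
            System.out.println("stable: " + (ha == System.identityHashCode(a)));
            // ...even though the GC may move the object around in memory,
            // which is one reason it cannot reliably be a machine address.
        }
    }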
No. That would make the class libraries different for 32-bit and 64-bit Java and prevent the same code from running on both platforms.
It just does. There are one or two "issues". For instance, array sizes and string lengths are limited to 32-bit int values. These limits mean that you can't fully exploit the hardware; e.g. you can't have arrays with more than 2^31 elements on a 64-bit JVM. (But if you could, there would be compatibility problems between 32-bit and 64-bit JVMs and applications designed to run on the two flavours.)
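To make the limit concrete, here is a small sketch (a hypothetical demo, not from the original answer) showing that array lengths and indexes are plain ints:

    public class ArrayLimitDemo {
        public static void main(String[] args) {
            // The theoretical ceiling is Integer.MAX_VALUE (2^31 - 1) elements,
            // even on a 64-bit JVM with plenty of heap.
            System.out.println("max length: " + Integer.MAX_VALUE);

            // long n = 3_000_000_000L;
            // byte[] big = new byte[n];   // does not compile: an array size
            //                             // expression must be an int

            byte[] small = new byte[10];   // sizes and indexes are ints
            System.out.println(small.length);
        }
    }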
The identityHashCode return type is not a real issue, because the value is NOT guaranteed to be related to a machine address, and it can't be used as a machine address either.