There is actually a (subtle) difference between the two. Imagine you have the following code in File1.cs:
// File1.cs
using System;

namespace Outer.Inner
{
    class Foo
    {
        static void Bar()
        {
            double d = Math.PI;
        }
    }
}
Now imagine that someone adds another file (File2.cs) to the project that looks like this:
// File2.cs
namespace Outer
{
    class Math
    {
    }
}
The compiler searches Outer before looking at those using directives outside the namespace, so it finds Outer.Math instead of System.Math. Unfortunately (or perhaps fortunately?), Outer.Math has no PI member, so File1 is now broken.
This changes if you put the using inside your namespace declaration, as follows:
// File1b.cs
namespace Outer.Inner
{
    using System;

    class Foo
    {
        static void Bar()
        {
            double d = Math.PI;
        }
    }
}
Now the compiler searches System before searching Outer, finds System.Math, and all is well.
Some would argue that Math might be a bad name for a user-defined class, since there's already one in System; the point here is just that there is a difference, and it affects the maintainability of your code.
It's also interesting to note what happens if Foo is in namespace Outer, rather than Outer.Inner. In that case, adding Outer.Math in File2 breaks File1 regardless of where the using goes. This implies that the compiler searches the innermost enclosing namespace before it looks at any using directive.
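For illustration, here is that variant (the file name File1c.cs is mine, just for this sketch):

// File1c.cs (hypothetical variant for illustration)
using System;

namespace Outer
{
    class Foo
    {
        static void Bar()
        {
            // Once File2.cs exists, Math resolves to Outer.Math here,
            // because the enclosing namespace Outer is searched before
            // any using directive -- so this line no longer compiles.
            double d = Math.PI;
        }
    }
}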
float and double are floating binary point types. In other words, they represent a number like this:

10001.10010110011

The binary number and the location of the binary point are both encoded within the value.
decimal is a floating decimal point type. In other words, it represents a number like this:

12345.65789

Again, the number and the location of the decimal point are both encoded within the value – that's what makes decimal still a floating point type instead of a fixed point type.
The important thing to note is that humans are used to representing non-integers in a decimal form, and expect exact results in decimal representations; not all decimal numbers are exactly representable in binary floating point – 0.1, for example – so if you use a binary floating point value you'll actually get an approximation to 0.1. You'll still get approximations when using a floating decimal point as well – the result of dividing 1 by 3 can't be exactly represented, for example.
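A quick way to see the difference (a minimal console sketch; the exact digits printed can vary slightly by runtime):

using System;

class FloatingPointDemo
{
    static void Main()
    {
        // Binary floating point: 0.1 has no exact representation,
        // so adding it ten times drifts away from 1.0.
        double dSum = 0.0;
        for (int i = 0; i < 10; i++) dSum += 0.1;
        Console.WriteLine(dSum == 1.0);        // False
        Console.WriteLine(dSum.ToString("R")); // e.g. 0.9999999999999999

        // Decimal floating point: 0.1m is exact, so the sum is exact.
        decimal mSum = 0.0m;
        for (int i = 0; i < 10; i++) mSum += 0.1m;
        Console.WriteLine(mSum == 1.0m);       // True

        // But decimal can't represent 1/3 exactly either.
        Console.WriteLine(1m / 3m);            // 0.3333333333333333333333333333
    }
}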
As for what to use when:
For values which are "naturally exact decimals" it's good to use decimal. This is usually suitable for any concepts invented by humans: financial values are the most obvious example, but there are others too. Consider the score given to divers or ice skaters, for example.

For values which are more artefacts of nature which can't really be measured exactly anyway, float/double are more appropriate. For example, scientific data would usually be represented in this form. Here, the original values won't be "decimally accurate" to start with, so it's not important for the expected results to maintain the "decimal accuracy". Floating binary point types are much faster to work with than decimals.
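As a concrete illustration of the financial case (a small sketch; the line-item amounts are made up):

using System;

class MoneyDemo
{
    static void Main()
    {
        // Two invoice line items: decimal keeps the sum exact.
        decimal a = 1.10m, b = 2.20m;
        Console.WriteLine(a + b); // 3.30

        // The same sum in double picks up binary rounding noise.
        double x = 1.1, y = 2.2;
        Console.WriteLine((x + y).ToString("R")); // 3.3000000000000003
    }
}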
The AWS SDK for .NET features samples for several Amazon Web Services, including an Amazon SQS Sample, which demonstrates how to make basic requests to Amazon SQS using the AWS SDK for .NET.
The SDK is installed via Windows Installer and integrates with Visual Studio; by default, the sample ends up in
C:\Program Files (x86)\AWS SDK for .NET\Samples\AmazonSQS_Sample
and includes solutions for both Visual Studio 2008 and 2010.
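For orientation, the basic requests the sample walks through look roughly like this. Note that this sketch targets the current AWS SDK for .NET (v3) and its async API rather than the 2008/2010-era SDK the sample ships with, and the queue name is a made-up placeholder:

using System;
using System.Threading.Tasks;
using Amazon.SQS;
using Amazon.SQS.Model;

class SqsDemo
{
    static async Task Main()
    {
        // Credentials and region come from the standard SDK configuration.
        var sqs = new AmazonSQSClient();

        // Create a queue ("MyTestQueue" is just a placeholder name).
        var created = await sqs.CreateQueueAsync(
            new CreateQueueRequest { QueueName = "MyTestQueue" });
        string queueUrl = created.QueueUrl;

        // Send a message to the queue.
        await sqs.SendMessageAsync(queueUrl, "Hello from the SQS sample.");

        // Receive the message, print it, and delete it.
        var received = await sqs.ReceiveMessageAsync(
            new ReceiveMessageRequest { QueueUrl = queueUrl });
        foreach (var message in received.Messages)
        {
            Console.WriteLine(message.Body);
            await sqs.DeleteMessageAsync(queueUrl, message.ReceiptHandle);
        }
    }
}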