`:=` is the assignment operator in languages that use a single equals sign (`=`) for equality testing; the best known of those languages is Pascal. Due to C's influence, most languages switched to `=` for assignment and `==` for testing. Some older texts, and authors trained in that style, use `:=` in pseudocode. You sometimes also see arrows (`←`) for assignment.
From the article:
```
input: an array a of length n with array elements numbered 0 to n − 1

inc ← round(n/2)
while inc > 0 do:
    for i = inc .. n − 1 do:
        temp ← a[i]
        j ← i
        while j ≥ inc and a[j − inc] > temp do:
            a[j] ← a[j − inc]
            j ← j − inc
        a[j] ← temp
    inc ← round(inc / 2.2)
```
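For reference, here is a direct transcription of that pseudocode into Python (the gap-shrink factor 2.2 is taken straight from the article's version; note that Python's `round` uses banker's rounding, which can differ by one from other conventions on exact halves, though the sort remains correct either way):

```python
def shellsort(a):
    """Shellsort, transcribed from the pseudocode above.

    Sorts the list `a` in place and returns it.
    """
    n = len(a)
    inc = round(n / 2)          # initial gap: half the array length
    while inc > 0:
        # Gapped insertion sort: elements inc apart form sorted subsequences.
        for i in range(inc, n):
            temp = a[i]
            j = i
            while j >= inc and a[j - inc] > temp:
                a[j] = a[j - inc]
                j -= inc
            a[j] = temp
        inc = round(inc / 2.2)  # shrink the gap; hits 1 before reaching 0,
                                # so the last pass is a plain insertion sort
    return a
```

For example, `shellsort([5, 2, 9, 1, 7, 3])` returns `[1, 2, 3, 5, 7, 9]`. The final `inc == 1` pass is what guarantees a fully sorted result; the earlier, larger gaps just move elements long distances cheaply.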
Some modern languages use arrows for assignment; most notably R, where `<-` is the conventional assignment operator and `<<-` assigns in an enclosing scope, while the single equals (`=`) is also permitted in most local contexts.
From Sebesta's *Concepts of Programming Languages* and the class notes of Dr. K. N. King, we learn that the assignment standards go back much farther than C or Pascal. It appears that in 1958, when ALGOL was being designed, the committee, composed of American and European representatives, decided to use `:=` for assignment. Some of the Germans on the committee were familiar with Konrad Zuse's Plankalkül (which was drafted during World War II but not published until 1972, and not implemented until 2005) and wanted assignment to follow that language's proposed form, `b + c => a`, where `b + c` is assigned to `a`. The committee changed this to `=:` on the grounds that the keypunches used to enter programs at the time had no `>` character, so they compromised on the equals-colon. However, the Americans, being familiar with FORTRAN (which didn't even have lower case until 1990), wanted assignment to operate to the left, since that was how FORTRAN did it.
So they managed to get it changed to `:=` instead, with assignment operating toward the left in the style of FORTRAN (a known, implemented language) rather than Plankalkül (virtually unknown outside Germany, and unimplemented). ALGOL 60 strongly influenced all major subsequent imperative languages, including Pascal and C. Thus Pascal kept ALGOL's syntax for assignment, and both kept the left-handedness of assignment.
ALGOL was designed to be easy to read and close to mathematical notation. It was the de facto (and essentially de jure) standard for writing algorithms in journals for the next 20+ years. Therefore, instructors and computer scientists educated from 1960 to around 1980 would have been familiar with that style of notation.
The release of the IBM 029 Keypunch in 1964 allowed for > and < characters, thus prompting their inclusion in C among others.
These are very specific terms as related to logic.
Here are some starting points:
http://en.wikipedia.org/wiki/Soundness
http://en.wikipedia.org/wiki/Completeness_(logic)
Basically, soundness (of an algorithm) means that the algorithm doesn't yield any results that are untrue. If, for instance, I have a sorting algorithm that sometimes returns a list that is not actually sorted, the algorithm is not sound.
Completeness, on the other hand, means that the algorithm addresses all possible inputs and doesn't miss any. So, if my sorting algorithm never returned an unsorted list, but simply refused to work on lists that contained the number 7, it would not be complete.
An algorithm is both complete and sound if it works on all inputs (semantically valid in the world of the program) and always gets the answer right.
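The two failure modes can be made concrete with a toy sketch (the "superstitious" sorter below is invented purely for illustration, echoing the refuses-lists-containing-7 example above):

```python
def superstitious_sort(xs):
    """Sound but incomplete: every answer it gives is correct,
    but it refuses some valid inputs (any list containing 7)."""
    if 7 in xs:
        raise ValueError("won't sort lists containing 7")  # completeness failure
    return sorted(xs)

def sloppy_sort(xs):
    """Complete but unsound: it accepts every input, but (standing in
    for a buggy algorithm) just returns the list unchanged."""
    return list(xs)  # soundness failure: the output may not be sorted

def is_sorted(xs):
    """Check the correctness property both sorters claim to satisfy."""
    return all(xs[i] <= xs[i + 1] for i in range(len(xs) - 1))
```

Here `superstitious_sort([3, 1, 2])` correctly returns `[1, 2, 3]`, but `superstitious_sort([7, 1])` refuses to answer at all; `sloppy_sort` never refuses, but `is_sorted(sloppy_sort([2, 3, 1]))` is `False`. A sorter that is both sound and complete would pass `is_sorted` on every valid input.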
In short: a smaller (or bigger) exponent of `n`.
It relates directly to a part of my answer to an earlier question of yours here, which basically says that `n` to some power grows faster if the exponent is bigger, regardless of constant factors.
In case 1, the function `f(n)` is assumed to be polynomially smaller than `n^(log_b a)`. Don't get scared by the logarithm here: it is just a number, because `a` and `b` are constants, and so is `log_b(a)`. (Which is why "poly-logarithmically" does not enter the picture in that case.)

The case then defines what class of functions `f(n)` must belong to: `f(n) = O(n^(log_b a − ε))` for some constant `ε > 0`. This is already the answer to your question "what it means to be polynomially smaller": all it means is that the exponent of `n` must be less than `log_b(a)`; you subtract a positive number (epsilon) from it.

This is another way of looking at it: the polynomially bigger one has `ε` more factors of `n` (or fewer, in the case of smaller).

At 57:30 he gives an example where he ends up in case 1: he compares `f(n) = n` with `n^2`. `f(n)` is polynomially smaller because `1 < 2`.
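A quick numeric sketch of a case-1 recurrence may help (the concrete values `a = 4`, `b = 2`, `f(n) = n` are my own choice for illustration, not from the lecture): here `log_b(a) = 2`, and `f(n) = n` has exponent `1 < 2`, so it is polynomially smaller with `ε = 1`. Case 1 then predicts `T(n) = Θ(n^(log_b a)) = Θ(n^2)`, and evaluating the recurrence shows `T(n) / n^2` settling toward a constant:

```python
from functools import lru_cache

# Case-1 example: T(n) = 4*T(n/2) + n, so a = 4, b = 2, log_b(a) = 2.
# f(n) = n has exponent 1 < 2, i.e. f is polynomially smaller (epsilon = 1),
# and the master theorem gives T(n) = Theta(n^2).

@lru_cache(maxsize=None)
def T(n):
    if n <= 1:
        return 1
    return 4 * T(n // 2) + n

for k in (4, 8, 12):
    n = 2 ** k
    print(n, T(n) / n ** 2)  # ratio approaches a constant (here 2 - 1/n)
```

For powers of two the recurrence solves exactly to `T(n) = 2n^2 − n`, so the printed ratios tend to 2: the cost of the leaves (`n^(log_b a)` work) dominates the polynomially smaller per-level cost `f(n)`, which is precisely what case 1 says.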