If the points are constantly moving, then it's impossible to do better than O(n), because you have to check each point every time it moves. You already have code somewhere that moves the points; just add the bounds check into that code. Keeping a sorted list, or maintaining a quadtree or something similar, would only be more efficient if the point positions were not changing frequently and you were instead checking several different bounding boxes against the same points, like searching for points on a map, for example.
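To make that concrete, here's a minimal sketch (all names are made up) of folding the bounds check into the same loop that already moves the points, so each point is touched exactly once per update:

```python
def move_and_check(points, velocities, dt, box):
    """points/velocities: lists of (x, y); box: (xmin, ymin, xmax, ymax).
    Moves every point and returns the indices that land inside the box."""
    xmin, ymin, xmax, ymax = box
    inside = []
    for i, ((x, y), (vx, vy)) in enumerate(zip(points, velocities)):
        x, y = x + vx * dt, y + vy * dt  # move the point
        points[i] = (x, y)
        # piggyback the bounds check on the movement pass
        if xmin <= x <= xmax and ymin <= y <= ymax:
            inside.append(i)
    return inside
```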
If the points are moving according to a preset pattern, like in a straight line at a constant velocity, then you can change your algorithm to only do the bounds check when the direction or speed is changed. For example, if you are moving one pixel per second, and the bounding box is 10 pixels away, you know you don't have to check that point for another 10 seconds. 100 points is a pretty small number though, and at that small of a scale, the overhead from the extra complexity may not outweigh the extra iterations of the simpler algorithm.
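The "don't check again for 10 seconds" idea can be sketched like this (a conservative estimate, assuming straight-line motion at constant speed; the helper name is made up):

```python
def seconds_until_possible_hit(pos, speed, box):
    """Lower bound on the time before a point at `pos`, moving at
    `speed` units per second in any direction, could enter `box`.
    Returns 0 if the point is already inside."""
    x, y = pos
    xmin, ymin, xmax, ymax = box
    # distance from the point to the box along each axis (0 if inside)
    dx = max(xmin - x, 0, x - xmax)
    dy = max(ymin - y, 0, y - ymax)
    dist = (dx * dx + dy * dy) ** 0.5
    return dist / speed if speed > 0 else float("inf")
```

For the example in the text (10 pixels away, 1 pixel per second), this returns 10, so the point can safely be skipped for the next 10 checks.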
Also, that small of a scale makes me wonder if you're sure the bounds checking is your bottleneck. A modern computer should be easily able to bounds check 100 points in under 10 microseconds. Have you actually measured it?
If there is a known maximum N, you can use a bit array for really fast lookup time.
Simply keep an array of N/8 bytes (rounded up), with each bit corresponding to a number: 1 if it is in the set, 0 if it isn't. Look up the relevant bit to check whether a number is in the set.
If this is too slow, and you have the memory to spare ("millions" doesn't sound very high), then just keep a boolean array of size N.
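A minimal sketch of the bit-array idea, using a bytearray of N/8 bytes (rounded up) with one bit per number:

```python
class BitSet:
    """Membership set for integers in [0, n), one bit per number."""

    def __init__(self, n):
        self.bits = bytearray((n + 7) // 8)  # N/8 bytes, rounded up

    def add(self, x):
        self.bits[x >> 3] |= 1 << (x & 7)    # set bit x%8 of byte x//8

    def remove(self, x):
        self.bits[x >> 3] &= ~(1 << (x & 7))

    def __contains__(self, x):
        return bool(self.bits[x >> 3] & (1 << (x & 7)))
```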
Edit: for much larger values of N this won't fit into memory, and you probably need some sort of hashing. It'd still be nice to use the fact that subsequent calls are likely to be close to each other, though.
So a sketch of an idea: use the number divided by 256 as the key, and store a 256-bit bit array as the value (for all the numbers that divide to this key). Store the most recently retrieved key and bit array. If the next call divides to the same key, you can look it up in the cache bit array immediately. The number 256 can be tweaked a lot, of course. I'm not sure this is actually faster than just using map without extra gimmicks, so profile it.
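That sketch might look something like this in code (assuming non-negative integer keys; class and attribute names are made up):

```python
class BucketedSet:
    """Sparse bit set: key = n // 256, value = a 256-bit mask for that
    bucket. The most recently used bucket is cached, so runs of nearby
    lookups skip the dict entirely."""

    def __init__(self):
        self.buckets = {}     # key -> 256-bit int used as a bit mask
        self.last_key = None  # cache of the most recent lookup
        self.last_mask = 0

    def add(self, n):
        key = n >> 8
        self.buckets[key] = self.buckets.get(key, 0) | (1 << (n & 255))
        if key == self.last_key:          # keep the cache coherent
            self.last_mask = self.buckets[key]

    def __contains__(self, n):
        key = n >> 8
        if key != self.last_key:          # cache miss: one dict lookup
            self.last_key = key
            self.last_mask = self.buckets.get(key, 0)
        return bool(self.last_mask & (1 << (n & 255)))
```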
For large values of N it's going to depend a lot on the sparseness of the data; clearly if your maximum N is 10^12 and a lot of the values are in the set, then it's not going to fit into memory anyway.
If it is truly an ordered linked list, this is a fairly bad choice, because you have to traverse the list one node at a time until you find the right place to insert the item. In other words, O(N). This is fine if the list is small but can get out of hand for big graphs.
Usually, what you will need for that kind of stuff is a heap:
http://en.wikipedia.org/wiki/Heap_%28data_structure%29
...where you can insert and pop in O(log n). It's also usually available out of the box in most programming languages.
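For instance, Python ships one as `heapq`, a min-heap over a plain list with O(log n) push and pop:

```python
import heapq

heap = []
for cost in [5, 1, 3]:
    heapq.heappush(heap, cost)  # O(log n) insert, heap order maintained

smallest = heapq.heappop(heap)  # O(log n) pop of the minimum element
```

Here `smallest` comes out as 1: the pop always returns the minimum, which is exactly what you want when repeatedly pulling the cheapest item off a priority queue.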