In Big O notation, is allocating an array of N elements O(1) or O(n)?
For example in C#, if I allocate an array like this:
int[] a = new int[10];
When I display this array, I get:
{0,0,0,0,0,0,0,0,0,0}
f ∈ Θ(g) is the same as f ∈ Ο(g) ∧ f ∈ Ω(g). So, if you can prove that f ∈ Ο(g) ∧ f ∈ Ω(g), then by all means use Θ.
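To make this concrete, here is a small worked example (the functions are my own, chosen for illustration, not taken from the question). Take f(n) = 3n² + 2n and g(n) = n². For all n ≥ 1:

3n² + 2n ≤ 5n², so f ∈ Ο(g)
3n² + 2n ≥ 3n², so f ∈ Ω(g)

Both bounds hold, therefore f ∈ Θ(g), i.e. f ∈ Θ(n²).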
Note that this has absolutely nothing whatsoever to do with time complexity of algorithms, average or otherwise. Θ, Ο, ο, Ω and ω are about the growth rate of functions; whether those functions describe average time/step/space complexity of an algorithm, worst-case time/step/space complexity of an algorithm, best-case time/step/space complexity of an algorithm, expected-case time/step/space complexity of an algorithm, the population of China, the number of users on StackOverflow, or the size of Lara Croft's bra is completely irrelevant.
The Java Collections framework is something that you should browse. There are also some types that aren't array-like (Set and Map) but you will find that you will use them often. There are many different types of collections, and some have advantages over others for certain types of operations.
First, the thing that isn't in the collections framework: the array. Not ArrayList, but rather the classic int[]. The advantage of the array is that it maps closely to how memory actually works; foo[42] is a direct lookup. Many system-level calls that give you a fixed number of items return arrays, so they are good to know. "1,2,3".split(",") returns an array, for example.
The problem with arrays is that they aren't that flexible. Adding an element to the end of an array that is already at its maximum capacity means allocating a new, larger array and then doing a System.arraycopy() (javadoc) to copy all the elements from the old array to the new one, which takes time proportional to the array's length.
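The grow-and-copy step can be sketched like this (a minimal illustration; the class and method names are mine, not from any library):

```java
import java.util.Arrays;

public class GrowArray {
    // Append a value to a full array by allocating a larger one
    // and copying the old contents over -- an O(n) operation.
    static int[] append(int[] src, int value) {
        int[] dst = new int[src.length + 1];
        System.arraycopy(src, 0, dst, 0, src.length);
        dst[src.length] = value;
        return dst;
    }

    public static void main(String[] args) {
        int[] a = {1, 2, 3};
        int[] b = append(a, 4);
        System.out.println(Arrays.toString(b)); // [1, 2, 3, 4]
    }
}
```

Real implementations like ArrayList grow by a multiplicative factor rather than one slot at a time, so that the copying cost amortizes out.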
To remove an element from the array you need to shift everything after it one slot to the left so the contents stay contiguous. If you want to remove foo[4], you assign foo[4] = foo[5], then foo[5] = foo[6], and so on down to the end of the array. That's if you do it by hand. You can also use that System.arraycopy mentioned above to do it for you, given the right set of arguments. Something like:
System.arraycopy(foo, 5, foo, 4, foo.length - 5);
That's the general idea: copying the array back onto itself. The javadoc confirms this is safe:
If the src and dest arguments refer to the same array object, then the copying is performed as if the components at positions srcPos through srcPos+length-1 were first copied to a temporary array with length components and then the contents of the temporary array were copied into positions destPos through destPos+length-1 of the destination array.
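Putting that together, a remove-at-index helper might look like this (a sketch; the class and method names are my own):

```java
import java.util.Arrays;

public class RemoveAt {
    // Remove the element at `index` by shifting everything after it
    // one slot to the left, then trimming off the stale last slot.
    static int[] removeAt(int[] src, int index) {
        // Copy src[index+1 .. end] onto src[index .. end-1], in place.
        System.arraycopy(src, index + 1, src, index, src.length - index - 1);
        // Return a copy without the now-duplicated last element.
        return Arrays.copyOf(src, src.length - 1);
    }

    public static void main(String[] args) {
        int[] foo = {10, 20, 30, 40, 50, 60};
        System.out.println(Arrays.toString(removeAt(foo, 4))); // [10, 20, 30, 40, 60]
    }
}
```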
As you see, arrays are a bit awkward to work with. You've got to do it all by hand. It's good to know, but a pain in practice. There are two List implementations in common use that make this easier for you. For the most part, Lists look like arrays.
List<Integer> foo = ...;
foo.get(4);
foo.remove(5);
You call methods on them: you fetch the element at index 4 via a method call instead of via direct access through the array. And see that foo.remove(5)? That's it. You're done. The element is removed and everything after it is shifted down for you. Much easier to remember.
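A complete version of that fragment might look like this (the values are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class ListDemo {
    public static void main(String[] args) {
        List<Integer> foo = new ArrayList<>(List.of(10, 20, 30, 40, 50, 60));
        System.out.println(foo.get(4)); // 50 -- element at index 4
        foo.remove(5);                  // removes the element at index 5
        System.out.println(foo);        // [10, 20, 30, 40, 50]
    }
}
```

One subtlety worth knowing: on a List&lt;Integer&gt;, foo.remove(5) calls the remove-by-index overload; to remove the value 5 you'd write foo.remove(Integer.valueOf(5)).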
The ArrayList is backed by an array. This means that foo.get(4) runs fast, but something like foo.add(foo.size(), 42) is likely to be slow when the backing array is full, because all of that moving of things around has to happen to allocate a new, larger array (yes, I know the backing array can be longer than the list's size, but let's assume it's the same size as the actual list).
Instead of having to remember how to do all the shuffling, you've got nice methods to do the work for you. That doesn't mean the work isn't done (it still happens behind the scenes of the interface), but it's something you don't have to worry about.
So, the ArrayList is for fast random access: lots of get(n) calls, with changes happening mostly at the end of the list.
For some reason, everyone's go-to list implementation is the ArrayList. I don't know why. Many times when I'm working with Lists, the LinkedList is the more applicable one: you keep adding things to the end, you rarely want the nth element, you just want to iterate over them.
This is what the LinkedList is good for. It does fast removal and appends, and its iterator is just as fast as the ArrayList's. It's just that if you want foo.get(400), it's going to be a bit slower than the ArrayList implementation.
The LinkedList is also a Deque, but people forget about that. That lets you use the LinkedList as a double-ended queue (nice for implementing a Stack or a Queue).
So, if you are just appending and iterating, the LinkedList is for you.
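The Deque usage looks like this (a minimal sketch; the variable names are mine):

```java
import java.util.Deque;
import java.util.LinkedList;

public class DequeDemo {
    public static void main(String[] args) {
        // Used as a stack: push/pop operate on the head.
        Deque<String> stack = new LinkedList<>();
        stack.push("a");
        stack.push("b");
        System.out.println(stack.pop()); // b

        // Used as a queue: add at the tail, remove from the head.
        Deque<String> queue = new LinkedList<>();
        queue.addLast("x");
        queue.addLast("y");
        System.out.println(queue.removeFirst()); // x
    }
}
```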
Note that the O(index) cost of get(index) is a bit of an awkward case. The implementation walks from whichever end is closer. So if the LinkedList is 100 elements long, get(10) starts from the front, while get(90) starts from the back. That means get(index) never takes more than length/2 steps, but that's just a constant factor that doesn't matter when looking at Big O.
Best Answer
Normally, the size of an array has no effect on the complexity of the allocation itself. AFAIK, an array is internally a pointer to the address where the array begins, plus hidden fields for the element size and element count, so allocating it would be O(1). At least this is the case in Object Pascal, though I am not sure about C# or other languages.
But finding a chunk of memory that fits your array has higher complexity, which depends on the size at some point but depends even more on how the active memory manager works; see "Time complexity of memory allocation".
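One related detail explains the zeros in the question's output: in Java (as in C#), a freshly allocated array is guaranteed to have all elements set to their default value, which for int is 0. Whether that zeroing costs an explicit O(n) pass depends on the runtime; operating systems often hand out pre-zeroed pages, so it may be cheaper than a naive loop. A quick Java check:

```java
import java.util.Arrays;

public class AllocDemo {
    public static void main(String[] args) {
        // A new int[] is guaranteed to be zero-initialized by the JVM,
        // just like the C# example in the question.
        int[] a = new int[10];
        System.out.println(Arrays.toString(a)); // [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
    }
}
```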