Algorithms and Data Structures – Why Deletion is Harder Than Insertion


Can you think of any specific reason why deletion is usually significantly harder to implement than insertion for many (most?) data structures?

Quick example: linked lists. Insertion is trivial, but deletion has a few special cases that make it significantly harder. Self-balancing binary search trees such as AVL and red-black trees are classic examples of painful delete implementations.
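To make the linked-list point concrete, here is a minimal sketch of a singly linked list (no dummy head node); the `Node`, `push_front`, and `delete_value` names are just illustrative. Insertion at the front is a single uniform step, while deletion has to special-case the empty list and the head before it can "patch around" an interior node:

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next


def push_front(head, value):
    # Insertion: one case, regardless of what the list looks like.
    return Node(value, head)


def delete_value(head, value):
    # Deletion: handle the empty list and the head separately, then walk
    # with a trailing pointer so we can "patch around" the removed node.
    if head is None:
        return None                  # case 1: nothing to delete
    if head.value == value:
        return head.next             # case 2: the node to remove is the head
    prev, cur = head, head.next
    while cur is not None:
        if cur.value == value:
            prev.next = cur.next     # case 3: bypass an interior/tail node
            return head
        prev, cur = cur, cur.next
    return head                      # case 4: value not present
```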

I would like to say it has to do with the way most people think: it is easier for us to define things constructively, which leads nicely to easy insertions.

Best Answer

It's more than just a state of mind; there are physical (i.e. digital) reasons why deletion is harder.

When you delete, you leave a hole where something used to be. The technical term for the resulting entropy is "fragmentation." In a linked list, this requires you to "patch around" the removed node and deallocate the memory it was using. In binary trees, it can unbalance the tree. In memory systems, it causes memory to go unused for a while if newly allocated blocks are larger than the blocks left behind by deletion.
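The tree case shows the same asymmetry. Below is a sketch for a plain (unbalanced) binary search tree, so it deliberately omits the rebalancing that AVL and red-black trees add on top; insertion always bottoms out at one empty spot, while deletion must consider the shape of the node it removes:

```python
class TreeNode:
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None


def insert(root, key):
    # Insertion: recurse until an empty slot is found -- one uniform case.
    if root is None:
        return TreeNode(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root


def delete(root, key):
    # Deletion: the work depends on how many children the target node has.
    if root is None:
        return None
    if key < root.key:
        root.left = delete(root.left, key)
    elif key > root.key:
        root.right = delete(root.right, key)
    else:
        if root.left is None:        # case 1: no left child (covers leaves)
            return root.right
        if root.right is None:       # case 2: no right child
            return root.left
        # case 3: two children -- copy the in-order successor's key here,
        # then delete that successor from the right subtree.
        succ = root.right
        while succ.left is not None:
            succ = succ.left
        root.key = succ.key
        root.right = delete(root.right, succ.key)
    return root
```

In a self-balancing tree, every one of those deletion cases can also trigger rotations or recoloring back up the path, which is where most of the implementation pain comes from.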

In short, insertion is easier because you get to choose where you are going to insert. Deletion is harder because you can't predict in advance which item is going to get deleted.
