If you look at your code for swapping, you have:

    // If current element is lower than pivot
    // then swap it with the element at store_index
    // and move the store_index to the right.
But ~50% of the time, the string you just swapped needs to be moved back again, which is why faster quicksorts (e.g. those using Hoare's partition scheme) work from both ends at the same time.
Next, if you check that a sub-range still contains more than one element before making each recursive call, you avoid wasting time calling a function only to exit it immediately. This happens about 10,000,000 times in your final test, which adds a noticeable amount of time.
Use:

    if (pivot_index - 1 > start)
        quick_sort(lines, start, pivot_index - 1);
    if (pivot_index + 1 < end)
        quick_sort(lines, pivot_index + 1, end);
You still want an outer function that does an initial if (start < end), but that check only needs to happen once, so the outer function can then call an "unchecked" version of your routine that omits the comparison.
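A minimal sketch of that split, assuming a partition() helper like the one in your code (the function names here are illustrative, not taken from your post):

    // Internal version: assumes start < end has already been verified.
    static void quick_sort_unchecked(char **lines, int start, int end)
    {
        int pivot_index = partition(lines, start, end);  // partition() assumed from your code

        if (pivot_index - 1 > start)
            quick_sort_unchecked(lines, start, pivot_index - 1);
        if (pivot_index + 1 < end)
            quick_sort_unchecked(lines, pivot_index + 1, end);
    }

    // Public entry point: the range check happens exactly once.
    void quick_sort(char **lines, int start, int end)
    {
        if (start < end)
            quick_sort_unchecked(lines, start, end);
    }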
Also, picking a random pivot tends to avoid the O(N^2) worst case, but it's probably not a big deal with your random data set.
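A quick sketch of that, assuming a Lomuto-style partition that takes lines[end] as its pivot (the helper name is made up):

    #include <stdlib.h>   // rand()

    // Swap a randomly chosen element into the pivot slot before partitioning,
    // so ordered or adversarial input doesn't trigger the O(N^2) behaviour.
    static void choose_random_pivot(char **lines, int start, int end)
    {
        int r = start + rand() % (end - start + 1);
        char *tmp = lines[r];
        lines[r] = lines[end];
        lines[end] = tmp;
    }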
Finally, the hidden problem is that quicksort ends up comparing strings in ever smaller buckets that are ever closer together.
(Edit: So, AAAAA, AAAAB, AAAAC, AAAAD, then AAAAA, AAAAB. strcmp needs to step through a lot of A's before reaching the useful parts of the strings.)
With merge sort, on the other hand, you look at the smallest buckets first, while they are still very random. Merge sort's final passes do compare a lot of strings that are close to each other, but it's less of an issue by then. One way to make quicksort faster for strings is to compare the first characters of the outer strings and, if they're the same, skip them when doing the inner comparisons; but you have to be careful that all the strings are long enough that you're not skipping past the null terminator.
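A rough illustration of that idea (both helper names are made up, and the bookkeeping of which prefix a whole bucket is guaranteed to share is left to the caller):

    #include <string.h>

    // Length of the common prefix of two strings; stops at the first mismatch
    // or terminator, so it never reads past the end of either string.
    static size_t common_prefix(const char *a, const char *b)
    {
        size_t i = 0;
        while (a[i] != '\0' && a[i] == b[i])
            ++i;
        return i;
    }

    // Compare two strings that are already known to share their first `skip`
    // characters. The caller must guarantee the shared prefix, otherwise
    // `skip` could jump past a string's null terminator.
    static int strcmp_skip(const char *a, const char *b, size_t skip)
    {
        return strcmp(a + skip, b + skip);
    }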
Vector instead of Linked List
Whatever you do, don't use a doubly linked list for this: it has a lot of allocation and space overhead. The data is read and appended sequentially; there is no random access or random removal of items, so a vector-like data structure would be much better suited to storing the data. But even that is overkill for the raw data, as you'll see below.
Data flows through Listeners, then disappears
To do linear pattern matching you don't have to store the data at all, and you'll only have to traverse it once. The idea is to have several matchers listening for their patterns as the data hums along. These store only the data needed to detect a pattern. Any items that cannot be part of a pattern anymore will be forgotten.
I'll describe one way of achieving that. I must warn you that the task you want to perform is not trivial to do efficiently. Judging from your proposal of using a linked list, it may take some time to wrap your head around the principles involved.
Continuously register Matchers listening to the data
Let's start by adding some entities to listen for your patterns in the data. Register a matcher factory for every combination you want to recognize. Typically each pattern would be in its own matcher class, parametrized by the resolution it is looking for.
Now start reading in the data, and feed each item to each matcher factory as you read it. Have that factory instantiate (or reuse) a matcher for every place that could be the starting point of a pattern. For example, a matcher with a 7-day resolution could be instantiated each time the first data point of a new week comes in.
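To make that concrete, here is a minimal C++ sketch; all of the names (Tick, Matcher, MatcherFactory, maybeStart, feed) are illustrative assumptions, not from your post:

    #include <memory>
    #include <vector>

    struct Tick { long timestamp; double value; };

    struct Matcher {
        virtual ~Matcher() = default;
        // Returns false once the matcher has rejected (or reported) its pattern
        // and can be discarded.
        virtual bool feed(const Tick& t) = 0;
    };

    struct MatcherFactory {
        virtual ~MatcherFactory() = default;
        // Called for every incoming tick; may create a new matcher if this tick
        // could be the start of a pattern (e.g. the first tick of a new week).
        virtual std::unique_ptr<Matcher> maybeStart(const Tick& t) = 0;
    };

    // Driver: one pass over the data, nothing stored beyond the live matchers.
    void process(std::vector<MatcherFactory*>& factories,
                 std::vector<std::unique_ptr<Matcher>>& active,
                 const Tick& t)
    {
        for (auto* f : factories)
            if (auto m = f->maybeStart(t))
                active.push_back(std::move(m));
        // Feed the tick to every live matcher and drop the ones that are done.
        for (std::size_t i = 0; i < active.size(); )
            if (!active[i]->feed(t)) active.erase(active.begin() + i);
            else ++i;
    }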
Matchers update internal state until they reject/accept a pattern
The ticker items are also fed to each active matcher. Each matcher should track its own internal matching state. For example, a matcher with a 7-day resolution may be accumulating ticker values to calculate the 7-day average. After each 7 days pass, it stores the average in the next position of an array and starts a new average accumulation for the subsequent incoming ticker items. This continues until it has seen enough weeks to either reject or confirm the presence of the pattern. To get some ideas on how to do this, look into 'Finite State Machines'.
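Continuing the hypothetical Matcher interface sketched above, such a matcher might look roughly like this (the four-week span is an arbitrary assumption):

    // Accumulates 7-day averages and decides after a fixed number of weeks.
    struct WeeklyAverageMatcher : Matcher {
        double sum = 0;
        int daysInWeek = 0;
        std::vector<double> weeklyAverages;   // one entry per completed week

        bool feed(const Tick& t) override {
            sum += t.value;
            if (++daysInWeek == 7) {                    // a week is complete
                weeklyAverages.push_back(sum / 7);
                sum = 0;
                daysInWeek = 0;
                if (weeklyAverages.size() == 4) {
                    // Enough data: accept or reject the pattern here, then
                    // tell the driver this matcher can be discarded.
                    return false;
                }
            }
            return true;                                // still collecting
        }
    };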
Efficiency gains by eliminating duplicate calculations
Of course, if there are multiple matchers that need the data at a 7-day resolution, it is not efficient to have each one calculate it on its own. You may build a hierarchy of matchers so intermediate results only have to be calculated once. Look into ring buffers for ideas on how to maintain rolling averages (or other aggregate functions).
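For illustration, a minimal ring-buffer rolling average might look like this (the 7-sample window and the struct name are assumptions):

    #include <array>
    #include <cstddef>

    struct RollingAverage {
        std::array<double, 7> window{};   // assumption: 7-sample window
        double sum = 0;
        std::size_t next = 0;
        std::size_t count = 0;

        // Push a new sample, dropping the oldest once the window is full,
        // and return the current rolling average.
        double push(double v) {
            if (count == window.size())
                sum -= window[next];      // subtract the sample about to be overwritten
            else
                ++count;
            sum += v;
            window[next] = v;
            next = (next + 1) % window.size();
            return sum / count;
        }
    };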
Related: Parser Generators
So-called 'Parser Generators' do a similar thing automatically for formal grammars. The generated parsers employ a finite state machine to detect hundreds of patterns with about the same effort it would take to recognize just one, and in just one pass of the source data. I imagine such tools may also exist for continuous time-series data, or you could transform their ideas to apply them to your problem.
Hope this helps!
Best Answer
OK, so I asked this question on the Theoretical Computer Science Stack Exchange site.
Louis has given what I think is the most suitable answer to this question:
That is, when I am inserting a new node at an arbitrary location in the linked list, I am assuming I have random access to that location. Otherwise, I would have to shift all the subsequent nodes ahead in memory before adding the new one.