Vector instead of Linked List
Whatever you do, don't use a doubly linked list for this: it has a lot of allocation and space overhead. The data is read and appended sequentially, with no random access or random removal of items, so a vector-like data structure would be much more suitable for storing the data. But even that is overkill for the raw data, as you'll see below.
Data flows through Listeners, then disappears
To do linear pattern matching, you don't need to store the data at all, and you only have to traverse it once. The idea is to have several matchers listening for their patterns as the data hums along. These store only the data needed to detect a pattern. Any items that cannot be part of a pattern anymore will be forgotten.
I'll describe one way of achieving that. I must warn you that the task you want to perform is not trivial to do efficiently. Judging from your proposal of using a linked list, it may take some time to wrap your head around the principles involved.
Continuously register Matchers listening to the data
Let's start by adding some entities to listen for your patterns in the data. Register a matcher factory for every combination you want to recognize. Typically each pattern would be in its own matcher class, parametrized by the resolution it is looking for.
Now start reading in the data, and feed each item to each matcher factory as you read it. Then have that factory instantiate/reuse a matcher for every place that could be the starting point of a pattern. For example, a matcher with a 7-day resolution could be instantiated each time the first data point for the new week comes in.
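A minimal Python sketch of this factory idea (all class and method names here are illustrative, not from any library):

```python
from datetime import date, timedelta

class SevenDayMatcher:
    """Placeholder matcher: remembers its starting week and the values seen."""
    def __init__(self, week_start):
        self.week_start = week_start
        self.values = []

    def feed(self, day, value):
        self.values.append(value)

class SevenDayMatcherFactory:
    """Spawns a new matcher each time a data point opens a new week."""
    def __init__(self):
        self.matchers = []
        self._current_week = None

    def feed(self, day, value):
        week_start = day - timedelta(days=day.weekday())
        if week_start != self._current_week:   # first data point of a new week
            self._current_week = week_start
            self.matchers.append(SevenDayMatcher(week_start))
        for m in self.matchers:                # every active matcher sees the item
            m.feed(day, value)

# Reading the stream: feed each ticker item to the registered factory.
factory = SevenDayMatcherFactory()
for i in range(14):                            # two weeks of daily data
    factory.feed(date(2024, 1, 1) + timedelta(days=i), float(i))
print(len(factory.matchers))                   # one matcher per week started
```

In a real system you would register one such factory per pattern/resolution combination and loop over all of them for each incoming item.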
Matchers update internal state until they reject/accept a pattern
The ticker items are also fed to each active matcher. Each matcher should track its own internal matching state. For example, a matcher with a 7-day resolution may be accumulating ticker values to calculate the 7-day average. After each 7 days pass, it stores the average in the next position of an array and starts a new average accumulation for subsequent incoming ticker items. This continues until it has seen enough weeks to either reject or confirm the presence of the pattern. To get some ideas on how to do this, look into 'Finite State Machines'.
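As a hedged sketch, here is a toy Python matcher whose internal state machine accumulates 7-day averages until it can accept or reject a pattern. The "rising weekly averages" pattern itself is invented purely for illustration:

```python
class RisingWeeklyAverageMatcher:
    """Accepts if each of `weeks` consecutive 7-day averages is higher than
    the previous one; rejects as soon as one is not.
    A tiny finite state machine: RUNNING -> ACCEPTED / REJECTED."""

    def __init__(self, weeks=4):
        self.weeks_needed = weeks
        self.averages = []        # one entry per completed week
        self.window = []          # ticker values for the current week
        self.state = "RUNNING"

    def feed(self, value):
        if self.state != "RUNNING":
            return self.state
        self.window.append(value)
        if len(self.window) == 7:                  # a full week has passed
            avg = sum(self.window) / 7
            self.window = []
            if self.averages and avg <= self.averages[-1]:
                self.state = "REJECTED"            # matcher can now be forgotten
            else:
                self.averages.append(avg)
                if len(self.averages) == self.weeks_needed:
                    self.state = "ACCEPTED"
        return self.state

m = RisingWeeklyAverageMatcher(weeks=2)
for v in [1] * 7 + [2] * 7:       # two weeks, second average higher
    result = m.feed(v)
print(result)                      # ACCEPTED
```

Note that the matcher keeps only the current window and the per-week averages, not the whole stream, which is the point of the listener approach.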
Efficiency gains by eliminating duplicate calculations
Of course, if there are multiple matchers that need the data at a 7-day resolution, it is not efficient to have each one calculate it on its own. You may build a hierarchy of matchers so intermediate patterns only have to be calculated once. Look into ring buffers for ideas on how to maintain rolling averages (or other aggregate functions).
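A rolling average over a fixed window can be maintained with a ring buffer in O(1) per item instead of re-summing the window every time. A small Python sketch, using `collections.deque` as the buffer:

```python
from collections import deque

class RollingAverage:
    """Ring buffer of the last `n` values; a running total is updated
    incrementally so each push costs O(1)."""
    def __init__(self, n):
        self.buf = deque(maxlen=n)
        self.total = 0.0

    def push(self, value):
        if len(self.buf) == self.buf.maxlen:
            self.total -= self.buf[0]   # oldest value falls off the buffer
        self.buf.append(value)
        self.total += value
        return self.total / len(self.buf)

avg = RollingAverage(7)
for v in [3.0] * 6:
    avg.push(v)
result = avg.push(10.0)
print(result)   # average of six 3s and one 10 -> 4.0
```

The same structure works for other windowed aggregates (min/max need a slightly fancier monotonic deque, but sums, averages, and counts are all O(1)).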
Related: Parser Generators
So-called 'Parser Generators' do a similar thing automatically for formal grammars. The generated parsers employ a finite state machine to detect hundreds of patterns with about the same effort it would take to recognize just one, and in just one pass over the source data. I imagine such tools may also exist for continuous time-series data, or you could adapt their ideas to your problem.
Hope this helps!
In this video I watched recently, Rich Hickey comments that he likes the destructuring part of languages like Scala, but not so much the pattern matching part, and he designed Clojure accordingly. That probably explains why pattern matching lives in a library and is not as robust, although the kinds of problems seen in the post you mentioned are clearly bugs.
What Rich Hickey mentions as an alternative to pattern matching is multimethods. Most languages let you do polymorphic dispatch based on type. Some languages let you also do it based on a value. Using multimethods, Clojure lets you do it based on any arbitrary function. That's a pretty powerful concept.
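To make the idea concrete, here is a minimal Python sketch of dispatch on an arbitrary function, in the spirit of Clojure's defmulti/defmethod. The `MultiMethod` class and its API are invented for illustration:

```python
class MultiMethod:
    """Dispatch on the result of an arbitrary function, not on a type."""
    def __init__(self, dispatch_fn):
        self.dispatch_fn = dispatch_fn
        self.methods = {}

    def method(self, dispatch_value):
        def register(fn):
            self.methods[dispatch_value] = fn   # attach fn to a dispatch value
            return fn
        return register

    def __call__(self, *args):
        # Evaluate the dispatch function, then call the matching implementation.
        return self.methods[self.dispatch_fn(*args)](*args)

# Dispatch on a computed property of the argument, not on its type.
area = MultiMethod(lambda shape: shape["kind"])

@area.method("circle")
def _(shape):
    return 3.14159 * shape["radius"] ** 2

@area.method("square")
def _(shape):
    return shape["side"] ** 2

print(area({"kind": "square", "side": 3}))   # 9
```

Because the dispatch function is arbitrary code, you could just as well dispatch on a value range, a combination of arguments, or anything else computable, which is what makes the concept more general than type-based polymorphism.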
It comes down to the principle that programmers using a language should use the language's own best idioms. Trying to write Scala-like code in Clojure is going to have its difficulties, and vice versa.
You are correct that OOP class hierarchies are very closely related to discriminated unions in F#, and that pattern matching is very closely related to dynamic type tests. In fact, this is how F# compiles discriminated unions to .NET!
Regarding extensibility, there are two sides of the problem: with discriminated unions, adding a new function over the existing cases is easy, but adding a new case means revisiting every function that matches on the type; with class hierarchies it is the other way around (this trade-off is known as the expression problem).
That said, F# will give you a warning when you miss a case in pattern matching, so adding new union cases is actually not that bad.
Regarding finding duplications in route choosing, F# will give you a warning when one of your match cases duplicates an earlier one and so can never be reached.
The fact that "route choice is immutable" might also be problematic. For example, if you wanted to share the implementation of a function between the Foo and Bar cases, but do something else for the Zoo case, you can encode that easily using pattern matching.

In general, FP is more focused on first designing the types and then adding functions. So it really benefits from the fact that you can fit your types (domain model) in a couple of lines in a single file and then easily add the functions that operate on the domain model.
The two approaches, OO and FP, are quite complementary, and both have advantages and disadvantages. The tricky thing (coming from the OO perspective) is that F# usually uses the FP style as the default. But if there really is more need for adding new sub-classes, you can always use interfaces. In most systems you equally often need to add types and functions, so the choice really does not matter that much, and using discriminated unions in F# is nicer.
I'd recommend this great blog series for more information.