Disclaimer: This answer does not completely answer the question, but it is too long for a comment.
NP-hard? I believe this problem could be NP-hard.
Consider a special case of the Knapsack problem (the subset-sum problem):
Given a set of positive integers and a positive integer b, does there exist a subset of the set such that the sum of all integers in that subset equals b?
This sounds somewhat similar to our Countdown problem, and it seems to be much simpler. However, Knapsack (and this special case of Knapsack) is NP-hard (and NP-complete, of course).
I did not manage to turn this into a proof that Countdown is NP-hard; I could not get rid of the division. Suppose we have a thousand 2s and b = 7. This is never solvable with Knapsack (every subset sum is even), but it always (?) is with Countdown, at least in all the ways I tried to transfer the problem.
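To make the obstacle concrete, here is a small sketch (the thousand 2s are truncated to ten, which does not change the parity argument): no subset of 2s sums to 7, but a Countdown expression with division reaches it.

```python
from itertools import combinations

# Every subset sum of a multiset of 2s is even, so b = 7 is unreachable.
twos = [2] * 10
b = 7
subset_sum_possible = any(sum(c) == b
                          for r in range(1, len(twos) + 1)
                          for c in combinations(twos, r))
print(subset_sum_possible)  # False

# Countdown, however, reaches 7 once division is allowed:
countdown_value = 2 * 2 * 2 - 2 // 2  # uses five of the 2s
print(countdown_value)  # 7
```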
Now, if Countdown really were NP-hard, we could deduce that, with very high probability, there is no algorithm significantly more efficient than brute-force trying all possibilities. (And if we did find such an algorithm, we would become very famous.)
No, I don't think there must be an efficient algorithm.
Heuristics. The YouTube video linked in the question has a nice example: the contestant found an exact answer 952 = ((100 + 6) * 3 * 75 - 50) / 25. That is completely against my intuition; I would never have tried it that way in the first place: produce a very large number, then divide it to reach the result.
On the other hand, we humans feel that we do not need to try (arbitrary example) 50 * 75 * 100 / 2 / 3 / 7 to reach a three-digit number. But computers do not feel anything; they simply calculate.
After all, if we implement some heuristic and it does not find an exact solution, we still have to try all other possibilities to make sure there really is none.
What the contestant in the YouTube video does, I think, is very quickly check a large number of possibilities and quickly discard those that will not (or likely will not) lead to a solution.
Conclusion. When implementing an algorithm, one could take care to prune equivalent calculations such as a / b / c = a / (b * c), but I think this is quite hard to do, and I do not know whether it improves the runtime significantly.
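As a small illustration (a sketch of the idea, not the full pruning logic), exact rational arithmetic can confirm that two expression trees denote the same value, which is one way a search could detect and skip duplicates:

```python
from fractions import Fraction

# a / b / c and a / (b * c) are the same value, so a search could
# canonicalize one form to the other and skip the duplicate branch.
a, b, c = Fraction(100), Fraction(4), Fraction(5)
print(a / b / c == a / (b * c))  # True
```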
Computers, of course, are faster than humans at checking a large number of possibilities. Nowadays even smartphones are fast enough, I think, to solve this problem within a second by simply trying all possibilities. (I did not test this.) There are only six numbers; it would be different if there were, e.g., 60 of them.
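For illustration, a minimal brute-force solver along these lines might look as follows. This is a sketch under the usual Countdown rules (each number used at most once, intermediate results must be positive integers, division must be exact); `solve` is a hypothetical name, not from any library.

```python
from itertools import combinations

def solve(numbers, target):
    """Return True if target is reachable by repeatedly combining two of
    the numbers with +, -, * or exact division."""
    if target in numbers:
        return True
    if len(numbers) == 1:
        return False
    for i, j in combinations(range(len(numbers)), 2):
        a, b = numbers[i], numbers[j]
        rest = [numbers[k] for k in range(len(numbers)) if k not in (i, j)]
        candidates = {a + b, a * b, abs(a - b)}
        if b != 0 and a % b == 0:
            candidates.add(a // b)
        if a != 0 and b % a == 0:
            candidates.add(b // a)
        for c in candidates:
            # Keep only positive intermediate results, then recurse.
            if c > 0 and solve(rest + [c], target):
                return True
    return False

# The example from the video: ((100 + 6) * 3 * 75 - 50) / 25 = 952
print(solve([100, 6, 3, 75, 50, 25], 952))  # True
```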
There are two cases to consider: multiplication (including division) and addition (including subtraction). Because floating point numbers are stored in exponential form (i.e. as m * 2^e), these operations are performed on the mantissa (m) and exponent (e) as separate values, not on the numbers as a whole.
Multiplication (basically) involves multiplying the mantissae (which are always between 1 and 2) and adding the exponents (which are integers). It follows that unless the integer addition of the exponents overflows, the absolute magnitude of the numbers makes no difference to the precision of the result.
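This split can be observed directly in Python with `math.frexp` (note that `frexp` normalizes the mantissa into [0.5, 1) rather than [1, 2), but the principle is the same):

```python
import math

# frexp decomposes x into (m, e) with x == m * 2**e and 0.5 <= m < 1.
m1, e1 = math.frexp(6.0)    # (0.75, 3)
m2, e2 = math.frexp(20.0)   # (0.625, 5)

# Multiplying: multiply the mantissae, add the exponents.
product = math.ldexp(m1 * m2, e1 + e2)
print(product)  # 120.0, i.e. exactly 6.0 * 20.0
```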
For addition, however, things are different. To add floating point numbers, one of the mantissae is shifted by an appropriate number of bits to make the exponents equal, and then the mantissae are added. This shifting operation loses precision in proportion to the difference in exponents, so addition is more precise when the operands are close in magnitude. In practice this means that if you are adding many floating point numbers that vary wildly in magnitude, it is best to start with the smallest (closest to zero) and add the larger-magnitude numbers at the end. If you don't know the relative magnitudes in advance, put them in an array and sort it by magnitude before summing; if you do, use an accumulator variable and add the smaller numbers first.
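A small demonstration of this effect (the numbers are arbitrary, chosen so the loss is visible in double precision):

```python
# Adding 1.0 to 1e16 is a no-op in double precision: the exponent gap
# shifts the 1.0 entirely out of the mantissa.
values = [1e16] + [1.0] * 1000

naive = 0.0
for v in values:            # large number first: every 1.0 is lost
    naive += v

smallest_first = 0.0
for v in sorted(values, key=abs):
    smallest_first += v     # the 1.0s accumulate to 1000.0 before the 1e16

print(naive == 1e16)                    # True: all 1000 additions vanished
print(smallest_first == 1e16 + 1000.0)  # True: nothing was lost
```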
Best Answer
Define a recursive function that takes an array of values and will generate a list of all the trees we want.
When the function receives just one value, it should return a list containing, as its sole element, that value.
When the function receives two values, it should return a list containing, as its sole element, the op of those two values in order.
When the function receives more than two values, it should keep a list to return, and iterate through the indices between the members of the array. For each index between members of the array, split the array into a left array and a right array based on the index. Recurse on these arrays, generating the list of trees for each. Then take the cross product of the lists this generated and add it to the list to return.
The way this works is that for any array of more than two values, we can split it multiple ways. For each way to split the array, there is at least one possible tree on the left side, and at least one possible tree on the right side. Each combination of possible trees on the right and left sides is a possible tree.
Edit: Here's some pseudocode (roughly Java/Python):
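Since the code block itself did not survive here, the following is a reconstruction in Python of the recursion described above; `IdentityTree` and `OpTree` are placeholder classes (the operator is left abstract in this sketch).

```python
# A sketch of the described recursion, not the original pseudocode.
class IdentityTree:
    def __init__(self, value):
        self.value = value

class OpTree:
    def __init__(self, left, right):
        self.left, self.right = left, right

def all_trees(values):
    # One value: a list whose sole element is the identity tree of it.
    if len(values) == 1:
        return [IdentityTree(values[0])]
    tree_list = []
    # Split at every index between members, recurse on both halves,
    # and take the cross product of the two resulting tree lists.
    for index in range(1, len(values)):
        for left in all_trees(values[:index]):
            for right in all_trees(values[index:]):
                tree_list.append(OpTree(left, right))
    return tree_list

print(len(all_trees([1, 2, 3, 4])))  # 5 trees: the Catalan number C(3)
```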
Actually, I don't think we even need the n=2 base case. If n=2, we will only ever have index=1. Then we split into the left and right arrays, which will each have exactly one element. Each recursion returns a single-element list containing the identity tree of its element, so the only thing added to treeList will be the OpTree of the two IdentityTrees.