First, don't assume that an RDBMS isn't going to scale. It might not be the right solution, but saying it won't scale doesn't make sense unless you've considered how your data is going to come into the system, how it's going to be queried, and what you eventually want to see from those queries.
Recording raw page hits may or may not produce a large dataset. If you do it in the naive way, recording a row for every single hit, it may not scale, but that isn't necessarily the smartest way to record these things. You are likely going to be working from server logs, which you will then distill into an aggregated form.
Path tracking is likely to be the largest dataset here since you'll need the breadcrumbs from each individual user, but the querying part is important here. To do this in a sophisticated way, you'll likely be using some application logic, not a raw query.
Unless you have a very large number of users, a single RDBMS should be able to handle these queries. The general idea is to keep the aggregate data and the raw, fine-grained data in separate tables. The aggregates, with appropriate indexes, give you fast queries, and the fine-grained data can be used to build new metrics later.
Some databases and some BI solutions provide automated ways to do this; Oracle has aggregate persistence, for example. In my own work, though, I've found myself writing batch jobs to build the aggregates.
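As an illustration of the two-table idea, here is a minimal sketch of such a batch job in TypeScript with node-postgres, assuming a raw page_hits table and an hourly page_hits_hourly rollup (the table names, columns, and Postgres-flavoured SQL are all invented for the example):

```typescript
// build_aggregates.ts — sketch of a nightly rollup job against a hypothetical schema.
import { Pool } from "pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

async function buildHourlyAggregates(): Promise<void> {
  // Roll the fine-grained rows up into one row per page per hour.
  // The raw table is left untouched so new metrics can still be derived from it later.
  await pool.query(`
    INSERT INTO page_hits_hourly (page_id, hit_hour, hit_count, unique_visitors)
    SELECT page_id,
           date_trunc('hour', hit_at)  AS hit_hour,
           count(*)                    AS hit_count,
           count(DISTINCT visitor_id)  AS unique_visitors
    FROM page_hits
    WHERE hit_at >= now() - interval '1 day'
    GROUP BY page_id, date_trunc('hour', hit_at)
    ON CONFLICT (page_id, hit_hour) DO UPDATE
      SET hit_count = EXCLUDED.hit_count,
          unique_visitors = EXCLUDED.unique_visitors
  `);
}

buildHourlyAggregates()
  .catch((err) => console.error(err))
  .finally(() => pool.end());
```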
Longer term, you'll want to learn about modeling your data dimensionally rather than relationally. Dimensional models and star schemas are more extensible than a relational model replicated from a production system, and they give you a better way to manage the granularity of the data and the cached aggregates.
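To make the star-schema idea concrete, here is a rough sketch of the shapes involved, written as TypeScript interfaces (the names are invented; in a real warehouse these would be database tables):

```typescript
// A page-view star schema: the fact row holds foreign keys into small
// dimension tables plus additive numeric measures.

interface DimDate {
  dateKey: number;      // e.g. 20240131
  calendarDate: string;
  dayOfWeek: string;
  month: string;
}

interface DimPage {
  pageKey: number;
  urlPath: string;
  section: string;
}

interface DimVisitor {
  visitorKey: number;
  firstSeen: string;
  referrerDomain: string;
}

// One row per page view (or per page/visitor/hour if you coarsen the grain).
interface FactPageView {
  dateKey: number;      // -> DimDate
  pageKey: number;      // -> DimPage
  visitorKey: number;   // -> DimVisitor
  viewCount: number;    // additive measures aggregate cleanly along any dimension
  dwellSeconds: number;
}
```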
If you have really large datasets, then you'll need to start thinking about distributed processing, map/reduce, etc. But you'll save yourself a lot of time if you can manage to use a traditional database efficiently. Performing complex analytics (i.e., more than simple aggregates such as SUM or AVG) requires a lot more thinking and expertise in a map/reduce framework than in SQL.
Is there a best practice for handling this large amount of data?
Not sure about "best practice", but I think would be a good idea to clearly define what you need to do with the data. What will the user actually need to see at any one time? Instinctively, 24 hours of data points every 2 seconds doesn't seem that possible to handle, cognitively. Maybe you'll have a per-5-minute average, that you'll need to graph? In that case, I would create an API that requests these averages so the browser doesn't need to deal with all the data. You could cache these on the server if needs-be, for other requests. If the user will see one-screen of fine-grained temperatures at any one time, then create an API that requests one-screen's worth of data at a time (similar to your paging API).
Fetching one screen's worth of information at a time is also a good strategy if you have lots of Angular bindings: if you end up with thousands of them, interface performance will drop.
I am just looking for some suggestions as to what might provide a good balance between good performance and not caching the entire data set on the client side.
Would it be useful to load those first 20 minutes into the live view and then cache something like the last 24 hours into local storage?
It's always a bit hard to decide how to optimize before you know exactly where the bottlenecks are. Keeping it simple for the first design is often a good idea. Don't build your own caching layer, at least at first: use $http's built-in cache, and maybe an aggressive caching strategy using HTTP headers so the browser can cache the Ajax requests as well.
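For example, in AngularJS 1.x the built-in cache is just an option on the request. The endpoint and parameter names below are assumptions carried over from the averaging idea above:

```typescript
declare const angular: any; // provided globally by the AngularJS 1.x script tag

angular.module("tempApp").factory("TemperatureApi", ["$http", function ($http: any) {
  return {
    // Repeated requests for the same URL (including params) are answered from
    // $http's internal $cacheFactory cache instead of hitting the server again.
    fiveMinuteAverages: (from: string, to: string) =>
      $http
        .get("/api/temperatures", { params: { from, to }, cache: true })
        .then((response: any) => response.data),
  };
}]);
```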
Best Answer
I would use autonomous servers. Each server hosts the upload frontend, the encoder and the download service. This way you don't have to transfer files around. To scale, simply add more servers; it sounds like you don't have any obstacles to doing that.
Research whether you can stream the process: start encoding the file while it is still uploading, and let the result be downloaded while it is still being encoded. This won't reduce the cost of any of the operations, but the end user will perceive a significant benefit.
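A rough sketch of that idea in Node/TypeScript, piping the incoming upload straight through ffmpeg so encoding starts before the upload finishes (the route, the ffmpeg flags, and writing the result straight back to the client are assumptions for illustration):

```typescript
import { createServer } from "http";
import { spawn } from "child_process";

createServer((req, res) => {
  if (req.method === "POST" && req.url === "/upload") {
    // Fragmented-MP4 flags let ffmpeg write MP4 to a non-seekable pipe.
    const ffmpeg = spawn("ffmpeg", [
      "-i", "pipe:0",                            // read the upload as it arrives
      "-f", "mp4",
      "-movflags", "frag_keyframe+empty_moov",
      "pipe:1",                                  // write encoded output to stdout
    ]);

    req.pipe(ffmpeg.stdin!);                     // upload -> encoder
    res.writeHead(200, { "Content-Type": "video/mp4" });
    ffmpeg.stdout!.pipe(res);                    // encoder -> client while still encoding
    ffmpeg.on("error", () => res.destroy());
  } else {
    res.statusCode = 404;
    res.end();
  }
}).listen(8080);
```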
Offer alternatives to HTTP upload if your files are large: with a plain HTTP upload, if the transfer stops for whatever reason, the user has to restart it from scratch.