No, instruction sets aren't "standardized" in a way that would let you produce assembly that's fit for – or simply mappable to – ARM, x86, PPC, MIPS, Itanium, Sparc, ... (and their variants).
Native code compilers are pretty complex beasts, and not all the work they do is processor-specific. All the lexing/parsing is language-dependent but not chip-related. Some optimization passes are also hardware-independent, though possibly not all – e.g. the right code-size vs. raw-speed tradeoffs may depend on the target.
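To make that concrete, here's a toy sketch of a hardware-independent pass – constant folding – which needs no knowledge of the target at all (real compilers run this over a full IR; the tuple representation here is invented for illustration):

```python
# Toy constant-folding pass: a hardware-independent optimization.
# Expressions are nested tuples ("+"/"*", left, right), int literals,
# or variable names (str). This representation is invented for the example.

OPS = {"+": lambda a, b: a + b, "*": lambda a, b: a * b}

def fold(expr):
    """Recursively fold constant subexpressions; no target info needed."""
    if not isinstance(expr, tuple):
        return expr  # literal or variable: nothing to do
    op, lhs, rhs = expr
    lhs, rhs = fold(lhs), fold(rhs)
    if isinstance(lhs, int) and isinstance(rhs, int):
        return OPS[op](lhs, rhs)  # both sides constant: evaluate now
    return (op, lhs, rhs)

# (2 * 3) + x folds to 6 + x regardless of the target CPU.
print(fold(("+", ("*", 2, 3), "x")))  # ('+', 6, 'x')
```

The pass transforms the program purely at the language/IR level, which is why it can be shared across every backend.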
At some point, if you're producing native code, you'll need to know the details of the chip you're targeting. You need to be aware of its "quirks" (memory-coherency properties, for instance) and its complete instruction set to produce an instruction stream that is both correct and reasonably efficient.
Even if you restrict yourself to one instruction set (say x86_64), different brands of chips have different extensions that need to be considered. Different models of the same brand also have instruction set differences (new features added, sometimes old features removed). Sticking with the "lowest common denominator" could work, but you'll be missing out on a lot of stuff.
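One common way out of the lowest-common-denominator trap is runtime dispatch: ship several code paths and select the best one the running CPU supports. A minimal sketch of the idea – the feature names and implementations below are made up, not real CPUID flags:

```python
# Runtime dispatch sketch: pick the fastest implementation the CPU supports.
# Feature names ("simd128") and implementations are illustrative only.

def sum_baseline(xs):
    """Works on any chip: the 'lowest common denominator' path."""
    total = 0
    for x in xs:
        total += x
    return total

def sum_vectorized(xs):
    """Stand-in for a SIMD-accelerated version requiring an extension."""
    return sum(xs)

# Candidates ordered best-first: (required feature, implementation).
CANDIDATES = [("simd128", sum_vectorized), ("baseline", sum_baseline)]

def select_impl(cpu_features):
    """Return the first implementation whose required feature is present."""
    for feature, impl in CANDIDATES:
        if feature in cpu_features:
            return impl
    return sum_baseline  # safe fallback

impl = select_impl({"baseline", "simd128"})
print(impl([1, 2, 3]))  # 6, via the vectorized path
```

Real toolchains offer variants of this (e.g. function multi-versioning), so you can use the extensions where they exist without dropping older chips.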
Does that mean you need a complete rewrite of the compiler for every new instruction set or extension that hits the market? Of course not. Those are incremental changes, sometimes only to the "machine description files" (or whatever the compiler uses to model the target instruction set).
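The "machine description" idea can be sketched as a per-target table mapping generic IR operations to that target's mnemonics; supporting a new ISA then mostly means supplying a new table. This is grossly simplified, and the mnemonics are only illustrative:

```python
# Grossly simplified "machine description": one table per target, mapping
# generic IR operations to that target's mnemonic. Mnemonics illustrative.
TARGETS = {
    "x86_64": {"add": "addq", "load": "movq", "ret": "retq"},
    "arm64":  {"add": "add",  "load": "ldr",  "ret": "ret"},
}

def lower(ir_ops, target):
    """Map a list of generic IR ops to the target's mnemonics."""
    table = TARGETS[target]
    return [table[op] for op in ir_ops]

program = ["load", "add", "ret"]
print(lower(program, "x86_64"))  # ['movq', 'addq', 'retq']
print(lower(program, "arm64"))   # ['ldr', 'add', 'ret']
```

Real descriptions (LLVM's TableGen files, GCC's machine descriptions) also encode registers, costs, and scheduling, but the shape – data, not a rewritten compiler – is the same.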
But introducing a new ISA altogether is not a trivial task and requires detailed knowledge of the target.
If you're setting out to build a compiler yourself, do have a look at LLVM. Chances are you can use it for the "emitting native code" part at least, whatever language it is you're trying to compile.
> The LLVM Project is a collection of modular and reusable compiler and toolchain technologies.
I have some experience with the sort of scheduling problem you present, and with the technical stack you're using, so I can speak with authority and say this is a database problem and belongs in SQL Server, not Excel!
Assuming that these are regular visits that are already scheduled and you only need to allocate your staff to the various tasks (and staff have to return to their home office between visits), I recommend the following process:
- Migrate your staff list to a normalized table structure, with separate `Employee` and `Availability` tables.
- Do the same for your service users, with `Customer` and `Visit` tables. (How you populate the visits is a different question.)
- Create a distinct `Assignment` table that reflects the assignments you give your staff. This table will include both actual shifts and necessary non-visit assignments (such as "drive across town"). Your program will generate suggestions, but you need to allow a human to revise the list.
- Generate a "complexity" score for each `Visit` by adding all the necessary requirements together. ("Must be seen by two certified women for 3 hours" might be seven points, for example.)
- Starting with the highest complexity score, make the assignments for your time period for all visits at that score. Do this one row at a time, with tie-breakers based on visit length and start time. Select staff who meet the needs based on availability, and then on some sensible sort value (seniority, or "times done this shift").
- Add a travel-time entry for the assigned staff between the customer's location and the employee's home office.
- Repeat from the assignment step above until all visits are satisfied.
- Return two lists to Excel: one of `Customer Visits` that indicates the assigned employee(s), and one of `Employee Assignments` that notes the times each employee will be assigned. In both lists, highlight any unassigned items.
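The scoring-and-assignment loop above can be roughed out like this. All field names and the scoring weights are hypothetical; a real version would run in SQL or .NET against your actual tables:

```python
# Greedy assignment sketch for the process above. Field names and the
# complexity weighting are hypothetical stand-ins for your real schema.

def complexity(visit):
    """Score a visit by summing its requirements (toy weighting)."""
    return visit["staff_needed"] * 2 + visit["hours"] + len(visit["skills"])

def assign(visits, employees):
    """Process visits hardest-first; pick available, qualified staff."""
    assignments = []
    # Hardest first; tie-break on longer visits, then earlier start time.
    for v in sorted(visits,
                    key=lambda v: (-complexity(v), -v["hours"], v["start"])):
        eligible = [e for e in employees
                    if v["start"] in e["available"]
                    and v["skills"] <= e["skills"]]
        eligible.sort(key=lambda e: e["times_done"])  # sensible sort value
        chosen = eligible[:v["staff_needed"]]
        for e in chosen:
            e["available"].discard(v["start"])        # no double-booking
            e["times_done"] += 1
        assignments.append((v["id"], [e["name"] for e in chosen]))
    return assignments

visits = [
    {"id": "V1", "staff_needed": 2, "hours": 3,
     "skills": {"certified"}, "start": 9},
    {"id": "V2", "staff_needed": 1, "hours": 1, "skills": set(), "start": 9},
]
employees = [
    {"name": "Ann", "skills": {"certified"}, "available": {9, 13}, "times_done": 0},
    {"name": "Bea", "skills": {"certified"}, "available": {9}, "times_done": 2},
    {"name": "Cal", "skills": set(), "available": {9}, "times_done": 0},
]
print(assign(visits, employees))  # [('V1', ['Ann', 'Bea']), ('V2', ['Cal'])]
```

A visit with no eligible staff simply gets an empty list here – that's the "unassigned" case you'd highlight for a human to resolve.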
There are professional Workforce Management systems that will do much of this for you, but they cost several tens of thousands of dollars and still assume you'll have an employee doing much of the work manually. If your team has been doing this in Excel by hand, any automation will be an aid, and you can improve it as you go.
(And take a look at .NET or Access to replace Excel in some instances; round-tripping data from Excel back to SQL Server is significantly harder than just making a form over a SQL Server table in Access.)
In the usual terminology, processes don't arrive. Processes roughly correspond to the programs running on a machine. What can arrive (and cause some processing to be done) are requests and/or events. I will assume you meant those when you used "process" in the question.
Assuming that the processing of a request is CPU-bound (meaning that a single request keeps the CPU fully occupied for 8 seconds, with no spare time to do anything else), your calculation is a reasonable estimate of the average processor load.
If the processing of a single request isn't CPU-bound, then the 8-second response time is not the correct figure for calculating CPU load: you need to subtract the time the process spends waiting on external systems, such as the disk drive or a database.
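As a concrete illustration, the load estimate is arrival rate times CPU seconds per request, where the CPU time excludes any waiting on external systems. The numbers below are made up:

```python
# CPU load estimate: utilization = arrival rate * CPU seconds per request.
# Time spent waiting on disk/DB consumes no CPU, so it is subtracted first.
# All numbers are illustrative.

def cpu_load(arrivals_per_sec, response_time_sec, wait_time_sec=0.0):
    """Average CPU utilization for a steady stream of requests."""
    cpu_time = response_time_sec - wait_time_sec
    return arrivals_per_sec * cpu_time

# Fully CPU-bound: 0.1 requests/s, 8 s of CPU each -> 80% average load.
print(cpu_load(0.1, 8.0))       # 0.8
# If 5 of those 8 s are spent waiting on a database -> ~30% load.
print(cpu_load(0.1, 8.0, 5.0))  # ~0.3 (modulo float rounding)
```

A result above 1.0 would mean the CPU cannot keep up and a queue of pending requests will grow without bound.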