I would strongly advise against #1, because just ignoring errors is a dangerous anti-pattern that leads to hard-to-analyze bugs. Setting the result of a division by zero to 0 makes no sense whatsoever, and continuing program execution with a nonsensical value is going to cause trouble, especially when the program is running unattended. When the interpreter notices that there is an error in the program (and a division by zero is almost always a design error), aborting and keeping everything as-is is usually preferable to filling your database with garbage.
Also, you are unlikely to succeed in following this pattern through consistently. Sooner or later you will run into error situations which simply can't be ignored (like running out of memory or a stack overflow), and you will have to implement a way to terminate the program anyway.
Option #2 (using NaN) would be a bit of work, but not as much as you might think. How to handle NaN in different calculations is well-documented in the IEEE 754 standard, so you can likely just do what the language your interpreter is written in does.
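For example, here is a minimal C sketch (assuming the host language follows IEEE 754 semantics, which practically all do) showing how NaN propagates through arithmetic on its own, so your interpreter mostly just has to pass the values along:

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double x = nan("");          /* a quiet NaN; 0.0/0.0 also produces one at runtime */

    printf("%f\n", x + 1.0);     /* nan: arithmetic on NaN yields NaN */
    printf("%f\n", sqrt(x));     /* nan: so do most math functions */
    printf("%d\n", x == x);      /* 0: NaN compares unequal to everything, even itself */
    printf("%d\n", isnan(x));    /* 1: the reliable way to test for NaN */
    return 0;
}
```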
By the way: Creating a programming language usable by non-programmers is something we've been trying to do since 1964 (Dartmouth BASIC). So far, we've been unsuccessful. But good luck anyway.
You need to keep in mind that in FPU arithmetic, 0 doesn't necessarily mean exactly zero; it can also mean a value too small to be represented in the given datatype, e.g.
a = -1 / 1e50
The true value of a (-1e-50) is too small to be represented by a float (32 bit), whose smallest positive value is about 1.4e-45, so it is "rounded" to -0.
Now, let's say our computation continues:
b = 1 / a
Because a was rounded to -0, this results in -infinity, which is quite far from the correct answer of -1e50.
Now let's compute b if there were no -0 (so a is rounded to +0):
b = 1 / +0
b = +infinity
The result is wrong again because of rounding, but now it is "more wrong": not only numerically, but more importantly because of the sign (the computed result is +infinity, while the correct result is -1e50).
You could still say that it doesn't really matter, as both results are wrong. The important thing is that there are many numerical applications where the most important part of the result is its sign, e.g. when deciding whether to turn left or right at a crossroad using some machine learning algorithm: you can interpret a positive value as "turn left" and a negative value as "turn right", with the actual magnitude of the value being just a "confidence coefficient".
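To make this concrete, here is a small C sketch (the constants are mine, chosen so the intermediate result actually underflows a 32-bit float) showing the sign surviving the underflow:

```c
#include <stdio.h>

int main(void) {
    volatile float big = 1e30f;   /* volatile keeps the compiler from folding this away */
    float a = -1.0f / big / big;  /* true value -1e-60 underflows to -0.0f */
    float b = 1.0f / a;           /* IEEE 754: 1 / -0 == -infinity */

    printf("a = %f\n", a);        /* -0.000000: the sign survived the underflow */
    printf("b = %f\n", b);        /* -inf: still the sign of the true result */
    return 0;
}
```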
The CPU has built-in detection. Most instruction set architectures specify that the CPU will trap to an exception handler for integer divide by zero (I don't think it matters whether the dividend is zero).
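On a POSIX system on x86 you can observe that trap from user space: the kernel turns the CPU's #DE exception into a SIGFPE for the process. A minimal sketch (other architectures, e.g. ARM, may not trap at all):

```c
#include <signal.h>
#include <unistd.h>

static void on_sigfpe(int sig) {
    (void)sig;
    static const char msg[] = "caught SIGFPE: the CPU trapped the divide\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);  /* async-signal-safe, unlike printf */
    _exit(1);  /* returning would re-execute the faulting instruction */
}

int main(void) {
    signal(SIGFPE, on_sigfpe);

    volatile int zero = 0;   /* volatile so the compiler can't see the zero */
    int q = 1 / zero;        /* executes a real DIV instruction and traps */
    (void)q;
    return 0;                /* never reached on x86 */
}
```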
It is possible that the check for a zero divisor happens in hardware in parallel with the attempt to do the division; however, detecting the offending condition effectively cancels the division and traps instead, so we can't really tell whether some part of the hardware attempted the division or not.
(Hardware often works like that: doing multiple things in parallel and then choosing the appropriate result afterwards, because each of the operations can get started right away instead of serializing on the choice of operation.)
The same trap-to-exception mechanism is also used when overflow detection is turned on, which you usually request by using different add/sub/mul instructions (or a flag on those instructions).
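That hardware overflow detection is also visible from C: GCC and Clang provide builtins that compile down to the flag-checking instruction sequence, though these report the condition rather than trap (a sketch, assuming one of those compilers):

```c
#include <stdio.h>
#include <limits.h>

int main(void) {
    int sum;

    /* __builtin_add_overflow performs the addition and reports
       whether the CPU's overflow condition occurred. */
    if (__builtin_add_overflow(INT_MAX, 1, &sum))
        printf("overflow detected\n");
    else
        printf("sum = %d\n", sum);
    return 0;
}
```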
Floating point division also has built-in detection for divide by zero, but instead of trapping to an exception handler it returns a special value: IEEE 754 specifies a signed infinity for a nonzero dividend divided by zero, and NaN for 0/0.
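Under IEEE 754 defaults this is just ordinary, non-trapping arithmetic, as a quick C sketch shows:

```c
#include <stdio.h>

int main(void) {
    volatile double zero = 0.0;   /* volatile prevents constant folding */

    printf("%f\n",  1.0 / zero);  /* inf:  nonzero / 0 gives a signed infinity */
    printf("%f\n", -1.0 / zero);  /* -inf */
    printf("%f\n",  0.0 / zero);  /* nan:  0 / 0 is the invalid-operation case */
    return 0;
}
```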
Hypothetically speaking, if the CPU omitted any detection for an attempt to divide by zero, the problems could include: