Hardware verification languages (HVLs) like Vera, e, and later SystemVerilog and SystemC were created to make the verification process more efficient. As per Moore's law, design size, and thus design complexity, roughly doubles every 18 months. In the early days, when even microprocessors were actually drawn as schematics, nothing like the verification we do now was performed before producing the silicon. Of course, now we use computers to verify our designs, but how did they verify the first computer?
In those days, when a full design was split into blocks, different people would use their own brains to design, verify, and also simplify each block. If you had told somebody that someday there would be a tool in which you just type in some text and it would synthesize and simplify the circuit automatically, they would have laughed at you. I read this in an actual book. However, things changed as designs became more complex and design speed increased. Automating a task removes human intervention and thus eliminates the potential for human-induced error, and the more complex a design is, the more likely such errors become. By this time, doing all design by hand was inefficient, so people moved from hand drawing (yes, actual physical drawing) to schematic capture on a computer. Later came HDLs, which provided a much more efficient way to design digital circuits and exchange designs; over time the synthesis tools became quite robust as well, so the need for schematic design was, well, no more.

Now, we can verify small blocks like a full adder or a multiplexer ourselves. But what about a complex design? At this point people use HDLs for verification too. We create a "testbench" that applies a predetermined stimulus to a model of our design, and the generated output is compared with the expected output provided by the designer. This is possible because digital circuits follow Boolean logic, so their output can be predicted. It is possible to write a computer program which does this, and this is exactly what happens in simulation.
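To make this concrete, here is a minimal sketch of such a directed testbench in SystemVerilog. The full_adder module and its port names are stand-ins I've made up for illustration; the pattern (apply stimulus, predict the output, compare) is the general one:

    // Hypothetical DUT: a 1-bit full adder
    module full_adder(input logic a, b, cin, output logic sum, cout);
      assign {cout, sum} = a + b + cin;
    endmodule

    // Directed testbench: apply every input combination and compare
    // the DUT output against the predicted (expected) value.
    module tb_full_adder;
      logic a, b, cin;
      logic sum, cout;
      full_adder dut(.a(a), .b(b), .cin(cin), .sum(sum), .cout(cout));

      initial begin
        for (int i = 0; i < 8; i++) begin
          {a, b, cin} = i[2:0];   // predetermined stimulus
          #1;                     // let the combinational logic settle
          // Boolean logic makes the output predictable, so we can check it
          if ({cout, sum} !== a + b + cin)
            $error("mismatch for a=%b b=%b cin=%b", a, b, cin);
        end
        $finish;
      end
    endmodule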
Now to the question: why do we need HVLs? This is related to the software domain. As software became more and more complex, people moved from assembly to procedural languages like BASIC and C, among others. However, writing and maintaining huge programs was still difficult. This is when the object-oriented programming paradigm was developed. OOP is a revolutionary development that has made it possible for computer programs to abstract real-world problems at unprecedented levels. It makes writing a program more efficient, and also makes the program easier to maintain and expand.
Simulation is essentially a purely software-based activity. When we use an HDL to write a testbench, we have to spell out precisely every single signal wiggle that must take place and the time at which it must take place. However, if we raise the level of abstraction, say to writing a whole word (e.g. a byte on the data bus) at a time, writing a testbench becomes a lot quicker and less tedious. We can raise the level of abstraction even further: e.g. for an Ethernet design we can write a whole packet at once and check a whole packet at once, rather than single bits.
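For instance, a simple bus-write task raises the abstraction from individual wiggles to one byte per call. This is only a sketch; the clk/valid/data signal names are hypothetical:

    module tb;
      logic clk = 0, valid = 0;
      logic [7:0] data;
      always #5 clk = ~clk;     // free-running clock

      // One call drives a whole byte onto the bus; the caller no longer
      // spells out each individual signal wiggle and its exact time.
      task write_byte(input logic [7:0] b);
        @(posedge clk);
        valid <= 1'b1;
        data  <= b;
        @(posedge clk);
        valid <= 1'b0;
      endtask

      initial begin
        write_byte(8'hA5);   // the testbench now thinks in bytes;
        write_byte(8'h3C);   // a packet is just a loop of such calls
        $finish;
      end
    endmodule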
HVLs apply the OOP technique to the domain of hardware verification. They do so by making it possible to verify a design at a higher level of abstraction. At the same time, they contain features that are specially adapted for verification, rather than for writing synthesizable code.
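For example, an Ethernet packet can be modelled as a class, so the testbench manipulates whole packets as objects. A sketch with made-up names, not from any standard library:

    // A whole packet becomes one object; behaviour (compare) travels
    // with the data. Class and field names are illustrative only.
    class EthPacket;
      bit [47:0] dst_addr;
      bit [47:0] src_addr;
      bit [7:0]  payload[];   // variable-length payload

      // One call compares two entire packets, not bit by bit.
      function bit compare(EthPacket other);
        return (dst_addr == other.dst_addr) &&
               (src_addr == other.src_addr) &&
               (payload  == other.payload);
      endfunction
    endclass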
For example, SystemVerilog provides two important features: concurrent assertions and constrained-random testing.
Just as an immediate assertion checks that an expression is true at a given time, a concurrent assertion checks that the sequence in which signals toggle is correct.
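A sketch of what this looks like in SystemVerilog, with hypothetical req/ack handshake signals:

    // Concurrent assertion: every time req is high, ack must follow
    // within 1 to 3 clock cycles. Signal names are hypothetical.
    module ack_checker(input logic clk, req, ack);
      property req_then_ack;
        @(posedge clk) req |-> ##[1:3] ack;
      endproperty

      assert property (req_then_ack)
        else $error("ack did not follow req within 3 cycles");
    endmodule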
Rather than having to write every single piece of stimulus that must be applied to a design under verification, constrained-random testing applies, over time, all possible stimulus that fits the constraints given by the verification engineer. This saves a lot of time in writing stimulus.
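A minimal sketch, with an invented BusTransaction class: the engineer writes only the constraints, and the solver generates legal values on each randomize() call:

    // Constrained-random stimulus: describe what legal stimulus looks
    // like instead of enumerating every test vector by hand.
    class BusTransaction;
      rand bit [31:0] addr;
      rand bit [7:0]  data;

      constraint legal_addr { addr inside {[32'h1000:32'h1FFF]}; }
      constraint aligned    { addr[1:0] == 2'b00; }  // word-aligned
    endclass

    module tb;
      initial begin
        BusTransaction tr = new();
        repeat (100) begin
          if (!tr.randomize()) $fatal(1, "randomize failed");
          // drive tr.addr / tr.data onto the DUT here
        end
      end
    endmodule

Over many runs with different seeds, this reaches input combinations a human would never think to write by hand.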
The answer I got (thanks to user3467290):
There is a very important difference between these forms.
In the first form, the register value will be generated using all defined constraints, plus the constraint you wrote in this action (field1 == 1). The newly generated value will be written to the DUT.
In the second form, what you state is that you want to modify only one field of the register, field1. What will happen is that vr_ad will take the current value of the register from the e model (the shadow model), change field1, and write the new value to the register in the DUT. None of the register's other fields will be changed. Also, there is no check that the value you assign to field1 complies with the constraints defined on this register.
Best Answer
I was an ASIC Design Verification Engineer at Qualcomm. In the simplest way I can explain it:
Testing: Making sure a product works, after you've created the product (think QA).
Verification: Making sure a product works BEFORE you've created it.
They're both forms of testing; verification is just more complicated, because you have to figure out a way to test the product before it exists, and you have to be able to make sure it works as designed and to spec when it actually comes out.
For example, say Intel is designing their next processor: they have the specs, the schematics, and the simulations. They spend $1 billion to go through fabrication and manufacturing. Then the chip comes back, they test it, and they find out that it doesn't work. They just threw a lot of money out the window.
Throw verification in. Verification engineers create models that simulate the behaviour of the chip, and they create the testbench that will exercise those models. They get the results from the models and then compare them with the results from the RTL (a model of the circuit written in a hardware description language). If they match, things are (usually) OK.
There are a number of different methodologies for the verification process, a popular one being the Universal Verification Methodology (UVM).
There is a lot of depth in the field and people can spend their entire career in it.
Another random tidbit of information: usually you need three verification engineers for every design engineer. That's what everyone in the field says, anyway.
EDIT: A lot of people think of verification as a testing role, but it's not; it's a design role in itself, because you have to understand all the intricacies of your IC like a designer does, and then you have to know how to design the models, the testbenches, and all the test cases that will cover all the feature functionality of your IC, as well as trying to hit every single line of RTL code for all possible bit combinations. Remember that a processor nowadays has billions of transistors because the fabrication process allows smaller and smaller geometries (now 14 nm).
Also, in large corporations like Intel, AMD, Qualcomm, etc., designers don't actually design the chip. Usually the architect will define all the specs and lay out the types of pieces that need to go together to get a particular function with a specific requirement (i.e. speed, resolution, etc.), and then the designer will code that into RTL. It's by no means an easy job, it's just not as much designing as a lot of engineers coming out of school think it is. What everyone wants to be is an architect, but it takes a lot of education and experience to get to that point. A lot of architects have PhDs and 15-20 years of experience in the field as a designer. These are brilliant people (and sometimes crazy) who deserve to be doing what they're doing, and they're good at it. The architect on the very first chip I worked on was a bit awkward and didn't really follow some social norms, but he could solve anything you were stuck on regarding the chip. Sometimes he would solve it in his head and tell you to look at one signal, and you'd be like, "how the hell did he do that?" Then you'd ask him to explain, and the explanation would go way over your head. He actually inspired me to read textbooks even though I'd already graduated.