What stops manufacturers from making hardware-optimised game engines

integrated-circuit software

Just like MP3 and H.264 have been made into circuits, what is stopping graphics card manufacturers from building a game engine (the program that most often runs on a graphics card) into the graphics card? From my perspective it would:

  * Probably let cheaper hardware run demanding 3D software.
  * Increase performance per watt.
  * Cut costs in game production: developers would not have to worry as much about building a game engine or about performance optimisations.
  * Let manufacturers keep selling new products even if the industry hits a performance plateau, since each new card would carry the next game engine, and games A and B would be built only for this new engine.

Sure, developing a universal game engine and then transferring it into a circuit must be hard, but:

  * It could start with simpler game engines that other (software) engines could build upon.
  * Considering the demand for low-price, high-performance, low-wattage graphics cards (especially in the mobile market), isn't the expense of developing such a chip justified?

Best Answer

Modern GPUs can be programmed for general-purpose computation via GPGPU languages/toolkits like OpenCL, CUDA, OpenACC, etc.

Developers can use these tools to write any game engine they want, with extra hardware support where it exists. For example, NVIDIA offers its PhysX toolkit for accelerating physics computations on the GPU.
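To make that concrete, here is a minimal OpenCL sketch of the kind of general-purpose computation these toolkits enable: a trivial kernel compiled at runtime for whatever GPU happens to be present. It assumes an OpenCL runtime is installed, and all error handling is omitted for brevity.

```cpp
#include <CL/cl.h>
#include <cstdio>

// Trivial "general-purpose" kernel: element-wise vector addition.
static const char* kSource = R"(
__kernel void vec_add(__global const float* a,
                      __global const float* b,
                      __global float* out) {
    size_t i = get_global_id(0);
    out[i] = a[i] + b[i];
})";

int main() {
    const size_t n = 1024, bytes = n * sizeof(float);
    float a[1024], b[1024], out[1024];
    for (size_t i = 0; i < n; ++i) { a[i] = float(i); b[i] = 2.0f * i; }

    // Grab the first GPU on the first platform (no error checking here).
    cl_platform_id platform;   clGetPlatformIDs(1, &platform, nullptr);
    cl_device_id   device;     clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, nullptr);
    cl_context       ctx = clCreateContext(nullptr, 1, &device, nullptr, nullptr, nullptr);
    cl_command_queue q   = clCreateCommandQueue(ctx, device, 0, nullptr);

    // Compile the kernel at runtime -- this is what lets the *same* GPU
    // run arbitrary computations instead of one fixed function.
    cl_program prog = clCreateProgramWithSource(ctx, 1, &kSource, nullptr, nullptr);
    clBuildProgram(prog, 1, &device, nullptr, nullptr, nullptr);
    cl_kernel kernel = clCreateKernel(prog, "vec_add", nullptr);

    cl_mem bufA = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, a, nullptr);
    cl_mem bufB = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, bytes, b, nullptr);
    cl_mem bufO = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, bytes, nullptr, nullptr);

    clSetKernelArg(kernel, 0, sizeof(bufA), &bufA);
    clSetKernelArg(kernel, 1, sizeof(bufB), &bufB);
    clSetKernelArg(kernel, 2, sizeof(bufO), &bufO);

    clEnqueueNDRangeKernel(q, kernel, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
    clEnqueueReadBuffer(q, bufO, CL_TRUE, 0, bytes, out, 0, nullptr, nullptr);

    printf("out[3] = %f\n", out[3]); // expect 9.0
    return 0;
}
```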

However, effectively using even these pre-built tools can be fairly tricky, and writing your own is at least an order of magnitude harder.

Requiring anything specialized for a particular game (or group of games) will mean:

  1. The GPU must have a separate small block for each special function, meaning die space increases dramatically. This increases costs, and may even make producing the chip infeasible.
  2. A game will only run on a limited set of hardware, because it is specialized. Should a user be expected to own 10 different GPUs to play 10 different games?
  3. Fixing bugs will be virtually impossible, since the behaviour is etched into silicon rather than living in patchable software.

The GPU community has already gone through something similar in the past:

OpenGL 1/2 and old versions of DirectX used to include special function calls that would draw a polygon, manipulate a camera, and do really basic lighting/shadows. Some hardware even had special acceleration for these functions. Why did they get deprecated? Because they dramatically limited what you could do; if you wanted anything outside the limited feature set, you were either out of luck or stuck with horrible, badly performing code.
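For reference, this is roughly what that fixed-function style looked like in OpenGL 1.x: camera, lighting, and polygon submission are all built-in calls that you can configure but never replace. A sketch only; window/context setup is omitted.

```cpp
#include <GL/gl.h>
#include <GL/glu.h>

// OpenGL 1.x fixed-function drawing: every effect here is a built-in
// switch on the driver/hardware -- you can tweak parameters, but not
// change *how* lighting or projection is computed.
void drawScene() {
    // Built-in camera: the projection math is fixed inside the pipeline.
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 4.0 / 3.0, 0.1, 100.0);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0, 0, 5,   0, 0, 0,   0, 1, 0);

    // Built-in lighting: you get the fixed Gouraud/flat models, nothing else.
    glEnable(GL_LIGHTING);
    glEnable(GL_LIGHT0);
    const GLfloat lightPos[] = { 1.0f, 1.0f, 1.0f, 0.0f };
    glLightfv(GL_LIGHT0, GL_POSITION, lightPos);

    // Built-in polygon submission, one vertex per call.
    glBegin(GL_TRIANGLES);
    glNormal3f(0, 0, 1);
    glVertex3f(-1, -1, 0);
    glVertex3f( 1, -1, 0);
    glVertex3f( 0,  1, 0);
    glEnd();
}
```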

Thus the industry moved towards shader-based programming, which made the developer responsible for writing how a polygon gets shaded, how shadows work, and even how camera projections are handled. You no longer had to worry about "does the GPU have a Phong shader, or an Oren-Nayar shader?". You could write your own (or copy/paste an existing one) to run on the GPU's general-purpose hardware and know that it works* across multiple platforms.
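Here is a sketch of that shader-based replacement: the shading model is now just a program you supply as a string (a simple Lambert-style diffuse term here, but it could be Phong, Oren-Nayar, or anything else), compiled at runtime for whatever GPU is present. Names like `uLightDir` and `buildProgram` are illustrative, and compile/link error checking is omitted.

```cpp
#include <GL/glew.h> // any loader exposing the GL 2.0+ shader API works

// The shading model is now *your* code: swap the fragment shader string
// for any other model and it runs on any conformant GPU.
static const char* kVertSrc = R"(
#version 120
varying vec3 vNormal;
void main() {
    vNormal = gl_NormalMatrix * gl_Normal;                  // you control the transforms
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
})";

static const char* kFragSrc = R"(
#version 120
varying vec3 vNormal;
uniform vec3 uLightDir;
void main() {
    // A simple Lambert diffuse term -- replace with any model you like.
    float diffuse = max(dot(normalize(vNormal), -normalize(uLightDir)), 0.0);
    gl_FragColor = vec4(vec3(diffuse), 1.0);
})";

static GLuint compile(GLenum type, const char* src) {
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, nullptr);
    glCompileShader(s); // compiled for the *current* GPU at runtime
    return s;
}

GLuint buildProgram() {
    GLuint program = glCreateProgram();
    glAttachShader(program, compile(GL_VERTEX_SHADER, kVertSrc));
    glAttachShader(program, compile(GL_FRAGMENT_SHADER, kFragSrc));
    glLinkProgram(program);
    return program; // activate with glUseProgram(program)
}
```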

TL;DR: specialized hardware is too complex and expensive to do in general; hence why it is called "specialized".

Addendum:

Professional video cards DO have specialized hardware in them for running professional programs such as CAD packages. However, if you look at the list of these programs, there's only a handful, and they all do roughly the same thing. So how much do these "professional" cards cost? An order of magnitude more. True, some of that cost comes from other factors (the reliability professional users pay for, smaller sales volumes, and GPU manufacturers knowing they can get away with a higher price), but I wouldn't expect the pricing of hardware specialized for a few games to be very different.