A "dependent texture read" is when return values from one texture lookup (or other in-shader computations) are used to determine WHERE to look up from a second texture. An important implication is that the texture coordinates (where you look up from) is not determined until the middle of execution of the shader... there's no kind of static analysis you can do on the shader (even knowing the values of all parameters) that will tell you what the coordinates will be ahead of time. It also strictly orders the two texture reads and limits how much the execution order could be changed by optimizations in the driver, etc.
On older graphics cards, there used to be quite a few limitations on this kind of thing. For example, at one point (IIRC) you could look up from multiple textures, but only with a small number of distinct texture coordinates. The hardware was implemented in such a way that certain types of dependent texture reads were either impossible or very inefficient.
In the latest generation or two of cards, you shouldn't have to worry about this. But you may be reading books or articles from a couple years ago when you really did have to pay close attention to such things.
In OpenGL, a material is a set of coefficients that define how the lighting model interacts with a surface. In particular, ambient, diffuse, and specular coefficients are defined for each color component (R, G, B); they are applied to a surface and effectively multiplied by the amount of light of each kind/color that strikes the surface. A final emissive coefficient is then added to each color component, which allows objects to appear luminous without actually casting light on other objects.
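As a sketch, setting such a material up in fixed-pipeline C code might look like the following (the coefficient values here are arbitrary brass-like numbers chosen for illustration):

```c
#include <GL/gl.h>

/* Hedged sketch: the RGBA coefficient values are invented examples. */
void set_brass_like_material(void) {
    const GLfloat ambient[]  = { 0.33f, 0.22f, 0.03f, 1.0f };
    const GLfloat diffuse[]  = { 0.78f, 0.57f, 0.11f, 1.0f };
    const GLfloat specular[] = { 0.99f, 0.94f, 0.81f, 1.0f };
    const GLfloat emission[] = { 0.0f,  0.0f,  0.0f,  1.0f }; /* non-luminous */

    glMaterialfv(GL_FRONT, GL_AMBIENT,  ambient);
    glMaterialfv(GL_FRONT, GL_DIFFUSE,  diffuse);
    glMaterialfv(GL_FRONT, GL_SPECULAR, specular);
    glMaterialfv(GL_FRONT, GL_EMISSION, emission);
    glMaterialf (GL_FRONT, GL_SHININESS, 27.9f);
}
```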
A texture, on the other hand, is 1-, 2-, 3-, or 4-dimensional bitmap (image) data that is applied and interpolated onto a surface according to texture coordinates specified at the vertices. Texture data alters the color of the surface whether or not lighting is enabled, in a way that depends on the texture mode (e.g. decal, modulate, etc.). Textures are frequently used to provide sub-polygon-level detail on a surface, e.g. applying a repeating brick-and-mortar texture to a quad to simulate a brick wall, rather than modeling the geometry of each individual brick.
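A rough fixed-pipeline example of that brick-wall case, assuming a texture object `brick_tex` has already been created and loaded elsewhere:

```c
#include <GL/gl.h>

/* Sketch: draw a quad with a 4x4 repeat of an already-loaded texture,
   so one small image provides sub-polygon detail. */
void draw_brick_wall(GLuint brick_tex) {
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, brick_tex);
    /* GL_MODULATE multiplies texel color with the lit surface color;
       GL_DECAL would replace the surface color instead. */
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);

    glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, 0.0f);
    glTexCoord2f(4.0f, 0.0f); glVertex3f( 1.0f, -1.0f, 0.0f);
    glTexCoord2f(4.0f, 4.0f); glVertex3f( 1.0f,  1.0f, 0.0f);
    glTexCoord2f(0.0f, 4.0f); glVertex3f(-1.0f,  1.0f, 0.0f);
    glEnd();
}
```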
In the classical (fixed-pipeline) OpenGL model, textures and materials are somewhat orthogonal. In the programmable-shader world, the line has blurred quite a bit. Textures are now frequently used to influence lighting in other ways. For example, bump maps are textures used to perturb surface normals to affect lighting, rather than modifying pixel color directly as a regular "image" texture would.
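A rough GLSL sketch of that idea, again with invented uniform/varying names, where the texture supplies a (tangent-space) normal rather than a color:

```c
/* Hedged sketch: a normal map perturbing lighting instead of color.
   Assumes u_lightDir is already transformed into tangent space. */
const char *bump_frag_src =
    "uniform sampler2D u_normalMap;\n"
    "uniform vec3 u_lightDir;\n"
    "varying vec2 v_uv;\n"
    "void main() {\n"
    "    /* The texel holds a normal, not a color: unpack [0,1] -> [-1,1]. */\n"
    "    vec3 n = normalize(texture2D(u_normalMap, v_uv).rgb * 2.0 - 1.0);\n"
    "    float diffuse = max(dot(n, normalize(u_lightDir)), 0.0);\n"
    "    gl_FragColor = vec4(vec3(diffuse), 1.0);\n"
    "}\n";
```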
A parametric surface is defined by equations that generate vertex coordinates as a function of one or more free variables.
In the one-dimensional case it is customary to define parametric curves (e.g. Bézier, Lissajous, or any of several other types) using a free variable t, often defined on the interval [0,1], which can be thought of as a sort of fractional arc length. An equation is specified that generates each coordinate value as a function of t. As a result, the curve can be rendered to arbitrary precision by evaluating as many vertex points as desired along the defined interval of t values.
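For instance, a small C program along these lines evaluates a cubic Bézier curve by sweeping t across [0,1] (the control points here are arbitrary example values):

```c
#include <stdio.h>

typedef struct { float x, y; } Point;

/* Evaluate a cubic Bezier at parameter t using the Bernstein form. */
Point bezier3(Point p0, Point p1, Point p2, Point p3, float t) {
    float u = 1.0f - t;
    Point p;
    p.x = u*u*u*p0.x + 3*u*u*t*p1.x + 3*u*t*t*p2.x + t*t*t*p3.x;
    p.y = u*u*u*p0.y + 3*u*u*t*p1.y + 3*u*t*t*p2.y + t*t*t*p3.y;
    return p;
}

int main(void) {
    Point p0 = {0, 0}, p1 = {0, 1}, p2 = {1, 1}, p3 = {1, 0};
    int n = 16; /* raise n for arbitrary precision along the curve */
    for (int i = 0; i <= n; i++) {
        Point p = bezier3(p0, p1, p2, p3, (float)i / n);
        printf("%f %f\n", p.x, p.y);
    }
    return 0;
}
```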
The alternative is a nonparametric curve, which is simply defined as a specific set of vertices, generally connected with straight lines. Curves defined nonparametrically don't hold up well to scaling and zooming, as eventually the limitations of the defining geometry become apparent.
Parametric surfaces are the higher-dimensional equivalents of parametric curves, where two or more free variables and corresponding functions define the vertices of a mesh.
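As a sketch, here is a C program that generates torus vertices from two free variables u and v, each swept over [0, 2π] (the radii R and r are arbitrary example values):

```c
#include <math.h>
#include <stdio.h>

/* Hedged sketch: mesh vertices from two free variables (u, v).
   R (ring radius) and r (tube radius) are invented example values. */
int main(void) {
    const int NU = 32, NV = 16;
    const double R = 2.0, r = 0.5;
    const double TWO_PI = 6.28318530717958647692;
    for (int i = 0; i < NU; i++) {
        double u = TWO_PI * i / NU;
        for (int j = 0; j < NV; j++) {
            double v = TWO_PI * j / NV;
            /* Each coordinate is a function of the free variables u and v. */
            double x = (R + r * cos(v)) * cos(u);
            double y = (R + r * cos(v)) * sin(u);
            double z = r * sin(v);
            printf("%f %f %f\n", x, y, z);
        }
    }
    return 0;
}
```

Increasing NU and NV refines the mesh, just as sampling more t values refines a parametric curve.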