You should not disable vertex attrib arrays after unbinding a VAO. In an OpenGL 3 core context, the moment you unbind a VAO there is no vertex array object for vertex array commands to apply to; you must always have a VAO bound, or these commands raise GL_INVALID_OPERATION.
Moreover, VAOs store vertex array state persistently. The idea is that instead of enabling and disabling states every time you draw something, you just bind a VAO that already has all of the necessary state set up.
Here is how you should be thinking about setting up vertex arrays using Vertex Array Objects. Since the VAO stores most of the state, you do not have to do things like disable vertex arrays or unbind VBOs to prevent state leaks. Just change the bound VAO whenever you want to draw a different vertex array.
Stage 1: GL Vertex Array / Buffer Object Initialization
When Mesh is Constructed:
- Generate VAO, VBO, IBO
- Bind VAO, VBO, IBO
-> Upload Vertex Data to VBO
-> Upload Index Array to IBO
Foreach Attribute <n>
- Setup Attrib Pointer (n)
- Enable Attrib Array (n)
End Foreach
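Stage 1 might look like the following C sketch. It assumes a hypothetical interleaved `Vertex` struct with a `pos` member and that `vertices`/`indices` already sit in client memory; all names here are illustrative, and a current GL context is required.

```c
#include <stddef.h> /* offsetof */

/* Stage 1: create and fill the VAO/VBO/IBO for one mesh. */
GLuint vao, vbo, ibo;
glGenVertexArrays(1, &vao);
glGenBuffers(1, &vbo);
glGenBuffers(1, &ibo);

glBindVertexArray(vao);                      /* state below is recorded in this VAO   */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, vertex_bytes, vertices, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);  /* the IBO binding is part of VAO state  */
glBufferData(GL_ELEMENT_ARRAY_BUFFER, index_bytes, indices, GL_STATIC_DRAW);

/* foreach attribute n: pointer setup + enable, both captured by the VAO */
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex),
                      (void *)offsetof(Vertex, pos));
glEnableVertexAttribArray(0);
/* ... repeat for the remaining attributes ... */
```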
Stage 2: Drawing a Mesh Instance
When an Object (instance of Mesh) is Rendered:
- Bind Mesh's VAO
- Bind program/shader(id)
-> send uniforms
-> glDrawElements
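Stage 2 then shrinks to a few calls, since binding the VAO restores the attrib pointers, the enable flags, and the IBO binding in one go. A sketch, assuming a hypothetical `mesh` struct and `u_mvp` uniform location:

```c
/* Stage 2: draw one instance of the mesh. */
glBindVertexArray(mesh->vao);                 /* all vertex array state in one bind */
glUseProgram(program);
glUniformMatrix4fv(u_mvp, 1, GL_FALSE, mvp);  /* send uniforms */
glDrawElements(GL_TRIANGLES, mesh->index_count, GL_UNSIGNED_SHORT, (void *)0);
```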
Also, unbinding a VAO is really unnecessary if your software is set up correctly (i.e. everything that draws using vertex arrays has its own VAO to manage its state). Think of applying textures: you rarely unbind a texture after you draw something. You count on the next batch you draw knowing exactly what texture state it needs; if it needs a different texture (or none at all), then it should be the thing to change the texture state. Restoring texture state after every batch is a waste of resources, and so is restoring vertex array state.
On a side note, I was looking at your OpenGL trace and came across something you may not be aware of. You are using GL_UNSIGNED_BYTE indices, which are provided by the API but not necessarily supported by hardware. On a lot of hardware (e.g. any desktop GPU), GL_UNSIGNED_SHORT is the preferred index type (even for very small collections of vertices). It is tempting to assume that using GL_UNSIGNED_BYTE will save space and therefore increase throughput when you have fewer than 256 vertices, but it can actually throw you off the "fast path." If the hardware does not support 8-bit indices, then the driver inevitably has to convert your indices to 16-bit after you submit them. In such cases it increases driver workload and does not save any GPU memory, sadly.
swap buffers (vsync causes this to block until vertical monitor refresh)
No, it doesn't block. The buffer swap call may return immediately without blocking. What it does, however, is insert a synchronization point, so that execution of commands altering the back buffer is delayed until the buffer swap has happened. The OpenGL command queue is of limited length; thus, once the command queue is full, further OpenGL calls will block the program until more commands can be pushed into the queue.
Also, the buffer swap is not an OpenGL operation. It is a graphics/windowing system level operation and happens independently of the OpenGL context. Just look at the buffer swap functions: the only parameter they take is a handle to the drawable (= window). In fact, even if you have multiple OpenGL contexts operating on a single drawable, you swap the buffer only once, and you can do it without any OpenGL context being current on the drawable at all.
So the usual approach is:
' first do all the drawing operations
foreach w in windows:
    foreach ctx in w.contexts:
        ctx.make_current(w)
        do_opengl_stuff()
        glFlush()

' with all the drawing commands issued,
' loop over all the windows and issue
' the buffer swaps.
foreach w in windows:
    w.swap_buffers()
Since the buffer swap does not block, you can issue the buffer swaps for all the windows without getting delayed by V-Sync. However, the next OpenGL drawing command issued for a back buffer with a pending swap will likely stall.
A workaround is to do the actual drawing into an FBO, combined with a loop that blits each FBO to its window's back buffer before the swap-buffer loop:
' first do all the drawing operations
foreach w in windows:
    foreach ctx in w.contexts:
        ctx.make_current(w)
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, ctx.master_fbo)
        do_opengl_stuff()
        glFlush()

' blit the FBOs' renderbuffers to the main back buffer
foreach w in windows:
    foreach ctx in w.contexts:
        ctx.make_current(w)
        glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)
        blit_renderbuffer_to_backbuffer(ctx.master_renderbuffer)
        glFlush()

' with all the drawing commands issued,
' loop over all the windows and issue
' the buffer swaps.
foreach w in windows:
    w.swap_buffers()
Best Answer
Obviously you are mixing VBO mode and vertex array (VA) mode. This is perfectly possible, but it must be used with care.
When you call:

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

this means that the next time you render something with glDrawElements(..., ..., ..., x), x will be used as a pointer to the index data, and the last call to glVertexPointer points to the vertex data. If you don't unbind the current VBO and IBO (with the above two glBindBuffer calls), then when rendering with the same glDrawElements call, x will be used as an offset into the index data in the IBO, and the last call to glVertexPointer as an offset into the vertex data in the VBO.
Depending on the values of x and of the glVertexPointer call, you can make the driver crash, because the offsets go out of bounds and/or the underlying data is interpreted as the wrong type (e.g. NaN).
So, to answer your question: after drawing with VBO mode, unbind the VBO and IBO, then before drawing with VA mode call:

    glVertexPointer
    glDrawElements

again with real client-side pointers, and then it will be fine.
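Concretely, the switch back to client-side arrays could look like this sketch (the array names are illustrative; a current GL context is assumed):

```c
/* Done drawing with the VBO/IBO; unbind them so pointer parameters
 * are treated as real client-side addresses again. */
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

/* Re-specify the pointers: they now mean actual memory addresses,
 * not offsets into buffer objects. */
glVertexPointer(3, GL_FLOAT, 0, client_vertices);
glDrawElements(GL_TRIANGLES, index_count, GL_UNSIGNED_SHORT, client_indices);
```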