Recently, I moved an app I am building from a machine with an Intel CPU and an NVIDIA GPU to a machine with an AMD CPU and GPU.
The problem is that glDrawElements now crashes with a bad memory access and I can't figure out why.
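So far the only lead I have is the crash itself, so the first thing I plan to do is drain glGetError() around the draw call to see whether an earlier call already failed. A minimal sketch of what I intend to add (checkGLError is a hypothetical helper name, not something already in my code):

    #include <cstdio>

    // Hypothetical helper: drain and print all pending GL errors at a given point.
    // glGetError() clears one error flag per call, so loop until GL_NO_ERROR.
    static void checkGLError(const char *where)
    {
        for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
            std::printf("GL error 0x%04x at %s\n", err, where);
    }

    // Intended usage around the failing call:
    //   checkGLError("before glDrawElements");
    //   glDrawElements(GL_TRIANGLES, 3 * nFaces, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
    //   checkGLError("after glDrawElements");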
The render function is the following:
template<class PositionData>
void drawTexturedMesh(const PositionData &pd, const IndexedFaceMesh &mesh, const unsigned int offset, const float * const color, GLuint text)
{
    // Draw mesh
    const unsigned int *faces = mesh.getFaces().data();
    const unsigned int nFaces = mesh.numFaces();
    const Vector3r *vertexNormals = mesh.getVertexNormals().data();
    const Vector2r *uvs = mesh.getUVs().data();

    // Update the buffer data passed to the GPU
    glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(double) * 3 * pd.size(), &pd.getPosition(0)[0]);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(double) * 2 * mesh.getUVs().size(), &uvs[0][0]);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);
    glBufferSubData(GL_ARRAY_BUFFER, 0, sizeof(double) * 3 * mesh.getVertexNormals().size(), &vertexNormals[0][0]);
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    // Bind the element array and draw; each triangular face contributes 3 indices
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, 0, sizeof(unsigned int) * mesh.getFaces().size(), mesh.getFaces().data());
    glDrawElements(GL_TRIANGLES, 3 * nFaces, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
}
Math, vectors and matrices come from Eigen. Any ideas, or is there another part of the code you would like to see? Bear in mind that this exact code works fine on the NVIDIA machine.
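For completeness, this is the kind of allocation the glBufferSubData updates above rely on. It is a sketch reconstructed from what the render function expects, not a copy of my actual init code (the buffer names vbo/ibo exist in my code, but the sizes shown and the GL_DYNAMIC_DRAW hint are assumptions), and comparing my real init code against it is one of the things I still need to do:

    // Each glBufferData must reserve at least as many bytes as the
    // corresponding glBufferSubData later writes, with matching scalar types.
    glBindBuffer(GL_ARRAY_BUFFER, vbo[0]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(double) * 3 * pd.size(), nullptr, GL_DYNAMIC_DRAW);

    glBindBuffer(GL_ARRAY_BUFFER, vbo[1]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(double) * 2 * mesh.getUVs().size(), nullptr, GL_DYNAMIC_DRAW);

    glBindBuffer(GL_ARRAY_BUFFER, vbo[2]);
    glBufferData(GL_ARRAY_BUFFER, sizeof(double) * 3 * mesh.getVertexNormals().size(), nullptr, GL_DYNAMIC_DRAW);

    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
    glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(unsigned int) * mesh.getFaces().size(), nullptr, GL_DYNAMIC_DRAW);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

If the real allocation is smaller than this (for example sizeof(float) instead of sizeof(double), or a smaller element count), glBufferSubData would raise GL_INVALID_VALUE and leave part of the buffer with stale or out-of-range indices, which different drivers may tolerate very differently.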
Thanks.