Well, wouldn't it be possible to have BOTH - precision AND speed - as selectable driver configurations?
Here's the deal. The OpenGL spec gives the lowest level of precision that's acceptable for certain datatypes, but allows you to over-achieve a bit. For instance, a particular value may be allowed to be represented as a float (32-bit floating point), but that's just the low end; you're more than free to represent that value as a double (64-bit floating point). Obviously, the double has more precision, but it's also a bit slower to work with.

Also, there are many OpenGL functions that are all basically the same, with the only differences being the size of the datatypes accepted and whether the arguments come in an array or are spelled out explicitly. I'm not going to sit here & implement separate versions of all of these functions. Instead, I'm going to do the smart thing: the part of the GL context that a function modifies will be kept in only 1 precision, and all of the various functions that modify that particular variable will convert their data to the correct size & then call another function that does the actual modification.
Ex.:
GLAPI void GLAPIENTRY glVertex2d( GLdouble x, GLdouble y );
GLAPI void GLAPIENTRY glVertex2f( GLfloat x, GLfloat y );
GLAPI void GLAPIENTRY glVertex2i( GLint x, GLint y );
GLAPI void GLAPIENTRY glVertex2s( GLshort x, GLshort y );
GLAPI void GLAPIENTRY glVertex3d( GLdouble x, GLdouble y, GLdouble z );
GLAPI void GLAPIENTRY glVertex3f( GLfloat x, GLfloat y, GLfloat z );
GLAPI void GLAPIENTRY glVertex3i( GLint x, GLint y, GLint z );
GLAPI void GLAPIENTRY glVertex3s( GLshort x, GLshort y, GLshort z );
GLAPI void GLAPIENTRY glVertex4d( GLdouble x, GLdouble y, GLdouble z, GLdouble w );
GLAPI void GLAPIENTRY glVertex4f( GLfloat x, GLfloat y, GLfloat z, GLfloat w );
GLAPI void GLAPIENTRY glVertex4i( GLint x, GLint y, GLint z, GLint w );
GLAPI void GLAPIENTRY glVertex4s( GLshort x, GLshort y, GLshort z, GLshort w );
GLAPI void GLAPIENTRY glVertex2dv( const GLdouble *v );
GLAPI void GLAPIENTRY glVertex2fv( const GLfloat *v );
GLAPI void GLAPIENTRY glVertex2iv( const GLint *v );
GLAPI void GLAPIENTRY glVertex2sv( const GLshort *v );
GLAPI void GLAPIENTRY glVertex3dv( const GLdouble *v );
GLAPI void GLAPIENTRY glVertex3fv( const GLfloat *v );
GLAPI void GLAPIENTRY glVertex3iv( const GLint *v );
GLAPI void GLAPIENTRY glVertex3sv( const GLshort *v );
GLAPI void GLAPIENTRY glVertex4dv( const GLdouble *v );
GLAPI void GLAPIENTRY glVertex4fv( const GLfloat *v );
GLAPI void GLAPIENTRY glVertex4iv( const GLint *v );
GLAPI void GLAPIENTRY glVertex4sv( const GLshort *v );
All of these functions create a vertex (a 4-dimensional point). The differences are how many coordinates are specified, how the data is passed to the function, & the precision of the data. Obviously, I wouldn't want to represent the vertex internally as a short (16-bit integer); however, representing it as an int versus a float or double may give certain speed advantages on older CPUs & GPUs. On modern hardware, it may not be as big of an issue & there'd really be no reason not to go with a float or a double. Who knows, the future may bring code that decides which to use based on which CPU the host system has, but for now, I need to get something running & I thought it might be prudent to solicit outside opinions.

Also, I've already decided to go ahead & represent all vertices in terms of (x, y, z, w) internally, regardless of how many values are passed to the glVertex function. Non-specified values will probably be represented as 0 for the basic 3D coordinates & 1 for the w component.
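To make that a bit more concrete, here's a rough sketch of the kind of scheme I'm describing. This is illustrative only - internal_coord, gl_context, & set_current_vertex are made-up placeholder names, not real driver internals, & the GL typedefs normally come from GL/gl.h rather than being spelled out by hand:

#include <stdio.h>

/* Placeholder typedefs so the sketch stands alone; normally these come
 * from <GL/gl.h>. */
typedef float  GLfloat;
typedef double GLdouble;
typedef int    GLint;

/* Selectable precision: pick the internal representation once, at build
 * or config time, instead of writing every code path twice. */
#ifdef USE_DOUBLE_PRECISION
typedef GLdouble internal_coord;
#else
typedef GLfloat  internal_coord;
#endif

/* The context always stores the current vertex as a full (x, y, z, w)
 * tuple, whatever the caller actually passed in. */
struct gl_context {
    internal_coord current_vertex[4];
};

static struct gl_context ctx;

/* The one function that actually touches the context; every entry point
 * converts its arguments & lands here. */
static void set_current_vertex(internal_coord x, internal_coord y,
                               internal_coord z, internal_coord w)
{
    ctx.current_vertex[0] = x;
    ctx.current_vertex[1] = y;
    ctx.current_vertex[2] = z;
    ctx.current_vertex[3] = w;
}

/* A few of the public entry points: convert to internal_coord, fill in
 * the missing components (z = 0, w = 1), & forward. */
void glVertex2f(GLfloat x, GLfloat y)
{
    set_current_vertex((internal_coord)x, (internal_coord)y, 0, 1);
}

void glVertex3d(GLdouble x, GLdouble y, GLdouble z)
{
    set_current_vertex((internal_coord)x, (internal_coord)y,
                       (internal_coord)z, 1);
}

void glVertex2iv(const GLint *v)
{
    set_current_vertex((internal_coord)v[0], (internal_coord)v[1], 0, 1);
}

int main(void)
{
    glVertex2f(2.5f, 3.5f);   /* z defaults to 0, w defaults to 1 */
    printf("(%g, %g, %g, %g)\n",
           (double)ctx.current_vertex[0], (double)ctx.current_vertex[1],
           (double)ctx.current_vertex[2], (double)ctx.current_vertex[3]);
    return 0;
}

The nice part of something like this is that switching the whole driver between float & double precision is just a matter of flipping one definition; none of the two dozen glVertex entry points would have to change.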