Nowadays, geometry engines are called vertex shaders, since they are programmable and run so-called (vertex) shader programs to compute and animate the geometry of a scene. Every vertex that needs to be calculated can carry a lot of information, such as an x, y, z coordinate (a 3-dimensional position), texture coordinates, normal information (the direction the vertex faces), an identifier (which triangle it belongs to), skinning parameters, lighting values, or just about anything else.
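The per-vertex attributes listed above can be pictured as a simple record. The sketch below is only illustrative: real engines pack vertices into GPU-friendly buffers, and the exact field names and layout here are assumptions, not any particular API.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Vertex:
    # Hypothetical per-vertex record; field choice varies per application.
    position: Tuple[float, float, float]   # x, y, z coordinate
    uv: Tuple[float, float]                # texture coordinates
    normal: Tuple[float, float, float]     # direction the vertex faces
    triangle_id: int = 0                   # which triangle it belongs to
    skin_weights: Tuple[float, ...] = ()   # skinning parameters
    lighting: float = 1.0                  # precomputed lighting value

# One vertex of a scene, as a vertex shader might receive it.
v = Vertex(position=(1.0, 2.0, 3.0), uv=(0.5, 0.5), normal=(0.0, 1.0, 0.0))
```

In practice, thousands of such records stream through the vertex shader every frame, which is why keeping each one compact matters for memory bandwidth.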
However, vertex processing alone does not produce a visible picture. To see the triangles made up of all the vertices the vertex shaders have calculated, they need to be colored: the invisible object that results from geometry processing has to be "wallpapered" so it becomes visible. To do this, the polygons must be converted into pixels, which happens during triangle setup. The pixels are then processed in the pixel shaders and pixel pipelines.

The color value of a pixel is looked up in a texture. This texture exists in graphics memory as a bitmap designed by the 3D artist. Textures come in different resolutions: higher-resolution textures look better, but use more memory and more memory bandwidth than lower-resolution ones. For far-away objects, using a full-resolution texture would not only waste processing cycles but could also cause display anomalies. For this reason, textures are usually stored in several resolutions. Combining textures of different resolutions on one object is referred to as mip-mapping. Mip-mapping can produce visible borders between two textures of different resolutions, called mipmap banding. This banding can be minimized using filtering techniques. Filtering means that for every pixel to be colored, more than one texel in the texture is looked up, and the average is calculated and applied to the pixel.
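The averaging step described above can be sketched as bilinear filtering: for one pixel, the four nearest texels are looked up and blended by their distance to the sample point. This is a minimal sketch assuming a grayscale texture stored as a plain 2D list; the function name and clamping behavior are illustrative, not a specific GPU's API.

```python
def bilinear_sample(texture, u, v):
    """Sample a texture at normalized coordinates (u, v) in [0, 1]."""
    h = len(texture)
    w = len(texture[0])
    # Map normalized coordinates to texel space.
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(x), int(y)
    # Clamp the neighboring texel to the texture edge.
    x1 = min(x0 + 1, w - 1)
    y1 = min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Weighted average of the four surrounding texels.
    top = texture[y0][x0] * (1 - fx) + texture[y0][x1] * fx
    bottom = texture[y1][x0] * (1 - fx) + texture[y1][x1] * fx
    return top * (1 - fy) + bottom * fy

# A tiny 2x2 checkerboard texture.
tex = [[0.0, 1.0],
       [1.0, 0.0]]
print(bilinear_sample(tex, 0.5, 0.5))  # midpoint averages all four texels: 0.5
```

Trilinear filtering extends the same idea across two adjacent mipmap levels, blending their bilinear results to hide the banding between resolutions.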