This is part 9 of my series on OpenGL 4 with OpenTK.
For other posts in this series:
OpenGL 4 with OpenTK in C# Part 1: Initialize the GameWindow
OpenGL 4 with OpenTK in C# Part 2: Compiling shaders and linking them
OpenGL 4 with OpenTK in C# Part 3: Passing data to shaders
OpenGL 4 with OpenTK in C# Part 4: Refactoring and adding error handling
OpenGL 4 with OpenTK in C# Part 5: Buffers and Triangle
OpenGL 4 with OpenTK in C# Part 6: Rotations and Movement of objects
OpenGL 4 with OpenTK in C# Part 7: Vectors and Matrices
OpenGL 4 with OpenTK in C# Part 8: Drawing multiple objects
OpenGL 4 with OpenTK in C# Part 9: Texturing
OpenGL 4 with OpenTK in C# Part 10: Asteroid Invaders
Basic bullet movement patterns in Asteroid Invaders
OpenGL 4 with OpenTK in C# Part 11: Mipmap
OpenGL 4 with OpenTK in C# Part 12: Basic Moveable Camera
OpenGL 4 with OpenTK in C# Part 13: IcoSphere
OpenGL 4 with OpenTK in C# Part 14: Basic Text
OpenGL 4 with OpenTK in C# Part 15: Object picking by mouse
If you think that the progress is slow, then know that I am a slow learner :P
This part builds upon the game window and shaders from part 8.
Textures
So now that we have some objects flying around on the screen, it is time to give them textures. I've done some restructuring here so that we have a TexturedRenderObject to work with, building on the buffer and attribute-binding initialization from our RenderObject. As we are rolling a new object type, we can ditch the old vertex struct that had position and color, and instead store a position and a texture coordinate (the color will be taken from the texture).
public struct TexturedVertex
{
    public const int Size = (4 + 2) * 4; // size of struct in bytes

    private readonly Vector4 _position;
    private readonly Vector2 _textureCoordinate;

    public TexturedVertex(Vector4 position, Vector2 textureCoordinate)
    {
        _position = position;
        _textureCoordinate = textureCoordinate;
    }
}

A new constructor for the TexturedRenderObject binds the position and texture coordinate attributes from the above struct.
private int _texture;

public TexturedRenderObject(TexturedVertex[] vertices, int program, string filename)
    : base(program, vertices.Length)
{
    // create first buffer: vertex
    GL.NamedBufferStorage(
        Buffer,
        TexturedVertex.Size * vertices.Length, // the size needed by this buffer
        vertices,                              // data to initialize with
        BufferStorageFlags.MapWriteBit);       // at this point we will only write to the buffer

    GL.VertexArrayAttribBinding(VertexArray, 0, 0);
    GL.EnableVertexArrayAttrib(VertexArray, 0);
    GL.VertexArrayAttribFormat(
        VertexArray,
        0,                      // attribute index, from the shader location = 0
        4,                      // size of attribute, vec4
        VertexAttribType.Float, // contains floats
        false,                  // does not need to be normalized as it is already; floats ignore this flag anyway
        0);                     // relative offset, first item, in bytes

    GL.VertexArrayAttribBinding(VertexArray, 1, 0);
    GL.EnableVertexArrayAttrib(VertexArray, 1);
    GL.VertexArrayAttribFormat(
        VertexArray,
        1,                      // attribute index, from the shader location = 1
        2,                      // size of attribute, vec2
        VertexAttribType.Float, // contains floats
        false,                  // does not need to be normalized as it is already; floats ignore this flag anyway
        16);                    // relative offset after a vec4, in bytes

    // link the vertex array and buffer and provide the stride as size of Vertex
    GL.VertexArrayVertexBuffer(VertexArray, 0, Buffer, IntPtr.Zero, TexturedVertex.Size);

    _texture = InitTextures(filename);
}
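The Size constant of (4 + 2) * 4 = 24 bytes and the relative offset of 16 bytes used above can be sanity-checked with Marshal.SizeOf and Marshal.OffsetOf. A minimal sketch, using plain float fields in place of OpenTK's Vector4/Vector2 (the struct name here is a stand-in, not from the project):

```csharp
using System;
using System.Runtime.InteropServices;

// Stand-in with the same memory layout as TexturedVertex:
// four floats of position (16 bytes), then two floats of texture coordinate.
struct TexturedVertexLayout
{
    public float X, Y, Z, W; // position, relative offset 0
    public float U, V;       // texture coordinate, relative offset 16
}

static class LayoutCheck
{
    // Total size in bytes; should match TexturedVertex.Size = (4 + 2) * 4.
    public static int Size => Marshal.SizeOf(typeof(TexturedVertexLayout));

    // Byte offset of the texture coordinate within the struct.
    public static int TexCoordOffset =>
        (int)Marshal.OffsetOf(typeof(TexturedVertexLayout), "U");
}
```

These are exactly the two numbers fed to GL.VertexArrayVertexBuffer (the stride) and the second GL.VertexArrayAttribFormat call (the relative offset).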
The initialization of a texture follows the same pattern as buffers and attributes: first we call a create method to get a new name, then we set up storage for it, bind it, and store data in it. This is the direct state access (DSA) style that was introduced in OpenGL 4.5.
private int InitTextures(string filename)
{
    int width, height;
    var data = LoadTexture(filename, out width, out height);

    int texture;
    GL.CreateTextures(TextureTarget.Texture2D, 1, out texture);
    GL.TextureStorage2D(
        texture,
        1,                           // levels of mipmapping
        SizedInternalFormat.Rgba32f, // format of texture
        width,
        height);

    GL.BindTexture(TextureTarget.Texture2D, texture);
    GL.TextureSubImage2D(
        texture,
        0, // this is level 0
        0, // x offset
        0, // y offset
        width,
        height,
        PixelFormat.Rgba,
        PixelType.Float,
        data);

    return texture; // data not needed from here on, OpenGL has the data
}
The LoadTexture method uses System.Drawing to load an image from disk. GetPixel(int x, int y) gets the color of the pixel at the given coordinates, and we put the channel values into our array in the correct order (RGBA).
private float[] LoadTexture(string filename, out int width, out int height)
{
    float[] r;
    using (var bmp = (Bitmap)Image.FromFile(filename))
    {
        width = bmp.Width;
        height = bmp.Height;
        r = new float[width * height * 4];
        int index = 0;
        for (int y = 0; y < height; y++)
        {
            for (int x = 0; x < width; x++)
            {
                var pixel = bmp.GetPixel(x, y);
                r[index++] = pixel.R / 255f;
                r[index++] = pixel.G / 255f;
                r[index++] = pixel.B / 255f;
                r[index++] = pixel.A / 255f;
            }
        }
    }
    return r;
}
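As a sanity check of the indexing above: the array stores pixels row by row, so pixel (x, y) starts at float index (y * width + x) * 4, with R, G, B, A in consecutive slots. A small sketch of that arithmetic (the helper name is mine, not from the post); note also that GetPixel is quite slow, so for large textures Bitmap.LockBits is worth a look:

```csharp
using System;

static class TextureIndex
{
    // Index of the first float (the R channel) of pixel (x, y)
    // in the flat RGBA float array produced by LoadTexture.
    public static int PixelOffset(int x, int y, int width)
        => (y * width + x) * 4;
}
```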
Then, when binding the TexturedRenderObject, we need to bind the Texture together with the Program and VertexArray.
public virtual void Bind()
{
    GL.UseProgram(Program);
    GL.BindVertexArray(VertexArray);
    GL.BindTexture(TextureTarget.Texture2D, _texture);
}
Vertex Shader
The job of our vertex shader at this stage is simply to forward the texture coordinate to the fragment shader.
#version 450 core

layout (location = 0) in vec4 position;
layout (location = 1) in vec2 textureCoordinate;

out vec2 vs_textureCoordinate;

layout (location = 20) uniform mat4 projection;
layout (location = 21) uniform mat4 modelView;

void main(void)
{
    vs_textureCoordinate = textureCoordinate;
    gl_Position = projection * modelView * position;
}
Texture Coordinates
Texture coordinates tell OpenGL how to fit the texture to the triangles that you have.
In the code where we generate a cube, we need to add the texture coordinates for each vertex. Following the pattern in the picture, each negative side (-side) becomes 0 and each positive side (+side) becomes 1. Each vertex has 3 coordinates, but for each side we only care about 2 of them; the third, the one that does not change on that face, is ignored.
Example side:

x | y | z | w | texture.x | texture.y
-side | -side | side | 1.0f | 0 | 0
side | -side | side | 1.0f | 1 | 0
-side | side | side | 1.0f | 0 | 1
-side | side | side | 1.0f | 0 | 1
side | -side | side | 1.0f | 1 | 0
side | side | side | 1.0f | 1 | 1
This should be easy, but it took me a while to figure out; most examples seem to expect that you use an application like Blender to do this for you.
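The rule from the table can be written down directly. A tiny sketch (the helper is hypothetical, not from the post): a component of -side maps to texture coordinate 0, and +side maps to the full texture extent (w or h), which here is in pixels because the fragment shader uses texelFetch:

```csharp
using System;

static class TexCoordRule
{
    // -side maps to 0, +side maps to the texture extent (w or h).
    public static float FromComponent(float component, float extent)
        => component < 0 ? 0f : extent;
}
```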
In the end we end up with a create method that looks like the following:
public static TexturedVertex[] CreateTexturedCube(float side, float textureWidth, float textureHeight)
{
    float h = textureHeight;
    float w = textureWidth;
    side = side / 2f; // half side - and other half

    TexturedVertex[] vertices =
    {
        // -X face
        new TexturedVertex(new Vector4(-side, -side, -side, 1.0f), new Vector2(0, 0)),
        new TexturedVertex(new Vector4(-side, -side, side, 1.0f), new Vector2(0, h)),
        new TexturedVertex(new Vector4(-side, side, -side, 1.0f), new Vector2(w, 0)),
        new TexturedVertex(new Vector4(-side, side, -side, 1.0f), new Vector2(w, 0)),
        new TexturedVertex(new Vector4(-side, -side, side, 1.0f), new Vector2(0, h)),
        new TexturedVertex(new Vector4(-side, side, side, 1.0f), new Vector2(w, h)),
        // +X face
        new TexturedVertex(new Vector4(side, -side, -side, 1.0f), new Vector2(0, 0)),
        new TexturedVertex(new Vector4(side, side, -side, 1.0f), new Vector2(w, 0)),
        new TexturedVertex(new Vector4(side, -side, side, 1.0f), new Vector2(0, h)),
        new TexturedVertex(new Vector4(side, -side, side, 1.0f), new Vector2(0, h)),
        new TexturedVertex(new Vector4(side, side, -side, 1.0f), new Vector2(w, 0)),
        new TexturedVertex(new Vector4(side, side, side, 1.0f), new Vector2(w, h)),
        // -Y face
        new TexturedVertex(new Vector4(-side, -side, -side, 1.0f), new Vector2(0, 0)),
        new TexturedVertex(new Vector4(side, -side, -side, 1.0f), new Vector2(w, 0)),
        new TexturedVertex(new Vector4(-side, -side, side, 1.0f), new Vector2(0, h)),
        new TexturedVertex(new Vector4(-side, -side, side, 1.0f), new Vector2(0, h)),
        new TexturedVertex(new Vector4(side, -side, -side, 1.0f), new Vector2(w, 0)),
        new TexturedVertex(new Vector4(side, -side, side, 1.0f), new Vector2(w, h)),
        // +Y face
        new TexturedVertex(new Vector4(-side, side, -side, 1.0f), new Vector2(0, 0)),
        new TexturedVertex(new Vector4(-side, side, side, 1.0f), new Vector2(0, h)),
        new TexturedVertex(new Vector4(side, side, -side, 1.0f), new Vector2(w, 0)),
        new TexturedVertex(new Vector4(side, side, -side, 1.0f), new Vector2(w, 0)),
        new TexturedVertex(new Vector4(-side, side, side, 1.0f), new Vector2(0, h)),
        new TexturedVertex(new Vector4(side, side, side, 1.0f), new Vector2(w, h)),
        // -Z face
        new TexturedVertex(new Vector4(-side, -side, -side, 1.0f), new Vector2(0, 0)),
        new TexturedVertex(new Vector4(-side, side, -side, 1.0f), new Vector2(0, h)),
        new TexturedVertex(new Vector4(side, -side, -side, 1.0f), new Vector2(w, 0)),
        new TexturedVertex(new Vector4(side, -side, -side, 1.0f), new Vector2(w, 0)),
        new TexturedVertex(new Vector4(-side, side, -side, 1.0f), new Vector2(0, h)),
        new TexturedVertex(new Vector4(side, side, -side, 1.0f), new Vector2(w, h)),
        // +Z face
        new TexturedVertex(new Vector4(-side, -side, side, 1.0f), new Vector2(0, 0)),
        new TexturedVertex(new Vector4(side, -side, side, 1.0f), new Vector2(w, 0)),
        new TexturedVertex(new Vector4(-side, side, side, 1.0f), new Vector2(0, h)),
        new TexturedVertex(new Vector4(-side, side, side, 1.0f), new Vector2(0, h)),
        new TexturedVertex(new Vector4(side, -side, side, 1.0f), new Vector2(w, 0)),
        new TexturedVertex(new Vector4(side, side, side, 1.0f), new Vector2(w, h)),
    };
    return vertices;
}
Fragment Shader
#version 450 core

in vec2 vs_textureCoordinate;
uniform sampler2D textureObject;
out vec4 color;

void main(void)
{
    color = texelFetch(textureObject, ivec2(vs_textureCoordinate.x, vs_textureCoordinate.y), 0);
}
It turns out that I had the vertex order wrong: when you turned on face culling (telling OpenGL not to render the back sides of triangles), things got nifty. The code above is updated; I am unsure about the repository.
A great next step would be to build a CreateTexturedCube that takes a 1536x256 texture, 256x256 per side, so that one can have a different texture on each side.
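A sketch of what that might look like, assuming the six 256x256 faces are laid out side by side in the 1536x256 strip (the layout and the helper are my assumptions, not code from the series): face i then occupies x-pixels i * 256 through (i + 1) * 256, and those two values would replace the 0 and w used per face in CreateTexturedCube:

```csharp
using System;

static class CubeAtlas
{
    public const int FaceSize = 256; // assumed 256x256 pixels per face

    // Left and right x-coordinates (in pixels) of face i
    // in a 1536x256 strip, for i in 0..5.
    public static (int Left, int Right) FaceXRange(int face)
        => (face * FaceSize, (face + 1) * FaceSize);
}
```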
Tie it all together
Let's make some changes to our OnLoad method to initialize these shaders and render objects.

protected override void OnLoad(EventArgs e)
{
    Debug.WriteLine("OnLoad");
    VSync = VSyncMode.Off;
    CreateProjection();

    _solidProgram = new ShaderProgram();
    _solidProgram.AddShader(ShaderType.VertexShader, @"Components\Shaders\1Vert\simplePipeVert.c");
    _solidProgram.AddShader(ShaderType.FragmentShader, @"Components\Shaders\5Frag\simplePipeFrag.c");
    _solidProgram.Link();

    _texturedProgram = new ShaderProgram();
    _texturedProgram.AddShader(ShaderType.VertexShader, @"Components\Shaders\1Vert\simplePipeTexVert.c");
    _texturedProgram.AddShader(ShaderType.FragmentShader, @"Components\Shaders\5Frag\simplePipeTexFrag.c");
    _texturedProgram.Link();

    _renderObjects.Add(new TexturedRenderObject(ObjectFactory.CreateTexturedCube(0.2f, 256, 256), _texturedProgram.Id, @"Components\Textures\dotted2.png"));
    _renderObjects.Add(new TexturedRenderObject(ObjectFactory.CreateTexturedCube(0.2f, 256, 256), _texturedProgram.Id, @"Components\Textures\wooden.png"));
    _renderObjects.Add(new ColoredRenderObject(ObjectFactory.CreateSolidCube(0.2f, Color4.HotPink), _solidProgram.Id));
    _renderObjects.Add(new TexturedRenderObject(ObjectFactory.CreateTexturedCube(0.2f, 256, 256), _texturedProgram.Id, @"Components\Textures\dotted.png"));

    CursorVisible = true;
    GL.PolygonMode(MaterialFace.FrontAndBack, PolygonMode.Fill);
    GL.PatchParameter(PatchParameterInt.PatchVertices, 3);
    GL.PointSize(3);
    GL.Enable(EnableCap.DepthTest);
    Closed += OnClosed;
    Debug.WriteLine("OnLoad .. done");
}

At this point, we should have textured objects flying across the screen, similar to the following:
For the full source at the end of part 9, including all the refactoring, go to: https://github.com/eowind/dreamstatecoding
Thank you for reading, here's a cat video to lighten up your day.
One tip from the comments: to render transparent or semi-transparent textures, you also need to enable blending in OnLoad:

GL.Enable(EnableCap.Blend);
GL.BlendFunc(BlendingFactor.SrcAlpha, BlendingFactor.OneMinusSrcAlpha);

Note that the enum names have changed across OpenTK versions; in some versions the call is:

GL.BlendFunc(BlendingFactorSrc.SrcAlpha, BlendingFactorDest.OneMinusSrcAlpha);