
Sunday, February 19, 2017

OpenGL 4 with OpenTK in C# Part 13: IcoSphere


This time we will look at how to generate a sphere in code. I got tired of the cubes and wanted a little variation, and the decision fell on IcoSpheres, mostly because they seem to be more flexible in the long run.

This is part 13 of my series on OpenGL4 with OpenTK.
For other posts in this series:
OpenGL 4 with OpenTK in C# Part 1: Initialize the GameWindow
OpenGL 4 with OpenTK in C# Part 2: Compiling shaders and linking them
OpenGL 4 with OpenTK in C# Part 3: Passing data to shaders
OpenGL 4 with OpenTK in C# Part 4: Refactoring and adding error handling
OpenGL 4 with OpenTK in C# Part 5: Buffers and Triangle
OpenGL 4 with OpenTK in C# Part 6: Rotations and Movement of objects
OpenGL 4 with OpenTK in C# Part 7: Vectors and Matrices
OpenGL 4 with OpenTK in C# Part 8: Drawing multiple objects
OpenGL 4 with OpenTK in C# Part 9: Texturing
OpenGL 4 with OpenTK in C# Part 10: Asteroid Invaders
Basic bullet movement patterns in Asteroid Invaders
OpenGL 4 with OpenTK in C# Part 11: Mipmap
OpenGL 4 with OpenTK in C# Part 12: Basic Moveable Camera
OpenGL 4 with OpenTK in C# Part 13: IcoSphere
OpenGL 4 with OpenTK in C# Part 14: Basic Text
OpenGL 4 with OpenTK in C# Part 15: Object picking by mouse

As stated in the previous post, I am in no way an expert in OpenGL. I write these posts as a way to learn and if someone else finds these posts useful then all the better :)
If you think that the progress is slow, then know that I am a slow learner :P
This part will build upon the game window and shaders from part 12.


Introduction

I take no credit for the algorithm that generates the sphere. I found a WPF version over at catch 22, Andreas Kahler's blog, and ported it to work with our OpenTK implementation of vertex buffers and texturing.
The texturing fix for the common artefact of a funny-looking stripe that runs from pole to pole is based on a solution described over at sol.gfxil.net.
So, now that credit is where it belongs, let's look at the code.

Code

private struct Face
{
    public Vector3 V1;
    public Vector3 V2;
    public Vector3 V3;

    public Face(Vector3 v1, Vector3 v2, Vector3 v3)
    {
        V1 = v1;
        V2 = v2;
        V3 = v3;
    }
}
First we need a struct to store each face of our sphere. It basically just contains three vectors, one for each corner of the triangle.

Next up is the algorithm from Kahler's blog. Note that this port does not use vertex indexing, which means we have a little memory overhead since every shared vertex is duplicated instead of reused.

The basic algorithm is to generate the initial icosahedron points manually and then, for each iteration, split every face into 4 new faces and project the new points onto the unit sphere by normalizing them.
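As a rough idea of what skipping the indexing costs, here is a small back-of-the-envelope sketch (my own numbers, not from the original post): the icosahedron starts with 20 faces, each subdivision splits every face into 4, and we emit 3 vertices per face.

// Sketch only, not part of the factory: mesh size per recursion level.
// 20 * 4^n faces after n subdivisions, 3 vertices emitted per face.
// An indexed mesh would only need the unique points: 10 * 4^n + 2.
// For recursionLevel 3 that is 1280 faces and 3840 emitted vertices, but only 642 unique points.
public static int FaceCount(int recursionLevel) => 20 * (int)Math.Pow(4, recursionLevel);
public static int EmittedVertexCount(int recursionLevel) => FaceCount(recursionLevel) * 3;
public static int UniquePointCount(int recursionLevel) => 10 * (int)Math.Pow(4, recursionLevel) + 2;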

public class IcoSphereFactory
{
    private List<Vector3> _points;
    private int _index;
    private Dictionary<long, int> _middlePointIndexCache;

    public TexturedVertex[] Create(int recursionLevel)
    {
        _middlePointIndexCache = new Dictionary<long, int>();
        _points = new List<Vector3>();
        _index = 0;
        var t = (float)((1.0 + Math.Sqrt(5.0)) / 2.0);
        var s = 1;

        AddVertex(new Vector3(-s, t, 0));
        AddVertex(new Vector3(s, t, 0));
        AddVertex(new Vector3(-s, -t, 0));
        AddVertex(new Vector3(s, -t, 0));

        AddVertex(new Vector3(0, -s, t));
        AddVertex(new Vector3(0, s, t));
        AddVertex(new Vector3(0, -s, -t));
        AddVertex(new Vector3(0, s, -t));

        AddVertex(new Vector3(t, 0, -s));
        AddVertex(new Vector3(t, 0, s));
        AddVertex(new Vector3(-t, 0, -s));
        AddVertex(new Vector3(-t, 0, s));

        var faces = new List<Face>();

        // 5 faces around point 0
        faces.Add(new Face(_points[0], _points[11], _points[5]));
        faces.Add(new Face(_points[0], _points[5], _points[1]));
        faces.Add(new Face(_points[0], _points[1], _points[7]));
        faces.Add(new Face(_points[0], _points[7], _points[10]));
        faces.Add(new Face(_points[0], _points[10], _points[11]));

        // 5 adjacent faces 
        faces.Add(new Face(_points[1], _points[5], _points[9]));
        faces.Add(new Face(_points[5], _points[11], _points[4]));
        faces.Add(new Face(_points[11], _points[10], _points[2]));
        faces.Add(new Face(_points[10], _points[7], _points[6]));
        faces.Add(new Face(_points[7], _points[1], _points[8]));

        // 5 faces around point 3
        faces.Add(new Face(_points[3], _points[9], _points[4]));
        faces.Add(new Face(_points[3], _points[4], _points[2]));
        faces.Add(new Face(_points[3], _points[2], _points[6]));
        faces.Add(new Face(_points[3], _points[6], _points[8]));
        faces.Add(new Face(_points[3], _points[8], _points[9]));

        // 5 adjacent faces 
        faces.Add(new Face(_points[4], _points[9], _points[5]));
        faces.Add(new Face(_points[2], _points[4], _points[11]));
        faces.Add(new Face(_points[6], _points[2], _points[10]));
        faces.Add(new Face(_points[8], _points[6], _points[7]));
        faces.Add(new Face(_points[9], _points[8], _points[1]));



        // refine triangles
        for (int i = 0; i < recursionLevel; i++)
        {
            var faces2 = new List<Face>();
            foreach (var tri in faces)
            {
                // replace triangle by 4 triangles
                int a = GetMiddlePoint(tri.V1, tri.V2);
                int b = GetMiddlePoint(tri.V2, tri.V3);
                int c = GetMiddlePoint(tri.V3, tri.V1);

                faces2.Add(new Face(tri.V1, _points[a], _points[c]));
                faces2.Add(new Face(tri.V2, _points[b], _points[a]));
                faces2.Add(new Face(tri.V3, _points[c], _points[b]));
                faces2.Add(new Face(_points[a], _points[b], _points[c]));
            }
            faces = faces2;
        }


        // done, now add triangles to mesh
        var vertices = new List<TexturedVertex>();

        foreach (var tri in faces)
        {
            var uv1 = GetSphereCoord(tri.V1);
            var uv2 = GetSphereCoord(tri.V2);
            var uv3 = GetSphereCoord(tri.V3);
            vertices.Add(new TexturedVertex(new Vector4(tri.V1, 1), uv1));
            vertices.Add(new TexturedVertex(new Vector4(tri.V2, 1), uv2));
            vertices.Add(new TexturedVertex(new Vector4(tri.V3, 1), uv3));
        }

        return vertices.ToArray();
    }

    private int AddVertex(Vector3 p)
    {
        _points.Add(p.Normalized());
        return _index++;
    }

    // return index of point in the middle of p1 and p2
    private int GetMiddlePoint(Vector3 point1, Vector3 point2)
    {
        long i1 = _points.IndexOf(point1);
        long i2 = _points.IndexOf(point2);
        // first check if we have it already
        var firstIsSmaller = i1 < i2;
        long smallerIndex = firstIsSmaller ? i1 : i2;
        long greaterIndex = firstIsSmaller ? i2 : i1;
        long key = (smallerIndex << 32) + greaterIndex;

        int ret;
        if (_middlePointIndexCache.TryGetValue(key, out ret))
        {
            return ret;
        }

        // not in cache, calculate it

        var middle = new Vector3(
            (point1.X + point2.X) / 2.0f,
            (point1.Y + point2.Y) / 2.0f,
            (point1.Z + point2.Z) / 2.0f);

        // add vertex makes sure point is on unit sphere
        int i = AddVertex(middle);

        // store it, return index
        _middlePointIndexCache.Add(key, i);
        return i;
    }

}

GetSphereCoord is my own addition for calculating the texture coordinate of each vertex: the V coordinate comes from the polar angle around the Y axis (0 at one pole, 1 at the other) and the U coordinate from the longitude.
public static Vector2 GetSphereCoord(Vector3 i)
{
    var len = i.Length;
    Vector2 uv;
    uv.Y = (float)(Math.Acos(i.Y / len) / Math.PI);
    uv.X = -(float)((Math.Atan2(i.Z, i.X) / Math.PI + 1.0f) * 0.5f);
    return uv;
}

At this point it looks like the following:
Icosphere showing the texturing glitch that goes from pole to pole
And then the solution for the pole-to-pole stripe as described on sol's site. I had to apply the correction twice to get the expected result.
private static void FixColorStrip(ref Vector2 uv1, ref Vector2 uv2, ref Vector2 uv3)
{
    if ((uv1.X - uv2.X) >= 0.8f)
        uv1.X -= 1;
    if ((uv2.X - uv3.X) >= 0.8f)
        uv2.X -= 1;
    if ((uv3.X - uv1.X) >= 0.8f)
        uv3.X -= 1;

    if ((uv1.X - uv2.X) >= 0.8f)
        uv1.X -= 1;
    if ((uv2.X - uv3.X) >= 0.8f)
        uv2.X -= 1;
    if ((uv3.X - uv1.X) >= 0.8f)
        uv3.X -= 1;
}

And we call it from this spot in the Create function:
foreach (var tri in faces)
{
    var uv1 = GetSphereCoord(tri.V1);
    var uv2 = GetSphereCoord(tri.V2);
    var uv3 = GetSphereCoord(tri.V3);
    FixColorStrip(ref uv1, ref uv2, ref uv3);
    vertices.Add(new TexturedVertex(new Vector4(tri.V1, 1), uv1));
    vertices.Add(new TexturedVertex(new Vector4(tri.V2, 1), uv2));
    vertices.Add(new TexturedVertex(new Vector4(tri.V3, 1), uv3));
}

Now the texture should look OK like this:
Icosphere with the pole-to-pole texturing glitch fixed
Lastly, change the generation of the initial models in the GameWindow class to use these spheres instead of the cubes.
models.Add("Wooden", new MipMapGeneratedRenderObject(new IcoSphereFactory().Create(3), _texturedProgram.Id, @"Components\Textures\wooden.png", 8));
models.Add("Golden", new MipMapGeneratedRenderObject(new IcoSphereFactory().Create(3), _texturedProgram.Id, @"Components\Textures\golden.bmp", 8));
models.Add("Asteroid", new MipMapGeneratedRenderObject(new IcoSphereFactory().Create(3), _texturedProgram.Id, @"Components\Textures\moonmap1k.jpg", 8));
models.Add("Spacecraft", new MipMapGeneratedRenderObject(RenderObjectFactory.CreateTexturedCube6(1, 1, 1), _texturedProgram.Id, @"Components\Textures\spacecraft.png", 8));
models.Add("Gameover", new MipMapGeneratedRenderObject(RenderObjectFactory.CreateTexturedCube6(1, 1, 1), _texturedProgram.Id, @"Components\Textures\gameover.png", 8));
models.Add("Bullet", new MipMapGeneratedRenderObject(new IcoSphereFactory().Create(3), _texturedProgram.Id, @"Components\Textures\dotted.png", 8));

The end result should look like the following video:

Known issues that still need to be fixed:

Texturing at the poles is still glitchy. I was unable to get the solution described at sol.gfxil.net to work for that... yet.
Maybe implement the vertex indexing solution once I figure it out. As the current version fits my needs, that might take a while.

For the complete source code for the tutorial at the end of this part, go to: https://github.com/eowind/dreamstatecoding

So there, thank you for reading. Hope this helps someone out there : )

Tuesday, February 14, 2017

OpenGL 4 with OpenTK in C# Part 12: Basic Movable Camera


In this post we will create some basic cameras in OpenGL with the help of OpenTK.

This is part 12 of my series on OpenGL4 with OpenTK.
For other posts in this series:
OpenGL 4 with OpenTK in C# Part 1: Initialize the GameWindow
OpenGL 4 with OpenTK in C# Part 2: Compiling shaders and linking them
OpenGL 4 with OpenTK in C# Part 3: Passing data to shaders
OpenGL 4 with OpenTK in C# Part 4: Refactoring and adding error handling
OpenGL 4 with OpenTK in C# Part 5: Buffers and Triangle
OpenGL 4 with OpenTK in C# Part 6: Rotations and Movement of objects
OpenGL 4 with OpenTK in C# Part 7: Vectors and Matrices
OpenGL 4 with OpenTK in C# Part 8: Drawing multiple objects
OpenGL 4 with OpenTK in C# Part 9: Texturing
OpenGL 4 with OpenTK in C# Part 10: Asteroid Invaders
Basic bullet movement patterns in Asteroid Invaders
OpenGL 4 with OpenTK in C# Part 11: Mipmap
OpenGL 4 with OpenTK in C# Part 12: Basic Moveable Camera
OpenGL 4 with OpenTK in C# Part 13: IcoSphere
OpenGL 4 with OpenTK in C# Part 14: Basic Text
OpenGL 4 with OpenTK in C# Part 15: Object picking by mouse

As stated in the previous post, I am in no way an expert in OpenGL. I write these posts as a way to learn and if someone else finds these posts useful then all the better :)
If you think that the progress is slow, then know that I am a slow learner :P
This part will build upon the game window and shaders from part 11, including homing bullets from this post.

Camera

At this point it is quite easy to implement a movable camera; it is really just another level of matrix multiplication when we set up the ModelView matrix for each object. So let's start there by modifying the Render method of the AGameObject class to take a camera as input.
public virtual void Render(ICamera camera)
{
    _model.Bind();
    var t2 = Matrix4.CreateTranslation(_position.X, _position.Y, _position.Z);
    var r1 = Matrix4.CreateRotationX(_rotation.X);
    var r2 = Matrix4.CreateRotationY(_rotation.Y);
    var r3 = Matrix4.CreateRotationZ(_rotation.Z);
    var s = Matrix4.CreateScale(_scale);
    _modelView = r1*r2*r3*s*t2*camera.LookAtMatrix;
    GL.UniformMatrix4(21, false, ref _modelView);
    _model.Render();
}

Our cameras need to be able to provide a LookAtMatrix and to update themselves, so the interface looks like this:
public interface ICamera
{
    Matrix4 LookAtMatrix{ get; }
    void Update(double time, double delta);
}
The Update method is called from the OnUpdateFrame override in the GameWindow, and the look-at matrix is used for each object rendered in the OnRenderFrame override, roughly as sketched below.
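A rough sketch of that wiring in the GameWindow (the _camera, _time and _gameObjects names are my own assumptions, and everything unrelated to the camera is left out):

private ICamera _camera = new StaticCamera();
private double _time;

protected override void OnUpdateFrame(FrameEventArgs e)
{
    base.OnUpdateFrame(e);
    _time += e.Time;
    _camera.Update(_time, e.Time);   // let the active camera reposition itself
}

protected override void OnRenderFrame(FrameEventArgs e)
{
    base.OnRenderFrame(e);
    // clearing, shader binding etc. omitted here
    // _gameObjects is assumed to be the window's existing list of AGameObject instances
    foreach (var obj in _gameObjects)
        obj.Render(_camera);         // each object builds its ModelView from the camera's LookAtMatrix
    SwapBuffers();
}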

So, let's look at some cameras.

Default: Static Camera


This camera is basically what we have had out of the box and have been using so far. It is located at the origin (0, 0, 0) and points down the negative Z axis, i.e. into the screen (remember, right-handed coordinate system).
Let's create a class that implements this default camera so that we can switch back to it whenever we want.
public class StaticCamera : ICamera
{
    public Matrix4 LookAtMatrix { get; }
    public StaticCamera()
    {
        Vector3 position;
        position.X = 0;
        position.Y = 0;
        position.Z = 0;
        LookAtMatrix = Matrix4.LookAt(position, -Vector3.UnitZ, Vector3.UnitY);
    }
    public StaticCamera(Vector3 position, Vector3 target)
    {
        LookAtMatrix = Matrix4.LookAt(position, target, Vector3.UnitY);
    }
    public void Update(double time, double delta)
    {}
}
I also added a constructor that makes this camera a little more useful: it can be placed at any position and look at any static target. Note that we are using the OpenTK Matrix4.LookAt method to create our camera's look-at matrix.
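For example, a fixed overview camera could be created like this (the position and target here are just made-up values):

// Example only: a camera 20 units towards the viewer on the Z axis, looking at the origin.
ICamera camera = new StaticCamera(new Vector3(0, 0, 20), Vector3.Zero);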

First Person Camera

The next camera is the first person camera. We pass in an AGameObject that the camera should follow, and it gives us a view that follows the path of that object.
public class FirstPersonCamera : ICamera
{
    public Matrix4 LookAtMatrix { get; private set; }
    private readonly AGameObject _target;
    private readonly Vector3 _offset;

    public FirstPersonCamera(AGameObject target)
        : this(target, Vector3.Zero)
    {}
    public FirstPersonCamera(AGameObject target, Vector3 offset)
    {
        _target = target;
        _offset = offset;
    }

    public void Update(double time, double delta)
    {
        LookAtMatrix = Matrix4.LookAt(
            new Vector3(_target.Position) + _offset,  
            new Vector3(_target.Position + _target.Direction) + _offset, 
            Vector3.UnitY);
    }
}

Here as well we have an overloaded constructor that takes an offset. The camera still looks in the direction the object is moving, but from a position offset from the object's origin, for example from the cockpit of an airplane instead of the center of the model.
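For example (the player object and the offset values here are just placeholders):

// Example only: follow a hypothetical player object from half a unit above its origin,
// roughly where a cockpit would sit.
ICamera camera = new FirstPersonCamera(player, new Vector3(0, 0.5f, 0));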

Third Person Camera

Our third person camera looks at the object we are tracking from an offset relative to it.
public class ThirdPersonCamera : ICamera
{
    public Matrix4 LookAtMatrix { get; private set; }
    private readonly AGameObject _target;
    private readonly Vector3 _offset;

    public ThirdPersonCamera(AGameObject target)
        : this(target, Vector3.Zero)
    {}
    public ThirdPersonCamera(AGameObject target, Vector3 offset)
    {
        _target = target;
        _offset = offset;
    }

    public void Update(double time, double delta)
    {
        LookAtMatrix = Matrix4.LookAt(
            new Vector3(_target.Position) + (_offset * new Vector3(_target.Direction)),  
            new Vector3(_target.Position), 
            Vector3.UnitY);
    }
}
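One way to try all three is to swap the active camera at runtime, for example on key presses. A minimal sketch, assuming the _camera field from the sketch above and a hypothetical _player object to follow:

protected override void OnKeyDown(KeyboardKeyEventArgs e)
{
    base.OnKeyDown(e);
    switch (e.Key)
    {
        case Key.Number1:
            _camera = new StaticCamera();                                    // back to the default view
            break;
        case Key.Number2:
            _camera = new FirstPersonCamera(_player);                        // ride along with the object
            break;
        case Key.Number3:
            _camera = new ThirdPersonCamera(_player, new Vector3(0, 0, -2)); // follow from an offset
            break;
    }
}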

A demo of the three basic movable cameras is shown in the video below:


For the complete source code for the tutorial at the end of this part, go to: https://github.com/eowind/dreamstatecoding

So there, thank you for reading. Hope this helps someone out there : )

Friday, February 3, 2017

OpenGL 4 with OpenTK in C# Part 7: Vectors and Matrices


In this post we will look at vector and matrix math with OpenTK.

This is part 7 of my series on OpenGL4 with OpenTK.
For other posts in this series:
OpenGL 4 with OpenTK in C# Part 1: Initialize the GameWindow
OpenGL 4 with OpenTK in C# Part 2: Compiling shaders and linking them
OpenGL 4 with OpenTK in C# Part 3: Passing data to shaders
OpenGL 4 with OpenTK in C# Part 4: Refactoring and adding error handling
OpenGL 4 with OpenTK in C# Part 5: Buffers and Triangle
OpenGL 4 with OpenTK in C# Part 6: Rotations and Movement of objects
OpenGL 4 with OpenTK in C# Part 7: Vectors and Matrices
OpenGL 4 with OpenTK in C# Part 8: Drawing multiple objects
OpenGL 4 with OpenTK in C# Part 9: Texturing
OpenGL 4 with OpenTK in C# Part 10: Asteroid Invaders
Basic bullet movement patterns in Asteroid Invaders
OpenGL 4 with OpenTK in C# Part 11: Mipmap
OpenGL 4 with OpenTK in C# Part 12: Basic Moveable Camera
OpenGL 4 with OpenTK in C# Part 13: IcoSphere
OpenGL 4 with OpenTK in C# Part 14: Basic Text
OpenGL 4 with OpenTK in C# Part 15: Object picking by mouse

As stated in the previous post, I am in no way an expert in OpenGL, or math. I write these posts as a way to learn and if someone else finds these posts useful then all the better :)
If you think that the progress is slow, then know that I am a slow learner :P
This part will not really build on any previous code; instead we will look at the VectorN and MatrixN structs in OpenTK and see how to do basic calculations with the help of their methods.

Vectors

Let's start with a definition.
In mathematics, physics, and engineering, a Euclidean vector (sometimes called a geometric or spatial vector, or—as here—simply a vector) is a geometric object that has magnitude (or length) and direction - wikipedia
So the difference between a point (x, y, z) and a vector (x, y, z) is that the point is just a location in space, while the vector also has a magnitude (its distance from the origin (0, 0, 0)) and a direction.
The magnitude can be calculated with the following method:
public static double Magnitude(Vector3 v)
{
    return Math.Sqrt(Math.Pow(v.X, 2.0) + Math.Pow(v.Y, 2.0) + Math.Pow(v.Z, 2.0));
}
As we are using OpenTK, it already provides this for us.
var v = new Vector3(10, 10, 10);
var length = v.Length;
So a triple (x, y, z) can represent a point, a vector, or a vertex. This is great, as we need all of them in 3D.

Unit vectors

Unit vectors always have a magnitude of 1.0.
One great use for them is to store only a direction, without a magnitude. They also simplify calculations that involve the magnitude, since it is always 1.
In OpenTK there are two ways to normalize a vector and one way to estimate a normalization when accuracy is not a priority.
[TestMethod]
public void VectorTest_NormalizeVsNormalized()
{
    var normalize = new Vector4(2, 2, 2, 2);
    var fast = new Vector4(2, 2, 2, 2);
    var normalized = normalize.Normalized(); // returns a copy that is normalized
    normalize.Normalize(); // normalizes this vector
    fast.NormalizeFast(); // estimates a normalized vector, inaccurate
    Assert.AreEqual(normalize, normalized);
    // AreEqual fails Expected:<(0,5; 0,5; 0,5; 0,5)>. Actual:<(0,4991541; 0,4991541; 0,4991541; 0,4991541)>
    Assert.AreNotEqual(normalized, fast);  
}

Scalar multiplication
If you have a unit vector with a direction, you can scale it by multiplying it with a scalar value. In the example below we scale it by the original vector's length to get back a vector equal to the original.
[TestMethod]
public void VectorTest_NormalizeMultipliedByLength()
{
    var original = new Vector4(2, 2, 2, 2);
    var normalized = original.Normalized();
    var scaled = normalized * original.Length;
    Assert.AreEqual(original, scaled);
}
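This is also the usual pattern for movement: keep a unit vector for the direction and scale it by a speed and the elapsed time. A small sketch (the speed and delta values are made up):

// Sketch: move a position along a unit direction by speed * elapsed time.
var position = new Vector3(0, 0, 0);
var direction = new Vector3(1, 1, 0).Normalized();   // direction only, length 1
var speed = 5.0f;
var delta = 0.016f;                                  // roughly one frame at 60 fps
position += direction * speed * delta;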

Vector operations

Vectors can be added, subtracted and multiplied much like any numerical value (the multiplication here is component-wise).
[TestMethod]
public void VectorTest_Addition()
{
    var v1 = new Vector4(2, 2, 2, 1);
    var v2 = new Vector4(2, 2, 2, 1);
    var actual = v1 + v2;
    Assert.AreEqual(4, actual.X);
    Assert.AreEqual(4, actual.Y);
    Assert.AreEqual(4, actual.Z);
    Assert.AreEqual(2, actual.W);
}
[TestMethod]
public void VectorTest_Subtraction()
{
    var v1 = new Vector4(2, 2, 2, 1);
    var v2 = new Vector4(2, 2, 2, 1);
    var actual = v1 - v2;
    Assert.AreEqual(0, actual.X);
    Assert.AreEqual(0, actual.Y);
    Assert.AreEqual(0, actual.Z);
    Assert.AreEqual(0, actual.W);
}
[TestMethod]
public void VectorTest_Multiplication()
{
    var v1 = new Vector4(2, 2, 2, 1);
    var v2 = new Vector4(2, 2, 2, 1);
    var actual = v1 * v2;
    Assert.AreEqual(4, actual.X);
    Assert.AreEqual(4, actual.Y);
    Assert.AreEqual(4, actual.Z);
    Assert.AreEqual(1, actual.W);
}
Vectors can only be divided by scalars; don't ask me why. I tried to understand a thread on it over at Physics Stack Exchange.
[TestMethod]
public void VectorTest_Division()
{
    var v1 = new Vector4(2, 2, 2, 1);
    var actual = v1 / 2;
    Assert.AreEqual(1, actual.X);
    Assert.AreEqual(1, actual.Y);
    Assert.AreEqual(1, actual.Z);
    Assert.AreEqual(0.5f, actual.W);
}

Dot Product (inner product)
Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them -wikipedia
If the two vectors are unit vectors, the value will be between -1 and 1, i.e. directly equal to the cosine of the angle between them.
OK, so what will we use this for? Well, lighting.
So, here is how to do it:
[TestMethod]
public void VectorTest_DotProduct()
{
    var v1 = new Vector2(1, 1).Normalized();
    var v2 = new Vector2(0, 1).Normalized();
    var actual = Vector2.Dot(v1, v2);
    // we know that the angle is 45 degrees, just need the radians
    Assert.AreEqual((float)(Math.Cos(45 * (Math.PI / 180f))), actual);
}
The order of vectors does not matter for the dot product.
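To make the lighting use case a bit more concrete, here is a minimal sketch of a diffuse (Lambert) factor computed on the CPU; in a real renderer this would normally live in the fragment shader.

// Sketch: the diffuse factor is the cosine of the angle between the surface normal
// and the direction towards the light, clamped so back-facing surfaces get 0.
var normal = new Vector3(0, 1, 0);                           // surface facing straight up
var toLight = new Vector3(1, 1, 0).Normalized();             // direction towards the light
var diffuse = Math.Max(0f, Vector3.Dot(normal, toLight));    // about 0.707 here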

Cross Product (vector product)
Let's check wikipedia here as well:
Given two linearly independent vectors a and b, the cross product, a × b, is a vector that is perpendicular to both a and b and therefore normal to the plane containing them -wikipedia
The order of the vectors matters for the cross product: if you swap the order, the resulting vector is inverted.
[TestMethod]
public void VectorTest_CrossProduct()
{
    var v1 = new Vector3(1, 0, 0).Normalized();
    var v2 = new Vector3(0, 1, 0).Normalized();
    var actual = Vector3.Cross(v1, v2);

    Assert.AreEqual(0, actual.X);
    Assert.AreEqual(0, actual.Y);
    Assert.AreEqual(1, actual.Z);
}
[TestMethod]
public void VectorTest_CrossProduct_Inverse()
{
    var v1 = new Vector3(1, 0, 0).Normalized();
    var v2 = new Vector3(0, 1, 0).Normalized();
    var actual = Vector3.Cross(v2, v1);

    Assert.AreEqual(0, actual.X);
    Assert.AreEqual(0, actual.Y);
    Assert.AreEqual(-1, actual.Z);
}
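A typical use for the cross product is computing a triangle's surface normal from two of its edges. A small sketch; which way the normal points depends on the winding order of the vertices.

// Sketch: the normal of a triangle is the cross product of two of its edges.
var a = new Vector3(0, 0, 0);
var b = new Vector3(1, 0, 0);
var c = new Vector3(0, 1, 0);
var normal = Vector3.Cross(b - a, c - a).Normalized();   // (0, 0, 1) for this triangle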

Matrices

In mathematics, a matrix (plural matrices) is a rectangular array of numbers, symbols, or expressions, arranged in rows and columns. -wikipedia
We saw in the previous post that matrices are really handy when working in 3D, when we rotated and translated (moved) objects.

The reason we use a Matrix4 object to store our transformation matrices is that it can hold both the rotation part and the translation part in the same structure.
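For example, a rotation and a translation can be combined into a single Matrix4 and applied in one step. A small sketch; with OpenTK's row-vector convention (v * m, as in our shader), the left-hand matrix is applied first.

// Sketch: rotate around Z first, then move 10 units along X, combined in one matrix.
var rotation = Matrix4.CreateRotationZ((float)(Math.PI / 2.0));
var translation = Matrix4.CreateTranslation(10, 0, 0);
var combined = rotation * translation;

var point = new Vector4(1, 0, 0, 1);
var transformed = Vector4.Transform(point, combined);   // rotated first, then translated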

Identity

Basically a square matrix with its diagonal set to 1 and the rest to 0. It is its own inverse. For more info, wikipedia.
1 0 0 0
0 1 0 0
0 0 1 0
0 0 0 1
In OpenTK
var m = Matrix4.Identity;

Translation

1 0 0 tx
0 1 0 ty
0 0 1 tz
0 0 0 1
Built from an identity matrix, the last column contains the x, y, z values by which you want to move the vertex.
[TestMethod]
public void MatrixTest_Translate()
{
    var translate = Matrix4.CreateTranslation(1, 1, 1);

    var v1 = new Vector4(0, 0, 0, 1);
    var actual = Vector4.Transform(v1, translate);
    Assert.AreEqual(1, actual.X);
    Assert.AreEqual(1, actual.Y);
    Assert.AreEqual(1, actual.Z);
}
Or, in a shader, just v * m as we did in the previous post.

Rotation

Rotation.X
1 0 0 0
0 cos(a) sin(a) 0
0 -sin(a) cos(a) 0
0 0 0 1
Rotation.Y
cos(a) 0 -sin(a) 0
0 1 0 0
sin(a) 0 cos(a) 0
0 0 0 1
Rotation.Z
cos(a) -sin(a) 0 0
sin(a) cos(a) 0 0
0 0 1 0
0 0 0 1
Where a is the angle.
All of these can be multiplied together to form a combined rotation matrix for x, y and z.
var rx = Matrix4.CreateRotationX(k * 13.0f);
var ry = Matrix4.CreateRotationY(k * 13.0f);
var rz = Matrix4.CreateRotationZ(k * 3.0f);
var rotation = rx*ry*rz;
Not going to write a test for rotation just now :)

Scaling

sx 0 0 0
0 sy 0 0
0 0 sz 0
0 0 0 1
To scale an object, we generate a scaling matrix, which is basically a diagonal matrix with the x, y and z scaling factors as the first three diagonal entries.
[TestMethod]
public void MatrixTest_Scale()
{
    var scale = Matrix4.CreateScale(2, 4, 8);

    var v1 = new Vector4(2, 2, 2, 1);
    var actual = Vector4.Transform(v1, scale);
    Assert.AreEqual(4, actual.X);
    Assert.AreEqual(8, actual.Y);
    Assert.AreEqual(16, actual.Z);
}

So this has been a long post, but at least I am starting to get the hang of things. Hopefully it has helped someone out there.
To get you in a better mood after all the theory, here is a GIF of two of our cats fighting... again.