Making a game engine

Hey everyone,

So as 2016 has come and gone, I thought I'd write to you all about the most interesting thing I got up to last year: making a game engine. I'm going to try to give you guys a general walkthrough, as best I can (I may have forgotten some things), of the path I took to make it. It's an exercise I'd really suggest most people try at least once, preferably in a managed language. Below you'll see the end result of all of this, so buckle up because this should be a long one.

image

Part 1 - The fixed function pipeline

So the approach I took was a chronological one. Instead of jumping right into the new technologies, I opted to learn the older ones first and then progress forwards. There were a few advantages I saw to this:
- Backwards compatibility
- Interest's sake (I'm making a game engine, so I may as well try to learn the technologies in more detail)
- It's actually easier - broadly speaking, as OpenGL progressed, things got harder to do

This might be a good point, before I really get into the good stuff, to point out that OpenGL is just an API. There is no singular library called "OpenGL". It's a specification of how graphics implementations should behave, and each vendor provides their own implementation of it. The same goes for DirectX. You can hopefully see the advantage of learning OpenGL: learn it once and you can use it in many different situations. I used LWJGL to learn OpenGL, but I have read through some WebGL code recently and there were only a few minor differences.

So let's start! What do people actually mean when they say the fixed function pipeline? Let's skip back a few years to when games looked like this:

image

Every time you wanted to make a game you had a bit of a problem: you were writing code directly for specific hardware. So the good folks at Microsoft (I believe they were first, but you can fact-check me on this) decided to provide a standard way to make graphics for their operating system. From that point on, graphical rendering started to become a little easier for everyone as using graphics APIs became the norm.

However, hardware was quite different back then. It just didn't have the power of what we have today. So rather than make a graphics card that will do whatever you want it to do, it was a better approach to say, "well, my graphics card can do these things". And that's what we mean by the fixed function pipeline: you were quite literally dealing with a fixed set of functions.

So let's say I want to draw a simple rectangle. How might I do that?

glBegin(GL_QUADS);
	glVertex2f(-1,1);
	glVertex2f(0,1);
	glVertex2f(0,0);
	glVertex2f(-1,0);
glEnd();


This is what I meant about this stuff being pretty easy. We start by saying we want to draw a primitive, then specify that it will be a quad, and then we provide the vertices for this quad. Note how the function name says each vertex is a two-component vector ("2") of floating point values ("f"). You could, if you wanted, use glVertex3d (3 being a 3-dimensional vector, and d being double precision). You'll also notice that the values I used were in the range of -1..1. That's because this is all OpenGL deals with.

image

What you are looking at above is the plane OpenGL uses. There is of course an extra dimension for 3D, but my paint skills weren't really doing us any favors while I was trying to draw it out. Basically, OpenGL wants everything to be in a cube.
So hang on - when we make games, it's a lot nicer, for instance in a 2D game, to say I want a vertex at position (4, 50) on our screen! How would I pull that off using this plane?! Do I seriously have to convert that to a -1..1 value?!
Well, yeah, but we're going to do it in a way where we only really worry about it once. This is where we'll start talking about projection matrices. The idea behind projection is to take coordinates in one space and map them into another space; in our case, from our pixel coordinates into OpenGL's -1..1 cube. Or to put it another way, convert one set of values into another set of values. That way of thinking isn't entirely correct, but it will be a useful way of thinking about things for now.

So since we're using the fixed function pipeline, we can go ahead and just use a function to sort this all out for us:

glOrtho(0, 800, 600, 0, -1, 1);


Okay, so let's start with the function name. I'm sure you guys have seen this before: ortho, i.e. orthographic. Let's look at the Wikipedia definition: "It is a form of parallel projection, where all the projection lines are orthogonal to the projection plane". What does this mean? Wherever a vertex is, is exactly where you'll see it on the screen. You might be thinking to yourself: but shouldn't that always be where a vertex is? Well, when we get to 3D, you'll see that we will actually want to mess around with its positioning a bit ourselves.

So what do these parameters mean? The first four parameters represent the left, right, bottom and top of our plane, so hopefully it should be simple enough to see why I provided those values. The last two values are our z near and z far: how close and how far along the z axis things can be before they get clipped. Since we are projecting onto a 2D plane, we can cut off most of the z axis. There are definitely reasons why we could want higher values for these, which, if anyone is interested, I can give an example of in the comments below (I used this myself for shadow mapping).
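To make the projection concrete, here is a small sketch in plain C of the per-axis mapping that glOrtho sets up (the helper name is made up; the driver does this for you via the projection matrix):

```c
/* Map a value from the range [lo, hi] into OpenGL's -1..1 range.
 * This is the per-axis mapping glOrtho(left, right, bottom, top, ...)
 * establishes. Hypothetical helper, for intuition only. */
float ortho_map(float v, float lo, float hi) {
    return 2.0f * (v - lo) / (hi - lo) - 1.0f;
}
```

With glOrtho(0, 800, 600, 0, -1, 1), x = 400 lands at ortho_map(400, 0, 800) = 0, the middle of the plane, and y = 0 lands at ortho_map(0, 600, 0) = 1, the top of it.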

But that's really cool, because it means we can change our rectangle code to this:

glBegin(GL_QUADS);
	glVertex2f(0,0);
	glVertex2f(200,0);
	glVertex2f(200,100);
	glVertex2f(0,100);
glEnd();


Which is a lot easier to read!

But we're not actually quite done, there's one thing we'll need to do before this works:
glMatrixMode(GL_PROJECTION);
	glLoadIdentity();
	glOrtho(0, 800, 600, 0, -1, 1);
glMatrixMode(GL_MODELVIEW);

glBegin(GL_QUADS);
	glVertex2f(0,0);
	glVertex2f(200,0);
	glVertex2f(200,100);
	glVertex2f(0,100);
glEnd();


You'll notice I've added an extra little bit telling OpenGL which matrix we're working with, and that's pretty important. This stuff will make more sense a bit later when I go through the newer technologies, but in short: with the first part we're telling OpenGL how things should get projected, and with the second we switch back to the matrix that handles the model and view transformations.

Well done! You've successfully drawn a white rectangle! But you can't see it! Why is that? Well, you've told your code to draw it once, and then the program closes. I'm purposefully not going to add that bit, because this is where things will differ depending on which library you're using. Why? OpenGL is a graphics rendering API, which means it doesn't really care about things like window frames. That's why we often talk about OpenGL contexts: our window frame asks for an OpenGL context to hold onto and deal with.

So I'll add one more bit as library-agnostic pseudocode:

WINDOW.create();

	glMatrixMode(GL_PROJECTION);
		glLoadIdentity();
		glOrtho(0, 800, 600, 0, -1, 1);
	glMatrixMode(GL_MODELVIEW);

	while (true) {
		glClear(GL_COLOR_BUFFER_BIT);

		glBegin(GL_QUADS);
			glVertex2f(0,0);
			glVertex2f(200,0);
			glVertex2f(200,100);
			glVertex2f(0,100);
		glEnd();

		WINDOW.update();
		WINDOW.sync();
	}


So the only extra bit of OpenGL I've added here is the glClear function. What exactly is that doing? Well, let's say in one frame I rendered a rectangle like this:

image

And then in the next frame, I tried to render the same rectangle again, but a bit to the left and a bit down:

image

What has happened? To draw stuff to the screen, your computer needs to keep track of what color every pixel should be: the color buffer. So every time you redraw the screen, you have to clear that color buffer first, or the previous frame's pixels stick around. There's another buffer I may as well talk about at this point, which is the depth buffer. When you're drawing in 3D space, how do you know when one thing is in front of another? Well, one way would be to save that depth information per pixel, like a texture.

image

You can see here I'm using grey values to represent depth in this texture. So when drawing something, OpenGL will first compare the depth of that fragment before deciding whether it should replace that point in the color buffer and depth buffer.
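The depth comparison described here can be sketched in plain C (this is an illustration with made-up names, assuming OpenGL's default "closer means smaller depth value" comparison; it is not real OpenGL code):

```c
/* One pixel's worth of the color and depth buffers. */
typedef struct {
    float color;  /* stand-in for the stored color */
    float depth;  /* stand-in for the stored depth */
} Pixel;

/* Returns 1 if the incoming fragment passed the depth test and was
 * written; 0 if it was occluded and both buffers stayed untouched. */
int depth_test_write(Pixel *p, float new_color, float new_depth) {
    if (new_depth < p->depth) {
        p->color = new_color;
        p->depth = new_depth;
        return 1;
    }
    return 0;
}
```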

So we'll add this to our clear function:

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);


I'm going to assume you guys are all fine with bitwise operations but if you have any trouble understanding what I wrote please feel free to comment.
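For anyone who does want the bitwise bit spelled out: each GL_*_BUFFER_BIT constant has a single distinct bit set, and the | merges them into one mask that glClear then tests bit by bit. A sketch with made-up values (not OpenGL's real constants):

```c
/* Illustrative single-bit flags, in the style of GL_COLOR_BUFFER_BIT
 * and GL_DEPTH_BUFFER_BIT (the real values differ). */
enum {
    CLEAR_COLOR = 1 << 0,  /* binary 01 */
    CLEAR_DEPTH = 1 << 1   /* binary 10 */
};

/* A glClear-style function would test each bit of the mask like this. */
int has_flag(unsigned mask, unsigned flag) {
    return (mask & flag) != 0;
}
```

So CLEAR_COLOR | CLEAR_DEPTH is binary 11, and the & inside has_flag picks the individual bits back out.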

And that really is all we need to render a simple rectangle to the screen! But there's a problem with this approach. Can you see what it is? Every frame, we are telling the computer everything we want to render all over again, re-sending every single vertex. That's a waste of CPU time. This approach is called immediate mode, and that's also why it's pretty terrible.

So that's where we'll stop with this part of the post. More will be added in the future, but I can see this is already running quite long, so I'll get some feedback from you guys before I continue! I know it seems like we haven't done much so far, but from this point on it will actually be pretty simple, because we've covered a few core concepts. Namely:
- Some history of Graphics Libraries
- What the fixed function pipeline is
- What projection matrices are
- OpenGL's -1..1 ranged plane
- Orthographic projection
- The immediate mode
and
- The color and depth buffers

Next week, if you guys want, I'll add part 2, in which we'll still use immediate mode but we will draw stuff in 3D! And maybe look at another rendering mode like vertex arrays.

Comments

  • Very cool learning process / resource. Well done :)

    I'm really not trying to sound pedantic now (but I'm going to come across like I am!), but are you focusing solely on a "rendering" engine here, or a full game engine? I ask because the rendering component is really just a small (medium?) part of what a full-on "game engine" needs to have built for it, alongside such other components as Input, Audio, IO, AI, Animation, UI, Networking, etc, etc.

    Just making sure that people don't confuse "game" and "rendering" engine as one and the same, for learning purposes :)
  • Oh no, if there's enough interest for me to continue I'm going to cover a lot of other aspects :) visuals are just kinda where you have to start.
  • What was the primary programming language used for this and what are the target platforms?
    In my opinion, South Africa needs more people polishing their skills in established engines.
    That said, I do admire the capability you demonstrate.
  • About 80% of the books that I read show how to make a mini engine and maybe one or two games at the end. It was after 2010 that game programming books started to cover games using commercial engines more. Here are some from the list that I found amusing...

    Advanced Game Design with HTML5 and JavaScript - Rex van der Spuy
    I managed to collect all of Rex's books. He is the best game programming author and he responds to all the emails.
    The book covers principles of game programming using just canvas and JavaScript, but the lessons learned can be used everywhere. Near the end he shows how to combine all the principles to make a small engine and a Breakout game as the finale. The engine is similar to cocos2d-js

    Building JavaScript Games: for Phones, Tablets, and Desktop - Arjan Egges
    There is a similar book on "XNA" written by the same author, and this book was a rewrite in JavaScript. The book has four parts and four games; it starts with a simple paint can game, and that's where you start writing your engine. The book progresses by improving the engine slowly as you create other games, and by the end of the book you have four polished games and a small canvas engine.

    Build your own 2D Game Engine and Create Great Web Games: Using HTML5, JavaScript, and WebGL - Kelvin Sung
    This book again has an older version written in XNA. I read the book because I wanted to remind myself of OpenGL and wanted to use some shaders in my cocos2d-x projects. Eventually I never got to finish the book, but I managed to go through the game code that was covered in it. Overall the book is nice: it covers software engineering, computer graphics, mathematics, physics, game development, game mechanics, and level design in the context of building a 2D game engine from scratch. What I really loved most about the book was the coverage of shaders, because I could just copy and paste them into my cocos2d-x projects and they worked.

    Game Coding Complete, Fourth Edition - Mike McShaffry
    I can't say much more about this book except that it is the best. If you want to make your own mini-Unity engine, pick up this book. If you don't have a background in DirectX, I suggest you read one of Frank Luna's books first.
  • SkinnyBoy said:

    Game Coding Complete, Fourth Edition - Mike McShaffry
    I can't say much more about this book except that it is the best. If you want to make your own mini-Unity engine, pick up this book.
    image
  • Part 1 and a half - Texturing and colouring our quads

    So, so far we have this:
    WINDOW.create();

    	glMatrixMode(GL_PROJECTION);
    		glLoadIdentity();
    		glOrtho(0, 800, 600, 0, -1, 1);
    	glMatrixMode(GL_MODELVIEW);

    	while (true) {
    		glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    		glBegin(GL_QUADS);
    			glVertex2f(0,0);
    			glVertex2f(200,0);
    			glVertex2f(200,100);
    			glVertex2f(0,100);
    		glEnd();

    		WINDOW.update();
    		WINDOW.sync();
    	}


    But this is a little boring. I'm not going to say you can't make a game with graphics like this (someone will probably pull out some cool game from the depths of the internet), but obviously only dealing with one shape and one colour is a little limiting.

    So, as most of you will have noticed, glBegin is parameterised. So what else can we put in there? Well, let's consult the OpenGL online docs (which you can find here, by the way; an incredibly useful resource).

    my bible
    image
    "Ten symbolic constants are accepted: GL_POINTS, GL_LINES, GL_LINE_STRIP, GL_LINE_LOOP, GL_TRIANGLES, GL_TRIANGLE_STRIP, GL_TRIANGLE_FAN, GL_QUADS, GL_QUAD_STRIP, and GL_POLYGON."
    From this point on we'll concentrate on only using triangles (GL_TRIANGLES). But why, Ben? Well, because triangles are actually incredibly flexible and you can make pretty much any shape using them. The quad was really just being made up of two triangles anyway.

    So where we had
    glBegin(GL_QUADS);
    	glVertex2f(0,0);
    	glVertex2f(200,0);
    	glVertex2f(200,100);
    	glVertex2f(0,100);
    glEnd();


    We'll be replacing that with
    glBegin(GL_TRIANGLES);
    	glVertex2f(0,0);
    	glVertex2f(200,0);
    	glVertex2f(0,100);
    	glVertex2f(200,0);
    	glVertex2f(0,100);
    	glVertex2f(200,100);
    glEnd();


    It might seem like it's a bit over the top right now, but it really makes the whole process a lot easier moving forward. Remember, we don't plan on staying in immediate mode forever!
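    To spell out how the six triangle vertices above come from the quad's four corners, here's a small sketch in plain C (the helper name is made up):

```c
#include <string.h>

/* Expand a quad's 4 corners into the 6 vertices of 2 triangles,
 * using the same vertex order as the GL_TRIANGLES snippet above. */
void quad_to_triangles(const float quad[4][2], float out[6][2]) {
    static const int idx[6] = { 0, 1, 3, 1, 3, 2 };
    for (int i = 0; i < 6; i++)
        memcpy(out[i], quad[idx[i]], sizeof(float) * 2);
}
```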

    So that's cool, how about we get some colours going now?
    glColor3f(0,1,0);
    glBegin(GL_TRIANGLES);
    	glVertex2f(0,0);
    	glVertex2f(200,0);
    	glVertex2f(0,100);
    	glVertex2f(200,0);
    	glVertex2f(0,100);
    	glVertex2f(200,100);
    glEnd();


    And just like that we have a green square!
    Why though? Well, just like our vertex function, the 3 represents the dimensions of the vector and the f indicates floating point values. The parameters are red, green, and blue, all within the range of 0..1 inclusive. What if you'd like to use some transparency? Well, then just use this function:
    glColor4f(r,g,b,a);


    Simple isn't it?
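    One small gotcha: art tools usually give you colour channels as 0..255 values, while glColor3f/glColor4f want 0..1 floats, so you divide by 255. A tiny sketch (the helper name is made up):

```c
/* Convert a 0..255 colour channel to the 0..1 float glColor expects. */
float channel_to_unit(int c) {
    return (float)c / 255.0f;
}
```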

    Okay, so let's get onto the last little cool bit before we try to make a little game: texturing. I'm going to have to apologise here, because I'm going to lean on a library to lightly go over this. I haven't learnt how to load textures myself, as I already have a library that can do that for me, so I'm going to work with the assumption that in whatever language you are working in you can hopefully find a library to do this for you. I use slick-utils, which you'll see in the little demo I plan to make for you guys before the next part.

    So let's imagine we've just loaded in a texture using some made-up library
    OUR_TEXTURE = new Texture("From\some\path");

    How are we going to use that with our rectangle?

    Well, we need to let OpenGL know where things will map out. This is commonly known as UV mapping. Why? Well, because we normally use x and y to refer to vertices, so it just makes things a little easier to call these the u and v axes. Later on, when we use shaders, we'll again change what we call these axes, but for now it's u and v.

    So lets say we have just one triangle for now:
    image

    How are things going to map out? Well OpenGL uses this plane for texturing:
    image

    You'll notice it starts from the top left instead of what would be a more straightforward bottom left. Why is this? Well, this goes back to how screens used to work back when we had CRTs. The beam would start from the top left and move across to the right-hand side. It would then backtrack one line down and move across again, on and on. This has actually influenced a lot of coordinate systems in computing. Even though LED screens don't really need this, the conventions have stuck.
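    Converting a pixel position inside a texture into u,v values is then just a divide by the image size, with v measured from the top as described above. A small sketch (the helper names are made up):

```c
/* Map a pixel position inside a w*h image to texture coordinates in
 * 0..1, following the top-left-origin convention used in this post. */
float pixel_to_u(float px, float image_w) { return px / image_w; }
float pixel_to_v(float py, float image_h) { return py / image_h; }
```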

    So we're going to be using the following function to let our renderer know where things map out to:
    glTexCoord2f(u,  v);


    So, to try to make things a little clearer, I'm first going to write out the parameters in their algebraic form, i.e. x1, x2.. u1, u2, before I continue on to use real values.

    So what we'll end up with is this

    glBegin(GL_TRIANGLES);
    	glTexCoord2f(u1, v1);
    	glVertex2f(x1, y1);

    	glTexCoord2f(u2, v1);
    	glVertex2f(x2, y1);

    	glTexCoord2f(u1, v2);
    	glVertex2f(x1, y2);

    	glTexCoord2f(u2, v1);
    	glVertex2f(x2, y1);

    	glTexCoord2f(u1, v2);
    	glVertex2f(x1, y2);

    	glTexCoord2f(u2, v2);
    	glVertex2f(x2, y2);
    glEnd();


    So as you can see we're simply providing the texture coordinate for each vertex. Let's fill this in with real values now.

    glBegin(GL_TRIANGLES);
    	glTexCoord2f(0, 0);
    	glVertex2f(0, 0);

    	glTexCoord2f(1, 0);
    	glVertex2f(200, 0);

    	glTexCoord2f(0, 1);
    	glVertex2f(0, 100);

    	glTexCoord2f(1, 0);
    	glVertex2f(200, 0);

    	glTexCoord2f(0, 1);
    	glVertex2f(0, 100);

    	glTexCoord2f(1, 1);
    	glVertex2f(200, 100);
    glEnd();


    Now the renderer knows where to map things to! Yay! But hold on, we haven't actually told it which texture to use yet. Let's change that.

    glBindTexture(GL_TEXTURE_2D, OUR_TEXTURE);
    glBegin(GL_TRIANGLES);
    	glTexCoord2f(0, 0);
    	glVertex2f(0, 0);

    	glTexCoord2f(1, 0);
    	glVertex2f(200, 0);

    	glTexCoord2f(0, 1);
    	glVertex2f(0, 100);

    	glTexCoord2f(1, 0);
    	glVertex2f(200, 0);

    	glTexCoord2f(0, 1);
    	glVertex2f(0, 100);

    	glTexCoord2f(1, 1);
    	glVertex2f(200, 100);
    glEnd();


    So here we are telling our renderer to bind our texture to the 2D texture target. Now, if any of you are following along at home, you'll probably be saying at this point that I've done something wrong, because you won't be seeing the texture. That's because we need to do one more thing: OpenGL is state based, which means we're going to need to enable texturing. Why not just always have texturing enabled? Well, then our renderer wouldn't be able to tell whether it should show a colour or our texture.

    So let's add this to our creation code:
    WINDOW.create();

    	glMatrixMode(GL_PROJECTION);
    		glLoadIdentity();
    		glOrtho(0, 800, 600, 0, -1, 1);
    	glMatrixMode(GL_MODELVIEW);

    	glEnable(GL_TEXTURE_2D);


    And that's really it, this is what our code would look like now:

    WINDOW.create();

    	glMatrixMode(GL_PROJECTION);
    		glLoadIdentity();
    		glOrtho(0, 800, 600, 0, -1, 1);
    	glMatrixMode(GL_MODELVIEW);

    	glEnable(GL_TEXTURE_2D);

    	while (true) {
    		glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    		glBindTexture(GL_TEXTURE_2D, OUR_TEXTURE);
    		glBegin(GL_TRIANGLES);
    			glTexCoord2f(0, 0);
    			glVertex2f(0, 0);

    			glTexCoord2f(1, 0);
    			glVertex2f(200, 0);

    			glTexCoord2f(0, 1);
    			glVertex2f(0, 100);

    			glTexCoord2f(1, 0);
    			glVertex2f(200, 0);

    			glTexCoord2f(0, 1);
    			glVertex2f(0, 100);

    			glTexCoord2f(1, 1);
    			glVertex2f(200, 100);
    		glEnd();

    		WINDOW.update();
    		WINDOW.sync();
    	}


    Which really isn't a lot when you look at it.
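    As an aside: the loop above pairs every position with a texture coordinate. A struct makes that pairing explicit, and (looking ahead) it's roughly the shape of data that vertex arrays will want uploaded in one go. A hedged sketch in plain C, with made-up names:

```c
typedef struct { float x, y, u, v; } TexturedVertex;

/* Build the 6 textured vertices for a rectangle at (x, y) of size w*h,
 * matching the two-triangle layout used in the snippet above. */
void make_textured_quad(float x, float y, float w, float h,
                        TexturedVertex out[6]) {
    const TexturedVertex corners[4] = {
        { x,     y,     0.0f, 0.0f },   /* top-left     */
        { x + w, y,     1.0f, 0.0f },   /* top-right    */
        { x,     y + h, 0.0f, 1.0f },   /* bottom-left  */
        { x + w, y + h, 1.0f, 1.0f },   /* bottom-right */
    };
    const int idx[6] = { 0, 1, 2, 1, 2, 3 };  /* two triangles */
    for (int i = 0; i < 6; i++)
        out[i] = corners[idx[i]];
}
```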

    And this is the kind of thing you should end up with
    image

    Now this really gets to why I took this path and why I like to show people this route first. This is by no means the best way to do things; it is actually about the worst possible way to render stuff. But here's the kicker: it is so damn simple that you can go on and make a simple game with what I have just shown you.

    I plan to make a little demo for you guys before the next part so that you can see this stuff in action. Input handling is tied to your window manager, so there's really no point in me spending time showing you how to do that here; it will be in my demo as well.

    Anyway, for the next post, would you guys prefer it if I went on to a better rendering mode, shaders, or sound using OpenAL? I hope you all have a great weekend!
  • I'm sorry for this bit being a bit code heavy, here's a picture of a cute cat
    image
  • I take back what I said: instead of sharpening Unity skills, continue being great at engine development. A post I read reminded me that there's a shortage of people like you.
    We appreciate the contribution.
    image
  • Very awesome!

    I have been coding a game engine for roughly 6 - 8 years now (I don't actually really know when it started). It has mostly been used as a tool for me to learn programming though as the game engine itself didn't have enough planning. I decided to put that one on hold to try out making a quick 2D game with OpenGL. It went similar to your stuff, just not as far.

    I found that it was easier than all the 3D stuff I was doing, since I could use images instead of meshes and didn't have to worry about the physics (I spent a looong time getting the car physics right).

    Then finally I've decided (sort of, maybe still deciding) to just use the Unreal Engine to make a game. It's been so many years of game engine development that I think I now need to join the masses and do it the easy way.

    Maybe I'll go back to one of my game engines at some point as I really enjoy it but I just don't have the time anymore. A fulltime job can really suck the life out of one.

    /end personal experience

    Anyway, well done. I always find these journeys interesting :)
This doesn't have to change what is being done (and it might have been shared before, but I didn't see it): is there any possibility of using your engine for a game we choose to create? It has the possibility of different platforms and an easily reusable code-base. I'm forcing my engine choices today (and can change them) simply because I attempted to build engines myself (unsuccessfully, where yours works, so that is successful), only spent a little time on them, and broke them up. When it becomes possible to use, I might move my projects to what you have created for yourself (and I hope for us! :D). Your way looks better than what I used for what I made (Game Maker, Unity, XNA, Monogame, more); I had to use tons of different approaches, since some of my needs were broken in the engine I started with and I had to change tons of times.