GLSL part 1

This article is the first in a series of tutorials aiming at writing a rather complete game engine for embedded devices.

What’s on the menu?

If all goes well, at the end of this tutorial, you will be able to render the following 3D scene:
If you are among the lucky few running a brand new web browser with WebGL support, you can rotate the scene with your mouse and zoom with the mouse wheel.

Even if this blog is about embedded development and OpenGL ES 2.0 in particular, this first post will be about desktop development in GL2. But don’t worry: our second tutorial will be all about getting the same output on an Android device.


This tutorial relies on three libraries:

  • FreeGLUT: This library will do all the dirty work of creating a valid window and requesting a valid OpenGL context. It also features a nice abstraction for handling mouse and keyboard events. You can get the Windows build here.
  • GLEW: OpenGL being a constantly evolving API, with different implementations coming from multiple vendors, you simply cannot directly link to OpenGL functions at compile time. Instead, you must retrieve their address at runtime using some GetProcAddress() equivalent, and it is a rather tedious process described here. The GLEW library abstracts all this.
  • glm: This nice C++ math library shares GLSL syntax and datatypes, so using it is really a breeze. This templated, header-only library is well documented, efficient and concise.

Project Setup

We will use MSVC for this tutorial. Create a new project and add the correct include and library paths for our dependencies:

  1. Right-Click on your project -> Properties -> Configuration Properties -> VC++ Directories
  2. Select the ‘All Configurations’ option from the Configuration dropdown before adding any new directory
  3. Edit the “Include Directories” field and add glm, FreeGlut and GLEW include directories : C:\glm-;C:\freeglut\include;C:\glew-1.12.0\include;$(IncludePath)
  4. Do the same for the “Library Directories” field and add the proper FreeGlut and GLEW library directories: C:\freeglut\lib;C:\glew-1.12.0\lib\Release\Win32;$(LibraryPath)
** Of course change the paths according to your setup **

Since glm is an include-only library, there is no .lib associated and therefore no entry in the Library Directories field.

The OpenGL Window

FreeGLUT comes with its own event/render loop. You must use FreeGLUT’s Callback Registration Functions to integrate your logic into the update loop. We implement the rendering logic in our onDraw() function and register it with glutDisplayFunc(onDraw). Note that all callback registrations must happen before the call to glutMainLoop() :

#include <GL/glew.h>
#include <GL/glut.h>
#include <glm/glm.hpp>

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#pragma comment(lib,"freeglut.lib")
#pragma comment(lib,"glew32.lib")

void onDraw() {
	static clock_t start = (clock_t)0;
	static int frameCnt = 0;
	const clock_t curr = clock();
	if ((curr - start) / (CLOCKS_PER_SEC / 1000) > 1000) {
		char buff[256];
		sprintf(buff, "Demo     fps: %d", frameCnt / (int)((curr - start) / CLOCKS_PER_SEC));
		glutSetWindowTitle(buff);
		start = curr;
		frameCnt = 0;
	}
	++frameCnt;

	glClear(GL_COLOR_BUFFER_BIT);
	glutSwapBuffers();
	glutPostRedisplay(); // keep the render loop spinning
}

int main(int argc, char * argv[]) {
	glutInit(&argc, argv);
	glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
	glutInitWindowSize(640, 480);
	glutCreateWindow("Demo");

	// glewInit() needs a valid OpenGL context, so it must run after glutCreateWindow()
	if (glewInit() != GLEW_OK)
		return -1;

	int glVersion[2] = {-1, -1};
	glGetIntegerv(GL_MAJOR_VERSION, &glVersion[0]);
	glGetIntegerv(GL_MINOR_VERSION, &glVersion[1]);

	printf("OpenGL version %d.%d (%s)\n", glVersion[0], glVersion[1], (const char *)glGetString(GL_VERSION));

	glutDisplayFunc(onDraw);
	glutMainLoop();

	return 0;
}
Build and run the program; you should get a nice black window with an FPS counter.

OpenGL Error Detection

*Every GL function call must be checked*
The following few lines will help you catch around 90% of the bugs appearing over the course of your project’s development.

Header File : GLCheck.hpp
#pragma once
#include <assert.h>

#define CHK_GL_BEFORE
#define CHK_GL_AFTER
#if defined(_DEBUG)
#define GL_CHECK(cmd) { assert(CHK_GL_BEFORE glGetError() == GL_NO_ERROR); } cmd; { assert(CHK_GL_AFTER glGetError() == GL_NO_ERROR); }
#else
#define GL_CHECK(cmd) cmd;
#endif //defined(_DEBUG)

This code performs the bare minimum just to get started. The only purpose of the empty CHK_GL_BEFORE and CHK_GL_AFTER tokens is to be captured by the assert() macro and displayed as-is, telling you whether the error was raised before or after the checked command. We will add support for logging and more advanced debugging features later on.

As a side note, glGetError() tends to degrade overall performance on many mobile targets, and should not be used in release builds.

Reading files

Let’s centralize every file access in a single FileManager class. We will extend it later on to support reading/writing data to different types of storage.
Header File : FileManager.hpp
#pragma once
#include <memory>

struct _dataBlob {
	inline _dataBlob(const unsigned char * const data, const size_t size) : data(data), size(size) {}
	inline ~_dataBlob() { delete [] data; }
	const unsigned char * const data;
	const size_t size;
};

typedef std::unique_ptr<_dataBlob> DataBlob;

struct FileMananger {
	static DataBlob Read(const char * const file);
};
We simply deal with blobs at the FileManager level.
Source File : FileManager.cpp
#include <stdio.h>
#include "FileManager.hpp"

DataBlob FileMananger::Read(const char * const file) {
	FILE * const fd = fopen(file, "rb");
	if (!fd)
		return DataBlob();
	fseek(fd, 0, SEEK_END);
	const long sz = ftell(fd);
	fseek(fd, 0, SEEK_SET); // rewind, otherwise fread() would start at the end and read 0 bytes
	char * const data = new char[sz + 1];
	const size_t read_sz = fread(data, 1, sz, fd);
	fclose(fd);
	data[read_sz] = '\0';
	return DataBlob(new _dataBlob(reinterpret_cast<const unsigned char *>(data), read_sz));
}

See how we add a trailing ‘\0’ to every file: in the case of blobs containing text data, there is absolutely no guarantee that a text block is null-terminated, so appending a null byte to every buffer does not hurt.

The Shader Class

OpenGL / OpenGL ES 2.0 comes with a programmable graphics pipeline. This means your job is to instruct the GPU on how to take a stream of vertices from video memory and output some pixels on a screen. To do that, you write small programs in the GLSL language and let the OpenGL driver compile and link them at runtime. This topic is the subject of a lot of tutorials, so we won’t go into these details and will assume a general knowledge of shader programs. We will only concentrate on writing a structured and extensible shader class that can serve as a base for larger developments.

OpenGL ES 2.0 supports two types of shaders:

  • Vertex Shader: executed on all the input vertices.
  • Fragment Shader: executed on all the pixels covered by the rasterized primitives.
For simplicity reasons, we store each shader in a separate file; the two are compiled and linked into a single program at runtime.

uniforms and attributes are the two types of input arguments defined in the versions of GLSL compatible with OpenGL ES 2.0. We must identify these arguments at link time and store their locations in order to access them from our C++ code.

Header File : BaseShader.hpp
#pragma once
#include <GL/glew.h>
#include <exception>

class ShaderException : public std::exception {
public:
	virtual const char * what() const throw() { return "Error in shader creation"; }
};

struct BaseShader {
	static const char * const _baseDir;
	const GLuint _progId;
	// wrap the calls to glGetXXXXLocation functions to reduce the surface of OpenGL dependency in derived classes
	const GLuint _getAttrib(const char * const name) const;
	const GLuint _getUniform(const char * const name) const;
	BaseShader(const char * const name);
	virtual ~BaseShader();
	void Enable() const;
};
Source File : BaseShader.cpp
#include <GL/glew.h>
#include <stdio.h>
#include <assert.h>
#include "GLCheck.hpp"
#include "BaseShader.hpp"
#include "FileManager.hpp"

inline static const bool _checkShaderCompileState(GLuint id) {
	GLint len = 0;
	GL_CHECK(glGetShaderiv(id, GL_INFO_LOG_LENGTH, &len));

	if (len > 0) {
		GLchar * const nfoBuff = new GLchar[len];
		GL_CHECK(glGetShaderInfoLog(id, len, &len, nfoBuff));
		printf("compiler log:\n%s\n", nfoBuff);
		delete [] nfoBuff;
	} else {
		printf("no compiler log\n");
	}

	GLint compileOk = false;
	GL_CHECK(glGetShaderiv(id, GL_COMPILE_STATUS, &compileOk));
	if (!compileOk)
		throw ShaderException();
	return true;
}

inline static const GLuint _createProgram(const char * const path, const char * const basename) {
	GL_CHECK(GLuint vsh = glCreateShader(GL_VERTEX_SHADER));
	GL_CHECK(GLuint fsh = glCreateShader(GL_FRAGMENT_SHADER));
	char tmp[512];
	sprintf(tmp, "%s/%s.vert", path, basename);
	const DataBlob vsh_data = FileMananger::Read(tmp);
	sprintf(tmp, "%s/%s.frag", path, basename);
	const DataBlob fsh_data = FileMananger::Read(tmp);

	GL_CHECK(glShaderSource(vsh, 1, reinterpret_cast<const char * const *>(&vsh_data->data), 0));
	GL_CHECK(glShaderSource(fsh, 1, reinterpret_cast<const char * const *>(&fsh_data->data), 0));

	GL_CHECK(glCompileShader(vsh));
	_checkShaderCompileState(vsh);
	GL_CHECK(glCompileShader(fsh));
	_checkShaderCompileState(fsh);

	GL_CHECK(GLuint programId = glCreateProgram());
	GL_CHECK(glAttachShader(programId, vsh));
	GL_CHECK(glAttachShader(programId, fsh));
	GL_CHECK(glLinkProgram(programId));

	GLint len = 0;
	GL_CHECK(glGetProgramiv(programId, GL_INFO_LOG_LENGTH, &len));

	if (len > 0) {
		GLchar * const nfoBuff = new GLchar[len];
		GL_CHECK(glGetProgramInfoLog(programId, len, &len, nfoBuff));
		printf("linker log:\n%s\n", nfoBuff);
		delete [] nfoBuff;
	} else {
		printf("no linker log\n");
	}

	GLint link_ok = GL_FALSE;
	GL_CHECK(glGetProgramiv(programId, GL_LINK_STATUS, &link_ok));
	if (!link_ok)
		throw ShaderException();

	GL_CHECK(glDetachShader(programId, vsh));
	GL_CHECK(glDetachShader(programId, fsh));
	GL_CHECK(glDeleteShader(vsh));
	GL_CHECK(glDeleteShader(fsh));
	return programId;
}

const GLuint BaseShader::_getAttrib(const char * const name) const { GL_CHECK(const GLuint id = glGetAttribLocation(_progId, name)); return id; }
const GLuint BaseShader::_getUniform(const char * const name) const { GL_CHECK( const GLuint id = glGetUniformLocation(_progId, name)); return id; }

const char * const BaseShader::_baseDir = "../Shaders/";
BaseShader::BaseShader(const char * const name) : _progId(_createProgram(_baseDir, name)) {}
BaseShader::~BaseShader() {
	GL_CHECK(glDeleteProgram(_progId));
}

void BaseShader::Enable() const {
	GL_CHECK(glUseProgram(_progId));
}

The Camera Class

OpenGL ES 2.0 deprecated the entire fixed pipeline, so the matrix stack was eliminated and the GLU functions manipulating it are of no use. We have to handle matrix transformations ourselves now. The glm library comes to the rescue, with its mat4 type and its implementation of the GLU functions (gluLookat, gluPerspective, gluOrtho, gluProject/gluUnproject, etc …).

We also need a high level object to clearly visualize the View->Projection->Screen transformations we must apply to our 3D vertices in order to draw a frame. This is the role of the Camera object.

We will start by implementing a 3rd person Camera. This camera is defined in space by two points and a vector:

  • Position point: the position of the ‘eye’ of the camera
  • Lookat point: the point targeted by the camera, so that the direction vector is normalize(lookat - pos)
  • Up vector: defines the rotation around the camera’s direction vector

This camera uses perspective projection; it needs an angle-of-view parameter and a couple of clipping planes. It also stores the screen size, to get the current aspect ratio and to be able to un-project 2D screen coordinates into the 3D world later on.

Internally, the camera holds the Projection and View matrices. To properly update these two matrices, the Camera::Refresh() function must be called at each step of the game loop.

The View matrix transforms a 3D point into what is called the ‘eye space’: the origin is at the camera position, and the three axes are defined as:

  • Y axis : normalize(lookat – pos), the direction vector
  • X axis : cross(Y, up), the ‘right’ vector, normal to the plane defined by direction and up vector
  • Z axis : cross(X, Y), the proper ‘up’ vector normal to the XY plane

The Projection matrix defines what is called the ‘clip space’. This 3D projective space is often visualized as a square frustum.

For any 3D point in our world space, we can get the projected 2D point from our camera’s point of view using the formula:
projected_pos = Proj * View * world_pos.
This operation is done in the vertex shader.

Note that the resulting projected_pos is in ‘clip space’:

  • The x and y components contain the projected (2D) position of our point.
  • The z component contains the depth value.
  • The w component contains the perspective divisor value. If the x, y and z values are all in the range [-w, w], the point is visible.

To convert from clip space to Normalized Device Coordinates, simply divide each component by w. This will remap our values in the range [-1, 1]. To finally get our coordinates in screen space, remap to [0, 2] and multiply by the half sizes of the screen:

const float scr_x = (1.0f + projected_pos.x /projected_pos.w) * scr_width / 2.0f;
const float scr_y = (1.0f + projected_pos.y /projected_pos.w) * scr_height / 2.0f;

You are likely interested in the depth value as well:

const float normalized_depth = (1.0f + projected_pos.z /projected_pos.w) / 2.0f;

The programmable OpenGL pipeline performs the ‘divide by w’ step internally. The glViewport() function sets the screen geometry and the glDepthRangef() function controls the depth range. See §2.12 of the OpenGL ES 2.0 Specification for more details.

Header File : Camera.hpp
#pragma once
#include <glm/glm.hpp>

struct Camera {
	static const int NO_UPDATE_NEEDED = 0x00;
	static const int NEED_PROJ_UPDATE = 0x01;
	static const int NEED_VIEW_UPDATE = 0x02;
	int _dirty;

	glm::vec3 _up;
	glm::vec3 _pos;
	glm::vec3 _lookat;

	float _fov;
	float _zNear;
	float _zFar;
	glm::vec4 _screen;

	glm::mat4 _projMat;
	glm::mat4 _viewMat;


	inline Camera(const glm::vec3 & pos, const glm::vec3 & lookat, const float FoV, const float zNear, const float zFar, const glm::vec4 & screen) :
		_dirty(NEED_PROJ_UPDATE | NEED_VIEW_UPDATE),
		_up(0.0f, 0.0f, 1.0f),
		_pos(pos),
		_lookat(lookat),
		_fov(FoV),
		_zNear(zNear),
		_zFar(zFar),
		_screen(screen) { }

	inline void SetUp(const glm::vec3 & up) { _up = up; _dirty |= NEED_VIEW_UPDATE; }
	inline void SetPosition(const glm::vec3 & pos) { _pos = pos; _dirty |= NEED_VIEW_UPDATE; }
	inline void SetLookat(const glm::vec3 & lookat) { _lookat = lookat; _dirty |= NEED_VIEW_UPDATE; }

	inline const glm::vec3 & GetUp() const { return _up; }
	inline const glm::vec3 & GetLookat() const { return _lookat; }
	inline const glm::vec3 & GetPosition() const { return _pos; }

	inline void SetFoVY(const float angle) { _fov = angle; _dirty |= NEED_PROJ_UPDATE; }
	inline void SetPlanes(const float zNear, const float zFar) { _zNear = zNear; _zFar = zFar; _dirty |= NEED_PROJ_UPDATE; }

	inline const float & GetFoVY() const { return _fov; }
	inline const float & GetZNear() const { return _zNear; }
	inline const float & GetZFar() const { return _zFar; }

	inline const glm::mat4 & GetViewMat() const { return _viewMat; }
	inline const glm::mat4 & GetProjMat() const { return _projMat; }

	void Refresh();
	void Resize(const int width, const int height);
	void Rotate(const float dx, const float dy);
	void Zoom(const float fact);
};

Source File : Camera.cpp
#include <glm/gtc/matrix_transform.hpp>
#include <GL/glew.h>

#include "Camera.hpp"
#include "GLCheck.hpp"

void Camera::Refresh() {
	if (_dirty == NO_UPDATE_NEEDED)
		return;
	if (_dirty & NEED_VIEW_UPDATE)
		_viewMat = glm::lookAt(_pos, _lookat, _up);
	if (_dirty & NEED_PROJ_UPDATE)
		_projMat = glm::perspective(_fov, _screen.z / _screen.w, _zNear, _zFar);
	_dirty = NO_UPDATE_NEEDED;
}

void Camera::Resize(const int width, const int height) {
	_screen = glm::vec4(_screen.x, _screen.y, (float)width, (float)height);
	GL_CHECK(glViewport((int)_screen.x, (int)_screen.y, width, height));
	_dirty |= NEED_PROJ_UPDATE;
}

#define DEG2RADFACT 0.0174533f
#define RAD2DEGFACT 57.2957795f

void Camera::Rotate(const float dx, const float dy) {
	const float theta1 = dx * DEG2RADFACT;
	const float theta2 = dy * DEG2RADFACT;

	const float cosT1 = glm::cos(theta1);
	const float sinT1 = glm::sin(theta1);
	const float cosT2 = glm::cos(theta2);
	const float sinT2 = glm::sin(theta2);

	// we rotate the camera 'eye' around the lookat point, first on the horizontal plane, then on the vertical plane.

	const glm::vec3 d1(_pos - _lookat); // d1 := camera director vector
	// rotate our camera director vector on the horizontal plane (XY) using the dx value as angle
	_pos.x = d1.x * cosT1 - d1.y * sinT1 + _lookat.x;
	_pos.y = d1.x * sinT1 + d1.y * cosT1 + _lookat.y;

	const glm::vec3 d2(_pos - _lookat); // d2 := new director vector, since previous XY rotation modified it
	// rotate our new camera director vector on the vertical plane (YZ) using the dy value as angle
	_pos.y = d2.y * cosT2 - d2.z * sinT2 + _lookat.y;
	_pos.z = d2.y * sinT2 + d2.z * cosT2 + _lookat.z;
	_dirty |= NEED_VIEW_UPDATE;
}

void Camera::Zoom(const float fact) {
	_pos = _lookat + ((_pos - _lookat) * fact);
	_dirty |= NEED_VIEW_UPDATE;
}

The Camera class internal attributes are exposed through Getter/Setter functions. Each attribute setter invalidates the corresponding matrix. Invalid matrices are updated once at the start of the game loop, in Camera::Refresh(). This way, multiple calls to SetXXX functions only result in one matrix update. If, at some point in the frame logic, someone needs to access the internal matrices, (s)he must call Camera::Refresh() first to sync them.

The Camera::Rotate() implementation is very basic and will be updated later on.

The Geometry

This is the final piece of the puzzle: how to store and draw our geometry. We will create a container object to handle every geometry-related action.

For this first demo, we will draw some random animated points on the surface of a sphere. We won’t update our geometry every frame though, so we rely on time-dependent attributes to compute the resulting position of the points every frame.

Storing the geometry

We express each point as a pair of random angles defining the starting position on the surface of the sphere, and a random 2D vector defining the speed of that point. Since we don’t need much precision for these random values, each one is stored on a single byte.

A point is therefore defined as:
#pragma pack(push, 1)
struct point {
	unsigned char angle0; // starting position on the sphere
	unsigned char angle1;
	unsigned char rnd0;   // 2D speed vector
	unsigned char rnd1;
};
#pragma pack(pop)

We upload our point definitions once, when the container object is constructed, into a video memory region called a Vertex Buffer Object. There are no pointer semantics/arithmetic in OpenGL ES 2.0; VBOs are instead identified by a number. The driver maintains the association between this identifier and the corresponding video memory blocks containing our vertex data.

The OpenGL API defines the use of Vertex Buffer Objects as a two step process:

  1. We must first activate our VBO so that subsequent calls to OpenGL functions affect our memory region. The glBindBuffer() function activates a buffer.
  2. Next, we perform all the OpenGL operations needed (updating geometry data, defining internal data representation, issuing draw commands, etc …)
Once we are done with our VBO for the current frame, it is considered good practice to unbind it so that other, unrelated, OpenGL calls won’t alter its state. There is no unbinding function; we instead call glBindBuffer() with the special id 0.

For the record, this usage pattern has been generalised to other OpenGL objects (textures, shaders, etc.) in recent versions of the OpenGL/OpenGL ES APIs.

Header File : GraphicObject.hpp
#pragma once

#include <GL/glew.h>
#include "BaseShader.hpp"

struct DemoShader : public BaseShader {
	const GLuint uMVP;
	const GLuint uEyePos;
	const GLuint uTimer;
	const GLuint aAngles;
	const GLuint aRandom;
	inline DemoShader(const char * const filename) :
		BaseShader(filename),
		uMVP(_getUniform("uMVP")),
		uEyePos(_getUniform("uEyePos")),
		uTimer(_getUniform("uTimer")),
		aAngles(_getAttrib("aAngles")),
		aRandom(_getAttrib("aRandom"))
	{ }
};

struct GraphicObject {
	const int _pointCount;
	const DemoShader & _sh;
	const GLuint _vbo;
	GraphicObject(const int vtxNumber, const DemoShader & sh);
	~GraphicObject();

	void Draw();
};
Source File : GraphicObject.cpp
#include <stdlib.h> // for rand()
#include "GraphicObject.hpp"
#include "GLCheck.hpp"

inline static const GLuint _createVBO(const int pointCount) {
	GLuint vbo;
	unsigned char * const rndBuff = new unsigned char[pointCount * 4];
	for (int i = 0; i < pointCount * 4; ++i)
		rndBuff[i] = rand() % 256;
	GL_CHECK(glGenBuffers(1, &vbo));
	GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, vbo));
	GL_CHECK(glBufferData(GL_ARRAY_BUFFER, pointCount * 4, rndBuff, GL_STATIC_DRAW));
	GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, 0));
	delete [] rndBuff;
	return vbo;
}

GraphicObject::GraphicObject(const int vtxNumber, const DemoShader & sh) : _pointCount(vtxNumber), _sh(sh), _vbo(_createVBO(_pointCount)) { }

GraphicObject::~GraphicObject() {
	GL_CHECK(glDeleteBuffers(1, &_vbo));
}

void GraphicObject::Draw() {
	// activate our VBO ... another VBO may very well be currently bound to the GL_ARRAY_BUFFER target
	GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, _vbo));
	GL_CHECK(glEnableVertexAttribArray(_sh.aAngles));
	GL_CHECK(glEnableVertexAttribArray(_sh.aRandom));
	GL_CHECK(glVertexAttribPointer(_sh.aAngles, 2, GL_UNSIGNED_BYTE, GL_TRUE, 4, 0));
	GL_CHECK(glVertexAttribPointer(_sh.aRandom, 2, GL_UNSIGNED_BYTE, GL_TRUE, 4, (void *)2));

	GL_CHECK(glDrawArrays(GL_POINTS, 0, _pointCount));

	GL_CHECK(glBindBuffer(GL_ARRAY_BUFFER, 0)); // play nice, bind the default buffer to the GL_ARRAY_BUFFER target
}

Drawing the geometry

The GLSL code handles the drawing logic: each random point is projected onto the surface of a sphere; we then draw a sprite as a very faint halo using additive blending.

Vertex Shader : demo.vert
uniform mat4 uMVP;
uniform float uTimer;
uniform vec3 uEyePos;
attribute vec2 aAngles;
attribute vec2 aRandom;

#define PI 3.14159265358979
#define PI2 6.283185307179586

void main() {
	// some time-dependent random 2d variable, defined in [0, 1], with uniform distribution
	vec2 vRnd = fract(aAngles + aRandom * uTimer * 0.1);

	// spherical coordinates (Theta, Phi) of that random point on the surface of the sphere ... note that Rho = 1.0
	vec2 vNormRnd = vec2(PI2 * vRnd.x, acos(2.0 * vRnd.y - 1.0));

	vec2 cva = cos(vNormRnd); // {cosTheta, cosPhi}
	vec2 sva = sin(vNormRnd); // {sinTheta, sinPhi}

	//convert from Spherical to Cartesian : {x, y, z} <= {cosT * sinP, sinT * sinP, cosP} ... again, note that Rho = 1.0
	vec3 npos = vec3(cva.x * sva.y, sva.x * sva.y, cva.y);

	gl_Position = uMVP * vec4(npos, 1.0);

	// the further particles are from the observer, the smaller they appear
	float distFact = 1.0 - smoothstep(0.0, 3.0, length(npos - uEyePos));
	gl_PointSize = 8.0 + distFact * 12.0;
}

More information is available online on picking a uniformly distributed random point on a sphere and on converting spherical coordinates to Cartesian coordinates.

Fragment Shader : demo.frag
precision mediump float;

void main() {
	// the 'smoothed' distance to the center of the sprite
	float dist = 1.0 - smoothstep(0.0, 0.5, length(gl_PointCoord - 0.5));
	gl_FragColor = vec4(0.4, 0.5, 0.7, dist * 0.4);
}

When using the content of a VBO to draw some geometry, we must define how to map specific regions of the VBO to the corresponding input streams of a shader program. This is the role of the glEnableVertexAttribArray() and glVertexAttribPointer() functions.

The same VBO can be drawn from multiple shader programs in the same frame. In OpenGL ES 2.0, the attribute layout bound to a VBO does not persist across bindings, so we must perform the glEnableVertexAttribArray()/glVertexAttribPointer() sequence every time a VBO is bound. Newer versions of the OpenGL API encapsulate the VBO/shader association in Vertex Array Objects, but in GLES 2.0 we are stuck with redefining these bindings each time.

Let’s Put It All Together : The Renderer Object

Our Renderer object contains our Geometry and Shader objects, and exposes a Camera object.

Header File : Renderer.hpp
#pragma once
#include "Camera.hpp"
#include "GraphicObject.hpp"

struct Renderer {
	Camera _cam;
	DemoShader _sh;
	GraphicObject _sd;
	Renderer();
	inline Camera & GetCam() { return _cam; }
	void onResize(int w, int h);
	void onDraw(const float elapsedTime);
};
Source File : Renderer.cpp
#include <glm/gtc/type_ptr.hpp>

#include "Renderer.hpp"
#include "GLCheck.hpp"

Renderer::Renderer() :
	_cam(glm::vec3(1.0f, 1.0f, 0.5f), glm::vec3(0.0f, 0.0f, 0.0f), 90.0f, 0.001f, 100.0f, glm::vec4(0, 0, 640, 480)),
	_sh("demo"), // basename of demo.vert / demo.frag
	_sd(10000, _sh) {
	GL_CHECK(glEnable(GL_BLEND)); // additive-style blending for the faint halos
	GL_CHECK(glBlendFunc(GL_SRC_ALPHA, GL_ONE));
	GL_CHECK(glEnable(GL_VERTEX_PROGRAM_POINT_SIZE)); // desktop GL needs these two for
	GL_CHECK(glEnable(GL_POINT_SPRITE));              // gl_PointSize / gl_PointCoord; GLES 2.0 does not
}

void Renderer::onResize(int w, int h) {
	_cam.Resize(w, h);
}

void Renderer::onDraw(const float elapsedTime) {
	_cam.Refresh();
	GL_CHECK(glClear(GL_COLOR_BUFFER_BIT));
	_sh.Enable();
	const glm::mat4 mvp = _cam.GetProjMat() * _cam.GetViewMat();
	GL_CHECK(glUniformMatrix4fv(_sh.uMVP, 1, false, glm::value_ptr(mvp)));
	GL_CHECK(glUniform3fv(_sh.uEyePos, 1, glm::value_ptr(_cam.GetPosition())));
	GL_CHECK(glUniform1f(_sh.uTimer, elapsedTime));
	_sd.Draw();
}

Complete Program

The main program simply attaches our Renderer object to FreeGLUT’s mouse and screen update hook functions.
Source File : Main.cpp
#include <GL/glew.h>
#include <GL/glut.h>

#include <time.h>

#include "GLCheck.hpp"
#include "Renderer.hpp"

#pragma comment(lib,"freeglut.lib")
#pragma comment(lib,"glew32.lib")

Renderer * g_renderer;
inline void onReshapeWrapper(int width, int height) {
	g_renderer->onResize(width, height);
}

inline void onDrawWrapper() {
	g_renderer->onDraw((float)clock() / CLOCKS_PER_SEC);
	glutSwapBuffers();
	glutPostRedisplay(); // keep rendering frames
}

int px = 0;
int py = 0;
inline void onMouseWrapper(int button, int state, int x, int y) {
	if ((button == 3) || (button == 4)) { // GLUT specific behaviour : mouse wheel event
		if (state == GLUT_UP) return;
		const float zoomFact = 1.0f + 0.01f * ((button == 3) ? -1.0f : 1.0f);
		g_renderer->GetCam().Zoom(zoomFact);
	} else { // normal button
		px = x;
		py = y;
	}
}

inline void onMouseMoveWrapper(int x, int y) {
	g_renderer->GetCam().Rotate((float)(x - px), (float)(y - py));
	px = x;
	py = y;
}

int wnd;
int main(int argc, char ** argv) {
	glutInit(&argc, argv);

	glutInitDisplayMode(GLUT_RGBA | GLUT_DOUBLE);
	glutInitWindowSize(640, 480);
	wnd = glutCreateWindow("Demo");
	if (glewInit() != GLEW_OK)
		return -1;

	g_renderer = new Renderer();

	// all callback registrations must happen before glutMainLoop()
	glutDisplayFunc(onDrawWrapper);
	glutReshapeFunc(onReshapeWrapper);
	glutMouseFunc(onMouseWrapper);
	glutMotionFunc(onMouseMoveWrapper);
	glutMainLoop();

	delete g_renderer;
	return 0;
}
