OpenGL / Cinder – Custom Dynamic Attributes Example (GLSL 1.2)

Here’s an example of using dynamic custom attributes in GLSL 1.2 and Cinder.

void makeVBO()
{
	gl::VboMesh::Layout layout;
	layout.setDynamicPositions();    // positions will be written via the vertex iterator
	layout.addDynamicCustomFloat();  // add a custom dynamic float

	vboModel = gl::VboMesh(NUM_VBO_VERTICES, 0, layout, GL_POINTS); // create vbo mesh with dynamic positions
	GLuint loc = mShader.getAttribLocation("myAttribute"); // get location of "attribute float myAttribute" in the vertex shader
	vboModel.setCustomDynamicLocation(0, loc); // set the local 'id' of the attribute

	int distFromCenter = VBO_RADIUS;

	// generate random vertices
	vector<Vec3f> vPositions;
	for (int j = 0; j < NUM_VBO_VERTICES; ++j) {
		vPositions.push_back(Vec3f(-(distFromCenter / 2) + randFloat() * distFromCenter,
		                           -(distFromCenter / 2) + randFloat() * distFromCenter,
		                           -(distFromCenter / 2) + randFloat() * distFromCenter));
	}

	// iterate through vertices
	gl::VboMesh::VertexIter iter = vboModel.mapVertexBuffer();
	for (int idx = 0; idx < NUM_VBO_VERTICES; ++idx) {
		// set position of vertex
		iter.setPosition(vPositions[idx]);
		// set the value of 'myAttribute' to a random number
		iter.setCustomFloat(0, 10 + Rand::randFloat() * 90);
		++iter;
	}
}


void draw()
{
	// bind the shader and draw the mesh we built in makeVBO()
	mShader.bind();
	gl::draw(vboModel);
	mShader.unbind();
}

Hundreds of Thousands of Particles at 60 fps

I’ve recently been getting to grips with Cinder, OpenGL and GLSL for a project, and I managed to get close to a million particles drawing at around 60fps.

Here’s what I did to achieve this:

Firstly I created a VBO mesh that contained randomly positioned vertices (each vertex represented a root particle position).

Using a VBO mesh means you don’t have to upload the geometry before every draw call; it gets uploaded to the GPU once, then you can transform and draw the referenced mesh in your render loop.

Once I had the VBO mesh, I created vertex and fragment shaders that accept GL_POINTS as the input and output types. This means the vertex shader is expecting a bunch of vertices and the fragment shader is expecting to draw a single point to the screen for each one.

The clever thing is, you can flick a switch in OpenGL to enable point sprites, which means you can tell the fragment shader to draw a texture at each point instead.

You can also tell the vertex shader what size each point sprite should be rendered at, based on its distance from the camera.

Using point sprites is fast because there are no triangles or textures that need mapping to them; the fragment shader just renders the texture at the given screen position, at the size you specified.
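For reference, here’s a minimal sketch of what such a shader pair might look like in GLSL 1.2. These are illustrative shaders, not the exact ones from the project; the `spriteSize` uniform is an assumption. On the C++ side you’d also need `glEnable(GL_POINT_SPRITE)` and `glEnable(GL_VERTEX_PROGRAM_POINT_SIZE)` so the vertex shader is allowed to set the point size.

```glsl
// --- pointSprite.vert (GLSL 1.2) ---
uniform float spriteSize; // illustrative: base sprite size in pixels

void main()
{
    vec4 eyePos  = gl_ModelViewMatrix * gl_Vertex;
    // scale the sprite down the further the vertex is from the camera
    gl_PointSize = max(1.0, spriteSize / length(eyePos.xyz));
    gl_Position  = gl_ProjectionMatrix * eyePos;
}

// --- pointSprite.frag (GLSL 1.2) ---
uniform sampler2D tex; // the particle texture

void main()
{
    // gl_PointCoord gives per-sprite texture coordinates when point sprites are enabled
    gl_FragColor = texture2D(tex, gl_PointCoord);
}
```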


Auto Orientate Image (Exif / PHP)

The following function will take an image path and automatically orientate the image based on its EXIF data. If it can’t find any EXIF data, it will do nothing!

function autoRotateImage($imagePath) {
	$exif = exif_read_data($imagePath);

	if(!isset($exif['Orientation'])) return;

	$ort = $exif['Orientation'];

	$image = imagecreatefromjpeg($imagePath);

	switch($ort) {
		case 3:
			$image = imagerotate($image, 180, 0);
			break;
		case 6:
			$image = imagerotate($image, -90, 0);
			break;
		case 8:
			$image = imagerotate($image, 90, 0);
			break;
	}

	imagejpeg($image, $imagePath);
}

Mimic Mac on Windows

All the below worked a treat for me…

Don’t make a habit of using CTRL; swap it on Windows:

How to Remap Windows Keyboard Shortcuts in Boot Camp on a Mac

Then reverse the trackpad scroll too:

How to Get the Worst OS X Lion Feature in Windows (Reverse Scrolling)

Note: I added an extra line at the top of the reverse-scroll script to stop an annoying “max hotkeys per interval” dialog box popping up. Here’s my full AutoHotKey.ahk script:

#MaxHotkeysPerInterval 400

WheelUp::
Send {WheelDown}
return

WheelDown::
Send {WheelUp}
return

Split String Method C++ / CPP

Here is a method that allows you to split a string in C++ (CPP). I couldn’t find a decent one on the internet so I wrote my own. The method accepts a source string and splits it by the specified delimiter, returning a vector of the resulting substrings. I hope someone finds this useful!

std::vector<std::string> splitString(std::string source, std::string delimiter)
{
    int len = source.length();
    std::vector<std::string> words;
    std::string prev = "";
    for(int i = 0; i < len; ++i) {
        std::string c = source.substr(i, 1);
        if(c.compare(delimiter) != 0) prev = prev + c;
        if((c.compare(delimiter) == 0 || i == len - 1) && prev.compare("") != 0) {
            words.push_back(prev);
            prev = "";
        }
    }
    return words;
}
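To sanity-check the behaviour, here’s a self-contained version with a couple of example inputs (the function body mirrors the method above, repeated so this snippet compiles on its own):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Split `source` by a single-character `delimiter`, skipping empty pieces.
std::vector<std::string> splitString(std::string source, std::string delimiter)
{
    int len = source.length();
    std::vector<std::string> words;
    std::string prev = "";
    for (int i = 0; i < len; ++i) {
        std::string c = source.substr(i, 1);          // current character
        if (c.compare(delimiter) != 0) prev = prev + c;
        // flush the accumulated word at a delimiter or at the end of the string
        if ((c.compare(delimiter) == 0 || i == len - 1) && prev.compare("") != 0) {
            words.push_back(prev);
            prev = "";
        }
    }
    return words;
}
```

For example, `splitString("one,two,three", ",")` gives `{"one", "two", "three"}`, and repeated delimiters are collapsed. Note that because the method compares one character at a time, only single-character delimiters will ever match.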

Redirect www. requests to TLD (PHP)

The below PHP statement redirects any www. requests to the bare domain:

// Redirect www. to tld
if(substr($_SERVER['HTTP_HOST'], 0, 4) == 'www.') {
	header('HTTP/1.1 301 Moved Permanently');
	header('Location: http://'.substr($_SERVER['HTTP_HOST'],4).$_SERVER['REQUEST_URI']);
	exit;
}

Solution for ‘HTTP “Content-Type” of “text/plain” is not supported…’

Trying to get HTML5 video working but you get this issue in the console? Using Apache?

HTTP "Content-Type" of "text/plain" is not supported...

Add these lines to your server’s .htaccess file:
(Note: it may help to clear any local DNS caches)

AddType video/webm .webm
AddType video/ogg .ogv
AddType video/mp4 .mp4


Solved: Removing a big, huge, nasty file from git!

Disclaimer: BE VERY CAREFUL! You could delete everything if you use an asterisk!

Very simple really. This will go through ALL of your commits and remove all references to the big file you accidentally committed way back.

1. Go to your repository in terminal.
2. Paste this line in, but substitute bigfile.psd for whatever you want to remove.

git filter-branch --index-filter 'git rm -rf --cached --ignore-unmatch bigfile.psd' -- --all


AIR & Away3D 4.0, Dancing Monkeys, Retro Viruses and a Bubbling Flask

We were able to pull off silky smooth 3D animations and multi-marker AR with the help of Away3D 4.0 (which uses the Stage3D API in AIR for GPU-accelerated graphics), FLARToolkit, the dab hand of a 3D artist and some upbeat disco music. Not forgetting the days I spent working out a consistent workflow to get textured 3D models with skeleton animations from Maya to Away3D. Fun times!

Android USB Browser Console Debugging (OSX)

How to get all your console.log output streaming in terminal:
1. Get the Android SDK.
2. Navigate to the platform-tools directory in terminal.
3. Run this command:

adb -d logcat browser:V *:S

Make sure you open a tab and go to about:debug to enable console debugging, otherwise it may not work (it didn’t on the Samsung Galaxy Tab 2).

Over The Air AdHoc Distribution with Xcode 4.6


1. Create App ID (this may be wildcard, so you can use it for multiple apps with the same bundle prefix. e.g. com.wehaverhythm.* for all internal apps.)
2. Create an AdHoc provisioning profile using the above App ID. (More devices can be added later, but must be added before distribution export).

In Xcode
3. In Xcode’s organiser, refresh your provisioning profile. (Make sure you refresh each time the profile has been amended online, like when you add more device IDs.)
4. Click Product -> Build For -> Archiving.
5. Click Product -> Archive.

[When archive is complete, it will appear in Xcode’s organiser under ‘Archives’.]

6. Select the application, and the archive of that application, you want to distribute for AdHoc.

7. Click ‘Distribute’.

8. Select ‘Save for Enterprise or Ad-Hoc Deployment’

9. Click ‘Next’.

10. Select the correct provisioning profile from the ‘Code Signing Identity’ drop down.

11. Click ‘Next’, again.

12. …wait for as long as it takes…

13. MAKE SURE YOU CLICK ‘Save for Enterprise Distribution’! Even for AdHoc. This drove me insane.

14. Enter the URL where the application will be hosted. The absolute url, including the filename.

15. Give it a title. Fill in the image URLs if you need them.

16. Save the *.ipa somewhere you will find it.

[You will notice this will also save an accompanying *.plist file]

In a text editor
17. Create an html file like the one below:

<title>Install AdHoc Distribution</title>

<a href="itms-services://?action=download-manifest&amp;url=">Install My AdHoc App</a>

18. Upload your *.ipa and *.plist files to the URL specified in the export process earlier. Upload your html file to a web server.

On your authenticated devices
19. In your web browser on one of the authenticated devices, go to the html page you created and tap the hyperlink to your app.


LOVE Xmas 2012: Online Interactive Projection Mapping

This Christmas at LOVE we decided to take our 2D mural projection mapping a step further. LOVE commissioned Ian Stevenson to come up with a new Christmas-themed mural to cover our 5×3 metre wall by reception. My job was to develop a system that could enable us to projection map animations back onto the wall, and let people create, send and play back personalised greetings on the wall via a live webcam stream. This was pretty tricky!

Here’s some more about it…

There were several show stoppers I had to overcome at the start. Firstly, where do I find a projector that has a short enough throw; can cover a 5×3 metre wall; will stay on 24/7 without overheating; and will be bright and crisp enough to animate, light up and show personalised messages on the wall – bearing in mind users would also need to view these online, day and night (we have clients in China, remember!). This took many phone calls, and professionals saying it couldn’t be done for under 4 grand. Well, we proved them wrong 😉 We were projecting on top of a print, so we only needed it bright enough to light up what was already on the wall. I found a pretty decent one for the job in the end. I’m rather knowledgeable about portable projectors now!

Webcam & Live Stream
Another potential show stopper was the webcam. We were running OS X, and the only compatible cameras were Logitech, so we bought the best one, only to find that the drivers weren’t fully compatible with Macs. We had to have control over auto-whitebalance and auto-focus, otherwise the camera wouldn’t focus on the projection. So we ended up installing Windows 7 via Boot Camp to gain complete control. Job’s a good-en. These were the two main issues, plus we had to stream at a high enough bit-rate that the text would be legible after encoding for the live stream.

I developed the openFrameworks application on top of the quad warp projection mapping tool we developed previously, and added functionality for fading between several videos; connecting to the node.js backend; queueing messages (prioritising clients over users); and elegantly displaying messages on the projection. Nice text in openFrameworks was a bit of a nightmare to begin with – I had to build my own text-align and word-wrapping functionality.

The node.js backend handles all of the message creation and hosts the html/js. I hash the messages so they are gibberish in the query string when someone is sent a link, otherwise it’ll spoil the fun if they’ve seen the message already!

We are using a Flash front-end which uses ExternalInterface calls and callbacks to trigger and receive events from Socket.IO.