Facesynth – API-Driven Noise Making with Node.js & PureData


It’s Nola Tech Week. Let’s celebrate by hacking together something goofy and sort-of musical called Facesynth with Node.js and a little piece of software called PureData.

What’s PureData?

PureData is an open-source visual programming language developed by Miller Puckette in the '90s. Like its more commercial sibling Max/MSP, Pd has evolved into a powerful digital signal processing tool built on a data-flow programming paradigm, in which objects and functions are visually linked together to model the flow of audio signals and the manipulations of those signals. Download Pd here (you're going to want Pd-extended for this project).

Here we have the three most commonly used Pd types:

[Image: the three most commonly used Pd object types]

This project will leverage the power of the Open Sound Control (OSC) network protocol. OSC was developed at UC Berkeley's Center for New Music and Audio Technologies to share data between signal-emitting musical instruments (most commonly synthesizers) and computers (or computer-like devices). OSC messages have room for extensible parameters, which enables a unique blend of music-tech experimentation.

Pd-Extended bundles OSC functionality into the Pd environment. This opens up a world of possibility for networked multimedia signal processing projects.

But, we’re internet people here. Our project’s gotta be plugged in to the net-tubes. So, let’s make an ‘internet synth’ using Pd, its OSC capabilities, and a Node.js API proxy.

Our synth will send an OSC signal from PureData to our Node.js server. That signal will be dispatched to the browser via WebSockets, along with a URL to a random image we will pull from the ever-useful Random User Generator API.

What’s our purpose here? Call it an artistic rendering of API driven development. An Internet of Things music machine. A post-modern examination of sensual reciprocation in the API/web whatever-point-oh era. Or, just call it Facesynth.

Let’s start with the JavaScript.

Every Node app begins with a package.json file. Here we describe our app, list its module dependencies, and determine how to start it.

package.json

{
  "name": "facesynth",
  "version": "1.0.0",
  "description": "",
  "main": "server.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1",
    "start": "node server.js"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "ejs": "^2.3.4",
    "express": "^4.13.3",
    "node-osc": "^1.1.0",
    "request": "^2.64.0",
    "socket.io": "^1.3.7"
  }
}

We're going to need five npm modules. Express is the most common Node web framework, but if you've read this far, you probably knew that already. EJS is a boring JS template engine that'll do the trick for our not-boring project. Request is a module that makes HTTP requests from Node easier than the core http lib does. Socket.io is similarly a friendlier-to-write wrapper for WebSockets, and it's where we get our real-time functionality. Finally, the pièce de résistance: node-osc handles our Open Sound Control experimentation.

Pop this file into an empty directory and hammer npm install in your terminal.

On to the server script…

server.js

var express = require('express');
// request allows us to make http requests to external apis
var request = require('request');
var path = require('path');
var fs = require('fs');
var app = express();
// node-osc is a wrapper on the module osc-min
// It has OSC 'emitting' and 'receiving' functionality
var osc = require('node-osc');

// load up index.html with ejs
app.engine('html', require('ejs').renderFile);
app.set('view engine', 'html');

// our index route
app.get('/', function (req, res) {
  res.setHeader('Content-Type', 'text/html');
  res.render('index.html');
})

// gotta tell our node app to open its ears
var server = app.listen(3000, function() {
  console.log('listening at localhost:3000');
})

// load up socket.io and have it listen within our node server
var io = require('socket.io')(server);

// Connect our osc listener to 0.0.0.0:9999,
// where our patch is emitting events
var oscServer = new osc.Server(9999, '0.0.0.0');

// register the OSC handler once, outside any socket.io 'connection'
// callback, so a second browser tab doesn't stack a duplicate listener
oscServer.on('message', function (msg, rinfo) {
  // when a message is received, http GET a randomly generated
  // user JSON blob
  request('https://randomuser.me/api/', function (error, response, body) {
    if (!error && response.statusCode == 200) {
      // we just need the picture url string
      var picUrl = JSON.parse(body).results[0].picture.medium;
      // broadcast the url to every connected browser
      io.emit('supguys', picUrl);
    }
  })
})

Here we've got a good old-fashioned nest of callbacks, but whatever, this is just for fun, ain't it? Anyway: the server serves the page to the browser, holds a socket.io connection open to it, and listens on UDP port 9999 for OSC messages from our soft synth. When the server hears an OSC message, she asks the randomuser API for a user, then tells socket.io to scream that user's profile picture URL to its browser-side counterpart.

On the front end, we're going to have socket.io listen to the server for that OSC event ('supguys'). When that event fires, we will do some basic DOM manipulation to populate the view with faces. Faces will pop up at random positions in the browser as we hit notes in our Pd patch, thanks to OSC! The script is written in vanilla ES5 JavaScript to keep things simple, and it's short enough to throw in at the bottom of our index.html, which should live in a '/views' directory because that's where Express is going to look for it.

views/index.html

<html>
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>FACESYNTH</title>
  <!-- let's hide any overflow -->
  <style>
    body {
    	margin: 0px;
    	overflow: hidden;
    }
  </style>
</head>
<body>
  <script src="https://cdn.socket.io/socket.io-1.3.7.js"></script>
  <script>
    var socket = io();

    // we're gonna need a function that returns a number between min and max
    // to determine where the face will pop up on the page
    function getRandomArbitrary(min, max) {
        return Math.random() * (max - min) + min;
    }

    // let's listen for the socket event here
    socket.on('supguys', function(picUrl) {

      // and when that event fires, let's do some DOM manipulation
      var pic = document.createElement('img');
      var x = getRandomArbitrary(0, 100);
      var y = getRandomArbitrary(0, 100);

      // we make an img element and designate the
      // random user picture as its source
      pic.src = picUrl;

      // and position that sucker at a random place on the screen
      pic.style.cssText = 'position:absolute;left:' + x + 'vw;top:' + y + 'vh;'
      document.body.appendChild(pic);
    })
  </script>
</body>
</html>

Let's set up our simple Pd patch that emits OSC messages on the port our Node server is listening to. The patch will listen for any keyboard action, convert the pressed key's keycode to a MIDI value, and finally convert that MIDI value to an audio frequency.
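The MIDI-to-frequency step is what Pd's [mtof] object does for us, and it's worth seeing the math. Sketched in JavaScript (this function is just for illustration, it isn't part of the patch or the server): a MIDI note number m maps to 440 · 2^((m − 69) / 12) Hz, where 69 is A above middle C.

```javascript
// Pd's [mtof] conversion: MIDI note number -> frequency in Hz.
// A4 (MIDI 69) is 440 Hz, and each octave (12 semitones) doubles the pitch.
function mtof(midiNote) {
  return 440 * Math.pow(2, (midiNote - 69) / 12);
}

console.log(mtof(69)); // 440 (A4)
console.log(mtof(81)); // 880 (A5, one octave up)
```

This is why mashing arbitrary keycodes sounds so chaotic: raw keycodes land all over the MIDI range, with no regard for key or scale.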

Create our elements from the Put menu (refer to the above Pd image if the graphical realm becomes overwhelming), and link them together in edit mode (Command + e to toggle modes).

[Image: the Facesynth Pd patch]

Make sure DSP is checked on the console and mash some keys. If you’re hearing horrible sounds you’re doing it right.

And, you’re good to go. Go to your Node project directory in your terminal, if you haven’t done so already. Start your server up with npm start. Connect OSC to the port by clicking on the connect ‘message’. You should see this line in the console:

sendOSC: connected to port 0.0.0.0:9999 (hSock=2226176) protocol = UDP

Tune your browser to localhost:3000. Turn on DSP and turn the volume up on the patch (just a little; this synth needs a lot of work), and start mashing your keyboard.

[Image: faces popping up across the browser window as keys are mashed]

Faces and noises!

Possibilities: one could use the arguments being passed around via OSC to get even more creative, make a better synth (learn to Pd here), or refine the key input on our patch. All sorts of possibilities.
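For instance, if the server forwarded the OSC frequency argument to the browser along with the picture URL, the front end could scale each face by the pitch played. A hypothetical helper for that (the function name and the 110–880 Hz / 40–160 px ranges are made up for this sketch, not part of the project):

```javascript
// Map an OSC frequency value onto an image size in pixels:
// low notes give small faces, high notes give big ones.
function freqToSize(freq, minFreq, maxFreq, minPx, maxPx) {
  // clamp out-of-range frequencies so sizes stay sane
  var clamped = Math.min(Math.max(freq, minFreq), maxFreq);
  // normalize to 0..1, then scale into the pixel range
  var t = (clamped - minFreq) / (maxFreq - minFreq);
  return Math.round(minPx + t * (maxPx - minPx));
}

console.log(freqToSize(110, 110, 880, 40, 160)); // 40
console.log(freqToSize(880, 110, 880, 40, 160)); // 160
```

In the 'supguys' handler you'd then set `pic.style.width = size + 'px'` before appending the image.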

Anyways, that's Facesynth. Merry NOLA Tech Week, y'all! Have fun using Node.js and PureData.

Source Code: https://github.com/Harleymckee/facesynth

If you like reading about connections between music and coding, check out these other posts: Music <> Coding, JavaScript Music & 5 songs about ECMAScript6.