Apps and Software Consultancy


I might annoy some people when I say I can't recommend the iPad for much in audio production; I simply prefer creating sounds with a real keyboard, plenty of display space and synthesizers that eat up a quad-core CPU. I really enjoy hearing analog synths, but they are out of my price range at the moment. So why do I produce and sell Sketch Synth 2? Because it solves a problem I have, and one that anyone wanting to move from simple loops to an arrangement will have: how do I start the arrangement? That might sound simple, and you may have all sorts of rules and ways of experimenting, but I need a way of experimenting really rapidly. I don't want to faff around with arrangements and automation until I actually have a feel for how the track will work. This is the bit that Sketch Synth 2 does for me. I don't seriously think of it as an instrument or a synth that could rival Sylenth or Massive in sound; it is a purposeful tool, and at this one thing it is, in my mind, absolutely perfect.

So let's go back to the problem. I'm using Ableton at the moment and I have a bunch of loops that I want to turn into stems. For each instrument I have two alternate parts, and I want to experiment with bringing one sound in after another, rapidly fading some sounds in and sometimes slowly crossfading between others. I also want to experiment from time to time with putting an effect like reverb or delay on the drums to see what kind of transition that helps pull off. So what is my problem? I take an age to work out how to transition nicely between different sounds, and so I too often produce soundscapes and loops instead of songs. Sketch Synth 2 is a Swiss-army-knife solution here: in Blueprint mode I can rapidly experiment with layering up and transitioning sounds ready for when I want to make my final arrangement in Ableton, so I have a chance of genuinely understanding the interactions between sounds before committing to making stems.

So does this mean I am saying that 50% of Sketch Synth 2 is pointless? Well, some of it is there because it's expected, not because it really adds anything to the world of music making. The most requested feature for Sketch Synth 2 is Audiobus, and unfortunately I can't give people what they expect there, although Inter-App Audio may be possible later down the line. Adding Audiobus would not extend the real use of Sketch Synth 2 by much anyway; the point is to experiment so that you can put things down properly later on. I wouldn't dream of recording stems on my iPad: I want to do that in a DAW with my favourite mastering, EQ and compression plugins. The iPad is there to experiment with while I am on a flight or a train.

I am starting to consider an update to Sketch Synth 2 again, and I need to extend it to improve on what it does best. The most important thing next is to be able to communicate with a DAW, so that when I make my sketches I can export them. There are two ways to do this: live MIDI output, or MIDI files. I am actually considering breaking from the age-old norm of live MIDI output over WiFi and instead writing out MIDI clips via iTunes file sharing, so that the clips can be imported into Ableton to automate tracks. I need a little while to think about this, because merging these clips with existing Ableton clips might be a problem; it is not the normal way of doing things, but MIDI over WiFi is sometimes flaky, whereas file-based capture is an interesting prospect. Hmmm, I'm umming and ahhing over this one. It may be better to just do MIDI over WiFi… I'll give it a try and see. At least over ad-hoc connections MIDI over WiFi seems to work, and whilst rapid note presses sometimes suffer from latency, modulation is often slower and it may work fine, as it seemed to in Sketch Synth 3D.
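
To make the file-based idea concrete, here is a rough desktop-side sketch of the kind of clip such an export could contain, written with the third-party Python library mido; the tempo, notes and CC1 ramp are placeholders of my own, not anything Sketch Synth 2 actually writes out. A standard MIDI file like this can be dragged straight onto an Ableton MIDI track, which is what makes the file-sharing route tempting.

```python
# A rough sketch of the kind of clip a file-based export could produce: a one
# bar note pattern plus a slow CC1 (mod wheel) ramp, saved as a standard MIDI
# file that Ableton can import. Uses the third-party mido library
# (pip install mido); the tempo, notes and CC choice are placeholders.
import mido

mid = mido.MidiFile(ticks_per_beat=480)
track = mido.MidiTrack()
mid.tracks.append(track)

track.append(mido.MetaMessage('set_tempo', tempo=mido.bpm2tempo(124)))

# Four quarter notes on middle C.
for _ in range(4):
    track.append(mido.Message('note_on', note=60, velocity=100, time=0))
    track.append(mido.Message('note_off', note=60, velocity=0, time=480))

# A gradual CC1 ramp over the following bar, the sort of gesture a pad
# movement in the app could be captured as.
for step in range(32):
    track.append(mido.Message('control_change', control=1,
                              value=int(step * 127 / 31), time=60))

mid.save('sketch_clip.mid')
```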

 

 


Focusrite have been reviewed well in Sound on Sound and a ton of other places recently with their new entry-level pack, Scarlett Studio. Just for example:

www.musictech.net/2013/02/focusrite-scarlett-studio-first-review/

and

http://www.djmag.co.uk/content/tech-review-focusrites-scarlett-studio-package

The value for money of their offerings looks superb. I already have a Saffire Pro card, but it uses FireWire, which just isn't as convenient as USB can be. What I can say about it is that the sound quality of its recordings is great; it picks up the crispness, dynamics and breathiness in people's voices that other cards miss. I'm on the move a bit these days, and the Scarlett Studio package with headphones, mic and soundcard makes an ideal offering. The cheapest I could find it is at Gear4music at £199, although it is that price in at least three other online stores in the UK:

go to gear4music.com

Edit: Just received my Scarlett Studio in the post and I'm very pleased with the presentation. I know I shouldn't get too caught up in the visuals, but there is something nice when you open the package and everything is all nice and red for you, APART FROM THE USB CABLE!!! FOCUSRITE, WHAT WERE YOU THINKING!!!! ALL LOVELY RED STUFF AND A DULL GREY USB CABLE!!! :-) I forgive you though. I haven't done a side-by-side sound comparison with the Saffire Pro gear I have, but in the couple of hours I did use it I definitely enjoyed what I experienced.


Sketch Synth FX 1.0.2 has just been released, adding 4 new sound banks, each with its own FX layout. In the past every FX layout was the same: you moved from left to right through pitch, low-pass filter, high-pass filter, distortion, chorus, delay and then, importantly, to a volume/bypass pad.

In the new FX layouts the pitch and bypass pads are always there, but there are now new effects like more detailed delay, reverb, bit crusher, hard limiter and compression. Each new pad is not just themed visually with new sounds; it has its own FX layout as well. For instance, one of my favourites is the new Explosives pad: it has 18 different explosive sounds, and you can adjust 4 different delay parameters, 4 different reverb parameters and a low-pass filter with an LFO in order to turn a single explosive sound into a whole bombing party.
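
As an aside, one way to picture these per-bank layouts is as plain data, with the fixed pitch and volume/bypass pads bolted on at either end. The sketch below is only my reading of the description above, not the app's actual internals, and the bank and parameter names are made up for illustration.

```python
# My own data-style reading of the layouts described above; the bank and
# parameter names are illustrative, not the app's internals. Pitch and the
# volume/bypass pad bookend every layout.
COMMON_HEAD = ["pitch"]
COMMON_TAIL = ["volume_bypass"]

FX_LAYOUTS = {
    "classic":     ["low_pass", "high_pass", "distortion", "chorus", "delay"],
    "explosives":  ["delay_time", "delay_feedback", "delay_mix", "delay_tone",
                    "reverb_size", "reverb_damp", "reverb_mix", "reverb_predelay",
                    "low_pass_lfo"],
    "warmer":      ["chorus_depth", "chorus_rate", "reverb_mix", "compression"],
    "jagged_edge": ["compression", "hard_limit", "bit_crush",
                    "gated_distortion", "reverb"],
    "noise":       ["high_pass", "gated_distortion", "reverb",
                    "chorus_feedback", "wobble"],
}

def pads_for(bank):
    """Full left-to-right pad order for a given sound bank."""
    return COMMON_HEAD + FX_LAYOUTS[bank] + COMMON_TAIL

print(pads_for("explosives"))
```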

Another of the new pads is called Warmer. The previous pads in Sketch Synth FX varied between dark, light and urban themes; this time it was important to have something a bit warmer, so it contains 18 much warmer samples, and its FX let you work in detail with the chorus, reverb and compression settings to warm up your sound.

If soft vintage sounds aren't your thing then there is always Jagged Edge. This pad contains heavy bass and murky sounds that benefit from a tight set of distortion effects. On the menu is compression, followed by hard limiting, then a bit crusher, then a gated distortion effect, and lastly a little reverb is applied to give a metallic finish. This one is aimed less at fuzzy overdrive and more at metallic sheen; I wanted raspy effects, not over-the-top fuzz.

Lastly there is the toughest of the lot to get to grips with, but ultimately the most satisfying: the Noise pad. You have 18 different noise types and then a diverse set of effects. Starting with a high-pass filter, then gated distortion, reverb, a screaming chorus feedback system and lastly wobble, it is possible to create some really quite bizarre sounds with this one, from footsteps, to screams, to running water, to explosions… someday I should offer a prize for the most innovative sound to come out of it. Much more taxing than the others, but more interesting long term.

All in all this is a major update for Sketch Synth FX. I'm very thankful to all of you who have rated it so far; it's been great seeing some 5-star reviews for it:

Fun ★★★★★

by Obis11 – Version 1.0.1 – Dec 14, 2012

My favorite from Shape of Sound. Can’t wait to use the updates projected for sometime. Great job.

Sketch synth ★★★★

by Lukerivera – Version 1.0.1 – May 12, 2013

In my opinion, better than Sketch Synth 2.

Superb effects processor ★★★★★

by M80M80 – Version 1.0.1 – Feb 10, 2013

Nothing to fault, this app is magnificent. A little in the style of Live FX, one small notch below in terms of technicality. Here it is more dedicated to live sets or jamming (fun). The input is the mic/mini-jack or AudioCopy :) So the app is productive. Once the input is selected, the sound passes from left to right through different banks of effects (flanger, echo, etc.) that you control in the manner of an iKaossilator or Live FX pad. Only Audiobus is missing to make it a must. ==> To the devs, please implement Audiobus! We really need this option.

I hope you enjoy this update. I'm sorry that I didn't manage to get Audiobus in there; there are difficulties. I use a framework called Unity3D for making these apps and it does not allow itself to run in the background: it seems to work for a while and then crashes after 30 seconds or so. If anyone has managed to run Unity3D in the background on the iPad then please let me know, because there are lots of audio users who want this feature.

 


If you know anything about augmented reality then this introductory article should be skipped over. Hopefully, though, these ramblings may be of use to some steampunk historian who digs this up in 200 years' time, if the digital record lasts that long. In fact, the more I think about it, the more I imagine that this commentary will just go out into the ether, and that its main readers will be bots trying to ascertain whether or not it is worth reading before passing it up the food chain to its first human reader, who I imagine will look like some bearded chap with goggles and bad teeth. Just a guess; maybe I've got you all wrong, then again…

For posterity I'll just mention what an augmented reality system is. Well, there's virtual reality, where everything is completely inside a machine, and reality, where it isn't. But even virtual reality is part of reality, and so is augmented reality, so you can only really differentiate the two by saying that virtual reality is a complete representation of reality inside a machine, whereas augmented reality is part normal sensory input and part virtual 'overlay'. From a philosophical perspective that puts television in the augmented reality bracket, which is kind of okay; same with the telephone and painting… so we've been at this augmented reality thing for some time, we just called it communication and art. Now that we have computers and are more purposeful about it, we call it augmented reality, and we are getting better at making the augmented bit seem like it really is part of the world. Not that much better, though: the mirror is still mighty effective, especially for crows, which invariably want to try to fight or mate with themselves in it.

As an apps developer one of the areas of augmented reality I am most interested in is the mobile / handheld / glasses sector. In this sector we have a basic task:

  • Find a position in 3D space
  • Display something over the visual display when that position is located

Okay, well we can do this in a few ways:

  • Location Services (GPS)
  • Visual Cues (Symbol)
  • Visual Cues (Features)
  • Audio Cues

There are lots of other ways too: for instance, Crowd Optics does it by intersecting sight lines, which is very interesting, but in a lot of cases you may be on your own in this reality. Location services tend to be great for big things like adding tags to maps for buildings, e.g. viaPlace, but for actually making something appear to be part of your immediate reality GPS is too inaccurate. Audio cues are cool, but they can't really be left anywhere without something running, which leaves visual cues:

  • QR Code
  • Marker / Trigger
  • Facial Recognition

QR codes are not much different from a custom symbol: they are small and compact and can be left somewhere to be scanned and add to reality. Usually they are just used for web pages, but they can be used to bring up images or other things.

A marker/trigger requires a bit of image processing to recognise, but once it has been recognised, attempts can be made to work out where the viewer is, especially if the trigger has known dimensions.
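
For the curious, here is a minimal sketch of that last step using OpenCV's solvePnP: given the four image corners of a detected square marker with a known physical size, it recovers the camera's position and orientation relative to the marker. The corner coordinates and camera matrix below are made-up illustrative values; in a real app they would come from a detector and a calibration step.

```python
# A minimal sketch of the marker idea using OpenCV: given the four image
# corners of a detected square marker with a known physical size, recover
# the camera pose relative to it. Corner detection itself (and the camera
# calibration values) are assumed to come from elsewhere.
import numpy as np
import cv2

MARKER_SIZE = 0.10  # metres, the marker's known side length
half = MARKER_SIZE / 2.0

# Marker corners in its own coordinate frame (z = 0 plane).
object_points = np.array([[-half,  half, 0.0],
                          [ half,  half, 0.0],
                          [ half, -half, 0.0],
                          [-half, -half, 0.0]], dtype=np.float32)

# Illustrative detected corners in pixels and a rough camera matrix.
image_points = np.array([[310.0, 200.0], [420.0, 205.0],
                         [415.0, 315.0], [305.0, 310.0]], dtype=np.float32)
camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]], dtype=np.float32)
dist_coeffs = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs)
if ok:
    print("marker is", np.linalg.norm(tvec), "m from the camera")
```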

Facial recognition doesn't give us an exact position, but it does allow us to add detail on top of faces; nobody said the position had to be exact, right?

Once we have the user's location we need to overlay some augmentation on top of it. At the moment it is pretty easy to do:

  • Sound
  • Imagery

What of our other senses: taste, smell, touch? All quite difficult with a mobile device. Sense of direction is an interesting one: can you make someone think they are moving or facing a different direction? Well, you can with mirrors, so it can't be too hard with a mobile phone, right?

Anyway, that's a beginner's guide to augmented reality; next up is to start talking about some technologies and applications. Rather than implementing it all from scratch, we can lean on the many people who have done the hard work already, and some nice soul has provided this comparison table:

http://socialcompare.com/en/comparison/augmented-reality-sdks 

In the next article I will show the results of some experiments with the SDKs.

 


With 3 Sketch Synth products out there, you have to start asking the question: which Sketch Synth is for me? Well, each Sketch Synth works with a different paradigm, so let's start by describing those paradigms:

  • Sketch Synth 3D came first. The concept is fairly simple: you have 4 channels of music playing and you modulate each channel independently in three different dimensions in order to communicate geometry and shape in your music, starting with the built-in synth and then moving to using MIDI and OSC to control powerful desktop synthesizers. Over time Sketch Synth 3D was adapted to meet the requirements of the app store market and so has many more internal synth-like features than originally intended.
  • Sketch Synth FX came next. It is a much simpler 2D tool designed for the app market. The premise is that you have a sample playing and you apply a range of effects to it in different layers until you get your desired sound, and then you can record and share easily. It has simplicity on its side.
  • Sketch Synth 2 is a potential Leviathan in waiting; its premise is different again. You have 10 crossfadable loops which you can bring in and out using the XY pad (a rough sketch of one way that crossfade weighting could work follows this list). This means that once you've created a bunch of loops in Ableton, or whatever you use, you can bring them across to Sketch Synth 2 and experiment with their arrangement. There are 4 FX channels too, so you can bring in effects to accentuate or muffle parts of the performance. In Blueprint mode all the visual elements of the instrument are customisable as well, so you can completely make up your own instrument.
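
As mentioned in the list above, here is a rough illustration of one way an XY pad position can be turned into per-loop gains: each loop is anchored to a point on the pad and its gain falls away with distance from the finger. This is a toy mapping of my own for explanation only, not the algorithm Sketch Synth 2 actually uses.

```python
# A toy illustration (not Sketch Synth 2's actual algorithm) of XY-pad
# crossfading: each loop is anchored to a point on a 0..1 x 0..1 pad and its
# gain falls away with distance from the touch position.
import math

LOOP_ANCHORS = {              # hypothetical anchor positions on the pad
    "drums_a": (0.1, 0.1),
    "drums_b": (0.9, 0.1),
    "bass_a":  (0.1, 0.9),
    "bass_b":  (0.9, 0.9),
    "pad":     (0.5, 0.5),
}

def loop_gains(x, y, falloff=3.0):
    """Gain in 0..1 for each loop, given the touch position (x, y)."""
    gains = {}
    for name, (ax, ay) in LOOP_ANCHORS.items():
        dist = math.hypot(x - ax, y - ay)
        gains[name] = max(0.0, 1.0 - falloff * dist)
    return gains

print(loop_gains(0.2, 0.15))   # drums_a dominates near its corner
```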

Three different products, three different paradigms. So let's look at which to buy.

Well, which has the best reviews? Sketch Synth FX:

Fun ★★★★★

Sketch synth ★★★★

Superb effects processor ★★★★★

What about Sketch Synth 3D? The product is bigger with higher risks attached and the reviews are more mixed:

THE FUTURE OF SOUND CREATION HAS ARRIVED!!! ★★★★★

clunky ★

And Sketch Synth 2? Version 1.0.0 was a bit limited, but now that the Blueprint and FX modes have been added the product has expanded dramatically. The result?

Awesome! ★★★★★

by Dpg499 – Version 1.2.0 – Jun 20, 2013

Downright awesome!

Sketch Synth 3D has two variants: Sketch Sound and Sketch Synth 3D. Sketch Synth 3D is by far the better product, although it has worse reviews because it was initially much more expensive; it is now a bundle of all the Shape of Sound bits and pieces, whereas Sketch Sound was an older, reduced-price product that I keep on the App Store so that people who bought it way back can still download it. I don't really recommend Sketch Sound because it lacks the MIDI capabilities of Sketch Synth 3D, and when you actually come to doing something properly, that's what's important.

Some people may hate me for saying this, but I've never heard great sounds come out of an iPad synth. I've played with a lot of them, and compared to desktop synths they do sound like they cost $5 instead of $100. If you don't believe me then try listening to some of the samples for Sylenth1, ElectraX, Gladiator or Massive and see if any of your iPad apps can match them. When I did Sketch Synth 2 I always intended it to be used with samples from a desktop machine, because otherwise I'm afraid the output will sound like it's come from an iPad. I did put Audio Copy & Paste in there so you can use it on the go, and in fact Ed at the Apps4iDevices magazine was able to make some pretty cool sounds, but not enough to convince me to leave my desktop synths. The closest anything has come to swaying me from that opinion was when I saw DJ Sasha mention on Blogspot that he was using some iPad synths, especially for some of the granular stuff.

Although, just adding to this, I got Propellerhead's Thor recently and the sounds coming out of it are tight; I like the slot-like modules and it's one I think I could recommend. I could also recommend Sunrizer as an introductory synth, great to learn on. But when it comes to the crunch neither of them matches up to what you can do with a desktop, so I'm going to put my money where my mouth is and create an ordinary synth instead of a Sketch-based one. I doubt I will do Audiobus, so I am planning a new feature for the synths, which will be a communication channel to a headless VST on your machine: the app can then communicate via MIDI with this VST and act as its UI. I think it's better than Audiobus because it means you can integrate it with your performance.
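
To give a flavour of what that bridge might look like, here is a speculative sketch of the desktop half: it listens for raw three-byte MIDI messages over UDP and relays them into a virtual MIDI port that a VST host could pick up. The port name, UDP port and wire format are all assumptions of mine, not a spec for anything that exists yet.

```python
# A speculative sketch of the desktop half of the bridge: receive raw
# three-byte MIDI messages over UDP from the iPad app and relay them into a
# virtual MIDI port that a VST host can see. The port name, UDP port and wire
# format are my own assumptions; virtual ports require a backend that
# supports them, such as mido's default rtmidi backend on the Mac.
import socket
import mido

UDP_PORT = 9000
midi_out = mido.open_output('Sketch Synth Bridge', virtual=True)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(('0.0.0.0', UDP_PORT))

while True:
    data, _addr = sock.recvfrom(16)
    if len(data) >= 3:
        # Expect a complete channel message, e.g. 0xB0 0x01 0x40 for CC1.
        midi_out.send(mido.Message.from_bytes(list(data[:3])))
```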

 

 

 

 

 


I got the BlackBerry Z10 as a test device recently for development. I was quite sceptical of it, on the basis of having seen some pretty dreadful BlackBerry touch devices over the years, but this time I think they've got it right. The latest software performs well on the device; everything slides and swipes about nicely, with no glitches. The UI takes about a day to fully get used to, and then it is pretty intuitive. The shock feature was the camera, though: we compared it to an iPhone 4S and it just kicks it out the door. The camera has an excellent autofocus and a range of post-effects that make it very easy to bring out extra life in your snaps. The autofocus is the bit that grabs me; I managed to take some cool pictures from a moving vehicle with virtually no blur. Anyway, here is a snap of some flowers from the Z10:

[Photo of flowers taken with the Z10]

I really haven't done that much to it: I used the sixties frame effect and altered the contrast and sharpness a little to get the focused-centre, blurred-edges look. Very, very impressed with that camera.

I think people are talking about the apps a bit at the moment, and there didn't seem to be much of interest on BlackBerry App World, but my bet is that will change. I have noticed that BlackBerry are investing a little time in at least two major platforms:

  • PhoneGap
  • Unity3D

Both platforms allow for cross-platform development of apps, and Unity3D boasts that some 50% of mobile gaming apps are published using it; even Rovio are using Unity for their Angry Birds stuff. PhoneGap is just a platform to make HTML5 apps a bit easier to publish, but still, some cool stuff can be made with it. So BlackBerry will definitely catch up on the apps; the phone is certainly fast enough for it.

I didn’t try out any 3D graphics so can’t tell you about that. I’ll be sad to give this device up once the testing period is over.


Just reeling in the figures for a prMac press campaign, and they're actually very good. I thought I would show you some sales figures from an actual release to give you an idea of why it might be important. On April 8th (see the chart below) a new product was released and got its usual bit of attention on the Mac App Store; on April 12th I sent a press release to prMac, and by April 14th I had a large spike in sales, more than making up for the time put into writing the press release and the $30 or so spent on it. There was an extra add-on that I purchased where I got detailed feedback on my press release, which was pretty good. The key thing picked up was that I was writing it as a sales document rather than as a press release designed to gather ongoing interest. I think that point is reflected in my figures: there was a spike in sales shortly after the release and not much more. Next time I will be writing it for journalists instead.

[Chart: daily sales figures around the April 2013 release]

 

If you want to use prMac then you can go through the link below and I get some kind of referral goodies; they have a scheme where if you refer 15 people you get a free press release, which isn't too shabby either.

Register @ PrMac.com Today


This is the first article in the Going Deeper into the Sound series. I'll be looking at all sorts of ways to analyse music, and analysing various genres looking for connections between structure and the conveyed motion and geometry of a piece. This article will start introducing you to the tools and show how structure can be analysed visually. In this analysis you will see it is easier to reach these conclusions by listening, but as we go on we will look at picking up subtler modulations and patterns that would be very difficult to communicate or pick up by ear. So without further ado, I want to take you through a deeper analysis of the sounds of one of my favourite producers of the last few decades: DJ Sasha.

Sasha has produced some of the deepest and most spiritually moving sounds in dance music, mastering a dream-like quality in his Involver series, and I'm going to start digging into his sound using some of today's easily accessible audio toolchains.

The track I will be analysing is Chained from the Involv3r album, simply because it is my favourite track on the album at the moment; you can get the album here.

I will be using 3 main tools: Audacity, Mixed In Key and Traktor. Audacity is a free tool that really lets you dig down into the waveform and get a quick spectral analysis of fragments of a track (you can get a copy here); its user interface isn't that quick to come to grips with, but once you know the keyboard shortcuts it can be quite fast to dive into things. Mixed In Key is a tool that allows you to look at the key of the track (available here), and Traktor is a DJing tool with a good display of the transient components in a track, which allows analysis of the beat of the music (available here).

So let's start by looking at the structure of the track. I'll use Mixed In Key for this, although any of the tools would be effective:

[Screenshot: Mixed In Key waveform and key analysis of Chained]

The Mixed In Key waveform shows several distinct chunks; when we listen to the track we can match these up with the areas of the track where the drum and bass sounds come in more heavily. I want to concentrate on the intro right now, which is the first pink area labelled with the Am key in the view above; the Am label indicates the key of the track is likely to be A minor. The intro:

[Screenshot: Mixed In Key waveform of the intro]

The intro lasts 1 minute 33 seconds with a constant beat before the bass and vocals whip in. So in those 93 seconds, what does Sasha do to entertain us? Well, we can see some dips or breaks in the intro waveform above: there is a light one just after the play button and a really distinct one at the end of the intro. So we have at least 2 breaks, the last more pronounced and longer than the first, and the drop where the bass and percussion come back in is also longer each time. Overall, then, you can't say the music is getting more or less consistent as the intro continues; the drops are more pronounced but further apart.
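
If you would rather not eyeball the waveform, the dips can also be located numerically. The sketch below assumes the track has been exported as a 16-bit WAV file called chained.wav (the filename is illustrative), computes a coarse one-second RMS loudness envelope, and prints the quietest seconds of the intro, which correspond to the breaks.

```python
# A rough sketch (my own, not part of the article's toolchain) that locates
# the dips numerically: load a 16-bit WAV export of the track, compute a
# one-second RMS loudness envelope, and print the quietest seconds of the
# intro. The filename is illustrative.
import wave
import numpy as np

with wave.open('chained.wav', 'rb') as wf:
    rate = wf.getframerate()
    channels = wf.getnchannels()
    frames = wf.readframes(wf.getnframes())

samples = np.frombuffer(frames, dtype=np.int16).astype(np.float64)
samples = samples.reshape(-1, channels).mean(axis=1)    # mix down to mono

window = rate                                           # one-second windows
n_windows = len(samples) // window
rms = np.array([np.sqrt(np.mean(samples[i * window:(i + 1) * window] ** 2))
                for i in range(n_windows)])

intro = rms[:94]                       # the 93-second intro plus a little
quietest = sorted(np.argsort(intro)[:5].tolist())
print("quietest seconds of the intro:", quietest)
```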

Next let's take a look at the breaks themselves. When we listen to the track we can hear the bass fade out on that first break, 40 seconds in. If we use Audacity to get the frequency spectrum before the break:

[Screenshot: Audacity frequency spectrum just before the first break]

 And compare that to the frequency spectrum during the break:

[Screenshot: Audacity frequency spectrum during the first break]

You can see that during the break the first 200 Hz of frequencies have levels in the range of -12 dB to -18 dB, compared to -9 dB to -16 dB before it. That range of frequencies is where the sub-bass and kick drums live. We also see a hump in the mid-band frequencies at around 400 Hz during the break, implying that there are more instrumental, key-like parts going on during the break. As soon as the break ends, the bass and kick drums drop back in and the music rises, with additional vocal fragments thrown in to give you a sense that the vocals will be coming soon.
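
The same before/during comparison can be done numerically. The sketch below grabs a few seconds of audio around each point (the times come from the listening notes above), applies an FFT and compares the average level below 200 Hz. It again assumes a 16-bit WAV export named chained.wav, and the absolute dB figures will not match Audacity's calibration, though the relative drop in the low band should show up clearly.

```python
# The same comparison done numerically (again assuming a 16-bit WAV export
# named chained.wav): average spectral level below 200 Hz a few seconds
# before the first break and during it. Times are rough values taken from
# listening, and the absolute dB scale differs from Audacity's.
import wave
import numpy as np

def band_level_db(path, start_s, length_s=4.0, band_hz=200.0):
    """Average level (dB) below band_hz for length_s seconds from start_s."""
    with wave.open(path, 'rb') as wf:
        rate = wf.getframerate()
        channels = wf.getnchannels()
        wf.setpos(int(start_s * rate))
        frames = wf.readframes(int(length_s * rate))
    x = np.frombuffer(frames, dtype=np.int16).astype(np.float64)
    x = x.reshape(-1, channels).mean(axis=1) / 32768.0   # mono, -1..1
    spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1.0 / rate)
    low_band = spectrum[freqs < band_hz]
    return 20.0 * np.log10(low_band.mean() + 1e-12)

print("low band before the break:", band_level_db('chained.wav', 30.0))
print("low band during the break:", band_level_db('chained.wav', 42.0))
```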

The second break follows the same pattern as the first, and we need no tools to analyse what is going on here: the kick drums and bass give way to an almost xylophone-like percussion before a swooshing noise sample builds us up for the drop, where the vocals come back in and we can relax into the song and listen to the words without fear of missing anything in the lush sounds below.

The last thing to look at in this intro is the view in Traktor:

[Screenshot: Traktor view of the start of the break]

Traktor has the tempo down as 124 bpm, and as the break begins the interesting bit is that we can see a gradual fade over about 10 seconds where the bass and kicks fade out into the break. On the other side, at the end of the break, we see the drums and kicks come back in one swoop:

[Screenshot: Traktor view of the end of the break]

It may seem right now like we've gone into an awful lot of technical detail to understand what we could simply listen to, but there is a point to it: it demonstrates the concepts visually, where you are not under pressure to come to a conclusion on the spot, which is the case if you do this with just the audio.

What we have learnt so far is that in this track Sasha uses progressively longer and more pronounced breaks and drops to build the mood. Early on the breaks are subtler and the kicks and bass are faded out over 5-10 seconds, whereas further into the track the fade-outs are more immediate; once into full flow, all the dynamics are more pronounced. Percussive, key-like elements are used during the breaks, and short noises and vocal fragments are used to build anticipation.

The next step is to have some means of displaying this on a track. We are going to look at several different ways of communicating the structure, motion and geometry of music across several different artists, and communicating this visually is going to be the best way to build a common understanding of the geometry and motion within a piece of music. Sure, we can make assertions about music aurally more easily, but then the accuracy with which they can be conveyed to another person is minimal, and it generally results in arguments that boil down to linguistic differences rather than anything interesting; with a visual representation, at least, the communication is clear, and that creates a sound basis for learning about the geometry of music. We examined the breaks and drops earlier and saw that, to start with, the breaks were becoming longer and more accentuated. Using the visual and audio inspection techniques above, I've drawn over the waveform below: the blue line increases as the track becomes more consistent in terms of the length of breaks and drops, and the green line shows the strength of, or difference between, the breaks and drops. These are done by inspection, so it is quite valid to dispute them; however, it is quite clear from the graph below that the piece has a structural geometry, which I interpret as a triangle / sawtooth-like waveform.

[Image: the Chained waveform annotated by inspection, with a blue line for break/drop length consistency and a green line for break/drop strength]

In the next parts of the series we will look at other tracks and other measures, start to draw comparisons, develop measurement techniques to help us see these patterns visually, and then start on recreating some of these geometries.

If you want to experience the sounds of Sasha yourself, or use any of the tools then you can follow the links here:

Involv3r (Mixed By Sasha) - Sasha

You can get Audacity here

Mixed In Key is available here

And Traktor is available here
