3 Advantageous Uses of the Adapter Software Design Pattern

The Adapter is one of my favourite design patterns. Of course, each pattern is appropriate for different situations, but the Adapter is:

  • Simple to understand
  • Simple to implement
  • Leads to better design
  • Makes code that is cheaper to maintain

Of course you can get it wrong, and logging systems are one case where it can go wrong: in environments where logging statements add line numbers automatically, it is often better to use those statements directly than to create an adapter to bridge between different logging systems. There are, though, many places where adapters are highly advantageous. Three areas where they can give you an advantage are:

  1. Adapting between different Analytics systems
  2. Adapting between cross platform plugins
  3. Adapting between plugins after updates


The implementation of the Adapter software design pattern is fairly simple. There are examples for different languages here:

http://www.dofactory.com/net/adapter-design-pattern

https://www.tutorialspoint.com/design_pattern/adapter_pattern.htm

What we are doing is creating these components (a minimal sketch follows the list):

  • Target Class – this is the base interface with the raw functions that we will access from our client code
  • Adapter – inherits from the target class and adapts the Adaptee to behave like the target class
  • Adaptee – this is the existing interface we have that can be adapted to behave like the target class
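Here is that structure as a minimal C# sketch. The names are generic placeholders, and it uses an interface for the target where the description above uses a base class; both variants are common:

using System;

// Target – the interface the client code talks to.
public interface ITarget
{
    void Request();
}

// Adaptee – the existing class whose interface doesn't match the target.
public class Adaptee
{
    public void SpecificRequest()
    {
        Console.WriteLine("Adaptee handling the request");
    }
}

// Adapter – presents the target interface and translates calls onto the adaptee.
public class Adapter : ITarget
{
    private readonly Adaptee adaptee = new Adaptee();

    public void Request()
    {
        adaptee.SpecificRequest();
    }
}

Client code only ever sees ITarget, so swapping in a different adaptee later means writing one new adapter rather than touching the client code.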

When we create this adapter we end up with a really nice interface point in our design where things can be switched in and out. Whenever changes are made to components or plugins we can just update that adapter and choose the correct adapter for our particular setup. This brings us to the different use cases where this design pattern is fantastic:

Adapting Between Analytics Systems

I constantly need to bridge between different analytics systems: one customer wants one package, then it doesn't work out, then we try a different system with a new feature. Analytics calls are spread across code in multiple locations, so plugging different analytics systems in and out can be traumatic.
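A hedged sketch of what this looks like; the vendor SDK calls in the comments are hypothetical placeholders, not real packages:

// The one analytics interface the rest of the codebase calls.
public interface IAnalytics
{
    void LogEvent(string eventName);
}

public class AlphaAnalyticsAdapter : IAnalytics
{
    public void LogEvent(string eventName)
    {
        // AlphaSDK.Track(eventName); // hypothetical vendor call
    }
}

public class BetaAnalyticsAdapter : IAnalytics
{
    public void LogEvent(string eventName)
    {
        // BetaSDK.Instance.Send(eventName); // hypothetical vendor call
    }
}

Because every analytics call goes through IAnalytics, switching packages means swapping one adapter instead of hunting down calls spread across multiple locations.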

Adapting Between Cross Platform Plugins

When I was integrating video players for different platforms it was invaluable to have one common interface to speak to, even though every video player was different. The Adapter pattern provided a bridge where new video players could be swapped in and out easily, and no code around the user interface needed to change.

Adapting Between Plugins Being Updated

Sometimes in environments like Unity3D we find that updating components breaks an interface, and in some cases we might operate through different versions of the same interface: for example, we might try communicating via version 2 of a network interface and, if that doesn't work, fall back to version 1. In these cases an adapter provides a simple way to adapt between different versions of plugins or software components, as in the sketch below.
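This is a hedged sketch of that fallback; INetwork and the adapter classes are illustrative names, and the Connect bodies are stand-ins:

public interface INetwork
{
    bool Connect();
}

public class NetworkV2Adapter : INetwork
{
    public bool Connect() { return false; } // stand-in: version 2 unavailable here
}

public class NetworkV1Adapter : INetwork
{
    public bool Connect() { return true; } // stand-in: version 1 still works
}

public static class NetworkFactory
{
    public static INetwork Create()
    {
        // Try the version 2 adapter first, fall back to version 1.
        INetwork v2 = new NetworkV2Adapter();
        if (v2.Connect())
        {
            return v2;
        }
        return new NetworkV1Adapter();
    }
}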

This article is provided by adbc-innovation. If you would like design assistance in any of your software projects then please get in contact via the contact us form on this website.

Data Flow, Messaging & Events in Unity3D for Code Re-Use

In Unity3D we are given an object hierarchy that allows us to separate bundles of code into different plugins really easily. In this hierarchy the standard object is a GameObject, to which a number of components (usually scripts) are attached. This is a great way of working if you can organise your code; but if you start making a jumbled set of connections between objects then you will find it very hard to adapt and re-use your code later on. This is one of the highest-impact issues I see when looking at code from junior engineers; it creates difficulty with:

  • Readability
  • Maintainability
  • Code re-use and adaptation

It is a common problem with outcomes that are difficult to deal with later in a project.

This article will examine 4 different areas that can help you write code that is more readable, costs less to maintain and allows you to re-use portions of it. There are many more methods, each more or less appropriate in different situations, and I encourage you to comment at the end of the article.

  1. Using Messages
  2. Using Events
  3. Layered Designs & Interfaces
  4. Understanding the Implications of Data flow

One key thing to think about in designing your code is the direction of data flow. If you have 2 components that want to talk to each other then:

  • If component A pushes data and instructions onto component B
    • A needs to know how to talk to B (A understands B)
    • B needs to know how to understand the data from A
  • If component B pulls data and instructions from component A
    • B needs to know about A and how to understand its data (B understands A)
    • A does not need to know anything about B

If we are to make a component a plugin then we need that component to know as little as possible about other components. So if component A is our plugin, we want component B to pull data from A rather than have a system where A pushes data to B. If that scenario is undesirable for some reason then we need a system where the components do not need to know of each other's existence in code at all; such a method is also desirable so that plugins need not share code. Network sockets are too heavyweight for this: we just want to connect things together without knowledge of the other system.
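A minimal sketch of the pull direction; the class names and the speed property are invented for illustration:

using UnityEngine;

// The plugin component: it exposes data but knows nothing about its consumers.
public class ComponentA : MonoBehaviour
{
    public float CurrentSpeed { get; private set; }

    void Update()
    {
        CurrentSpeed = Random.value * 10f; // stand-in for real plugin logic
    }
}

// The consumer pulls from A; all knowledge of the other component lives in B.
public class ComponentB : MonoBehaviour
{
    public ComponentA source; // linked in the Inspector

    void Update()
    {
        Debug.Log("Speed: " + source.CurrentSpeed);
    }
}

ComponentA can now be packaged as a plugin on its own, because nothing in it references ComponentB.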

Unity provides two systems for this. One potential solution, and potential pitfall, is the Unity GameObject.SendMessage system. The messaging system allows us to communicate between different GameObjects, and broadcast to other GameObjects, at a price. The performance cost of a message matters on mobile platforms when it is paid every frame; if it is just once in a while then you can get away with it. Other readability issues can crop up when using messages, so keep the communication tightly defined.

The code for a message is as follows:

using UnityEngine;

public class MessagingSystem : MonoBehaviour
{
    // Receiver: SendMessage finds this method by its string name.
    void ReceiveMessage(int payload)
    {
        Debug.Log("Received: " + payload);
    }

    void Example()
    {
        // Sender: no reference to the receiving component is needed.
        gameObject.SendMessage("ReceiveMessage", 1);
    }
}

Component A can talk to component B through a message and not have to know anything about object B.

Another way of handling this is to use the new Unity UI events system. A quick way of doing this is to add ButtonClick events to your code:

using UnityEngine;

public class PluginComponent : MonoBehaviour
{
    // Parameterless event, as used by UI buttons.
    public UnityEngine.UI.Button.ButtonClickedEvent trigger_event;

    // Event carrying a float, as used by UI sliders.
    public UnityEngine.UI.Slider.SliderEvent float_event;

    // Event carrying a bool, as used by UI toggles.
    public UnityEngine.UI.Toggle.ToggleEvent bool_event;

    public void TriggerEvent()
    {
        trigger_event.Invoke();
    }

    public void FloatEvent()
    {
        float_event.Invoke(3.0f);
    }

    public void BoolEvent()
    {
        bool_event.Invoke(true);
    }
}

In the object hierarchy you can now link objects together and they can communicate simply via events, without the high performance cost of messages. There is another factor to consider when using events: you will need to link objects together in the editor, which is both a blessing and a curse. As a system grows and you start to use prefabs, it is difficult to keep track of all the links between objects; messages, by contrast, are easy to find in code if you have kept your communication code tightly defined, but carry a performance cost when sent.
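The same links can also be made in code rather than in the editor. Here is a minimal sketch using the standard UnityEvent.AddListener call against the PluginComponent class above; the listener class name is invented:

using UnityEngine;

public class EventListenerExample : MonoBehaviour
{
    public PluginComponent plugin; // linked in the Inspector

    void Start()
    {
        // Subscribe in code; the link no longer lives in the scene file.
        plugin.float_event.AddListener(OnFloatEvent);
    }

    void OnFloatEvent(float value)
    {
        Debug.Log("Float event: " + value);
    }
}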

In summary, I use messages sparingly; in certain architectures I do use them, and I am careful with them. With events, on the other hand, I can be less careful; but they are not a magic bullet, and sometimes messages can reduce the complexity of larger systems for you.

Finally, now that we have discussed a couple of means of communication between objects, let's look at the design of the components themselves. If we think of our design in terms of layers then we have a helpful way to stop the code becoming an entangled mess. If we think in terms of individual GameObjects then we can end up with many-to-many relationships between them; if we think in terms of layers then we limit the amount of communication going on between different objects and we can start to define interfaces between layers. The bonus of this is that layers can evolve separately from one another without tight coupling between multiple components. You can over-layer and under-layer things; but with practice you will get to a point that works for your projects.

I recommend always using an interface or an abstract/virtual class to facilitate communication between different layers. Sometimes it can be a pain to update abstract classes and all their concrete implementations; but in the end it usually pays off to make a black and white decision here. You can think deeply about it and work out whether it will be worth it; but it is so often worth it that I now just do it as a matter of course unless something more advanced is required.

The interface or virtual class is the channel through which other components communicate, and it means you have a single reference point through which you can examine communication and check it all flows in the right direction. Whether an object pulls data from another object or pushes data to it is important: it defines what one layer needs to know about another, and if one layer needs to know about the behaviour of another then it is very difficult to separate them.
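A minimal sketch of such a channel (the score-related names are invented for illustration): one layer implements the interface, the other pulls through it and never sees the concrete class.

// The interface is the only thing the two layers share.
public interface IScoreSource
{
    int CurrentScore { get; }
}

public class GameplayLayer : IScoreSource
{
    public int CurrentScore { get; private set; }

    public void AddPoints(int points)
    {
        CurrentScore += points;
    }
}

// The UI layer depends on IScoreSource, not on GameplayLayer.
public class ScoreDisplay
{
    private readonly IScoreSource source;

    public ScoreDisplay(IScoreSource source)
    {
        this.source = source;
    }

    public string Text()
    {
        return "Score: " + source.CurrentScore;
    }
}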

If you use the message technique above then having the message calls take place inside a communication interface can be an advantage, because all the calls are in one place and you can keep track of what is calling what. It may seem a bit dull writing functions that just call other functions; but at the end of the day the ability to create separable plugins in Unity3D is valuable from many perspectives.

These are not the only techniques out there; there are design patterns like MVVM that can be very helpful in this regard, which I will talk about in other articles. Please feel free to comment on the LinkedIn page with your opinions; there is more than one way to skin a cat here, and not every solution works in every situation, so it would be great to hear your views.

4 Hazards In Software Coding Estimation

Some years back I worked in information security with some of the best coders I have ever worked with; but even they were getting their time estimates wrong. Our sub-team was one of the few that delivered on time. Armed with this confidence I made estimates for the cost of mobile apps, thinking I had a knack for it, and then I got it wrong as well. If you have been working in software for some time like me, then you have probably got it wrong too and experienced the negative consequences.

I want to share with you a few techniques I use to improve the accuracy of software estimates. I will start by examining some of the hazards:

  1. Forgetting to consider error conditions
  2. Lack of prototypes
  3. Lack of structured estimation technique / finger in the air
  4. Thinking about too much at once

For all the above hazards there are solutions we can put in place to help reduce the risk of estimation issues and improve our estimates. We cannot eliminate the risk of getting estimates wrong, though; we can only act to reduce it.

The first area is forgetting to consider error conditions. If we take the example of a networked client-server system, it is all too easy to estimate based on the features and requirements we have; but what happens when the network fails temporarily? Did you include time to handle the different error conditions in your estimates? If you didn't, and you are financially responsible, then you could be in for a difficult week. Three ways of dealing with this are:

  • Using a failure mode analysis technique of sorts
  • Including additional contingency in the quote
  • Delivering a proposal where the customer is financially responsible for error conditions

Prototyping is a great way to get you to consider features, architectures and many problems before they arise. In the UI/UX world they use paper prototypes to do exactly this; in design we can use block diagrams and disposable prototypes in the same way. I recommend producing disposable prototypes for difficult parts in the estimation phase. Rapid prototyping is a great way to de-risk aspects of a project and find potential problems well in advance of them occurring, when it is often much easier to deal with them.

Many times for small changes I have been tempted to just put my finger in the air and say "oh, it will be 3 hours" without thinking about some major error condition. Producing 3-point estimates (min, expected, max) for components, breaking larger components down into smaller ones, and estimating from the sum of the smaller estimates is very helpful. I also recommend having a checklist that is relevant to your project. Here is a list I produced for one of my clients, who has an ordering/inventory system for producing quotes and surveys:

  • How long will the feature take to estimate?
  • What possible error conditions could occur?
  • Does this component need to communicate with other devices?
  • Does this component need to access any native functions?
  • Are there complicated calculations that could go wrong?
  • What sub components are there of this feature?

Having a checklist like this puts you in the position where you actually think about every feature instead of skimming through each one.

The final hazard is thinking about too much at once. We've discussed the idea of breaking things down into sub-components; when doing 3-point estimates we can estimate many components and hope that the inaccuracies across them cancel out. This is important for arriving at better estimates: if we break the work down into smaller components and then add those estimates together, we reduce the complexity, think through problems and get a more reliable result.
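As a worked illustration: the numbers below are invented, and the (min + 4 × expected + max) / 6 weighting is the standard PERT formula, one common option rather than anything prescribed here.

using System;

class EstimateExample
{
    static void Main()
    {
        // Hypothetical three-point estimates in hours (min, expected, max)
        // for the sub-components of one "3 hour" feature.
        double[,] estimates =
        {
            { 1, 2, 4 },  // data model changes
            { 2, 3, 8 },  // UI changes
            { 1, 2, 6 },  // error handling
        };

        double total = 0;
        for (int i = 0; i < estimates.GetLength(0); i++)
        {
            // PERT weighting: (min + 4 * expected + max) / 6.
            total += (estimates[i, 0] + 4 * estimates[i, 1] + estimates[i, 2]) / 6;
        }

        // Prints roughly 8.3 hours, well above the finger-in-the-air 3.
        Console.WriteLine("Total estimate: {0:F1} hours", total);
    }
}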

If you would like help estimating the cost of software or any further information then please get in touch on the contact us page.

Simplifying 3D VR/AR User Interface Coding

3D VR/AR User Interfaces are not that dissimilar from 2D user interfaces, except:

  • We have one further axis of movement
  • Many components need to be designed from scratch

The part where it gets difficult is that with many components being designed from scratch, we can reach a situation where there is significantly more complexity than in a 2D interface.

This article looks at a design pattern that you can develop with, or give to your developers, to inspire them to keep the system structured and simple.

It is worth noting that with any design pattern the very best results come not from cloning it, but from being inspired by it and designing with that inspiration behind you.

Apple use the MVC design pattern for their components, and the organisation of components it provides can be really helpful in building an architecture that works. I used this architecture for a project at a virtual reality company and it delivered an architecture with an implementation time of 4 weeks, compared to the 9 months the company had previously spent building what it replaced. That 9 months was of course partly spent investigating, so the comparison isn't perfect; but it indicates a direction.

The MVC design pattern specifies 3 main components:

  • Model – the data and business logic
  • View – the component a user can see
  • Controller – the link between the model and the view

Apple have a great guide on this here:

https://developer.apple.com/library/content/documentation/General/Conceptual/DevPedia-CocoaCore/MVC.html

MVC keeps our data and UI components separate; but we still have the trouble of a lot of user-specific code handling 3D components with no real hierarchy. For the project mentioned above, a window management system with animation was required, and a solid hierarchy was critical to reducing complexity and potential issues. I won't share that hierarchy; but I will instead propose a slightly different one:

  • Window – a container of panels
  • Panel – a container of view controllers
  • View Controller – a UI object linking a view to a model / data
  • View – a UI object made up of Interactive Components
  • Interactive Component – the most basic building block of a 3D UI system

An Interactive Component would usually be triggered by a raycast system of some sort; in many Unity3D VR applications with gaze control, the raycast system responds to objects with BoxColliders and VRInteractiveItem scripts attached.

By tightly defining our hierarchy in advance we can avoid making code that is like a mess of scrambled eggs, with connections all over the place. We do need to consider how objects in this hierarchy communicate. MVC specifies that a view can notify a controller and the controller updates the view. We need to specify that kind of interaction for each layer; a code sketch follows the list below.

  • Interactive Component – notifies the view and is updated by the view
  • View – notifies the view controller, is updated by the view controller, is notified by the interactive component and updates the interactive component
  • View Controller – is notified by the view, updates the view, notifies the panel, and interacts with the model
  • Panel – is notified by the view controller and passes notifications on to the window
  • Window – is notified by the panel
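A hedged C# sketch of those relationships; the class and method names are illustrative, not from the original project:

// Each layer only notifies its parent and updates its children.
public class InteractiveComponent
{
    public View Parent;

    public void OnGazeClick() { Parent.Notify(this); }        // notify upward
    public void SetHighlight(bool on) { /* visual change */ } // updated by the view
}

public class View
{
    public ViewController Parent;

    public void Notify(InteractiveComponent source) { Parent.Notify(this); }
    public void UpdateView() { /* push state down to interactive components */ }
}

public class ViewController
{
    public Panel Parent;

    public void Notify(View source)
    {
        // Interact with the model here, then refresh the view.
        source.UpdateView();
        Parent.Notify(this);
    }
}

public class Panel
{
    public Window Parent;

    public void Notify(ViewController source) { Parent.Notify(this); }
}

public class Window
{
    public void Notify(Panel source) { /* top of the hierarchy */ }
}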

The above model does not allow for moveable windows out of the box, though draggable windows could be implemented within it. Draggable windows in VR have not been a requirement of any of the systems I have been involved in, so this model leaves out handling things like that.

With our communications and hierarchy defined, we are ready to look deeper into the design and start on areas like error handling, which is vital to understand in order to give solid time estimates.

If you would like any assistance with the user interface or design of your virtual reality system then please get in contact on the contact us page.


2 Learning Models for VR Training Applications

Two streams of thought in learning that are very relevant to VR are:

  • Mixed Modality Learning
  • Multi Sensory Learning

The advantages of both of these models are very simply realisable in VR.

Multi-sensory learning models such as Montessori have specialised equipment that allows a child to explore a concept using multiple senses at once: for example, the sandpaper letters engage the tactile and visual senses together to help children learn the letters of the alphabet. Multi-sensory learning is well accepted and used in education worldwide, including the UK and USA:

http://learning.gov.wales/docs/learningwales/publications/140801-multi-sensory-learning-en.pdf

Mixed-modality learning combines a number of the learning modalities from the VARK model to assist in learning. The VARK model splits learners into:

  • Visual
  • Audio
  • Reading
  • Kinaesthetic

The VARK model itself has come under some criticism over the years:

http://www.innovativelearning.com/teaching/learning_styles.html

But one study has shown that when a mixture of modalities is used at once, memorisation improves:

http://www.davidlewisphd.com/courses/EDD8121/readings/1999-MorenoMayer.pdf

The above theories are both very compatible with virtual reality training. In virtual reality we have the following modalities:

  • Visual – graphics, video, 3D models
  • Audio – sounds, music, spoken word
  • Reading – text
  • Kinaesthetic – actions and movement

We also have the audio and visual senses represented in commercial headsets, and in simulators and advanced environments we can add movement, tactile feedback, smells and so on.

Combining these two theories, we can create a compelling case for why a training system should be implemented in VR rather than in a real-world environment: an argument actually based on research, rather than the vague claim that because it's immersive, it's better.

If you would like to discuss these ideas further or would like a training system designed for you then please get in contact on the contact us page.


3D Audio Spatialization in Unity3D for VR

Unity3D has a built-in audio spatialization system, which is good; but for advanced applications it lacks a few things. The standard audio spatialization system facilitates the following (a short configuration sketch follows the list):

  • Left & Right Panning
  • Volume Drop To Represent Distance
  • Reverb To Represent Rooms
  • Doppler Effect To Represent Velocity
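As a reference, here is a minimal sketch of switching those built-in features on for a sound. These are standard AudioSource properties, and the script assumes an AudioSource is attached to the same GameObject:

using UnityEngine;

public class SpatializedSound : MonoBehaviour
{
    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialBlend = 1.0f;                        // fully 3D: enables left/right panning
        source.rolloffMode = AudioRolloffMode.Logarithmic; // volume drop with distance
        source.dopplerLevel = 1.0f;                        // Doppler effect for velocity
        // Reverb for rooms comes from AudioReverbZone components placed in the scene.
    }
}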

There is no support for differentiating between:

  • Front & Rear
  • Above & Below
  • Whether Sounds Are Occluded or Obstructed

This is fine as a starting point; but if we want to get deeper then there are a few other options:

  1. Spatial Audio is available on the Unity Asset Store here for a one-off fee of $25 and claims to improve on the standard Unity model by using phase adjustment and frequency filtering to make sounds appear to come from the rear
  2. Two Big Ears have been snapped up by Facebook and so have made some of their tools available for free here. The Two Big Ears 3Dception plugin allows for placing sounds above, below, in front of and behind the player, and so for a free tool it is highly advanced.
  3. Real Space 3D have a set of engines that have been highly recommended on forums because of their binaural effect; you can find their website here. RealSpace™ 3D Audio advertises “virtual placement of sound anywhere in 3-D space with pin-point accuracy, creating the perception of real source direction, distance, depth, and movement relative to the listener and heard through standard stereo headphones”
  4. AstoundSound advertises 360 degree placement of sounds and offers flexible licensing models
  5. Phonon supports physics based audio and occlusion of sounds
  6. The Oculus Audio SDK offers spatialisation for free
  7. Dysonics is a promising SDK currently in beta

Most of the solutions were compared in an excellent article in 2015:

3D Audio: Weighing The Options

Since that article was written, Two Big Ears have been acquired by Facebook / Oculus, so we can expect free improvements in the Oculus SDK.

If you would like to know more then please get in contact on the contact us page.

 

Publishing For Gear VR With Unity3D

Publishing for Samsung Gear VR with Unity3D can be straightforward, or it can present intense difficulties, depending on how you prepare. To keep you from stumbling into potential pitfalls, this guide provides a lot of the preparation needed to overcome the difficult bits. In building an application there are the following technically significant challenges:

(1) Ensuring the Android Build works

(2) Crafting the Android Manifest

(3) Remembering All The Little Things

(4) Building the Submission Checking Tools

If you have built your app for iPhone or desktop and expect it to build immediately for Android then you could be fine; but you could also be in for some challenges. The main issues are dex file build failures and Android Manifest clashes. Generally the dex file issues come from multiple plugins including the same public classes; consistent culprits are zip file plugins, as some asset store libraries include code for zipping and unzipping files which clashes with other asset store libraries that bundle a separate library such as zip-file. If you had built your plugins up from scratch, building along the way, this problem would be easy to diagnose; if, however, you built for another platform and then switched to Android, it can be a pain.

The Android Manifest files are a source of potential problems, mainly because there are several of them. Unity builds a manifest file from the project settings and places it in the Temp/StagingArea folder; but libraries and plugins can provide their own manifest files, and a project manifest file can be provided in the Assets/Plugins/Android folder. The problems come when all of these are merged together and there is a conflict, such as the minimum SDK version.

To craft a manifest file, the best approach is to do a build of the VR app in Unity3D for Android with VR mode enabled in the project settings, the install location forced to internal, and the Android SDK version set to 19. Once this build is complete, a manifest file will be placed in the Temp/StagingArea folder of the project. Grab this manifest file and place it in the Plugins/Android folder. Then there are a few bits you need to adjust. There is a complete list here:

https://developer3.oculus.com/documentation/publish/latest/concepts/publish-mobile-manifest/

What you will find though is that Unity3D has implemented most of these for you, apart from one major one:

<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.INFO" />
</intent-filter>

When we publish for Gear VR we set the category to INFO rather than LAUNCHER. While testing we want the category to be LAUNCHER so we can launch the app; whereas for publishing we want INFO. We need to make that change right before publishing.
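For a test build, the same filter with LAUNCHER in place of INFO looks like this:

<intent-filter>
<action android:name="android.intent.action.MAIN" />
<category android:name="android.intent.category.LAUNCHER" />
</intent-filter>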

Those are the main problems that occur; there are also a couple of little things you need to remember:

(1) Signing the APK – it is necessary to set up ordinary Android signing keys and sign the APK. Remember, though, that you must re-enter the passwords in the project settings every time you restart Unity.

(2) When doing new builds you need to update the version and version code in the project settings. Version is a string and version code is an int, so I used "1.1" and 11 to represent version 1.1.

(3) Make sure you have implemented the Gear VR back button in every scene in your project; a simple way to do this is to add the OVRPlatformMenu script from the SDK to an object in every scene.

With all of that out of the way, you are ready to build the submission checking tools. Unfortunately, Oculus don't provide a simple application for this. To run the submission checker you need openssl, aapt and nm. On Windows I used Cygwin and did my own build of openssl; nm was already installed, and aapt is available in the Android SDK tools. Once you get it running it will check for Android Manifest issues for you, which is pretty handy.

A final tip before uploading your binary is to use jarsigner to check you have properly signed the file:

jarsigner -verify -verbose -certs YourAPKFile.apk

If that has CN_DEBUG tags all over the place then you need to sign the file and make a release build; if you thought you had already done that, just check that Unity hasn't cleared the signing passwords on you.

That is a pretty brief overview; if you need anything more in-depth, or consulting services, then feel free to get in contact:

http://www.immersive-gaming.com/support-contact/