Tag Archives: Programming

Why Availability Isn’t Serving Your Users

Is your service highly available?

Is it 99.999% available?

The real question is, does it matter?

Availability is defined as the total ‘uptime’ of a given service over a window of time. The metric is typically counted in minutes, and those with experience in the Software-as-a-Service world typically want to know, “What’s the availability of the service?”
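To put numbers to it, availability is just uptime divided by the total window, and each added nine shrinks the allowed downtime by a factor of ten. Over a 365-day year (525,600 minutes), five nines leaves a remarkably small budget:

```latex
\text{availability} = \frac{\text{uptime}}{\text{total time}}, \qquad
\text{downtime budget at } 99.999\% = 525{,}600\ \text{min} \times (1 - 0.99999) \approx 5.26\ \text{min/year}
```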

Availability becomes important when users are vetting which vendors or businesses they would like to work with. A high level of availability provides confidence in the vendor or business, so that users know that when they need to use the service, it will be online and available.

But as technology grows, and man does it grow fast, availability is beginning to become a buzzword, and here’s why.

Availability is Beginning to Become a Buzzword

A service or application can be highly available. It can be online and available for 99.999% of the year, but the real question is, is it working as end users expect it to work?

Let me explain.


Availability, the number of minutes a service is online over a period of time, is often calculated exactly that simply. Often a basic ‘ping’ or health check against the hardware and endpoints of the service is all that’s used to measure availability, but this doesn’t account for bugs, degradations, partial outages, low performance, or missed customer expectations.

If a user attempts to use a service your team is offering, and it’s online but takes far too long to complete its work, does the user really care that the service is available 99.999% of the time?

Availability metrics like this tell you almost nothing about the reliability or quality of the service beyond that it is online. With the growth of cloud computing and big players like AWS, standing up a handful of servers that can reach five nines of availability has never been easier. Being highly available is no longer something that separates the good from the great; it’s a flat-out expectation.

Being Highly Available is no longer something that separates the Good, from the Great. It’s a flat out Expectation.

Not to worry though: where high availability is now becoming the norm, business-specific SLOs and SLAs will be the next phase that separates the good from the great. Proper SLOs built around specific business use cases that meet customer expectations will soon become the topic of conversation that replaces the low-value data around availability.


Superior SLOs that are based on the business, not the technology, will reign supreme. SLOs such as:

  • Users can execute the search functionality, and it completes successfully 99.9% of the time
  • Users can execute a job that completes successfully 99.99% of the time
  • Users can perform the key functionality of the service 99.9% of the time

These types of business-specific SLOs, which ensure not only that the system is available but that it’s working exactly as it’s expected to work, will be the next wave of, “How available is your service?”

I highly encourage all teams adopting SRE culture and SLOs to take that next step into what meeting customer expectations is really all about. When your next business opportunity arises and the crowd wants to talk about “Availability,” they’ll be blown away that you’re able to provide a level of detail around the health and quality of your services that goes so much deeper, and is so much more impactful, than simply, “Availability.”

How To Setup A Github Repo For A Unity Project

Edit: Fixed link to git ignore file (10/14/21)

If you’re not familiar with using Git you can take a peek at my quick guide Git It? or spend some time browsing the web to get familiar. If you are familiar with Git, you will need to install Git Bash for this tutorial. You can grab a copy of all the necessary install files right here: https://git-scm.com/downloads


The first thing we want to do is set up our new repository on Github. You will need to first create a Github account. Once you’ve created your account, you can click the little plus sign in the top-right corner of your screen and select ‘New repository.’

This will take you to the ‘Create a new repository’ screen to set up your new repo. You will need to provide a name for your repository and then select whether you want it to be viewable by the general public. In this example, I will keep the repo private.


I do not want to create a ReadMe or .gitignore at this time, but I will do so after I have created my Unity project. Go ahead and press that little green button, ‘Create repository.’

Now that the new repo is setup, Github will kindly provide you with some instructions on how to push your new project to this online repo.


But the first thing we need is to set up a new Unity project. I will walk through how to create a new Unity 2D project, and then we will set up a .gitignore file to exclude any files we don’t want to include in our repo. Finally, we will push all of our changes up to Github.

Let’s walk through the process.

You’ll want to open Unity Hub and create a new project.

Now that the project has been created you’ll want to navigate to the directory where the project has been saved. Once you are in the directory, you can right-click and select, ‘Git Bash Here.’ This will open the git command line.


The first command we want to run is ‘git init.’ This will initialize the git repo locally and allow you to start adding and committing your Unity project for source tracking.

Now that the repo is set up, we can run ‘git status’ to view the files that we can potentially add to our repo, BUT before we add anything, we want to include our .gitignore file. This will allow us to include only the files we need and to ignore any temp or unneeded files.

To do this you will need to create a new empty text file and rename it to ‘.gitignore’ so that git will recognize this file as being the one that will tell us which files to ignore.

Now go ahead and open up the file in a text editor and in the contents of the text file we are going to include the files we wish to ignore. You can find a good example of what to include by searching on Google for a Unity .gitignore file or you can use the one here: https://raw.githubusercontent.com/github/gitignore/master/Unity.gitignore


Once you’ve added that to the contents of the .gitignore file, you can save the file. Back in the git command line window you can run the ‘git status‘ command and see the remaining files we will include in our repo.

We will now run the ‘git add .’ command, which stages all of the files that are not ignored to be part of our project.


Next we will run the ‘git commit‘ command which will commit our newly added files to our main branch. We will actually use the ‘git commit -m‘ command so we can include a little message on what these changes are.

After adding our files it’s time to finally push them up to Github. To do this we will run two more commands. The first links this local git repo to the one on Github.com. The second pushes the changes from our machine up to Github.

Github provides the first and second command for you:


Execute the ‘git remote add…’ command followed by the ‘git push‘ command (you may need to change the push command from ‘main’ to ‘master’) and you’re done.
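Putting the whole walkthrough together, the command-line side looks roughly like this. It’s a sketch: the remote URL is a placeholder (use the one Github shows you after creating your repo), the `cd` to a temp directory stands in for your project folder, and the short `printf` line stands in for the full Unity .gitignore linked above.

```shell
# Run these from the root of your Unity project folder.
cd "$(mktemp -d)"                       # stand-in for that folder in this sketch
git init
git config user.name "Your Name"        # only needed if you haven't set these globally
git config user.email "you@example.com"
# Create the .gitignore (paste in the contents of the Unity .gitignore linked above):
printf 'Library/\nTemp/\nLogs/\nobj/\n' > .gitignore   # minimal stand-in for the full file
git status
git add .
git commit -m "Initial commit of Unity project"
git remote add origin "https://github.com/your-username/your-repo.git"
# git push -u origin main   # run this last; swap 'main' for 'master' if needed
```

The push is left commented here because it only works once the placeholder URL points at your real repo.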

You now have a brand new Unity project on Github. If you’re interested in knowing more on how to work with Git or Github leave a comment!

How To Mock Dependencies In Unit Tests

Writing unit tests for your code base is extremely valuable, especially in the DevOps world we’re living in. Being able to automate and execute those unit tests after each round of changes ensures your code base is, at the very least, functional.

But sometimes we need to write a test for a function that connects to some other service. Maybe it connects to a database or a 3rd party service. Maybe that database or service isn’t currently reachable from your local workstations, or hasn’t completed development yet. Better yet, what if we want to run our unit tests as part of our DevOps build pipeline on a server that doesn’t have an open connection to our 3rd party service?


In any of these cases, we need a way to write our code, and test our code, without having to connect to that extra service. In .NET this can be done with Moq. Let’s start off with what Moq is. Moq is an open-source mocking library for .NET, hosted on GitHub. It provides functionality to write “fake” implementations of dependencies like databases that we can then test against. We can get a better understanding by looking at some examples.

Say we have a function that looks like this:

int AddDatabaseNumbers()
{
   Database db = new DatabaseHelper();
   Connection conn = db.CreateConnection("");
   string query = "select number1, number2 from numberTable";
   Table results = db.Execute(query);
   return results.number1 + results.number2;
}

When we run this function it is going to connect to our database, execute a query, and then add the two results together. If we were to write a unit test for this it might look like:

[Fact]
public void TestAddNumbers()
{
   // Assumes numberTable currently holds the values 2 and 3.
   var result = AddDatabaseNumbers();
   Assert.Equal(5, result);
}

In our development environment this test might pass just fine. It connects to the database and executes the query no problem BUT when we push this code to our repository and kick off the next build, the unit test fails. How come?

Well in this case, our build server doesn’t have a direct line of sight to our database, so it can’t connect and therefore the unit test fails trying to make that connection. This is where Moq can help!

Instead of having the function connect to the database, we can write a “fake” connection that pretends to do what the “db.CreateConnection” does. This way the rest of our code can be tested and we don’t have to worry about always having a database available. We can also use Dependency Injection (more on that in a later post)!

Here’s how we can rework this with a Mock. First let’s refactor our function to pass in the Database object so we can Mock its functions:

int AddDatabaseNumbers(Database db)
{
   Connection conn = db.CreateConnection("");
   string query = "select number1, number2 from numberTable";
   Table results = db.Execute(query);
   return results.number1 + results.number2;
}

Now let’s look at how our unit test will change to mock the database functionality:

[Fact]
public void TestAddNumbers()
{
   // Moq can only override members that are virtual (or declared on an interface).
   var mock = new Mock<Database>();
   mock.Setup(db => db.CreateConnection(It.IsAny<string>())).Returns(new Connection());
   var fakeTable = new Table { number1 = 2, number2 = 3 };
   mock.Setup(db => db.Execute(It.IsAny<string>())).Returns(fakeTable);
   var result = AddDatabaseNumbers(mock.Object);
   Assert.Equal(5, result);
}


What we’ve done here is that instead of passing a true Database object to our function, we’re passing in this mock one. The mock has had both of its functions, ‘CreateConnection’ and ‘Execute,’ set up so that instead of actually doing anything they just return a ‘Connection’ and ‘Table’ object. We’ve essentially faked all of the functionality that relates to the database so that we can run our unit test for all of the other code.


Now when we push our code changes up to our repository, the unit tests run and pass with flying colors. And since we used Dependency Injection to pass our Database object to the function, our unit tests can use the mock object while our actual production code passes the real Database object. Both instances work as they should and our code is all the better for it!

I highly encourage you to write unit tests for as much code as you can and to take a look at the Quickstart guide for using Moq to get a better understanding: Moq Quickstart Guide.

How To Build A Spawner In Unity

Spawning objects in Unity exists in just about every project out there. Whether we’re spawning enemies for a player to fight or collectables for them to pick up, at some point you’re going to need systems that can create and manage these objects. Let’s walk through what this can look like in a Unity 2D project.

There’s no best way to implement a system that creates objects and every project has slightly different needs but there are some commonalities and ways you can create your system to be reusable. For this particular example we’re going to look at spawning for Survive!


There are two main objects currently created in Survive using the current spawning system, with a handful more on the way. The great part is that implementing each object’s spawning logic is fairly simple after the first one is in place. Let’s take a look at how that’s done.

Currently we are creating two objects at game start. The white blocks that act as walls and the green enemies the player must avoid. Both of these are built using the same spawning system and code base. We do this so that we can then re-use these objects and scripts to easily add new features to our project. Here is what our spawner code currently looks like:

using UnityEngine;

public class Spawner : MonoBehaviour
{
    public GameObject objectToSpawn;
    public GameObject parent;
    public int numberToSpawn;
    public int limit = 20;
    public float rate;

    float spawnTimer;

    // Start is called before the first frame update
    void Start()
    {
        spawnTimer = rate;
    }

    // Update is called once per frame
    void Update()
    {
        if (parent.transform.childCount < limit)
        {
            spawnTimer -= Time.deltaTime;
            if (spawnTimer <= 0f)
            {
                for (int i = 0; i < numberToSpawn; i++)
                {
                    Instantiate(objectToSpawn, new Vector3(this.transform.position.x + GetModifier(), this.transform.position.y + GetModifier())
                        , Quaternion.identity, parent.transform);
                }
                spawnTimer = rate;
            }
        }
    }

    // Returns a random offset in [-1, 1] so spawns scatter around the spawner.
    float GetModifier()
    {
        float modifier = Random.Range(0f, 1f);
        if (Random.Range(0, 2) > 0)
            return -modifier;
        return modifier;
    }
}
And here is what our script looks like from the Editor:

As you can see, our script allows us to pass in the ‘Object To Spawn’ which is the prefab of the object we want to create. We can then assign it an empty parent object (to help keep track of our objects) and from there we are free to tweak the number of objects it spawns at a time as well as if there should be a limit and how frequently they should be spawned.


With this approach we have a ton of flexibility in how we can control and manipulate object creation. We could attach another script to this object that randomly moves the spawner to different places on the screen (this is how it currently works) or we could create multiple spawners and place them in key locations on the map if we wanted more control over the spawn locations. The point is, we have options.

The best part about this approach is that we can easily include or add another object to use this same functionality with little effort. Here’s the same script, same code, but for creating wall objects:

And a third time for a new feature I’m currently working on:


Each object and system has its own concept and design for how it should work. For example, the wall spawner needs to create many objects quickly (higher rate) and then stop (reach its limit). The zombie spawner needs to create enemies over and over as the player destroys them, but not as fast (slower rate). The new Heart Collectible needs to be created only once until the player collects it (limit).

When building objects and writing scripts in Unity we should always be thinking of how we can create something that is reusable. It might not be reusable in this particular project, but it can save you mountains of time when you go to the next project and you already have a sub-system like spawning built and ready to go!

If you want to take an even deeper dive, take a look at an article on Object Pooling: Taking Your Objects for a Swim that can help with performance issues you may run into while working with Spawning Systems.

Don’t forget to go checkout Survive and sign up for more Unity tips and updates:

How To Write a Valuable Commit Message




The best commit message out there. The one that tells you absolutely nothing about anything.

Commit messages are a funny thing. They’re not very valuable when you submit them, yet when a bug pops up in production or you’re building a release, knowing the right place to look (especially if it’s not your code change) can save hours.

The commit message is one of those things you do so that you can reap the benefits when needed. Sort of like locking your door at night before you go to bed. You don’t expect to need those locks, but when you do, you’re grateful you had them in place and locked.

So what can make a great commit message? First, I think it’s better to ask: what makes a great commit? This question can actually be trickier to answer, but I think answering it helps us write a better commit message.


A good commit can best be seen as one of many small commits. The reason is that when you commit often, you end up with a best-of-both-worlds scenario.

You see, some would prefer one giant chunk of completed code to complete a feature. This lets you see the entire solution together as one; it can be difficult to review or push changes to a main branch one-by-one. The good news is, most source control solutions now offer a way to squash (combine) all of the new commits on a branch before merging them into main, making it incredibly easy to accomplish this.


However, seeing individual commits can let you break things down more easily. You can get an idea of how the changes developed over time and it can help you pinpoint issues. You can also write more frequent commit messages instead of one really long message on a single massive commit. Oh and if you ever need to switch gears and work on that production bug ASAP, having your in progress work already committed makes it easy to switch to your main branch without losing any work.

Now that we know we want to have lots of little commits, let’s talk about what to put in the commit message. There are a couple of key details to include in your commit message based on your branching strategy. If you have a good branching strategy your commit message can contain more or less information as needed but you typically want to try and cover the following:

Include the ID of the related work ticket.

Include the type of commit.

Include a brief description of the change.


These three items make up a tasty commit stew and can be viewed as: ID-Type: Message. Let’s look at a couple of examples.

23489-Feature: Added new items to the drop down on search page

23489-Bug: Fixed the drop down from not opening completely

89012-Refactor: Cleaned up old, duplicate code for invoicing

1203-Dependencies: Updated dependencies ahead of new features

Each of these commit messages provides enough detail for you to be able to tie back the code changes directly to the work item they belong to as well as what the change is so you can easily help identify potential issues.
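Here’s a quick sketch of that payoff in practice. The repo, file names, and contents below are invented for the demo; the point is that once the ticket ID is in every message, `git log --grep` pulls up all of a work item’s changes in one command.

```shell
# Throwaway repo with two commits tagged against work item 23489.
cd "$(mktemp -d)"
git init -q
git config user.name "Dev" && git config user.email "dev@example.com"
echo "dropdown v1" > search.cs
git add . && git commit -q -m "23489-Feature: Added new items to the drop down on search page"
echo "dropdown v2" > search.cs
git add . && git commit -q -m "23489-Bug: Fixed the drop down from not opening completely"
# Months later, when a bug report references ticket 23489:
git log --oneline --grep="23489"
```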

When pushing these changes to a main branch you then have two options: you can either hang on to the individual commits (which can be useful for future debugging) or you can squash them all together and push the changes as a whole feature to the main branch (which will give you a cleaner history for the code base).
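The squash route is a single command away. Here’s a hypothetical, self-contained sketch (branch name and messages invented for the demo): the feature branch’s commits get combined into one staged change on main, which you then commit with a summary message. Note that `git init -b main` needs Git 2.28 or newer.

```shell
cd "$(mktemp -d)"
git init -q -b main
git config user.name "Dev" && git config user.email "dev@example.com"
git commit -q --allow-empty -m "Initial commit"
# A feature branch with two small commits on it:
git checkout -q -b feature/23489-dropdown
echo "v1" > search.cs && git add . && git commit -q -m "23489-Feature: Added new items to the drop down"
echo "v2" > search.cs && git add . && git commit -q -m "23489-Bug: Fixed the drop down from not opening completely"
# Squash them into a single commit on main:
git checkout -q main
git merge --squash feature/23489-dropdown   # stages the combined changes, no commit yet
git commit -q -m "23489-Feature: Search page drop down improvements"
git log --oneline                           # main shows one feature commit plus the initial commit
```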

At the end of the day, there’s no right way to write a commit message. There are however, wrong ways… The key is to do yourself and your fellow developers a favor and make it as easy as possible to identify what the code changes are.

How to Write Unit Tests in .NET

Unit testing is the idea of writing a separate set of code to automatically execute your production code and verify the results. It’s called a ‘unit’ test because the idea is that you are only testing a single ‘unit’ of code and not the entire application. Writing Unit Tests is often seen in Test Driven Development but can and should be used in any development environment.

Unit tests give your application increased testing coverage with very little overhead. Most unit tests execute extremely fast, which means you can write hundreds of them and run them all relatively quickly. In environments with multiple developers, unit tests can provide a sanity check for other developers making changes to existing code. But how do we actually go about writing a unit test?


In .NET there’s a couple of ways we can do this. In Test Driven Development, you write your tests first. That sounds backwards but it’s really about teaching your mind to think a certain way before you start writing any code because to write unit tests, you need highly decoupled code with few dependencies. When you do have dependencies you’ll want to use Dependency Injection (which we can cover at another time).

The .NET stack provides a built-in testing harness called MSTest. It gets the job done but doesn’t come with many bells and whistles. I personally prefer xUnit, which can be downloaded as a NuGet package. There is also NUnit, which is very similar, but I prefer xUnit because each test runs in its own instance of the test class, whereas in NUnit all of a fixture’s tests share a single instance.

So once we’ve installed xUnit we can start writing our first tests. The first thing to do is to create a new project in your solution. We can do this by opening up our project in Visual Studio and then right-clicking on our solution, choosing Add, and then new project. From the ‘Add a new project’ window we can search for ‘xUnit Test Project‘ and add that. I simply name the project ‘Test’ and click create.


By default a new class is created which contains your new test class. You should also see the ‘Test Explorer’ window in Visual Studio on the left-hand side. If you don’t, go to the ‘View’ menu and select it. This menu contains all of your tests that you write and allows you to run them all or run them individually. You can also kick off a single test to debug it.

Now the fun part, writing a test! Let’s keep it simple for starters and look at an example test:

[Fact]
public void ItCanAddTwoNumbers()
{
   var result = AddTwoNumbers(1, 4);
   Assert.Equal(5, result);
}

So this test is doing a couple of things. By defining [Fact] we are saying this function is a test function and not some other helper function. I try to name my test functions based around what the application is trying to do like, ‘ItCanAddTwoNumbers’ but that is completely up to you.

Within the test function we can then call the function we want to test, which in this case is ‘AddTwoNumbers(int num1, int num2).’ Simply calling this function and making sure the application doesn’t crash or throw an error is already a little bit of test coverage, which is great, but we can go further. We can not only make sure it doesn’t error, we can make sure we get the right results back.

We can do this using ‘Assert,’ which gives us some different options for verifying the results are correct. In this case, our test will check to make sure the ‘result’ variable does equal 5. If it does, our test will pass with green colors. If not, our test will fail and show red in our Test Explorer. This is great when you already have some tests written, you make some code changes, and then re-run all of your tests to make sure everything is still working correctly.

One last tip, instead of using [Fact] we can use [Theory] to allow us to pass in multiple test values quickly like this:

[Theory]
[InlineData(1)]
[InlineData(10)]
[InlineData(-3)]
public void ItCanAddOneToANumber(int number)
{
    var results = AddOneToNumber(number);
    Assert.Equal(number + 1, results);
}

Hopefully this gives you a brief introduction into how to write Unit Tests in .NET using xUnit. Unit testing will save you hours of debugging and troubleshooting time when that sneaky bug shows up that you can’t quite track down. Even if the bug is not in your unit tests, it can help you track down the issue faster because you’ll know it’s not in any of your code that you have unit tests for.

Always test and happy coding!

What is an SLO?

It means that you should work carefully and SLOwly…

Nah, I’m just kidding that’s not what it means at all. It actually stands for Service Level Objective, but what does that even mean? Is it like an SLA? What’s an SLA? Is that like an SLI? What the hell is an SLI…?

Don’t sweat any of it as this is the first part in an upcoming mini-series on what the hell all of the SL(insert letter)’s really are. Let’s dive in!

An SLO represents a level of service that a business intends to meet for its customers. In particular, it is an objective, a goal, or a benchmark. It is the target that the company has set to aim for, and it is the mark the customers and clients will come to expect. So what goes into an SLO?

Defining an SLO can be done in a number of ways. Some of the easier ways to define and set an SLO relate directly to technology. For example, a company such as AWS may set an objective of having their services up and running 99.99% of the time. That is their objective and goal. It is what they work towards maintaining at all times.


If AWS has an outage, let’s say the power goes out somewhere, and their system goes down for a couple of hours they would no longer be at their objective of being up for 99.99% of the time. This would let the AWS team know they need to create and invest in ways to mitigate such outages like routing traffic to a different data center.

AWS just so happens to provide an SLA (Service Level Agreement) which states some of their SLOs; you can view it here: Amazon Compute Service Level Agreement. An SLA is merely the agreement between AWS and their customers so that if they are not meeting their SLOs they can provide credit in return for the lack of service they have agreed to meet. Think of it as a way of saying, “hey, we’re sorry we didn’t do what we said we were going to do. Here’s a refund.”

Obviously missing their SLOs and having to offer up credits is not something AWS wants to do, which is why you’ll notice there is rarely a service outage for AWS. It does, however, let customers and clients know that AWS is committed to providing top-tier service. I wonder if that has anything to do with why they are so widely used…. 😉


If you take a peek at the AWS SLO link above you can see that they don’t actually target having their systems up and running 100% of the time. Why is that? The reality is that 100% is not realistic.

Consider the following example: in a single day there are 1440 minutes, and let’s say there is one tiny, minor little hiccup in the internet. Let’s say it’s so tiny that it doesn’t even take up a full second. Instead it takes up milliseconds… like… 0.0144 seconds. That little blip would cause AWS to miss their 100% mark. Perfection is the enemy of progress. Remember that.
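Running those numbers: a day is 1,440 minutes, or 86,400 seconds, so even that 0.0144-second blip drags the day just below a perfect score:

```latex
\text{availability} = \frac{86{,}400 - 0.0144}{86{,}400} \approx 99.99998\% < 100\%
```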

Instead, most services aim for somewhere that’s more acceptable. In some cases it can be 99.999% and in other cases it can be 80% (think of an internal service that provides customer data back to AWS. It’s not a critical system so if it fails 20% of the time, it’s not the end of the world). The point is that an objective is set and the company strives to achieve it.


Now I know we dove in a little deep there and the turns got twisty. That tends to happen when you start talking SL(insert letter here)’s because there is no hard and fast right way, BUT there are some best practices and I’ll continue this series and dive in a little deeper each time.

Hopefully you learned a little bit about what an SLO is and how it relates to the service a company is aiming to achieve for its customers. I recommend taking a look at another SLA from Google to help paint the picture (remember: the SLA is the agreement between company and customer; the SLO is the actual target the company is aiming for, the 99.99%): Google Compute Engine Service Level Agreement.

Read the next article in the series: What is an SLI?