Category Archives: Programming

Why Availability Isn’t Serving Your Users

Is your service highly available?

Is it 99.999% available?

The real question is, does it matter?

Availability is defined as the total ‘uptime’ of a given service over a window of time. The metric is typically counted in minutes, and anyone with experience in the Software-as-a-Service world has asked, “What’s the availability of the service?”
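
To put that number in perspective: 99.999% availability (“five nines”) over a 365-day year leaves room for only about five minutes of downtime, since 525,600 minutes × 0.001% ≈ 5.26 minutes.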

Availability becomes important when users are vetting which vendors or businesses they would like to work with. A high level of availability gives users confidence that when they need to use the service, it will be online and available.

But as technology grows, and man does it grow fast, availability is becoming a buzzword, and here’s why.

Availability Is Becoming a Buzzword

A service or application can be highly available. It can be online and available for 99.999% of the year, but the real question is, is it working as end users expect it to work?

Let me explain.

Availability is often calculated exactly that simply: the number of minutes a service is online over a period of time. Often a simple ‘ping’ or health check against the hardware and endpoints of the service is all that is used to measure availability, but this doesn’t account for bugs, degradations, partial outages, low performance, or missed customer expectations.

If a user attempts to use a service your team is offering, and it’s online but takes far too long to complete its work, does the user really care that the service is available 99.999% of the time?

Availability metrics like this tell you almost nothing about the reliability or quality of the service beyond the fact that it is online. With the growth of cloud computing and big players like AWS, standing up a handful of servers that can reach five nines of availability has never been easier. Being highly available is no longer something that separates the good from the great; it’s a flat-out expectation.

Being Highly Available is no longer something that separates the Good, from the Great. It’s a flat out Expectation.

Not to worry though: where high availability is now becoming the norm, business-specific SLOs and SLAs will be the next phase that separates the good from the great. Proper SLOs that are built around specific business use cases, and that meet customer expectations, will soon become the topic of conversation that replaces the low-value data around availability.

Superior SLOs that are instead based on the business, not the technology, will reign supreme. SLOs such as:

  • Users can execute the search functionality, and it completes successfully 99.9% of the time
  • Users can execute a job that completes successfully 99.99% of the time
  • Users can perform the key functionality of the service 99.9% of the time

Business-specific SLOs like these ensure not only that the system is available, but that it’s working exactly as it’s expected to work. They will be the next wave of, “How available is your service?”
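
As a rough sketch of how the first objective above might be measured (the class and property names here are illustrative, not from any particular system), an SLI for search could be computed like this:

// Illustrative only: a success-rate SLI for the search functionality,
// compared against a 99.9% SLO target.
public class SearchSlo
{
    public long TotalSearches;      // all search requests in the SLO window
    public long SuccessfulSearches; // searches that completed successfully

    public double SuccessRate =>
        TotalSearches == 0 ? 1.0 : (double)SuccessfulSearches / TotalSearches;

    public bool MeetsObjective(double target = 0.999) => SuccessRate >= target;
}

// Example: 99,950 successful searches out of 100,000 gives a 99.95% SLI,
// which meets a 99.9% objective.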

I highly encourage all teams adopting SRE culture and SLOs to take that next step into what meeting customer expectations is really all about. When your next business opportunity arises and the crowd wants to talk about “availability,” they’ll be blown away that you can provide a level of detail around the health and quality of your services that goes so much deeper, and is so much more impactful, than simply “availability.”

The Error Budget

In two of my previous posts I’ve covered the SLO and SLI concepts. If you’re not familiar with them, I would start there:

What is an SLO?

What is an SLI?

But this post is really the cream of the crop when it comes to Site Reliability Engineering. The Error Budget, as it’s called, takes the SLO and SLI concepts from above and gives us something actionable to work with. Let’s dive into the specifics.



The Error Budget concept, originally created by Google, bridges the gap between Product teams and Engineering teams. Where Product teams want new, bigger, and more features, Engineers are historically burdened with balancing deliverables, technical debt, and well… keeping production from crashing on a Friday afternoon.

…keeping production from crashing on a Friday afternoon.

But no more!

The Error Budget aims to strike a balance between what’s wanted, like new features, and what’s needed, like performance. It works by taking the concepts of SLOs and SLIs, tying them together, and drawing a tight line in the sand for when it’s time to shift gears and focus on technical debt.

This is accomplished by defining SLOs around an application’s performance. Should the application have 99.9% uptime? Should the application be able to handle a request in under 0.5 seconds? Should the application degrade gracefully in the event of a hardware failure? These are all areas an SLO can aim for, and once we have the SLO, it’s a matter of implementing the right SLI, or measuring stick, to ensure we’re meeting our targets.

So what actually happens when we don’t meet the objectives of an SLO? What happens when our application isn’t available 99.9% of the time? What happens when that last feature push started a slow degradation of our application’s ability to handle a request in under 0.5 seconds? The Error Budget happens.

The Error Budget is a formal agreement, written and signed off on by all of the involved parties, that simply states what must happen in the event of a missed SLO. The Error Budget is not a punishment or a tool to place blame. The Error Budget is a gauge to let us know, “Hey, we’ve done too much and neglected some key areas of service we feel are important; let’s take some time to set that straight.”

The Error Budget is not a punishment or tool to place blame.

When an SLO is missed, the Error Budget covers what happens next. In most cases, a feature freeze is put into place for a certain number of days. This allows the engineering team to shift their focus and prioritize work that will directly improve the SLO target. If the application were to suffer an outage and miss its 99.9% uptime target, the engineering team can now focus not only on how to fix the issue, but on how to prevent it from occurring again in the future.

So why is it called an “Error Budget” anyways? It’s named this way because you do in fact have a “budget” that allows teams to identify when it’s acceptable to take risk and when it’s time to pivot. One thing you do not want to do is approach a team of experienced and well-trained members and implement concepts and red tape that slow them down. If you have a team that can develop features and deliver on performance at the same time, why would you want to disrupt that?

With an Error Budget, you don’t have to worry about that, because the budget gives teams the room they need to work quickly and efficiently, and only when an SLO is missed does the team need to shift gears and change direction. The budget itself is simply the small gap between the SLO target and 100%. Let’s look at an example:

A product organization decides that they want to ensure their application accepts incoming requests, without errors, 99.9% of the time over a 30-day period for all of their 1,000 customers. On average, let’s say customers submit roughly 100,000 requests over that same 30-day period. Our budget then becomes the remaining 0.1% of those 100,000 requests, which allows our teams 100 failed requests before our SLO is violated and our Error Budget is enacted.
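
As a quick sketch of that math (using the numbers from the example above):

// Illustrative: how many failed requests can we absorb before the SLO is violated?
double sloTarget = 0.999;         // 99.9% success objective
long requestsInWindow = 100_000;  // requests over the 30-day period

long errorBudget = requestsInWindow - (long)(requestsInWindow * sloTarget); // 100 failed requests
long failedSoFar = 10;

bool budgetExhausted = failedSoFar > errorBudget; // false, so carry on as usual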

If you have a team that can develop features and deliver on performance at the same time, why would you want to disrupt that?

By operating in this fashion, our system can handle a small hiccup without us having to freeze all features and enact the Error Budget. We can also allow our teams to take small, acceptable risks, knowing that if that mid-day release for a key customer goes wrong and we miss our SLO, it’s time to rethink business-hour deployments. On the flip side, if that release only interrupts the system for a moment and only 10 failed requests are counted, the team is still within their SLO and should carry on as they were.

The Error Budget is more than just a way to tell teams when to pay down technical debt. It provides us the foundation for measuring and accepting risk, helps us to bridge the gap between features and reliability, and finally gets product teams and engineering teams speaking the same language.

I encourage you to take a look at Google’s example Error Budget and hit me with any questions you may have!


How To Set Up A GitHub Repo For A Unity Project

If you’re not familiar with using Git you can take a peek at my quick guide Git It? or spend some time browsing the web to get familiar. Either way, you will need to install Git Bash for this tutorial. You can grab a copy of all the necessary install files right here: https://git-scm.com/downloads



The first thing we want to do is set up our new repository on GitHub. You will need to first create a GitHub account. Once you’ve created your account, you can click the little plus sign in the top-right corner of your screen and select ‘New repository.’

This will take you to the ‘Create a new repository‘ screen to set up your new repo. You will need to provide a name for your repository and then select whether you want it to be viewable by the general public. In this example, I will keep the repo private.

I do not want to create a ReadMe or .gitignore at this time but I will do so after I have created my Unity project. Go ahead and press that little green button, ‘Create repository.’

Now that the new repo is set up, GitHub will kindly provide you with some instructions on how to push your new project to this online repo.

But first we need to set up a new Unity project. I will walk through how to create a new Unity 2D project and then we will set up a .gitignore file to exclude any files we don’t want to include in our repo. Finally we will push all of our changes up to GitHub.

Let’s walk through the process.

You’ll want to open Unity Hub and create a new project.

Now that the project has been created you’ll want to navigate to the directory where the project has been saved. Once you are in the directory, you can right-click and select, ‘Git Bash Here.’ This will open the git command line.

The first command we want to run is ‘git init‘. This will initialize the Git repo locally and allow you to start adding and committing your Unity project for source tracking.

Now that the repo is set up, we can run ‘git status‘ to view the files that we can potentially add to our repo, BUT before we add anything, we want to include our .gitignore file. This will allow us to only include the files we need and ignore any temp or unneeded files.

To do this you will need to create a new empty text file and rename it to ‘.gitignore’ so that Git will recognize it as the file that tells it which files to ignore.

Now go ahead and open the file in a text editor and add the patterns for the files we wish to ignore. You can find a good example of what to include by searching on Google for a Unity .gitignore file or you can use the one here: https://raw.githubusercontent.com/github/gitignore/master/Unity.gitignore.
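
For reference, the linked file ignores Unity’s generated folders and project files; a few typical entries look like this (the file at the URL above is the authoritative, complete version):

# Unity-generated folders and files (excerpt)
[Ll]ibrary/
[Tt]emp/
[Oo]bj/
[Bb]uild/
[Bb]uilds/
[Ll]ogs/

# IDE project files that Unity regenerates automatically
*.csproj
*.sln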

Once you’ve added that to the contents of the .gitignore file, you can save the file. Back in the git command line window you can run the ‘git status‘ command and see the remaining files we will include in our repo.

We will now run the ‘git add .‘ command, which will stage all of the files that are not ignored to be a part of our project.

Next we will run the ‘git commit‘ command which will commit our newly added files to our main branch. We will actually use the ‘git commit -m‘ command so we can include a little message on what these changes are.

After adding our files it’s time to finally push them up to GitHub. To do this we will run two more commands. The first is to link this local Git repo to the one on GitHub.com. The second is to push the changes from our machine up to GitHub.

GitHub provides the first and second command for you:

Execute the ‘git remote add…’ command followed by the ‘git push‘ command (you may need to change the push command from ‘main’ to ‘master’) and you’re done.
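
Putting the whole flow together, the commands look roughly like this (the remote URL is a placeholder; use the one GitHub shows you for your repo):

git init
git add .
git commit -m "Initial Unity project commit"
git remote add origin https://github.com/<your-username>/<your-repo>.git
git push -u origin main    # change 'main' to 'master' if that's your default branch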

You now have a brand new Unity project on GitHub. If you’re interested in knowing more about how to work with Git or GitHub, leave a comment!


How To Create a Glow Effect In Your Unity 2D Project

This process took me entirely too long to figure out how to do myself. I even spent a couple of bucks on a solution off the Unity Asset Store and still struggled.



I wanted to write this tutorial for anyone else that might want to add glow effects to their projects, to hopefully spare them from riding the struggle bus like I did. Take a peek at the end result here:

Play the game right here!

To get started, create a new Unity 2D project. Once it’s created, open the Package Manager under the Window menu item, search for the Universal RP package, and install it.

In the Project view, right-click and select “Create >> Rendering >> Universal Render Pipeline >> Pipeline Asset (Forward Renderer)”

This will create two objects. Name the first object 2D Render Pipeline and delete the object named 2D Render Pipeline_Renderer.

Right-click in the Project view again and select “Create >> Rendering >> Universal Render Pipeline >> Pipeline Asset (Forward Renderer)”

Rename this object to 2D Renderer. You should now have two render objects like this:

Click on the 2D Render Pipeline object and in the Inspector, drag the 2D Renderer object into the Renderer List. Also check the HDR checkbox under Quality.

Now, under the Project Settings (which can be found under “File >> Build Settings >> Player Settings”), select the Graphics tab and set the Scriptable Render Pipeline Settings to the new 2D Render Pipeline object we just created.

Under the Main Camera Inspector you will see some new options. Check the box for Post Processing under the Rendering drop-down.

If you want, you can change the background of the camera to black to help add to the glow effect.

In the Hierarchy, right-click and create a new 2D Sprite. On the Sprite Renderer of the new object, select the Knob sprite (or provide your own).

Also on the Sprite Renderer, set the Material option to Sprite-Lit-Default (you will have to click on the little eye to show all of the options).

Now all you have to do is add a Light object to your new sprite and voilà, he shall GLOW!

You can play around with the effect of the glow by tweaking these settings on the Point Light 2D.
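
If you’d rather adjust the glow from code, here’s a minimal sketch that pulses the light’s intensity (assuming a recent URP version where Light2D lives in UnityEngine.Rendering.Universal; older versions used the Experimental namespace):

using UnityEngine;
using UnityEngine.Rendering.Universal; // Light2D (Experimental namespace in older URP versions)

// Sketch: animate the Point Light 2D's intensity so the glow gently pulses.
public class GlowTweaker : MonoBehaviour
{
    public Light2D glowLight;        // the Point Light 2D attached to the sprite
    public float baseIntensity = 2f; // arbitrary starting values
    public float pulseAmount = 1f;
    public float pulseSpeed = 2f;

    void Update()
    {
        if (glowLight != null)
            glowLight.intensity = baseIntensity + Mathf.Sin(Time.time * pulseSpeed) * pulseAmount;
    }
}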

If you want to watch the live stream of the making of this tutorial you can check it out here: Twitch Live Stream or you can catch the shortened version on YouTube!


How To Set Up A Status Page For Your Website For Free

With all the talk around SLOs and SLIs, what soon follows is a request or need for transparency. In the age of technology, when your website or web service goes down, people know.



In most cases, they actually end up knowing faster than you, and when there is an issue the expectation to provide an immediate response is high. Thankfully there is already a great tool to help make this happen.

Say hello to the Atlassian Status Page: The Hardest Working Status Page

This status page provides a means to communicate the status of an event in which customers or end users are affected. From the end user’s standpoint, they can easily subscribe to receive updates the moment an incident occurs. This allows them to react to or mitigate any downstream issues with their own services, and it avoids any manual processes around notifying users.

The status page also gives users a central place to receive additional updates on the status of an ongoing issue as it is being resolved. Status Page maintainers can provide updates on the findings of the incident to customers as well as how soon they expect the issue to be resolved.

Status Page also allows you to break down individual components of your services to provide more accurate reporting and status updates. This can be useful for larger products that may have multiple services or websites.

Atlassian’s Status Page also allows a number of application integrations for alerting. Alerts can be sent out to multiple chat services like Teams and Slack as well as ticketing systems like Jira and notification systems like Opsgenie. If you really want to, you can automate a Twitter notification to be sent out as well.

In today’s world of technology things move fast. Customer expectations are higher than ever and response times matter. Keeping your end-users informed and up-to-date as production issues are resolved is vital to meeting transparency and communication expectations.

Try Status Page out for free: Atlassian Status Page


How To Mock Dependencies In Unit Tests

Writing unit tests for your code base is extremely valuable, especially in the DevOps world we’re living in. Being able to automate and execute those unit tests after each round of changes ensures your code base is, at the very least, functional.



But sometimes we need to be able to write a test for a function that connects to some other service. Maybe it connects to a database or to a 3rd party service. Maybe that database or service isn’t currently available from your local workstations or hasn’t completed development yet. Better yet, what if we want to run our unit tests as part of our DevOps build pipeline on a server that doesn’t have an open connection to our 3rd party service?

In any of these cases, we need to find a way to write our code and test our code without having to connect to that extra service. In .NET this can be done with Moq. Let’s start off with what Moq is. Moq is an open-source mocking library hosted on GitHub. It provides functionality to write “fake” implementations of things like databases that we can then test against. We can get a better understanding by looking at some examples.

Say we have a function that looks like this:

int AddDatabaseNumbers()
{
   // Connect to the database, run the query, and add the two results together
   Database db = new DatabaseHelper();
   Connection conn = db.CreateConnection("172.23.42.134");
   string query = "select number1, number2 from numberTable";
   Table results = db.Execute(query);
   return results.number1 + results.number2;
}

When we run this function it is going to connect to our database, execute a query, and then add the two results together. If we were to write a unit test for this it might look like:

[Fact]
public void TestAddNumbers()
{
   AddDatabaseNumbers();
}

In our development environment this test might pass just fine. It connects to the database and executes the query no problem BUT when we push this code to our repository and kick off the next build, the unit test fails. How come?

Well in this case, our build server doesn’t have a direct line of sight to our database, so it can’t connect and therefore the unit test fails trying to make that connection. This is where Moq can help!

Instead of having the function connect to the database, we can write a “fake” connection that pretends to do what the “db.CreateConnection” does. This way the rest of our code can be tested and we don’t have to worry about always having a database available. We can also use Dependency Injection (more on that in a later post)!

Here’s how we can rework this with a Mock. First let’s refactor our function to pass in the Database object so we can Mock its functions:

int AddDatabaseNumbers(Database db)
{
   Connection conn = db.CreateConnection("172.23.42.134");
   string query = "select number1, number2 from numberTable";
   Table results = db.Execute(query);
   return results.number1 + results.number2;
}

Now let’s look at how our unit test will change to mock the database functionality:

[Fact]
public void TestAddNumbers()
{
   // Create a mock in place of a real Database
   // (assumes CreateConnection and Execute are virtual, or Database is an interface)
   var mock = new Mock<Database>();

   // Fake the connection so no real database is needed
   mock.Setup(db => db.CreateConnection(It.IsAny<string>())).Returns(new Connection());

   // Fake the query execution so any query returns a Table without touching a database
   mock.Setup(db => db.Execute(It.IsAny<string>())).Returns(new Table());

   AddDatabaseNumbers(mock.Object);
}

What we’ve done here is, instead of passing a true Database object to our function, we’re passing in this mock one. The mock has had both of its methods, ‘CreateConnection’ and ‘Execute’, set up so that instead of actually doing anything they just return a ‘Connection’ and a ‘Table’ object. We’ve essentially faked all of the functionality that relates to the database so that we can run our unit test for all of the other code.

Now when we push our code changes up to our repository, the unit tests run and pass with flying colors. And since we used Dependency Injection to pass our Database object to the function, our unit tests can use the mock object while our actual production code can pass the real Database object. Both instances will work as they should and our code is all the better for it!
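
To make that concrete, the two call sites might look like this (DatabaseHelper is the real implementation from the first example; the mock comes from the test above):

// In production code: pass the real database implementation
int total = AddDatabaseNumbers(new DatabaseHelper());

// In the unit test: pass the mocked object instead
int testTotal = AddDatabaseNumbers(mock.Object);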

I highly encourage you to write unit tests for as much code as you can and to take a look at the Quickstart guide for Moq to get a better understanding: Moq Quickstart Guide.


How To Build A Spawner In Unity

Spawning objects in Unity exists in just about every project out there. Whether we’re spawning enemies for a player to fight or collectables for them to pick up, at some point you’re going to need systems that can create and manage these objects. Let’s walk through what this can look like in a Unity 2D project.

There’s no single best way to implement a system that creates objects, and every project has slightly different needs, but there are some commonalities and ways you can make your system reusable. For this particular example we’re going to look at spawning for Survive!

There are two main objects currently created in Survive using the current spawning system, with a handful more on the way. The great part is that implementing each object’s spawning logic and system is fairly simple after the first one is in place. Let’s take a look at how that’s done.

Currently we are creating two objects at game start. The white blocks that act as walls and the green enemies the player must avoid. Both of these are built using the same spawning system and code base. We do this so that we can then re-use these objects and scripts to easily add new features to our project. Here is what our spawner code currently looks like:

using UnityEngine;

public class Spawner : MonoBehaviour
{
    public GameObject objectToSpawn;
    public GameObject parent;
    public int numberToSpawn;
    public int limit = 20;
    public float rate;

    float spawnTimer;

    // Start is called before the first frame update
    void Start()
    {
        spawnTimer = rate;
    }

    // Update is called once per frame
    void Update()
    {
        if (parent.transform.childCount < limit)
        {
            spawnTimer -= Time.deltaTime;
            if (spawnTimer <= 0f)
            {
                for (int i = 0; i < numberToSpawn; i++)
                {
                    Instantiate(objectToSpawn, new Vector3(this.transform.position.x + GetModifier(), this.transform.position.y + GetModifier())
                        , Quaternion.identity, parent.transform);
                }
                spawnTimer = rate;
            }
        }
    }

    // Returns a random offset between -1 and 1 so spawned objects are
    // scattered around the spawner instead of stacking on top of it
    float GetModifier()
    {
        float modifier = Random.Range(0f, 1f);
        if (Random.Range(0, 2) > 0)
            return -modifier;
        else
            return modifier;
    }
}

And here is what our script looks like from the Editor:

As you can see, our script allows us to pass in the ‘Object To Spawn’, which is the prefab of the object we want to create. We can then assign it an empty parent object (to help keep track of our objects) and from there we are free to tweak the number of objects it spawns at a time, as well as whether there should be a limit and how frequently they should be spawned.

With this approach we have a ton of flexibility in how we can control and manipulate object creation. We could attach another script to this object that randomly moves the spawner to different places on the screen (this is how it currently works) or we could create multiple spawners and place them in key locations on the map if we wanted more control over the spawn locations. The point is, we have options.
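
As a rough sketch of that first idea (the interval and area values here are arbitrary, not the ones used in Survive), a companion script that relocates the spawner could look like this:

using UnityEngine;

// Sketch: moves the spawner to a random position within a rectangular area
// every few seconds, so objects appear in different places on the screen.
public class SpawnerMover : MonoBehaviour
{
    public float moveInterval = 3f;                    // seconds between moves (arbitrary)
    public Vector2 areaHalfSize = new Vector2(8f, 4f); // half-extents of the spawn area (arbitrary)

    float moveTimer;

    void Update()
    {
        moveTimer -= Time.deltaTime;
        if (moveTimer <= 0f)
        {
            transform.position = new Vector3(
                Random.Range(-areaHalfSize.x, areaHalfSize.x),
                Random.Range(-areaHalfSize.y, areaHalfSize.y),
                0f);
            moveTimer = moveInterval;
        }
    }
}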

The best part about this approach is that we can easily include or add another object to use this same functionality with little effort. Here’s the same script, same code, but for creating wall objects:

And a third time for a new feature I’m currently working on:

Each object and system has its own concept and design for how it should work. For example, the wall objects need to create many objects quickly (a higher rate) and then stop (reach their limit). The zombie objects need to be created over and over as the player destroys them, but not as fast (a slower rate). The new Heart Collectible needs to be created only once until the player collects it (a limit).

When building objects and writing scripts in Unity we should always be thinking of how we can create something that is reusable. It might not be reusable in this particular project, but it can save you mountains of time when you go to the next project and you already have a sub-system like spawning built and ready to go!

If you want to take an even deeper dive, take a look at an article on Object Pooling, Taking Your Objects for a Swim, which can help with performance issues you may run into while working with spawning systems.

Don’t forget to go checkout Survive and sign up for more Unity tips and updates: