How To Set Up A Status Page For Your Website For Free

With all the talk around SLOs and SLIs, what soon follows is a request or need for transparency. In the age of technology, when your website or web service goes down, people know.



In most cases, they actually end up knowing faster than you, and when there is an issue the expectation to provide an immediate response is high. Thankfully there is already a great tool to help make this happen.

Say hello to the Atlassian Status Page: The Hardest Working Status Page

This status page provides a means to communicate the status of an event in which customers or end users are affected. From the end user’s standpoint, they can easily subscribe to receive updates the moment an incident occurs. This allows them to react to or mitigate any downstream issues with their own services, and it avoids any manual processes around notifying users.

The status page also gives users a central place to receive additional updates on the status of an ongoing issue as it is being resolved. Status page maintainers can provide updates on the findings of the incident to customers, as well as how soon they expect the issue to be resolved.

Status Page also allows you to break down individual components of your services to provide more accurate reporting and status updates as well. This can be useful for larger products that may have multiple services or websites.

Atlassian’s Status Page also allows a number of application integrations for alerting. Alerts can be sent out to multiple chat services like Teams and Slack as well as ticketing systems like Jira and notification systems like Opsgenie. If you really want to, you can automate a Twitter notification to be sent out as well.

In today’s world of technology things move fast. Customer expectations are higher than ever and response times matter. Keeping your end-users informed and up-to-date as production issues are resolved is vital to meeting transparency and communication expectations.

Try Status Page out for free: Atlassian Status Page


How To Mock Dependencies In Unit Tests

Writing unit tests for your code base is extremely valuable, especially in the DevOps world we’re living in. Being able to automate and execute those unit tests after each round of changes ensures your code base is, at the very least, functional.



But sometimes we need to be able to write a test for a function that connects to some other service. Maybe it connects to a database or to a 3rd party service. Maybe that database or service isn’t currently reachable from your local workstations or hasn’t completed development yet. Or, what if we want to run our unit tests as part of our DevOps build pipeline on a server that doesn’t have an open connection to our 3rd party service?

In any of these cases, we need to find a way to write our code, and test our code, without having to connect to that extra service. In .NET this can be done with Moq. Let’s start off with what Moq is. Moq is an open-source mocking library for .NET, developed on GitHub. It provides functionality to write “fake” implementations of dependencies, like databases, that we can then test against. We can get a better understanding by looking at some examples.

Say we have a function that looks like this:

int AddDatabaseNumbers()
{
   IDatabase db = new DatabaseHelper();
   Connection conn = db.CreateConnection("172.23.42.134");
   string query = "select number1, number2 from numberTable";
   Table results = db.Execute(query);
   return results.number1 + results.number2;
}

When we run this function it is going to connect to our database, execute a query, and then add the two results together. If we were to write a unit test for this it might look like:

[Fact]
public void TestAddNumbers()
{
   AddDatabaseNumbers();
}

In our development environment this test might pass just fine. It connects to the database and executes the query, no problem. But when we push this code to our repository and kick off the next build, the unit test fails. How come?

Well in this case, our build server doesn’t have a direct line of sight to our database, so it can’t connect and therefore the unit test fails trying to make that connection. This is where Moq can help!

Instead of having the function connect to the database, we can write a “fake” connection that pretends to do what the “db.CreateConnection” does. This way the rest of our code can be tested and we don’t have to worry about always having a database available. We can also use Dependency Injection (more on that in a later post)!
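One thing worth noting: Moq can only override members that are virtual or declared on an interface, so it works best when the database dependency sits behind a small interface. Here is a sketch of what that abstraction might look like (the type and member names are illustrative, standing in for whatever your data layer actually returns):

```csharp
// Stand-in types for whatever your data layer returns.
public class Connection { }

public class Table
{
    public int number1;
    public int number2;
}

// Moq can generate a fake implementation of an interface like this,
// so the code under test never needs a real database.
public interface IDatabase
{
    Connection CreateConnection(string host);
    Table Execute(string query);
}
```
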

Here’s how we can rework this with a Mock. First let’s refactor our function to pass in the Database object so we can Mock its functions:

int AddDatabaseNumbers(IDatabase db)
{
   Connection conn = db.CreateConnection("172.23.42.134");
   string query = "select number1, number2 from numberTable";
   Table results = db.Execute(query);
   return results.number1 + results.number2;
}

Now let’s look at how our unit test will change to mock the database functionality:

[Fact]
public void TestAddNumbers()
{
   var mock = new Mock<IDatabase>();
   mock.Setup(db => db.CreateConnection(It.IsAny<string>())).Returns(new Connection());
   mock.Setup(db => db.Execute(It.IsAny<string>())).Returns(new Table { number1 = 2, number2 = 3 });

   var result = AddDatabaseNumbers(mock.Object);
   Assert.Equal(5, result);
}

What we’ve done here is that instead of passing a true database object to our function, we’re passing in this mock one. The mock has had both of its methods, ‘CreateConnection’ and ‘Execute’, set up so that instead of actually doing anything they just return a ‘Connection’ and a ‘Table’ object. We’ve essentially faked all of the functionality that relates to the database so that we can run our unit test against all of the other code.

Now when we push our code changes up to our repository, the unit tests run and pass with flying colors. And since we used Dependency Injection to pass our database object to the function, our unit tests can use the mock object while our actual production code passes the real database object. Both instances will work as they should and our code is all the better for it!

I highly encourage you to write unit tests for as much code as you can, and to take a look at the quickstart guide for Moq to get a better understanding: Moq Quickstart Guide.


How To Build A Spawner In Unity

Spawning objects in Unity is part of just about every project out there. Whether we’re spawning enemies for a player to fight or collectables for them to pick up, at some point you’re going to need systems that can create and manage these objects. Let’s walk through what this can look like in a Unity 2D project.

There’s no one best way to implement a system that creates objects, and every project has slightly different needs, but there are some commonalities and ways you can design your system to be reusable. For this particular example we’re going to look at spawning in Survive!

There are two main objects currently created in Survive using the current spawning system, with a handful more on the way. The great part is that implementing each object’s spawning logic is fairly simple after the first one is in place. Let’s take a look at how that’s done.

Currently we are creating two objects at game start. The white blocks that act as walls and the green enemies the player must avoid. Both of these are built using the same spawning system and code base. We do this so that we can then re-use these objects and scripts to easily add new features to our project. Here is what our spawner code currently looks like:

using UnityEngine;

public class Spawner : MonoBehaviour
{
    public GameObject objectToSpawn;
    public GameObject parent;
    public int numberToSpawn;
    public int limit = 20;
    public float rate;

    float spawnTimer;

    // Start is called before the first frame update
    void Start()
    {
        spawnTimer = rate;
    }

    // Update is called once per frame
    void Update()
    {
        if (parent.transform.childCount < limit)
        {
            spawnTimer -= Time.deltaTime;
            if (spawnTimer <= 0f)
            {
                for (int i = 0; i < numberToSpawn; i++)
                {
                    Instantiate(objectToSpawn, new Vector3(this.transform.position.x + GetModifier(), this.transform.position.y + GetModifier())
                        , Quaternion.identity, parent.transform);
                }
                spawnTimer = rate;
            }
        }
    }

    float GetModifier()
    {
        float modifier = Random.Range(0f, 1f);
        if (Random.Range(0, 2) > 0)
            return -modifier;
        else
            return modifier;
    }
}

And here is what our script looks like from the Editor:

As you can see, our script allows us to pass in the ‘Object To Spawn’, which is the prefab of the object we want to create. We can then assign it an empty parent object (to help keep track of our objects) and from there we are free to tweak the number of objects it spawns at a time, as well as whether there should be a limit and how frequently they should be spawned.

With this approach we have a ton of flexibility in how we can control and manipulate object creation. We could attach another script to this object that randomly moves the spawner to different places on the screen (this is how it currently works) or we could create multiple spawners and place them in key locations on the map if we wanted more control over the spawn locations. The point is, we have options.
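As a sketch of that first option, here is roughly what a companion script that randomly relocates the spawner could look like (the interval and bounds values are made up; tune them to your own map):

```csharp
using UnityEngine;

// Teleports the object it's attached to, to a random point
// within 'bounds', every 'interval' seconds.
public class RandomMover : MonoBehaviour
{
    public float interval = 2f;
    public Vector2 bounds = new Vector2(8f, 4.5f);

    float moveTimer;

    void Update()
    {
        moveTimer -= Time.deltaTime;
        if (moveTimer <= 0f)
        {
            transform.position = new Vector3(
                Random.Range(-bounds.x, bounds.x),
                Random.Range(-bounds.y, bounds.y));
            moveTimer = interval;
        }
    }
}
```

Attach it to the same GameObject as the Spawner and new objects will start appearing somewhere else every couple of seconds, with no changes to the spawning code itself.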

The best part about this approach is that we can easily include or add another object to use this same functionality with little effort. Here’s the same script, same code, but for creating wall objects:

And a third time for a new feature I’m currently working on:

Each object and system has its own concept and design for how it should work. For example, the wall objects need to create many objects quickly (higher rate) and then stop (reach their limit). The zombie objects need to be created over and over as the player destroys them, but not as fast (slower rate). The new Heart Collectible needs to be created only once until the player collects it (limit).

When building objects and writing scripts in Unity we should always be thinking of how we can create something that is reusable. It might not be reusable in this particular project, but it can save you mountains of time when you go to the next project and you already have a sub-system like spawning built and ready to go!

If you want to take an even deeper dive, take a look at an article on Object Pooling: Taking Your Objects for a Swim that can help with performance issues you may run into while working with Spawning Systems.

Don’t forget to go check out Survive and sign up for more Unity tips and updates!

How to Host Your Unity Game in AWS

One of the most difficult things to accomplish when creating new projects is making the project accessible. Typically, if you are creating a mobile application or even a web application, there is significant work to get that application deployed.

In this tutorial we’ll walk through how to get a basic Unity 2D project deployed to the web so that you can start collecting feedback.



Before we start, let’s set the bar. In no way do I claim to be a Unity or AWS expert. There are most likely pieces of this tutorial that are not complete. If you find an error or notice somewhere that something is incorrect, please let me know so that I can update it for everyone else!

Now let’s get nerdy!

So, first things first. If you haven’t already, create a Unity 2D project in Unity. If you’re not familiar with Unity, take some time to get up-to-speed at https://unity.com/. Once you feel a little more comfortable and are able to create a simple 2D project, head back over here.

Within Unity, head up to the Menu bar and let’s create a build. Select ‘File > Build Settings’ and under ‘Platform’ select ‘WebGL’. If the option isn’t available you may need to open up Unity Hub and install the WebGL Build Support module.

Before you build your project as WebGL, be sure to check ‘Development Build’ and that you have your Scene selected to be included in the build. Then go ahead and click the ‘Build’ button and select where you want the application to be built.

I don’t know that we need the ‘Development Build’ option checked every time but when I first attempted this process I found I wasn’t able to get the application working. After some research, I found there was a bug and/or workflow issue in Unity and creating at least one development build was required.

Now that your build is completed, the Unity side of things is done. You can test your build just to be sure everything is working by clicking on the ‘Build and Run’ button in the build settings window. I’d recommend making sure everything is kosher before going on.

Now let’s head over to AWS. If you don’t already have an AWS account setup, go ahead and take the time to do that at: https://aws.amazon.com/console/. Once you are setup, go ahead and log in to the AWS Console.

What we’re going to do is deploy our Unity project to an S3 Bucket (which is just an online folder) and host it as a static website.

Within the AWS Console, either search under Services for ‘S3’ or find it in the list.

Once you’re in S3 you’ll want to select the orange ‘Create Bucket’ button on the far-right. Go ahead and give your bucket a name and then scroll down to ‘Bucket settings for Block Public Access’. Uncheck each of these options, then scroll to the bottom of the screen and select ‘Create Bucket’.

You should be returned to the main screen and now see your bucket. Go ahead and select it. You should see a new ‘Upload’ button on the far-right. Select that button and upload your Unity project. When you upload, select ‘Add Files’ to add the ‘index.html’ file and then select ‘Add Folder’ for each of the ‘Build’ and ‘TemplateData’ folders. Once all items are added go ahead and upload them.

Back within our bucket, select ‘Properties’ and scroll to the bottom of the screen to ‘Static website hosting’.

Select the ‘Edit’ button off to the right and match your settings like this:
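If the console layout changes (or you’d rather script it), the same settings can be expressed as an S3 website configuration document and applied with the AWS CLI’s `put-bucket-website` command. A sketch, with both documents pointed at our Unity page:

```json
{
    "IndexDocument": { "Suffix": "index.html" },
    "ErrorDocument": { "Key": "index.html" }
}
```

Pointing the error document at index.html is optional; it mirrors the CloudFront error-page step we set up further down.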

Save those changes and head back to your bucket. Now select ‘Permissions’, just next to the ‘Properties’ tab. Under permissions find the ‘Bucket Policy’ and select ‘Edit’. Paste in the following policy and replace the ‘Resource’ with the name of your bucket. There are two spots to do this and I’ve named them ‘YourBucketName’ below:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YourBucketName/*"
        },
        {
            "Sid": "2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1L55VHCJEWRMZ"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YourBucketName/*"
        }
    ]
}

Now we are incredibly close to being done. Back in AWS, under Services, let’s now search for ‘CloudFront‘ and select it. On the CloudFront menu we’ll select ‘Create Distribution’ and then we’ll select the ‘Get Started’ button under the ‘Web’ option.

There’s not too much we need to change here. For the very first setting, ‘Origin Domain Name’, you can click in the field and a drop-down will appear for you to select your bucket. No other option needs to be changed, so you can scroll on down to the bottom and select ‘Create Distribution’ to complete the setup.

Once the distribution is created, go ahead and select it to open it up and we’re going to make one final change before wrapping things up.

Under the ‘Error Pages’ tab go ahead and select ‘Create Custom Error Response’. On the new screen, set ‘HTTP Error Code’ to ‘403’ and under the ‘Customize Error Response’ option select ‘Yes’. When the new options appear, set ‘Response Page Path’ to ‘/index.html’ and the ‘HTTP Response Code’ to ‘200: OK’, then save your changes. It should look like this:

Now, head back over to your S3 bucket and open it up. Inside your bucket select the ‘index.html’ file and on the ‘Object Overview’ panel click on the ‘Object URL’ link.

This should fire up your new static website, which will then kickoff the Unity application! Check out my example: https://lks-survive.s3.amazonaws.com/index.html

I hope this tutorial was able to help you get something roughly uploaded and hosted online. Remember that with AWS S3 there are some costs incurred to host your application online. My total costs are under $10 a month. If you have any feedback or suggestions to improve this tutorial please let me know!

For more Unity and AWS tips, keep up with me by subscribing!

How To Write a Valuable Commit Message

Fixed.

Fixed.

Fixed.

The best commit message out there. The one that tells you absolutely nothing about anything.



Commit messages are a funny thing. They’re not very valuable when you submit them, yet when a bug pops up in production or you’re building a release, knowing the right place to look (especially if it’s not your code change) can save hours.

The commit message is one of those things you do so that you can reap the benefits when needed. Sort of like locking your door at night before you go to bed. You don’t expect to need those locks, but when you do, you’re grateful you had them in place and locked.

So what can make a great commit message? First, I think it’s better to ask: what makes a great commit? This question can actually be trickier to answer, but I think answering it helps us write a better commit message.

A good commit can best be seen as one of many small commits. The reason is that when you commit often, you end up with a best-of-both-worlds scenario.

You see, some would prefer one giant chunk of completed code that completes a feature. This lets you see the entire solution together as one, and it can be difficult to review or push changes to a main branch one-by-one. The good news is that most source control solutions now offer a way to squash (combine) all of the new commits on a branch before merging them into main, making it incredibly easy to accomplish this.

However, seeing individual commits lets you break things down more easily. You can get an idea of how the changes developed over time, and it can help you pinpoint issues. You can also write more frequent commit messages instead of one really long message on a single massive commit. Oh, and if you ever need to switch gears and work on that production bug ASAP, having your in-progress work already committed makes it easy to switch to your main branch without losing any work.

Now that we know we want to have lots of little commits, let’s talk about what to put in the commit message. There are a couple of key details to include in your commit message based on your branching strategy. If you have a good branching strategy your commit message can contain more or less information as needed but you typically want to try and cover the following:

Include the ID of the related work ticket.

Include the type of commit.

Include a brief description of the change.

These three items make up a tasty commit stew and can be viewed as: ID-Type: Message. Let’s look at a couple of examples.

23489-Feature: Added new items to the drop down on search page

23489-Bug: Fixed the drop down from not opening completely

89012-Refactor: Cleaned up old, duplicate code for invoicing

1203-Dependencies: Updated dependencies ahead of new features

Each of these commit messages provides enough detail for you to be able to tie back the code changes directly to the work item they belong to as well as what the change is so you can easily help identify potential issues.
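If you want to nudge your team toward this format, a quick sanity check is easy to write. Here’s a minimal sketch in C# (the regex and the list of allowed types are just examples drawn from the messages above; adjust them to your own conventions):

```csharp
using System.Text.RegularExpressions;

public static class CommitMessageChecker
{
    // Expected shape: <ticket id>-<type>: <description>
    // The allowed types here are just the ones from the examples above.
    static readonly Regex Format =
        new Regex(@"^\d+-(Feature|Bug|Refactor|Dependencies): .+$");

    public static bool IsValid(string message) => Format.IsMatch(message);
}
```

From there, a server-side hook or CI step can simply reject any commit whose message fails IsValid.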

When pushing these changes to a main branch you then have two options, you can either hang on to the individual commits (which can be useful for future debugging) or you can squash them all together and push the changes as a whole feature to the main branch (which will give you a cleaner history for the code base).

At the end of the day, there’s no one right way to write a commit message. There are, however, wrong ways… The key is to do yourself and your fellow developers a favor and make it as easy as possible to identify what the code changes are.


How to Write Unit Tests in .NET

Unit testing is the idea of writing a separate set of code to automatically execute your production code and verify the results. It’s called a ‘unit’ test because the idea is that you are only testing a single ‘unit’ of code and not the entire application. Writing Unit Tests is often seen in Test Driven Development but can and should be used in any development environment.

Unit tests give your application increased testing coverage with very little overhead. Most unit tests execute extremely fast, which means you can write hundreds of them and run them all relatively quickly. In environments with multiple developers, unit tests provide a sanity check for other developers making changes to existing code. But how do we actually go about writing a unit test?

In .NET there are a couple of ways we can do this. In Test Driven Development, you write your tests first. That sounds backwards, but it’s really about teaching your mind to think a certain way before you start writing any code, because to write unit tests you need highly decoupled code with few dependencies. When you do have dependencies, you’ll want to use Dependency Injection (which we can cover at another time).
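To give a quick taste of that idea now (all of the names here are made up for illustration): instead of a class constructing its own dependencies, it accepts them from the outside, which is exactly what lets a unit test hand in a fake.

```csharp
using System;

public interface IClock
{
    DateTime Now { get; }
}

// The real implementation used in production.
public class SystemClock : IClock
{
    public DateTime Now => DateTime.Now;
}

public class ReportService
{
    private readonly IClock _clock;

    // The dependency is handed in, so a test can pass a fake IClock
    // instead of relying on the actual system time.
    public ReportService(IClock clock) => _clock = clock;

    public string Stamp() => $"Report generated {_clock.Now:yyyy-MM-dd}";
}
```

In production you’d construct `new ReportService(new SystemClock())`; in a test, you substitute a clock that always returns a fixed date and assert on the exact output.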

The .NET stack provides a built-in testing harness called MSTest. It gets the job done but doesn’t come with all the bells and whistles. I personally prefer xUnit, which can be downloaded as a NuGet package. There is also NUnit, which is very similar, but I prefer xUnit because each execution of a test is containerized, whereas in NUnit all tests are run in a single container.

So once we’ve installed xUnit we can start writing our first tests. The first thing to do is to create a new project in your solution. We can do this by opening up our project in Visual Studio and then right-clicking on our solution, choosing Add, and then new project. From the ‘Add a new project’ window we can search for ‘xUnit Test Project‘ and add that. I simply name the project ‘Test’ and click create.

By default a new class is created which contains your new test class. You should also see the ‘Test Explorer’ window in Visual Studio on the left-hand side. If you don’t, go to the ‘View’ menu and select it. This menu contains all of your tests that you write and allows you to run them all or run them individually. You can also kick off a single test to debug it.

Now the fun part, writing a test! Let’s keep it simple for starters and look at an example test:

[Fact]
public void ItCanAddTwoNumbers()
{
   var result = AddTwoNumbers(1, 4);
   Assert.Equal(5, result);
}

So this test is doing a couple of things. By defining [Fact] we are saying this function is a test function and not some other helper function. I try to name my test functions based around what the application is trying to do like, ‘ItCanAddTwoNumbers’ but that is completely up to you.

Within the test function we can then call the function we want to test, which in this case is ‘AddTwoNumbers(int num1, int num2)’. Simply calling this function and making sure the application doesn’t crash or throw an error is already a little bit of test coverage, which is great, but we can go further. We can not only make sure it doesn’t error, we can make sure we get the right results back.
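For the test to compile there has to be a function under test, of course. It could be as simple as this (the class wrapper is illustrative; put the method wherever your tests can reach it):

```csharp
public static class Calculator
{
    // The function under test: adds two integers together.
    public static int AddTwoNumbers(int num1, int num2)
    {
        return num1 + num2;
    }
}
```
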

We can do this using ‘Assert‘ which gives us some different options for verifying the results are correct. In this case, our test will check to make sure the ‘result’ variable does equal 5. If it does, our test will pass with green colors. If not, our test will fail and show red in our Test Explorer. This is great when you already have some test written, you make some code changes, and then re-run all of your tests to make sure everything is still working correctly.

One last tip, instead of using [Fact] we can use [Theory] to allow us to pass in multiple test values quickly like this:

[Theory]
[InlineData(2)]
[InlineData(5)]
[InlineData(7)]
public void ItCanAddOneToANumber(int number)
{
    var results = AddOneToNumber(number);
    Assert.Equal(number + 1, results);
}

Hopefully this gives you a brief introduction into how to write Unit Tests in .NET using xUnit. Unit testing will save you hours of debugging and troubleshooting time when that sneaky bug shows up that you can’t quite track down. Even if the bug is not in your unit tests, it can help you track down the issue faster because you’ll know it’s not in any of your code that you have unit tests for.

Always test and happy coding!

How to Intercept WebMethods in .NET

Legacy applications often suffer from tech debt. It’s a natural part of the development life cycle and depending on the size, scope, and team, tech debt can range from a few lines, to most of the project.

In a perfect world though, tech debt wouldn’t exist, but perfect is the enemy of progress, and so progress is made and tech debt is built. As tech debt mounts it can become increasingly difficult to integrate additional features and functionality. A good refactor can always help, but what about cases where you need to add in things like logging or metric collection?

Previously you may have been stuck with having to sprinkle code throughout the application to collect the logging that was needed or you may have had to try and hunt down each place a specific line of code was called or used (and it’s various implementations). This can be difficult and unreliable. It also means that any time you want to make a change you need to go track down each of those little sprinkles and change them.

Enter PostSharp:

PostSharp is a 3rd party library that you can include in your project and set up through NuGet. It allows you to apply Attributes to your functions that will then intercept your function, passing the original invocation through to your code as a parameter. In other words, you can add a custom Attribute to any function and then execute your new code before or after the execution of the original function. Let’s look at some examples:

After I’ve installed the Nuget package the first thing I want to do is create a new class that will handle my needs. In this example we’ll say we want to use some sort of metric collection functionality and to inspect the results of the function.

[PSerializable]
public class CollectMetric : MethodInterceptionAspect
{

}

Our new CollectMetric.cs class requires that we inherit from MethodInterceptionAspect and that we apply the PSerializable Attribute. Both are PostSharp requirements. Next we can write our actual implementation. Some of this won’t be real code but you’ll get the idea.

[PSerializable]
public class CollectMetric : MethodInterceptionAspect
{
     public CollectMetric(){}//constructor

     public override void OnInvoke(MethodInterceptionArgs args)
     {
          //more code to come
     }
}
So now that we have our constructor and the OnInvoke() function, we can actually wire things up, start to debug, and see how this works. We do this by going to any function that we want to intercept and adding a [CollectMetric] Attribute, like this:

[CollectMetric]
public int MyOriginalCode(string inputs)
{
     //stuff
}

If we were to debug MyOriginalCode() we would see that before that function even gets called our OnInvoke() function will be called. Now all we have to do is decide what we want to do in our OnInvoke() function and then when to call the original MyOriginalCode(). It might look like this:

public override void OnInvoke(MethodInterceptionArgs args)
{
     args.Proceed(); //this calls MyOriginalCode() function like normal
     var value = args.ReturnValue; //output from MyOriginalCode()

     SaveMetricToDatabase(value); //some other thing we wanted to do
}

So by calling args.Proceed() we are really just calling our old original code like normal, MyOriginalCode(), and we can even capture the output of that function with args.ReturnValue. From there we can do whatever we want. Add logging, inspect the results of the function, whatever new things we want to do.
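For example, if the metric we care about is simply how long the original call takes, OnInvoke() could wrap args.Proceed() in a Stopwatch. (SaveMetricToDatabase remains our imaginary helper, here imagined as taking a name and a duration.)

```csharp
public override void OnInvoke(MethodInterceptionArgs args)
{
     var stopwatch = System.Diagnostics.Stopwatch.StartNew();
     args.Proceed(); // run the intercepted function as normal
     stopwatch.Stop();

     // args.Method.Name tells us which function was intercepted,
     // so one aspect can time every method it's applied to.
     SaveMetricToDatabase(args.Method.Name, stopwatch.ElapsedMilliseconds);
}
```

Because the aspect is applied per-Attribute, the same few lines time every function you decorate with [CollectMetric], with no edits to the functions themselves.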

The big advantages we gain from this solution are that we no longer have to worry about creating bugs in existing code by modifying it, and that we can encapsulate all of our changes in a single place and write unit tests around them. This is a far safer solution than going around and sprinkling in lines of code throughout the application.

If you get a chance to try the solution out let me know how your implementation went! I’d love to get some feedback on what worked and didn’t work for you and how I might improve my own implementation. If you attempt this project and get stuck, feel free to reach out to me by leaving a comment!