Tag Archives: AWS

How to Host Your Unity Game in AWS

One of the most difficult things to accomplish when creating new projects is making the project accessible. Typically, if you are creating a mobile application or even a web application, there is significant work to get that application deployed.

In this tutorial we’ll walk through how to get a basic Unity 2D project deployed to the web so that you can start collecting feedback.



Before we start, let's set the bar. In no way do I claim to be a Unity or AWS expert. There are most likely pieces of this tutorial that are incomplete. If you find an error or notice something that's incorrect, please let me know so that I can update it for everyone else!

Now let’s get nerdy!

So, first things first. If you haven't already, create a Unity 2D project in Unity. If you're not familiar with Unity, take some time to get up to speed at https://unity.com/. Once you feel a little more comfortable and are able to create a simple 2D project, head back over here.


Within Unity, head up to the menu bar and let's create a build. Select 'File > Build Settings' and under 'Platform' select 'WebGL'. If the option isn't available you may need to open up Unity Hub and install the WebGL Build Support module.

Before you build your project as WebGL, be sure to check ‘Development Build’ and that you have your Scene selected to be included in the build. Then go ahead and click the ‘Build’ button and select where you want the application to be built.

I'm not sure the 'Development Build' option needs to be checked every time, but when I first attempted this process I couldn't get the application working. After some research, I found a bug and/or workflow issue in Unity that made creating at least one development build necessary.

Now that your build is completed, the Unity side of things is done. You can test your build just to be sure everything is working by clicking on the ‘Build and Run’ button in the build settings window. I’d recommend making sure everything is kosher before going on.


Now let's head over to AWS. If you don't already have an AWS account set up, take the time to do that at https://aws.amazon.com/console/. Once you're set up, go ahead and log in to the AWS Console.

What we’re going to do is deploy our Unity project to an S3 Bucket (which is just an online folder) and host it as a static website.

Within the AWS Console, either search under Services for ‘S3‘ or find it in the list.

Once you're in S3 you'll want to select the orange 'Create Bucket' button on the far right. Go ahead and give your bucket a name and then scroll down to 'Bucket settings for Block Public Access'. Uncheck each of these options and then scroll to the bottom of the screen and select 'Create Bucket'.
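If you'd rather script this part than click through the console, here's a rough boto3 (Python) sketch of the same steps. The bucket name and region are placeholders, so swap in your own:

import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # use your own region

# create the bucket (outside us-east-1 you also need
# CreateBucketConfiguration={"LocationConstraint": "<your-region>"})
s3.create_bucket(Bucket="your-bucket-name")

# this mirrors unchecking every 'Block Public Access' option in the console
s3.put_public_access_block(
    Bucket="your-bucket-name",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": False,
        "IgnorePublicAcls": False,
        "BlockPublicPolicy": False,
        "RestrictPublicBuckets": False,
    },
)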


You should be returned to the main screen and now see your bucket. Go ahead and select it. You should see an 'Upload' button on the far right. Select that button and upload your Unity build. When you upload, select 'Add Files' to add the 'index.html' file and then select 'Add Folder' for each of the 'Build' and 'TemplateData' folders. Once all items are added, go ahead and upload them.
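The console upload works fine, but if you'd rather script it, a boto3 sketch like this would push the whole WebGL build folder up with sensible content types (the 'WebGLBuild' path and bucket name are placeholders for your own):

import mimetypes
import os

import boto3

s3 = boto3.client("s3")
build_dir = "WebGLBuild"      # wherever you told Unity to put the build
bucket = "your-bucket-name"

for root, _, files in os.walk(build_dir):
    for name in files:
        local_path = os.path.join(root, name)
        # preserve the folder structure (index.html, Build/, TemplateData/) in the bucket
        key = os.path.relpath(local_path, build_dir).replace(os.sep, "/")
        content_type = mimetypes.guess_type(local_path)[0] or "application/octet-stream"
        s3.upload_file(local_path, bucket, key, ExtraArgs={"ContentType": content_type})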

Back within your bucket, select 'Properties' and scroll to the bottom of the screen to the 'Static website hosting' section.

Select the 'Edit' button off to the right, enable static website hosting, and set the index document to 'index.html'.
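If you're scripting things, the same setting can be applied with boto3 (again, the bucket name is a placeholder):

s3.put_bucket_website(
    Bucket="your-bucket-name",
    WebsiteConfiguration={"IndexDocument": {"Suffix": "index.html"}},
)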


Save those changes and head back to your bucket. Now select 'Permissions' just next to the 'Properties' tab. Under permissions find the 'Bucket Policy' and select 'Edit'. Paste in the following policy and replace the 'Resource' with the name of your bucket; there are two spots to do this and I've named them 'YourBucketName' below. Note that the second statement references my CloudFront Origin Access Identity, so swap in your own if you use one:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YourBucketName/*"
        },
        {
            "Sid": "2",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E1L55VHCJEWRMZ"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::YourBucketName/*"
        }
    ]
}
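If you're going the scripted route instead, the same policy can be applied with boto3. Here I'm assuming you've saved the JSON above to a file called 'bucket_policy.json':

import boto3

s3 = boto3.client("s3")

with open("bucket_policy.json") as f:   # the policy shown above
    policy = f.read()

s3.put_bucket_policy(Bucket="your-bucket-name", Policy=policy)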

Now we are incredibly close to being done. Back in AWS, under Services, let’s now search for ‘CloudFront‘ and select it. On the CloudFront menu we’ll select ‘Create Distribution’ and then we’ll select the ‘Get Started’ button under the ‘Web’ option.

There's not too much we need to change here. For the very first setting, 'Origin Domain Name', you can click in the field and a drop-down will appear for you to select your bucket. No other option needs to be changed and you can scroll on down to the bottom and select 'Create Distribution' to complete the setup.


Once the distribution is created, go ahead and select it to open it up and we’re going to make one final change before wrapping things up.

Under the 'Error Pages' tab go ahead and select 'Create Custom Error Response'. On the new screen, set the 'HTTP Error Code' to '403: Forbidden' and under the 'Customize Error Response' option select 'Yes'. When the new options appear, set 'Response Page Path' to '/index.html' and the 'HTTP Response Code' to '200: OK', then save your changes.

Now, head back over to your S3 bucket and open it up. Inside your bucket select the 'index.html' file and on the 'Object Overview' panel click on the 'Object URL' link.

This should fire up your new static website, which will then kickoff the Unity application! Check out my example: https://lks-survive.s3.amazonaws.com/index.html


I hope this tutorial was able to help you get something roughly uploaded and hosted online. Remember that with AWS S3 there are some costs incurred to host your application online. My total costs are under $10 a month. If you have any feedback or suggestions to improve this tutorial please let me know!

For more Unity and AWS tips, keep up with me by subscribing.

Where to Store Website Credentials

I have searched the internet far and wide trying to find the best answer to this question. Where is the best place to store website credentials and API keys?

There are a lot of options out there but none of them have really given me the level of satisfaction I'm looking for. I could store credentials like usernames and passwords in a web.config, but then I would have to consider how I store that in source control as well as the risks of storing it in plain text.

Sure I could encrypt that web.config before deployments but I still have the problem of dealing with source control. That just isn’t going to work.

I could go about storing the credentials and API keys in a database. This is a slightly improved solution but then again… how do I store a database connection string and credentials to that database outside of the database to allow connection to it?

photo cred: Campaign Creators

I could use Windows Authentication but what if my stack changes or doesn't support Windows Auth? I also then still need to consider encrypting my database and the credentials or API keys so they aren't stored in plain text. And then if I wanted to check one of those values, I'd have to decrypt it as well… omg what a pain in the ass this is…

The real truth is there isn’t an easy or best way to store website credentials and API keys… until AWS Secrets Manager came along.

AWS Secrets Manager offers an offsite option for storing all of your credentials, database connection strings, and API keys. It is quite literally a one-stop shop for all your secret storage needs.


The way it works is by allowing you to use your AWS Access Key ID and AWS Secret Access Key (basically your AWS authentication) to create an AWS Secrets Manager client. Using this client you can then request credentials from AWS to be served back to your application. It's actually so easy to implement that I managed to do it in a handful of lines of code:

// assumes the AWSSDK.SecretsManager NuGet package (using Amazon; using Amazon.SecretsManager;)
var client = new AmazonSecretsManagerClient(RegionEndpoint.GetBySystemName(region));

One of the best things is that AWS allows you to handle authentication by creating environment variables for your AWS client (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_REGION) and letting your code base use those to complete the authentication to AWS. This makes it very easy to move your code base through various test environments without having to worry about changes in each environment.

AWS also offers a cache solution for your Secrets Manager client. You see, each time you connect to AWS to retrieve a secret like credentials there is a small cost associated (we're talking pennies here). This isn't much until you consider having to grab these credentials on every web request, which can easily climb into the thousands and skyrocket costs.


Thankfully AWS offers the cache solution which lets you pull the secrets once and then cache them on the server side. This avoids having to poll AWS for the secret every time it's needed and greatly reduces cost. Thanks AWS! Oh, and it's also very easy to implement:

// SecretsManagerCache comes from the AWSSDK.SecretsManager.Caching package
var cache = new SecretsManagerCache(client);

And when you’re ready to actually pull the secret from AWS:

var secret = await cache.GetSecretString(secretId);

AWS Secrets Manager also offers a handful of other sweet benefits. You’re able to automate password rotation for AWS services like RDS, Redshift, and DocumentDB and of course you get the security that comes with using AWS. There are also some fantastic Auditing features, my favorite being that when a password is deleted it is stored for 7 additional days and managing parties are notified of the delete. No accidents!

All in all, AWS Secrets Manager is a fantastic solution to the "how do I manage credentials for my applications?" problem. The cost is very minimal when used in conjunction with the caching feature and the overall complexity to implement the solution is low. If you're looking for a next-gen solution to managing sensitive application data I highly recommend AWS Secrets Manager.

What is an SLO?

It means that you should work carefully and SLOwly…

Nah, I’m just kidding that’s not what it means at all. It actually stands for Service Level Objective, but what does that even mean? Is it like an SLA? What’s an SLA? Is that like an SLI? What the hell is an SLI…?

Don’t sweat any of it as this is the first part in an upcoming mini-series on what the hell all of the SL(insert letter)’s really are. Let’s dive in!



An SLO represents a level of service that a business intends to meet for its customers. In particular, it is an objective, a goal, a benchmark. It is the target the company has set to aim for, and it is the mark customers and clients will come to expect. So what goes into an SLO?

Defining an SLO can be done in a number of ways. Some of the easier ways to define and set an SLO relate directly to technology. For example, a company such as AWS may set an objective of having their services up and running 99.99% of the time. That is their objective and goal. It is what they work towards maintaining at all times.

photo cred: Christian Wiediger

If AWS has an outage, let's say the power goes out somewhere and their system goes down for a couple of hours, they would no longer be at their objective of being up 99.99% of the time. This would let the AWS team know they need to create and invest in ways to mitigate such outages, like routing traffic to a different data center.

AWS just so happens to provide an SLA (Service Level Agreement) which states some of their SLOs; you can view it here: Amazon Compute Service Level Agreement. An SLA is merely the agreement between AWS and their customers so that if AWS is not meeting their SLO they can provide credit in return for the lapse in the service they agreed to meet. Think of it as a way of saying, "Hey, we're sorry we didn't do what we said we were going to do. Here's a refund."

Obviously missing their SLOs and having to offer up credits is not something AWS wants to do, which is why you'll notice there is rarely a service outage for AWS. It does, however, let customers and clients know that AWS is committed to providing top-tier service. I wonder if that has anything to do with why they are so widely used… 😉


If you take a peek at the AWS SLA link above you can see that they don't actually target having their systems up and running 100% of the time. Why is that? The reality is that 100% is not realistic.

Consider the following example: in a single day there are 1,440 minutes, and let's say there is one tiny, minor, little hiccup in the internet. Let's say it's so tiny that it doesn't even take up a full second. Instead it takes up milliseconds… like… 0.0144 seconds. That little blip would cause AWS to miss their 100% mark. Perfection is the enemy of progress. Remember that.

Instead, most services aim for something more attainable. In some cases it can be 99.999% and in other cases it can be 80% (think of an internal service that provides customer data back to AWS; it's not a critical system, so if it fails 20% of the time it's not the end of the world). The point is that an objective is set and the company strives to achieve it.
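If you want to put real numbers to those targets, the math is simple enough to sketch out. This is just back-of-the-napkin arithmetic, not how AWS actually calculates SLA credits:

minutes_per_month = 30 * 24 * 60   # 43,200 minutes in a 30-day month

for target in (0.99999, 0.9999, 0.999, 0.80):
    allowed_downtime = minutes_per_month * (1 - target)
    print(f"{target:.3%} uptime allows ~{allowed_downtime:,.1f} minutes of downtime per month")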


Now I know we dove in a little deep there and the turns got twisty. That tends to happen when you start talking SL(insert letter here)’s because there is no hard and fast right way, BUT there are some best practices and I’ll continue this series and dive in a little deeper each time.

Hopefully you learned a little bit about what an SLO is and how it relates to the service a company is aiming to achieve for its customers. I recommend taking a look at another SLA from Google to help paint the picture (remember, the SLA is the agreement between company and customer; the SLO is the actual target the company is aiming for, the 99.99%): Google Compute Engine Service Level Agreement.

Read the next article in the series: What is an SLI?


AWS Deep Racer: Part III

Finally!!!

Finally I feel like I’m making some progress with my racer. The first few times around I felt like I was just trying things and then sending my model through training with crap results. My little guy would constantly veer off course from the track for no real reason… it was frustrating to say the least…

It also didn’t help that AWS DeepRacer appeared to have a brief outage last week. I was unable to train or evaluate any of my models last Tuesday and Wednesday.

After that little blip though, the race was back on!

Mark-5:

At first I wanted to try and improve the speed of my model. After watching some of the top racers fly around the track I figured I needed to go fast or go home, Ricky Bobby style. I started by offering a bit more variety in rewards for various speeds:

    # fragment of my reward function -- 'speed' comes from params['speed'] (m/s)
    # and 'reward' is initialized earlier in the function
    if speed < 1:
        reward = 0
    elif speed >= 1 and speed < 2:
        reward += 1
    elif speed >= 2:
        reward += speed

This didn’t really seem to get me anywhere though. I couldn’t tell if it was actually any faster or not….

Part of the problem was the little guy kept driving off the course. I realized it didn’t really matter how fast I was going if I couldn’t keep the model on the track so…


Mark-6:

With this version I tried to focus more on keeping the model on the track. I added a small function to detect when the wheels went off the track and, if they did, to reduce the reward. I had meh results…
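The check itself is about as simple as it sounds. This isn't my exact Mark-6 function, but the idea looked roughly like this, using the standard 'all_wheels_on_track' flag DeepRacer passes in through params:

def reward_function(params):
    reward = 1.0
    if not params['all_wheels_on_track']:
        reward *= 0.1   # knock the reward way down when any wheel leaves the track
    return float(reward)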

The green line is the reward my model is getting, meaning he thinks he's doing a good job. The blue and red lines represent how well he's actually doing… fml…

Mark-7:

This time around I upped some of the reward values for staying towards the center line of the track and increased the penalty for going off track. The results were… sort of positive… I still wasn’t keeping him on track though and he would randomly drive off the course completely…

Mark-8:

This time I wanted to double down on keeping the model on track. It was still veering off course far too often to really make any progress or improvements. So along with the function to try and keep the model towards the center I added another function to reward the model based on his distance from the borders of the track.

    # track_width and distance_from_center come from the params passed to the reward function
    distance_from_border = 0.5 * track_width - distance_from_center

    if distance_from_border >= 0.05:
        reward *= 1.0   # keep the reward while we're safely inside the track
    else:
        reward = 1e-3   # next to no reward once we're hugging the border

And we started to see a little more progress…


Mark-9:

A few more tweaks on reward values and….

Mark-10:

Hell to the yes we’ve done it!

The biggest change in Mark-10 was that I really fleshed out the function for keeping distance from the border. It seems like the more “levels” you provide the model to be rewarded for, the better it can progressively learn what you want it to do.

What I mean by this is instead of say… giving the model a full 1 point reward for staying on the center of the track and then a 0 for everything else, you give the model a gradual range of options to earn rewards on. This sort of coaxes the model towards the behavior that you want.
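As a rough sketch of the idea (not my exact Mark-10 function), a "leveled" version of the border-distance reward might look something like this:

def reward_function(params):
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']
    distance_from_border = 0.5 * track_width - distance_from_center

    # graduated "levels" instead of an all-or-nothing reward
    if distance_from_border >= 0.3 * track_width:
        reward = 1.0      # comfortably near the middle
    elif distance_from_border >= 0.15 * track_width:
        reward = 0.5      # drifting towards the edge
    elif distance_from_border >= 0.05 * track_width:
        reward = 0.1      # getting uncomfortably close to the border
    else:
        reward = 1e-3     # basically off the track

    return float(reward)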

I’m currently ranked at 525 of 1135 racers for the October Qualifier and you can check out my latest qualifying video here:

AWS Deep Racer: Part II

The Race is on! Sort of…

So I've been diving deeper into AWS Deep Racer, pun not intended… and I've managed to work through a handful of tutorials. I've built 4 different models to race with so far.

My first model, Mark-1, currently holds the best lap time of my 4 models for the October race, sitting at position 737 of 791 total racers on the October 2020 track.

Mark-1:

The Mark-1 model used a very basic reward function that simply trained the model to attempt to follow the yellow dashed center line of the track. It's a default reward function that reinforces the model for staying towards the center of the track. It doesn't get you far… but it does get the job done. Total race time: 08:29.214


Mark-2:

With the Mark-2 model I tried to just build off of the first iteration. I added some additional logic to keep the model from zigzagging too much on the track as it worked towards finding the best center line. I also made a classic rookie mistake and introduced a bug into the reward function where, instead of incrementing the reward value when the model stayed on track, I was just applying a default… this model didn't even rank.

Mark-3:

With the Mark-3 model, still containing the defect from Mark-2, I played around with the hyperparameters of the reinforcement model. As part of the training system you can do more than just modify the reward function that trains the model. You can also adjust the batch size, number of epochs, learning rate, and a handful of other hyperparameters for training the model. Since I still had the bug in my reward function from the previous model, this model didn't rank either.

Mark-4:

On to Mark-4… I caught my defect.. ugh… and fixed that up. I also decided that at this point I was ready to move beyond the tutorials and really start adjusting the models. So the first thing I did was take a look at what the competition was doing… Here’s a look at the number 1 model currently sitting at the top of the October qualifier…


Well shit… I’ve got a ways to go to compete with this model. I mean this model is flying around the track. Mine looks like a fucking drunk wall-e….

So I’ve got some work to do but the current AWS Deep Racer app is down. As soon as it comes back I’m refocusing on speed first. I need to Lightning McQueen this bad boy if I’m going to stand a chance of even ranking in the top 100 for the October Qualifier.

Check back with me next week and we’ll see if I can get this little guy going!

AWS Deep Racer: Part I

So after my little experiment with Predicting Stock Prices with Python Machine Learning I came across the AWS Deep Racer League: a machine learning racing league built around the concept of a little single-camera car that uses Reinforcement Learning to navigate a race track. I've decided to sign up and compete in the upcoming October race and share what I learn along the way.

photo cred: aws

Let me give a little bit of background on how the Deep Racer works and the various pieces. There are two ways to train your little race car, virtually and physically. If you visit the Deep Racer site you can purchase a 1/18th scale race car that you can then build physical tracks for, train, and race on. There are also various models you can purchase with a couple of different input sensors so you can do things like teach the car to avoid obstacles or actually race the track against other little cars.

This little guy has 2 cameras for depth detection and a back-mounted sensor for detecting cars beside/behind it

The good news is you don’t actually have to buy one of these models to race or even compete. You can do all of the training and racing virtually via the AWS console. You also get enough resources to start building your first model and training it for free!

Now let's get into how this actually works. What AWS has done is built a system that does most of the heavy, complex machine learning for you. They offer you the servers and compute power to run the simulations and even give you video feedback on how your model is navigating the track. It becomes incredibly simple to get set up and running, and you can follow this guide to get started.


When you get set up you'll be asked to build a reward function. A reward function contains the programming logic you'll use to tell your little race car if it's doing its job correctly. The model starts out pretty dumb. It basically will do nothing or drive randomly: forward, backwards, zig-zags… until you give it some incentive to follow the track correctly.

This is where the reward function comes in. In the function you provide the metrics for how the model should operate and you reward it when it does the job correctly. For example, you might reward the model for following the center line of the race track.

On each iteration of the function you'll be handed some arguments on the model's current status. Then you'll do some fancy work to say, "okay, if the car is dead center reward it with 1 point, but if it's too far to the left or the right only give it half a point, and if it's off the track give it zero."
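In code, that plain-English version translates to something like this; a minimal sketch using the 'distance_from_center' and 'track_width' values DeepRacer hands you, with thresholds you'd tune yourself:

def reward_function(params):
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']

    if distance_from_center <= 0.1 * track_width:
        reward = 1.0    # dead center: full point
    elif distance_from_center <= 0.5 * track_width:
        reward = 0.5    # too far to the left or right: half a point
    else:
        reward = 0.0    # off the track: nothing

    return float(reward)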

The model tests and evaluates where the best value is based on the reward function

The model will then run the track going around, trying out different strategies and using that reward function to try and earn a higher reward. On each lap it will start to learn that staying in the middle of the track offers the highest, most valuable reward.

At first this seems pretty easy… until you start to realize a couple of things. The model truly does test out all types of various options which means it may very well go in reverse and still think it’s doing a good job because you’ve only rewarded it for staying on the center line. The model won’t even take into account speed in this example because you’re not offering a reward for it.


As you start to compete you'll find that other racers are getting even more complex, figuring out how to train the model to take the inside lane of a race track or how to block an upcoming model that is attempting to pass it.

On the surface, the concepts and tools are extremely easy to pick up and get started with, however the competition and depth of this race are incredible. I’m looking forward to building some complex models over the next couple of weeks and sharing my results. If you’re interested in a side hobby check out the AWS Deep Racer League and maybe we can race against each other!