Tag Archives: TechTuesday

AWS Deep Racer: Part II

The Race is on! Sort of…

So I’ve been diving deeper into AWS Deep Racer, pun not intended… and I’ve managed to work through a handful of tutorials. I’ve built 4 different models to race with so far.

My first model, Mark-1, currently holds the best lap time of my 4 models for the October race, sitting at position 737 of 791 total racers. The October 2020 track looks like this:

Mark-1:

The Mark-1 model used a very basic reward function that simply trained the model to attempt to follow the yellow dashed line down the center of the track. It’s a default reward function that reinforces the model for staying towards the center of the track. It doesn’t get you far… but it does get the job done. Total race time: 08:29.214
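
For reference, here’s roughly what that default centerline function looks like. Deep Racer reward functions are written in Python and receive a params dictionary on every step; this sketch is modeled on the sample AWS provides, so treat it as an approximation rather than my exact Mark-1 code:

def reward_function(params):
    # Inputs Deep Racer hands you on every step
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']

    # The reward shrinks as the car drifts away from the yellow center line
    max_distance = track_width / 2.0
    reward = max(1e-3, 1.0 - (distance_from_center / max_distance))

    return float(reward)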


Mark-2:

With the Mark-2 model I tried to build off of the first iteration. I added some additional logic to keep the model from zigzagging too much on the track as it worked towards finding the best center line. I also made a classic rookie mistake and introduced a bug into the reward function: instead of incrementing the reward value when the model stayed on track, I was just applying a default… this model didn’t even rank.
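
To illustrate the kind of defect I mean, here’s a reconstruction (not my actual Mark-2 code) where one careless assignment wipes out the whole reward calculation:

def reward_function(params):
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']

    # Centerline reward, same idea as Mark-1
    reward = max(1e-3, 1.0 - (distance_from_center / (track_width / 2.0)))

    # New for Mark-2: penalize sharp steering to stop the zigzag hunting
    if abs(params['steering_angle']) > 15.0:
        reward *= 0.8

    if params['all_wheels_on_track']:
        reward = 1.0    # the bug: overwrites everything with a flat default...
        # reward += 1.0 # ...when the intent was a bonus on top of the rest

    return float(reward)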

Mark-3:

With the Mark-3 model, still containing the defect from Mark-2, I played around with the hyperparameters of the reinforcement model. As part of the training system you can do more than just modify the reward function that trains the model. You can also adjust the batch size, number of epochs, learning rate, and a handful of other hyperparameters. Since I still had the bug in my reward function from the previous model, this model didn’t rank either.

Mark-4:

On to Mark-4… I caught my defect… ugh… and fixed it up. I also decided that at this point I was ready to move beyond the tutorials and really start adjusting the models. So the first thing I did was take a look at what the competition was doing… Here’s a look at the number 1 model currently sitting at the top of the October qualifier…


Well shit… I’ve got a ways to go to compete with this model. I mean this model is flying around the track. Mine looks like a fucking drunk Wall-E…

So I’ve got some work to do, but the current AWS Deep Racer app is down. As soon as it comes back, I’m refocusing on speed first. I need to Lightning McQueen this bad boy if I’m going to stand a chance of even ranking in the top 100 for the October Qualifier.

Check back with me next week and we’ll see if I can get this little guy going!

AWS Deep Racer: Part I

So after my little experiment with Predicting Stock Prices with Python Machine Learning, I came across the AWS Deep Racer League: a machine learning racing league built around the concept of a little single-camera car that uses Reinforcement Learning to navigate a race track. I’ve decided to sign up and compete in the upcoming October race and share what I learn along the way.

photo cred: aws

Let me give a little bit of background on how the Deep Racer works and its various pieces. There are two ways to train your little race car: virtually and physically. If you visit the Deep Racer site you can purchase a 1/18th scale race car that you can then build physical tracks for, train, and race on. There are also various models you can purchase with a couple of different input sensors, so you can do things like teach the car to avoid obstacles or actually race the track against other little cars.

This little guy has 2 cameras for depth detection and a back-mounted sensor for detecting cars beside/behind it

The good news is you don’t actually have to buy one of these models to race or even compete. You can do all of the training and racing virtually via the AWS console. You also get enough resources to start building your first model and training it for free!

Now let’s get into how this actually works. What AWS has done is build a system that does most of the heavy, complex machine learning for you. They offer the servers and compute power to run the simulations and even give you video feedback on how your model is navigating the track. It becomes incredibly simple to get set up and running, and you can follow this guide to get started.


When you get set up you’ll be asked to build a reward function. A reward function contains the programming logic you’ll use to tell your little race car if it’s doing its job correctly. The model starts out pretty dumb. It will basically do nothing or drive randomly: forward, backward, zigzags… until you give it some incentive to follow the track correctly.

This is where the reward function comes in. In the function you provide the metrics for how the model should operate and you reward it when it does the job correctly. For example, you might reward the model for following the center line of the race track.

On each iteration of the function you’ll be handed some arguments describing the model’s current status. Then you’ll do some fancy work to say, “okay, if the car is dead center reward it with 1 point, but if it’s too far to the left or the right only give it a half point, and if it’s off the track give it zero.”
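
That sentence translates almost line-for-line into code. Here’s a hedged sketch using Deep Racer’s documented ‘track_width’ and ‘distance_from_center’ inputs (the exact thresholds are my own picks):

def reward_function(params):
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']

    if distance_from_center <= 0.1 * track_width:
        return 1.0    # dead center: full point
    elif distance_from_center <= 0.5 * track_width:
        return 0.5    # drifted left or right: half point
    else:
        return 0.0    # off the track: nothing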

The model tests and evaluates where the best value is based on the reward function

The model will then run laps around the track, trying out different strategies and using that reward function to try and earn a higher reward. On each lap it will start to learn that staying in the middle of the track offers the highest, most valuable reward.

At first this seems pretty easy… until you start to realize a couple of things. The model truly does test out all kinds of options, which means it may very well drive in reverse and still think it’s doing a good job, because you’ve only rewarded it for staying on the center line. The model won’t take speed into account in this example either, because you’re not offering a reward for it.
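
One hedged way to close both gaps is to fold the documented ‘speed’ and ‘progress’ inputs into the reward, so that idling (or reversing) on the center line stops being a winning strategy. The weights here are purely illustrative:

def reward_function(params):
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']

    # Base reward for hugging the center line
    reward = max(1e-3, 1.0 - (distance_from_center / (track_width / 2.0)))

    # Reward forward speed (meters per second) so faster laps score higher
    reward += 0.5 * params['speed']

    # 'progress' is the percentage of the track completed; it only grows
    # when the car actually moves forward around the circuit
    reward += params['progress'] / 100.0

    return float(reward)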


As you start to compete you’ll find that other racers are getting even more sophisticated, finding ways to train their models to take the inside line of the race track or to block an upcoming car that’s attempting to pass.

On the surface, the concepts and tools are extremely easy to pick up and get started with; however, the competition and depth of this race are incredible. I’m looking forward to building some complex models over the next couple of weeks and sharing my results. If you’re interested in a side hobby, check out the AWS Deep Racer League and maybe we can race against each other!

GIT it?

Well did you get it? The Dad jokes are real…

Today, for Tech Tuesday, I want to share some of my most commonly used GIT commands and workflows. If you’re a developer and you’re not using GIT, I highly recommend you start learning. GIT usage is easily becoming a must-have in a developer’s toolbox. I won’t cover the basics in this post, but if there’s interest, reach out to me and I could be convinced to write a Beginner’s Guide.

Let’s start by breaking down the workflow. Let’s assume I’ve already performed a ‘git clone’ and downloaded the code repository to my machine. Before doing any real work, my first call after a clone is almost always ‘git checkout -b feature-id-name’ to break out into a new branch. This gets me into my own workspace and allows me to move forward without worrying about any other developers’ work or changes.


Now let’s say I’ve made a couple of changes to a couple of files and I want to add them to my commit history. Usually at this point I could perform a simple ‘git add .’ or ‘git add fileName’, which would stage either all of my changes or a single file’s worth of changes.

Often, though, I find that this doesn’t give me as detailed a breakdown of my commit history. I may have multiple changes, in various contexts, over the course of an hour. If I represent those as separate commits instead of one big commit, not only do I have a clean, readable commit history for my fellow developers, I also have the ability to cherry-pick specific commits or even skip commits that have a bug or mistake in them.

To that end, I like to use ‘git add -p’ for staging my commits. The ‘-p’ argument stands for ‘--patch’, which really just means I can break my changes down into chunks. Git will offer you parts of each file and let you decide if you want to stage them. I find this perfect for when I have a bug fix over here, a feature over there, a couple of tests mixed in… I can now break these down and say, “okay, this chunk of code goes with this feature and this test but doesn’t involve this bug fix.” You also get some nice options on how you want to stage the changes: you can use y to stage the chunk, n to ignore the chunk, s to split it into smaller chunks, e to manually edit the chunk, and q to exit.
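
In practice a patch session looks something like this (git calls these chunks “hunks,” and the exact option list varies with context; the file name here is hypothetical):

$ git add -p
diff --git a/parser.js b/parser.js
...
Stage this hunk [y,n,q,a,d,s,e,?]? y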

photo cred: Josh Carter

Now, let’s say I wasn’t being a good little dev and I accidentally staged, and then even worse, committed some changes that I didn’t mean to. Well, it’s really not so bad to fix up. A little ‘git reset --soft HEAD~1’ will undo that last commit and get you back to having all of your changes staged. Now you can add/remove any other changes you need and set things back straight.

But let’s say you did a really bad thing… let’s say not only did you commit a password to your repo, but then you pushed that change up to the remote… shame… shame… shame… but don’t sweat it. We can fix that too. We can use a little reset magic like we did above, undo our last commit, remove the password, and then, when we’re ready, push our changes back up with ‘git push -f’ to force those changes up. What this will do is rewrite the remote history with what we fixed locally. I can’t say how I know how to do this… I’ve never pushed a password to a repo…
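
Putting the whole rescue together, the sequence looks roughly like this (the file name is hypothetical, and since the secret already hit the remote, rotating that password is still the safe follow-up):

git reset --soft HEAD~1         # undo the bad commit; the changes stay staged
git restore --staged config.js  # unstage the file containing the password
# ...edit config.js to remove the password...
git add -p                      # re-stage only what should be committed
git commit -m "Add config"
git push -f                     # force the remote to accept the rewritten history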

Before I force push anything, though, I run a ‘git log’. This is one of my favorite commands because it allows me to see every commit in my local repo and match that up with what’s in my remote repo. When I’m building releases this is extremely valuable for making sure I’ve captured every commit for a feature and nothing was missed or anything extra added. Use ‘git log’ as your sanity check.


Speaking of building releases, ‘git rebase’ can be one of your best friends… or one of your worst enemies… A rebase is great for taking a feature branch and updating it so that when you look at the history of commits, it looks as if the feature was just worked on today. The rebase will take all of the other changes from other developers, put them at the bottom, and then put your feature’s commits on top of them. This can make merging features into a release branch or a master branch easy peasy. However, if that branch is extremely old and there is an enormous number of conflicts in the rebase, you may be better off performing your own merge. Don’t be afraid to call ‘git rebase --abort’ and look for a cleaner solution.
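
A typical freshen-up of a stale feature branch looks something like this (branch names are hypothetical):

git checkout feature-id-name
git rebase master       # replay the feature's commits on top of the latest master
# resolve any conflicts, then: git rebase --continue
# if the conflicts get out of hand, back out cleanly:
git rebase --abort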

When a rebase fails, I typically look to either perform a merge OR call ‘git cherry-pick commitHash’. Cherry-picking allows me to pull a single commit out of another branch and apply it to my current one. This can be very handy for grabbing a change here and there and pulling it into a release. After all, who doesn’t like cherries?

photo cred: T. Q.

Typically when I’m building releases I’m pulling changes from various branches other developers have worked on. Sometimes I’ll jump into their branch and perform a rebase to clean things up and make the merge into master run smoothly. But to make sure I’ve got the latest on which branches are out there with what commits, I’ll run a simple ‘git fetch’, which downloads all of the latest branches and their commits.

So, we covered quite a bit, so let’s recap what we’ve discussed. We can run ‘git checkout -b feature-id-name’ to start a new branch and then use ‘git add -p’ to stage and commit just the changes we want. If we make a mistake we can ‘git reset --soft HEAD~1’ to undo our last commit, and if we really messed things up we can ‘git push -f’ to force our remote branch to accept our new changes.

To make sure everything is just how we want it, we can use ‘git log’ to review our commit history. If we need to update an old, stale branch we can call ‘git rebase’, and if things get hairy we can back out with ‘git rebase --abort’. If our rebase is a no-go we might look to ‘git cherry-pick commitHash’ and grab exactly what we want. Finally, we can use ‘git fetch’ to make sure we’ve got the latest and greatest on everything in the repo.

These are my most commonly used git commands, but I’d love to learn more. If you’ve got some commands you’re using often that aren’t on this list, let me know so I can include them. If you’re using any of these commands in a different way, I’m interested to hear how!

Taking your Objects for a Swim

There are times while we’re coding that we need to create a number of objects to handle various workloads. This could be a Factory producing a number of enemies for a game or a SaaS application firing up multiple instances of a worker process.



Whenever we start dealing with a number of the same object like this, we have to start considering object management, the lifetime of the objects, and garbage collection. If we don’t, we’ll have to deal with out-of-memory exceptions, rogue processes, and bugs, bugs, bugs…

photo cred: Wynand Uys

Thankfully there’s a solid way to handle all of these objects, not only in a clean way, but in an efficient way. That way is called Object Pooling.

Object Pooling is the concept of taking each of our created objects and placing them into a group or “pool”. Then when our game or application needs to use one of these objects, we provide a way to check that pool, find an inactive object, activate it, and return it to the caller.


If we check our pool and find there are no free objects available because they’re all already in use somewhere else, then we can provide the functionality for our pool to create us a brand new object, add it to the pool, and return it to our caller.

When our callers are done with their objects they can shut themselves off and remain inactive in our pool until another caller needs to request the object. Whew… okay that got technical so let’s break it down a little bit and see how it looks in code:

//our "pool" of enemies
List<Enemy> enemies = new List<Enemy();

//our function to check our pool for an enemy that is inactive
public Enemy GetEnemy()
{
   foreach (var enemy in enemies)
   {
       if (!enemy.IsActive())
         return enemy;
   }
}

So in a very basic example we have our “pool” and a way to get inactive objects out of that pool. But what happens if there aren’t any objects in our pool to begin with? What happens if there are some objects in our pool but they’re all in use and there aren’t any inactive ones to be returned? Let’s see what else we can add to fix this.

//let's add some functionality to create a new enemy if our list has none 

//our function to check our pool for an enemy that is inactive
public Enemy GetEnemy()
{
   //loop over our pool and find an inactive enemy to return
   foreach (var enemy in enemies)
   {
       if (!enemy.IsActive())
         return enemy;
   }

   //if no enemy returned, create a new one and add it to the pool
   var enemy = new Enemy();
   enemies.Add(enemy);
   return enemy;
}

So what did we change here? Well, we gave our GetEnemy() function the ability to not only find an inactive object to return to the caller, but also to create a new enemy if there aren’t any available to return. This means that no matter what, when we call GetEnemy() we are guaranteed to get an available object back.

photo cred: Joe Calata

This is essentially the bread and butter of Object Pooling, but why is this better and what are the advantages?

  1. We get a single place to find and get an available object
  2. We’re not forcing our application to create and destroy objects over and over, eating up resources.
  3. We’re not allowing our objects to run free. They’re all contained and managed in our “pool” and if we wanted to, we could turn them all off.

Now, how do we know the best time to use Object Pooling? If you find you’re creating and destroying a number of the same object repeatedly over time, you should be using Object Pooling. The performance gains are significant, and the pattern can be implemented in just about any scenario.
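
To round things out, here’s a hedged sketch of the other half of the pattern: the Enemy itself flipping its active flag so it can rest in the pool until a caller needs it again. The Activate/Deactivate names are my own placeholders, not from any particular engine:

public class Enemy
{
   private bool active;

   public bool IsActive() { return active; }

   //called by the pool's consumer when the enemy enters play
   public void Activate() { active = true; }

   //called when the enemy is done; it stays in the pool, just idle
   public void Deactivate() { active = false; }
}

//typical usage: grab one from the pool, switch it on, use it
var enemy = GetEnemy();
enemy.Activate();

//...later, instead of destroying the enemy, return it to the pool
enemy.Deactivate();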