Tag Archives: Programming

AWS Deep Racer: Part I

So after my little experiment with Predicting Stock Prices with Python Machine Learning, I came across the AWS Deep Racer League: a machine learning racing league built around a little single-camera car that uses Reinforcement Learning to navigate a race track. I’ve decided to sign up and compete in the upcoming October race and share what I learn along the way.

photo cred: aws

Let me give a little bit of background on how the Deep Racer works and the various pieces. There are two ways to train your little race car, virtually and physically. If you visit the Deep Racer site you can purchase a 1/18th scale race car that you can then build physical tracks for, train, and race on. There are also various models you can purchase with a couple of different input sensors so you can do things like teach the car to avoid obstacles or actually race the track against other little cars.

This little guy has 2 cameras for depth detection and a back-mounted sensor for detecting cars beside/behind it

The good news is you don’t actually have to buy one of these models to race or even compete. You can do all of the training and racing virtually via the AWS console. You also get enough resources to start building your first model and training it for free!

Now let’s get into how this actually works. What AWS has done is build a system that does most of the heavy, complex machine learning for you. They offer you the servers and compute power to run the simulations and even give you video feedback on how your model is navigating the track. It becomes incredibly simple to get set up and running, and you can follow this guide to get started.


When you get set up you’ll be asked to build a reward function. A reward function contains the programming logic you’ll use to tell your little race car if it’s doing its job correctly. The model starts out pretty dumb. It will basically do nothing or drive randomly, forward, backward, in zig-zags, until you give it some incentive to follow the track correctly.

This is where the reward function comes in. In the function you provide the metrics for how the model should operate and you reward it when it does the job correctly. For example, you might reward the model for following the center line of the race track.

On each iteration of the function you’ll be handed some arguments on the model’s current status. Then you’ll do some fancy work to say, “okay, if the car is dead center reward it with 1 point but if it’s too far to the left or the right only give it a half point and then if it’s off the track give it zero.”
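That logic maps almost directly to code. Here’s a sketch of what such a center-line reward function might look like; the keys read from `params` (`track_width`, `distance_from_center`) are among the inputs Deep Racer hands your function on each iteration:

```python
def reward_function(params):
    '''Reward the car for staying close to the center line.'''
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']

    # markers at increasing distances from the center line
    marker_1 = 0.1 * track_width
    marker_2 = 0.25 * track_width
    marker_3 = 0.5 * track_width

    if distance_from_center <= marker_1:
        reward = 1.0   # dead center: full point
    elif distance_from_center <= marker_2:
        reward = 0.5   # drifting: half point
    elif distance_from_center <= marker_3:
        reward = 0.1   # near the edge: token reward
    else:
        reward = 0.0   # off the track: nothing

    return float(reward)
```

This mirrors the “one point / half point / zero” example above: the markers carve the track into bands, and the reward drops off as the car drifts outward.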

The model tests and evaluates where the best value is based on the reward function

The model will then run the track going around, trying out different strategies and using that reward function to try and earn a higher reward. On each lap it will start to learn that staying in the middle of the track offers the highest, most valuable reward.

At first this seems pretty easy… until you start to realize a couple of things. The model truly does test out all types of various options which means it may very well go in reverse and still think it’s doing a good job because you’ve only rewarded it for staying on the center line. The model won’t even take into account speed in this example because you’re not offering a reward for it.
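One way to patch those holes, sketched below, is to fold more signals into the reward. The `speed` value (in meters per second) is another input Deep Racer provides, so we can pay a bonus for moving quickly while still zeroing out anything off the track (the `/ 4.0` scaling here is just an illustrative choice, not an official formula):

```python
def reward_function(params):
    '''Reward center-line driving, but also reward speed.'''
    track_width = params['track_width']
    distance_from_center = params['distance_from_center']
    speed = params['speed']  # current speed in m/s

    # base reward for staying near the center line
    if distance_from_center <= 0.25 * track_width:
        reward = 1.0
    elif distance_from_center <= 0.5 * track_width:
        reward = 0.5
    else:
        return 0.0  # off track: no reward, no matter how fast

    # bonus that scales with speed, so crawling (or zig-zagging) pays less
    reward += speed / 4.0

    return float(reward)
```

Now a car that stays centered *and* keeps its speed up earns strictly more than one that only does the first.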


As you start to compete you’ll find that other racers are getting even more sophisticated, finding ways to train their models to take the inside lane of a race track or to block an upcoming car that’s attempting to pass.

On the surface, the concepts and tools are extremely easy to pick up and get started with, however the competition and depth of this race are incredible. I’m looking forward to building some complex models over the next couple of weeks and sharing my results. If you’re interested in a side hobby check out the AWS Deep Racer League and maybe we can race against each other!

Predicting Stock Prices with Machine Learning in Python: Part I

Over the last few weeks I’ve been keying away at building an application that can analyze stock prices and make use of Python Machine Learning libraries to predict stock prices.

This is the first part of a series of diving into machine learning and building this application. I’ve uploaded the entire project thus far to my personal GitHub repo at: https://github.com/Howard-Joshua-R/investor


I invite anyone and everyone to take a look at the project, fork it, add to it, point out where I’m doing something stupid, and build it with me! If you help, you’re more than welcome to use it for your own advantage.

photo cred: Shahadat Rahman

For this first post, I’ll walk through what I’ve built so far and how the meat and potatoes work.

If you drill into the directories and find the ‘spiders’ folder you’ll find the ‘lstm.py’ file. This particular spider is using Scrapy and an LSTM model to predict the stock price of any stock ticker you pass to it. Let’s take a look at the first piece of this tool, the scraper:

    def start_requests(self):
        # grab the 'ticker' argument passed to the spider
        ticker = getattr(self, 'ticker', None)
        if ticker is None:
            raise ValueError('Please provide a ticker symbol!')

        # quiet down noisy third-party loggers
        logging.getLogger('matplotlib').setLevel(logging.WARNING)
        logging.getLogger('tensorflow').setLevel(logging.WARNING)

        # build the Alpha Vantage request from the API key in our environment
        apikey = os.getenv('alphavantage_apikey')
        url = ('https://www.alphavantage.co/query?function=TIME_SERIES_DAILY'
               '&symbol={0}&apikey={1}&outputsize=full').format(ticker, apikey)

        yield scrapy.Request(url, self.parse)

This first function uses Scrapy to reach out to Alpha Vantage and pull down stock information in JSON format. Alpha Vantage provides fantastic stock data on open and close prices over the last decade or longer. All it requires is that you register with them to obtain an API key. Best part, it’s free!
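To give a feel for what comes back, here’s a trimmed, hard-coded sample of the TIME_SERIES_DAILY response shape (field names like ‘Time Series (Daily)’ and ‘4. close’ are Alpha Vantage’s; the prices here are made up) and how you might pull the closing prices out of it:

```python
# a trimmed, hard-coded example of the JSON shape TIME_SERIES_DAILY returns;
# the real response contains years of trading days
sample_response = {
    "Meta Data": {"2. Symbol": "TSLA"},
    "Time Series (Daily)": {
        "2020-10-02": {"1. open": "421.39", "4. close": "415.09"},
        "2020-10-01": {"1. open": "440.76", "4. close": "448.16"},
    },
}

# pull out the closing prices, oldest first, ready for model training
daily = sample_response["Time Series (Daily)"]
closes = [float(day["4. close"]) for date, day in sorted(daily.items())]
print(closes)
```

Sorting the ISO-formatted dates as strings conveniently puts them in chronological order, which is what an LSTM will want later.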


Now let’s break down each piece.

def start_requests(self):
    ticker = getattr(self, 'ticker', None)
    if ticker is None:
        raise ValueError('Please provide a ticker symbol!')

Here we define our first function, ‘start_requests(self)’, which lets Scrapy know where to start our spider. From there we grab the ‘ticker’ argument, which tells the spider what stock data to collect. I’ve currently tested TSLA (Tesla), AMZN (Amazon), and TGT (Target). Simply providing the ticker in the ‘main.py’ file is enough to set the target ticker. The final two lines simply validate that you’ve passed in the ticker argument.

The next two lines suppress logs from two of the libraries we’ll use later for building our model. Matplotlib is used to plot points on a graph and TensorFlow is used to help us implement the LSTM training model.

logging.getLogger('matplotlib').setLevel(logging.WARNING) 
logging.getLogger('tensorflow').setLevel(logging.WARNING)  

The following lines set our Alpha Vantage API key and the URL we’re going to hit for our stock data. You’ll want to store your Alpha Vantage API key in your machine’s environment variables under the name ‘alphavantage_apikey’.

apikey = os.getenv('alphavantage_apikey')
url = ('https://www.alphavantage.co/query?function=TIME_SERIES_DAILY'
       '&symbol={0}&apikey={1}&outputsize=full').format(ticker, apikey)

The final piece kicks off the Scrapy request. We provide our newly built URL, which contains our target ticker and API key, along with our parse function; once the response comes back, Scrapy hands the results to parse.

yield scrapy.Request(url, self.parse) 

So far this piece uses a Scrapy spider to reach out to Alpha Vantage and download stock data in JSON format. In the next part I will dive into the parsing and building the machine learning model.

In the meantime feel free to jump out to my GitHub Repo and read through the comments of the lstm.py file. I’ve attempted to include as many notes as I could and left some open ended questions as well. If you have any feedback I’d be more than happy to discuss! If you’re feeling brave and want to submit your own pull request please do!


GIT it?

Well did you get it? The Dad jokes are real…

Today, for Tech Tuesday, I want to share some of my most commonly used Git commands and workflows. If you’re a developer and you’re not using Git, I highly recommend you start learning. Git is easily becoming a must-have in a developer’s toolbox. I won’t cover the basics in this post, but if there’s interest, reach out to me and I could be convinced to write a Beginner’s Guide.

Let’s start by breaking down the workflow. Let’s assume I’ve already performed a ‘git clone’ and downloaded the code repository to my machine. Before doing any real work, I almost always break out into a new branch, which means my first call after a clone is most often ‘git checkout -b feature-id-name’. This gets me into my own workspace and allows me to move forward without worrying about any other developer’s work or changes.


Now let’s say I’ve made a couple of changes to a couple of files and now I want to add them to my commit history. Usually at this point I could perform a simple ‘git add .’ or ‘git add fileName’, which would stage either all of my changes or a single file’s worth of changes.

Oftentimes, though, I find that this doesn’t give me as detailed a breakdown in my commit history. I may have multiple changes, in various contexts, over the course of an hour. If I represent those as separate commits instead of one big commit, not only do I have a clean, readable commit history for my fellow developers, I also have the ability to cherry-pick specific commits or even skip commits that have a bug or mistake in them.

To that end, I like to use ‘git add -p’ for staging my commits. The ‘-p’ argument stands for ‘--patch’, which really just means I can break down my changes into chunks. Git will present parts of files and let you decide if you want to stage them. I find this perfect for when I have a bug fix over here, a feature over there, a couple of tests mixed in. I can now break these down and say, “okay, this chunk of code goes with this feature and this test but doesn’t involve this bug fix.” You also get some nice options on how to stage the changes: y to stage the chunk, n to ignore it, s to split it into smaller chunks, e to manually edit it, and q to exit.

photo cred: Josh Carter

Now, let’s say I wasn’t being a good little dev and I accidentally staged, and then even worse, committed some changes that I didn’t mean to. Well, it’s really not so bad to fix up. A little ‘git reset --soft HEAD~1’ will undo that last commit and get you back to having all of your changes staged. Now you can add/remove any other changes you need and set things back straight.

But let’s say you did a really bad thing... let’s say not only did you commit a password to your repo, but then you pushed that change up to the remote... shame... shame... shame... but don’t sweat it. We can fix that too. We can use a little reset magic like we did above to undo our change and remove the commit containing the password, and then, when we’re ready, push our changes back up with ‘git push -f’ to force them through. This re-writes the remote’s history with what we fixed locally. I can’t say how I know how to do this... I’ve never pushed a password to a repo....

Before I perform a force push of anything though, I run a ‘git log‘. This is one of my favorite commands because it allows me to see every commit that is in my local repo and then match that up with what’s in my remote repo. When I’m building releases this is extremely valuable for me to make sure I’ve captured every commit for a feature and nothing was missed or anything extra added. Use ‘git log‘ as your sanity check.


Speaking of building releases, ‘git rebase’ can be one of your best friends... or your worst enemies... A rebase is great for taking a feature branch and updating it so that, when you look at the history of commits, it looks as if the feature was just worked on today. The rebase will take all of the other changes from other developers, put them at the bottom, and then put your feature’s commits on top. This can make merging features into a release branch or a master branch easy peasy. However, if the branch is extremely old, or there is an enormous number of conflicts in the rebase, you may be better off performing your own merge. Don’t be afraid to call ‘git rebase --abort’ and look for a cleaner solution.

When a rebase fails, I typically look to either perform a merge OR to call ‘git cherry-pick commitHash‘. Cherry-picking allows me to pull one single commit out of one branch and into my local repo. This can be very handy for grabbing a change here and there and pulling it into a release. After all, who doesn’t like cherries?

photo cred: T. Q.

And typically when I’m building releases I’m pulling changes from various branches other developers have worked on. Sometimes I’ll jump into their branch and perform a rebase to clean things up and make the merge into master run smoothly. But to make sure I know which branches are out there and what commits they contain, I’ll run a simple ‘git fetch’, which downloads all of the latest branches and their commits.

We covered quite a bit, so let’s recap what we’ve discussed so far. We can run ‘git checkout -b feature-id-name’ to start a new branch and then use ‘git add -p’ to stage and commit just the changes we want. If we make a mistake we can ‘git reset --soft HEAD~1’ to undo our last commit, and if we really messed things up we can ‘git push -f’ to force our remote branch to accept our new changes.

To make sure everything is just how we want it, we can use ‘git log’ to review our commit history. If we need to update an old, stale branch we can call ‘git rebase’, and if things get hairy we can back out with ‘git rebase --abort’. If our rebase is a no-go, we might look to ‘git cherry-pick commitHash’ and grab exactly what we want. Finally, we can use ‘git fetch’ to make sure we’ve got the latest and greatest on everything in the repo.

These are my most commonly used Git commands, but I’d love to learn more. If you’ve got some commands you’re using often that aren’t on this list, let me know so I can include them. If you’re using any of these commands in a different way, I’m interested to hear how!

Taking your Objects for a Swim

There are times while we’re coding that we need to create a number of objects to handle various workloads. This could be a Factory producing a number of enemies for a game or a SaaS application firing up multiple instances of a worker process.



Whenever we start dealing with a number of the same object like this, we have to start considering object management, the lifetime of the objects, and garbage collection. If we don’t, we’ll have to deal with out-of-memory exceptions, rogue processes, and bugs, bugs, bugs...

photo cred: Wynand Uys

Thankfully there’s a solid way to handle all of these objects, not only in a clean way, but in an efficient way. That way is called Object Pooling.

Object Pooling is the concept of taking each of our created objects and placing them into a group or “pool“. Then when our game or application needs to use one of these objects, we provide a way to check that pool, find an inactive object, activate it, and return it to the caller.


If we check our pool and find there are no free objects available because they’re all already in use somewhere else, then we can provide the functionality for our pool to create us a brand new object, add it to the pool, and return it to our caller.

When our callers are done with their objects they can shut themselves off and remain inactive in our pool until another caller needs to request the object. Whew… okay that got technical so let’s break it down a little bit and see how it looks in code:

//our "pool" of enemies
List<Enemy> enemies = new List<Enemy>();

//our function to check our pool for an enemy that is inactive
public Enemy GetEnemy()
{
   foreach (var enemy in enemies)
   {
       if (!enemy.IsActive())
         return enemy;
   }
}

So in a very basic example we have our “pool” and a way to get inactive objects out of that pool. But what happens if there aren’t any objects in our pool to begin with? What happens if there are some objects in our pool but they’re all in use and there aren’t any inactive ones to be returned? Let’s see what else we can add to fix this.

//let's add some functionality to create a new enemy if our list has none 

//our function to check our pool for an enemy that is inactive
public Enemy GetEnemy()
{
   //loop over our pool and find an inactive enemy to return
   foreach (var enemy in enemies)
   {
       if (!enemy.IsActive())
         return enemy;
   }

   //if no enemy was returned, create a new one and add it to the pool
   var newEnemy = new Enemy();
   enemies.Add(newEnemy);
   return newEnemy;
}

So what did we change here? Well we gave our GetEnemy() function the ability to not only find an inactive object to return to the caller but we added the ability for it to create a new enemy if there aren’t any available enemies to return. This means that no matter what, when we call GetEnemy() we are guaranteed to get an available object returned back to us.

photo cred: Joe Calata

This is essentially the bread and butter of Object Pooling, but why is this better and what are the advantages?

  1. We get a single place to find and get an available object
  2. We’re not forcing our application to create and destroy objects over and over, eating up resources.
  3. We’re not allowing our objects to run free. They’re all contained and managed in our “pool” and if we wanted to, we could turn them all off.

Now, how do we know the best time to use Object Pooling? If you find you’re creating and destroying a number of the same object repeatedly over time, you should be using Object Pooling. The performance gains are significant and the pattern can be implemented in nearly any scenario.
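The pattern translates to any language. Here’s a minimal, hypothetical Python version of the same pool, showing the full activate/release cycle (class and attribute names are my own for illustration):

```python
class Enemy:
    def __init__(self):
        self.active = False

class EnemyPool:
    """A minimal object pool: reuse inactive enemies, grow only when needed."""
    def __init__(self):
        self.enemies = []

    def get_enemy(self):
        # reuse an inactive enemy if one exists
        for enemy in self.enemies:
            if not enemy.active:
                enemy.active = True
                return enemy
        # otherwise grow the pool with a fresh one
        enemy = Enemy()
        enemy.active = True
        self.enemies.append(enemy)
        return enemy

pool = EnemyPool()
a = pool.get_enemy()
b = pool.get_enemy()   # a is still active, so a second enemy is created
a.active = False       # caller is done; a sits inactive in the pool
c = pool.get_enemy()   # reuses a instead of allocating a third enemy
```

Note that “releasing” an object is just flipping its flag off; the pool never destroys anything, which is exactly where the garbage-collection savings come from.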


The Factory Pattern

In part II of our programming mini-series we’ll dive into the Factory Pattern, how it can be used, and when it can be beneficial.

The Factory Pattern itself is fairly straightforward. When we use the Factory Pattern we are building a Factory that will produce objects for us. There are two key reasons this pattern is helpful, and I’ll share some examples of how we can use it.

First and foremost, the Factory Pattern allows us to encapsulate (wrap and hide) the creation of objects. This means that instead of coding ourselves into a maintenance nightmare where we are instantiating concrete class after concrete class, we can instead code to a single factory and allow it to provide our instantiation. Let’s look at a bad example:

public class LevelOne
{
	public void CreateEnemy(string enemyName)
	{
	   if (enemyName == "HammerBro")
	   {
		  HammerBro hammerBro = new HammerBro();
	   }
	   else if (enemyName == "BulletBill")
	   {
		  BulletBill bulletBill = new BulletBill();
	   }
	   else if (enemyName == "PiranhaPlant")
	   {
		  PiranhaPlant piranhaPlant = new PiranhaPlant();
	   }
	   else if (enemyName == "KoopaTroopa")
	   {
		  KoopaTroopa koopaTroopa = new KoopaTroopa();
	   }
	   else if (enemyName == "Goomba")
	   {
		  Goomba goomba = new Goomba();
	   }
	}
}

What you see above is pretty common in Software Development. We need a function that we can tell what kind of enemy we need and it will create it for us. However, this approach comes with a lot of headaches.

Any time we want to add a new enemy we now have to open this class back up and make a change. Anytime we have to make a change to existing code, we’re introducing risk. Now, we’ll have to change this code no matter what if we want to add a new enemy BUT we can do this in a way that reduces our risk and the number of changes we have to make in our application.

Consider that instead our class is used by another development team. Now each time we add a new enemy we have to ask that team to update their create function to be compatible with ours. Now consider this other team is actually a customer or vendor. How easy is it to ask them to make changes every time we make a change?

And let’s say we want this function to return our enemy so he can do things like attack. Well, we’re going to need a lot more code and a lot more if statements to do things like:

if (koopaTroopa)
   koopaTroopa.Attack();
else if (goomba)
   goomba.Attack();

The Factory Pattern makes this all go away. It does away with the dependencies on concrete classes, it abstracts the creation of enemies, and it reduces the amount of code that needs to change when new enemy classes are created or changed.

The second benefit we’ll get from our Factory is a centralized place for anyone to get an enemy from. This means that on Level 1 we can use the same factory to create the same enemies as we use on Level 20. We don’t need to re-implement enemy instantiation or repeat any of our enemy code. Woot!

So what can our Factory look like? Well with the help of an Interface it can look like this:

public class EnemyFactory
{
	public IEnemy CreateEnemy(string enemyName)
	{
	   if (enemyName == "HammerBro")
		  return new HammerBro();
	   else if (enemyName == "BulletBill")
		  return new BulletBill();
	   else if (enemyName == "PiranhaPlant")
		  return new PiranhaPlant();
	   else if (enemyName == "KoopaTroopa")
		  return new KoopaTroopa();
	   else if (enemyName == "Goomba")
		  return new Goomba();

	   return null; //no matching enemy found
	}
}

Now you might say, wait a minute… this looks exactly the same as what we had before. And you’d be right! It’s very close, so let’s take a look at what’s different.

Instead of instantiating the enemy and storing a copy of the concrete class within LevelOne, we have abstracted the creation to another class (our Factory!). We’ve also implemented an interface. Think of an interface like wrapping paper around our object: it allows us to treat the enemy in a generic way. Our interface might look like this:

public interface IEnemy
{
   void Attack();
}

With our new Factory and Interface we can now flexibly create the enemies we need, have them attack (and perform the correct attack), and our friends on the other development team will love us so much more because they no longer need to refactor and write new code each time a new enemy class is created. Let’s see an example:

public class LevelOne
{
   IEnemy currentEnemy;   //our enemy to fight
   EnemyFactory factory;  //our factory, shared across the level

   public LevelOne()
   {
      //create our factory
      factory = new EnemyFactory();
   }

   public void CreateGoomba()
   {
      //let our factory create the enemy
      currentEnemy = factory.CreateEnemy("Goomba");
   }

   public void AttackPlayer()
   {
      //our interface knows which enemy Attack() function to call
      currentEnemy.Attack();
   }
}

So what’s so different and special about this design? Well, for one, our LevelOne class no longer depends on five other enemy classes. If our game designer comes along and decides, “Oh, we want to create a new Goomba class called SuperGoomba,” we can now easily make that change on one line of code.

What else is helpful here? Well now we have a single place to go to for enemy creation. Not only can LevelOne create any enemy it wants, so can LevelTwo, and LevelTen, and so on. And if new enemies are added, none of the code needs to change for existing levels.

The final benefit we gain from using the Factory Pattern is the advantage of using interfaces. By having our Factory return an interface instead of a concrete class, we make it much easier for other developers to use our factory. Developers won’t need to code for a different Attack() function for each enemy type. They can instead call a single ‘enemy.Attack()’, which will invoke each enemy’s Attack() without actually having to know which enemy it is.

In closing, the Factory Pattern is simply a way for us to create a Factory that does all the hard work of creating objects and gives them back to us in a generic fashion by using an Interface. It allows us to decouple our code, encapsulate instantiation, and provide highly maintainable solutions. If you find that you’re using the ‘new’ keyword to create objects it might be a good time to consider using the Factory Pattern.
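As a footnote, in dynamic languages the if/else chain inside the factory can shrink to a lookup table. Here’s a hypothetical Python sketch of the same idea (class names borrowed from the examples above, the dictionary approach is my own), where adding a new enemy is just one new entry:

```python
class Goomba:
    def attack(self):
        return "Goomba attacks!"

class KoopaTroopa:
    def attack(self):
        return "KoopaTroopa attacks!"

class EnemyFactory:
    """Maps enemy names to classes; new enemies only need a new entry."""
    enemy_types = {
        "Goomba": Goomba,
        "KoopaTroopa": KoopaTroopa,
    }

    def create_enemy(self, enemy_name):
        enemy_class = self.enemy_types.get(enemy_name)
        if enemy_class is None:
            raise ValueError("Unknown enemy: " + enemy_name)
        return enemy_class()

factory = EnemyFactory()
enemy = factory.create_enemy("Goomba")
print(enemy.attack())
```

Python’s duck typing stands in for the IEnemy interface here: any class with an attack() method works, and callers never need to know which concrete enemy they received.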

The Singleton

This is the first part in a mini-series of posts around design patterns in software engineering.

The Singleton Design Pattern is one of the most commonly used design patterns in game development. I’ve personally used it in every game project I’ve worked on and it has been extremely valuable. However, in my professional development I’ve yet to find a truly valuable place to use this design pattern. Now that doesn’t mean it has no value in the professional world, it’s just not as commonly used.

So what is the Singleton pattern? What does it do and when/where should it be used? The Singleton pattern is named as such because it is used when you want to enforce your application to only ever have a single instance of an object. The most common place I’ve implemented this design pattern is when I need to implement an object “manager.” This happens frequently in game development.

Imagine you’re developing a game, and in that game enemies are created. There are numerous enemies, and you need a way to manage them. Having a long list of enemies to keep track of and then pass around to various other functions and objects in the game can quickly turn into a maintenance nightmare. To get around this, we can lump all of these enemies under a single umbrella, a single manager.

Now that we’ve got ourselves a single EnemyManager we need a way to make sure that the next developer that comes along doesn’t go and create a second EnemyManager in another area of the game. Think how hard it would be to get your job done if you had two bosses. The same concept applies here. We don’t want two different managers giving orders to our group of enemies. This is where the Singleton pattern comes into play.

The Singleton pattern ensures that no matter what, no matter when, where, or how... our application will only ever have one single manager. I find that this pattern is most valuable when you have an application that can spring into action in the middle of its workflow. For example, think of a game developer working on level 20. Without this pattern, he needs to start the game from the very beginning, where the managers are initially set up, and walk through all of the start menus just to get to his level so he can test his changes. This can be extremely time consuming.

BUT… if he implements the Singleton pattern, he can create his EnemyManager on every level of his game and the pattern will ensure the manager is created correctly on level 20 and all other levels of the game without any conflicts or duplication. So what does this pattern look like in code? I’ll share an example in .NET:

using System;

public class EnemyManager
{
   private static EnemyManager instance;

   //a private constructor prevents anyone else from calling 'new EnemyManager()'
   private EnemyManager() { }

   public static EnemyManager Instance
   {
      get
      {
         if (instance == null)
         {
            instance = new EnemyManager();
         }
         return instance;
      }
   }
}

So what are we looking at here? Let’s break it down.

private static EnemyManager instance;

This is our single instance of the EnemyManager. You’ll notice that our class itself is not static but does hold a static instance of itself. There are a couple of advantages here:

  1. A purely static class can achieve similar results as the Singleton pattern but it cannot implement interfaces that we might use like in the Observer Pattern (we’ll cover in another post).
  2. The Singleton pattern can allow itself to be passed around as an object for Dependency Injection (we’ll cover in another post).

The next piece is how we enforce only a single instance of our EnemyManager to exist and how we protect the one true instance from duplication.

public static EnemyManager Instance  
   {  
      get  
      {  
         if (instance == null)  
         {  
            instance = new EnemyManager();  
         }  
         return instance;  
      }  
   }  

This function is what all other objects will call when they need to grab the EnemyManager instance. When another object calls:

var enemyManager = EnemyManager.Instance;

Our getter will first check if the local ‘instance‘ variable is null. This is where the first check is made to see if an instance of the EnemyManager has already been created at some other time. In our example, we’ll assume this is the first call to EnemyManager and therefore the ‘instance‘ is null. What happens next is we assign a new EnemyManager to the ‘instance‘ variable and then we return it.

Now if we were to make the above call again… when our getter function reaches:

if (instance == null)

We’ll see that we’ve already created our instance of the EnemyManager and instead of creating a new EnemyManager, we’ll return the one that was previously created.

So why does this help our game developer while he’s working on level 20? Well instead of having to create our EnemyManager only at the beginning of the game (to avoid duplication) we can actually call to create our EnemyManager anytime we need it and always guarantee we’ll get the one single instance. This is because even if we start our game on level 20, the Main Menu, or anywhere, our code will make sure that only one EnemyManager ever exists and that we’ll never end up with two managers (having two bosses is such a terrible thought…).

The Singleton pattern is fairly easy to implement but it’s important to know when to use it. Here are the times I’ve found it most useful:

  1. If you know you only ever need a single instance of the object that’s a good sign you’ll want to use the Singleton pattern.
  2. If your application can be started in various states, like level 20 in our example above, and you want to ensure the right objects are created only once.
  3. If you need to have a single instance of an object but you also need to implement Interfaces and/or use Dependency Injection.

If you do find that you need a single instance of an object BUT you do not need to use Interfaces, Dependency Injection, or other Object Oriented Principles then you do have the option to make your entire class static.
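For comparison, here’s the same lazy-instance idea sketched in Python (a hypothetical sketch, not from any of my projects): one private class-level slot, and an accessor that creates the manager on first use and reuses it ever after.

```python
class EnemyManager:
    _instance = None  # the one true instance

    @classmethod
    def instance(cls):
        # lazily create the manager on first access, then always reuse it
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

a = EnemyManager.instance()
b = EnemyManager.instance()
print(a is b)  # every caller gets the same manager
```

Just like the C# getter above, the null (None) check is what guarantees the constructor only ever runs once, no matter which level asks for the manager first.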

Welcome to The Hardest Work

I’ve recently become motivated to share my journey through life, my experiences, my failures, and my successes in hopes to inspire others as I’ve been inspired.

Since this is my first post I’m still working out the details of setting up this blog. I’m hoping to cover a wide variety of content and topics including:

  • Productivity
  • Motivation
  • Parenting
  • Programming
  • Fitness
  • and More!

I don’t have a set plan yet but I hope to share as much as I’ve learned about various topics and that readers of this blog can share their experiences back with me.

If you have suggestions for content or topics you’d like to hear about please let me know!