Tag Archives: coding

How To Write a Valuable Commit Message

Fixed.

Fixed.

Fixed.

The best commit message out there. The one that tells you absolutely nothing about anything.



Commit messages are a funny thing. They’re not very valuable when you submit them, yet when a bug pops up in production or you’re building a release, knowing the right place to look (especially if it’s not your code change) can save hours.

The commit message is one of those things you do so that you can reap the benefits when needed. Sort of like locking your door at night before you go to bed. You don’t expect to need those locks, but when you do, you’re grateful you had them in place and locked.

So what can make a great commit message? First, I think it's better to ask a different question: what makes a great commit? That question can be trickier to answer, but answering it helps us write a better commit message.


A good commit can best be seen as one of many small commits. The reason is that when you commit often, you end up with a best-of-both-worlds scenario.

You see, some would prefer one giant chunk of completed code that delivers a feature. This lets you see the entire solution together as one; reviewing or pushing changes to a main branch one-by-one can be difficult. The good news is that most source control solutions now offer a way to squash (combine) all of the new commits on a branch before merging them into main, making this incredibly easy to accomplish.
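With plain git, that squash flow might look like this (the branch name and commit message here are just illustrative):

```shell
# Squash a feature branch's commits into one commit on main
git checkout main
git merge --squash feature-search-dropdown
git commit -m "23489-Feature: Added new items to the drop down on search page"
```

Hosted services like GitHub and GitLab expose the same idea as a "Squash and merge" option on the pull/merge request.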

photo cred: Melanie Hughes

However, seeing individual commits can let you break things down more easily. You get an idea of how the changes developed over time, which can help you pinpoint issues. You can also write more frequent commit messages instead of one really long message on a single massive commit. Oh, and if you ever need to switch gears and work on that production bug ASAP, having your in-progress work already committed makes it easy to switch to your main branch without losing any work.
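That last point is worth a sketch: with your in-progress work committed, switching to main for the emergency fix is two commands (branch and message names are placeholders):

```shell
# Park the in-progress work as a commit on the feature branch
git add .
git commit -m "23489-Feature: WIP on drop down filtering"

# Jump to main with a clean working tree to tackle the bug
git checkout main
```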

Now that we know we want lots of little commits, let's talk about what to put in the commit message. There are a couple of key details to include, and with a good branching strategy your commit message can contain more or less information as needed, but you typically want to cover the following:

Include the ID of the related work ticket.

Include the type of commit.

Include a brief description of the change.


These three items make up a tasty commit stew and can be viewed as: ID-Type: Message. Let's look at a couple of examples.

23489-Feature: Added new items to the drop down on search page

23489-Bug: Fixed the drop down from not opening completely

89012-Refactor: Cleaned up old, duplicate code for invoicing

1203-Dependencies: Updated dependencies ahead of new features

Each of these commit messages provides enough detail to tie the code changes directly back to the work item they belong to, as well as what the change is, so you can more easily identify potential issues.
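A nice side effect of leading with the ticket ID: finding every commit for a work item becomes a one-liner (`--grep` matches against commit messages):

```shell
# List every commit whose message mentions ticket 23489
git log --oneline --grep="23489"
```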

When pushing these changes to a main branch you then have two options: you can either hang on to the individual commits (which can be useful for future debugging) or you can squash them all together and push the changes as a whole feature to the main branch (which gives you a cleaner history for the code base).

At the end of the day, there's no one right way to write a commit message. There are, however, wrong ways… The key is to do yourself and your fellow developers a favor and make it as easy as possible to identify what the code changes are.


How to Intercept WebMethods in .NET

Legacy applications often suffer from tech debt. It’s a natural part of the development life cycle and depending on the size, scope, and team, tech debt can range from a few lines, to most of the project.

In a perfect world, tech debt wouldn't exist, but perfect is the enemy of progress, and so progress is made and tech debt is built. As tech debt mounts it can become increasingly difficult to integrate additional features and functionality. A good refactor can always help, but what about cases where you need to add in things like logging or metric collection?

Previously you may have been stuck with having to sprinkle code throughout the application to collect the logging that was needed, or you may have had to hunt down each place a specific line of code was called or used (and its various implementations). This can be difficult and unreliable. It also means that any time you want to make a change you need to track down each of those little sprinkles and change them.

Enter PostSharp:

photo cred: Markus Spiske

PostSharp is a third-party library that you can include in your project and set up through NuGet. It allows you to apply attributes to your functions that intercept the call and pass the original method through as a parameter. In other words, you can add a custom attribute to any function and then execute your new code before or after the execution of the original function. Let's look at some examples:

After I've installed the NuGet package, the first thing I want to do is create a new class that will handle my needs. In this example we'll say we want some sort of metric collection functionality and to inspect the results of the function.

[PSerializable]
public class CollectMetric : MethodInterceptionAspect
{

}

Our new CollectMetric class requires that we inherit from MethodInterceptionAspect and that we apply the PSerializable attribute. Both are PostSharp requirements. Next we can write our actual implementation. Some of this won't be real code, but you'll get the idea.

[PSerializable]
public class CollectMetric : MethodInterceptionAspect
{
     public CollectMetric(){}//constructor

     public override void OnInvoke(MethodInterceptionArgs args)
     {
          //more code to come
     }
}

So now that we have our constructor and the OnInvoke() function, we can wire things up and start to debug and see how this works. We do this by going to any function that we want to intercept and adding a [CollectMetric] attribute, like this:

[CollectMetric]
public int MyOriginalCode(string inputs)
{
     //stuff
     return 0;
}

If we were to debug MyOriginalCode(), we would see that before that function even gets called, our OnInvoke() function is called. Now all we have to do is decide what we want to do in OnInvoke() and when to call the original MyOriginalCode(). It might look like this:

public override void OnInvoke(MethodInterceptionArgs args)
{
     args.Proceed(); //this calls MyOriginalCode() function like normal
     var value = args.ReturnValue; //output from MyOriginalCode()

     SaveMetricToDatabase(value); //some other thing we wanted to do
}

So by calling args.Proceed() we are really just calling our old original code like normal, MyOriginalCode(), and we can even capture the output of that function with args.ReturnValue. From there we can do whatever we want. Add logging, inspect the results of the function, whatever new things we want to do.


The big advantages we gain from this solution are that we no longer have to worry about creating bugs by modifying existing code, and we can encapsulate all of our changes in a single place and write unit tests around them. This is a far safer solution than going around and sprinkling in lines of code throughout the application.

If you get a chance to try the solution out let me know how your implementation went! I’d love to get some feedback on what worked and didn’t work for you and how I might improve my own implementation. If you attempt this project and get stuck, feel free to reach out to me by leaving a comment!

How I Conduct a Code Review

Code reviews have been and always will be tricky business. If you’re familiar with performing code reviews or having someone else review your code, you hopefully can understand and see the benefits. For some though, a code review can be a hit to the ego. Here’s how I’ve approached code reviews and what I find works best.

First and foremost, you have to have some formal code review practices in place. If you don’t, I suggest you start by offering up your services. Not in a way that says, “I’m a better developer and I know more so let me make sure your code is right.” If you give off any type of vibe like this, your code review is going to come off as arrogant and unwanted.

Offer your services for a code review as a means to help take some of the pressure off of the other developer. Your code review is to help them look for bugs, typos, or little details they may have missed. It’s also to provide a challenge back to that person to ask both you and them to think critically.

Once you’re able to agree to a code review with your peer, the next step is actually conducting the review. But how do you go about performing a code review? What should you be looking for? What really works? I follow these same steps, every single time, and find they always hit the nail on the head.


Step 1: Compare

Before I actually do any real review, I start by performing a compare. There are a lot of ways to compare the changes, before and after, and I’ll let you figure that part out. I start with a compare to get a sense of what I’m going to be reviewing.

How much code has changed, and by how much? At times I may find that too much has changed, making the review nearly impossible. When this happens, it's best to recommend that the developer try to create smaller commits. BOOM! You just gave your first piece of code review feedback.

When looking through the compare, don’t try to focus on individual lines of code yet. We’re not at that stage. Stay at a high level and focus on structure and getting familiar with what you’re working with. Are there new classes or functions? Is it a complete rework or just a quick change? Are there unit tests? Did the developer leave any notes for the reviewer? Focus on gathering information and making an initial assessment.

If I suspect that the developer is not done working on an area, or that there may be major additional changes, I will stop my review of that area right there. There is no sense in going to the next level of review if the section of code will potentially be refactored or changed. Save yourself some time and simply give the developer your high-level assessment, and tell them you look forward to diving deeper once they are closer to completing their changes.

photo cred: Pankaj Patel

Step 2: Debug

This is a step I think a lot of inexperienced code reviewers skip over. DEBUG THE F%$*@!+ CODE! For real though, pull the code down, set breakpoints on all of the changes (you know where they are, since you just did a before-and-after compare on them) and start to debug. I find that it helps me if I just go line-by-line and get an understanding of what each piece is doing.

This is where I really start performing the review. I look for any errors, bugs, or design flaws. I also ask questions about anything I'm unfamiliar with myself. If the developer created unit tests, I try to either add my own data to those tests or see if I can break them. This helps expose any areas the developer overlooked.

You might be thinking that stepping through each line is not needed or that it's a lot of extra work, but in all honesty it's not. You can debug a few hundred lines of code in a matter of minutes. Most developers break things out into more lines than they would ever really need, and stepping through that code becomes even easier.

photo cred: Agence Olloweb

Step 3: Feedback

After stepping through every line of code you should be prepared to provide your feedback. When it comes to feedback, I find there is really only one good way to do it: keep it positive and make it suggestive. At the end of the day, the developer who wrote the code has to be responsible for the quality of that code. Oftentimes if you try to force a change in your code review you'll end up with a You vs. Me scenario that isn't productive for anyone.

I prefer to start with something positive. I try to take into account the amount of time and effort the developer put into their work and look for any design patterns or usage of good programming practices that I can point out right away. It's a lot more fun to code review someone's work when they are using best practices.

For the harder stuff, I try to offer up changes as a suggestion, unless I see a clear-cut bug. If I see a bug or true flaw, I try to provide a unit test or solid steps to reproduce it to help the developer out. After all, we're all in this together.

If the change is more about design or best practice, then I offer it as a suggestion. I might say, “Hey you could try this technique here to reduce x amount of code” or “If you try this, it can help keep things easier to maintain down the line. Let me know if I can help!” Offering up additional services or help is often a great way to provide feedback.


Step 4: Rinse & Repeat

From there you can send your code review back. Try to capture as much as you can in that first review. It's much less grueling than going back-and-forth and back-and-forth because you keep catching only one issue at a time. Try to capture all of it, up front, on the first review.

When the code comes back for a second review, hopefully all items are addressed and you can pass the review. BUT don't be afraid to go back through steps 1-3. Actually, I encourage you to go through them each time, just maybe at a faster pace or a higher level. For example, on your first pass you should debug every line of code; on your second review, maybe you just stick to the areas of change.

Hopefully this gives you a good idea of how to conduct a code review! If you have any personal tips on your best practices for code reviews I’d love to hear them!

README

Developers tend to be really great at writing code but not so great at documenting that code. It's not that they can't or don't want to; it's that oftentimes the effort to write that documentation isn't captured within the scope of the requirements for the feature.

But as developers we shouldn’t let that stop us!

photo cred: Shahadat Rahman

I’ll share a couple of good reasons to document your code and some really easy ways to make it happen. While we’re at it, let’s stop using the gross word “document.” What you want to write is a solid README!

Why You Should ~~Document~~ Write a README:

  1. Writing a README gives you a means to not be the sole owner of that code. When you have notes around how your code works, what it intends to accomplish, and how others can contribute to the project, it stands a much greater chance of getting additional buy-in from other developers and stakeholders. When you're the only one who knows how it works, be prepared to be the only one who ever gets asked to work on it.
  2. Writing a README for your code helps you come back to that code and remember how you originally set it up. It allows you to go back 6 months later and say, "Why the hell did I do it this way… Ohhhh… right… here's why." Save yourself; write a README for your code.
  3. Writing a README for your code helps you think through the scope of your project and its functionality. It helps you take a step back and really consider what you're doing and why you're doing it. It helps you put that official stamp of approval that you've completed this version and it is DONE. Scope creep is dangerous; writing a README on the current state of the project and any potential future work can help keep that project from living forever.

How You Should ~~Document~~ Write a README:

  1. README, README, README… most source control services provide some extra functionality around READMEs. Some really great ones like GitLab and GitHub will display the contents of the README in a web browser, making for easy access. If you're not using source control, YOU SHOULD BE! If you are, you should be writing a README and including it with your source code.
  2. There are some great README templates out there! Save yourself a ton of headache and find one that works for you. A template gives you a head start and helps you focus in on what you need to cover. It also helps maintain the scope of your README. Here’s one that I’ve used: https://github.com/othneildrew/Best-README-Template
  3. Don’t just rely on a template. Use the template as a starting point, but think about what is going to be important for your team, users, and stakeholders to know when viewing your README. Do they need to know how to set it up? Do they need to request any special permissions from anyone? Can you reference other documentation that might help give them a deeper understanding? Try to think of your README as being something you can hand to another person and they do not need to come back to you with questions.
  4. Maintain your README. As you get additional questions on the project, update your README. Add a Frequently Asked Questions section or fix that typo. Include an area you may have missed on the first pass and encourage others to contribute to your README. Continue to build onto your README as you add new functionality to your project, and just remember to Keep It Simple, Stupid.

READMEs often get skipped over as we jump from project to project. I've been guilty of leaving them out before, but I've also seen the advantages of including them, and I'm working to make them a habit.

The best place to start is to just start. Go grab a README template, paste it into your project and fill it out bare bones. From there you can build on to what you’ve started and the next thing you know, you’ll have a solid document around the who, the what, and the why.

Happy Coding!

How to Debug a Unity APK on an Android device with Visual Studio

Oh my goodness, this was far too difficult… so I had to write it up! This guide assumes you have some experience with Unity, debugging in Visual Studio, and building APKs (let me know if you don't in the comments!).

If you’re using Unity, you’re coding in C#, and you’re using Visual Studio… Here’s how you can setup a remote debugger to debug your APK on an Android device.

I’m currently using the following tools but the steps are generally the same:

Unity v2020.1.7f1

Visual Studio Community 2019

Android Studio v3.5 & SDK

Samsung Galaxy S9

Step 1: When you build your APK you'll want to tick 2 boxes on the build settings menu: 'Development Build' & 'Script Debugging'.

Step 2: Create your build as usual and copy the build to your Android device. Connect your Android Device to your PC via USB. Be sure your machine and Android device are connected to the same wifi network.

Step 3: If you haven’t previously, Enable USB Debugging on your Android Device.

  1. Open Settings
  2. Select System
  3. Scroll to the bottom and select About Phone
  4. Scroll to the bottom and tap ‘Build Number’ 7 times
  5. Return to the previous screen and find ‘Developer Options’ near the bottom
  6. Scroll down and select to enable ‘USB Debugging’

Step 4: Back on your computer, open up a cmd prompt (or terminal) as an administrator and change your directory to:

C:\Users\[user]\AppData\Local\Android\sdk\platform-tools

Step 5: Back on your Android Device, find the Android Device’s IP by going to:

  1. Go to Settings
  2. Go to ‘About Phone’
  3. Go to 'Status'
  4. Find ‘IP Address’
  5. Write it down…

Step 6: Back in your command prompt enter the following command:

adb tcpip 5555

If you need some extra help see the Unity Documentation here

Step 7: Connect to your Device with this command using your IP from Step 5:

adb connect [YourIpAddress]:5555

Step 8: Open Visual Studio from within Unity and go to the ‘Debug’ menu option. Select ‘Attach Unity Debugger’

Step 9: You should see a small menu popup and it should display both your computer and your connected Android device. Select your Android device, set some break points in your code, and have fun!

Hopefully this guide saves you some time and headache on trying to figure this out. If you need more information on connecting your device that this guide doesn’t provide let me know so I can update it!

Predicting Stock Prices with Machine Learning in Python: Part I

Over the last few weeks I’ve been keying away at building an application that can analyze stock prices and make use of Python Machine Learning libraries to predict stock prices.

This is the first part of a series of diving into machine learning and building this application. I’ve uploaded the entire project thus far to my personal GitHub repo at: https://github.com/Howard-Joshua-R/investor


I invite anyone and everyone to take a look at the project, fork it, add to it, point out where I’m doing something stupid, and build it with me! If you help, you’re more than welcome to use it for your own advantage.

photo cred: Shahadat Rahman

For this first post, I’ll walk through what I’ve built so far and how the meat and potatoes work.

If you drill into the directories and find the 'spiders' folder, you'll find the 'lstm.py' file. This particular spider uses Scrapy and an LSTM model to predict the stock price of any stock ticker you pass to it. Let's take a look at the first piece of this tool, the scraper:

    def start_requests(self):                       

        ticker = getattr(self, 'ticker', None)      
        if (ticker is None):
            raise ValueError('Please provide a ticker symbol!')

        logging.getLogger('matplotlib').setLevel(logging.WARNING) 
        logging.getLogger('tensorflow').setLevel(logging.WARNING)  

        apikey = os.getenv('alphavantage_apikey')                   
        url = ('https://www.alphavantage.co/query?function=TIME_SERIES_DAILY'
               '&symbol={0}&apikey={1}&outputsize=full').format(ticker, apikey)

        yield scrapy.Request(url, self.parse) 

This first function uses Scrapy to reach out to AlphaVantage and pull down stock information in JSON format. AlphaVantage provides fantastic stock data on open and close prices over the last decade or longer. All it requires is that you register with them to obtain an API key. Best part: it's free!


Now let’s break down each piece.

def start_requests(self):                       

        ticker = getattr(self, 'ticker', None)      
        if (ticker is None):
            raise ValueError('Please provide a ticker symbol!')

Here we define our first function, 'start_requests(self)', which lets Scrapy know where to start our spider. From there we grab the 'ticker' argument, which tells the spider what stock data to collect. I've currently tested TSLA (Tesla), AMZN (Amazon), and TGT (Target). Simply providing the ticker in the 'main.py' file is enough to set the target ticker. The final two lines validate that you've passed in the ticker argument.

The next two lines suppress noisy logs from two of the libraries we'll use later for our model: Matplotlib is used to plot points on a graph, and TensorFlow helps us implement the LSTM training model.

logging.getLogger('matplotlib').setLevel(logging.WARNING) 
logging.getLogger('tensorflow').setLevel(logging.WARNING)  

The following lines set our AlphaVantage API key and the URL we're going to hit for our stock data. You'll want to store your AlphaVantage API key in an environment variable on your machine under the name 'alphavantage_apikey'.

apikey = os.getenv('alphavantage_apikey')                   
url = ('https://www.alphavantage.co/query?function=TIME_SERIES_DAILY'
       '&symbol={0}&apikey={1}&outputsize=full').format(ticker, apikey)
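For the spider to find the key, it has to be in the environment before Python starts. A quick way to set it for the current shell session (Linux/macOS syntax; the key value is a placeholder):

```shell
# Export the AlphaVantage API key so os.getenv() can read it
export alphavantage_apikey="YOUR_KEY_HERE"

# Confirm it is visible to child processes
printenv alphavantage_apikey
```

On Windows you'd set it through System Properties → Environment Variables instead.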

The final piece kicks off the request. We provide our newly built URL, which contains our target ticker and API key, along with our parse function; once the request completes, Scrapy passes the response to parse.

yield scrapy.Request(url, self.parse) 

So far this piece uses a Scrapy spider to reach out to AlphaVantage and download stock data in JSON format. In the next post I'll dive into parsing and building the machine learning model.

In the meantime feel free to jump out to my GitHub Repo and read through the comments of the lstm.py file. I’ve attempted to include as many notes as I could and left some open ended questions as well. If you have any feedback I’d be more than happy to discuss! If you’re feeling brave and want to submit your own pull request please do!


GIT it?

Well did you get it? The Dad jokes are real…

Today, for Tech Tuesday, I want to share some of my most commonly used git commands and workflows. If you're a developer and you're not using git, I highly recommend you start learning. Git is easily becoming a must-have in a developer's toolbox. I won't cover the basics in this post, but if there's interest, reach out to me and I could be convinced to write a Beginners Guide.

Let’s start by breaking down the workflow. Let’s assume I’ve already performed a ‘git clone‘ and downloaded the code repository to my machine. Before doing any real work, my first call is almost always to break out into a new branch, which means my first call after a clone is most often, ‘git checkout -b feature-id-name‘. This gets me in my own workspace and allows me to move forward without worrying about any other developers work or changes.


Now let's say I've made a couple of changes to a couple of files and I want to add them to my commit history. Usually at this point I could perform a simple 'git add .' or 'git add fileName', and those options would include either all of my changes or a single file's worth of changes.

Oftentimes though, I find that this doesn't give me as detailed a breakdown in my commit history. I may have multiple changes, in various contexts, over the course of an hour. If I represent those as separate commits instead of one big commit, not only do I have a clean, readable commit history for my fellow developers, I also have the ability to cherry-pick specific commits or even skip commits that have a bug or mistake in them.

To that end, I like to use 'git add -p' for staging my commits. The '-p' argument stands for '--patch', which really just means I can break my changes down into chunks. Git will show you parts of files and let you decide if you want to stage them. I find this perfect for when I have a bug fix over here, a feature over there, a couple of tests mixed in… I can now break these down and say, "okay, this chunk of code goes with this feature and this test, but doesn't involve this bug fix." You also get some nice options on how to stage the changes: y to stage the chunk, n to ignore it, s to split it into smaller chunks, e to manually edit it, and q to exit.

photo cred: Josh Carter

Now, let's say I wasn't being a good little dev and I accidentally staged, and then even worse, committed some changes that I didn't mean to. Well, it's really not so bad to fix up. A little 'git reset --soft HEAD~1' will undo that last commit and get you back to having all of those changes staged. Now you can add/remove whatever you need and set things straight.
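A minimal sketch of that undo (nothing is lost; the undone commit's changes come back as staged files):

```shell
# Undo the last commit but keep its changes staged
git reset --soft HEAD~1

# The files from the undone commit now show as "Changes to be committed"
git status
```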

But let's say you did a really bad thing… let's say not only did you commit a password but you then pushed that change up to a remote repo… shame… shame… shame… but don't sweat it. We can fix that too. We can still use a little reset magic like we did above to undo the change and remove the commit with the password, and then, when we're ready, push our changes back up with 'git push -f' to force them through. This rewrites the remote history with what we fixed locally. I can't say how I know how to do this… I've never pushed a password to a repo…

Before I force push anything though, I run a 'git log'. This is one of my favorite commands because it allows me to see every commit in my local repo and match that up with what's in the remote repo. When I'm building releases, this is extremely valuable for making sure I've captured every commit for a feature and nothing was missed or anything extra added. Use 'git log' as your sanity check.


Speaking of building releases, 'git rebase' can be one of your best friends… or one of your worst enemies… A rebase is great for taking a feature branch and updating it so that when you look at the history of commits, it looks as if the feature was just worked on today. The rebase takes all of the other changes from other developers, puts them at the bottom, and then puts your feature on top of them. This can make merging features into a release branch or a master branch easy peasy. However, if that branch is extremely old, or if there is an enormous number of conflicts in the rebase, you may be better off performing your own merge. Don't be afraid to call 'git rebase --abort' and look for a cleaner solution.
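The happy path looks like this, assuming a feature branch that has fallen behind main (branch names are placeholders):

```shell
# Replay the feature branch's commits on top of the latest main
git checkout feature-id-name
git rebase main

# If conflicts get out of hand mid-rebase, back out cleanly with:
#   git rebase --abort
```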

When a rebase fails, I typically look to either perform a merge OR call 'git cherry-pick commitHash'. Cherry-picking allows me to pull one single commit out of another branch and into my local branch. This can be very handy for grabbing a change here and there and pulling it into a release. After all, who doesn't like cherries?

photo cred: T. Q.

And typically when I'm building releases I'm pulling changes from various branches other developers have worked on. Sometimes I'll jump into their branch and perform a rebase to clean things up and make the merge into master run smoothly. But to make sure I've got the latest on whose branches are out there with what commits, I'll run a simple 'git fetch', which downloads all of the latest branches and their commits.

So, we covered quite a bit, so let's recap what we've discussed. We can run 'git checkout -b feature-id-name' to start a new branch and then use 'git add -p' to stage and commit just the changes we want. If we make a mistake we can 'git reset --soft HEAD~1' to undo our last commit, and if we really messed things up we can 'git push -f' to force our remote branch to accept our new changes.

To make sure everything is just how we want it, we can use 'git log' to review our commit history. If we need to update an old, stale branch we can call 'git rebase', and if things get hairy we can back out with 'git rebase --abort'. If our rebase is a no-go we might look to 'git cherry-pick commitHash' and grab exactly what we want. Finally, we can use 'git fetch' to make sure we've got the latest and greatest on everything in the repo.

These are my most commonly used git commands, but I'd love to learn more. If you've got some commands you use often that aren't on this list, let me know so I can include them. If you're using any of these commands in a different way, I'm interested to hear how!