
U, I and Everything In-between…


Until recently, some of our customers have been, shall we say, forthcoming in their feedback on our UI and their frustrations with it. Some were eloquent in their summaries. Here’s a good example:

OUCH. But fair.

We appreciate all criticism, and we take it seriously (especially because we don’t want to be responsible for thousands of designer deaths per day). We knew that the time was right for UI improvements, but the opportunity really presented itself recently, when Axosoft’s marketing team finished a rebrand with a new look and visual aesthetic. This was the perfect starting point for our designer to create a visual exploration of how a reimagined Axosoft UI might look.

And so the journey begins

With a laptop and a dream, the Axosoft team embarked on what would end up being a 4-month journey through a forest of UI and UX micro-improvements. There were arguments. There were certainly tears. There were moments when discussions about hex values produced faces in pure #ff0000. But we came through it, and the process helped us set the stage for the next version of Axosoft. Here is a rundown of what went into just a few of the many changes we made, and why we made them.

Colors… more colors!

The most noticeable change in Axosoft is the color palette applied across the system. The design and marketing team had developed the brand and provided general direction, and our internal style guide helped us apply it to Axosoft. One way we used color to enhance the experience was to draw clearer visual distinctions between the larger segments of the application, so people can see groups of content more easily and make sense of the system with less thinking.

Principles like this are nothing new; in fact, they correlate to a well-known theory in cognitive psychology: Gestalt. The mind wants to group together things that make sense, and in this case, the Organize panel’s behavior and functionality made it a candidate for visual separation from the list of items in the main view of Axosoft.

Comparison of the color and tone of the UI before and after V17.
On the left (before), all 3 regions are of similar color and tone, suggesting equal relation. On the right (V17), the Header and Organize Panel are recolored to provide a visual connection between those panels. This reinforces the fact that these two panels contain UI elements that control the display of the Main Panel.

Additionally, many people often found themselves confused about why certain work items were not in view. It turned out that for most of these cases, certain selections and filters were applied without the user being able to clearly see that this was the case.

Utilizing color to help identify some of these states has made it easier to see that there is a change in the system and that your view has been altered. One area where this revision can be seen is in searching your work item list. Along with in-context search messaging, we added in some of the key alert-focused colors from the design system to make it easier to identify when a search is active in the UI.

Signal to noise, I can’t hear you. Is this thing on?

What are we looking at? This may seem like an obvious question, but it can be hard to answer if you work with any large-scale application. In an ideal world, the tasks and behaviors used most routinely are the ones most visible. We felt it was time to move some of the less-used UI elements of Axosoft out of the main area so they would compete less with the more important elements. These lower-order elements are still easily accessible but aren’t clogging up the main area. The visible toolbars are the biggest areas where we applied these types of changes: reducing the number of visible options makes it easier to choose among those that remain. We also increased the size of the components within the main toolbar, while reducing the size of surrounding elements to reflect the priority they should have when you’re using Axosoft.

By reducing the number of options in the main toolbar and removing the toolbars from individual panels in the UI, it is far easier to focus on relevant content instead of deciphering where things are and what they do.

I suggested the tagline “Axosoft, now with 72% fewer toolbars!” to the marketing team, and the enthusiasm in the room was palpable:

It’s the little things…

Some UI changes are more subtle than others, yet can have a huge impact on the day-to-day usage of an application. We made quite a few of these changes in version 17.

Counts for your item details

While we removed things that were less relevant, we also added some that were more so. Simply by adding a number next to each item in the right panel, it is now easier to see not only whether information exists, but also how much of it is there. Knowing the difference between 2 work logs vs 14 can be important in understanding how much attention a particular work item is receiving compared to another.

Sorting, sizing and ordering your stuff

One of the most powerful aspects of Axosoft is its customized views, which can be set up and defined by each user. There were particular areas in the application that made it difficult to perform some expected adjustments (or did not allow it at all), such as moving content around within the Organize panel, or resizing the lower details when viewing an item in a new window. As these issues were identified, we tried to make it easier for users to manipulate the Axosoft UI for their needs.

The Organize panel sections now have menu options that let you easily reposition them within the group, and you can immediately sort your comments or history from within each detail section, instead of having to manage this elsewhere.

These changes may seem like subtle solutions to trivial issues, but such issues can add up to be considerable annoyances. Our aim is to continue making these improvements to annoy you less!


GitKraken v2.2


Did you enjoy the Oscars the other week? Well, we’ve got a few announcements of our own to share, all of which we’re certain are correct, all focusing on version 2.2 of the “Best Git GUI in a Leading Role,” GitKraken.

So, take a seat out front between Jack Nicholson and George Clooney, and let’s open some envelopes!

GitLab Integration

GitKraken has had GitHub integration and Bitbucket integration for some time. And now, we’re excited to add GitLab to our list of nominees for “Best Remote Service in a Supporting Role,” as we add it to our family of integrated services!

“But couldn’t I already connect to GitLab repos, John?”
– You

You could indeed; GitKraken has always allowed you to connect to most remotes on most services, but we’re talking integration. Consider the GitLab tanuki. Before, it was a tanuki, and that was cool. Now, it’s like a tanuki with a rolodex, a filofax, a cellular phone and a sharp suit with shoulder pads. It’s an on-task tanuki, pumped and ready to go, and it remembers who you are.

Rough impression of what a turbocharged tanuki might look like

So, now you’ll see that darling raccoon in the tabs for integration services, just like you see GitHub and Bitbucket. Here’s what you can do to make using GitKraken with GitLab that much more simple:

  • Add and remove SSH keys: From Preferences > Authentication, you can now quickly generate and manage your keys. It’s easier than announcing the correct winner of an Academy Award for Best Picture!
  • Initialize a repo
  • Clone a repo from a GitLab account by browsing for it and selecting it
  • View GitLab remote avatars in the graph and (spoiler alert) left panel

We’ve been really excited about getting this integration into the app. We will continue to work closely with GitLab to leverage the capabilities of their API.

New Repository Management View

Peruse your repos in GitKraken for a second, and you’ll notice some big changes that may just wow you on the red carpet. Plenty of users have requested that the repo management interface be tidier and more intuitive, so the new Repository Management View has been created as an entirely different way to organize and open repos. In this view, users can now:

  • Browse the file system for a repo to open
  • Open a repo from a list of recently opened repos
  • Create custom project folders that contain groups of repos.

That last one is a big deal for the convenience of GitKraken users. Folders that contain groups of repos can now be added to GitKraken as Project Folders, and these folders can be given their own names in the app. Your folder outside of GitKraken will keep its name, of course.

Needless to say, you can still clone and init repos as usual, just in a spiffier UI that makes working with connected services clearer.

Avatars in the Left Panel

In a controversial leak earlier in this article, we shockingly exposed the inclusion of left panel avatars in v2.2. Version 2.1 introduced avatars in the graph, and adding them to the left panel means that the owners of remotes are now clearly visible and instantly identifiable at a glance.

HTTP and Proxy Credential Storage

Such is the drama, intrigue, excitement and sheer sexiness of this topic that it can be hard to relate the details over all the commotion. I’ll give it a go.

When entering a username/password for a host, GitKraken will now ask if you’d like to remember those credentials. Changed your mind? We all get cold feet once in a while, so stored credentials can be purged in Preferences > Authentication.

That just about covers the major new features in this release. OMG OMG there are just too many people to thank! Please be sure to check out the release notes for a full run-down of what’s new, including features, improvements and bug fi–<cut out by orchestra>

5 Electron Apps You Need to Try


As you may already be aware, GitKraken owes the consistency of its cross-platform experience to the fact that it is built on Electron.

Electron is a powerful framework that allows developers to create OS-native applications through web-based technologies; essentially packaging web apps into native desktop apps that look and behave consistently across operating systems. The framework has gained significant traction in a relatively short period of time, with the official list of apps built on Electron continuing to grow.

There are some Electron apps that I would select as obvious favorites, including:

  • GitKraken (of course!)
  • Atom
  • Slack’s desktop app

I discussed the apps above in an earlier post, 10 Apps You Can’t Work Without, so I won’t repeat them here. Instead, I want to offer up a handful of lesser-known apps built on Electron that you might not have heard of and might want to try out. All these apps are listed on Electron’s extensive page of Electron apps.

5 Electron apps you should try

1. GIF Maker

GIF Maker is a utility to create GIFs from a variety of video services. For example, you can insert a YouTube URL in the URL field, hit the ‘create’ button and GIF Maker will generate a working GIF file for you. From there, you can make some edits to trim the file, apply balance adjustments, and resize the video.

The website touts that you can source from over 200 websites (although I’ve not had luck when testing it with Netflix), and you can also select a local video file for conversion.

GIF Maker comes in free and pro versions. Pro has more options, such as video filters and more advanced editing capabilities. But the most obvious difference is that the pro version doesn’t watermark your outputted GIF.

2. ndm

ndm offers a GUI for managing your node packages. It has a logical tree view for differentiating globally installed and per-project modules. At a glance, you can see:

  • The modules you have installed for the current project/globally.
  • The current version of an installed module.
  • Whether or not there is an update available for each module.

You can also install new packages, and even update npm itself through the app.

Though not as fully featured as an app like CodeKit (which, in addition to offering a GUI for package management, can run compiling tasks through the client instead of via the command line), the experience in terms of package management is similar. It’s potentially a real time-saver for housekeeping node modules, seeing clear lists of which versions of packages you have, and seeing where package updates exist.

3. Kap (MacOS only)

I’ve been using screen capture software for years. In my freelance days, I would make short screencasts as educational resources; mini-tutorials to show how to make certain changes. However, there was never that sweet spot of functionality that covered all my use cases. Initiating a capture would often involve setting numerous options, such as compression settings and framerates; frankly, the kinds of options I’d rather see after recording, in a compression app. GIF creation for shorter captures wasn’t even an option, and although we now have functional apps like LICEcap, Kap covers all the bases, offering a simple MP4/WebM format, or a GIF capture option from the same app. Define your capture area, click record and you’re on your way.

The real power in this app is its simplicity, which I would argue helps get work done better than more feature-rich or option-heavy screen recording apps. It’s solid, stable, and, like the best restaurants, doesn’t give you too much on the menu as an obstruction to getting started.

4. Hyper

Hyper is a terminal app that is based on HTML, CSS and JavaScript. It is inspectable, meaning that from within the app you can look at and manipulate the UI. Like this, for example:

You can modify the app’s config file to apply changes to the UI according to your needs and tastes. In this regard, Hyper is similar to Atom in its ease of hackability, and like Atom, it has a plugin system (Hyper’s plugins are managed via npm).
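
As a rough illustration, here’s what a small tweak to that config file might look like. The keys shown are common Hyper options and the plugin name is just an example; check Hyper’s documentation for the definitive list:

// ~/.hyper.js
module.exports = {
  config: {
    fontSize: 14,
    fontFamily: 'Menlo, monospace',
    cursorColor: '#ff00ff',
  },
  // Plugins are npm package names; Hyper installs and loads them for you.
  plugins: ['hyperpower'],
};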

Hyper is also open source.

5. Google Play Music Desktop Player

If, like me, you’re using Google Play as your music service of choice, you’ll know that your only real official desktop solution is opening the web app in your browser. Google Play Music Desktop Player (how did they think of that name?) is a third-party app offering a material-like interface for your Google Play music playback, as a discrete application.

It doesn’t offer offline playback, but it does let you keep your listening separate from your browsing, and it allows for deeper customization than would otherwise be easy to achieve from the browser: notifications, hotkeys, minimized playback, and even the look of the app.

Additionally, Google Play Music Desktop Player is open source!

Axosoft Tips III


Are you looking for ways to use Axosoft more efficiently? Well, you’ve come to the right place! Here are our 10 latest Axosoft tips.

Axosoft Tips

  1. You can quickly add a work log by right-clicking on any item OR by using the keyboard shortcut W.
  2. In Axosoft v17, quickly access sub-menus with our NEW Command Palette. Use the keyboard shortcut shift + p and start searching!
  3. Quickly refresh your workspace by clicking on the refresh button in the upper left corner of the main menu bar.
  4. Use the Show Charts button to quickly review the burndown chart and projected ship date for the selected release.
  5. In the Release Planner, click the remove all users with no work assigned to them icon to remove those users from view.
  6. Collapse your workflow swimlanes in the kanban Card View by clicking the arrow icon in the corner.
  7. Click the stamp icon in the Details panel to auto-stamp your description.
  8. Click the List View dropdown and select Sort by rank to see your items organized by stack rank.
  9. Want to change the order of items in your view? Click a column header to sort numerically or alphabetically.
  10. Once you’ve enabled Global Dashboard Settings for your dashboard widgets, you can update widgets all at once in Dashboard Settings.

Axosoft Dev Talk: React and Redux

April Fools Ideas for Techies


I am not much of an April Fool’s Day fan. Not everyone can achieve the giddy heights of, say, the BBC’s oldie-but-goodie prank on its viewers 60 years ago, and attempts to do so can be sadly ineffective. Nonetheless, April 1st is almost upon us, and as we welcome Spring back into our lives, we must also endure the one day that our workplace prankster (we’ll call him Tad) comes into his own.

It doesn’t have to be this way. You can strike at Tad first, as long as you have a few ideas up your sleeve. Here are some classic and alternative ways to make sure that Tad—and his silly mismatched socks and zany ties—doesn’t mess with you in the future.

WARNING: The following pranks can result in being:

  1. Tame to the point of failure: Prank is so tame it may genuinely go unnoticed, forever, with no consequences.
  2. Pretty underwhelming: Tad can easily shake it off.
  3. Mildly-to-actually annoying: This is the level you’re shooting for. These pranks hopefully disrupt Tad’s flow for long enough to give him a taste of his own medicine.
  4. Instant dismissal: Tad’s a real bore, but that doesn’t make it a good idea to pants him in the cafeteria or toss his laptop over the side of the balcony.
  5. Arrest: This includes but is not limited to: murder (murder is not a prank), violence, aggressive nudity, arson.

Ok, let’s get this out of the way so we can carry on enjoying our lives!

NOTE: A couple of these pranks are aimed at MacOS users, although they’re likely achievable on other platforms. They also require access to Tad’s work machine. To gain the element of surprise, consider a diversion tactic of your choice, or implement one or more of these pranks early (or even better, in August).

1. The low hanging fruit: Chrome Extensions

A good prank needn’t take a week to plan, and for the half-hearted pranksters out there, most of the heavy lifting has been done for you through the gift of Chrome extensions. It turns out there are a lot of good ones out there, but here are a few to get your creative juices flowing:

  • April First Prank Toolkit: You’ll be shocked to hear that April First Prank Toolkit is a toolkit specifically aimed at pranking people on April first. It has a bunch of options for easy-yet-effective prankage; some pranks are more subtle than others (and therefore, IMO, more effective as slow-burner pranks). These include hiding the cursor and randomly reloading tabs–actions that make it less immediately obvious that your victim has fallen foul of a dastardly prank!
  • Prank ‘Em: Another toolkit; this one allows for subtle irritations such as a flash on the screen. Every setting has a frequency slider too, so you can make things happen intermittently for better covert prankular tactics.
  • Cenafy: This does one thing and one thing only (1/100 of the time): Gives you John Cena. It’s infrequent enough to be another winning slow-burner.

2. Keyboard typing replacements

MacOS only

This one is short and sweet. Go to System Preferences > Keyboard and click the Text tab. Click the + symbol. From there you can replace ‘the’ with ‘teh’, for example.

NOTE: This replacement doesn’t work on every application, so you may have to be patient to see the fruits of your labors.

3. Application Overload

MacOS only

Let’s get a bit more serious. In this prank, we’re going to create an AppleScript application that launches applications in a way that Tad isn’t expecting. When he launches Word, he expects Word to launch just once! Classic Tad. Let’s say you want to launch 20 instances of Word instead. Here goes:

  1. Open Script Editor (found in Applications > Utilities)
  2. Select New Document
  3. Add the following script:
    on run
        set wordPath to "/Applications/Microsoft Office 2011/Microsoft Word.app"
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
    end run
  4. What does that do? Well, it assigns the path of Word to a variable. We can then run a shell script to open that path and activate the app. Now, lines 3 and 4 are crucial, because we’re going to call them again—19 more times (or however many you want):
    on run
        set wordPath to "/Applications/Microsoft Office 2011/Microsoft Word.app"
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
        do shell script "open -n " & quoted form of wordPath
        tell application "Microsoft Word" to activate
    end run
  5. Go to File > Save
  6. Choose File Format: Application and call it “Microsoft Word”. Save it in a discreet location
  7. Hit Save
  8. You now have an app that will run Word 20 times when opened. But it still looks like a script!
  9. Find the REAL MS Word, right-click and select Get Info
  10. Click the application icon in the top-right and hit command + c
  11. Find your script, get info and click the script’s icon
  12. Hit command + v
  13. You now have an app that looks just like Word. Replace the actual Word icon in the dock and you’re set! This can be adapted to suit your needs. Maybe only have it launch Word twice—that’s enough to be a mild yet persistent annoyance. Or, have it open a nice selection of other applications.

4. Grunt Task Tomfoolery

This one’s a tad (ha!) obscure, but might be fun. WARNING: It could also be a fireable offense if your prank somehow makes it to production. You have been warned!

Let’s say Tad is working on a bunch of projects (typical Tad), and he uses Grunt to automate his tasks and build his websites. Open up one of his projects in the terminal and type:

npm install grunt-string-replace

Now open up Tad’s Gruntfile and add the task:

grunt.loadNpmTasks('grunt-string-replace');

In his build tasks, add 'string-replace' to the array of tasks.
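
For illustration, the registered build task might end up looking something like this (the other task names here are placeholders for whatever Tad already runs; the part that matters is that 'string-replace' comes last):

// Hypothetical Gruntfile excerpt — 'string-replace' runs last so nothing overwrites its handiwork.
grunt.registerTask('build', ['uglify', 'cssmin', 'string-replace']);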

Next, define that task! In the following example we’re focusing on the build folder, and it’s last in our list of tasks (this ensures that our hard work isn’t overwritten by subsequent tasks). We’re searching for all html files and replacing ‘the’ with ‘teh’ using a regular expression (defining a string as a pattern will only change the first instance).

'string-replace': {
    dist: {
        files: [{
            expand: true,
            cwd: 'build/',
            src: '**/*.html',
            dest: 'build/'
        }], 
        options: {
            replacements: [{
                pattern: /the /g,
                replacement: 'teh '
            }]
        }
    }
}

That’s it. Sit back and wait for Tad to run a build task. Of course, you could get more inventive with the replacements and perhaps only target one or two files. However, most importantly, remember that I never, ever, recommended you do this to anyone. Ever. Don’t do it.

An Introduction to Monoids


When you think of programming, you might not immediately think of mathematics. In the day-to-day practice of writing software it’s often hard to see much theory behind REST requests and database schema migrations.

There is, however, a rich world of theory that applies to the work we do, from basic data structures to architectural patterns. This is the field of category theory, sometimes described as “the mathematics of mathematics”. Category theory is concerned with structure and the ways of converting between structures.

NOTE: This article is based on an Axosoft Dev Talk I recently gave, also titled Practical Category Theory: Monoids. Watch that video or keep reading!

Hopefully this sounds like the kind of thing you’re familiar with as a programmer, and if it doesn’t, hopefully it just sounds like a good ol’ time. After all, category theory doesn’t care about specifics; it’s abstract, which helps it be applicable to a great many fields.

Software does, however, need to adapt the concepts of category theory to the realities of programs that execute on real computers. You may have heard of some of these concepts, especially if you’ve looked into statically typed functional programming languages such as Haskell, OCaml, F#, or Scala.

Semigroups

In talking about Monoids, we actually need to talk about two structures: the Semigroup and the Monoid. A Monoid is a Semigroup with one extra requirement, so let’s start with the Semigroup. The Semigroup is a simple structure that has to do with combining. In the following example, I’ve combined two strings with an operator such as +:

helloWorld = "Hello" + "world"

And in this example, I’ve concatenated two lists:

parts = concat(["Parsley", "Sage", "Rosemary"], ["Thyme"])

Both these examples have some similar properties. They take two values of a certain shape and combine them into a single value of the same shape. We don’t take two strings and combine them to get a number or an array; it’s quite reasonable to expect the result to remain a string.

Let’s start with just this single property, that there is some “append” operation that takes in two values of some type and gives back a combined result of that same type. In various languages this might look like:

C#

T Append(T first, T second)

Javascript with Flow

append(first : T, second : T) : T

Haskell, Elm, PureScript

append :: t -> t -> t
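
If it helps to see this outside of type signatures, here’s a minimal JavaScript sketch of the same idea, with one append operation for Strings and one for Lists:

// Two example Semigroups: append combines two values of a type into a single value of that same type.
const stringSemigroup = {
  append: (first, second) => first + second,        // String concatenation
};

const listSemigroup = {
  append: (first, second) => first.concat(second),  // List (Array) concatenation
};

stringSemigroup.append('Hello', ' world');            // "Hello world"
listSemigroup.append(['Parsley', 'Sage'], ['Thyme']); // ["Parsley", "Sage", "Thyme"]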

This may not seem like a particularly novel thing to do; after all, you probably do this sort of operation all the time on various kinds of data. Remember, though, that we’re talking about a concept from category theory, so while the above type signatures are the end of the story in terms of what the compiler can enforce, we are obligated to obey a few more rules.

These rules are “things that must remain true,” and in math we call these laws. You’re already familiar with tons of these laws even if you haven’t heard them referred to as such. One example is with addition. When adding two numbers you can freely swap the order of the arguments, so 1 + 2 is the same as 2 + 1. This is the commutative property, and the law here is that swapping the order doesn’t affect the resulting value.

For Semigroups, we must obey the associative property, which has the law that the way you group your combinations does not matter. That is

("hello" + " ") + "world"

is the same as

"hello" + (" " + "world")

As you can see this doesn’t mean we can swap the order, just that we can do our combine operation on any two adjacent elements, and when we’re all done we’ll end up with the same result.

The append operation is whatever you want it to be (as long as it obeys the rules)

Consider CSS classes. They’re kind of like a string, except when you combine them you don’t just shove them together; it’s more like we’re doing a join operation using a comma and space as the separator:

"app" + "button" + "selected"

should turn out to be

"app, button, selected"

CSS classes aren’t the only type where this is true; if you think in terms of Semigroups, there are lots of cases where fundamentally we’re combining two things of the same type, and that type has a unique rule around how to do the combination.

That’s actually all there is to Semigroup, an append operation that is associative. If you’re feeling underwhelmed, don’t feel bad! I was hardly excited when I first encountered the idea. Let’s dig a bit deeper to see how this unassuming structure punches quite a bit above its weight class.

For the purposes of grounding our discussion, let’s pick a few types that satisfy the rules of Semigroup. List is a straightforward data structure that every mainstream language supports (even if it’s named something else), so we’ll talk in terms of Lists moving forward.

Depending on the language you’re using, you might be able to express Semigroup as an interface or protocol, and you might be able to implement that interface/protocol on your language’s List type. If not, don’t fret! These ideas are still useful and applicable even if your language doesn’t give you a way to represent it in your type system (or if you’re using a dynamic language and don’t have a static type system). I’ll use square brackets to represent a list and the ++ operator for my List’s “append” operation.

bandMembers = ["Stevie", "Lindsay"] ++ ["Christine", "John", "Mick"]

The second type we’ll consider is the String. For our discussion, we’ll use the same ++ operator to concatenate Strings.

greeting = "Hello" ++ " " ++ "World"

Interesting attribute #1: Semigroups are partition friendly

Consider the following SQL related code:

databaseQuery = selection ++ table ++ constraints

At the end of the day, we want to have a nice query we can use to get a result from the database. If our top level value is a String that makes up our query, then the associativity of Semigroups means that we have a lot of flexibility in building this up.

One example of this might be in building up the selection. We want to end up with something like:

select * from

or if we want specific columns:

select a, b from

It seems reasonable to build up this String from some fixed parts, the “select” and “from” mixed with some customizable parts, the “*” or “a, b”.

selection = "select " ++ "a, b" ++ " from "

The resulting String will be combined with the other elements of the query, but it doesn’t matter when we decide to combine the elements; we could even defer converting the column names to a String by leaving them as a List.

selectionParts = ["select ", ["a", "b"], " from"]

When we’re ready for a selection String, we could combine the column names first, then combine the resulting String with the other two parts. This same flexibility in when to go from a more structured form to a final String is present in other parts of the query as well.

table = "myTableName"
constraintParts = ["where ", constraint1, " AND ", constraint2]
constraint1 = [columnName, " IS ", value]
constraint2 = [columnName2, " NOT ", value2]

As developers, we can decide when it’s best to go from the representation of values in Lists to a final String form—either for a part or for the whole of our query. We can defer this decision through the layers of functions used to build up the query, and we know we can go from parts to whole for any section of the query—independent of any other part, and nothing will change.

We originally had:

databaseQuery = selection ++ table ++ constraints

Let’s see what the query might look like if we had all of the bits inlined:

databaseQuery = "select " ++ "a, b" ++ " from " ++ "myTableName" ++ " where " ++ columnName ++ " IS " ++ value ++ " AND " ++ columnName2 ++ " NOT " ++ value2

If we were to recreate the order of operations in the original version, it would require us to group our values into 3 parts:

databaseQuery = ("select " ++ "a, b" ++ " from ") ++ ("myTableName") ++ (" where " ++ columnName ++ " IS " ++ value ++ " AND " ++ columnName2 ++ " NOT " ++ value2)

But because of associativity, these are exactly the same. We can group the individual combinations however we want, which is effectively what we would be doing along the way by collapsing the various query parts into selection, table, and constraints.

This might seem obvious (after all, we’re dealing with Strings), but the ability to arbitrarily partition your data and combine it is useful for more exotic types as well. Imagine you have a type Log that represents a single logging event and is a Semigroup. You might have Logs spread across files that are rotated, grouped by date or time range, or bucketed by some other criteria. Being a Semigroup means you can combine adjacent Logs into an aggregate Log in any grouping you want. You could split up the work of combining Logs across multiple threads and know that it is still 100% safe to combine the individual threads’ results into a final result.
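
Here’s a small JavaScript sketch of that partition-friendliness, modeling Logs as plain arrays of entries purely for illustration; any associative append behaves the same way:

// Because append is associative, combining each chunk first and then combining
// the chunk results gives the same answer as a single left-to-right pass.
const append = (a, b) => a.concat(b);
const combineAll = (values) => values.reduce(append);

const logs = [['a'], ['b'], ['c'], ['d'], ['e'], ['f']];
const chunks = [logs.slice(0, 3), logs.slice(3)];      // an arbitrary partition
const viaChunks = combineAll(chunks.map(combineAll));  // combine each chunk, then the results
const direct = combineAll(logs);                       // combine everything in one pass

JSON.stringify(viaChunks) === JSON.stringify(direct);  // true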

Interesting attribute #2: Semigroups are “incremental combination” friendly

Carrying on with the idea of Logs, consider a remote logging service that accepts Logs from your various servers and aggregates them down into something you can do reporting on. If we still have a Log type that is a Semigroup, we have a lot of freedom in how we proceed. For example, when a Log is generated by one of your servers, it could send it immediately, or combine some number of Logs before sending them:

Option 1: the worker sends each Log immediately

Worker Server: Log -> Remote Logging Server
Remote Logging Server: Log ++ Log

Option 2: the worker batches Logs locally before sending

Worker Server: Log -> Local Batch
Local Batch: Log ++ Log
…repeat until n Logs or time limit reached…
Worker Server: batched Log -> Remote Logging Server

On the receiving end, you could accept the Logs and combine them with the main log as they arrive, or you could batch them:

Option 1: combine each incoming Log with the main Log as it arrives

Worker Servers: Log -> Remote Logging Server
Remote Logging Server: Log ++ Log

Option 2: batch incoming Logs before combining them with the main Log

Worker Servers: Log -> Local Batch
Local Batch: Log ++ Log
…repeat until n Logs or time limit reached…
Local Batch: batched Log -> Remote Logging Server
Remote Logging Server: Log ++ Log
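
As a JavaScript sketch of the batching idea (the append and send functions are placeholders, and a real implementation would also flush on a time limit):

// Keep appending incoming Logs into a local batch, and flush once we have n of them.
function makeBatcher(append, send, maxLogs) {
  let batch = null; // with only a Semigroup there is no "empty" Log, hence this special case
  let count = 0;
  return function receive(log) {
    batch = batch === null ? log : append(batch, log); // incremental combination
    count += 1;
    if (count >= maxLogs) {
      send(batch);
      batch = null;
      count = 0;
    }
  };
}

That null special case is exactly the kind of wrinkle that the Monoid, described next, smooths out.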

Monoids

Ok, that’s a lot of things we can do with Semigroups, so where do Monoids fit in? Thankfully, Monoids are, by comparison, very simple. They are everything that a Semigroup is, plus an “empty” value of the type.

For Strings, this would be an empty string; for Lists it would be an empty List; for Logs… well, that’s slightly less clear. If you can define a concept of an empty value for your type, though, then congratulations, your type is a Monoid. The type signatures for this sort of thing look like this:

C#

T Empty

Javascript with Flow

empty : T

Haskell, PureScript, Elm

empty :: t
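
Continuing the JavaScript sketch from earlier, a Monoid is just the Semigroup’s append plus that empty value:

// String and List Monoids: an associative append plus an identity ("empty") value.
const stringMonoid = {
  empty: '',
  append: (a, b) => a + b,
};

const listMonoid = {
  empty: [],
  append: (a, b) => a.concat(b),
};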

You guessed it! There are laws that go along with Monoids, and this time there are two: the left and right identity. These state that combining the empty element with any value shouldn’t change the meaning of the value. That is, the empty element is an identity value for that type. Looking at the String we see that

"" ++ "some string"

is the same as

"some string"

This works on either the left or right side (hence the left/right identity):

"some string" ++ ""

is the same as

"some string"

This comes in handy when you want to append values but you don’t want to distinguish between having zero and having one already. Consider the case of the remote logging service:

Worker Server: Log -> Remote Logging Server
Remote Logging Server: Log ++ Log

Where did that Log that the Remote Logging Service starts with come from? If Log is only a Semigroup, then we’d need to generate it somehow. However, if Log is a Monoid, we have our answer: the initial Log value on the Remote Logging Server is the “empty” value of Log. Since combining the empty value with a Log doesn’t change the Log, this is safe to assume, and now we can deal exclusively in terms of Log ++ Log operations. Fewer special cases mean happier developers™.

There are lots more fun use cases for Monoids. One example is taking a list of a Monoidal type and running it through a fold/reduce/Aggregate operation, where the fold function is append and the initial value is empty. You could write a specialized fold that only works for Lists of Monoids but can aggregate them with no other information needed.
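
Sketched in JavaScript, reusing the stringMonoid and listMonoid from above, that generic fold might look like this:

// Fold any list of Monoid values: start from empty and append everything.
const fold = (monoid, values) => values.reduce(monoid.append, monoid.empty);

fold(stringMonoid, ['Hello', ' ', 'World']); // "Hello World"
fold(listMonoid, [[1, 2], [3], []]);         // [1, 2, 3]
fold(listMonoid, []);                        // [] — an empty input is no longer a special case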

Go forth and aggregate!

For something as simple as having an append operation, an empty value, and the associative property, I hope you’ll agree there’s a lot of depth to Semigroups and Monoids. This is just the tip of the categorical iceberg, though. Due to its abstract nature, category theory can describe a variety of structures, and better yet, it is supported by lots of formal reasoning (aka proofs).

This formal reasoning can give us a lot of confidence that what we’re doing really will work, as long as we’re implementing our types in accordance with the laws of the structure. If you’re looking for next steps, Functor is a mild step up in complexity, but it’s just as rich in terms of applications and reach.

Performance Problems and Solutions in React.js


This is part 1 of a 4-part series!

This post is the first in a 4-part series looking at the performance issues that GitKraken developers faced. This post outlines the problems themselves, and subsequent posts in the series will focus on how the solutions to each problem were developed.

Part 1: The Problems

It may sound obvious, but in order to improve an app, you have to identify the pain points and precisely what is causing them.

A major issue we noticed in GitKraken was that the more a repository grew, the more everything slowed down. We used some tools to pinpoint the root of this performance degradation, such as the Chrome dev tools in Electron, which include a profiler that’s handy for giving you an idea of where you’re spending most of your time.

A profile of an action in GitKraken

After some investigation, it was clear that the app was spending an unwelcome amount of time tied up in React processes, frequently rendering and updating. Thankfully, React has its own tools for discovering performance issues, so we were able to move over to that to get more granular with the issues. It turned out that most of the expended time was in the graph, and that most of it was what we in the software development industry like to call wasted time. Wasted time is probably exactly what you expect it is–it’s time spent in processes that weren’t required in the first place.

In the context of React, the process was to go through a whole render cycle: pulling new data, updating components, rendering, and constructing a virtual DOM. At the end, the new virtual DOM is compared against the previous one, and you may conclude that the two are the same. That’s wasted time because no actual DOM updates needed to happen, and you just performed a bunch of work for nothing. Nothing!

This scenario was starting to creep up into seconds of wasted time. A couple wasted seconds might not seem like very much, but in Computerland, seconds of wasted time is comparable to watching season 5 of Lost: it might seem like there’s a point to it, and you’ve come this far so you kind of need to see it through to completion, but in reality it’s taking an excruciating amount of time, becoming increasingly irritating and turns out to be a genuinely bad user experience.

Yikes.

Anyways… The point is, at this time, every action in GitKraken would cause graph renders. Every action, even one that changed no refs (for example, a new PR coming through, or one of the timelines on the graph updating), still triggered a whole graph refresh. The frequency of these repository refreshes, alongside the graph rendering process itself being slow, made the whole app feel slow.

Attempted Solution #1: Unmounting the graph

We tried to remedy this by unmounting the graph while something was loading. So, during that process, the whole graph component would be removed from React. That sped up repository loading, but the number of special cases this approach required would have made the app code far more complicated and less sustainable long-term.

Attempted Solution #2: Flux implementation and Immutable.js

In our Flux implementation at the time, we had a store for each domain, and as a domain was updated, that update would cause a refresh of the graph. But, if you had a big refresh coming through, with multiple domain updates, you’d get a cascading effect of a graph refresh being calculated with every one of those domain updates. To put that into an actual use-case context, refreshing a repository would essentially result in around 8 graph rerenders, producing significant performance consequences in the app.

How so? A quick background about how Flux operates: There is a dispatch of data, and that dispatch goes from store to store, updating things as it goes. Each store—if its data changes—emits an event saying that some data has changed. React then responds to this event, grabbing the new data from the store and performing a render process.

This is all well and good, but the kicker here is that no subsequent store would update until that render process was fully resolved. So, for a single dispatch of data that updated multiple stores, this chaining effect would get costly. This was a fundamental bottleneck in our implementation of Flux.
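
To make that flow concrete, here’s a stripped-down sketch of the pattern described above, using Facebook’s flux Dispatcher and Node’s EventEmitter (the store name and payload shape are illustrative, not GitKraken’s actual code):

const { Dispatcher } = require('flux');
const { EventEmitter } = require('events');

const dispatcher = new Dispatcher();

// One store per domain; each store registers a callback with the dispatcher.
const commitStore = Object.assign(new EventEmitter(), { commits: [] });

dispatcher.register((payload) => {
  if (payload.type === 'COMMITS_LOADED') {
    commitStore.commits = payload.commits;
    commitStore.emit('change'); // each updated store emits its own change event
  }
});

// A React component listens, grabs the new data from the store, and renders.
commitStore.on('change', () => {
  // e.g. this.setState({ commits: commitStore.commits });
});

// A single dispatch walks from store to store; in the setup described above,
// each change/render round trip resolved before the next store's update.
dispatcher.dispatch({ type: 'COMMITS_LOADED', commits: [{ sha: 'abc123' }] });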

This performance hit was compounded by the rendering process itself. When you grabbed new data from the store, the store would give you a deep copy of the data rather than its actual original data, to protect that original data from any mutations that may be caused by naïvely-written React components. We’ve since repented for our sins and now follow the one true path.

This deep copying proved to be expensive. When a component would get that data copy, it would perform a deep comparison between that data (copied from the store) and the data it already had, to ascertain if an update was necessary.

Though somewhat of a time-saver in the respect that it worked out whether or not an update needed to be performed, this check was in and of itself very expensive. However, this deep comparison was actually faster on average than just doing the update. All the rows (a commit in the graph would be considered a row), each made of multiple components and subcomponents, were causing multiple verifications that their data was the same. Faster, but still an expensive chain of events.

So, we decided to bring in a library called Immutable.js, which provides immutable arrays and objects, allowing us to quickly check whether part of an object had changed with a simple reference comparison. Although this helped, it was extremely unwieldy to shoehorn into our existing infrastructure without breaking ‘a lot of stuff’, and (you guessed it!) it was really slow to update objects, even when batching updates using the built-in methods. This made our updates to the data actually take longer than the renders, so we had to ditch Immutable as a solution. Womp womp.
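
For context, this is roughly what Immutable.js buys you (an illustrative sketch, not GitKraken’s actual code): updates produce new references, so change detection becomes a cheap reference check instead of a deep comparison.

const Immutable = require('immutable');

const before = Immutable.Map({ abc123: { summary: 'Initial commit' } });
const after = before.set('def456', { summary: 'Add README' });

after !== before;                             // true — something changed, so a re-render is needed
after.get('abc123') === before.get('abc123'); // true — untouched entries are shared, not copied

// In a component, that turns shouldComponentUpdate into a reference check:
// shouldComponentUpdate(nextProps) {
//   return nextProps.commits !== this.props.commits;
// }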

Attempted Solution #3 (Bonus Fail): PureScript

We tried migrating lots of stuff over to PureScript. However, once we got started, we soon realized that this wasn’t going to be the right fit for our team.

So, by this point, we had established 3 solid areas that were pain points, causing performance issues that we needed to remedy:

  1. How we modified the state of the application with new data that came through.
  2. Retrieving data out of stores.
  3. Determining how to update components in the UI in as fast and efficient a way as possible.

These were the main 3 points that each required significant rethinking in how we were building the app.

The next 3 parts of this blog post series will focus on each of these issues respectively, the solutions we implemented, and how we implemented them. Subscribe to our blog to get the next 3 parts delivered to your inbox!


Introducing GitKraken Walk-In Stores


We’re incredibly excited to reveal that, beginning Q1 2018, we will be offering a brand new way to get your hands on GitKraken: through a physical version of the app purchasable at actual walk-in stores!

For almost 2 years now, we’ve been offering a Git GUI, GitKraken, that we’ve constantly been updating and improving because we’re determined to provide you with the most luxurious Git client imaginable. Part of that luxury has always been about the UX. What are the fundamental UI traits that transpose to actual, physical actions that in turn make for a more efficient, simplified and robust experience for the end user?

In addition to the development of the app itself, we felt there was something missing from the way in which we provided GitKraken to our users.

Until now, our flow has been:

  1. Go to the GitKraken website
  2. Download GitKraken
  3. Install GitKraken
  4. Use GitKraken

Sound familiar? Of course, it does. It’s literally what almost every software company has you do. It’s standard and mediocre. It champions supposed ‘efficiency’ over the experience of luxury. But what do users actually want?

We needed some sound data to help us strategize. We sampled 5 members of the public, all over 80 years old, and asked them 4 simple questions:

  1. Is internet security important to you, or is it the most important thing in your life?
  2. Would you rather obtain and install a Git GUI client online or face-to-face?
  3. Tabs or spaces?
  4. If you were to install and configure a Git GUI on your Linux distribution, would you need expert help?

The results were pretty conclusive:

We convened a strategy meeting in which all stakeholders almost unanimously agreed that some sort of physical presence was missing from GitKraken’s marketing mix, and a casual mention of brick-and-mortar stores quickly escalated into animated discussion.

We looked at other companies who offer tangible experiences as part of their marketing and purchasing funnel. Apple was, of course, the first company who sprang to mind, and Hamid Shojaee, VP of Product, tasked himself with revising GitKraken’s roadmap accordingly. It wasn’t long before he had drawn up some compelling arguments, impressively articulated:

We quickly had an agreed plan in process, including architectural experts who would help us design the GitKraken Stores not only to look great but also to stimulate conversations among fellow Git users browsing the store.

Artist mockup of the first GitKraken store, opening Q1 2018. The entrance can be seen between the glass doors. At the back of the store is a large monitor to display visual items, and headphones are available to listen to music on iPods.

The aim of this venture is to provide a literal one-stop shop for you to purchase GitKraken (and install in-store, if you wish). The convenient and highly portable diskette format for the app will mean you can take GitKraken anywhere with you, even without wifi.

Perhaps you’re camping at a California state park. With your boxed copy of the app, you’re only ever one step away from installing GitKraken. On vacation at a hotel? You can install GitKraken on any hotel PC that has a 5.25 microfloppy drive. It really is that simple.

Here’s to the next exciting adventure!

GitKraken v2.3


As you’re probably aware by now, we work hard to address the needs and requests of ye faithful users of your favorite Git client, GitKraken. Version 2.3 implements a widely requested feature that everyone here at Axosoft is excited to see in a release: Git hooks!

We’re aware that this has been a barrier that has prevented some users from being able to adopt GitKraken for use in their teams, where specific functionality during certain actions is an absolute necessity. With Git hooks support, we’re hoping that GitKraken now incorporates the best of both worlds: an intuitive and simple to use interface, with the super-user functionality of Git hooks.

What are Git hooks?

Well, a hook might be defined as a trigger. If you’re familiar with JavaScript, you’ve probably used hooks before, in the form of events. Event listeners can be set up to fire custom actions when certain events (e.g. a click) occur. Those events might be considered “hooks,” since you’re ‘hooking’ into them to do what you need to do.

WordPress users might also be familiar with hooks in the context of action hooks. At certain points in a page or post being rendered, various actions are fired off, into which the programmer can hook custom functions to work with the information at hand at that point in the rendering process.

Git hooks are very similar. They allow a user to create custom scripts that fire off at certain points during Git processes. GitKraken does not require that you install Git on your system, so until now, that independence had meant no Git hooks support. But, with a lot of blood, sweat and tears, v2.3 allows you to hook your way to a bounty of control over your Git actions!
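
As a quick illustration of the mechanism (this is standard Git behavior, not anything GitKraken-specific), a hook is just an executable script in your repo’s .git/hooks directory. A commit-msg hook, for example, receives the path to the commit message file and can abort the commit by exiting non-zero. Here’s a minimal Node version:

#!/usr/bin/env node
// .git/hooks/commit-msg — make this file executable so Git can run it.
const fs = require('fs');

const msgFile = process.argv[2]; // Git passes the path to the commit message file
const msg = fs.readFileSync(msgFile, 'utf8').trim();

if (msg.length < 10) {
  console.error('Commit message is too short; please describe the change.');
  process.exit(1); // a non-zero exit aborts the commit
}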

Watch this short video to learn about Git hooks, and to see how Git hooks work in GitKraken.

What hooks are supported by GitKraken?

Beneath each hook is a list of the actions during which GitKraken calls that hook:

  • pre-commit:
    • Commit
    • Amend
    • Merge Resolve
  • prepare-commit-msg:
    • Commit
    • Amend
    • Cherrypick
    • Merge
    • Squash
    • Revert
  • commit-msg:
    • Commit
    • Amend
    • Merge Resolve
  • post-commit:
    • Commit
    • Amend
    • Cherrypick
    • Merge Resolve
    • Revert
  • pre-rebase:
    • Rebase
    • Squash
  • post-checkout:
    • Checkout
    • Discard Changes (selectively)
  • post-merge:
    • Merge (Without Conflicts)
    • Fast-Forward
  • post-rewrite:
    • Amend
    • Squash
    • Rebase
  • pre-push:
    • Push Branch
    • Push Tag
    • Delete Remote Branch
    • Delete Remote Tag

So that’s Git hooks. We hope you enjoy getting your tentacles all up in our actions!

Regional Date Settings

Another widely-requested feature has been the ability to set region-specific display dates for commits. Y’all might not be from these here parts and might have some region-specific ways of presenting your dates. Viewing another format can be jarring and counter-productive when you’re trying to decipher dates at-a-glance.

Well, guess what? GitKraken will now think to itself, where am I? and will update its date format accordingly, based on your system locale. You’re welcome! De rien! Bitte schön! De nada! Don’t mention it! Pip pip! As you were.

New Onboarding Experience

It’s now easier than ever before to get the rest of your team set up in GitKraken. V2.3 introduces a brand new onboarding screen for first-time users. It’s easier to see where to set preferences and start working with repos. It also introduces users to our Intro to GitKraken video which gives a quick 90-second overview of GitKraken’s functionality, our support site which provides lots of helpful documentation, and the GitKraken Slack Community where our users come together to help each other and help our team improve GitKraken.

But, there is just one more thing…

Try GitKraken Pro for free

If you’ve been wanting to try out GitKraken Pro features—like the merge conflict output editor, multiple profiles for work and personal use, or GitHub Enterprise integration—now’s your chance!

Start a free GitKraken Pro trial by simply clicking the button in-app. You’ll be able to test these awesome features for up to 14 days before deciding if you want to upgrade to a paid account!

Axosoft Dev Talk: Practical Category Theory


It’s time for another video in our Axosoft Dev Talk series! In the first of two talks about Practical Category Theory, David Koontz explains Semigroups and Monoids. Watch this video to learn more, and don’t forget to subscribe to our YouTube channel for more videos about software development.


Could your Git client list be represented as the monoid ""? Maybe you should add GitKraken!

Reduxifying GitKraken


GitKraken is a React app. We’ve been using React since GitKraken version 0.12.2 (in January of 2015) when we migrated from Angular.js. When we started using React, we architected with the flux library from Facebook as our state model and forged ahead into glory. At first, it was good. Much performance. Many code. Wow!

The initial excitement subsided, and the honeymoon was over. We looked back at our strange mess of state and decided to make a move to Redux. It’s fair to say we had problems scaling with Flux.

The GitKraken team is now finishing up the transition from Flux to Redux, and everything is looking really amazing. There are already a lot of benefits that we are seeing from Redux as we write new code for the application.

Redux vs Flux

For those unfamiliar with Redux, but familiar with Flux, you can think of Redux as a stricter implementation of Flux. Instead of multiple stores and a dispatcher to bind all of the stores together, there is one store that holds all of the state in the application.

For those unfamiliar with both libraries, the Flux library is Facebook’s implementation of the Flux pattern. The Flux pattern is similar to Model View Controller (MVC), but has a strict one-way data flow constraint.

Flux pattern

At each step in the flow, data is limited to only one movement direction. A view can start an action at the request of a user, the action can generate new data and pass it to the dispatcher, the dispatcher dispatches the results of actions to the stores, and the stores can then emit an update to the views.

In the Redux library, we’ve reduced our total store count to one, and thus we don’t need a dispatcher at all. The dispatcher was originally there to manage the order in which we update stores with action results and keep the stores behaving. Instead, we just directly inform the Redux store of an action that has occurred. So we’ve kept the basic premise of the Flux pattern, but shrank the pattern’s flow by combining the dispatcher and the store.

Redux state

Since we only have one store now, my first reaction was that the store would be a monolithic hellspawn of a maintenance issue, but we actually keep some semblance of order by using Redux’s reducer pattern for separating our concerns. The main mechanism for Redux is the reducer pattern; we have one top-level reducer, and we can branch substate trees into smaller reducers.

A reducer is a pure function of state and message to state. We take the previous state tree, a message, and apply some transformation of the state to produce a new state tree. The top level reducer has this shape, and any subreducers also have this shape.

The process of reducing utilizes a constraint we place on the Redux state, that it is immutable. When a message is passed into Redux land, the message is passed through a series of these reducers. Those reducers then decide whether or not to perform an update according to the message.

In our case, the reducers decide whether or not to produce a brand new object. To clarify, we update every reference along a path to an updated value, such that we have made no mutations to the previous state tree. When our reducers produce a brand new object, we know that changes happened to that particular substate tree. In fact, we can trace the new object references to the exact set of changes that have taken place between the previous state and the next state.
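
Here’s a minimal sketch of that reducer shape (the slice names and message types are illustrative, not GitKraken’s real state tree):

const { combineReducers, createStore } = require('redux');

// A subreducer: (previous substate, message) => next substate, with no mutation.
function commitsReducer(state = [], action) {
  switch (action.type) {
    case 'COMMIT_ADDED':
      return [...state, action.commit]; // new reference: this branch of the tree changed
    default:
      return state;                     // same reference: nothing changed here
  }
}

function uiReducer(state = { theme: 'dark' }, action) {
  return state;
}

// The top-level reducer branches substate trees into the smaller reducers.
const rootReducer = combineReducers({ commits: commitsReducer, ui: uiReducer });
const store = createStore(rootReducer);

store.dispatch({ type: 'COMMIT_ADDED', commit: { sha: 'abc123' } });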

Benefits of Redux

What makes this transition so nice to work with is that exactly one message changes state at a time in a very consistent and straightforward manner (a → b). It might seem a bit daunting at first to hoist all state of the app into a single state tree, but the benefits are an amazing trade-off.

Things like time travel can be implemented in a trivial fashion (just store the sequence of state updates). Holding all state in an immutable data structure also allows React to, erm, react better! We can utilize referential transparency when the Redux store emits a change. React can perform a check before updating to see if the top-level object reference has changed, and if it hasn’t, short-circuit the entire rendering tree.

Another nice benefit of the pattern we build with Redux comes into play when we organize our views around our Redux state. We build containers that listen to Redux state; when a state update occurs, those containers retrieve the relevant changes and choose whether or not to react. The containers then pass any state they care about to a presentation layer, which is made up largely of stateless view components (components that only receive props).

Ok! Ok. The benefit I’m describing is that the view layer scales horizontally by top level containers, which hold onto a pure render tree. When we want to add more containers, we can do it in a clean manner (a container is responsible for a full view), even though every container talks to the same store. We’ve basically built an architecture that adds one connection per new UI container when scaling. That’s really clean!
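
A rough sketch of that split, assuming react-redux’s connect helper (the BranchList component and the branches substate are hypothetical):

import React from 'react';
import { connect } from 'react-redux';

// Presentation layer: a stateless component that only receives props.
const BranchList = ({ branches }) => (
  <ul>{branches.map(name => <li key={name}>{name}</li>)}</ul>
);

// Container: listens to the single store and passes along only the slice it cares about.
const mapStateToProps = state => ({ branches: state.branches });

export default connect(mapStateToProps)(BranchList);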

That’s not the only place we scale better. The Redux state itself scales per reducer. We can grow the size of the total state by building new reducers with their own substate, but we don’t have to increase the complexity of already written reducers, nor do we have to manage an explicit dispatch order like we did in Flux. There’s only one store, and our reducers run synchronously, producing a new state one message at a time.
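
Redux’s combineReducers helper supports exactly this kind of growth; a minimal sketch with hypothetical repositories and ui substates might look like:

import { combineReducers, createStore } from 'redux';

// Each reducer owns its own substate and runs synchronously, one message at a time.
const repositories = (state = [], message) =>
  message.type === 'REPO_OPENED' ? [...state, message.path] : state;

const ui = (state = { theme: 'dark' }, message) =>
  message.type === 'THEME_CHANGED' ? { ...state, theme: message.theme } : state;

// Growing the app means adding another key here; existing reducers are untouched.
const store = createStore(combineReducers({ repositories, ui }));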

Scaling example

So there you have it. We have an architecture that now provides a cleaner scaling experience in GitKraken.

Introducing Axosoft Version 17.1

Some software releases have big, visual changes that you see the very moment you open the app. Version 17.0 of Axosoft was one of those big ones, with a huge visual overhaul that tidied up the UI, and big improvements to the user experience.

However, version bumps also often represent a large amount of development work applied to complex solutions that are designed to be almost invisible at the front end. These feature sets exist to remove friction, so you notice the app less and spend less time on repetitive tasks.

Axosoft version 17.1 is one such release. In this release, not only have we fixed a bunch of smaller issues that some users on previous versions were experiencing, but we’ve continued the tradition of introducing subtle, elegant solutions to “quality of life” issues that have, until now, made certain repeated tasks less efficient than they could have been.

Version 17.1 has several marked interaction improvements that will soon become so commonplace in your day-to-day use of Axosoft, you’ll forget they’re there at all. So, what’s new?

Fuzzy Finding Duplicates Before You Duplicate Them

Collaborating on projects and releases can create duplication. For example, more than one person might create a task that has been discussed communally. To reduce the likelihood of the same item being created twice, Axosoft now has a fuzzy-finder-style drop-down that shows existing items in your account whose names match or are similar to the item name you are typing.

You can still create the new item as you did before, but if you happen to notice an existing item that you’d like to view or edit, simply select it from the drop-down. Cutting down on duplicated items means less confusion across your team.

Renaming Email Accounts

A lot of users have requested the ability to rename email accounts to something a little more friendly than ihatejira4353234@hotmail.com.

As an admin, simply go to Manage Account > Other Settings > Email Accounts. Open the account whose name you wish to change, and in Account Settings, use the Account Name field to edit the name to whatever you want. Simple!

You Literally Come First

When editing the Assigned To field, there’s a fairly good chance you’ll want to assign yourself to that item, so that’s a pretty reasonable default, right? When creating an item in v17.1, you will now appear at the top of the list of users. Typing another name will narrow down the options as before.

Other Improvements

Reports

When exporting a report, you can now include a new field: Parent Item. This allows you to see more easily where sub-items sit in your projects.

Email

In your email lists, you can now filter for auto-reply emails. Simply click the gear icon in the top-right of the email list, and then select the Email > Is Auto Reply filter.

You can also now view the size of attachments for an email, so you know just how large they are.

We hope you enjoy using these new features in version 17.1. As always, this release brings a bunch of other improvements and fixes to the app, and you can see the full list in the 17.1 version history.

Learning Git with GitKraken: Rebasing in GitKraken vs CLI

In these videos, Brett Goldman compares the experience of performing a very basic rebase in the CLI vs GitKraken, followed by a demonstration of what happens, and what to do, when conflicts occur. Take a look and subscribe to our YouTube channel for more videos about learning Git with GitKraken.

Axosoft v17.1: Burndown Chart Update

In Axosoft v17.1, we made a small adjustment to the way burndowns work, which should provide more accurate velocities for our users. Prior to v17.1, velocity was strictly based on the number of hours that were entered using work logs. This was great for the apple polishers who entered all of their work logs at the end of the day, for all items, without exception, but it led to a lot of confusion for teams that weren’t adding work logs for all of their items (or any of their items).

We heard your protests about having to add work logs for every single item, and we’ve accepted your peace offering of a Pepsi can to free you from the oppression of work logs.

pepsi commercial

Too soon? Sorry.

Burndown Velocity Update

Prior to this release, teams that used story points for estimation often had burndowns that were nonsensical, or that disappeared entirely, because work logs rarely made sense when completed work was estimated in points. The behavior that changed in this release: when you decrease an item’s remaining estimate manually, or set the item to ‘completed’, Axosoft now updates the burndown velocity as you get work done.

For example, let’s say you have a bug fix that is estimated to be 4 hours worth of work, and you move the item to ‘completed’ without adding a work log. Previously, Axosoft would update all of the data points in the burndown and subtract 4 hours worth of work, as if the item was never in the release.

Prior to v17.1:

burndown prior to v17.1
4 hours of work removed from all days. (First day goes from 164 down to 160 hours.)

Now, moving an item with 4 hours of work remaining to ‘completed’ will only subtract the 4 hours from the current day, and the work you completed will be reflected in the velocity.

After v17.1:

burndown after v17.1
4 hours removed only from today. (First day remains at original value.)

What you can expect with this change

Because Axosoft was previously only using logged work for velocity, you may notice that your velocity is now greater than it was for any previous sprint. This should be a more accurate representation of the rate at which your team is getting work done because Axosoft is now taking into account all the work you’ve completed for your items.

For more details about Axosoft burndown charts and velocity calculations, check out our support documentation.


GitKraken v2.4

In GitKraken version 2.4, substantial improvements have been made to lots of actions you perform every day. You know those little quirks in GitKraken that sometimes made you say an expletive out loud? It turns out that one of our own, Dan Suceava, regularly swears at his monitor, oftentimes with GitKraken being the recipient of his wrath.

Who is this Dan Suceava?

Hmm, where should we begin…. You don’t know Dan, but you probably use his deftly-coded API regularly. Dan is the VP of Engineering here at Axosoft and has been with the company for more than 11 years. Even though he’s not an active GitKraken developer, his work touches all aspects of Axosoft as a company. You could say that a piece of Dan goes into every release—but that’s a somewhat disturbing thought!

Anyways, what does he actually do, you might also ask? This question is harder to answer. All we know is, he turns up to work, and then, later, he leaves. Between his comings and goings, Dan enjoys saying “no” a lot, swearing at his computer, and drinking more Jack Daniel’s than any mortal man should. A sort of engineering equivalent of a Boo Radley–Sasquatch hybrid, he sits in a dark corner of one of our dev rooms, only to be rarely spotted in the kitchen. Some say he eats squirrels. Some say he uses Windows ME. But one thing no-one disputes is that Dan is a coding powerhouse. Much of Axosoft’s success can be attributed directly to Dan!

A rare sighting of Suceava outside of his natural habitat

So when it became apparent that one of Dan’s favorite products, GitKraken, is also the recipient of some of his curse words, the GitKraken team wanted to make things right. As a tribute to Dan, the GitKraken team is dedicating a release (or two) to fixing the issues that made Dan go through his stockpile of Jack Daniel’s at twice the rate he normally would. After getting a demo of his issues with GitKraken, the team realized these issues are going to make a lot of people (except for Jack Daniel) very happy.

Suceava updates

  • Before: GitKraken would dismiss 99.7% of issues as “user error,” muttering profanities under its breath.
  • Now: GitKraken is polite as can be, updating submodules correctly when switching branches, and initializing them faster (and recursively, if a submodule has submodules).
  • Before: When refreshing, there was a 90% chance that GitKraken DSRC either shrugged or barked “NO!” The other 10% of the time, the app took 3 days to complete the action.
  • Now: Commit sorting algorithm improvements mean the app is faster when refreshing.
  • Before: The app got drunk and forgot where it was, randomly disappearing only to reappear several hours later.
  • Now: The app remembers whether or not it was in full-screen mode when it was shut down, as well as the location of its window, and will restore these settings when restarted.
  • Before: Checking out a remote branch beyond the graph history made the app highly irritable, giving the message “I should have started a farm,” and then accusing you of user error.
  • Now: Checking out a branch beyond 2,000 commits creates a local ref and checks out the branch, error-free.

This release includes 15 more bug fixes and other improvements. See the release notes for all the details.

P.S. Release notes can be translated from English to Suceava—enjoy!

An Introduction to Functors

Introduction

In a previous article, we talked about Semigroups and Monoids, which are abstractions that let us combine values of the same type together. In this post, we’ll take a look at Functors, which allow us to operate on values inside containers without having to know any of the implementation details of the containers themselves.

NOTE: This article is based on an Axosoft Dev Talk titled Practical Category Theory: Functors. Watch that video or keep reading!

Before we embark on our journey, we should probably take a quick trip through higher-kinded types!

Higher-Kinded Types

When we program in a language like C# or Java, we often run into “concrete” types like int, string, and bool. The neat thing about concrete types is that we always know all the operations that we can perform on them (ints can be added, bools can be negated, and so on).

One step up on the abstraction ladder are “generic” types, like List<T>. A fancy term for generic types is “parametric polymorphism:” “parametric” because we’re working with a type parameter, and “polymorphism” because the parameter in question can have multiple (“poly-“) shapes (“-morphism”). So, we know what operations we can perform on the List itself (iterate over all its elements, Add an item, etc.), but we know absolutely nothing about the type represented by T. This gives us a lot more power of abstraction because we can write methods that manipulate these generic structures and have them guarantee to work no matter what type we eventually fill in for the type parameter.

In Haskell and Purescript, we’d write List<T> as List t: the name of the generic type (List) comes first, and the name of the type parameter (t) comes next, separated by a space. Haskell and Purescript know that t isn’t the name of a concrete type (like if we had written Int or String) because it starts with a lowercase letter.

We can go one step further and write a type like IEnumerable<T>, where our container type isn’t a concrete type but is only an interface. Now, we know only a little bit about the container type itself (specifically, that it has elements over which we can iterate), and we still know nothing about T. The number of operations we can perform on a value of type IEnumerable<T> is even smaller than those for List<T>. This limitation is actually a good thing because now we can pass in a Stack or a Queue to a method that takes an IEnumerable, for example, and the method will work as expected.

Usually, this is where we would have to stop because most languages don’t let us go any more abstract. However, Haskell and Purescript don’t have this restriction and support so-called “higher-kinded” types, where we can make both the internal type and the container type fully generic. If C# had syntactical support for this, it might look something like T1<T2>. T1 could be IEnumerable, Queue, etc., while T2 could be int, string, etc. Haskell and Purescript, however, do support this concept of higher-kinded types and use this syntax: f t (where f is the container type and t is the type parameter). Because f is lowercase, we know that it’s generic as well as t, and so can be any type that requires exactly one type parameter. For example, the following types would fulfill f:

  • List, which holds zero or more values of the same type.
  • Queue, which also holds zero or more values of the same type, but doesn’t allow random access.
  • Maybe (A.K.A. Option or Optional), which can contain either one value or be empty.
  • Promise, which eventually produces a value.
  • A Redux store, which we can think of as being parameterized by the type of the state it contains.

Whereas the following cannot be used for f:

  • Map, because that requires two type parameters, one for the key and one for the value.
  • string, because that doesn’t have any type parameters (another way of saying that is that the type string is already fully concrete).
  • Tuple, because that also requires two or more type parameters (depending on the number of elements it contains).
  • A Redux reducer, which requires two type parameters, one for the message and one for the state.

Fun Fact!

In Haskell or Purescript, the higher-kinded type parameter (f in f t) is usually named f or m, while the type parameter it takes (t in f t) is usually named a (or b or c if more than one is needed).

Constraints

Granted, we really can’t do very much with a value of type f t because we intentionally don’t know very much at all about it. However, we can put constraints on our type parameters. In C#:

class Foo<T> where T : SomeConstraint

Now, T can’t be just anything but instead has some restrictions on what can be plugged in for it. After we’ve put one or more constraints on T, we now know more about what we can do with it. For example, if T is constrained to be an instance of IComparable, that means Foo will only accept types that support the CompareTo method, like int or char (but not, say, List<string>). In Haskell or Purescript, this type can be written SomeConstraint t => Foo t, which means that type t must support the operations in SomeConstraint.

In Haskell and Purescript, we can also place constraints on our higher-kinded type parameters, like so: SomeConstraint f => f t. Now, we know that f supports whatever operations are included in SomeConstraint, and we can plug in any type that supports them. This is approximately analogous to the idea behind IEnumerable<T>, where the container type is abstract and so we can choose a specific type, like List or Queue, to make the type concrete. (We might write that example in Haskell and Purescript as IEnumerable f => f t.) We can also use more than one constraint on a single parameter or place constraints on more than one parameter at a time, like so:

(SomeConstraint f, SomeOtherConstraint f, AnotherConstraint t, YetAnotherConstraint t) => f t

Recap

What all this allows us to do is to have a system of specifying the structures of values, which will come in handy when we get to Functors below. Higher-kinded types allow us to focus on values that have general structure, and we can narrow the possibilities down with constraints.

Phew, that’s a lot to digest. Here’s a summary of how the different concepts (roughly) translate between C# and Haskell and Purescript.

Concrete Types

We know everything there is to know about these types.

C#

int

Haskell, Purescript

Int

Generic Types

We know everything there is to know about part of these types.

C#

Foo<T>

Haskell, Purescript

Foo t

Constraints

We know some things about various parts of these types.

C#

// Constraint on the "traditional" type parameter
Foo<T> where T : IBar
// Constraint on the container type itself
// Approximately:
IFoo<T>

Haskell, Purescript

-- Constraint on the "traditional" type parameter
IBar t => Foo t
-- Constraint on the container type
IFoo f => f t

Higher-Kinded Types

We only know about the very general shape of these types, but we can place constraints on them in order to do useful things with them.

C#

// Doesn't exist, but might look something like:
T1<T2>

Haskell, Purescript

f t

Functors

Lo and behold, we’ve made it to Functors!

We’ve been making statements about the “structures” of types without a clear definition, so here goes:

Structure

Rules about how you can use a type (i.e. the operations that can be performed on it). In an object-oriented (OO) language, this might be represented by interfaces; in a functional language, this might be represented by modules or typeclasses.

IEnumerable<T>, for example, has a different structure from IComparable<T>, because IEnumerables can be iterated over, sorted, etc., while IComparables can be compared.

Functor is a specific structure that supports exactly one operation: map. map allows us to operate on value(s) inside a Functor (which we can think of as a container of some sort) without knowing (or caring) how the Functor is implemented. Each Functor has its own rules for how map works, but this general theme of manipulating contained values is the same.

You’ve already used map if you’ve ever used Array.prototype.map in JavaScript or LINQ’s IEnumerable.Select in C#, for example. You didn’t have to know how the container (the Array or specific IEnumerable) was implemented, since its implementation of map took care of that for you. All you needed to do was provide a function that takes an element of the list and returns something else, and the container handled the rest.

Without further ado, let’s take a look at map‘s type signature!

C#

public interface IFunctor<T> {
    // There's no second parameter, because this method is defined on the `IFunctor<T>` we want to map over
    IFunctor<TResult> Map<TResult>(Func<T, TResult> fn);
}

Haskell, Purescript

-- Here, we explicitly write the Functor to map over as the second parameter, because Haskell is functional rather than object-oriented
map :: Functor f => (a -> b) -> f a -> f b

map takes a function and a Functor and returns the Functor after the function has been applied to its element(s). The type of the element(s) inside the Functor is allowed to change.

So, Functors are really, really simple. Their only special “skill” is that they allow you to apply a function to what’s inside them.

What Can (and Can’t) Be a Functor?

So far, we’ve only seen List‘s implementation of Functor, but what else could be a Functor?

Queue can, because it can allow a function to be applied to all its elements. Maybe can also be a Functor: if there is no value, then an empty Maybe is returned; otherwise, a Maybe with its value modified by the provided function is returned. Promise can implement map by allowing you to modify the value that the Promise will eventually resolve to (e.g. somePromise.then(x => x + 1)).
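
As a quick illustration, here’s what map might look like on a hand-rolled Maybe in JavaScript (a hypothetical implementation, not any particular library):

// Just holds a value; Nothing holds, well, nothing.
const Nothing = { map: _fn => Nothing };
const Just = value => ({ value, map: fn => Just(fn(value)) });

Just(4).map(x => x + 1);  // Just(5)
Nothing.map(x => x + 1);  // Nothing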

So what can’t be a Functor? Well, a Redux store probably can’t. While it certainly allows its state to be modified, being a Functor would mean that the type of the state could change, since any function we use with map must be allowed to change the type of the elements inside the Functor. We usually don’t want the type of our Redux state to change over time, so a Redux store isn’t a valid Functor.

Let’s see some quick examples of Functors in use. Here’s what it looks like to map over a list in JavaScript, C#, and Haskell and Purescript:

JavaScript

[1, 2, 3].map(x => x.toString()) // ["1", "2", "3"]

C#

new[] { 1, 2, 3 }.Select(x => x.ToString()).ToList() // List<string> containing "1", "2", "3"

Haskell, Purescript

map show [1, 2, 3] -- ["1", "2", "3"]

Note that in the above example, the List started out as a List<int> but was converted to a List<string>. This is another important property of map that we’ll get to below.

And here’s what it looks like to map over JavaScript’s Promise type instead:

JavaScript

Promise.resolve(1)
// Think of `then` as `map`
.then(x => x.toString()) // resolves to "1"

In both cases, we’re handing map a function and each Functor is doing the rest for us in its own specific way.

As we can see, the concept of a Functor is intentionally very general, and this idea of very general abstractions is what makes category theory so powerful.

Example

To summarize: Functors allow us to treat “what to do” and “how to do it” as separate concerns. After a map implementation is written for a type, it can be used the same way with any function (as long as its parameters are the correct types, of course).

Now for a real-world example! Let’s pretend that we have a Maybe type of some sort in JavaScript, along with a corresponding fromMaybe function that takes a modifying function, a default value, and the Maybe to map over. If the Maybe contains a value, fromMaybe will return a modified version of that value; if, however, the Maybe does not contain a value, fromMaybe will return the default value that it was passed.
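
A minimal sketch of such a fromMaybe, building on a hand-rolled Just/Nothing like the one sketched earlier (again hypothetical, not a specific library):

// Apply `fn` if there is a value; otherwise fall back to `defaultValue`.
const fromMaybe = (fn, defaultValue, maybe) =>
  maybe === Nothing ? defaultValue : fn(maybe.value);

fromMaybe(x => x + 1, 0, Just(4));  // 5
fromMaybe(x => x + 1, 0, Nothing);  // 0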

As an example, we might traditionally work with possibly nonexistent values like so:

const result = thingThatMightFail();
return result.value
    ? result.value + otherValue
    : 0;

The issue with this approach, however, is that we can forget to check that result is null or undefined (‘cannot read property of undefined,’ anyone?). If, however, we use some sort of Maybe object, we might be able to rewrite the above using fromMaybe as follows:

const result = thingThatReturnsAMaybe();
return fromMaybe(
    v => v + otherValue, // Function to apply if we have a value
    0, // Default value
    result // Maybe object
);

Now, if we forget to use fromMaybe on our result, we’ll catch it when we try to actually use this value. Granted, this isn’t as useful in JavaScript as it is in a statically-typed language like C# where we’d be able to catch such errors at compile-time, but using Maybe like this everywhere we’d use null or undefined still allows us to reason about our code and know what can return a value that might not exist (instead of just hoping that we remembered to check for null or undefined everywhere we use a potentially nonexistent value).

If we want to run multiple operations in sequence on a value that might not exist, we might think to do it like this:

const json = someAjaxRequest();

let result = defaultUser;
if (json) {
    const id = extractId(json);
    const user = getUserById(id);
    result = upgradeUser(user);
}

However, with Maybe, we can write it differently:

Note: R.pipe is a function from the Ramda.js library that “composes” functions: it takes several functions and chains them together to create a new function (for more details, see the docs).

Also, for clarity, pseudo-Flow function type annotations are included.

const maybeJson: Maybe<JSON> = someAjaxRequest(); // someAjaxRequest: () => Maybe<JSON>

const result: User = R.pipe(
    (maybeJson: Maybe<JSON>) => map((extractId: (JSON) => Id), maybeJson),
    (maybeId:   Maybe<Id>)   => map((getUserById: (Id) => User), maybeId),
    (maybeUser: Maybe<User>) => fromMaybe(
        (upgradeUser: (User) => User),
        (defaultUser: User),
        maybeUser
    )
)(maybeJson);

And here’s the same example, but this time we’ll remove unnecessary Flow type annotations and take advantage of a technique called currying in order to see what the same code might look like in a real-world situation:

const maybeJson: Maybe<JSON> = someAjaxRequest();

const result: User = R.pipe(
    map(extractId),
    map(getUserById),
    fromMaybe(upgradeUser, defaultUser)
)(maybeJson);

So, let’s see what we gain from using Functors. As opposed to in the first, ‘traditional’ example, we didn’t have to manually handle the case where no value was returned. The Maybe Functor itself was able to decide how to do so, and all we needed to worry about was giving map the functions that we’d like to perform if a value does happen to exist. We can use any functions we want for each step in the pipeline, as long as they have the correct types (namely, the return type of each one must be the type of the next one’s parameter).

Functors are incredibly common in practice, and a good rule-of-thumb is to ask yourself whenever you begin to write a function that operates on a data structure if the function could be made general enough to operate on a Functor instead. For example, Haskell/Purescript functions like replace and zip could be defined to work with lists specifically, but they’re instead implemented to only require that their parameters are Functors; as such, they work on many different structures for free! They don’t need to know anything about how each of those Functors works on the inside because map handles the details for them.

Laws

Finally, Functors have two associated “laws” to ensure that the Functor will behave as one would expect. Don’t worry, you won’t go to jail if you get these laws wrong (although it is certainly an unpleasant surprise when a Functor is “unlawful”).

Law #1: Identity

Mapping the identity function over a Functor has no effect.

Fancy pseudo-Haskell way to say it:

map id = id

This one’s pretty self-explanatory: if your function doesn’t actually do anything to the element(s) inside the Functor, then it would certainly be surprising if you got back a different result than what you passed in.

JavaScript Example

map(x => x, [1, 2, 3]) // [1, 2, 3]

Law 2: Composition

Multiple maps in a row can be composed into one without any changes in the overall behavior.

Fancy pseudo-Haskell way to say it ((.) is the function composition operator):

map f . map g = map (f . g)

JavaScript Example

We could write our earlier JSON-manipulation example like so if we combine the two-in-a-row maps into a single map, with the new function to use being the composition of the two steps in our pipeline.

R.pipe(map(extractId), map(getUserById))(maybeJson) // map(R.pipe(extractId, getUserById), maybeJson)

Go forth and map!

And that’s it for Functors! They’re very simple structures that give us a lot of power. Just like Semigroups and Monoids, they’re used very commonly because they’re so simple to reason about. And, because they’re so general, they allow us to write functions once and use them on all sorts of different data structures, guaranteed!

GitKraken v2.5

While working on improvements for GitKraken v2.4, we noticed that GitKraken was not running as efficiently as we would like, especially on Windows. As many Windows Git client users may know, most Git GUIs run at the speed of a tortoise, while GitKraken runs its race as the hare.

Sadly, we all know how this story goes… Similar to the hare, GitKraken would blaze ahead when performing certain actions and take its time when performing other actions. So, we decided to give GitKraken a bit of a jolt to see what would happen!

the flash

GitKraken’s new found power of super speed kicked in, and the performance improvements were immediately noticeable when checking out a branch:

branch checkout v2.5

That jolt really got GitKraken going—it will no longer take a nap when you request to view the history of a file:

file history v2.5

While experimenting and benchmarking GitKraken’s improvements, we noticed that another Mac and Windows Git client (that is most definitely not SourceTree) had made “significant” improvements to its Windows platform. While they were impressed with their results, we really only had one thing to say:

the Flash

For a bit of fun, we decided to benchmark their improvements against our own. And hey, the more the merrier, right? So, we threw the CLI in there too:

windows checkout speeds

That’s right, GitKraken v2.5 is nearly 3 times faster than SourceTree v2.0! And if you’re wondering how GitKraken can be faster than the CLI, the answer is: because GitKraken does not rely on Git tools and performs all Git operations directly, it can increase speeds through multi-threading and other techniques. Game, set, match!

Be sure to check out our release notes to see the rest of the improvements and bug fixes in v2.5. 

GitKraken.com Now 100x Artsier

What do you get when you cross a traveling salesman with a Kraken? Art. You get art.

GitKraken v2.5 is now 3x faster than SourceTree, and gitkraken.com is 100x artsier. We’ve combined our love for art and technology and launched a brand new version of our website. Turns out we have a lot of digital artists at Axosoft! One of them is Kyle Smith: a GitKraken developer, a master of digital mark-making, and our first featured artist on the website.

GitKraken v2.5 Featured Artwork

Kyle has been an artist since he started doodling in high school. “My default doodle was a single squiggly line that tightly curved around itself, which from a distance just looked like one large shape,” said Kyle.

He reminisces, “When I began programming, I loved the idea of one day reproducing my high school doodles with code.” So, he started learning about computer-generated art in college, and after graduation, Kyle stumbled upon a paper that described exactly what he wanted to do: turn an image into points on a grid (in a way that resembles stipple art), then connect them using a solution to the traveling salesman problem. He implemented the process in JavaScript and created an SVG which, when animated with vivus, made the image appear to be drawn.

*Knock Knock* Enter Traveling Salesman

The traveling salesman problem (TSP) is a well-known problem in which one must find the shortest route between a set of cities, where each city is visited once and only once. “The great thing about an optimal solution to a 2D traveling salesman problem is that the lines are proven to never cross. So, like my early doodles, this gives images the effect of lines tightly curving around themselves,” said Kyle.

However, rather than creating a single path, as the paper describes, he split the image into many paths. Now, when the SVG is animated with vivus, multiple paths are drawn at once, creating a more visually interesting animation.
v2.5 featured artwork
Like I said, we have a lot of digital artists at Axosoft, and it got us thinking that we probably have a lot more in our community too. So, we invite you to create a GitKraken-inspired digital art piece that could be featured on GitKraken.com. Check out our submission guidelines, and get to it!

GitKraken Tips V

Here is a roundup of our most recent 11 tips to help you become a bit more productive when you’re working. If this series is new to you, check out our previous tip roundups.

GitKraken Tips

  1. Instantly open your current repo in a terminal window with alt/option + t, or from the File menu.
  2. Push changes and start a pull request with one action. If you don’t have an upstream set, you’ll be prompted to set one first.
  3. Remote avatars in the graph help you see who is working on a branch. Get more info by hovering over those and other icons.
  4. The “Viewing” count displays how many branches/tags are visible in the graph. Quickly show all hidden items with “Show All”.
  5. Store HTTP & Proxy credentials to save time when pushing to remotes. They can be cleared in Preferences > Authentication.
  6. Create project groups in the new repo management view to keep your repositories organized.
  7. Open the Command Palette (cmd/ctrl + shift + p) or Fuzzy Finder (cmd/ctrl + p) and arrow down to see a list of shortcuts.
  8. You can drag-and-drop ref labels in the graph to merge, rebase, reset, etc. Multiple refs on one commit will expand on hover.
  9. Pro users can create and switch between multiple profiles, each with unique settings and hosting service account integrations.
  10. GitKraken’s easy-to-use conflict tool is even more powerful with Pro, giving you the ability to edit and save the output.
  11. Hover icons on ref labels to view PR numbers and titles. Right-click the label for options to open them on GitHub.com.