Developer Learning

Recently, I've been giving more thought to how we can improve the training and development of developers. My main curiosity revolves around preparing new developers for what they will encounter in the "real world." I keep bouncing back and forth between thinking developer boot camps are the way to go and thinking the traditional four year degree in CS, CIS/MIS, SE, etc. is the best way. However, I honestly think those two options are just the extreme ends of a spectrum of options.

Developer Boot Camps

I've had the privilege of working with a few developers who have come out of developer boot camps. What I have gathered from this experience is that developer boot camps focus on getting new developers up-to-speed on one technology stack. For example, this would mean taking a person from knowing nothing about development to being comfortable working in a Ruby on Rails application. In theory this sounds fantastic, and from my experience it works pretty well. The caveat is that theory and the like go unknown to boot camp graduates, unless graduates pursue that kind of information on their own. (Disclaimer: I'm not an expert on boot camps and their curricula.) To be clear, when I say theory I'm talking about concepts like encapsulation, polymorphism, etc.

Looking at boot camps like this, they appear to live on the extreme side of the spectrum: just get used to the tools, one language, and a small set of technologies. The developer is then left with the knowledge of the mechanics, but I feel has missed quite a bit of the why. The "why" is extremely important when it comes to deciding how to break down a system, how to split up work, how to increase application flexibility, etc. Essentially, without a deep understanding of "why," a developer may not have the knowledge to make good design decisions. The design decisions I'm talking about here aren't overarching application architecture, but decisions like: should I use inheritance here? Should I break this class/module/function down? Where does this functionality belong? These are questions every developer has to answer multiple times a day. Making good decisions at a micro level contributes to the overall maintainability of the system.

Now let me be clear: I think developer boot camps offer a tremendous way to enter the field of software development at a much more reasonable cost than a traditional four year degree.

Traditional Four Year Degree (CS, MIS/CIS, SE)

I can't speak for everyone's four year degree, but I know that when I entered the work force after college I was slapped in the face with "here's an application we need you to work on." I was a 22 year old kid just out of school, now working on a system that would be handling billions of dollars worth of transactions and inventory. I wasn't a lead or anyone special, just another developer on the team. I did, however, feel unprepared for the development that was needed. For all my four year degree taught me, I hadn't actually built a full blown application of any real size. The largest application I had built in college was probably ~2,000 lines of code max, and even that feels like a gross overestimation. My four year degree had focused almost entirely on theory subjects: encapsulation, polymorphism, algorithms, etc. This kind of learning prepared me for thinking through problems and understanding what terms meant, but I had no idea how to apply most of those theories in practice.

To me it is the putting of theory into practice that was completely lacking from at least my curriculum, and from what I've heard, other curricula as well. My four year degree didn't teach me how to build applications. The closest thing to an application I built was a very simple PHP application that equated to about 500 lines total. That one application was the only one I built that used an actual database. In the "real world" every application I've worked on has used at least one, if not a couple, different databases. Four year degrees seem to miss the mark when it comes to teaching students the mechanics of building applications. Four year degrees, to me, represent the theory extreme of the spectrum.

Finding the Middle Ground

Thinking of developer boot camps and traditional CS, MIS/CIS, SE degrees as extreme opposites pushes me to want to find a middle ground. There must be some way to combine the two ideas into a more effective approach. I can think of a few options that could be a middle ground:

  • In a four year degree, could students select an application they will build throughout their degree?
  • In a boot camp could students spend a week or so pairing with experienced developers?
  • Should four year programs begin to partner with companies to get their students exposure to “real world” applications?
  • Should boot camps spend a week or more looking at code that exhibits a good and bad use of polymorphism, algorithms, etc?

Those are just a few ideas that could help close the gap. There are probably examples of each of these occurring already; I'm just unaware of them.


Deploy Azure Functions App From AppVeyor

At work recently we have started using Azure Functions for a fairly small job in our infrastructure. We also use AppVeyor for our CI/CD server. I like AppVeyor and it has served us well thus far. However, since the tooling and the idea of Azure Functions are relatively new, it was a bit hard to find a good example of deploying an Azure Functions App from AppVeyor. Luckily one blog had a good starting point; however, there were things I was unaware of about Azure Functions that were missing from the blog post. One of the biggest was how the function.json and host.json files are created.

Function.json

If you are new to Azure Functions or haven't researched the output of building one, there are a few things you need to be aware are happening behind the scenes, especially if you want some sort of CI/CD. In a functions app project you define your functions like below:

using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;
using Microsoft.ServiceBus.Messaging;
...

namespace Your.Function.App
{
    public static class YourFunction
    {
        [FunctionName("YourFunction")]
        public static void Run(
            // This trigger piece is important.
            [ServiceBusTrigger("{topic}", "{subscription}", AccessRights.Listen, Connection = "{connectionName}")] object message,
            TraceWriter logger)
        {
            // Perform your function logic here
        }
    }
}

The important piece to notice here is the function trigger. In this case the trigger is a service bus message arriving for a specific service bus topic. For a list of available triggers take a look here. If you are familiar with Azure WebJobs you may think this is all you need; however, that trigger attribute is really just the first piece of the puzzle. The part of the puzzle we can't see yet is the generated json file this attribute helps create. Here is a sample of the json that would result from the above code.

{
  "generatedBy": "Microsoft.NET.Sdk.Functions-1.0.0.0",
  "configurationSource": "attributes",
  "bindings": [
    {
      "type": "serviceBusTrigger",
      "connection": "{connectionName}",
      "topicName": "{topic}",
      "subscriptionName": "{subscription}",
      "accessRights": "listen",
      "name": "message"
    }
  ],
  "disabled": false,
  "scriptFile": "{relative path to output}\\{Your assembly name}",
  "entryPoint": "Your.Function.App.YourFunction.Run"
}

The big question here is: how does this file get generated? You won't see it in your solution, and if you deploy from Visual Studio you won't even know the file exists unless you look for it. I didn't know this and didn't realize it was needed; however, this file is what Azure uses to know how and when to trigger your function, so without it you won't see any functions in your Azure Functions App. I know this because that's exactly what happened to me when attempting to duplicate Alastair Christian's blog post. The issue I was seeing had nothing to do with the content of the blog; it was my lack of knowledge of how Azure Functions work.

What were we missing?

The part I missed when attempting to duplicate Alastair’s solution was a small one but extremely important in this context. He was using MSBuild to build his solution and I was using the dotnet cli.

In this case: msbuild.exe Solution.sln != dotnet build

Remember the all-important function.json file that gets generated? It turns out it is generated by some of the MSBuild targets that Visual Studio's version of msbuild uses, which the dotnet cli's msbuild won't have available (to be clear, there is likely a way to resolve this, but I didn't dig for it).

How did we fix it?

The fix, when using AppVeyor, and likely any other CI server, is fairly straightforward. If you are using the dotnet cli for your existing builds, but want to start using Azure Functions, be sure to change your scripts to use:

C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\msbuild.exe {solution}.sln

instead of:

dotnet build {solution}.sln

This will ensure that your output generates the function.json appropriately. You could attempt to maintain the function.json yourself, but if the file can be generated from the code I don't want to have to worry about it.

Now that we build, how do we deploy?

This part is very straightforward and I was able to use Alastair's example with one small tweak: I prefer to use appveyor.yml files instead of the AppVeyor interface for configuring our builds (this is a preference and a direction we have chosen at work). So as an alternative to the user interface, you can add deployment to your AppVeyor build by adapting the example below:

image: Visual Studio 2017
...
artifacts:
- path: '{relative path to functions output}'
  name: '{name of artifact}'
...
deploy:
- provider: WebDeploy
  server: https://{azure site name}.scm.azurewebsites.net/msdeploy.axd
  website: {azure site name}
  username: {user name that has deployment access}
  password:
    secure: {secured password}

And that's it: now you have an automated deployment of Azure Functions from AppVeyor. I hope that helps you or someone you know.


Thinking Reactively

Recently I started a new job working with technology in the IoT space. This new experience is making me think about building applications differently. One of the main things I've noticed with this new work is the reliance on events coming from one or many different devices. The application we work on needs to react accordingly to each of these events. Reacting to all of these events makes me wonder whether observables and a more reactive paradigm would be helpful.

Part of this train of thought comes from starting to work with React and Redux and seeing how they have changed the way we look at building front ends. I believe that React and Redux/Flux/{pick your fav. flux implementation} have shown a different and interesting way to think about building interactions. However, the only real change with these ideas is putting structured management around the state of your application and ensuring that actions (events) flow up and changes to state flow down. This is the crux of what React and flux implementations have brought, and for the record, I have found it helpful.

Since I've found this thinking helpful on the front-end, I've been curious whether we could apply the same idea to the back-end. For our application I've been toying with the idea of maintaining a single global state (a la Redux). For applications consuming terabytes of data this likely wouldn't be an option; however, our application is smaller scale and would need to keep a few thousand IoT devices in memory. The goal with this approach would be to aid in handling the random sequence of events the devices are outputting. Similar to Redux, each incoming event from devices would be treated as an action that one or more reducers would handle to produce a new global state. Once the new state is available it would then be broadcast (using websockets, http, webhooks, database?) to interested parties.
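To make that concrete, here is a minimal sketch of the idea in Elixir (the language I've been exploring elsewhere on this blog). Everything in it, the module name, the action shape, the stubbed broadcast and persistence, is hypothetical illustration rather than code from our system:

defmodule StateStore do
  use GenServer

  # Reducer-style store: every device event is an action, reducers fold
  # actions into a new global state, and the new state is broadcast out.
  def start_link(initial_state \\ %{}) do
    GenServer.start_link(__MODULE__, initial_state, name: __MODULE__)
  end

  def dispatch(action), do: GenServer.call(__MODULE__, {:dispatch, action})

  def init(state), do: {:ok, state}

  def handle_call({:dispatch, action}, _from, state) do
    new_state = reduce(state, action)
    # Fire-and-forget persistence so event handling never waits on the database.
    Task.start(fn -> persist(action) end)
    broadcast(new_state)
    {:reply, new_state, new_state}
  end

  # One reducer clause per action type; each returns a brand new state.
  defp reduce(state, {:device_reading, device_id, reading}), do: Map.put(state, device_id, reading)
  defp reduce(state, _unknown_action), do: state

  # Stubs standing in for websockets/webhooks and the real database sync.
  defp broadcast(_new_state), do: :ok
  defp persist(_action), do: :ok
end

The interesting property is that the event handling path stays entirely in memory; persistence happens off to the side.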

I'm unsure if our back-end would be able to sustain this kind of architecture. I'm also unsure of a good way to test this idea to get a feel for whether back-end systems would benefit greatly from this approach. One of the issues we are seeing now is that we are placing caching in specific places or attempting to minimize trips to the database because those trips are slow. I'm curious whether we could increase the amount of memory to support keeping nearly everything in a single global state. Once actions occur we could then handle them in memory and sync them to a database asynchronously while allowing the rest of the application to continue handling events.

I’d be interested if anyone has tried or is doing this kind of thing with back-end systems. Does this even sound like an idea worth trying?


Trader-App: Meet Phoenix

In the previous post, Trader-App: Hello Elixir, we became more familiar with Elixir's syntax. Now I think it's time to get a working server set up so that we can start building an API for our trader application.

Source: https://github.com/bryceklinker/trader

Language: Elixir

Frameworks: Phoenix, Ecto, ExUnit

Tools: Hex

I'm sure you are wondering: what the hell is Phoenix? In short, Phoenix is a framework for building web applications, APIs, and other applications using Elixir. I think of Rails, ASP.NET MVC, or ASP.NET WebApi when thinking of Phoenix; the difference is the language used to build the applications. To get Phoenix locally we need to do a few things in the terminal:

mix local.hex

This command will install or upgrade hex. Hex is a package manager used for Elixir and Erlang; think npm (nodejs), nuget (.NET), or bundler (ruby). The next thing we need to do is install Phoenix using Hex:

mix archive.install https://github.com/phoenixframework/archives/raw/master/phoenix_new.ez

The above command will install Phoenix and its dependencies using hex. Something to note about Phoenix is that it takes an optional dependency on nodejs. This is important to know if you plan to have Phoenix process your javascript, css, or other static assets. I’m not planning to do this as I plan to keep our server and client code completely separated.

Now that we have phoenix installed we can move on to creating our first Phoenix application. To do this we will run the command:

mix phoenix.new src/server

Note that src/server is the path to the directory where you want to put your Phoenix application; the path can be relative or absolute. My terminal happens to be at the root of my repo, so src/server is the path I want to use. This creates the scaffolding for our Phoenix application. I'm going to delete src/server/hello_world.exs from my repo along with src/server/math.ex, as these are no longer needed. At this point my repository looks like this. You will be prompted to install dependencies with a prompt like:

Fetch and install dependencies? [Yn]

I'm going to input Y. Let's pause here and look at what we have now. The first thing I see is the mix.exs file. This file looks like it defines all of the dependencies required by our application. One of the really important dependencies here is ecto; this is your ORM, think Entity Framework (.NET), ActiveRecord (Ruby), or Mongoose (NodeJS). ecto provides similar functionality. The next thing I notice is package.json and brunch.js. Remember when I said that Phoenix has an optional dependency on nodejs? This is the outcome of that dependency. Phoenix relies on brunch to compile and bundle your javascript, css, and html. Since I'm planning to use Elm for my front end, I'm going to see if I can generate the project without any javascript, css, or html. Turns out this can be done using:

mix phoenix.new src/server --no-brunch

Aha! That looks much better. My repository now looks like this. No more package.json or brunch.js; that is exactly what I wanted. If you would like to continue using brunch or some other build tool, you can take a look at Phoenix's site to understand how that can be done.
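For reference, the deps section of the generated mix.exs ends up looking roughly like this (the exact versions are illustrative of the Phoenix 1.2 era, not copied from my repo):

defp deps do
  [{:phoenix, "~> 1.2.0"},
   {:phoenix_pubsub, "~> 1.0"},
   {:phoenix_ecto, "~> 3.0"},
   {:postgrex, ">= 0.0.0"},
   {:phoenix_html, "~> 2.6"},
   {:phoenix_live_reload, "~> 1.0", only: :dev},
   {:gettext, "~> 0.11"},
   {:cowboy, "~> 1.0"}]
end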

Now that we have removed client-side packages and bundling we can continue with Phoenix. Let’s go ahead and start the Phoenix server to see what we get:

cd src/server
mix phoenix.server

This runs a little bit of what we need; however, it prompts me to install this thing called rebar. What is rebar? Rebar is a build tool that comes from the Erlang ecosystem. More information can be found here. I'm going to input Y when prompted with:

Could not find "rebar", which is needed to build dependency :fs
I can install a local copy which is just used by Mix
Shall I install rebar? 
(if running non-interactively, use: "mix local.rebar --force") [Yn]

This will install rebar. If you're like me, then you will see a bunch of errors similar to:

[error] Postgrex.Protocol (#PID<0.2992.0>) failed to connect: ** 
(Postgrex.Error) tcp connect: connection refused - :econnrefused

This is complaining about not being able to connect to a Postgres server. Phoenix defaults to connecting to a Postgres server; however, I don't think I'll need that. I'm going to delete my existing server folder and generate the project again. It turns out your Phoenix application can be generated without ecto and brunch with the command below:

mix phoenix.new src/server --no-brunch --no-ecto

Now that we don't rely on a database, let's try to run:

cd src/server
mix phoenix.server

Voila! Now we have a Phoenix application. My server started up on port 4000. My repo now looks like this.

So now we have a running Phoenix application. Let's look at what is actually going on in our application.

When we ran the new command we ended up with lots of files and folders that were completely generated. This kind of stuff is great for productivity, but not understanding what is actually happening always drives me a little crazy. I had this same opinion when I started to learn Ruby on Rails. Generated code has always made me nervous, especially when I don't understand the language or framework well. Let's dig into the files and folders that have been generated.

First, let's start with what tests have been generated, using the test command:

mix test

This yields the following output:

==> gettext
Compiling 1 file (.erl)
Compiling 19 files (.ex)
Generated gettext app
==> ranch (compile)
==> poison
Compiling 4 files (.ex)
Generated poison app
==> phoenix_pubsub
Compiling 12 files (.ex)
Generated phoenix_pubsub app
==> cowlib (compile)
==> cowboy (compile)
==> mime
Compiling 1 file (.ex)
Generated mime app
==> plug
Compiling 44 files (.ex)
Generated plug app
==> phoenix_html
Compiling 8 files (.ex)
Generated phoenix_html app
==> phoenix
Compiling 60 files (.ex)
Generated phoenix app
==> server
Compiling 13 files (.ex)
warning: variable tags is unused
 test/support/channel_case.ex:29

warning: variable tags is unused
 test/support/conn_case.ex:30

Generated server app
....

Finished in 0.03 seconds
4 tests, 0 failures

Randomized with seed 152766

There it is at the end: four passing tests. Let's try to add one more simple test.

Since we are going to be creating a trader application, stock prices are pretty important. Let's create a new channel for our stock prices:

mix phoenix.gen.channel StockPrices

This will create a new test and channel that we will use for getting stock prices. Running the tests will now reveal seven passing tests. Let's look at what was just generated for us. Open up src/server/test/channels/stock_prices_channel_test.exs. This file should have:

defmodule Server.StockPricesChannelTest do
  use Server.ChannelCase

  alias Server.StockPricesChannel

  setup do
    {:ok, _, socket} =
      socket("user_id", %{some: :assign})
      |> subscribe_and_join(StockPricesChannel, "stock_prices:lobby")

    {:ok, socket: socket}
  end

  test "ping replies with status ok", %{socket: socket} do
    ref = push socket, "ping", %{"hello" => "there"}
    assert_reply ref, :ok, %{"hello" => "there"}
  end

  test "shout broadcasts to stock_prices:lobby", %{socket: socket} do
    push socket, "shout", %{"hello" => "all"}
    assert_broadcast "shout", %{"hello" => "all"}
  end

  test "broadcasts are pushed to the client", %{socket: socket} do
    broadcast_from! socket, "broadcast", %{"some" => "data"}
    assert_push "broadcast", %{"some" => "data"}
  end
end

I think we need to break this down a little bit. Let's start with this:

defmodule Server.StockPricesChannelTest do
  use Server.ChannelCase

  alias Server.StockPricesChannel

The above code defines the module Server.StockPricesChannelTest. Since we haven't touched on modules, we need to know what a module is in Elixir. Basically, a module is a group of functions. Hopefully those functions are cohesive, but modules are essentially a grouping mechanism. The next line, use Server.ChannelCase, is one example of how to consume a module. The use macro tells Elixir that we want to require Server.ChannelCase and run its hooks, which in this case makes the functions defined in that module available in our module. Next, we have alias Server.StockPricesChannel, which is another way to reference a module and is slightly different from use. An alias simply expands the name "lookup" for our module: it lets us write StockPricesChannel and have Elixir resolve it to Server.StockPricesChannel, so we don't have to spell out the fully qualified name every time. We can think of this purely as a way to keep our keystrokes down.
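To see alias on its own, here is a tiny illustration of my own (the module names are made up, not from the repo):

defmodule Trader.Pricing do
  def current_quote(symbol), do: {:ok, symbol, 1.02}
end

defmodule Trader.Consumer do
  alias Trader.Pricing

  # Thanks to the alias we can write Pricing.current_quote/1
  # instead of spelling out Trader.Pricing.current_quote/1.
  def latest(symbol), do: Pricing.current_quote(symbol)
end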

Okay, now that we have an idea of what the first few lines do, let's look at the setup method:

setup do
    {:ok, _, socket} =
      socket("user_id", %{some: :assign})
      |> subscribe_and_join(StockPricesChannel, "stock_prices:lobby")

    {:ok, socket: socket}
  end

The setup function here is pretty much the same as the setup functions found in most testing frameworks: the code in it will run before each test. This code does do some interesting things though. The first line:

{:ok, _, socket} =
      socket("user_id", %{some: :assign})
      |> subscribe_and_join(StockPricesChannel, "stock_prices:lobby")

socket("user_id", %{some: :assign}) is going to create a socket with the id "user_id" and assign some to the socket. The next part |> subscribe_and_join(StockPricesChannel, "stock_prices:lobby") will join the channel StockPricesChannel and subscribe to the "stock_prices:lobby" topic. We want to match the return with {:ok, _, socket}. This gives us a tuple with :ok and socket. The next we do is return {:ok, socket: socket} from the setup. The return is important because that is how our newly subscribed and joined socket is passed to our test methods.

This brings us to our first actual test:

test "ping replies with status ok", %{socket: socket} do
    ref = push socket, "ping", %{"hello" => "there"}
    assert_reply ref, :ok, %{"hello" => "there"}
  end

This is the first test we have seen, so let's take a close look. The first line:

test "ping replies with status ok", %{socket: socket} do

is going to pass the name of the test, "ping replies with status ok", and request a map which will match the socket, %{socket: socket}. The socket the map receives is the one we created, subscribed, and joined in our setup method. The next part we need to look at is the body of our test function:

ref = push socket, "ping", %{"hello" => "there"}

This part is going to push a message into the channel. To push a message we need a few things. The socket parameter tells the push method which socket and channel to push the message to. The "ping" parameter is the name of the event. The %{"hello" => "there"} parameter specifies the content of the message. The push method returns a reference. Our next line is the line that will assert we did the thing we wanted to:

assert_reply ref, :ok, %{"hello" => "there"}

This code is going to assert that we replied with :ok and data %{"hello" => "there"}. The ref parameter is the reference that should be checked.

The rest of the tests here are testing that we can broadcast data through the socket/channel. This is all fantastic and really easy, but can we easily create a small bit of javascript to work with our new channel? For reference my code can be found here.

Let's head over to our src/server/web/templates/layout/app.html.eex. In here we need to add a few script tags:

[Screenshot: the script tags added to app.html.eex, referencing js/phoenix.js and js/stock_prices.js]

This does mean we need to add src/server/priv/static/js/stock_prices.js. The src/server/priv/static/js/phoenix.js already exists. Now, we need to add a little bit of code to our src/server/priv/static/js/stock_prices.js:

(function(Phoenix) {
    var socket = new Phoenix.Socket('/socket');
    socket.connect();

    var channel = socket.channel('stock_prices:lobby', {});
    channel.join()
        .receive('ok', res => console.log('Resp: ' + JSON.stringify(res)))
        .receive('error', res => console.log('Error: ' + JSON.stringify(res)));

    channel.on('quotes', res => {
        console.log('RES: ' + JSON.stringify(res));
    })
})(window.Phoenix);

This is just plain old javascript. The global Phoenix object comes from our src/server/priv/static/js/phoenix.js script. With it we can join channels and receive or broadcast messages. Let's make our channel publish a fake stock quote every second for now.

To publish a fake quote every second we need to add the file src/server/lib/workers/stock_prices_worker.ex:

defmodule Server.Workers.StockPricesWorker do
  use GenServer

  def start_link() do
    {:ok, pid} = GenServer.start_link(Server.Workers.StockPricesWorker, [])
    Process.send_after(pid, "get_quotes", 1000)
    {:ok, pid}
  end

  def handle_info(_msg, state) do
    payload = %{ :price => 1.02 }
    Server.Endpoint.broadcast("stock_prices:lobby", "quotes", payload)
    # Schedule the next fake quote one second from now.
    Process.send_after(self(), "get_quotes", 1000)
    {:noreply, state}
  end
end

The code above will create a price quote every second and send it to the "stock_prices:lobby" channel. This should work fine as a small proof that we have things working. We need to make a few more changes to get everything working together; if your repo is like mine, you will not see anything in the console when you run your application yet.

You first need to make a few other changes to files generated at the start of our project. This actually took me quite a while to figure out. First we need to modify our src/server/lib/server.ex to look like:

defmodule Server do
  use Application
  ...
  # Define workers and child supervisors to be supervised
  children = [
     # Start the endpoint when the application starts
     supervisor(Server.Endpoint, []),
     worker(Server.Workers.StockPricesWorker, [])
     # Start your own worker by calling: Server.Worker.start_link(arg1, arg2, arg3)
     # worker(Server.Worker, [arg1, arg2, arg3]),
  ]
  ...
end

The key code here is the worker(Server.Workers.StockPricesWorker, []). This will start our new worker when the Phoenix server starts. You can add as many workers as you would like to this list. Next, we need to modify our src/server/web/channels/user_socket.ex to utilize our channel:

defmodule Server.UserSocket do
  use Phoenix.Socket

  ## Channels
  channel "stock_prices:*", Server.StockPricesChannel
  ...
end

This is actually something that we should have done after generating our channel, but if you are like me, you skipped that part of the console output. This is what tells Phoenix to use your channel.

With these two pieces in place we should now see our small website listing price quotes in the console. For now that is good enough, as the UI served by the server is not going to be used much if at all, because we plan to use Elm for the front-end of our trading application.

At this point everything should be working, but we actually have quite a bit of unnecessary code in our src/server/web/channels/stock_prices_channel.ex and src/server/test/channels/stock_prices_channel_test.exs. Change your src/server/test/channels/stock_prices_channel_test.exs to be:

defmodule Server.StockPricesChannelTest do
  use Server.ChannelCase

  alias Server.StockPricesChannel

  setup do
    {:ok, _, socket} =
      socket("user_id", %{some: :assign})
      |> subscribe_and_join(StockPricesChannel, "stock_prices:lobby")

    {:ok, socket: socket}
  end

  test "broadcasts are pushed to the client", %{socket: socket} do
    broadcast_from! socket, "broadcast", %{"some" => "data"}
    assert_push "broadcast", %{"some" => "data"}
  end
end

We removed the tests around pinging and shouting. This means we can also change our src/server/web/channels/stock_prices_channel.ex to look much simpler:

defmodule Server.StockPricesChannel do
  use Server.Web, :channel

  def join("stock_prices:lobby", _payload, socket) do
    { :ok, socket }
  end
end

Ah, beautiful code. Simple and to the point.

I think this gives me a good enough grasp of Phoenix to move on to getting a start on our Elm front end. In the next article we'll start hooking up Elm to our newly created channel. My repo currently looks like this, and all tests are green.


Trader-App: Hello Elixir

For the Trader app I will be building, I need to get to know Elixir much better than I do now. Considering I haven't written any Elixir or Erlang, I've got a lot of learning to do. I'm going to start with learning the basics of Elixir by doing a hello world type exercise. Luckily, Elixir has a very nice tutorial to get you started; it can be found here.

Previous Post: Trader-App: Motivation

Source: https://github.com/bryceklinker/trader

Language: Elixir

I plan to start with Elixir because that will be the server-side (backend) part of the application. I don't like to build front-ends using mocked up or faked data, especially since I have no idea what the data I plan to consume will look like. Faking or mocking the data seems like an exercise in futility at this point.

Now let us begin with a quick run through of the different syntax and building blocks in Elixir. My repo currently looks like this. As you can see, the repo is nearly empty except for the LICENSE file.

We first must install Elixir so that we can actually compile and run Elixir code. Head back over to elixir-lang.org, which will walk you through installing Elixir on your platform. For me (on my MacBook Pro) it meant modifying the .bash_profile file found in my home directory. From a terminal you should be able to do the following:

cd ~ # Takes you to your home directory
ls -a # Lists all files and directories in your home directory
nano .bash_profile # opens your .bash_profile using nano

# Add the following line to your .bash_profile
export PATH="$PATH:/usr/local/bin" # Adds the directory with Elixir to your path

# Now if you're like me and not as familiar with nano you will need to 
# press "Ctrl + O" this will write the file to disk
# press "Ctrl + X" this will exit nano

Now you should have Elixir on your path. To test this in your terminal enter:

elixir -v
1.3.2 # this is what I see

This should show you the version of Elixir you have just installed. At this point we are ready to start writing our Hello World application in Elixir.

First, how can we simply print text to the screen? We need to open an editor to our repository. For my purposes I want to indicate that my server and client live in two different places, so I'll create a structure like:

- src
    - client
    - server
LICENSE

This will allow me to keep track of what is server and what is client code. If you don’t care about this then just put your code wherever you like it. Next we need to create our first Elixir file. Create the file src/server/hello_world.exs. In it we can add the line:

IO.puts "Hello World"

To see if this works we need a terminal. Open a terminal and navigate/cd your way to your repository. Now enter:

elixir src/server/hello_world.exs # if your structure is like mine

This should simply output:

Hello World

Huh… That was easy enough. Let's commit here. Now for something a little more complex. Open up src/server/hello_world.exs and add the line:

...
IO.puts 6 + 4

Now when we run our Elixir hello_world.exs we see:

Hello World
10

How nice is that? You can see my code at this point here. This tells me something interesting about how Elixir is going to run our code: it runs it as a script. This means that any statement or operation in the file will be executed once and only once; when the end of the script is reached, Elixir will exit. This is good to know.

Now let's add a few more lines to our hello_world.exs:

...
IO.puts :hello == :world # this compares the atom hello to the atom world

My output now looks like:

Hello World
10
false

Let's break down what IO.puts :hello == :world is actually doing. First, we are telling Elixir to compare the atom :hello to the atom :world. An atom in Elixir is a constant whose name is its value. That means :hello == "hello" will still yield false, while :hello == :hello will yield true. For now I'm going to think of atoms as named constants. Let's try a small experiment and see what happens if we do:

:hello = "hello"
IO.puts :hello == "hello"

This yields an error like below:

** (MatchError) no match of right hand side value: "hello"
 src/server/hello_world.exs:5: (file)
 (elixir) lib/code.ex:363: Code.require_file/2

This tells me something interesting: atoms can be used for matching, and the = operator can be used for matching. I've heard of matching in functional languages, but I don't know how it works yet. I think that encompasses my knowledge of atoms at this point. Let's move on for now. My repo now looks like this.

Next on the list is declaring and calling an anonymous function. First we need to declare an anonymous function in our hello_world.exs:

...
add = fn a, b -> a + b end
IO.puts is_function(add)

IO.puts add.(4, 5)

Let's quickly look at what is happening here. I'm creating a function inline that will do a + b. I declare my function using fn a, b ->, then I add the body of my function, a + b, then I mark the end of my function body with end. I find this to be an interesting syntax. The first thing I notice is the lack of a return statement. The next thing I notice is a lack of brackets {}. I also notice there are no parentheses (). Coming from C#, this looks odd. Something I have to remember is that Elixir will return the result of the last statement in the function body. The fn denotes the start of a function definition, the a, b denotes the arguments to the function, and the end denotes where the function body stops. Interesting; this actually reminds me a lot of ruby. Okay, so we dissected the function declaration, but what about the rest of the code?

The is_function call is a built-in function that tells us whether something is a function. Since we assigned add to be our anonymous function, we should get true when we run our script. The next line is IO.puts add.(4,5). This is actually how we invoke our anonymous function. This looks fairly normal to me except for one small dot (ha!): the add. looks odd. As it turns out, this is how you have to invoke anonymous functions in Elixir. Your output at this point should look like:

Hello World
10
false
true
9

This looks pretty good. My repo can be seen here.

At this point we have done simple addition and comparisons, called a built-in function, and created an anonymous function. Well, how the heck can we work with lists, tuples, strings, and maps?

Let's start with lists first, or more accurately linked lists. In Elixir, lists are actually linked lists. This is important because Elixir provides head (hd) and tail (tl) functions for getting the head and tail of a list. Because all lists are linked lists, head and tail are important functions. Let's add some more code to our hello_world.exs.

...
IO.puts length [1, 2, 3] # print the length of the list 1,2,3

IO.puts length [7, 5, 4] ++ [8, 4, 3] # print the length of lists 
                                      # 7,5,4 concatenated with 8,4,3

list = [5, 6, 3] # create a list with 5,6,3
IO.puts hd(list) # print the head of list

IO.puts length(tl(list)) # print the length of the tail of list

This small piece of code shows something I like about Elixir. If you want to know how to concatenate two lists in Elixir, it's as simple as using the ++ operator. This will take the list on the left and the list on the right and return a new list with all the members from both lists. That is really self explanatory, I think. The next piece I find interesting is the last line. I originally thought that calling tl with a list would give me the last item in the list; however, this is the wrong assumption. Instead, it will give you a list that has all the members except the head. In our case tl(list) will actually return the list [6, 3]. Thus, when we do length(tl(list)) we should get 2. Let's go ahead and run our code:

elixir src/server/hello_world.exs

I get the following output:

Hello World
10
false
true
9
3
6
5
2

Nice! Now we have code that should look like this. This lets us take a deeper look at how Elixir uses tuples.

Let me start by saying that, coming from .NET, tuples sound like a terrible idea. My preference in .NET has always been that if I need something with multiple properties or values I can easily create a class to represent it, and bingo, I have a working solution. However, Elixir, and functional programming languages in general, do not have classes. This means that tuples, lists, and maps, and likely more, are used in place of classes when transferring data from one function to another. Now that I've said my piece, let's march on.

Let's dive into tuples. In your hello_world.exs let's add some more code:

...
IO.puts elem({ :ok, "Stuff" }, 0) # Get the first element of the tuple

IO.puts elem({ :ok, "Stuff" }, 1) # Get the second element of the tuple

tuple = { :OKY, :SMOKY } # create a tuple with two atoms

tuple = put_elem(tuple, 1, :DOKY) # put the atom :DOKY 
                                  # in the second element of the tuple

IO.puts elem(tuple, 1) # print the second element of the tuple

Now I'm seeing some familiar faces. My curly braces and parentheses are back. I feel right at home again. But first let's look at what each of these is doing.

The first chunk, IO.puts elem({ :ok, "Stuff" }, 0), will create a tuple with element 0 as :ok and element 1 as "Stuff". The function elem will return the element at the provided location; in this case we should get ok. The next line, IO.puts elem({ :ok, "Stuff" }, 1), does the same thing except it is returning the second element in the tuple. This tells us something very important about Elixir: it uses zero based indexing, thankfully. Next up we have

tuple = { :OKY, :SMOKY }
tuple = put_elem(tuple, 1, :DOKY)
IO.puts elem(tuple, 1)

This part surprised me a little bit. Here we are assigning tuple to be one value and then reassigning it to be a different value. Some might think this violates the whole immutable data thing. However, it actually doesn't. Immutable data is about not modifying data once it is created. In this case we are not modifying tuple once it has been created; we are just assigning the variable to a new value, and the original value is being tossed out in favor of the new one. The put_elem function is also not modifying our tuple; instead it is creating a new tuple that is a copy of { :OKY, :SMOKY } with element 1 replaced by :DOKY. Thus, tuple will end up being { :OKY, :DOKY }. We still have not broken immutability, which is good. Let's check our output. I see:

Hello World
10
false
true
9
3
6
5
2
false
ok
Stuff
DOKY

My code can be seen here. That's pretty neat, but how do we work with strings?

Strings in Elixir are actually very powerful. Let's start simple to see why strings are so nice in Elixir. Add the following to your hello_world.exs:

IO.puts "Hello" <> " " <>"World"

If you are like me, the only time you have seen <> is in SQL, and you likely cringed when you saw it. My reflex was to dislike it, but it has a completely different purpose here than in SQL. Here we are actually going to concatenate "Hello", " ", and "World" to get the string "Hello World". I'm not a huge fan of this, but I'm sure I'll get used to it. Now for something that is really interesting. Put the following in your hello_world.exs:

...
IO.puts is_binary("Hello")

With that in your hello_world, run your Elixir code. Do you see anything interesting?

Hello World
10
false
true
9
3
6
5
2
false
ok
Stuff
DOKY
Hello World
true # Here is the interesting part

Would you have expected your string to be classified as binary? I certainly wouldn't have. Let's dig in a little deeper here.

In Elixir, strings default to being UTF-8 encoded. This, in combination with the is_binary function returning true for a string, means that all strings are simply a set of binary data. The fact that the binary data represents a string is actually not that important to making Elixir work with strings. This also means that you may have binary data that is or is not a valid string. This was something I found interesting when working through the Elixir guide. We'll dig more into this later. For now, strings are strings and binary data. My repo can be seen here.
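A quick way to see the byte-versus-character distinction (this mirrors an example from the Elixir guide):

IO.puts byte_size("hełło")     # prints 7, because each ł takes two bytes in UTF-8
IO.puts String.length("hełło") # prints 5, the number of characters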

Let's take a small break from data structures and look at how we do comparisons in Elixir. Let's start with something simple; add this to your hello_world.exs:

IO.puts false and false # uses an "and" to compare two values

IO.puts false or is_atom(:false) # uses an "or" to compare two values.

IO.puts false and raise("This won't actually happen") # comparison is done left to right
IO.puts true or raise("Again this won't actually happen") # ditto

IO.puts 1 == 1.0 # loose comparison of int 1 to double 1.0
IO.puts 1 === 1.0 # strict comparison of int 1 to double 1.0

IO.puts 1 < :something # WAT!? int 1 compared to atom something?

The first two statements work as they read, for the most part. One interesting part is that false is both a keyword and an atom; false and :false are actually the same thing, and the same goes for true and :true. The next couple of lines show that and and or are evaluated from left to right: if the left side already determines the result, the right side will not execute. In our case we don't receive any errors, even though both lines contain a raise. People familiar with javascript will know the difference between 1 == 1.0 and 1 === 1.0. For those that don't, the simplest explanation is that == compares values while === compares values and types; the difference is in the strictness of the comparison, and Elixir does the same thing here. The most interesting comparison, I think, is the last one. It compares an integer, 1, to the atom :something. Now you may think this comparison is done using the length of the string something, but that isn't it. It is actually built into Elixir that integers are always less than atoms. This is really valuable to know when sorting lists of different data types. Elixir will compare using the following rules:

number < atom < reference < functions < port < pid < tuple < maps < list < bitstring
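Here is a quick illustration of those rules, an example of my own:

IO.inspect Enum.sort(["a", :b, 1, {1, 2}])
# prints [1, :b, {1, 2}, "a"] because number < atom < tuple < bitstring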

Now that we know we can compare anything to anything else, let's see what happens when we run our code.

Hello World
10
false
true
9
3
6
5
2
false
ok
Stuff
DOKY
Hello World
false
true
false
true
true
false
true

This is all fantastic, but I've always read that if you are doing if statements and lots of comparisons in your functional code then you are likely not thinking entirely functionally. In functional languages there is a thing called matching that kind of takes the place of conditional statements. Let's look at how we can do matching. In your hello_world.exs add:

x = 1 # assign x to be 1
IO.puts 1 = x # match 1 to x

{ a, b, c } = {"Hello", "World", "!" } # match a,b,c to "Hello","World","!"
IO.puts a # a matched to "Hello"
IO.puts b # b matched to "World"
IO.puts c # c matched to "!"

{ :ok, result } = { :ok, 15} # match :ok assign result to value after :ok
IO.puts result # should print 15

[ a, b, c ] = [ 5, 7, 2 ] # match a,b,c to 5,7,2
IO.puts a # a matched to 5
IO.puts b # b matched to 7
IO.puts c # c matched to 2

[head | tail] = [ 6, 7, 1 ] # match head to 6 and tail to the rest
IO.puts head # should be 6
IO.puts hd(tail) # tail should be [7,1] head of tail should be 7

[h | _] = [2, 3, 7, 9] # match h to 2 let _ match on anything else
IO.puts h # should be 2

Okay, that is quite a bit of code. Let's break it down:

x = 1 # assign x to be 1
IO.puts 1 = x # match 1 to x

This does two operations: first we assign x to be 1, then we match 1 to x. This seems strange, because the = operator is performing two completely different operations. This should actually print 1.

Next we have:

{ a, b, c } = {"Hello", "World", "!" } # match a,b,c to "Hello","World","!"
IO.puts a # a matched to "Hello"
IO.puts b # b matched to "World"
IO.puts c # c matched to "!"

This will actually take the value in element 0 on the right and assign it to the variable in element 0 on the left. The same happens for elements 1 and 2. This means we end up with three variables whose values match the values on the right side. This is a match using tuples. Our next match is another one done with tuples, but it has a slight twist:

{ :ok, result } = { :ok, 15} # match :ok assign result to value after :ok
IO.puts result # should print 15

You will notice that this one has an atom as the first element. What this means is that in order for this to match, and not throw an error, the value on the right has to have :ok as its first element. If the two do match correctly, the result variable in the left tuple will be assigned the corresponding value from the right tuple. In this case result is assigned the value 15.

Let's just think about that for a minute. This means we could have code that matches using conventions such as :ok for worked and :error for didn't work. Then our code could handle the separate paths correctly without ever writing an if statement. That is actually really cool to think about. We will see that in action later, but a tiny sketch of the idea follows.
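Here is my own tiny illustration (not code from the repo) using a multi-clause anonymous function:

handle = fn
  {:ok, result} -> IO.puts "Worked: #{result}"
  {:error, reason} -> IO.puts "Didn't work: #{reason}"
end

handle.({:ok, 15})          # prints "Worked: 15"
handle.({:error, "boom"})   # prints "Didn't work: boom"

Each clause matches a differently tagged tuple, so both paths are handled without a single if. Let's first look at doing matching with lists.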

Our first list match is:

[ a, b, c ] = [ 5, 7, 2 ] # match a,b,c to 5,7,2
IO.puts a # a matched to 5
IO.puts b # b matched to 7
IO.puts c # c matched to 2

This looks pretty similar to the tuple match we did before. The only thing really changing here is the underlying data structure. On the surface we see almost no difference except that we have [] instead of {}; other than that the outcome is about the same. But let's take a look at the next match we perform on lists:

[head | tail] = [ 6, 7, 1 ] # match head to 6 and tail to the rest
IO.puts head # should be 6
IO.puts hd(tail) # tail should be [7,1] head of tail should be 7

This match says: place the head of the list on the right in the head variable and place the tail of the list on the right in the tail variable. This gives us 6 and [7,1]. I can't think of a great use for this at the moment, but hopefully we will find one while working through our application. The final list match we do is:

[h | _] = [2, 3, 7, 9] # match h to 2 let _ match on anything else
IO.puts h # should be 2

I find this one interesting, because we are basically saying we only care about the head of the list on the right. The reason this says we don't care about the tail is that we can't actually read the _ variable; the match will still succeed without error. Think of _ as matching anything.

If we run our code now we should see:

Hello World
...
!
15
5
7
2
6
7
2

Okay, so our code should look like this, and I think we have a very basic understanding of how matching works. If you look at the commit history you will see some other examples, some not working. Let's move on to an example that uses some matching in a function:

f = fn
 x, y when x > 0 -> x + y
 x, y -> x * y
end
IO.puts f.(6, 4)
IO.puts f.(-1, 5)

Now, ignore the poorly named function and focus on the function body. We see that the function body consists of two lines; however, the two lines are actually two different matchers. In one case we are saying: perform x + y when x > 0. In the other match we are saying: otherwise, perform x * y. I have an inkling that this is the kind of pattern matching people are talking about when they discuss functional languages. I can see the power in doing something like this. We should receive output like the below when we run our code:

Hello World
...
10
-5

That is pretty nice. I've got two code paths happening, but I didn't actually have to use an if, switch, or any other conditional to make it work. I'm pretty sure this is the tip of the iceberg, but I can see the potential here. At this point I skipped over some of the case statement and conditional stuff found in my repo, as I get the feeling that avoiding the case keyword is the way to go in Elixir. My code can be seen here.

The next thing I want to look at is using maps. My sample contains other examples of tuples, strings/bitstrings, and if statements; however, I want to skip ahead to using maps. Let's put the following in our hello_world.exs:

simple_map = %{ :a => 5, 2 => :b} # Map with :a equal 5 and 2 equal :b
IO.puts simple_map[:a] # should print 5
IO.puts simple_map[2] # should print b
IO.puts simple_map[:c] == nil # should print true

better_map = %{ :a => 65, :b => 8, :c => 98 }
IO.puts Map.get(better_map, :a) # should print 65
IO.puts better_map.c # should print 98

people = [
 jack: %{ name: "Jack", age: 56, languages: ["Elixir", "Spanish", "English" ]},
 mary: %{ name: "Mary", age: 23, languages: ["Javascript", "English", "React" ]}
] # Map with nested properties

IO.puts List.first(people[:jack].languages) # should print Elixir

updated_people = update_in people[:mary].languages, &List.delete(&1, "Javascript")
IO.puts List.first(updated_people[:mary].languages) # should print English

The above code creates three maps. The first map has two keys, :a and 2, with values 5 and :b respectively. The nice thing about maps is that if a key doesn't exist you will get back nil; that is the case when we do IO.puts simple_map[:c] == nil, which should print true. The second map declared is better_map. This one is nearly identical to simple_map, except all of its keys are atoms, which is what allows the better_map.c syntax. The next interesting part is people. The people list is a keyword list holding the maps for :jack and :mary. If you have seen JSON before, this should look familiar; a map in Elixir is similar, in syntax, to a JSON object. For now I'm thinking of maps as immutable JSON objects. The list is also immutable.
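While we are here, one aside of my own that the script above does not use: atom-keyed maps also support an update syntax which, like everything else, returns a new map:

better_map = %{ :a => 65, :b => 8, :c => 98 }
updated = %{ better_map | c: 99 } # new map with :c replaced
IO.puts updated.c    # prints 99
IO.puts better_map.c # still prints 98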

The next piece of interesting code is updated_people = update_in people[:mary].languages, &List.delete(&1, "Javascript"). This line will remove "Javascript" from the list of mary's languages. Keep in mind that because Elixir uses immutable data structures, update_in returns a new list of people; it will not modify the existing people list. Let's go ahead and run our script to make sure what we think should happen actually happens:

...
5
b
true
65
98
Elixir
English

Yep, that worked, but let's check one more thing. Add the following to your code:

IO.puts List.first(people[:mary].languages)

I would expect this line to print Javascript, since the original people list should be unchanged. The only way to know is to run the script and find out:

5
b
true
65
98
Elixir
English
Javascript # Bingo!

That worked as expected. Well, I think that is pretty good for today. We essentially have a working Elixir script that helped us learn all kinds of things about Elixir. I feel much better about moving forward and starting to create some real server side code. My source up to this point can be found here.


Trader-App: Motivation

Today I'm starting work on a new application that will be completely open source. I've always had an interest in stock trader applications, but I've never worked in a place that required me to build one. Interestingly, I've also never had a huge interest in trading stocks. However, stock tickers seem to create an interesting set of problems.

Source: https://github.com/bryceklinker/trader

Languages: Elixir, Elm

What makes them interesting is that they need to update and respond to external events in sub-second intervals, because the market is constantly changing price quotes, valuations, and the like. If you are a trader, this information needs to be up-to-date, accurate, and visible. This indicates to me that trader applications really do need to be built using some sort of reactive architecture that is highly scalable. How you achieve the reactiveness of the application isn't nearly as important as making sure the application reacts quickly, under a second, to changing data.

Because of this reactive nature I believe Elixir and Elm could provide a great solution. I believe this for one reason: both languages are functional and immutable by default. Functional languages are starting to become all the rage for highly scalable applications. Take a look at Jet.com and WhatsApp for examples of companies leveraging functional languages to create highly scalable products. Jet and WhatsApp use different languages, F# and Erlang respectively, but both of those languages are immutable by default and functional.

I've chosen Elixir (a language built on the Erlang VM) and Elm (a Haskell-inspired language targeted at building UIs). I chose these two languages because, in addition to building my first trader application, I wanted to take myself out of my comfort zone of .NET (backend) and Angular (front-end).

The series of Trader-App articles will focus on building and learning trader applications/APIs, Elixir, and Elm. I'll start with learning Elixir.


Dumpster Fire: Angular 2 from CLI

In the previous blog post I looked at how to set up Angular 2 from scratch using webpack and typescript. This time we are going to set up the same application using the Angular CLI.

Origin Post: Dumpster Fire: Origin

Angular 2 from Scratch: Dumpster Fire: Angular 2 from Scratch

Source: ng2-cli-dumpster-fire

Tools/Frameworks: Angular 2 (ng2), karma, protractor, SystemJs, angular-cli, BroccoliJs

Software: node/npm, git, VSCode

Right now we have an empty repository, which can be seen here. At this point we only have a LICENSE file. Now we need to install the angular cli; to do this we will need a terminal/command prompt. In the terminal run:

npm install angular-cli -g

This unfortunately takes a little while, but the benefits of the cli are pretty amazing. Once the angular cli is installed, use your terminal to go to your local repository. Now you will need to run the init command.

ng init

This will initialize an angular application for you. My version can be seen here. Let's take a quick look at what the init command does.

In your repo you will see lots of json files. These files configure the tools used to build, deploy, and run your newly created angular 2 application. Let's start by running the test command to run all the tests in our application.

ng test

My repo runs karma using chrome. I see 2 passing tests, along with some nice output indicating when the project is being built before the tests run. Next we can run the serve command.

ng serve

This again gives us some nice output saying that it is building the application. It will then start a local web server at http://localhost:4200/. When I look at the new page I see the text "app works!" in the browser. This web server, like webpack, will monitor your application for file changes; it will then rebuild your application and reload your browser automatically. This is all powered by the broccoli and livereload modules. The next command to look at is e2e.

To have e2e work properly you need to have your web server up and running. To do this, open another terminal and run:

ng serve

In another terminal you should be able to run:

ng e2e

This command will run the end-to-end tests using protractor. At this point we have created an angular 2 application complete with unit tests, end-to-end tests, a live reload server, and a build. The amazing part is that the angular team has made the effort to create a simple cli that sets up a modern front-end workflow that would normally take hours. For me, anyways, I was able to get a working angular 2 build pipeline set up with the angular cli in about one hour while writing this blog post. This is in stark contrast to the six or seven hours it took me to set up angular 2 with webpack from scratch while writing the last blog post.

As a bonus to readers, this one only took ~500 words. Whew, that is one great cli. The unfortunate part about the cli at this point is that it uses the rc4 version of angular instead of the rc5 version I was using in my last post. This of course will change over time; as the angular team closes in on release, the cli will continue to improve. We should also note that the angular cli is heavily based on the ember cli, so heavily based that the angular team has included parts of ember in their solution. I don't view this as a bad thing. The ember-cli has been around for a long time and the ember community has benefitted greatly.

One of the largest advantages of the angular cli is consistency across angular projects. Right now, the amount of fracture in the javascript community is creating churn, and this churn makes it hard for anyone to keep up with javascript and its libraries. I'm happy to see the angular team trying to bring its community together with a common set of conventions and build tools. The cli should help the angular community avoid the churn and fractures present in many other javascript frameworks. This is the same advantage the ember team has had for a long time.

I for one am hopeful the angular team can bring the productivity of ember to the angular community.
