Front-end Madness: Simple Back-end

Last time we added a simple customers list to an Angular application. One part that was failing was the http call to the back-end. Since this series is focused on front-end technologies, I only want to cover the back-end code briefly.

Express REST Api

The back-end for this series of posts is an expressjs REST api. I created the back-end api before starting on the last post.

CURRENT BACK-END

The back-end code does have a set of tests to ensure it works. The tests are written using mocha and chai. Notice that I didn’t list a mocking framework; the reason is that I simply don’t need to worry about mocking data. The back-end is simple enough that I can test the entire thing without worrying about mocks, fakes, or test doubles.

Back-end Tests

To start I want to take a quick look at how the api is tested without mocks, fakes, or doubles of any kind. Since the last post dealt with showing customers in the Angular application, I want to take a look at one of the customers api tests.

# customers.spec.ts
...
describe('Customers', () => {
    let httpServer: http.Server;
    let baseUrl: string;

    before(() => {
        const result = setupServer();
        httpServer = result.httpServer;
        baseUrl = result.baseUrl;
    })

    it('should return empty customers', async () => {
        const result = await getJson<ResultList<Customer>>(`${baseUrl}/api/customers`);
        expect(result).to.eql({ items: [] });
    })
    ...
    after(async () => {
        await tearDownServer(httpServer);
    })
})
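
The setupServer, getJson, and tearDownServer helpers do the heavy lifting here. As a rough idea of what they might look like, here is a sketch (the helper names come from the spec above, but the bodies and the createApp factory are my assumptions, not the repository’s exact code):

# helpers.ts (illustrative sketch)
import * as http from 'http';
import { AddressInfo } from 'net';
import { createApp } from '../src/app'; // hypothetical factory that builds the express app

export function setupServer(): { httpServer: http.Server, baseUrl: string } {
    const httpServer = createApp().listen(0); // port 0 asks the OS for any free port
    const { port } = httpServer.address() as AddressInfo;
    return { httpServer, baseUrl: `http://localhost:${port}` };
}

export function getJson<T>(url: string): Promise<T> {
    return new Promise((resolve, reject) => {
        http.get(url, res => {
            let body = '';
            res.on('data', chunk => body += chunk);
            res.on('end', () => resolve(JSON.parse(body) as T));
        }).on('error', reject);
    });
}

export function tearDownServer(httpServer: http.Server): Promise<void> {
    return new Promise(resolve => httpServer.close(() => resolve()));
}

Starting the server on a random free port keeps parallel spec files from fighting over the same port.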

This is a fairly simple test and was one of the first tests I wrote. Since I wrote it the code has evolved quite a bit. To follow that evolution you can check out the series of commits that involved this set of specs.

Initial back-end spec ⇒ Refactored Repository/Database ⇒ Lokijs to populate Id ⇒ Break down spec ⇒ Added orders api ⇒ Added products api ⇒ Added api to get orders for customer

Unfortunately, I wasn’t diligent enough at the time to capture more regular snapshots of the evolution of the api.

Notice that in these tests I start and stop the express app around each set of specs. The first question many people would ask is: doesn’t that make the tests really slow? To answer that we need to run them. To run the back-end tests you can use the commands:

cd ./backend
npm install
npm test

On my machine the tests take under 500 ms. For me this is fast enough that I’m able to get quick feedback that the api is working. The feedback I receive from these tests confirms that the express app starts, each endpoint functions as expected, and the lokijs database is read and written correctly.

Running the Back-end

To run the back-end api you can use the following commands:

cd ./backend
npm install
npm start

You should see the following output in your terminal:

Now listening at http://localhost:5000

At this point you should be able to use something like Postman or curl to start interacting with the api. Here are some examples using curl.

curl http://localhost:5000
curl http://localhost:5000/api/customers
# Powershell curl post
curl -Method POST -ContentType "application/json" -Body '{"name":"workingdev"}' -Uri http://localhost:5000/api/customers

# bash curl post
curl -d '{"name": "workingdev"}' -H "Content-Type: application/json" -X POST http://localhost:5000/api/customers

Since there is no authentication, interacting with the api is simple and easy; just what we want for now.

What Apis Exist?

The back-end has more than just customers as part of the api. Below is a listing of the apis that exist. Eventually I’d like to add Swagger or something similar to the api, but for now this will work (a quick example of calling one of the nested endpoints follows the list):

  • Customers
    • GET /api/customers
    • GET /api/customers/:id
    • GET /api/customers/:id/orders
    • POST /api/customers
    • PUT /api/customers/:id
    • DELETE /api/customers/:id
  • Orders
    • GET /api/orders
    • GET /api/orders/:id
    • POST /api/orders
    • PUT /api/orders/:id
    • DELETE /api/orders/:id
  • Products
    • GET /api/products
    • GET /api/products/:id
    • POST /api/products
    • PUT /api/products/:id
    • DELETE /api/products/:id

Running the Api and Angular App

The root of the repository is an npm package with a set of scripts to help with this. Running both applications can be done using the command:

cd {root of repo}
npm start # starts backend and each app

This is great. However, it doesn’t take care of our end-to-end test scenario. I like end-to-end tests to start up all apis, services, etc. that are needed for the test. To do this we should be able to modify the onPrepare method of the protractor.conf.js. Go to the angular-app directory of the repo and add the following to the protractor.conf.js:

...
const path = require('path');
const { spawn } = require('child_process');

let backendProcess;
exports.config = {
  ...
  onPrepare() {
    ...
    return startBackend();
  },
  onComplete() {
    if (!backendProcess || backendProcess.killed)
      return;

    backendProcess.kill('SIGTERM');
  }
};

function startBackend() {
  return new Promise((resolve, reject) => {
    backendProcess = spawn('npm', ['start'], { cwd: path.resolve(__dirname, '..', 'backend'), shell: true });
    backendProcess.stdout.on('data', data => {
      if (data.indexOf('Now listening') !== -1) // indexOf returns -1 when the text is absent
        resolve();

      console.log(`BACKEND: ${data}`)
    });
    backendProcess.stderr.on('data', data => console.error(`BACKEND: ${data}`));
    backendProcess.on('close', code => console.log(`BACKEND: Exited with code ${code}`));
  });
}

The Angular end-to-end tests will now start up the back-end api, so interactions with the api while the tests run will work.

CODE CHECKPOINT

Next we will start adding more components and interactivity to our Angular application.


Front-end Madness: Angular – Customers

For the first feature in our application I want to add the ability to view customers. I plan to use a master detail type of view.

In order to get started with our features I plan to bring in a few libraries that I hope others will at least look at when building Angular applications.

Extra libraries

To install these libraries we can run:

npm install @angular/cdk @angular/material --save

End-to-End Customers Test

To get started with our master detail list we’ll start by adding an end-to-end test. I like to organize all of my code by feature instead of object type. In light of this preference I’m going to remove the existing end-to-end test and add a new one for customers. Here is our new end-to-end test:

import {CustomersPage} from "./customers.po";

describe('Customers', () => {
  let page: CustomersPage;

  beforeEach(() => {
    page = new CustomersPage();
  })

  it('should show empty customers', () => {
    page.navigateTo();

    expect(page.getCustomersList().isPresent()).toBe(true, 'Customers list is not present on page');
    expect(page.getCustomers().count()).toBe(0, 'Customers list is not empty');
  })
})

This is fairly simple to start. We just expect to see an empty customers list. Since no customers exist this makes sense. Notice that we have added messages to our expectations so that it is easier to figure out what failed when our tests don’t work.

When we run this we will see:

Customers
 × should show empty customers
 - Expected false to be true, 'Customers list is not present on page'.
 at UserContext.<anonymous> (C:\dev\code\boot-xp\front-end-madness\angular-app\e2e\customers\customers.e2e-spec.ts:13:49)
 at new ManagedPromise (C:\dev\code\boot-xp\front-end-madness\angular-app\node_modules\selenium-webdriver\lib\promise.js:1067:7)
 at ControlFlow.promise (C:\dev\code\boot-xp\front-end-madness\angular-app\node_modules\selenium-webdriver\lib\promise.js:2396:12)
 at TaskQueue.execute_ (C:\dev\code\boot-xp\front-end-madness\angular-app\node_modules\selenium-webdriver\lib\promise.js:2970:14)
 at TaskQueue.executeNext_ (C:\dev\code\boot-xp\front-end-madness\angular-app\node_modules\selenium-webdriver\lib\promise.js:2953:27)
 at asyncRun (C:\dev\code\boot-xp\front-end-madness\angular-app\node_modules\selenium-webdriver\lib\promise.js:2860:25)
 at C:\dev\code\boot-xp\front-end-madness\angular-app\node_modules\selenium-webdriver\lib\promise.js:676:7
 at <anonymous>
 at process._tickCallback (internal/process/next_tick.js:160:7)

Now that we have a failing test we can move onto the actual functionality.

CODE CHECKPOINT

Reorganize Root Concerns

With an end-to-end test in place we know what we are trying to make happen. In this case I want to get rid of a lot of the existing code that is unnecessary for our application. I’m going to rename the app.component.* files to root.component.*. I’m also going to move those into a root folder to indicate they are part of the root application and not a specific feature. I’ve bounced back and forth between Root and Shell, but have settled on Root as most people seem to understand that. I’m also going to move the app.module.ts file into the root folder and rename it to root.module.ts. This keeps all of the root level concerns in one location for easy identification.

The above changes aren’t absolutely necessary, but they help me keep things clean and easy to find.

CODE CHECKPOINT

Updating Root Component

Once we have everything reorganized I feel comfortable updating the root component to have the application shell we actually need. Since we don’t have any need for navigating around our app yet I’m not going to add many navigation elements; however, I will be setting up routing within our root component so that adding future features is quick and easy.

Update the ./app/root/root.component.spec.ts to expect a router outlet to exist in the component:

import { TestBed, async } from '@angular/core/testing';

import { RootComponent } from './root.component';

describe('RootComponent', () => {
  beforeEach(async(() => {
    TestBed.configureTestingModule({
      declarations: [
        RootComponent
      ],
    }).compileComponents();
  }));

  it('should have router outlet', async(() => {
    const root = TestBed.createComponent(RootComponent);

    expect(root.nativeElement.querySelectorAll('router-outlet').length).toBe(1);
  }))
});

Notice that I’ve removed the existing content of the spec as there is no need for all of that. I also prefer to test the resulting html as much as possible since changes in the component are generally expected to change the html.

Okay, let’s go ahead and implement our RootComponent given our failing unit test:

# root.component.ts
import { Component } from '@angular/core';

@Component({
 selector: 'app-root',
 templateUrl: './root.component.html',
 styleUrls: ['./root.component.css']
})
export class RootComponent {
}

# root.component.html
<router-outlet></router-outlet>

We would expect this to pass our test as there is a “router-outlet” element in our html. However, we end up with an error such as the one below:

RootComponent should have router outlet FAILED
 'router-outlet' is not a known element:
 1. If 'router-outlet' is an Angular component, then verify that it is part of this module.
 2. If 'router-outlet' is a Web Component then add 'CUSTOM_ELEMENTS_SCHEMA' to the '@NgModule.schemas' of this component to suppress this message. ("[ERROR ->]<router-outlet></router-outlet>
 "): ng:///DynamicTestModule/RootComponent.html@0:0
 Error: Template parse errors:
 at syntaxError C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:485:22)
 at TemplateParser.webpackJsonp.../../../compiler/esm5/compiler.js.TemplateParser.parse C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:24661:1)
 at JitCompiler.webpackJsonp.../../../compiler/esm5/compiler.js.JitCompiler._parseTemplate C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:34601:1)
 at JitCompiler.webpackJsonp.../../../compiler/esm5/compiler.js.JitCompiler._compileTemplate C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:34576:1)
 at http://localhost:9876/_karma_webpack_/webpack:/C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:34477:48
 at Set.forEach (<anonymous>)
 at JitCompiler.webpackJsonp.../../../compiler/esm5/compiler.js.JitCompiler._compileComponents C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:34477:1)
 at http://localhost:9876/_karma_webpack_/webpack:/C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:34365:1
 at Object.then C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:474:33)
 at JitCompiler.webpackJsonp.../../../compiler/esm5/compiler.js.JitCompiler._compileModuleAndAllComponents C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:34363:1)
 'router-outlet' is not a known element:
 1. If 'router-outlet' is an Angular component, then verify that it is part of this module.
 2. If 'router-outlet' is a Web Component then add 'CUSTOM_ELEMENTS_SCHEMA' to the '@NgModule.schemas' of this component to suppress this message. ("[ERROR ->]<router-outlet></router-outlet>
 "): ng:///DynamicTestModule/RootComponent.html@0:0
 Error: Template parse errors:
 at syntaxError C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:485:22)
 at TemplateParser.webpackJsonp.../../../compiler/esm5/compiler.js.TemplateParser.parse C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:24661:1)
 at JitCompiler.webpackJsonp.../../../compiler/esm5/compiler.js.JitCompiler._parseTemplate C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:34601:1)
 at JitCompiler.webpackJsonp.../../../compiler/esm5/compiler.js.JitCompiler._compileTemplate C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:34576:1)
 at http://localhost:9876/_karma_webpack_/webpack:/C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:34477:48
 at Set.forEach (<anonymous>)
 at JitCompiler.webpackJsonp.../../../compiler/esm5/compiler.js.JitCompiler._compileComponents C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:34477:1)
 at http://localhost:9876/_karma_webpack_/webpack:/C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:34365:1
 at Object.then C:/dev/code/boot-xp/front-end-madness/angular-app/node_modules/@angular/compiler/esm5/compiler.js:474:33)

This error looks pretty bad, but ultimately it is mostly stack trace information. The important part is:

'router-outlet' is not a known element:
 1. If 'router-outlet' is an Angular component, then verify that it is part of this module.
 2. If 'router-outlet' is a Web Component then add 'CUSTOM_ELEMENTS_SCHEMA' to the '@NgModule.schemas' of this component to suppress this message. ("[ERROR ->]<router-outlet></router-outlet>

Basically this means Angular doesn’t know what to do with the router-outlet element. At this point we have two choices:

  1. Use the RouterTestingModule in our test so that the router-outlet component is known to Angular.
  2. Tell Angular to ignore unknown custom elements using the ‘CUSTOM_ELEMENTS_SCHEMA’ (sketched below).
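
For contrast, option #2 would look something like the sketch below. We won’t use it, but it shows what suppressing the unknown-element error means:

import { CUSTOM_ELEMENTS_SCHEMA } from '@angular/core';
import { TestBed } from '@angular/core/testing';

import { RootComponent } from './root.component';

// Option #2: ignore unknown elements instead of teaching Angular about them.
TestBed.configureTestingModule({
  declarations: [RootComponent],
  schemas: [CUSTOM_ELEMENTS_SCHEMA]
}).compileComponents();

The downside is that any typo in an element name is silently ignored too, which can hide real mistakes.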

I prefer to use option #1 as often as possible, since the components I’m using are normally my own or part of an external library, and I want to ensure they behave the way I expect them to. In light of this we will add the RouterTestingModule to our test like so:

import { TestBed, async } from '@angular/core/testing';

import { RootComponent } from './root.component';
import {RouterTestingModule} from "@angular/router/testing";

describe('RootComponent', () => {
  beforeEach(async(() => {
    TestBed.configureTestingModule({
      imports: [
        RouterTestingModule.withRoutes([])
      ],
      declarations: [
        RootComponent
      ],
    }).compileComponents();
  }));

  it('should have router outlet', async(() => {
    const root = TestBed.createComponent(RootComponent);

    expect(root.nativeElement.querySelectorAll('router-outlet').length).toBe(1);
  }))
});

The updated spec above will now pass because we added the following code:

import {RouterTestingModule} from "@angular/router/testing";
...
describe('RootComponent', () => {
  beforeEach(async(() => {
    TestBed.configureTestingModule({
      imports: [
        RouterTestingModule.withRoutes([])
      ],
...
    }).compileComponents();
  }));
...
});

Our tests now pass with a message such as:

Chrome 63.0.3239 (Windows 10 0.0.0): Executed 1 of 1 SUCCESS (0.078 secs / 0.071 secs)

CODE CHECKPOINT

Add Customers Module

Finally, we are at a point where we can start adding the customers module. To do this we can return to the Angular CLI to generate our module and folder for us:

ng generate module customers --routing

This command will generate a .\customers\customers.module.ts and a .\customers\customers-routing.module.ts. The contents of both are:

# customers.module.ts

import { NgModule } from '@angular/core';
import { CommonModule } from '@angular/common';

import { CustomersRoutingModule } from './customers-routing.module';

@NgModule({
  imports: [
    CommonModule,
    CustomersRoutingModule
  ],
  declarations: []
})
export class CustomersModule { }

# customers-routing.module.ts

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

const routes: Routes = [];

@NgModule({
  imports: [RouterModule.forChild(routes)],
  exports: [RouterModule]
})
export class CustomersRoutingModule { }

The routes are currently empty for the customers module, but this is a nice way to get started creating a module.

Now we need to return to our root.module.ts and add routing to the module so that it can define how to get to the customers module.

Let’s add a routing module alongside our root.module.ts, similar to how the customers module is set up.

# root-routing.module.ts
import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

const routes: Routes = [
 { path: 'customers', loadChildren: 'app/customers/customers.module#CustomersModule'}
];

@NgModule({
 imports: [RouterModule.forChild(routes)],
 exports: [RouterModule]
})
export class RootRoutingModule {}

Now we need to add our RootRoutingModule to our RootModule imports:

@NgModule({
...
  imports: [
    BrowserModule,
    RootRoutingModule
  ],
...
})
export class RootModule { }

CODE CHECKPOINT

Add CustomersRoot Component

Now that we have an empty module we can start adding functionality to our customers module. To start we’ll use the Angular CLI:

ng generate component customers/components/CustomersRoot --module customers/customers.module.ts --spec

This will generate the component’s typescript, html, css, and spec. Now we can start writing tests against our CustomersRootComponent. Let’s add a test that defines how to get the list of customers:

import { async, ComponentFixture, TestBed } from '@angular/core/testing';

import { CustomersRootComponent } from './customers-root.component';
import {HttpClientTestingModule, HttpTestingController} from "@angular/common/http/testing";

describe('CustomersRootComponent', () => {
  let httpTestingController: HttpTestingController;
  let component: CustomersRootComponent;
  let fixture: ComponentFixture<CustomersRootComponent>;

  beforeEach(async(() => {

    TestBed.configureTestingModule({
      declarations: [ CustomersRootComponent ],
      imports: [HttpClientTestingModule]
    })
    .compileComponents();
  }));

  beforeEach(() => {
    httpTestingController = TestBed.get(HttpTestingController);

    fixture = TestBed.createComponent(CustomersRootComponent);
    component = fixture.componentInstance;
  });

  it('should get customers from api', () => {
    fixture.detectChanges();
    const req = httpTestingController.expectOne('http://localhost:5000/api/customers');
    expect(req.request.method).toBe('GET');
    req.flush([]);
  });

  afterEach(() => {
    httpTestingController.verify();
  })
});

This test should fail with a message similar to:

Error: Expected one matching request for criteria "Match URL: http://localhost:5000/api/customers", found none.

Now we can write the code required to pass this test:

import { Component, OnInit } from '@angular/core';
import {HttpClient} from "@angular/common/http";

@Component({
  selector: 'app-customers-root',
  templateUrl: './customers-root.component.html',
  styleUrls: ['./customers-root.component.css']
})
export class CustomersRootComponent implements OnInit {

  constructor(private httpClient: HttpClient) { }

  ngOnInit() {
    this.httpClient.get('http://localhost:5000/api/customers');
  }

}

Okay, this should work, right? Let’s check our test results:

Error: Expected one matching request for criteria "Match URL: http://localhost:5000/api/customers", found none.

WHAT?!?!?! I made the http call, it’s right there, what happened? One of the things to remember about Angular is that it utilizes RxJS heavily, including here. The HttpClient.get method returns an Observable. This means that unless something subscribes to the returned observable, the actual http call never happens. We need to update the component to the following:

...
export class CustomersRootComponent implements OnInit {
...
  ngOnInit() {
    this.httpClient.get('http://localhost:5000/api/customers')
      .subscribe(() => {});
  }

}
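
If the cold-observable behavior is new to you, here is a tiny standalone illustration (assuming a recent RxJS import style; this is not code from the repo):

import { Observable } from 'rxjs';

// Nothing happens when the observable is created...
const customers$ = new Observable<number>(observer => {
  console.log('the http call would fire here'); // runs only once something subscribes
  observer.next(42);
  observer.complete();
});

// ...the work only happens on subscribe.
customers$.subscribe(value => console.log(`received ${value}`));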

Okay, do our tests work now?

Chrome 63.0.3239 (Windows 10 0.0.0): Executed 2 of 2 SUCCESS (0.122 secs / 0.116 secs)

Oh good, they work now.

CODE CHECKPOINT

Why didn’t I use a service for the http call?

The reason for using HttpClient directly here is that it’s the simplest thing that could possibly work. Until I have a reason to extract a service for getting customers, there is no harm in using HttpClient directly.
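
For what it’s worth, the eventual extraction would probably look something like the sketch below (CustomersService is a hypothetical name, the Customer and ResultList shapes are assumptions mirroring the backend models, and the import style assumes RxJS 6+):

import { Injectable } from '@angular/core';
import { HttpClient } from '@angular/common/http';
import { Observable } from 'rxjs';

// Assumed shapes, mirroring the back-end's models.
interface Customer { id?: number; name: string; }
interface ResultList<T> { items: T[]; }

@Injectable()
export class CustomersService {
  constructor(private httpClient: HttpClient) { }

  // The component would call this instead of using HttpClient directly.
  getCustomers(): Observable<ResultList<Customer>> {
    return this.httpClient.get<ResultList<Customer>>('http://localhost:5000/api/customers');
  }
}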

What do we do with our retrieved customers?

We will return to our tests to define how we want our component to work:

...
describe('CustomersRootComponent', () => {
  ...
  it('should show customers list', async(() => {
    fixture.detectChanges();

    const req = httpTestingController.expectOne('http://localhost:5000/api/customers');
    req.flush({items: [{}, {}] });

    fixture.detectChanges(); // IMPORTANT
    fixture.whenStable().then(() => {
      expect(fixture.nativeElement.querySelectorAll('.customers-list').length).toBe(1);
      expect(fixture.nativeElement.querySelectorAll('.customer-list-item').length).toBe(2);
    })
  }))
  ...
});

Notice this test is very similar to our end-to-end test. The difference here is we are expecting there to be customers in our list instead of the list being empty. Let’s get this test to pass:

...
import {Customer} from "../../../../../../backend/src/api/customers/customer";
import {ResultList} from "../../../../../../backend/src/api/general/models/result-list";

@Component({
  ...
})
export class CustomersRootComponent implements OnInit {
  customers: Customer[];
...
  ngOnInit() {
    this.httpClient.get<ResultList<Customer>>('http://localhost:5000/api/customers')
      .subscribe(result => this.customers = result.items);
  }
}

The above code only fills in half the picture. We need to go to the customers-root.component.html to finish the rest of the required functionality.

<mat-list class="customers-list">
  <mat-list-item class="customer-list-item" *ngFor="let customer of customers">

  </mat-list-item>
</mat-list>

Notice I’m using mat-list and mat-list-item components. These come from @angular/material. Are the tests passing?

Chrome 63.0.3239 (Windows 10 0.0.0) CustomersRootComponent should show customers list FAILED
 'mat-list-item' is not a known element:
 1. If 'mat-list-item' is an Angular component, then verify that it is part of this module.
 2. If 'mat-list-item' is a Web Component then add 'CUSTOM_ELEMENTS_SCHEMA' to the '@NgModule.schemas' of this component to suppre
ss this message. ("<mat-list class="customers-list">

Nope, the tests still aren’t working. However, the error message should look familiar. This is the same kind of issue we saw when adding router-outlet to our RootComponent. We need to update our testing module:

...
import {MatListModule} from "@angular/material";

describe('CustomersRootComponent', () => {
  ...
  beforeEach(async(() => {
    TestBed.configureTestingModule({
      declarations: [ CustomersRootComponent ],
      imports: [
        HttpClientTestingModule,
        MatListModule
      ]
    })
    .compileComponents();
  }));

  ...
});

Now our tests should be passing:

Chrome 63.0.3239 (Windows 10 0.0.0): Executed 3 of 3 SUCCESS (0.23 secs / 0.212 secs)

Great, now we have a working CustomersRootComponent.

CODE CHECKPOINT

Add CustomersRoot To Customers Routing

Now if you were to return to our end-to-end test you would likely see an error like this:

1) Customers should show empty customers
 - Failed: Angular could not be found on the page http://localhost:49152/customers. If this is not an Angular application, you may need
to turn off waiting for Angular.
 Please see 
 https://github.com/angular/protractor/blob/master/docs/timeouts.md#waiting-for-angular-on-page-load

This tells us we are missing something. I believe what we are missing is a route pointing to our CustomersRootComponent. Let’s go to our CustomersRoutingModule to add a route:

...
import {CustomersRootComponent} from "./components/customers-root/customers-root.component";

const routes: Routes = [
  { path: '', component: CustomersRootComponent }
];

@NgModule({
  imports: [RouterModule.forChild(routes)],
  exports: [RouterModule]
})
export class CustomersRoutingModule { }

This solves part of the problem; however, we have an issue in our RootRoutingModule:

import { NgModule } from '@angular/core';
import { Routes, RouterModule } from '@angular/router';

const routes: Routes = [
  { path: 'customers', loadChildren: 'app/customers/customers.module#CustomersModule'}
];

@NgModule({
  imports: [RouterModule.forChild(routes)],
  exports: [RouterModule]
})
export class RootRoutingModule {}

I forgot to change the RouterModule.forChild(routes) call to RouterModule.forRoot(routes). Okay, now with that change everything should be working, right?

1) Customers should show empty customers
 - Expected false to be true, 'Customers list is not present on page'.

Nope, but that error looks better than before. Let’s try running our app to see if there are any unseen console errors:

npm start

Once the cli has finished building our app we can go to http://localhost:4200/customers. If you are like me, you’ll see the following error in the console of the browser:

Uncaught (in promise): Error: Template parse errors:
'mat-list-item' is not a known element:
1. If 'mat-list-item' is an Angular component, then verify that it is part of this module.
2. If 'mat-list-item' is a Web Component then add 'CUSTOM_ELEMENTS_SCHEMA' to the '@NgModule.schemas' of this component to suppress this message. ("<mat-list class="customers-list">

Aha, that looks familiar again. We changed our testing module for CustomersRootComponent; however, we didn’t add the MatListModule to our CustomersModule. We can do that fairly easily:

...
import {MatListModule} from "@angular/material";
...
@NgModule({
  imports: [
    CommonModule,
    MatListModule,
    CustomersRoutingModule
  ],
...
})
export class CustomersModule { }

Any luck? Nope, still an error in the browser’s console:

ERROR Error: Uncaught (in promise): Error: StaticInjectorError(RootModule)[CustomersRootComponent -> HttpClient]:

This is a bit different from the previous error, but we can use a similar fix. The issue is that Angular’s dependency injection doesn’t know what an HttpClient is, so it can’t inject one into our CustomersRootComponent. We need to add the following to our CustomersModule:

...
import {HttpClientModule} from "@angular/common/http";
...
@NgModule({
  imports: [
    CommonModule,
    MatListModule,
    HttpClientModule,
    CustomersRoutingModule
  ],
...
})
export class CustomersModule { }

Okay, now I only see one error in the browser’s console, complaining about the http call failing.

We will get to that next time. Let’s see if our end-to-end test works now:

1) Customers should show empty customers
 - Expected false to be true, 'Customers list is not present on page'.

Huh… That seems odd; everything works in the browser. Take a quick look at the CustomersPage object in our end-to-end test:

import { browser, element, by } from 'protractor';

export class CustomersPage {
  navigateTo() {
    browser.get('/customers');
  }

  getCustomersList() {
    return element(by.className('customer-list'));
  }

  getCustomers() {
    return element.all(by.className('customer-list-item'));
  }
}

Looks like I had the wrong class name in there; it should be customers-list, not customer-list. Try it now using the correct class name:

Customers
 √ should show empty customers

Finally, we have a working end-to-end test.

CODE CHECKPOINT

Wrapping Up

Up to this point we have our first couple of modules, components, tests, and one end-to-end test. One thing we know is broken is the http call to get customers from the api. We’ll look at that next time. I’ve set up a simple back-end that needs to be started as part of running our app and end-to-end tests.


Front-end Madness: Angular – Getting Started

Given the number of front-end frameworks to choose from I wanted to start looking at the similarities and differences between frameworks. In light of this I decided to start building the same application in each framework that I find interesting. I’ve decided to start with Angular as I’m more comfortable with Angular than most other frameworks.

Source Code: GitHub Repository

Start with the CLI

Anyone getting started with Angular should get their application started using the Angular CLI. This is an npm package that can be installed using the command:

npm install @angular/cli --global

Once the Angular CLI is installed you can start your project using:

ng new {app name}

This will give you a starting point that looks similar to the application found here.

The cli can be used with different flags to configure how to generate your new project. A few that I find handy are:

ng new {app name} --directory {target directory} # Specify directory for app
ng new {app name} --source-dir {app source directory} # Directory to place app code defaults to src
ng new {app name} --prefix {prefix} # Prefix to use for components when generating components using the cli

What’s in the box?

Now that we have a solid starting point let’s look at what has been generated.

The key files for the CLI and build process are:

  • ./.angular-cli.json – Defines how the cli will build, lint, bundle, and generate your application.
  • ./tsconfig.json – Defines how TypeScript will transpile your application.
  • ./src – Directory containing your application source code.
    • This is the directory name you are changing if you use the --source-dir flag.
  • ./e2e – Directory containing End-to-End protractor tests.
    • This can be skipped using the --skip-tests flag
  • ./src/main.ts – Entry point of your application.
  • ./src/polyfills.ts – Provides various polyfills.
    • The Angular team has been nice enough to leave lots of comments in this code indicating which polyfills are needed for various browsers
  • ./src/app/app.module.ts – Root application module
  • ./src/app/app.component.ts – Root application component
    • This is the root component in your application.
    • For more information on Angular Components check out the angular docs.
  • ./src/test.ts – Entry point for unit tests
    • This interestingly won’t be skipped when using the --skip-tests flag

There is a lot of other code generated for you, but these are the big hitters you need to know about to get started.

Running Your Application

You can start your application using one of the following commands:

ng serve # uses the angular cli to serve your application
npm start # uses npm to run the ng serve command

I prefer to use the npm commands instead of the cli commands.

You can run your unit tests using one of the following commands:

ng test 
npm test

You can run your end-to-end tests using one of the following commands:

ng e2e
npm run e2e

When developing my angular applications I generally have a command prompt running tests and serving my application so that I can quickly get feedback on my progress. Fortunately, the angular cli sets up npm start and npm test to watch for file changes.



Developer Learning

Recently, I’ve been giving more thought to the idea of how we can improve the training and development of developers. My main curiosity has been revolving around preparing new developers for what they will encounter in the “real world.” I keep bouncing back and forth between thinking developer boot camps are the way to go and thinking the traditional four year degree in CS, CIS/MIS, SE, etc. is the best way. However, I honestly think those two options are just the extreme ends of a spectrum of options.

Developer Boot Camps

I’ve had the privilege of working with a few developers that have come out of developer boot camps. What I have gathered from this experience is that developer boot camps focus on getting new developers up to speed on one technology stack. For example, this would mean taking a person from knowing nothing about development to being comfortable working in a Ruby on Rails application. In theory this sounds fantastic, and from my experience it works pretty well. The caveat here is that theory and the like go unknown to boot camp graduates, unless graduates pursue that kind of information on their own. (Disclaimer: I’m not an expert on boot camps and their curricula.) To be clear, when I say theory I’m talking about concepts like encapsulation, polymorphism, etc.

Looking at boot camps like this, they appear to live on the extreme side of just getting used to the tools, one language, and a small set of technologies. The developer is then left with knowledge of the mechanics but, I feel, has missed quite a bit of the why. The “why” is extremely important when it comes to deciding how to break down a system, how to split up work, how to increase application flexibility, etc. Essentially, without a deep understanding of “why,” a developer may not have the knowledge to make good design decisions. The design decisions I’m talking about here aren’t overarching application architecture, but decisions like: should I use inheritance here? Should I break this class/module/function down? Where does this functionality belong? These are questions every developer has to answer multiple times a day. Making good decisions at a micro level contributes to the overall maintainability of the system.

Now let me be clear: I think developer boot camps offer a tremendous way to enter the field of software development at a much more reasonable cost than a traditional four year degree.

Traditional Four Year Degree (CS, MIS/CIS, SE)

I can’t speak for everyone’s four year degree, but I know that when I entered the work force after college I was slapped in the face with “here’s an application we need you to work on.” I was a 22 year old kid just out of school, now working on a system that would be handling billions of dollars worth of transactions and inventory. I wasn’t a lead or anyone special; I was just another developer on the team. I did, however, feel unprepared for the development that was needed. For all my four year degree taught me, I hadn’t actually built a full blown application of any real size. The largest application I had built in college was probably ~2,000 lines of code max, and even that feels like a gross overestimation. My four year degree had focused almost entirely on theory subjects: encapsulation, polymorphism, algorithms, etc. This kind of learning prepared me for thinking through problems and understanding what terms meant, but I had no idea how to apply most of those theories in practice.

To me it is the putting of theory into practice that is completely lacking from at least my curriculum and, from what I’ve heard, other curricula as well. My four year degree didn’t teach me how to build applications. The closest thing to an application I built was a very simple php application that equated to about 500 lines total. It was also the only application I built that used an actual database. In the “real world” every application I’ve worked on has used at least one, if not a couple, different databases. Four year degrees seem to miss the mark when it comes to teaching students the mechanics of building applications. Four year degrees, to me, represent the theory extreme of the spectrum.

Finding the Middle Ground

Thinking of developer boot camps and traditional CS, MIS/CIS, SE degrees as extreme opposites pushes me to want to find a middle ground. There must be some way to combine the two ideas into a more effective approach. I can think of a few options that could be a middle ground:

  • In a four year degree have students select an application they will build throughout their degree?
  • In a boot camp could students spend a week or so pairing with experienced developers?
  • Should four year programs begin to partner with companies to get their students exposure to “real world” applications?
  • Should boot camps spend a week or more looking at code that exhibits a good and bad use of polymorphism, algorithms, etc?

Those are just a few ideas that could help close the gap. There are probably examples of each of these occurring already; I’m just unaware of them.


Deploy Azure Functions App From AppVeyor

At work recently we have been starting to use Azure Functions for a fairly small job in our infrastructure. Also at work we use AppVeyor for our CI/CD server. I like AppVeyor and it has served us well thus far. However, since the tooling and idea of Azure Functions are relatively new, it was a bit hard to find a good example of a way to deploy an Azure Functions App from AppVeyor. Luckily one blog had a good starting point; however, there were things about Azure Functions that I was unaware of and that were missing from the blog post. One of the biggest was how the function.json and host.json files are created.

Function.json

If you are new to Azure Functions or haven’t researched the output of building one, there are a few things you need to be aware of that happen behind the scenes, especially if you want some sort of CI/CD. In a functions app project you define your functions like below:

using System;
...

namespace Your.Function.App
{
    public static class YourFunction
    {
        [FunctionName("YourFunction")]
        public static void Run(
            // This trigger piece is important.
            [ServiceBusTrigger("{topic}", "{subscription}", AccessRights.Listen, Connection = "{connectionName}")] object message,
            TraceWriter logger)
        {
            // Perform your function logic here
        }
    }
}

The important piece to notice here is the function trigger. In this case the trigger is a service bus message arriving for a specific service bus topic. For a list of available triggers take a look here. If you are familiar with Azure WebJobs you may think that this is all you need; however, that trigger attribute is really just the first piece of the puzzle. The part of the puzzle we can’t see yet is the generated json file this attribute helps create. Here is a sample of the json that would result from the above code.

{
  "generatedBy": "Microsoft.NET.Sdk.Functions-1.0.0.0",
  "configurationSource": "attributes",
  "bindings": [
    {
      "type": "serviceBusTrigger",
      "connection": "{connectionName}",
      "topicName": "{topic}",
      "subscriptionName": "{subscription}",
      "accessRights": "listen",
      "name": "message"
    }
  ],
  "disabled": false,
  "scriptFile": "{relative path to output}\\{Your assembly name}",
  "entryPoint": "Your.Function.App.YourFunction.Run"
}

The big question here is how does this file get generated? You won’t see it in your solution, and if you deploy from Visual Studio you won’t even know, unless you look for it, that the file exists. I didn’t know this and didn’t realize it was needed; however, this file is what Azure uses to know how and when to trigger your function, so without it you won’t see any functions in your Azure Functions App. I know you won’t see any functions because that’s exactly what happened to me when attempting to duplicate Alastair Christian’s blog post. The issue I was seeing had nothing to do with the content of the blog; it was my lack of knowledge of how Azure Functions work.

What were we missing?

The part I missed when attempting to duplicate Alastair’s solution was a small one but extremely important in this context. He was using MSBuild to build his solution and I was using the dotnet cli.

In this case: msbuild.exe Solution.sln != dotnet build

Remember the all-important function.json file that gets generated? It turns out it is generated by some msbuild targets that Visual Studio’s version of msbuild uses and that the dotnet cli’s msbuild won’t have available (to be clear, there is likely a way to resolve this, but I didn’t dig for it).

How did we fix it?

The fix, when using AppVeyor (and likely any other CI server), is fairly straightforward. If you are using the dotnet cli for your existing builds but want to start using Azure Functions, be sure to change your scripts to use:

C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\msbuild.exe {solution}.sln

instead of:

dotnet build {solution}.sln

This will ensure that your output generates the function.json appropriately. You could attempt to maintain the function.json yourself; however, if the file can be generated from our code I don’t want to worry about it.
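
In appveyor.yml terms, that change might look something like the sketch below (the msbuild path is the one above; {solution} is a placeholder for your solution file):

image: Visual Studio 2017
build_script:
- cmd: '"C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\MSBuild\15.0\Bin\msbuild.exe" {solution}.sln /verbosity:minimal'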

Now that we build, how do we deploy?

This part is very straightforward and I was able to use Alastair’s example with one small tweak: I prefer to use appveyor.yml files instead of the AppVeyor interface for configuring our builds (this is a preference and direction we have chosen at work). So, as an alternative to the user interface, you can add deployment to your AppVeyor build by altering the example below:

image: Visual Studio 2017
...
artifacts:
- path: '{relative path to functions output}'
  name: '{name of artifact}'
...
deploy:
- provider: WebDeploy
  server: https://{azure site name}.scm.azurewebsites.net/msdeploy.axd
  website: {azure site name}
  username: {user name that has deployment access}
  password:
    secure: {secured password}

And that’s it; now you have an automated deployment of Azure Functions from AppVeyor. I hope that helps you or someone you know.


Thinking Reactively

Recently I’ve started a new job working with technology in the IoT space. This new experience is making me start to think about building applications differently. One of the main things I’ve noticed with this new work is the reliance on events coming from one or many different devices. The application we work on needs to react accordingly to each of these events. Reacting to all of these events makes me wonder if the usage of observables and a more reactive paradigm would be helpful.

Part of this train of thought comes from starting to work with React and Redux and seeing how they have changed the way we look at building front ends. I believe that React and Redux/Flux/{pick your fav. flux implementation} have shown a different and interesting way to think about building interactions. However, the only real change with these ideas is putting structured management around the state of your application and ensuring that actions (events) flow up and the changes to state flow down. This is the crux of what React and flux implementations have brought, and it is a great thing. For the record I have found these helpful.

Since I’ve found this idea/thinking helpful on the front-end, I’ve been curious: could we apply the same idea to the back-end? For our application I’ve been toying with the idea of maintaining a single global state (à la Redux). For applications consuming terabytes of data this likely wouldn’t be an option; however, our application is smaller scale and would need to keep a few thousand IoT devices in memory. The goal with this approach would be to aid in handling the random sequence of events the devices are outputting. Similar to Redux, each incoming event from devices would be treated as an action that one or more reducers would handle to produce a new global state. Once the new state is available it would then be broadcast (using websockets, http, webhooks, database?) to interested parties.

I’m unsure if our back-end would be able to sustain this kind of architecture. I’m also unsure of a good way to test this idea out to get a feel for whether back-end systems would benefit greatly from this approach. One of the issues we are seeing now is that we are placing caching in specific places or attempting to minimize trips to the database because they are slow. I’m curious if we could increase the amount of memory to support keeping nearly everything in a single global state. Once actions occur we could handle them in memory and sync them to a database asynchronously while allowing the rest of the application to continue handling events.
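
To make the idea concrete, here is a rough TypeScript sketch of what that event-to-state flow could look like (all names and shapes here are mine, purely illustrative):

// Illustrative only: a redux-style reducer over device events on the back-end.
interface DeviceEvent { deviceId: string; type: string; payload: unknown; }
interface GlobalState { devices: { [id: string]: DeviceEvent }; }

// Fold each incoming device event into a new global state.
const reducer = (state: GlobalState, event: DeviceEvent): GlobalState => ({
  devices: { ...state.devices, [event.deviceId]: event },
});

let state: GlobalState = { devices: {} };

// Stubs standing in for websockets/webhooks and asynchronous persistence.
const broadcast = (s: GlobalState) => console.log(`devices tracked: ${Object.keys(s.devices).length}`);
const persistAsync = (e: DeviceEvent) => Promise.resolve(e); // fire-and-forget database write

function handleIncomingEvent(event: DeviceEvent) {
  state = reducer(state, event); // new state, entirely in memory
  broadcast(state);              // push the new state to interested parties
  persistAsync(event);           // sync to the database without blocking event handling
}

handleIncomingEvent({ deviceId: 'sensor-1', type: 'temperature', payload: 21.5 });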

I’d be interested if anyone has tried or is doing this kind of thing with back-end systems. Does this even sound like an idea worth trying?


Trader-App: Meet Phoenix

In the previous post Trader-App: Hello Elixir we became more familiar with Elixir’s syntax. Now I think it’s time to get a working server set up so that we can start building an API for our trader application.

Source: https://github.com/bryceklinker/trader

Language: Elixir

Frameworks: Phoenix, Ecto, ExUnit

Tools: Hex

I’m sure you are wondering what the hell Phoenix is. In short, Phoenix is a framework for building web applications and apis using Elixir. I think of Rails, ASP.NET MVC, or ASP.NET WebApi when thinking of Phoenix; the difference is the language used to build the applications. To get Phoenix locally we need to do a few things in the terminal:

mix local.hex

This command will install or upgrade hex. Hex is a package manager used for Elixir and Erlang; think npm (nodejs), nuget (.NET), or bundler (ruby). The next thing we need to do is install Phoenix using Hex:

mix archive.install https://github.com/phoenixframework/archives/raw/master/phoenix_new.ez

The above command will install Phoenix and its dependencies using hex. Something to note about Phoenix is that it takes an optional dependency on nodejs. This is important to know if you plan to have Phoenix process your javascript, css, or other static assets. I’m not planning to do this as I plan to keep our server and client code completely separated.

Now that we have phoenix installed we can move on to creating our first Phoenix application. To do this we will run the command:

mix phoenix.new src/server

It’s important to know that src/server is the path to the directory where you want to put your Phoenix application; the path can be relative or absolute. My terminal happens to be at the root of my repo so src/server is the path I want to use. This creates the scaffolding for our Phoenix application. I’m going to delete the src/server/hello_world.exs from my repo along with the src/server/math.ex. These are no longer needed. At this point my repository looks like this. You will be prompted to install dependencies with a prompt like:

Fetch and install dependencies? [Yn]

I’m going to input Y. Let’s pause here and look at what we have now. The first thing I see is the mix.exs file. This file looks like it defines all of the dependencies required by our application. One of the really important dependencies here is ecto: this is your ORM; think Entity Framework (.NET), ActiveRecord (Ruby), or Mongoose (NodeJS), and ecto provides similar functionality. The next things I notice are package.json and brunch.js. Remember when I said that Phoenix takes an optional dependency on nodejs? This is the outcome of that dependency. Phoenix relies on brunch to compile and bundle your javascript, css, and html. Since I’m planning to use Elm for my front end I’m going to see if I can generate the project without any javascript, css, or html. Turns out this can be done using:

mix phoenix.new src/server --no-brunch

Aha! That looks much better. My repository now looks like this. No more package.json or brunch.js; that is exactly what I wanted. If you would like to continue using brunch or some other build tool you can take a look at Phoenix’s site to understand how that can be done.

Now that we have removed client-side packages and bundling we can continue with Phoenix. Let’s go ahead and start the Phoenix server to see what we get:

cd src/server
mix phoenix.server

This runs a little bit of what we need; however, it prompts me to install this thing called rebar. What is rebar? Rebar comes from the Erlang ecosystem. More information can be found here. I’m going to input Y when prompted with:

Could not find "rebar", which is needed to build dependency :fs
I can install a local copy which is just used by Mix
Shall I install rebar? 
(if running non-interactively, use: "mix local.rebar --force") [Yn]

This will install rebar. If you’re like me then you will see a bunch of errors similar to:

[error] Postgrex.Protocol (#PID<0.2992.0>) failed to connect: ** 
(Postgrex.Error) tcp connect: connection refused - :econnrefused

This is complaining about not being able to connect to a Postgres server. Phoenix defaults to connecting to a Postgres server; however, I don’t think I’ll need that. I’m going to delete my existing server folder and generate the project again. Turns out a Phoenix application can be generated without ecto and brunch with the command below:

mix phoenix.new src/server --no-brunch --no-ecto

Now that we don’t rely on a database, let’s try to run:

cd src/server
mix phoenix.server

Voila! Now we have a Phoenix application. My server started up on port 4000. My repo now looks like this.

So now we have a running Phoenix application. Let’s look at what is actually going on in our application.

When we ran the new command we ended up with lots of files and folders that were completely generated. This kind of stuff is great for productivity, but not understanding what is actually happening always drives me a little crazy. I had this same opinion when I started to learn Ruby on Rails. Generated code has always made me nervous, especially when I don’t understand the language or framework well. Let’s dig into the files and folders that have been generated.

First, let’s look at what tests have been generated. We can start by running the test command:

mix test

This yields the following output:

==> gettext
Compiling 1 file (.erl)
Compiling 19 files (.ex)
Generated gettext app
==> ranch (compile)
==> poison
Compiling 4 files (.ex)
Generated poison app
==> phoenix_pubsub
Compiling 12 files (.ex)
Generated phoenix_pubsub app
==> cowlib (compile)
==> cowboy (compile)
==> mime
Compiling 1 file (.ex)
Generated mime app
==> plug
Compiling 44 files (.ex)
Generated plug app
==> phoenix_html
Compiling 8 files (.ex)
Generated phoenix_html app
==> phoenix
Compiling 60 files (.ex)
Generated phoenix app
==> server
Compiling 13 files (.ex)
warning: variable tags is unused
 test/support/channel_case.ex:29

warning: variable tags is unused
 test/support/conn_case.ex:30

Generated server app
....

Finished in 0.03 seconds
4 tests, 0 failures

Randomized with seed 152766

There it is at the end: four passing tests. Let’s try to add one more simple test.

Since we are going to be creating a trader application, stock prices are pretty important. Let’s create a new channel for our stock prices:

mix phoenix.gen.channel StockPrices

This will create a new test and channel that we will use for getting stock prices. Running the tests will now reveal seven passing tests. Let’s look at what was just generated for us. Open up src/server/test/channels/stock_prices_channel_test.exs. This file should have:

defmodule Server.StockPricesChannelTest do
  use Server.ChannelCase

  alias Server.StockPricesChannel

  setup do
    {:ok, _, socket} =
      socket("user_id", %{some: :assign})
      |> subscribe_and_join(StockPricesChannel, "stock_prices:lobby")

    {:ok, socket: socket}
  end

  test "ping replies with status ok", %{socket: socket} do
    ref = push socket, "ping", %{"hello" => "there"}
    assert_reply ref, :ok, %{"hello" => "there"}
  end

  test "shout broadcasts to stock_prices:lobby", %{socket: socket} do
    push socket, "shout", %{"hello" => "all"}
    assert_broadcast "shout", %{"hello" => "all"}
  end

  test "broadcasts are pushed to the client", %{socket: socket} do
    broadcast_from! socket, "broadcast", %{"some" => "data"}
    assert_push "broadcast", %{"some" => "data"}
  end
end

I think we need to break this down a little bit. Let’s start with this:

defmodule Server.StockPricesChannelTest do
  use Server.ChannelCase

  alias Server.StockPricesChannel

The above code defines the module Server.StockPricesChannelTest. Since we haven’t touched on modules, we need to know what a module is in Elixir. Basically a module is a group of functions. Hopefully those functions are cohesive, but modules are basically a grouping mechanism. The next line, use Server.ChannelCase, is one example of how to consume a module. In this case the use macro tells Elixir that we want to require Server.ChannelCase. The other interesting thing use does is make the functions defined in the required module available in our module. Next, we have alias Server.StockPricesChannel; this is another way to require a module, and it is slightly different from use. In this case we want our module to expand its “lookup” area to Server.StockPricesChannel. What I mean when I say “lookup” is that if Server.StockPricesChannel defines a custom version of List.flatten, for example, then when we call List.flatten in our Server.StockPricesChannelTest module we will use the function defined in Server.StockPricesChannel. We can think of this as a way to get access to a module’s functions while keeping our keystrokes down.

Okay, now that we have an idea of what the first few lines do, let’s look at the setup function:

setup do
    {:ok, _, socket} =
      socket("user_id", %{some: :assign})
      |> subscribe_and_join(StockPricesChannel, "stock_prices:lobby")

    {:ok, socket: socket}
  end

The setup function here is pretty much the same as the setup functions found in most testing frameworks. The code in there will run before each test. This code does do some interesting things though. The first line:

{:ok, _, socket} =
      socket("user_id", %{some: :assign})
      |> subscribe_and_join(StockPricesChannel, "stock_prices:lobby")

socket("user_id", %{some: :assign}) is going to create a socket with the id "user_id" and assign some to the socket. The next part, |> subscribe_and_join(StockPricesChannel, "stock_prices:lobby"), will join the channel StockPricesChannel and subscribe to the "stock_prices:lobby" topic. We want to match the return with {:ok, _, socket}. This gives us a tuple with :ok and socket. The next thing we do is return {:ok, socket: socket} from the setup. The return is important because that is how our newly subscribed and joined socket is passed to our test functions.

This brings us to our first actual test:

test "ping replies with status ok", %{socket: socket} do
    ref = push socket, "ping", %{"hello" => "there"}
    assert_reply ref, :ok, %{"hello" => "there"}
  end

This is the first test we have seen, so let’s take a close look. The first line:

test "ping replies with status ok", %{socket: socket} do

This passes the name of the test, "ping replies with status ok", and also requests a map which will match the socket, %{socket: socket}. The socket the map receives is the socket we created and then subscribed and joined in our setup function. The next part we need to look at is the body of our test function:

ref = push socket, "ping", %{"hello" => "there"}

This part is going to push a message into the channel. To push a message we need a few things. The socket parameter tells the push method which socket and channel to push the message to. The "ping" parameter is the name of the event. The %{"hello" => "there"} parameter specifies the content of the message. The push method returns a reference. Our next line is the line that will assert we did the thing we wanted to:

assert_reply ref, :ok, %{"hello" => "there"}

This code is going to assert that we replied with :ok and data %{"hello" => "there"}. The ref parameter is the reference that should be checked.

The rest of the tests here are testing that we can broadcast data through the socket/channel. This is all fantastic and really easy; however, can we easily create a small bit of javascript to work with our new channel? For reference my code can be found here.

Let’s head over to our src/server/web/templates/layout/app.html.eex. In here we need to add a few script tags for js/phoenix.js and js/stock_prices.js. (The screenshot originally shown here displayed those two script tags.) This does mean we need to add src/server/priv/static/js/stock_prices.js; the src/server/priv/static/js/phoenix.js already exists. Now, we need to add a little bit of code to our src/server/priv/static/js/stock_prices.js:

(function(Phoenix) {
    var socket = new Phoenix.Socket('/socket');
    socket.connect();

    var channel = socket.channel('stock_prices:lobby', {});
    channel.join()
        .receive('ok', res => console.log('Resp: ' + JSON.stringify(res)))
        .receive('error', res => console.log('Error: ' + JSON.stringify(res)));

    channel.on('quotes', res => {
        console.log('RES: ' + JSON.stringify(res));
    })
})(window.Phoenix);

This is just plain old javascript. The global Phoenix object comes from our src/server/priv/static/js/phoenix.js script. With it we can join channels and receive or broadcast messages. Let’s make our channel publish a fake stock quote every second for now.

To publish a fake quote every second we need to add the file src/server/lib/workers/stock_prices_worker.ex:

defmodule Server.Workers.StockPricesWorker do
  use GenServer

  # Broadcast a fake quote, then schedule the next tick one second out.
  def handle_info(_msg, state) do
    payload = %{ :price => 1.02 }
    Server.Endpoint.broadcast("stock_prices:lobby", "quotes", payload)
    Process.send_after(self(), "get_quotes", 1000)
    {:noreply, state}
  end

  def start_link() do
    {:ok, pid} = GenServer.start_link(Server.Workers.StockPricesWorker, [])
    Process.send_after(pid, "get_quotes", 1000)
    {:ok, pid}
  end
end

The code above will create a price quote every second and send it to the "stock_prices:lobby" channel. This should work fine as a small proof that we have things working. We need to make a few more changes to get everything working together. If your repo is like mine, you will not see anything in the console when you run your application.

You first need to make a few other changes to files generated at the start of our project. This actually took me quite a while to figure out. First we need to modify our src/server/lib/server.ex to look like:

defmodule Server do
  use Application
  ...
  # Define workers and child supervisors to be supervised
  children = [
     # Start the endpoint when the application starts
     supervisor(Server.Endpoint, []),
     worker(Server.Workers.StockPricesWorker, [])
     # Start your own worker by calling: Server.Worker.start_link(arg1, arg2, arg3)
     # worker(Server.Worker, [arg1, arg2, arg3]),
  ]
  ...
end

The key code here is the worker(Server.Workers.StockPricesWorker, []). This will start our new worker when the Phoenix server starts. You can add as many workers as you would like to this list. Next, we need to modify our src/server/web/channels/user_socket.ex to utilize our channel:

defmodule Server.UserSocket do
  use Phoenix.Socket

  ## Channels
  channel "stock_prices:*", Server.StockPricesChannel
  ...
end

This is actually something that we should have done after generating our channel, but if you are like me you skipped that part in the console output. This is what tells Phoenix to use your channel.

With these two pieces in place we should now see our small website listing price quotes in the console. For now that is good enough, as the UI served from the server is not going to be used much, if at all, because we plan to use elm for the front-end of our trading application.

At this point everything should be working, but we actually have quite a bit of unnecessary code in our src/server/web/channels/stock_prices_channel.ex and src/server/test/channels/stock_prices_channel_test.exs. Change your src/server/test/channels/stock_prices_channel_test.exs to be:

defmodule Server.StockPricesChannelTest do
  use Server.ChannelCase

  alias Server.StockPricesChannel

  setup do
    {:ok, _, socket} =
      socket("user_id", %{some: :assign})
      |> subscribe_and_join(StockPricesChannel, "stock_prices:lobby")

    {:ok, socket: socket}
  end

  test "broadcasts are pushed to the client", %{socket: socket} do
    broadcast_from! socket, "broadcast", %{"some" => "data"}
    assert_push "broadcast", %{"some" => "data"}
  end
end

We removed the tests around pinging and shouting. This means we can also change our src/server/web/channels/stock_prices_channel.ex to look much simpler:

defmodule Server.StockPricesChannel do
  use Server.Web, :channel

  def join("stock_prices:lobby", _payload, socket) do
    { :ok, socket }
  end
end

Ah, beautiful code. Simple and to the point.

I think this gives me a good enough grasp of Phoenix to move on to getting a start on our elm front end. In the next article we’ll start hooking up elm to our newly created channel. My repo currently looks like this, and all tests are green.
