Protractor & BrowserStack from behind a firewall/proxy

We adopted these on our project and, as usual, I found myself fighting the proxy settings and the config to make them play nice. The pointers I found online were rather incomplete, so here you go. Hope it saves you some time and head bashing!

GitHub Gist

Posted in Uncategorized | Leave a comment

Testing and the lip service to it

The software industry is filled with praise and reasons for why unit/integration/functional testing is a good idea. A little bit like Agile, IMO. It’s rare to find people who will state on the record that they think it’s a bad idea. Test coverage tools are enthusiastically installed and run by directives from higher up. Testing framework support is built into all major IDEs and endless books extol its virtues. However, once you step into the trenches and look at the work being produced, it’s all too common to find that it was all just lip service 😉

So what’s wrong?

Once enough experts said out loud that testing was a good thing, developers assumed that it must be correct. When developers went back to work the next day, many found the concept of TDD obtuse and probably gave up after a quick try (it’s a bit like functional programming in that respect). While you are focused on writing straightforward methods, the value of testing those methods is not immediately obvious. It is difficult to imagine the situation where the codebase grows and now has hundreds of those straightforward methods. In my experience, the majority of production problems arise from these slip-ups in simple logic, as opposed to a misreading of some fundamental aspect.

The primary problem I believe lies in the way the value proposition of testing is put forward. It’s been advocated primarily as an academic ‘right’ thing to do as opposed to something with practical benefits contributing to success.

How should we think about testing?

Imagine that you are a freelancer and you have been approached to build software for a need, let’s say producing bills from a bunch of expenses. You start building; since there are no architects/leads/managers asking for tests, you haven’t bothered with them. At some point you write important code which calculates taxes, applies commission and such. As you keep adding features you do manual testing along the way to make sure stuff still works. All is going smoothly and the codebase has grown to a few thousand lines… the quick manual test you were doing now takes a bit longer – starting to break the flow a little. As you add more and more it starts to get disruptive, so now you start excluding test steps by judging the impact of a change.

The client calls up and asks for a change. Now you can’t quite remember all the details; it’s a small change to billing but you can’t be sure that you recall all the side effects accurately. So you make a small sub-optimal change; entropy has now sneaked in. Unfortunately, the client keeps making these changes – till one day the inevitable slip-up happens. A bill has been calculated incorrectly in Prod – how could this be – you scramble… ah, a small oversight… no problem… a minor heroic effort like skipping dinner with family… all good again. The good news is you are still getting more people using this software. Over time, more features are added. Then another slip happens… you probably start missing those tests around about now… but you don’t want to slow down your feature delivery, and it would take non-trivial time to add the missing tests. I hope you can see the pattern.

So now the lack of automated verification is likely to cost you actual reputation damage; is it worth spending time on it? Will it give you a competitive advantage?

IMO testing has to be looked at as a means to an end: more stable products which break less often and can be changed with a higher degree of certainty. Go after logic and calculations first. Cover the end-to-end configuration with some workflow tests. If a piece is static, once-off code, perhaps you can skip it for now. In a nutshell, testing is not dogma; think about what is important to you and balance it against the effort. Whatever you do, do not think you can scale a codebase without tests. That is just naive 🙂

 

 

Posted in Dev process | Tagged | Leave a comment

Using node-sass behind a firewall

LibSass is written in C/C++. It offers various wrappers for use, including node-sass.

npm install node-sass pulls the pre-built Sass lib from GitHub.

This won’t work for you in a corporate network. The install will typically fail with a proxy error while trying to download the pre-built binary.

This could mean that your proxy settings are wrong, but it is just as likely that the corporate network is blocking the connection. Next you will see a Sass build attempt in the log.

What’s happening here is that the npm installer is trying to build the binary, since it could not download the pre-built version. This gets tricky because the build script is going to look for a C++ compiler and a Python install.

Options

There are a few ways to deal with this:

--sass-binary-site

Looks like this was added by the LinkedIn guys to get around the firewall problem. It isn’t particularly well documented; see this commit for details. IMO this isn’t the best option since it relies on a fairly specific site hosting scheme.
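
If you do want to try it, the usage looks something like the line below. The URL is a placeholder for your internal mirror, and the mirror has to serve files under the same v<version>/<platform>-<arch>-<abi>_binding.node layout that node-sass expects – which is exactly the coupling that makes this option awkward.

npm install node-sass --sass-binary-site=https://artifacts.mycorp.example/node-sass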

--sass-binary-path

This takes the full path including the name of the binary. This path is used to look up the local binary. For example:

npm install --sass_binary_path="C:\src\v3.3.6\win32-x64-46_binding.node"

This is the best option, IMO.

Download the binary. Package it locally and npm install with the flag.
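
On a machine with open internet access, the pre-built binary can be grabbed from the node-sass releases page on GitHub – pick the file matching your node-sass version, platform, architecture and Node ABI (e.g. win32-x64-46_binding.node above), drop it somewhere reachable, and point the flag at it.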

--sass-binary-name

This only takes a part of the binary name. From the code (node_modules\node-sass\lib\extensions.js), the full name is resolved like so:

return [binaryName, 'binding.node'].join('_');

This seems to be used for constructing the download URL, so it’s not much use in this scenario.

Posted in Uncategorized | Leave a comment

Knockout and React together


My team has built a large KO app over the last 2 years. KO has been a great library, but it’s time to move on.

KO and React play well together since the core philosophy is the same: small libs doing one thing well.

We are not looking at mingling KO and React code; rather, we’ll leave our KO code in place, start hooking in React components and look to evolve from there.

Components

Passing data

  • Data can be passed to React component via props in the KO binding
  • Use Amplify/SessionStorage to pass messages between KO and React

Now the KO component can be added to the DOM as usual and the rendering will be passed on to React.

Posted in Tech bits, Web development | Leave a comment

Site Reliability Engineering @ Google


Overall, a great write-up. As an engineer mostly involved on the build side, it introduced me to a number of good ideas and confirmed others. It’s Google DevOps++.

Here is a quick bite-sized packaging of the main takeaways. Watch out for the key points.

Teams 

  • SRE teams are staffed by a mix of sys admins and software developers
  • The aim is to spend no more than 50% of individual time on ‘toil’. SREs write code towards that aim of making Google’s systems run themselves – which also results in a large acceptance of change

Error budgets 

Acknowledge the fundamental tension between ops (stability) and dev (change) – align both to focus on delivery speed within acceptable risk boundaries.

The error budget stems from the observation that 100% is the wrong reliability target for basically everything – the right target is a product-specific question.

The error budget determines how unreliable the service is allowed to be within a single quarter. As long as there is error budget remaining, new releases can be pushed. This gives SRE and Dev teams focus and structure to find the right balance between innovation and reliability.
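
To make that concrete: a 99.9% quarterly availability target leaves an error budget of roughly 2.2 hours of downtime per quarter (0.1% of ~90 days). While some of that budget remains, new releases can keep going out; once it is exhausted, they wait for the next quarter.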

Principles

Eliminating Toil

  • Toil is defined as boring, repetitive tasks with no enduring value. 50% of your time is supposed to be spent on building stuff to eliminate toil
  • Toil leads to boredom > discontent > quitting

Risk

Explicitly align the risk taken by a given service with the risk the business is willing to bear – make a service reliable enough, but no more reliable than it needs to be. Work with the product owners directly to establish the threshold

Monitoring

The golden signals to monitor on a service:

  • Latency
  • Traffic
  • Errors
  • Saturation

Machines are instrumented out of the box to a large extent. Email is discarded as the primary notification mechanism. There are defined levels of alerts:

  • Pages – respond now
  • Tickets – respond later

Automation

Emphasis on building systems to be automatic, not just ‘automated’ – a system should require minimal babysitting and take reasonable steps to respond to anomalies – for example, the DB notices problems and fails over automatically

Release engineering

Focus on a self-sufficient/self-service model for the consuming teams. `Release engineering` is a separate function which develops tools and best practices.

Builds

Builds are hermetic, meaning that they are insensitive to the libraries and other software installed on the build machine. Instead, builds depend on known versions of build tools, such as compilers, and dependencies, such as libraries – in other words, always reproducible.

Branching

  • All code goes into the main branch
  • Branch from main for a release > this is never merged back
  • Bug fixes are submitted to main and then cherry-picked into the branch for inclusion in the release

Tests

  • In addition to CI, tests are run in the context of what’s being released
  • An independent testing environment runs system tests on packaged build artifacts

Configuration

  • Config files are external to the binary
  • Dynamically changing config goes into a central storage

Practices

Being on call

  • Flexible alert delivery systems that can dispatch pages via multiple mechanisms (email, SMS, robot call, app) across multiple devices
  • Limiting the number of engineers in the on-call rotation ensures that engineers do not lose touch with the production systems

Effective troubleshooting

  • Open bug for every issue reported
  • While troubleshooting a high-volume service it might not be feasible to log everything. Log one out of every 1000 requests (for example) and use a statistical sampling approach.

Postmortems

  • Primary goal is to ensure that the incident is documented, root cause understood and preventive steps are put in place
  • Blameless postmortems are a core tenet – focus on the contributing causes of the incident without indicting individuals or teams for inappropriate behaviour. IMO this has to do with psychological safety – if we concentrate on blame, it inhibits open sharing.

Testing for reliability

Similar to the take on reliability, the theme is fitness for purpose. The level of testing is proportional to the criticality of the system in question – as opposed to thoughtless targets like ‘100% coverage’.

Reliable product launch at scale

  • A dedicated consulting team within SRE, staffed with experienced SRE engineers, tackles the task of launching at scale. The aim is to have a process which is:
    • Lightweight – engineers will sidestep burdensome processes
    • Robust
    • Adaptable – caters to everything from small changes to high-visibility public announcements
  • Launch control works on a checklist basis – points are mostly drawn from experience and serve to provide the appropriate level of rigor/facilitate conversations
  • Updates go out in a rolling manner with verification steps interspersed

It has taken Google 10 years to fine-tune the process, and the book admits that there have been low points where the difficulty of launching a new service had become ‘legendary’.

Management

In the next post…

Posted in Book review | Tagged | Leave a comment

Kotlin … first impressions

In case you missed it, Kotlin is a relatively new JVM language from JetBrains (the creators of IntelliJ). It compiles to JVM bytecode and largely aims to be a better Java.

Language philosophy tweaks 

Not everything has to be a class

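A quick illustrative sketch (names are my own): top-level functions live happily in a file, no wrapper class required.

// greet.kt – no enclosing class needed
fun greet(name: String): String {
    return "Hello, " + name
}

fun main(args: Array<String>) {
    println(greet("Kotlin"))
}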

NPE safety

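A minimal sketch of the safe-call operator, using made-up types:

class Address(val city: String?)
class Customer(val address: Address?)
class Order(val customer: Customer?)

fun cityOf(order: Order?): String? {
    // each ?. short-circuits to null instead of throwing
    return order?.customer?.address?.city
}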

This will resolve the property chain without blowing up with an NPE in case of a null along the way.
If a variable is declared as nullable (using the ? on its type), Kotlin keeps track of potential NPEs for you and forces you to deal with them – nice.

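For example (an illustrative snippet of my own), the compiler refuses a direct call on a nullable value until you have handled the null case:

fun shout(message: String?): String {
    // message.toUpperCase() alone would not compile – message might be null
    return if (message != null) message.toUpperCase() else "(nothing to say)"
}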

I haven’t checked it out yet, but Kotlin also has true closures (which capture scope).

Smooth over Java’s annoyances

Goodbye annoying String additions

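String templates replace the + concatenation dance – a small illustrative example:

fun greeting(user: String, count: Int): String {
    // $ interpolates values straight into the string
    return "Hi $user, you have $count new messages"
}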

Goodbye verbose collections

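Something like this (my own example) stands in for pages of Java boilerplate:

val primes = listOf(2, 3, 5, 7, 11)
val doubledOdds = primes.filter { it % 2 != 0 }.map { it * 2 }   // [6, 10, 14, 22]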

Type inference… finally
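
The compiler works the types out for you – an illustrative example:

val answer = 42                              // Int inferred
val names = mapOf(1 to "one", 2 to "two")    // Map<Int, String> inferred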

Tooling and interoperability

IntelliJ tooling and Java interoperability are seamless.

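Calling existing Java classes from Kotlin just works – a small sketch of my own:

import java.util.UUID   // a plain Java class

fun newId(): String = UUID.randomUUID().toString()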

Kotlin is a joy to use. It’s got a Python-like flavor with a pragmatic approach.
I like it so far.

Posted in Uncategorized | Leave a comment

@JavaLand

Why does everything (say Foo) need to have an interface … IFoo … can you come up, in a minute, with three possible implementations you will need over the next 3 months?

If you can’t… then you don’t need the interface. An interface is a contract … it only has intrinsic value if you have/need multiple implementations of that contract. If you do not have that use case… you do not need to create clutter by mechanically creating an interface … YAGNI. This is all over the Java landscape:

SimpleFoo implements IFoo… That’s it … no other implementation. What’s the point of IFoo?

Let’s say you can think of multiple implementations … before your first implementation… do you know for certain what the contract is going to look like?

This is rarely the case … it’s much better to rely on the Extract Interface refactoring … build what you need first … use TDD if possible so you know that your code is easy to interact with … then introduce an interface if/when you need a second implementation. By defining IFoo first you are unnecessarily locking yourself into a contract which you might not be able to honor.

A spurious IFoo is very `enterprisey` and all it lets you do is bask in the glow of an accidentally complex `enterprise` app.

I feel much better now 🙂 Better out than in – Shrek

Posted in Uncategorized | 2 Comments