Live site at launch (https://stpf.arup.digital/)

Major project delivery: a retrospective look

Liam Hanrahan
8 min read · Nov 27, 2016

--

As our team grows we’re trying to be more open about our development processes, and documenting our project retrospectives is part of this. We have recently completed a reasonably large project (for us), with development lasting almost six months and involving shifting scope and timescales.

The project was designed to let users understand planned, ongoing, and completed transport projects that form part of the Transport for New South Wales Future Transport program.

We were tasked with developing a single-page web app that allowed users to understand the projects under the Future Transport umbrella and to explore the demographics of a changing NSW that have influenced the program (historical and forecast population and employment).

You can see the results of the project here: https://stpf.arup.digital/

Approach

For this project we decided to take an agile approach to development: we broke the work into two-week ‘sprints’, each with a set of functionality to implement by the sprint’s end. In our experience this form of development can be implemented very haphazardly, but it allowed us to present the client with a working prototype that was updated weekly. With this approach we could quickly narrow down what the client did, and didn’t, want in the product.

The risk, though, is that while this allows for overall flexibility in outcomes, it doesn’t leave much room within a sprint to absorb shifting political pressures. It is important to ensure the client and delivery teams are on the same page in this regard, and to trust the process to deliver rather than fall back into the peculiar iterative-waterfall hybrid that typically comes about as pressures rise and new information surfaces. Something to bear in mind in the future, and to add to the lessons learnt here.

Technology Stack

We used a mixture of React, React-Redux, D3, and Leaflet to build a responsive single-page web application. These libraries allowed us to present the user with an interactive map backed by filterable, visualised data. Our standard working practices include GitHub for version control and conflict resolution, Jenkins for deployment, and Slack for day-to-day communication. User stories are documented in Trello, then broken into issues on a GitHub project board for developers to work on.
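To give a flavour of how these pieces fit together, here is a minimal sketch of wrapping Leaflet in a React component so that Leaflet owns the map’s DOM while the rest of the UI stays in React. The component name, coordinates, zoom level, and tile URL are illustrative placeholders, not the app’s real values:

```javascript
import React from 'react';
import L from 'leaflet';

// Hypothetical sketch: ProjectMap is not the app's actual component.
class ProjectMap extends React.Component {
  componentDidMount() {
    // Leaflet manages its own DOM, so we create the map once the
    // container element exists and keep React out of its subtree.
    this.map = L.map(this.container).setView([-33.87, 151.21], 10);
    L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
      attribution: '&copy; OpenStreetMap contributors',
    }).addTo(this.map);
  }

  componentWillUnmount() {
    // Tear the map down so Leaflet's listeners don't leak.
    this.map.remove();
  }

  render() {
    // The container needs an explicit height or the map won't render.
    return (
      <div
        ref={(el) => { this.container = el; }}
        style={{ height: '400px' }}
      />
    );
  }
}

export default ProjectMap;
```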

Our retrospectives are focused on examining what we thought was ‘good’, ‘bad’, and ‘ugly’ during development, and on coming up with actions to implement based on what we learned from the project.

The Good

Everyone on the project agreed that the site looked great overall. The design was clean and clear, and all the functionality the client needed was there and performed well. We were able to achieve this final product because of the ‘good’ things we did during development. One major difference from any project we had done before was that this was the first where our new UI/UX designer, Gel Abrahams, worked with us from the outset. Before any development began, Gel was able to draft basic wire-frames of how we envisioned the app would look.

From these wire-frames we could begin discussing possible designs with the client to better suit their needs, without first taking the time to build a semi-functional prototype. In the long run this helped us deliver the final product sooner: detaching the design process from development meant less time spent changing design decisions in the actual code.

Another design success was that this was a mobile-first site: we developed our wire-frames for the mobile screen size and adapted them for the desktop view. This was done with minimal issue, and the result was a site that looked great on both form factors thanks to the minimal, uncluttered design that naturally comes from targeting a smaller screen first.

As for what was done right in the code itself, the developers worked out several practices that allowed us to develop efficiently and build a more maintainable application. One of the most useful, which we will carry into future projects, was the separation of our components into connected and ‘dumb’ (presentational) components.

This meant we could develop functionality completely separately from the data, using dummy data to mock example content. Because no component depended on real data, we were forced to create extensible, reusable components, and we could implement them before the data was handed over.
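A minimal sketch of the pattern, assuming a hypothetical ProjectList component (the real component names and state shape differ):

```javascript
import React from 'react';
import { connect } from 'react-redux';

// 'Dumb' (presentational) component: everything arrives via props, so it
// can be rendered with dummy data long before the real data exists.
export const ProjectList = ({ projects, onSelect }) => (
  <ul>
    {projects.map((p) => (
      <li key={p.id} onClick={() => onSelect(p.id)}>{p.name}</li>
    ))}
  </ul>
);

// Connected component: the only place that knows about the Redux store.
const mapStateToProps = (state) => ({ projects: state.projects });
const mapDispatchToProps = (dispatch) => ({
  onSelect: (id) => dispatch({ type: 'SELECT_PROJECT', id }),
});

export default connect(mapStateToProps, mapDispatchToProps)(ProjectList);
```

Keeping connect at the edge means the presentational component can also be dropped straight onto the debug page described below.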

As part of this practice we hit upon a simple debugging method: a single page containing every component, detached from any outside data. This debug page was used to test functionality without needing to actually wire the component into the application.
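A sketch of what such a page can look like, reusing the hypothetical ProjectList from above (the mock data and component names are illustrative):

```javascript
import React from 'react';
// Import the unconnected (presentational) version, so no Redux store
// is needed to render the page.
import { ProjectList } from './ProjectList';
// ...other presentational components would be imported here.

// Hypothetical mock data; in practice each component had its own dummy props.
const mockProjects = [
  { id: 1, name: 'Example rail upgrade' },
  { id: 2, name: 'Example road corridor' },
];

// A single page that renders every presentational component with mock
// props, so behaviour can be eyeballed without wiring anything into the
// real application.
const DebugPage = () => (
  <div>
    <h2>ProjectList</h2>
    <ProjectList
      projects={mockProjects}
      onSelect={(id) => console.log('selected', id)}
    />
    {/* ...every other component, each fed mock props */}
  </div>
);

export default DebugPage;
```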

Another system we implemented to improve the testing of our code was deployment checks, with automated testing and deployment through Jenkins. Whenever a change was about to be added to the code base, Jenkins would build the application and check that the change compiled and passed the unit tests.

If the check passed, the code was merged in and the live site updated automatically. If it failed, Jenkins returned an error message so the developer could fix the issues before trying again.

The Bad

Although we did nothing outright wrong in the development of this project, a few minor things stood out that we should be careful not to repeat.

The initial lack of testing proved to be a real problem when duplicated logic was found across different components. Had testing been implemented earlier, we would have caught the inconsistent results it produced.

Because of the wide-ranging types of project undertaken by the client, gathering the information together in one location (something that had never been attempted before) was complex. It required a higher number of face-to-face hours with the client, and exploration of the available data, to work out what would make the most engaging and meaningful experience for the user. As we identified these new possibilities we adjusted our development timeline to cater for tweaks to the design and changes in scope; however, every adjustment comes with a time cost.

These development and planning changes added up, and caused us to run over our time estimate as we spent more hours than intended fixing the issues that arose.

The Ugly

The ‘ugly’ isn’t what we did wrong or right, but what we think we could have done better.

One of the most confusing parts of the project was that the scope was not clarified before a significant amount of development had taken place. Once the scope was settled, full data sets could be handed over and final designs confirmed, but this also meant extra work on components we thought had already been completed.

Given the project’s initially broad scope, we should have taken measures to manage the client’s expectations better. As development continued, small misunderstandings arose over what the application was going to deliver. One way we could have handled this better is by limiting the functionality of the early prototype, so that components would only ever be added, never removed.

Continuing to develop functionality and features before receiving the actual data became an ugly issue when the data did not match our expectations. In the end we had to spend extra time updating components to work with the data we received. Our time would have been better spent on components for which we did have data, or simply held in reserve until the data arrived.

While development was held up by these issues, the design moved on without the client fully understanding how it would actually function. When the functionality was eventually implemented, we had to change it to suit the client’s needs, which meant reworking those designs at a fundamental level.

Instead of investing time in designs that ended up unusable, we should have waited until the client had agreed on the existing mock-ups and wire-frames.

Aside from the scope issues, there was also a case where we needed to make a huge structural change to the project. Although the change made the app more responsive and was necessary, the way it was implemented could have been better. It took multiple weeks to go through, and because it altered the structure of the app at a fundamental level, all other development was put on hold. After the change went through, development resumed as normal, but the other developers had not really been involved and there was no recap of what had actually changed. The team agreed it would have been better to include the rest of the developers in some way so that they knew what was happening.

Actions to take

From these observations, several actions and practices were agreed on for us to implement in our workflow:

  • Although the agile methodology has worked well in the past for our small team, this project has shown us that we should be more careful with agile design so that we do not overrun the agreed scope in the future.
  • During the planning stages of a project we need to come up with strategies to reduce ‘swarming’. In our client meetings we found that too many people wanted to add things to the discussion, which, combined with the frequency of those meetings, led to scope creep through incremental feature requests and continual design changes.
  • In regards to the large structural change, in future we should work through major changes as a group, or at least recap them as a group, so that everyone is on the same page and development can continue with minimal difficulty. It would also help everyone understand what went wrong and open up the opportunity to brainstorm ways of preventing it in the future.
  • The build checks and debugging page worked well, and we should continue using them on other projects where applicable.
  • Implementing tests for functionality is dearly overdue in our development process. In particular, tests should be written for the actions and reducers: they are pure logic, so there is no excuse for them to go untested (see the first sketch after this list).
  • The development team needs to start standardising common logical patterns such as searching and loading. These seem to be implemented from scratch for each project, when it would be better to build one refined solution we can tailor to each project (see the second sketch after this list).
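As a concrete example of the kind of test we mean, here is a minimal sketch assuming a hypothetical filterReducer (the real app’s actions and state shape differ). Because reducers are pure functions, they can be tested with plain input/output assertions and no test framework at all:

```javascript
const assert = require('assert');

// Hypothetical reducer under test; names are illustrative.
const filterReducer = (state = { mode: 'all' }, action) => {
  switch (action.type) {
    case 'SET_FILTER':
      return Object.assign({}, state, { mode: action.mode });
    default:
      return state;
  }
};

// Returns the default state when called with an undefined state.
assert.deepStrictEqual(
  filterReducer(undefined, { type: '@@INIT' }),
  { mode: 'all' }
);

// Applies the action without mutating the previous state.
const before = { mode: 'all' };
const after = filterReducer(before, { type: 'SET_FILTER', mode: 'rail' });
assert.deepStrictEqual(after, { mode: 'rail' });
assert.deepStrictEqual(before, { mode: 'all' });

console.log('filterReducer tests passed');
```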
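And a sketch of the kind of standardised helper the last point has in mind: a single, well-tested search function shared across projects rather than ad-hoc per-project implementations. All names and data here are illustrative:

```javascript
// Returns true if any of the given keys on the item contains the query
// (case-insensitive); an empty query matches everything.
const matchesQuery = (item, query, keys) => {
  const q = query.trim().toLowerCase();
  if (q === '') return true;
  return keys.some((key) =>
    String(item[key] || '').toLowerCase().indexOf(q) !== -1
  );
};

// Filters a list down to the items matching the query.
const searchItems = (items, query, keys) =>
  items.filter((item) => matchesQuery(item, query, keys));

// Usage:
const projects = [
  { name: 'Sydney Metro', status: 'ongoing' },
  { name: 'Parramatta Light Rail', status: 'planned' },
];
console.log(searchItems(projects, 'rail', ['name', 'status']));
// -> [{ name: 'Parramatta Light Rail', status: 'planned' }]
```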
