
Under the Hood: The Technical Setup of Upverter

Editor’s note: This is a cross post from the Upverter blog, written by Zak Homuth (LinkedIn, @zakhomuth, Github). Follow him on Twitter @zakhomuth. This post was originally published on August 1, 2011; I was just negligent in posting it.

Who doesn’t love tech porn? And what’s better than an inside look at the architecture and tools that power a startup? That’s right, nothing. So we thought, why not put up our own little behind-the-scenes, and try to share a little bit about how we do what we do?

At Upverter, we’ve built the first ever web-based, first ever collaborative, and first ever community- and reuse-focused EDA tools. This meant re-thinking a lot of the assumptions that went into the existing tools. For example, clients and servers weren’t an afterthought, but a core part of our architecture. Collaboration was baked in from the start, which also meant a whole new stack – one that borrowed heavily from projects like Google Wave and Etherpad.

http://en.wikipedia.org/wiki/Apache_Wave
http://code.google.com/p/etherpad/
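The operational-transformation idea behind those projects is easy to see in miniature. The sketch below is purely illustrative (none of it is Upverter’s actual code, and all names are invented): two users concurrently insert text, and a transform function shifts one operation so that both orders of application converge on the same document.

```python
# Toy operational transformation: two users concurrently insert text
# into the same document; transform() shifts one insert's position so
# both edits can be applied in either order with the same result.

def apply_insert(doc, op):
    pos, text = op
    return doc[:pos] + text + doc[pos:]

def transform(op_a, op_b):
    """Rewrite op_a so it can be applied after op_b."""
    pos_a, text_a = op_a
    pos_b, text_b = op_b
    if pos_a >= pos_b:                 # op_b's insert shifted op_a right
        return (pos_a + len(text_b), text_a)
    return op_a                        # op_a is unaffected

doc = "R1 GND"
op_a = (3, "C1 ")      # user A inserts at position 3
op_b = (0, "VCC ")     # user B inserts at position 0

# Apply B first, then A transformed against B:
merged = apply_insert(apply_insert(doc, op_b), transform(op_a, op_b))
print(merged)  # -> "VCC R1 C1 GND"
```

Applying the operations in the opposite order (A first, then B transformed against A) yields the same merged document, which is the convergence property that makes collaborative editing work.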

On the front-end, our pride and joy is what we call the sketch tool. It’s more or less where we have spent the bulk of our development time over the last year – a large compiled JavaScript application that uses long polling to communicate with the API and design servers. When we set out to move these tools to the web, we knew that we would be building a big JavaScript app. But we didn’t quite know what the app itself would look like, and our choice of tech for it has changed quite a bit over time… more on this later!
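The long-polling pattern itself is simple enough to sketch in isolation. This is a hypothetical, stdlib-only Python illustration (the real client is JavaScript talking to our servers): the poll call blocks until a change is published or a timeout fires, and the client simply re-polls.

```python
# Minimal long-polling sketch: the client's request blocks on the server
# until a design change arrives (or a timeout expires), then immediately
# reconnects -- giving near-real-time pushes over plain HTTP.
import queue
import threading

class DesignChannel:
    def __init__(self):
        self._events = queue.Queue()

    def publish(self, change):
        """Called when a collaborator edits the design."""
        self._events.put(change)

    def poll(self, timeout=30):
        """The long-poll endpoint: block until a change or timeout."""
        try:
            return self._events.get(timeout=timeout)
        except queue.Empty:
            return None   # client just reissues the poll

channel = DesignChannel()

# Simulate a collaborator editing 0.1s after we start polling.
threading.Timer(0.1, channel.publish,
                args=({"op": "add_part", "part": "R1"},)).start()
change = channel.poll(timeout=5)
print(change)  # -> {'op': 'add_part', 'part': 'R1'}
```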

On the back-end, we run a slew of servers. When it comes to our servers, there was a bit of a grand plan when we started, but in reality they all came about very organically. As we needed to solve new problems and fill voids, we built new servers into the architecture. As it stands right now, we have the following:

  • Front-end web servers, which serve most of our pages and community content;
  • API & Design servers, which do most of the heavy lifting and allow for collaboration;
  • DB servers, which hold the datums; and
  • Background workers, which handle our background processing and batch jobs.


So let’s talk tech…

  • We use a lot of Linux (Ubuntu and Arch), both on our development workstations and all over our servers.
  • We use Python on the server side. When we started out, we did take a serious look at using Node.js and JavaScript, but at the time neither was quite ready yet… Things have come a tremendously long way since then, and we might have made a different choice if we were beginning today.
  • We use nginx (http://nginx.org/) for reverse proxying, load balancing and SSL termination.
  • We use Flask (http://flask.pocoo.org/) (which is a lot like Sinatra) for our community and front-end web servers. We started with Django, but it was just too full-blown, and we found ourselves rewriting enough of it that it made sense to step a rung lower.
  • We use Tornado for our API and design servers. We chose Tornado because it is amazingly good at serving these types of requests at breakneck speed.
  • We built our background workers on Node.js so that we can run copies of the JavaScript client in the cloud, saving us a ton of code duplication.
  • We do our internal communication through ZeroMQ (www.zeromq.org) on top of Google Protocol Buffers.
  • Our external communication is also done through our custom JavaScript RPC, again mapped onto Protocol Buffers (http://code.google.com/apis/protocolbuffers/docs/overview.html).
  • We used MySQL for both relational and KV data, through a set of abstracted custom datastore procedures, until very recently, when we switched our KV data over to Kyoto Tycoon.
  • Our primary client, the sketch tool, is built in JavaScript with the Google Closure Library and Compiler.
  • The client communicates with the servers via long polling, through custom-built RPC functions and server-side protocol buffers.
  • We draw the user interface with HTML5 and canvas, through a custom drawing library which handles collisions and does damage-based redrawing.
  • And we use Soy templates for all of our DOM UI dialogs, prompts, pop-ups, etc.
  • We host on EC2 and handle our deployment through Puppet.
  • Monitoring is done through a collection of Opsview/Nagios, Pingdom and collectd.
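As an aside on the datastore point above: the reason the MySQL-to-Kyoto-Tycoon switch was painless is that callers never touch the backend directly. A hypothetical sketch of that kind of abstraction (all names invented, not our actual code) might look like:

```python
# Hypothetical key-value abstraction: callers program against KVStore,
# so the backend (a MySQL table, Kyoto Tycoon, an in-memory dict) can
# be swapped without touching application code.
class KVStore:
    def get(self, key): raise NotImplementedError
    def put(self, key, value): raise NotImplementedError
    def delete(self, key): raise NotImplementedError

class MemoryStore(KVStore):
    """Stand-in backend for illustration; a MySQLStore or KyotoStore
    would implement the same three methods."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value
    def delete(self, key):
        self._data.pop(key, None)

store = MemoryStore()
store.put("design:42:rev", 7)
print(store.get("design:42:rev"))  # -> 7
```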

Our development environment is very much a point of pride for us. We have spent a lot of time making it possible to do the things we are trying to do from both the client and server sides, and putting together a dev environment that allows our team to work efficiently within our architecture. We value testing, and we are fascists about clean and maintainable code.

  • We use git (obviously).
  • We have a headless JavaScript unit test infrastructure built on top of QUnit and Node.js.
  • We have Python unit tests built on top of nose.
  • We run Closure linting and compiling set to the “CODE FASCIST” mode.
  • We run a full suite of checks within Buildbot on every push to master.
  • We also do code reviews on every push, using Rietveld.
  • We are 4-3-1 Vim vs. TextEdit vs. TextMate.
  • We are 4-2-2 Linux vs. OSX vs. Windows 7.
  • We are 5-2-1 Android vs. iPhone vs. dumb phone.
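For a sense of what the nose side looks like: nose auto-discovers any function whose name starts with test_, so there is no boilerplate. A trivial, invented example (the helper under test is hypothetical, not part of our codebase):

```python
# Illustrative nose-style test: nose collects functions named test_*
# from modules named test_*, so a plain function with asserts is enough.
def resistance_in_series(*values):
    """Hypothetical helper under test: series resistances just add."""
    return sum(values)

def test_resistance_in_series():
    assert resistance_in_series(100, 220) == 320
    assert resistance_in_series() == 0

# Run with:  nosetests test_parts.py
```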

If any of this sounds like we are on the right path, you should drop us a line. We are in Toronto, we’re solving very real-world, wicked problems, and we’re always hiring smart developers.

Comments


  1. The environment and dev process of a dream! It is one thing to reason about all of these different quality techniques in dev process and another to see them implemented in one place. 100 out of 12 on Joel’s Test!

    Why did you end up choosing ZeroMQ over RabbitMQ or another AMQP broker? 

  2. Stephen from Upverter:
    Thanks, we really like the dev process here. We’ve tried to take the things we found worked at previous jobs and repeat them here.

    The reason for choosing ZeroMQ over the other messaging systems out there had a lot to do with the type of internal communication we do. Most of it is point-to-point, socket type communication between front end servers and API servers or design servers. ZeroMQ takes lots of the ugly bits out of socket type communication and IPC/RPC development.

  3. Hi John,
    Stephen here from Upverter. The connection to Google Wave is in how the editor is built. We designed the schematic editor around the concept of operational transformations. It is one of the components that allows us to do collaborative editing. A lot of what we learned early on about OT came from looking at the code open-sourced from Google Wave and EtherPad.

Comments are closed.