And more importantly, I need maintainers! After getting into the language I realized that most of the stdlib is pretty essential to programs these days: compression, JSON, I/O, buffered I/O, string manipulation, and so on. The bulk of these APIs are well-defined and powerful. This can be a little confusing at times, but it has an interesting side effect: you really have to review each one to determine the best solution.
We could achieve similar things in Node with generators, but in my opinion generators will only ever get us halfway there. Error handling in Go is superior as well, in my opinion.
Node is great in the sense that you have to think about every error and decide what to do. Node fails, however, in how errors actually surface in practice: this is incredibly difficult to reason about in live production code, so why bother?
I still hope Node does well; lots of people have invested heavily in it, and it does have potential. But streams are broken, callbacks are not great to work with, errors are vague, tooling is not great, and community convention is sort of there, but lacking compared to Go. That being said, there are certain tasks I would probably still use Node for: building web sites, maybe the odd API or prototype.
See what else is out there; you just might enjoy programming again. There are a lot of awesome solutions out there, and my mistake was waiting too long to play around with them!

For the most part, eBay runs on a Java-based tech stack, which makes sense considering the scale of traffic and the stability required by a site like eBay. But when we found that Java did not seem to fit the project requirements (no offense), we began exploring the world of Node.js.
Today, we have a full Node.js stack for the project. We had two primary requirements. First was to make the application as real time as possible. Second was to orchestrate a huge number of eBay-specific services that display information on the page. We started with the basic Java infrastructure, but it consumed many more resources than expected, raising questions about scalability for production.
The numerous questions involved ensuring type safety, handling errors, scaling, and so on. To address these concerns, we created an internal wiki and invited engineers to post their questions, concerns, doubts, or anything else about Node.js. As expected, the most common questions centered on the reliability of the stack and the efficiency of Node.js.
We answered each question, providing details with real-world examples. At times this exercise was eye-opening even for us, as we had never considered the angles that some of the questions presented. We started from a clean slate. With this basic setup, we were able to get the server up and running on our developer boxes. The server accepted requests, orchestrated a few eBay APIs, and persisted some data. For end-to-end testing, we configured our frontend servers to point to the Node.js server.
Now it was time to get more serious. We started whiteboarding all of our use cases, nailed down the REST endpoints, designed the data model and schema, identified the best Node modules for the job, and started implementing each endpoint. Once the application reached a stable point, it was time to move from a developer instance to a staging environment. This is when we started looking into deployment of the Node.js application. Our objectives for deployment were simple: automate the process, build once, and deploy everywhere.
This is how Java deployment works, and we wanted the same for Node.js. We were able to leverage our existing cloud-based deployment system: whenever code is checked in to the master branch, the Hudson CI job kicks off and, using a shell script, builds and packages the Node.js application. The cloud portal provides an easy user interface to choose the environment (QA, staging, or pre-production) and activate the application on the associated machines. Now we had our Node.js application deployed, and we achieved monitoring for it similar to what we had for our Java applications.
Fortunately for us, we had logging APIs to consume. We developed a logger module and implemented three different logging APIs. We made sure the log data formats exactly matched the Java-based logs, thus generating the same dashboards and reports that everyone is familiar with. One particular logging challenge we faced was due to the asynchronous nature of the Node.js event loop.
The result was that the logging of transactions was completely crossed: the process proceeds with the next request before the DB transaction for the previous one finishes. This is a normal scenario in any event-loop-based model like Node.js, but it makes per-transaction logs hard to follow. We have worked out both short-term and long-term resolutions for this issue.
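The crossed-logging effect is easy to reproduce: two overlapping requests on one event loop interleave their log lines. Tagging each line with a request ID is one plausible mitigation; the sketch below is illustrative, not eBay's actual fix:

```javascript
// Two overlapping "requests" on one event loop. Without a correlation
// ID their log lines interleave (the crossed-transaction problem);
// tagging each line with a request ID lets them be untangled later.
// (Illustrative sketch; the logger and IDs are hypothetical.)
const lines = [];
function log(requestId, message) {
  lines.push(`[req ${requestId}] ${message}`);
}

function handleRequest(requestId, delayMs, done) {
  log(requestId, 'start DB transaction');
  setTimeout(() => {            // stands in for an async DB call
    log(requestId, 'commit DB transaction');
    done();
  }, delayMs);
}

handleRequest(1, 20, () => {});
handleRequest(2, 5, () => {
  // Request 2 finishes before request 1, so the raw log order is
  // crossed, but each line still carries its request ID.
  console.log(lines.join('\n'));
});
```

Without the IDs, the commit line for request 2 would appear to belong to request 1's still-open transaction.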
With all of the above work completed, we are ready to go live with our Hackathon project. This is indeed the first eBay application to have a backend service running on Node.js, and with its success, exciting times are ahead!

I was floored by how cool it looked and felt. That led us toward a single-page app that would generate its UI on the client and accept data updates from a push channel.
We have consistently opted for promising, and often troublesome, new technologies that would deliver an awesome experience over more mature alternatives. CoffeeScript existed when we started Trello, but I was worried about the added complexity of having to debug compiled code rather than directly debugging the source. When we tried it, though, the conversion was so clean that mapping the target code to the source when debugging in Chrome required little mental effort, and the gains in code brevity and readability from CoffeeScript were obvious and compelling. We loved it, and soon converted the rest of the code over and started writing CoffeeScript exclusively.
In reasonably high-bandwidth cases, we have the app up and running in the browser window in about half a second.
After that, we have the benefit of caching, so subsequent visits to Trello can skip that part. When the data request returns, Backbone takes over: the idea is that we render each Model that comes down from the server with a View, and Backbone then provides an easy way to keep the two in sync. Using that general approach, we get a fairly regular, comprehensible, and maintainable client.
We custom-built a client-side Model cache to handle updates and simplify client-side Model reuse. We use HTML5 pushState for moving between pages; that way we can give proper and consistent links in the location bar, and just load data and hand off to the appropriate Backbone-based controller on transition.
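The hand-off on navigation amounts to matching the new path against a route table; the route patterns and controllers below are hypothetical stand-ins for illustration:

```javascript
// A sketch of the client-side routing idea: pushState updates the
// location bar, and a small dispatcher maps the new path to a
// controller that loads data and renders. (Routes are hypothetical.)
const routes = [
  { pattern: /^\/board\/(\w+)$/, controller: (id) => `board:${id}` },
  { pattern: /^\/card\/(\w+)$/,  controller: (id) => `card:${id}` },
];

function dispatch(pathname) {
  for (const { pattern, controller } of routes) {
    const m = pathname.match(pattern);
    if (m) return controller(m[1]);   // hand off to the matching controller
  }
  return 'not-found';
}

// In a browser this would be wired to history.pushState and the
// 'popstate' event; here we call the dispatcher directly.
console.log(dispatch('/board/abc123')); // board:abc123
```

Because only data loads on transition, the location bar stays honest without full page reloads.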
Where we have browser support (recent Chrome, Firefox, and Safari), we make a WebSocket connection so that the server can push changes made by other people down to browsers listening on the appropriate channels. Because our server setup allows us to serve HTTPS requests with very little overhead and keep TCP connections open, we can afford to provide a decent experience over plain polling when necessary.
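The channel idea reduces to a subscription registry on the server: a change is pushed only to connections subscribed to the affected channel. A sketch with plain callbacks standing in for WebSocket connections (channel names are hypothetical):

```javascript
// channel name -> Set of subscriber callbacks; in a real server each
// callback would write to an open WebSocket.
const channels = new Map();

function subscribe(channel, send) {
  if (!channels.has(channel)) channels.set(channel, new Set());
  channels.get(channel).add(send);
  return () => channels.get(channel).delete(send); // unsubscribe handle
}

function publish(channel, message) {
  // Only subscribers of this channel hear about the change.
  for (const send of channels.get(channel) ?? []) send(message);
}

const received = [];
subscribe('board:123', (msg) => received.push(msg));
publish('board:123', { type: 'card-moved' }); // delivered
publish('board:999', { type: 'card-moved' }); // different channel, not delivered
console.log(received.length); // 1
```

Scoping pushes to channels is what keeps a change on one board from fanning out to every connected browser.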
We tried Comet via the downlevel transports for Socket.io. But Comet and WebSockets seemed to be a risky basis for a major feature of the app, and we wanted to be able to fall back on the simplest, most well-established technologies if we hit a problem. We did hit a problem right after launch, and the polling fallback allowed us to degrade gracefully as we grew to 50,000 users in under a week. Node also turned out to be an amazing prototyping tool for a single-page app. The prototype version of the Trello server was really just a library of functions that operated on arrays of Models in the memory of a single Node.js process.
This was a very fast way for us to get started trying things out with Trello and making sure that the design was headed in the right direction. We used the prototype version to manage the development of Trello and other internal projects at Fog Creek. By the time we had finished the prototype, we were good and comfortable in Node and excited about its capabilities and performance, so we stuck with it and made our Pinocchio proto-Trello a real boy, giving it a real production setup.
Our load balancer distributes TCP connections between the machines round-robin and leaves everything else to Node. Things like the activity level of a session or a temporary OpenID key are stored in Redis, and the application is built to recover gracefully if any or all of them are lost.
Our most interesting use of Redis is in our short-polling fallback for sending changes to Models down to browser clients. When an object is changed on the server, we send a JSON message down all of the appropriate WebSockets to notify those clients, and store the same message in a fixed-length list for the affected model, noting how many messages have been added to that list over all time.
Then, when a client that is on AJAX polling pings the server to see if any changes have been made to an object since its last poll, we can get the entire server-side response down to a permissions check and a check of a single Redis value in most situations.
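The bookkeeping behind that scheme can be sketched with a plain object standing in for Redis, so the counter-comparison trick is visible (in production this would be a capped Redis list plus a counter; the sketch is illustrative, not Trello's actual code):

```javascript
// Per model: a fixed-length list of recent change messages plus a
// running count of all messages ever added. A polling client just
// compares counts, so "nothing new" is a single comparison.
const MAX_MESSAGES = 100;
const store = {}; // modelId -> { messages: [...], total: n }

function recordChange(modelId, message) {
  const entry = (store[modelId] ||= { messages: [], total: 0 });
  entry.messages.push(message);
  if (entry.messages.length > MAX_MESSAGES) entry.messages.shift(); // cap the list
  entry.total += 1; // monotonically increasing, like a Redis counter
}

function poll(modelId, lastSeenTotal) {
  const entry = store[modelId];
  if (!entry || entry.total === lastSeenTotal) return { messages: [] }; // common case
  const missed = entry.total - lastSeenTotal;
  if (missed > entry.messages.length) return { resync: true }; // fell too far behind
  return { messages: entry.messages.slice(-missed) };
}

recordChange('card42', { title: 'renamed' });
console.log(poll('card42', 0).messages.length); // 1
console.log(poll('card42', 1).messages.length); // 0
```

The fixed-length list keeps memory bounded, and the running total tells a client that fell behind the cap to do a full refetch instead.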
Redis is so crazy-fast that it can handle thousands of these checks per second without making a substantial dent in a single CPU. Once you have a Redis server in place, you start using it for all sorts of things. We knew we wanted Trello to be blisteringly fast. One of the coolest and most performance-obsessed teams we know is our next-door neighbor and sister company, StackExchange. Talking to their dev lead David at lunch one day, I learned that even though they use SQL Server for data storage, they actually store a lot of their data in a denormalized format for performance, and normalize only when they need to.
Another neat side benefit of using a loose document store is how easy it is to run different versions of the Trello code against the same database without fooling around with DB schema migrations.
This has a lot of benefits when we push a new version of Trello; there is seldom if ever a need to stop access to the app while we do a DB update or backfill.
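One reason mixed-version deploys work against a loose document store: newer code can default fields that older documents lack, while older code simply ignores fields it does not know about. A sketch with hypothetical field names:

```javascript
// Why a loose document store eases mixed-version deploys: newer code
// tolerates documents written before a field existed, so old and new
// versions of the app can share the database during a rollout without
// a blocking schema migration. (Field names are hypothetical.)
function readCard(doc) {
  return {
    title: doc.title,
    // Introduced in a later version; default it for old documents.
    labels: doc.labels ?? [],
  };
}

const oldDoc = { title: 'Legacy card' };               // written by v1
const newDoc = { title: 'New card', labels: ['red'] }; // written by v2
console.log(readCard(oldDoc).labels); // []
console.log(readCard(newDoc).labels); // [ 'red' ]
```

Any backfill can then run lazily, document by document, while the app stays up.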
This is also really cool for development. We like our tech stack. There are some issues with submitting our fixes (hacks!) upstream, but we are working to get those changes that are fit for general consumption ready to submit back to the project.