The anonymous social network unface.me lets you communicate anonymously, understand your friends better, and find soul mates. On unface.me you can:
- Tell stories about friends anonymously; ask questions and publish opinions
- Chat anonymously, one-on-one or in group chats with friends
- Rate your friends and see your own rating within your circle
Our technical goal was to make the service fast and scalable, so we chose these technologies:
- MongoDB – main database for all app data
- Redis – session storage
- Node.js – app server with a cluster of workers
- Nginx – serving static files and proxying requests to node.js
- Elasticsearch – search engine
- Backbone, jQuery – frontend
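The static/dynamic split between nginx and node.js might look roughly like this server block (paths, ports, and directory names here are illustrative, not our actual config):

```nginx
# Illustrative only: paths, ports and the upstream list are invented.
upstream node_backend {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    listen 80;
    server_name unface.me;

    # Serve static files straight from disk, bypassing node.js.
    location /static/ {
        root /var/www/unface;
        expires 7d;
    }

    # Everything else goes to the node.js workers.
    location / {
        proxy_pass http://node_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

The point of the split is that nginx handles cheap static requests without ever touching the application, while node.js only sees the requests that actually need application logic.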
MongoDB stores all user data; with all indexes it takes about 30 GB of disk space. Every query is covered by an index, so reads are very fast. At first we considered sharding, but there was no need for it even with 10 million records in a collection, so we just added a few replicas. If the number of users doubles, we'll consider sharding to keep query speed up.
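To illustrate why index-only queries stay fast, here is a toy model in plain JavaScript (not the MongoDB driver API; all names are invented): a hash index turns a full collection scan into a near-constant-time lookup.

```javascript
// Toy "collection" of documents; field names are invented for the example.
const users = [];
for (let i = 0; i < 100000; i++) users.push({ _id: i, nick: `user${i}` });

// A hash "index" on the nick field, analogous in spirit to
// declaring an index on { nick: 1 } in MongoDB.
const nickIndex = new Map(users.map((u) => [u.nick, u]));

// Without an index: O(n) scan over every document.
function findByScan(nick) { return users.find((u) => u.nick === nick); }

// With an index: O(1) lookup on average.
function findByIndex(nick) { return nickIndex.get(nick); }
```

The real database does the same trade: extra disk space for the index (part of our 30 GB) in exchange for lookups that don't degrade as collections grow.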
As a MongoDB driver we used node-mongodb-native. In 2011 it was the best choice: Mongoose was too buggy and had memory leaks, so we stuck with the native driver.
At the start of development (in 2011) we faced a serious decision: use well-known PHP, or take a risk on node.js, which was new and not yet well tested in production. We took the risk and did not regret it. At that moment we were pioneers of node.js development: apart from us, the only fairly large project built on node.js was the social network Tactoom.
The biggest problem of node.js development at that time was deeply nested callbacks (spaghetti code). We faced it head-on and did not try to paper over it with Mongoose, modules for writing synchronous-style code, promises, or anything else.
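The nesting looked roughly like this. The functions below are stand-ins that invoke their callbacks immediately, just to show the shape of the "pyramid of doom": every sequential step adds one more level of indentation.

```javascript
// Simulated data-access calls; in real code each would hit the database.
function findUser(id, cb) { cb(null, { id, name: 'alice' }); }
function findFriends(user, cb) { cb(null, ['bob', 'carol']); }
function findStories(friends, cb) { cb(null, friends.map((f) => `story about ${f}`)); }

let stories;
findUser(1, (err, user) => {
  if (err) throw err;
  findFriends(user, (err, friends) => {
    if (err) throw err;
    findStories(friends, (err, result) => {
      if (err) throw err;
      stories = result; // three levels deep for three sequential steps
    });
  });
});
```

With error handling repeated at every level, even a short request handler could end up five or six callbacks deep.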
The main advantage of node.js for a highload service is that it can handle a great number of concurrent requests. Node.js is built on an event loop (libev at the time) and, like Igor Sysoev's nginx, follows an event-driven model instead of spawning a new process for each request (as, e.g., classic PHP setups do). From this advantage comes its greatest flaw: node.js copes poorly with heavy computation. A complex calculation blocks the process and all of its pending requests, so big jobs should be offloaded to something other than node.js. But if you are mostly just passing data to and from the database (as we were), you're fine.
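A small sketch of the blocking problem and the usual workaround of slicing work and yielding back to the event loop with `setImmediate` (function names here are ours, for illustration):

```javascript
// A CPU-bound loop like this runs on the single event-loop thread:
// while it executes, no other request callback can run.
function blockingChecksum(n) {
  let sum = 0;
  for (let i = 0; i < n; i++) sum = (sum + i) % 1000003;
  return sum;
}

// Workaround: process the work in chunks, yielding between chunks
// with setImmediate so pending I/O callbacks get a chance to run.
function chunkedChecksum(n, chunk, cb) {
  let sum = 0;
  let i = 0;
  (function step() {
    const end = Math.min(i + chunk, n);
    for (; i < end; i++) sum = (sum + i) % 1000003;
    if (i < n) setImmediate(step); // yield to the event loop
    else cb(sum);
  })();
}
```

Chunking keeps the server responsive, but it only hides the cost; truly heavy computation is still better moved out of the node.js process entirely.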
To make full use of all processor cores, it is best to use the node.js cluster module or a process manager such as pm2. In unface.me we had some workers for general requests, some workers for long-polling requests for instant messaging, and one master process to rule them all.
We managed to build a highload backend that scales well: if traffic grows, we can just add a few more servers and ride out any load. Right now we have about 10 million user profiles in the database, and the site's speed has been noted by leading tech media.