
Questions on server-side limitations / procedures

What are the fundamental limits of the Shotgun setup? We push through perhaps 2 dozen projects a year. I'd like to give some consideration to what is stored where (in terms of filespace provision) and procedures needed for offlining, archiving, retrieving and reintegrating projects.

From looking at your tech specs (and a recent Shotgun crash message) I can see you're built on Ruby, Gems, Rails and Phusion but with a Python API. Do you have a recommendation for the point at which load-balancing becomes desirable? Clustering?

1 comment

    Chris Opena

    Peter, with respect to fundamental limits, there are really two scenarios to consider:

    Hosted
    Right now there isn't a hard limit on what you can store or how big the database can get, beyond the limits of PostgreSQL itself, and those are well beyond what our hosted clients typically reach.  We do watch space usage for file uploads and will notify you as it becomes overly large, but to date our server has been beefy enough that the limits are not even close to threshold.  As long as you're not uploading terabytes upon terabytes of data, you should be ok.

    Local Install
    If you're looking at doing a local install, then the limits really come down to the class of server hardware you choose.  We have several customers running Shotgun locally, and more than a few have databases approaching multi-gigabyte size, with projects ranging from a few to dozens.  For upload storage, in some cases they attach network storage and symlink accordingly so that the Shotgun server can retrieve it appropriately, giving them an easily upgradable path for storing large numbers of large files.
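    The symlink approach above can be sketched roughly as follows. All paths here are hypothetical placeholders (using /tmp only so the sketch is runnable); a real setup would point NETWORK_STORAGE at a mounted filer and UPLOAD_DIR at the directory Shotgun writes uploads to:

```shell
# Sketch of the network-storage symlink approach (hypothetical paths).
NETWORK_STORAGE="/tmp/filer/shotgun_uploads"   # would be a mounted network volume
UPLOAD_DIR="/tmp/shotgun/public/uploads"       # would be Shotgun's upload directory

# Ensure both sides exist before linking.
mkdir -p "$NETWORK_STORAGE"
mkdir -p "$(dirname "$UPLOAD_DIR")"

# If a real local uploads directory already exists, move its contents
# onto the network volume before replacing it with a symlink.
if [ -d "$UPLOAD_DIR" ] && [ ! -L "$UPLOAD_DIR" ]; then
    cp -a "$UPLOAD_DIR/." "$NETWORK_STORAGE/"
    rm -rf "$UPLOAD_DIR"
fi

# Replace the local directory with a symlink so Shotgun transparently
# reads and writes the network storage (-sfn: overwrite a stale link).
ln -sfn "$NETWORK_STORAGE" "$UPLOAD_DIR"
```

    From Shotgun's point of view nothing changes: it keeps writing to its usual upload path, while the bytes land on the network volume, which can be grown or migrated independently.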

    As far as load-balancing is concerned, on the hosted side our tack has normally been to focus on getting the beefiest servers possible to ensure best performance, and to make sure that all data (database, uploaded files, etc.) is backed up adequately to allow for quick recovery.  Our deployment process is such that it is fairly simple to bring a site back up and running even after disastrous server hardware failure.

    On the local install side, this can run the gamut, depending on the preference of the client.  Some take a similar tack to the hosted example above, others run Shotgun on VMs with an external PostgreSQL cluster, and yet others utilize clustering software to have multiple Shotgun servers available (whether cold, hot, or somewhere in between).  The nature of Shotgun is such that it is very amenable to the vagaries of local infrastructure requirements.  Shotgun *is*, however, quite dependent on running in an operating environment that provides the correct versions of its 3rd-party libraries and binaries.  This makes it undesirable to, say, install Shotgun on a server that has another primary function requiring similar libraries at different versions.

    At the end of the day, on the local install side, load-balancing / clustering (and when to do it) is very dependent on the hardware resources allocated to Shotgun and your internal availability requirements.  Our general recommendation is that if your server is reaching internally-defined resource allocation thresholds, it's time to either upgrade the hardware or bring in clusterable / load-balanced resources.  Our setup is fairly simple (Apache > Mod_Rails > Shotgun), so it allows for a myriad of different load-balancing strategies - whether at the front of the chain or further down the line.
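    As one illustration of balancing at the front of the chain (hostnames and layout here are assumptions, not a prescribed configuration), an Apache reverse proxy could spread requests across two Shotgun stacks:

```apache
# Hypothetical Apache fragment: a front-end reverse proxy balancing
# across two Shotgun application servers, each running its own
# Apache > Mod_Rails > Shotgun stack.  Requires mod_proxy,
# mod_proxy_http, and mod_proxy_balancer to be loaded.
<Proxy balancer://shotgunpool>
    BalancerMember http://shotgun-app1.internal:80
    BalancerMember http://shotgun-app2.internal:80
</Proxy>

<VirtualHost *:80>
    ServerName shotgun.example.com
    ProxyPass        / balancer://shotgunpool/
    ProxyPassReverse / balancer://shotgunpool/
</VirtualHost>
```

    Equivalent strategies could sit further down the line instead - for example, a hot standby or cluster at the PostgreSQL tier.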

    On the offlining, archiving, and retrieving side, we're working on better ways to have this directly integrated into the application.  Currently, some clients choose to spin up multiple instances of Shotgun to handle different projects (especially if the projects are very large).  However, if you're going through several dozen small-to-medium-sized projects per year, it would probably be best to keep them all resident on one site and we can work with you to enhance the archival capabilities of Shotgun in future releases.

    Thanks,
    -Chris.
