If you run your own dedicated server, you have probably faced, or at least considered, the problem of one website getting so much traffic that it brings down all the other sites. There’s also the possibility that a poorly written piece of code will send the webserver into an endless loop. I’ve seen companies use a VPS system for customer websites backed by a standalone database server running PostgreSQL. The concern was that a single customer site getting heavy traffic, or worse yet, attacked, could bring down everyone, because the database server was a single point of failure.
The solution that was proposed was to configure multiple instances of PostgreSQL, one per client, each running on a separate port. The same idea works for MySQL as well. This adds a level of security too: you can give each client superuser access to their own instance, and there’s no way they can see or manipulate other clients’ databases. Another benefit is that you can tune each customer’s database server to match their demands; a lightly loaded instance needs fewer processes and less memory than a heavy-duty one. In a VPS environment, if you ran MySQL or PostgreSQL you would in effect have an isolated environment anyway, so this is a moot point there. It’s shared hosting environments that benefit the most.
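As a rough sketch of the per-client setup, here is how a second PostgreSQL instance could be created and started on its own port. The data directory path, port number, and client name are all hypothetical; adjust them for your environment.

```shell
# Create a separate data directory for the client
# (run as the postgres user; path is an example)
initdb -D /var/lib/pgsql/client_acme

# Give this instance its own port in its own config file
echo "port = 5433" >> /var/lib/pgsql/client_acme/postgresql.conf

# Start the instance against its private data directory
pg_ctl -D /var/lib/pgsql/client_acme -l /var/log/pgsql/client_acme.log start

# The client connects to their instance, and only theirs, via the port
psql -p 5433 -U acme_admin acme_db
```

Memory settings such as `shared_buffers` and `work_mem` can then be tuned per instance in each client’s own `postgresql.conf`, which is the per-customer tuning described above.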
I’d like to mention that you can easily set up Apache this way too. This gives each customer a completely isolated process space for their website on a single dedicated “shared hosting” machine. Granted, there are some caveats: if you’re using cPanel or another hosting control panel, it won’t know how to manage these custom configurations. Can you think of any other negatives here?
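To illustrate the Apache side, here is a minimal per-client configuration fragment. Each client gets their own config file, port, PID file, and logs, so each `httpd` instance runs in its own process space. The paths, port, and user names are illustrative assumptions, not a tested configuration.

```
# /etc/httpd/clients/acme.conf -- one isolated Apache instance per client
Listen 8081
PidFile /var/run/httpd-acme.pid
User acme
Group acme
ErrorLog /var/log/httpd/acme-error.log
CustomLog /var/log/httpd/acme-access.log combined
DocumentRoot /home/acme/public_html
```

Each instance is then started with its own config, e.g. `httpd -f /etc/httpd/clients/acme.conf`, and a front-end proxy or firewall rules map the public site to the right port. If one client’s code spins into an endless loop, only that client’s processes are affected.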