scallioxtx — 2012-11-04T18:17:45-05:00 — #1
The last few years we've had some pretty huge changes in LAMP land. PHP has gone up to version 5.4 now, which is vastly different from (and better than) 5.0; we have namespaces, array dereferencing, new array notations, etc.
At the same time I feel there is a trend of people moving away from the tried and tested Apache to its newer and leaner competitors like NGiNX and Lighttpd.
Meanwhile Oracle is closing off tests from the MySQL source and refusing to say why, which makes its future uncertain. Will it stay open source and free, or will Oracle close it off bit by bit until it's completely closed? And if they do, will they keep it alive or kill it in the hope people will go and use their DBMS? There are a few alternatives like MariaDB and Postgres, while at the same time the traditional DBMSs are challenged by NoSQL solutions like MongoDB, CouchDB, etc.
Of course Linux is still Linux, and will always remain so. Versions have come and gone, but in terms of server usage not a whole lot has changed here.
All in all, it's not all that obvious anymore that anyone who runs PHP does this using a LAMP stack. Most will probably still be running LAMP, but there are a lot of other permutations that are getting more and more commonplace.
At work we're running CentOS Linux, because that's the distro we like best and the whole chkconfig system makes it very easy to set up and manage daemons. For the HTTP layer we've ditched Apache completely and are now running everything on NGiNX (and this coming from the Apache guru of the year 2011, I do see the irony in that), because it's a lot easier to configure (i.e. the config files are much more readable) and we've found it performs a lot better than Apache. One thing I also particularly like about NGiNX is that the more advanced stuff is disabled or not built in by default, so you have to actively enable it. Apache, on the other hand, comes with a lot of bloat enabled by default that you never ever use; it just sits there hogging CPU and RAM for nothing, and it's left to you to find out what is safe to disable and what isn't.
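To give an idea of that readability, here's a sketch of a bare-bones NGiNX server block handing PHP to php-fpm (paths, the domain and the socket location are made-up examples and will differ per setup):

```nginx
# Minimal server block for a PHP site behind php-fpm (example values)
server {
    listen 80;
    server_name example.com;
    root /var/www/example;
    index index.php;

    location / {
        # Fall back to the front controller if no file matches
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;
    }
}
```

Nothing else is on by default; anything extra (gzip, SSL, caching) you add yourself.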
For the database we use PostgreSQL, because we have quite sensitive data and we rely on transactions a lot, and when it comes to transactions PostgreSQL is just better at that stuff (arguably, of course) than MySQL is. We're also running MongoDB, but that's for logging purposes only. It's just very handy to log a lot of contextual data with a log entry without knowing beforehand which fields you will be logging (and this also differs heavily per log item).
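For example (entirely made-up entries), two log documents in the same MongoDB collection can carry completely different context fields, with no schema change needed:

```javascript
// Hypothetical log entries from the mongo shell; same collection,
// different context fields per item
db.logs.insert({ level: "error", msg: "payment failed", orderId: 1042 });
db.logs.insert({ level: "info", msg: "user login", userId: 7, ip: "192.0.2.10" });
```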
Lastly, we run PHP 5.3 via php-fpm. We are thinking of switching to PHP 5.4, but we need to test this extensively first before we feel comfortable rolling it out.
So, what about the rest of you guys? I'm curious to know what you're running and why.
logic_earth — 2012-11-04T18:54:10-05:00 — #2
Well...I might be the odd one of the bunch. I run on Windows Server, currently on Windows Azure using the IIS that's built in (production and development). The current generation of IIS 7+ is much like NGiNX: it is very lean and only runs what you want, and it's easy to configure as well, with very readable XML-based files. I know most people scoff at the idea of using Windows as a server; they ramble on about how insecure it is compared to Linux, blah blah blah. A server is only as secure as the one maintaining it. I've maintained both Linux and Windows servers quite frequently. Linux is not a silver bullet; don't maintain it and it will be compromised just the same. (In reality, it's not actually the server OS that is the weak link, but the applications that run on top.)
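As an example of those readable config files, a stripped-down web.config sketch (contents made up) might look like this, and only the features you declare are active:

```xml
<!-- Sketch of an IIS 7 web.config; only what you add is enabled -->
<configuration>
  <system.webServer>
    <defaultDocument>
      <files>
        <add value="index.php" />
      </files>
    </defaultDocument>
  </system.webServer>
</configuration>
```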
Windows Azure has grown quite a bit since it first launched. It is no longer Windows-exclusive; you can actually get a Linux machine, directly from Microsoft! They offer, last time I checked: CentOS 6.2, SUSE distributions, and Ubuntu Server. If you don't like those choices you can upload your own as well. If you don't care about running a static virtual machine but do not want to use IIS, you can just use a worker role and install your own web server if you like. But you will be responsible for setting it up.
Anyways, ever since PHP 5.3, Windows has become a first-class citizen for PHP. Performance and support have greatly improved, making it a very competitive platform for running PHP compared to the LAMP stack. So I see no reason to scoff at the idea of using Windows for both development and production.
I should cover databases...well, I stopped using MySQL a long time ago. Depending on the application, I use SQLite for pretty much everything that does not need a full-blown relational database; plus, it's built into PHP and doesn't need anything extra. When I do need a full relational database, I use either PostgreSQL or Microsoft SQL Server (SQL Azure). PostgreSQL has always been better than MySQL in so many ways; it's a shame hardly anyone ever gave it a try.
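To show how little ceremony SQLite needs in PHP (a made-up sketch; the file path and table are examples, but the PDO SQLite driver does ship with PHP):

```php
<?php
// Hypothetical sketch: SQLite through PDO, which is bundled with PHP,
// so no separate database server is needed
$db = new PDO('sqlite:/tmp/app.db');
$db->exec('CREATE TABLE IF NOT EXISTS notes (id INTEGER PRIMARY KEY, body TEXT)');
$stmt = $db->prepare('INSERT INTO notes (body) VALUES (?)');
$stmt->execute(array('hello'));  // array() syntax keeps it 5.3-compatible
```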
I rambled on enough for now...I have work to get back to.
Err, I should add: I use 5.3 and 5.4 (for new stuff), set up using FastCGI.
scallioxtx — 2012-11-04T19:03:24-05:00 — #3
Right, I completely forgot about Windows. I've never really liked Windows running on a server because it's primarily GUI based, and for a server I couldn't care less about the GUI; I find the command line much faster and generally more workable than GUIs. Might be just me though.
I don't care too much for Ubuntu, but I do like CentOS. I might give this another look, thanks!
"In the old days" I used to run Windows in development and Linux in production. As long as you know the differences it's not really a problem, although I agree lining development and production up 1:1 is much better (we do that now, too).
Yes, PostgreSQL seems a lot more mature than MySQL. I think the ease of setting up and using MySQL (Postgres' pg_hba.conf for auth, for example, is just plain weird) is why more people are using MySQL than PostgreSQL. Protip for those creating APIs: as of 9.2, PostgreSQL can spit out JSON directly. Just query and return the result over the wire without any processing whatsoever! We're not using this yet (because 9.2 wasn't out at the time), but we might in the future.
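To illustrate that JSON trick (a sketch with a made-up table and columns; row_to_json() is one of the functions 9.2 added):

```sql
-- Hypothetical table; each row comes back as a ready-to-serve JSON object,
-- e.g. {"id":1,"name":"Alice","email":"alice@example.com"}
SELECT row_to_json(u)
FROM (SELECT id, name, email FROM users) AS u;
```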
logic_earth — 2012-11-04T19:16:52-05:00 — #4
Today, that really is not an issue. Aside from third-party applications, almost everything concerning Windows Server can be configured using the command line or PowerShell. In fact it is required for Server Core installations. (Server 2008 R2 added .NET and PowerShell support to Server Core.) And if you really like managing the server from the command line, you will love PowerShell 3.0. It is capable of managing almost everything from the command line, remotely as well.
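For instance (a sketch; the server name is made up, and remoting has to be enabled on the target first), checking the IIS service on another box is a one-liner:

```powershell
# Query a service on a remote server over PowerShell remoting
Invoke-Command -ComputerName WEB01 -ScriptBlock { Get-Service W3SVC }
```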
scallioxtx — 2012-11-04T19:26:43-05:00 — #5
That is definitely a step in the right direction! Kudos to Microsoft.
serverstorm — 2012-11-05T14:21:58-05:00 — #6
I currently use Centos 6, OpenSuse (a headless version), Windows 2008 R2, PostgreSQL, MongoDB, NodeJS, NGINX and a heavily stripped down front-end Apache server (used as a single IP router).
About a year ago I gave up on Apache and MySQL on production servers, for different reasons. Apache is bloated and not streamlined; to get it lean, one has to do quite a lot of work. NGINX is much leaner and, like Remon said, easier to configure. I still keep a dev machine that has Apache, PHP, and MySQL; the questions that normally come up on SitePoint involve these technologies, so I stay familiar with them for that. I got worried when Oracle bought MySQL (leopards don't change their spots), so I started using Postgres and haven't looked back.
I have been impressed by IIS7 and think that it is a solid choice with PHP and MSSQL Server or SQLite. I just like the opensource community way better so I run the opensource technologies for business.
I currently remote in using SSH via OpenVPN. Then I can connect either using nxclient or via the command line. I also set up WebMin under SSL, so I can manage my servers via the web too.
Lately I've started building applications in NodeJS and am really impressed with its performance; however, it is a little raw yet, so I would not feel comfortable putting this type of work into production. I have also been gaining a better understanding of where NoSQL should be substituted for relational databases; Remon's idea about flexibility in not knowing what fields are required is a great place to start.
I hadn't used Windows Azure until recently, when logic_earth mentioned it in the forum. I quite like it, more so than Amazon EC2, as it is faster and easier to set up. I am running CentOS 6.2 there and it works for what I need.
Very interesting thread!
force — 2012-11-05T20:23:44-05:00 — #7
Oracle has been shooting themselves in the foot the last few years. They've been making stupid decisions that have been driving the developers of their projects away.
If MySQL falls out of favor, I would certainly like to see PostgreSQL take over. I've always been a fan of it, as it isn't nearly as cobbled together as MySQL is. But unfortunately, most webapps have standardized on the LAMP stack.
wwb_99 — 2012-11-06T05:58:08-05:00 — #8
Mainly running .NET stuff on IIS 7 here. Mainly MSSQL but we are starting to use more RavenDB (think .NET mongodb with ACID compliance).
Got a few Wordpress sites floating around, so unfortunately we need to support MySQL. Also got a Drupal site coming down the pipe; I'm thinking nginx and PHP-fpm, but the guys building it want to use Apache. I could live my entire life and never support Apache and be happy.
Also running redmine on Ubuntu / nginx / Phusion Passenger for fun. And profit.
Not using any of the public clouds, but we run everything in a private cloud. No plans to take the public cloud on at this time -- our host just proved their worth. Their data centers are in lower Manhattan and northern NJ. They remained up throughout Sandy and the aftermath.
serverstorm — 2012-11-06T10:21:43-05:00 — #9
It's not like you want to have to test that scenario out, but I'm so glad your provider did such a great job.
stomme_poes — 2012-11-06T11:06:36-05:00 — #10
Want. Currently all my dev stuff is local, which means virtual machines instead of the wider range of machines I would like. I dunno anything about VPNs and networking though. Might learn it just so I can test more.
We're using gunicorn as our server (on various Linux machines somewhere in a data center; we develop locally on whatever we want, Debian and Ubuntu being the most popular, but it's one of those "cloud" things so it's whatever machine is available for whatever load). PostgreSQL 8.4 or 9 depending on the age of the dev machine, with an open source ORM called Tryton (which I had never heard of before... it's made by some Belgians). We use REDIS too, but just for some easy quick caching/calling, and SphinxSearch for the full-text search engine. Our main framework is Python's (2.7 for now) Flask, which is one of those minimalistic frameworks (similar to Ruby's Sinatra and Perl's Dancer).
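For anyone unfamiliar with how gunicorn fits in: it serves any WSGI callable, and a Flask app is exactly that under the hood. A minimal hypothetical sketch of the interface (stdlib only, not our actual app):

```python
# Minimal WSGI application of the kind gunicorn serves. Flask wraps
# this same interface; this is just the bare protocol.
def app(environ, start_response):
    # The server calls the app with the request environ dict and a
    # start_response callback for the status line and headers.
    body = b"Hello from a WSGI app\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    # The return value is an iterable of bytes making up the body.
    return [body]
```

You'd then run something like `gunicorn myapp:app --workers 4` and put a proxy in front of it.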
So for development we'll start up REDIS, then Sphinx, then the webserver. The pain point when we get slowness seems to be the ORM? Sphinx is damn fast though.
I'm using Ubuntu just because it was convenient to install; Unity is horrid terrible nasty ugly crap that I manage to avoid by sticking to the terminal as much as possible (using the Gimp in Unity is hair-pulling though). Ubuntu's got some things it runs by default that Debian doesn't though and I like more regular upgrades, which I had on my Debian Testing machine but that's a bit less stable. Other co-workers have Windows machines with some Linux in a VM (usually old Gnome Ubuntu, one has Arch).
serverstorm — 2012-11-06T13:05:37-05:00 — #11
If I could make a recommendation: get familiar with PFSense (an opensource firewall that supports OpenVPN). It is relatively easy to install OpenVPN on PFSense, and it allows authenticated users to connect to it and download the certificates needed for the VPN client on their OS. The nice thing is that a person can be on a Linux, Mac or Windows box and use their favourite VPN client; I currently use OpenSuse's default VPN client and it is rock solid (the VPN connection does not drop, even over several weeks). The client setup is the most complex part, but there are PFSense tutorials that walk you through how to do it. Once you have it set up on PFSense and your OS, you open the client, connect, and then you can use any SSH-capable client (for Linux: nxclient, Eclipse, the command line...). You can then access your servers using host names or by typing in the internal I.P. address. You can also install the certificates and client on a smartphone and manage your resources from there too.
Interesting and fairly different setup, and you have pointed me to some unfamiliar technologies that I'm going to look into. Thanks!
I used to love Ubuntu but they really frigged it up with Unity. I know there are concerns with OpenSuse 12.2 (turning commercial some time in the future, leaving the opensource community high and dry), but it is rock solid and fast, doesn't have any of the problems associated with Ubuntu (and Unity), uses far less RAM, and doesn't have the stripped-down nature of Debian (good networking, server and application features out of the box). I am very glad I switched.
wwb_99 — 2012-11-06T13:44:38-05:00 — #12
Yeah. To be honest, we picked them back in 2004 or so because they were a block from the WTC on 9/11 and kept everything running without physical access to their infrastructure for upwards of a month.
I second the idea that one should put a box in front of things; web servers should not really be at the edge. We run m0n0wall but PFSense is a good call too. For most QA purposes any old PC with a CF-IDE adapter and a 4GB CF card will work.
Dev-wise, I'd advise checking out Vagrant if you've got complex *nix-based scenarios. It's really a beautiful thing to have a completely scripted environment-creation trick.
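A Vagrantfile sketch (Vagrant 1.x syntax of this era; the box name and provision command are examples) shows how little it takes to describe a whole environment:

```ruby
# Hypothetical Vagrantfile: the entire VM definition lives in one file
Vagrant::Config.run do |config|
  config.vm.box = "precise64"
  # The provision step is scripted, so every `vagrant up`
  # rebuilds the environment the same way
  config.vm.provision :shell, :inline => "apt-get update && apt-get -y install nginx"
end
```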
kylewolfe — 2012-11-06T14:22:55-05:00 — #13
Wow. Name please?
kylewolfe — 2012-11-06T14:28:22-05:00 — #14
Have you tried Xubuntu? I believe it dumps most of the resource hogs, but still uses Ubuntu repositories to stay up to date.
cpradio — 2012-11-06T14:42:51-05:00 — #15
Can't say much about where I work now, as I've yet to figure out all of the Prod servers, but where I worked previously, we had 6 production servers, 3 public facing, and 3 private, all behind a load balancer that also acted as a firewall.
All of them ran Windows Server 2003 or Windows Server 2008 (as it was a .NET shop). The 3 public servers were the only servers with access to the 3 private servers (for web applications, then there are still database servers and a few application servers that are private). The 3 private servers were the only servers that had access to the Database Servers and other internal applications.
Developers had Windows XP or Windows 7 (all developers were ultimately upgraded to Windows 7 with 8 GB of RAM). Our Development and Test environments were similar to Production just on a smaller scale (1-2 Public Servers, 1-2 Private Servers).
From a personal freelance standpoint, I have a test machine that runs Debian, Apache and PHP (haven't gone into NGINX or Lighttpd yet), and I have a separate Debian machine for MySQL and PostgreSQL that is private (no external connections).
My host has a similar setup, running Debian, Apache and PHP on their public facing servers and MySQL/PostgreSQL on less public servers (you have to manually allow external/public access).
serverstorm — 2012-11-06T15:09:03-05:00 — #16
No, I haven't run that distro. I've run Mint, Peppermint, Ubuntu, Debian, Fedora, CentOS, ClearOS, and Arch Linux, but all of the Debian-based distros (Mint, Peppermint, Ubuntu...) use Debian's apt-get packaging system. If I really want to strip something down to no-nonsense then I'll use Debian, as it is a conservative distro and will behave, though I won't expect good multimedia support or enhanced features. I may try Xubuntu and see if it is superior to the other distros I try, so thanks for the tip about it :).
serverstorm — 2012-11-06T15:16:13-05:00 — #17
Yes, an event like that, if navigated well, was sure to give you confidence. They sound like a great option!
Yes, I too favour the idea of having the security in front of the web server. M0n0wall is a good security device (that I haven't used in a while). I've fallen in favour of PFSense as it now supports VLANs and smart monitoring, as well as the standard Linux chaining and affiliated firewall/network features such as intrusion detection and DOS-attack blocking. I also use it for blocking I.P. zones, so I don't have to do it with scripts on the web server.
Thanks for the recommendation... It looks great and I plan to give it a whirl
wwb_99 — 2012-11-06T15:55:16-05:00 — #18
The host in question is Logicworks -- they are not cheap but they are a really great outfit. If anyone needs an intro PM me.
For the record, I totally agree PFSense is a superior platform in every way. The same CF-IDE trick works AFAIK; much better not to have moving parts in firewalls.
For the HTTP-layer stuff I prefer to have a reverse proxy standing in front of the app server rather than use a firewall device.
cpradio — 2012-11-06T16:33:15-05:00 — #19
I run Debian for my primary system and didn't have any issues getting good multimedia support... or do you mean by default? I did have to add deb-multimedia.org to my apt/sources.list to get it. But I also had to add the Plex media server source, and that works killer on Debian (much better than it did on my test Windows machine).
serverstorm — 2012-11-06T16:52:59-05:00 — #20
Yes, I should have clarified that by default the multimedia support is not quite there. When I'm configuring business systems, I don't like having to mess around with such things, as I lose money per install. And yes, that means I have customers who have decided to switch their desktops to Linux after I show them what it can be like on my laptop. The cost savings, better security and lack of ongoing licensing costs have swayed a surprising number of customers. As such, I use distros that I can do my own Linux build with and have everything ready to go for networking, exchange server and multimedia.
Glad you like Debian.