Posts filed under ‘PHP’
One of the pros in the Symfony Users Google group had some comments on the Virtual Machine for Symfony at Sipx.ws, and I wanted to share the thinking behind my plans.
Generally when developing you should have an environment that matches the one you’ll be deploying to – having something as close as possible will save you time, effort and much pain. There are, however, several scenarios developers find themselves in:
Ideally your environment matches the one you’ll be deploying to. If you control the server infrastructure then this is less of a problem – you’ll build the server yourself (ideally via an automated deployment process), and building a VM from it is trivial.
If, however, you don’t control the server infrastructure then you have a more complex situation to deal with. If the gods are smiling, the hosts have built their server entirely from public distros and repos and used a package manager for all installs. If that’s the case you can dump the package list and server build, and rebase an image yourself. Often, though, they have a custom OS build (tweaked for whatever reason), local repositories (hopefully mirrored, but sometimes not) and a few extras thrown in. This makes building an image that represents the environment you’re going to use non-trivial, though not impossible.
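On a Debian-style host built purely from public repos, the dump-and-rebase step can be sketched roughly as follows (the file name is illustrative, the commands need root on the target, and both boxes must be on the same distro release):

```shell
# On the production host: capture the installed package set
dpkg --get-selections > packages.list

# On the local VM being rebased from the same distro release:
# feed the selections back in and let apt install the matching set
sudo dpkg --set-selections < packages.list
sudo apt-get dselect-upgrade
```

This only covers packages – any custom configs or extras thrown in outside the package manager still have to be copied across by hand.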
ServerGrove (http://www.servergrove.com/), forward-thinking & proactive, is interested in providing an image to its customers that does just this – allowing people to develop locally in an environment that matches where the application will be deployed.
A growing trend among hosting providers is to let you upload your own image to the hosting environment, allowing you to build your own OS (subject, of course, to licensing requirements). One of the aims of the VM was to provide a way for devs to start locally and then upload a copy of the image to the hosting environment. With a few caveats (mostly around networking) you’re all but assured of success for the deployed project, as you’ve been able to put it through its paces before uploading.
In this scenario the developer is building applications for unspecified servers – either because they haven’t selected a hosting environment yet, they don’t have complete information from the project sponsor, or for some other reason (it’s weird and wacky out there). Another case is Open Source projects, where the deployed application may run on any OS – and yet you’d like a common, known environment for developers and end-users.
In this situation the VM helps both the developer and the project sponsor, as it allows the dev to share the VM with the sponsor for testing and signoff – essentially passing the monkey with regard to the hosting environment.
General Approach (now and 1.x)
The current approach I’ve taken is mainly aimed at providing a gentle learning curve and a clean, repeatable environment to the community developing against Symfony and the Zend Framework (the Zend side is mostly a freebie, but is also aimed at helping people with Lucene search issues). With each build I test that all sf frameworks work by deploying a test application that covers ORMs, plugins, routing and the DB/httpd. I also try to ensure the build is portable, so it works with the major VM vendors (currently VirtualBox, VMware and Xen). The aim of the 1.0 release is to have something built and packaged ready to run – much like the sf sandbox works today.
While VMs have been around for a while – and installing Linux has become more user friendly – there are still plenty of places you can trip up when building images and installing OSs. One of the aims was to remove this as a blocker for devs who just want to get down to developing applications.
With the release of 1.0 the following images and deployments should be available:
- Deployments
  - Stand alone (everything in one box for simple dev projects)
  - Load balanced (built using devSFCore, with configuration that puts the server into one of three modes: lb [load balancer], web [web server & memcached, no db], db [db & svn, no httpd but an http management interface])
- Project helpers
  - Helpers to aid the start-up of projects and development: building the root development folder, linking to the version of the framework you wish to use, creating and configuring the DB, configuring the application to use the DB and running tests on the initial setup. Think a2ensite for creating a symfony application and you’ll get the picture. The intention isn’t so much to dumb down as to streamline, and to ease adoption by those less familiar with symfony. A log of the actual steps involved will be included to help devs understand what to do.
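A minimal sketch of what such a project helper might do – the script name, folder layout and framework path are my assumptions for illustration, not the actual tool:

```shell
#!/bin/bash
# sf-init.sh – hypothetical helper sketch (not an actual Symfony tool):
# scaffold a project root, link a chosen framework version, log each step.
set -e
PROJECT=${1:-myproject}
SF_HOME=${2:-/usr/share/symfony/1.2}   # assumed location of a shared framework install

mkdir -p "$PROJECT/apps" "$PROJECT/lib/vendor" "$PROJECT/web" "$PROJECT/config" "$PROJECT/log"
ln -sfn "$SF_HOME" "$PROJECT/lib/vendor/symfony"   # a dangling link is fine for this sketch

# record the steps taken so devs can see what was done and learn the layout
echo "scaffolded $PROJECT; symfony -> $SF_HOME" >> "$PROJECT/log/setup.log"
echo "next: configure the DB in config/databases.yml and run your tests"
```

The logging line is the important part – the helper teaches the layout by showing each step it performed.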
With deployments the general idea is that you’ll be able to run multiple images in different modes – to facilitate testing, architecture scenarios, etc. You run one image as a DB, several as web servers, drop in a load balancer – and hey presto, you have a way to test how your application performs when scaling out.
With the 1.x branch I’m intending to take a much lighter approach – still with base images for various distributions and deployments (there will be standard and live images, along the lines of the live CDs used by some distributions), but using some of the approaches you’ve outlined for providing the packages and for linking in with repositories. This approach, however, requires some infrastructure to support it – and infrastructure = time + resources, and resources = money.
This approach essentially extends the current sf sandbox to a deployed-image mode. It’ll work out compatibilities, issues and fixes, deal with things like PEAR and PECL dependencies and PDO, and handle the deployments listed above.
1.x brings features for both devs and hosters (and allows for targeted deployment). Hosters can build their base image, include the needed components and share it with their customers (the devs). Devs can download and use the image, and it’ll pull all the needed parts down. When they’re ready to deploy, they can provision and deploy the application from within the VM – with the hosting provider’s side building the image locally, deploying it and then accepting the deployment of the application.
Should the dev decide to move to another hosting provider supporting this model – since it’ll be built from the same components (though probably a different base OS) – it should be a simple process to download the new base image, deploy from the current VM to the new VM, test and redeploy.
I’ve just launched a new website aimed at helping developers have a cleaner environment for developing & testing their Symfony applications at Sipx.ws. Thanks to the great guys at ServerGrove – I managed to get the site up and running in no time.
Fabian presenting the freshly hatched stand alone components for PHP (I’m still wondering what he’s hiding behind his back 😛 )
There are a couple of places that tend to cause confusion when people try to override Symfony & plugin functionality, and the autoloader doesn’t help when you’re trying to test things out.
You can generally create your own version of any class – provided you put the new version in the right place and the file and class are named properly.
Where you put the file depends on a number of factors (is it a core module or a plugin?) and the scope you want to affect.
With plugins, generally the best approach is to start as local to your Symfony app as possible:
- Clear your cache – it doesn’t hurt to do this before and after you start making changes, and it’s a good habit to get into
- Start by making a folder for the plugin’s module in apps/<applicationName>/modules/<pluginName> (often you just make the folder rather than using the generator)
- Depending on what you’re overloading, create the relevant sub-folder here – so if you’re modifying a template, create a templates folder under the module folder you just created
- Now copy the existing file from the plugin into the folder you just created – it’ll be a good starting point for your changes. Copy the file the autoloader loads first, rather than a file whose name starts with Base…
- So if you’re looking to override the actions for sfGuard, that file will be under plugins/sfGuardPlugin/modules/sfGuardUser/actions/actions.class.php
- Well-written plugins use a base file – for sfGuard this is BasesfGuardUserActions.class.php – which makes it easy to override with your own functionality; you’ll see a reference to it at the top of the actions file
- You’ll need to change the require/require_once statement at the top of the file you just copied so it points to the correct place – once you override, the autoloader won’t be able to find the class you’re trying to include from the new location
- This file will generally just be a placeholder, with all the work being done in the parent class. You’ll need to refer to the base class – or have a decent IDE with code assist or method exploration – to determine what method signatures are available to you
- Now you can implement your own functionality in the file – for an action, start with something simple like overriding a method and putting in die(“hey – it worked! Who’s da man?!?!”). You can even put this just under the require statement, so you’ll know it was your customised file that was included rather than the one inside the plugin itself
- Clear your cache again and fire up your browser to load a page under the application you used above
- Once you see your die statement being hit, you can implement the actual code you want to happen
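The folder-and-copy steps above can be sketched as shell commands. To keep the example self-contained it first creates a mock plugin tree standing in for sfGuard – real plugin contents and relative paths will differ, so treat the sed fix-up as illustrative:

```shell
#!/bin/bash
set -e
APP=frontend   # assumed application name

# --- mock: a plugin shipping a thin actions file (stands in for sfGuard) ---
mkdir -p plugins/sfGuardPlugin/modules/sfGuardUser/actions
cat > plugins/sfGuardPlugin/modules/sfGuardUser/actions/actions.class.php <<'PHP'
<?php
require_once dirname(__FILE__).'/../lib/BasesfGuardUserActions.class.php';
class sfGuardUserActions extends BasesfGuardUserActions {}
PHP

# --- the override steps: local module folder, then copy the actions file ---
mkdir -p apps/$APP/modules/sfGuardUser/actions
cp plugins/sfGuardPlugin/modules/sfGuardUser/actions/actions.class.php \
   apps/$APP/modules/sfGuardUser/actions/

# fix the require path: the relative dirname(__FILE__) now resolves from the
# app folder, so point it back at the plugin's base class explicitly
sed -i "s|dirname(__FILE__).'/../lib|dirname(__FILE__).'/../../../../../plugins/sfGuardPlugin/modules/sfGuardUser/lib|" \
    apps/$APP/modules/sfGuardUser/actions/actions.class.php
```

After this, the copied file in apps/ is the one the autoloader picks up, and it still extends the plugin’s base class.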
The locations available for your custom classes follow this pattern:
apps/<thisApp>/modules/<moduleName>/<folderType>/<fileName>
- thisApp is your application name
- moduleName is the name of your module or the plugin name
- folderType is the type of folder – i.e. actions, templates, model, etc
- fileName is the name of the file in the expected format (so actions.class.php for the actions class)
People often have trouble working out how best to debug their application from the browser when developing with Symfony.
There are a couple of tools available that make the dev’s life easier:
The winning combo of Firefox + Firebug lets you see what’s on the page, inspect your form and input tags, and work out whether there are problems locating page assets (like CSS, images and JS). Just right-click on any page element and select “Inspect Element” to navigate the HTML in a quick and intuitive way. Enable the Script, Net & Console panels for the site by clicking on the headings within Firebug, and you can examine which assets are missing, check load times and set breakpoints in your script – all in realtime and on the fly.
Watching the Firebug Net panel you’ll see details of HTTP GETs, POSTs and AJAX requests, but it sometimes misses a few things – especially when you’re using Flash or Flex on the page, which open their own connections outside the browser and therefore aren’t tracked by Firebug.
In that case you’ll need a desktop proxy to monitor the connection – one that can proxy all connections and let you inspect & decode the traffic. Fiddler is a lightweight desktop proxy for the Windows platform that does just this and more: you can capture, review, decode, replay and construct connections, drilling into what’s happening between the browser and the server for a much clearer picture.
I’ve used ipvs (http://www.linuxvirtualserver.org/software/ipvs.html) effectively on a few sites for clients – it’s more scalable than using reverse proxies.
It’s a handy, fast and efficient way to:
- load balance
- manage traffic to the cluster (allowing you to bring servers online/offline transparently, and to handle migrations)
- firewall the cluster and back-end services
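As a sketch, the load-balancing part of such a setup with ipvsadm might look like this – the VIP and real-server addresses are invented, and the commands need root plus the IPVS kernel modules, so treat it as illustrative rather than a drop-in config:

```shell
# virtual service: the VIP clients hit, weighted least-connection scheduling
ipvsadm -A -t 192.0.2.10:80 -s wlc

# real web servers behind it (NAT/masquerading mode)
ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.11:80 -m
ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.12:80 -m

# taking a server offline for maintenance or migration is one command
ipvsadm -d -t 192.0.2.10:80 -r 10.0.0.12:80
```

Because real servers can be added and removed on a live table like this, bringing machines in and out of the cluster is transparent to clients.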
I’m not a fan of moving the ORM layer onto its own tier – in terms of bang per buck it’s just not efficient or cost effective.
Logical separation is more important than physical separation. It’s enough to use a dedicated DB server and optimise the machine for the purpose.
One of the hardest problems is dealing with assets in dynamic sites – images, movies, etc – when you have multiple servers.
Shared file systems (i.e. NFS) just don’t cut it. For a couple of clients I’ve used libraries built on FUSE, though OS support can be patchy.
The handy thing about FUSE is that you can use it fairly easily in conjunction with CDNs – but planning the financials is complex, and it’s something you need to consider in your architecture.
FUSE in local mode is easy to set up, scalable, fault tolerant and fast – most hosting providers have gigabit local network connections (and local network traffic isn’t billable). A couple of hosting providers implement local CDNs which they make available to clients, but these are few and far between.
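As one example of the FUSE approach, a shared asset store on another box can be mounted over the local network with sshfs (one of many FUSE filesystems – the host, user and paths here are invented, and the fuse tools must be installed):

```shell
# mount the asset server's export on each web node over the local gigabit link
sshfs assets@10.0.0.20:/var/www/assets /var/www/shared -o reconnect,allow_other

# unmount when taking the node out of rotation
fusermount -u /var/www/shared
```

Each web server then serves /var/www/shared as if it were local, while the files live in one place.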
For most projects I tend to recommend the VPS route rather than dedicated machines. It’s cost effective, allows for growth, machines can be provisioned in minutes rather than days and you can respond quickly if traffic increases. Allocating or de-allocating extra resources is usually just a few clicks away, and if you start exceeding your optimal utilisation you can provision another machine, hot-tweak the IPVS table and suddenly have another machine serving your users. Good hosting providers even have APIs that allow your application to adjust its resources up or down from within the app. If it’s a short-term spike (due to a promotion, press, etc) then when things calm down a few days later you can hot-tweak the IPVS table again, un-provision the machine and hey presto – you’ve only incurred costs for the duration.
Implementing this type of approach means understanding when it’s best to scale up and when to scale out – and it’s hard to settle on a strategy until you have optimised your environment and have accurate metrics on how your application performs. With those you can set thresholds, and with monitoring in place the application can notify you when they’re exceeded.
Develop apps with the Aptana IDE and deploy from within the IDE directly to the Aptana cloud. It supports SVN, staging and live environments – very handy and very cool.