Posts filed under ‘Symfony’

Configuration Management: Jump into the Kitchen

Configuration Management is an old workhorse that rarely gets much love outside of the Microsoft ecosystem.

Generally it’s a mechanism that lets you control the configuration and software available on a machine, but it’s usually clunky, brutally inefficient on the network and generally requires total control of the target machines.

Then along comes Opscode and opens up their Configuration Management Kitchen with Chef.  Chef is a lightweight approach to Systems Integration & Configuration Management (SI & CM for the light-hearted) built on Ruby/Rails/Gems that allows you to quickly deploy and configure software and services without requiring total domination.

I’ve had my eye on it for a while, and with the Virtual Machine environments I’ve been working on for Symfony and Zend I decided to dig in and give it a spin. I’m impressed – almost beyond words.

Chef depends on having Fully Qualified Domain Names up and running and can be a little quirky without them.
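If your machines don’t have proper DNS entries, a minimal workaround is to give each box a FQDN in /etc/hosts (and set the hostname to match). The names and addresses below are hypothetical – adjust them to your own setup:

# /etc/hosts on each box – hypothetical names and addresses
192.168.56.10   chef-server.example.local    chef-server
192.168.56.11   chef-client1.example.local   chef-client1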

The installation instructions for the Chef-Server and Chef-Client are clear and concise and can be found on the Opscode site.

You start by installing the Chef-Server, which provides the core backbone of your environment.  Once it’s up and running you have Chef running on Rails under Apache, providing a web and REST interface for clients (or nodes, in Chef parlance).  Here you can view and control the attributes of a node, examine your configuration scripts (Recipes) and authorise clients.  The GUI tools in the current (6.2) release are a little raw but functional, and the coming 6.4 release sharpens up the Web UI a lot (and brings with it a whole host of exciting features).

I set up the Chef server on a stand-alone VirtualBox machine with 256 MB of memory and a 3 GB disk, which has handled everything I’ve thrown at it so far.  You’ll need to log in to the Web UI using OpenID, making sure you append the appropriate domain to your login – full details of the OpenID providers and their naming schemes can be found on the OpenID site.

Once you have the server up and running you’ll need to install the chef-client on a host.  When it first runs, the client will connect to the server and register itself – it can take a few minutes for the registration to appear in the Chef Web UI.  You’ll need to fire up the Web UI on the server and authorise the client before you can do anything more with it.

Once it’s been authorised just run the chef-client again with:

sudo chef-client

When it completes you’ll see information about the client in the Nodes and Status panels of the Web UI.

If you don’t authorise a client on the server then you’ll see an HTTP 403 error when you run the chef-client.

Now that you have both the client and server up and running, you can get down to the real business of deploying something.

Open two SSH connections – one to the chef-server and another to the chef-client – and start by following the quick-start guide on the chef-server; in a couple of minutes you’ll have your first Chef recipe complete.  Now just drop into the cookbooks folder and copy the quick_start cookbook to /srv/chef/site-cookbooks:

cd cookbooks
cp -R ./quick_start /srv/chef/site-cookbooks/

Now refresh the Web UI and open the Recipes Panel and you’ll see the quick_start recipe that you just created listed.
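If you’re wondering what’s inside: a recipe is just Ruby using Chef’s resource DSL.  Here’s a minimal sketch of a recipe that drops a file into /tmp – this isn’t the exact quick_start recipe from the guide, just an illustration of the shape (resource syntax may vary slightly between releases):

# cookbooks/quick_start/recipes/default.rb – a hedged sketch, not the
# actual recipe from the quick-start guide
file "/tmp/deep_thought.txt" do
  owner "root"
  mode 0644
  content "The answer is 42."   # illustrative content only
  action :create                # no-op if the file already matches
end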

To apply the recipe to a node (your client), open up the Nodes panel in the Web UI and double-click on Recipes for it.  In Chef 6.2 you’ll get an awful textbox with the node’s information in JSON format.  Scroll down to the bottom and you’ll find the recipes entry – inside the [] put "quick_start" (including the quotes) and hit Save.

The end result should look something like:

"recipes": [
  "quick_start"
],

If you did it right you’ll see the page update.  Another minor issue in the 6.2 release: if you didn’t update the JSON correctly, you’ll see a save that never completes.

All that’s left is to switch to the chef-client SSH terminal and have the client update itself:

sudo chef-client

A few seconds later the client will find that it has a new recipe and install it.  On the client, go to the /tmp folder and you’ll see deep_thought.txt from the Chef run 🙂

Now this seems like a lot of effort to get a text file to appear in a folder – but it’s just as simple to write a recipe that installs MySQL, PHP, Redmine, Symfony or Zend Server.  And it’s not just about installing packages – that’s already pretty simple using bash with apt or yum.  Using a recipe lets you ensure that installs are idempotent, or transactional: if one part fails, the machine is left in a known, reliable state.  With a failure in a plain script you can be left with partial installs or, worse, a machine in an unreliable or unworkable state.
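To make that concrete, here’s a hedged sketch of an idempotent install using Chef’s standard package and service resources – the package and service names are assumptions for a Debian-style box, and the DSL details may differ slightly between releases:

# Installing and running MySQL idempotently – running this recipe twice
# changes nothing the second time, as each resource checks state first.
package "mysql-server" do
  action :install          # no-op if the package is already installed
end

service "mysql" do
  action [:enable, :start] # start on boot and make sure it's running now
end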

One of the exciting aspects of all this is that it’s very easy to hook things together – not just on one machine but across all the machines in your environment, regardless of what OS they’re running.  A recipe to install Zend Server, Symfony or MySQL – or all of them as a single package – will work on Ubuntu, Red Hat, CentOS or most other variants.

Hooking into the infrastructure allows very simple approaches to things like provisioning, deployment and configuration of environments – in my case this allows:

  • Automated creation of a virtual machine instance
  • Automatic provisioning of the instance
  • Dynamic allocation and changing of the resources available to the instance (memory, disk, drives, etc.) – although with VirtualBox a reboot is needed for memory changes to take effect
  • Dynamic package and configuration management – allowing me (from within the VM instance) to switch its mode of operation and determine its role, so within minutes it changes from all-in-one (a complete LAMP stack on the instance) to the DB server role

Friday 29th May, 2009 at 1:48 pm 1 comment

General direction for the Virtual Machine for Frameworks (Symfony & Zend)

One of the pros in the Symfony Users Google group had some comments on the Virtual Machine for Symfony at Sipx.ws, and I wanted to share my thinking about my plans.

Generally when developing you should have an environment that matches the one you’ll be deploying to – having something as close as possible will save you time, effort and much pain.  There are, however, several scenarios for developers:

Targeted Deployment

Ideally your environment matches the one you’ll be deploying to. If you control the server infrastructure then this is less of a problem – you’ll build the server yourself (ideally via an automated deployment process), and building a VM from it is trivial.

If, however, you don’t control the server infrastructure, then you have a more complex situation to deal with. If the gods are smiling, they’ve built their server completely from public distros and repos and used a package manager for all installs. If this is the case you can dump the package list and server build, and rebase an image yourself. Often, however, they have a custom OS build (tweaked for whatever reason), local repositories (hopefully mirrored, but sometimes not) and a few extras thrown in. This makes building an image that represents the environment you’re going to use, while not impossible, generally non-trivial.

ServerGrove (http://www.servergrove.com/), forward-thinking and proactive, are interested in providing an image to their customers that does just this – allowing people to develop locally in an environment that represents where the application will be deployed.

Trends

A growing trend with hosting providers is allowing you to upload your own image to the hosting environment, letting you build your own OS (subject, of course, to licensing requirements). One of the aims of the VM was to provide a way for devs to start locally and then upload a copy of the image to the hosting environment. With a few caveats (mostly around networking) you’re all but assured of success for the deployed project, as you’ve been able to put it through its paces before uploading.

Non-Targeted Deployment

In this scenario the developer is building applications for unspecified servers – either because they don’t have or haven’t selected the hosting environment yet, they don’t have complete information from the project sponsor, or some other reason (it’s weird and wacky out there). Another possible deployment is Open Source projects, where the deployed application may run on any OS – and yet you’d like a common “known” environment for developers and end-users.

In this situation the VM helps both the developer and the project sponsor, as it’ll allow the dev to share the VM with the sponsor for testing and sign-off – essentially passing the monkey with respect to the hosting environment.

General Approach (now and 1.x)

The current approach I’ve taken is mainly aimed at providing a lean learning curve and a clean, repeatable environment for the community developing against Symfony and the Zend Framework (the Zend side is mostly a freebie, but is also aimed at helping people with Lucene search issues). With each build I test that all sf frameworks work by deploying a test application that covers ORMs, plugins, routing and the DB/httpd. I also try to ensure that the build is portable and therefore works with the major VM vendors (currently VirtualBox, VMware and Xen). The aim of the 1.0 release is to have something built and packaged ready to run – much like the sf sandbox currently works.

While VMs have been around for a while, and installing Linux has become more user-friendly, there are still a lot of areas where you can trip up building images and installing OSes. One of the aims was to remove this as a blocker for devs who just want to get down to developing applications.

With the release of 1.0 the following images and deployments should be available:

  • Images
    • devSFCoreServer
    • devSFCoreIDE
  • Deployments
    • Stand-alone (everything in one box for simple dev projects)
    • Load balanced (built using devSFCore with configuration that puts the server into modes: lb [load balancer], web [web server, memcached and no db], db [db, svn, no httpd but an HTTP management interface])
  • Project helpers
    • Helpers to aid start-up of projects and development: things like building the root development folder, linking to the version of the framework you wish to use, creating and configuring the DB, configuring the application to use the DB and running tests on the initial setup. Think a2ensite for creating a symfony application and you’ll get the picture. The intention isn’t so much to dumb down as to streamline, and to ease adoption by those not that familiar with symfony. Included will be a log of the actual steps involved, to help devs understand what to do.

With Deployments the general idea is that you’ll be able to run multiple images in different modes – to facilitate testing, architecture scenarios, etc. With this you run one image as a DB, several as web servers, drop in a load balancer – and hey presto, you have a way to test how your application performs when scaling out.

With the 1.x branch I’m intending to take a much lighter approach – still with some base images for various distributions and deployments (there will be standard and live images, along the same lines as the live CDs used by some distributions), but using some of the approaches you’ve outlined for providing the packages and for linking in with repositories. This approach, however, requires some infrastructure to support it – and infrastructure = time + resources, and resources = money.

This approach essentially extends the current sf sandbox to a deployed-image model. It’ll work out compatibilities, issues and fixes; deal with things like PEAR and PECL dependencies and PDO; and handle the deployments listed above.

With 1.x come features for both devs and hosters (and it allows for Targeted Deployment). Hosters can build their base image, include the needed components and share it with their customers (the devs). Devs can download and use the image, and it’ll pull all the needed parts down. When they’re ready to deploy, they can provision and deploy the application from within the VM – with the provisioning on the hosting-provider side building the image locally, deploying it and then accepting the deployment of the application.

Should the dev decide to move to another hosting provider supporting this model – as it’ll be built using the same components (but probably a different base OS) – it should be a simple process to download the new provider’s base image, deploy from the current VM to the new VM, test and redeploy.


Thursday 21st May, 2009 at 4:34 pm Leave a comment

Virtual Machine for Frameworks (Symfony & Zend)

I’ve just launched a new website at Sipx.ws aimed at helping developers have a cleaner environment for developing and testing their Symfony applications.  Thanks to the great guys at ServerGrove, I managed to get the site up and running in no time.

Wednesday 20th May, 2009 at 6:31 pm 1 comment

Symfony YAML – A PHP library that speaks YAML

Fabien presenting the freshly hatched stand-alone components for PHP (I’m still wondering what he’s hiding behind his back 😛 )


Friday 15th May, 2009 at 5:34 pm Leave a comment

Symfony: Overloading and Overriding Plugins & Base classes

There are a couple of places that tend to cause confusion when people try to override Symfony and plugin functionality, and the autoloader doesn’t help when you’re trying to test things out.

You can generally create your own version of any class – if you put the new version in the right place and the file and class are named properly. The location you put the file in depends on a number of factors (is it a core module or a plugin?) and the scope you want to affect.

With plugins, generally the best approach is to start as local to your Symfony app as possible.

  1. Clear your cache – it doesn’t hurt to do this before and after you start making changes, and it’s a good habit to do it often
  2. Start by making a folder for the plugin in apps\<applicationName>\modules\<pluginName> (often you just make the folder rather than using the generator)
    • Depending on what you’re overloading, create the appropriate sub-folder here – so if you’re modifying a template, create a templates folder under the plugin folder you just created
  3. Now copy the existing file from the plugin to the folder you just created – it’ll be a good starting place for making any changes.  Copy the file that the autoloader will actually use, rather than a file named Base…
    • So if you’re looking to override the actions for sfGuard, it’s going to be under sfGuard\modules\sfGuardUser\actions\actions.class.php
    • Well-written plugins will use a base file – for sfGuard this is BasesfGuardUserActions.class.php – which allows easy overriding with your own functionality; you’ll see a reference to it at the top of the actions file
    • You’ll need to change the require/require_once statement at the top of the file you just copied to point to the correct place – once you override, the autoloader won’t be able to find the class you’re trying to include (see the sketch after this list)
    • This file will generally just be a placeholder, with all the work being done in the parent class.  You’ll need to refer to the base class, or have a decent IDE that gives you code assist or lets you explore the methods of the parent, to determine what method signatures are available to you
  4. Now you can implement your own functionality in the file – for an action, start with something simple like overriding a method and putting in die("hey - it worked! - Who's da man?!?!").  You can even put this just under the require statement to test, so you’ll know that it was your customised file that was included rather than the original one within the plugin itself
  5. Clear your cache again and fire up your browser to load a page under the application you used above
  6. Now that you’ve seen your die statement being hit, you can implement the actual code you want to happen
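Putting steps 3 and 4 together, here’s a sketch of what the overriding actions file can look like.  The class and file names follow sfGuardPlugin’s conventions as described above, but treat the require path and the action signature as assumptions to adjust for your project and symfony version:

<?php
// apps/frontend/modules/sfGuardUser/actions/actions.class.php
// An app-level override of sfGuardPlugin's user actions – a sketch.
// The require path is an assumption; point it at wherever the plugin's
// base class actually lives in your project.
require_once sfConfig::get('sf_plugins_dir')
  .'/sfGuardPlugin/modules/sfGuardUser/lib/BasesfGuardUserActions.class.php';

class sfGuardUserActions extends BasesfGuardUserActions
{
  // Override a single action; everything else falls through to the base class.
  // (Signature shown for symfony 1.1/1.2 – on 1.0, drop the $request argument.)
  public function executeIndex($request)
  {
    die("hey - it worked! - Who's da man?!?!");
  }
}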

The locations available for you to put custom classes are:

  • project\apps\thisApp\modules\moduleName\folderType\fileName
  • project\apps\thisApp\lib\folderType\fileName
  • project\apps\thisApp\lib\fileName
  • project\lib\folderType\fileName
  • project\lib\fileName

Where:

thisApp is your application name
moduleName is the name of your module or the plugin name
folderType is the type of folder – e.g. actions, templates, model, etc
fileName is the name of the file in the expected format (so actions.class.php for the actions class)


Friday 15th May, 2009 at 11:59 am Leave a comment

Spidermonkey in PECL « BombStrike’s blog

BombStrike has a new post about the SpiderMonkey interface for PHP, now available in PECL – here’s the post on his site.

The PHP JS lib has been put into PECL, which, with the coming release of PHP 5.3, will finally allow some interesting scenarios – especially when it comes to testing. It should be possible to extend the Symfony web tester to include JS behaviours, Ajax in test cases and a whole lot more!
tags: symfony, server-side JS


Thursday 14th May, 2009 at 3:36 pm Leave a comment

Symfony Debugging: Browser tips

Often people have issues with how best to debug their application from the browser when developing with Symfony.

There are a couple of tools available that make a dev’s life easier:

The winning combo of Firefox + Firebug allows you to see what’s on the page, inspect your form and input tags, and work out if there are problems locating page assets (like CSS, images and JS).  Just right-click on any page element and select “Inspect Element” and you can navigate the HTML in a quick and intuitive way.  Enable the Script, Net and Console panels for the site by clicking on their headings within Firebug and you’ll be able to examine which assets are missing, check load times and set breakpoints in your scripts – all in real time and on the fly.

Watching the Firebug Net panel you’ll see details of HTTP GETs, POSTs and AJAX requests, but it sometimes misses a few things – especially when you’re using Flash or Flex on the page, which make connections outside of the browser and are therefore not tracked by Firebug.

[Image: the Firebug Net panel]

In this case you’ll need a desktop proxy to monitor the connections, which will let you inspect and decode the traffic.  Fiddler is a lightweight desktop proxy for the Windows platform that does just this and more.  You can capture, review, decode, replay and construct requests – drilling into and examining what’s happening, and thereby getting a much clearer picture of the traffic between the browser and the server.


Thursday 14th May, 2009 at 2:28 pm Leave a comment
