Posts filed under ‘Development’

Titanium Android Modules

We’ve been beavering away on our mobile client and, where possible, we’ve wrapped up parcels of code into modules for Titanium.

We have a few out and available now – most are ready for use, though a few are still in beta.

The modules available for Android are:

Paypal (Mobile payments) – http://github.com/dasher/titanium_mobile/tree/integration-paypal

UrbanAirship (Push notifications) – http://github.com/dasher/titanium_mobile/tree/integration-airship

AdMob & Smaato (Mobile ads) – http://github.com/dasher/titanium_mobile/tree/integration-ads

Google Maps (overlays & Polygons) – http://github.com/dasher/titanium_mobile/tree/master-integration

A pull request has been submitted to Appcelerator for integration with Titanium – so with luck they’ll appear in mainline sometime soon 🙂


Monday 5th July, 2010 at 5:51 pm 3 comments

Technology Clinic in Barcelona

Need some direction about how to use technology to bring an idea to market?

Continue Reading Monday 19th April, 2010 at 1:28 pm 1 comment

Cognition (expanded)

There are several underlying problems with cognition, and they are different from what most people expect.

The primary issue is one of perception, where too much emphasis is attributed to the human senses (primarily sight and sound) – which, as I’ve mentioned before, are just inputs.  As you’ll know from physics, you’ll often see simple patterns repeated in many different fields – it’s unlikely that cognitive processes will be any different when dealing with sound/sight and thought.

The next issue is that many fall foul of attempting to describe the system in terms they can understand – a natural approach, but essentially it boils down to pushing grammar parsers and hand lexers with too much forward weighting to identify external grammar (essentially pre-weighting the lexers with formal grammar).  It’s an approach that can produce interesting results, but it isn’t cognition and fails as an end game for achieving it.  This is essentially the approach used in current machine translation processes in its various forms.

The key fundamental issue is much simpler and relates to pattern, reduction & relationship.  It’s an area that had some activity a while ago in various forms (cellular networks, etc.) but fell by the wayside, generally due to poor conceptual reference frameworks and the over-emphasis on modelling approaches used in nature (neural networks).

Now comes the time of definitions – a vehicle to ensure we’re on the same page 🙂

Pattern:
Cognitive processes thrive on patterns – they’re one of the main drivers behind how a cognitive process perceives, processes and responds to information.  There’s a constant search to find similarities between what is perceived and what is known.  It’s a fuzzy matching system that is rewarded, in the sense that it promotes change or adaptation, as much by differences as by similarities.  When thinking about similarities, a handy shorthand is to think about something being true or false.  Don’t confuse true/false with the general definitions of the terms – it’s more about a sense of confidence.  If something has a high confidence of being valid then it is true.  The threshold of confidence is something that evolves and adapts within the cognition over time (essentially as a result of experience).
The development of patterns is both external (due to an external perception or input) and internal.  To avoid turning this comment into something massive (and boring you 🙂 ), think along the lines of the human cognitive process and the subconscious or dreams.

Reduction:
Reduction happens at several key stages – essentially it’s when a domain of experience breaches a threshold.  It’s a way of reducing the processing required to a more automatic response.  Think along the lines of short-circuit expressions.  It’s a fundamental part of the cognitive process.  From a human cognitive perspective you have probably seen it in your climbing and in your learning of the trumpet.  We often express it as “having the knack” or “getting the hang” of something.
It’s important for two reasons: a) it means the process has gained knowledge about a domain; b) it allows the cognitive process to further explore a domain.  While Reduction is a desirable end-game, it is not The End from a cognitive process perspective.  The meta information for this node of Reduction combines again and again with Pattern and Relationship, allowing the process to reuse not just the knowledge itself but, more importantly, the lessons learned when achieving reduction.

Relationship:
Relationship is really a meta process for drawing together apparently unrelated information into something that’s cohesive and is likely either to help with identifying patterns or to bring about Reduction.  Relationship at first looks very similar to Pattern but differs in its ability to ask itself “what if” and in being able to adjust things (facts, perception, knowledge, Pattern, Reduction and versions of these [versions are actually quite important]) to suit the avenue that is being explored.  When expressed in human cognitive terms, think of Relationship as the subconscious, dreams or the unfolding of events in thought.  The unfolding of events is an example of versions.  Essentially Relationship is a simulation that allows the testing of something.

Saturday 20th June, 2009 at 3:02 pm Leave a comment

NLP: thinking…

I stumbled over an interesting post on another site (http://www.ioremap.net/node/283) by zbr, a very bright guy, which prompted a long comment.  I wanted to repost it here to expand upon later.

NLP based on a grammatical rules engine, while an interesting toy, is essentially a dead end when it comes to developing an approach to cognition.  Language is a complex system that has evolved over time and continues to evolve each and every day.  Grammar is an artificial construct that we have developed as a vehicle to describe language, but describing something doesn’t mean you understand it, or that it can be used to extract knowledge or understanding from what it attempts to describe.

Take the example from Cyc (http://www.cyc.com/cyc/technology/whatiscyc_dir/whatsincyc):
* Fred saw the plane flying over Zurich.
* Fred saw the mountains flying over Zurich.

Grammar itself will help develop a weighted tree of the sentences and you’ll be able to describe the scene – but the system will lack enough reference to be able to respond.  In such a situation what is the proper response?

To answer we need a reference model – which luckily we have all around us every day – people.  What do people do when they encounter a phrase and don’t have enough information to process it?  They ask a question.  What question would they ask?  Who’s Fred? What’s a plane?  What’s Zurich? Or would they laugh out loud as they exclaim (and picture) the mountains flying? (in itself a valid hypothesis)

Knowledge is obtained from the answer to the question – as it provides an addendum – a relationship between the phrase, the question and the answer.  Additionally the question itself often gets corrected – providing a short-circuit feedback loop to the knowledge acquisition process.  The description of the answer also provides information about the relationship of items in the phrase to other information stored within the system.

What’s Zurich?  Zurich is the name of a city in a country called Switzerland.

(assuming that there is some information about what a plane is, or that there is some relationship that interprets planes as machines like a car)
What color is the planes? Planes are all shapes and colors but this plane is bright green.
(note in this example the question indicates the singular but uses the plural – which is corrected in the answer)

The question provides insight into the internal state of the system we are interacting with (be it a computer program, a child we’re reading a story to or a colleague we are working with).  Inherent in any interaction are feedback, correction and the elucidation of terms and phrases to assist understanding between the parties.  Often it happens in a subconscious way and tends to be in the style of continuous correcting feedback (the same approach we use when we reach down to pick up an object off a surface).

A system needs to adapt & correct, to provide feedback (both to itself and to the other party it is interacting with) in a way that’s more than just updating state – but that also affects the very rules that make up the system itself.  This, however, is where many people tend to start going wrong.  A common pitfall is that the rules are considered to be the weightings between nodes of information or its relationships.  This means that the underlying reference system (often implemented as grammar rules) rarely changes – which in essence lobotomizes the system.  It’s an indicator that you’ve put too much forward knowledge into the system.

Take how children learn – not the mechanics but the approach that’s used, and not just for language or understanding (which is what we are trying to replicate when we implement the system) but with everything they do.  Nature, bless her cotton socks, is frugal with how she expends energy – so she reuses as much as possible (in essence cutting things down to their lowest common denominator).  You’ll see the same approach being used for walking, talking, breathing, looking and following objects – in everything that we see, do or think.  Over time the system specializes domains of knowledge – further compartmentalizing – but also reusing that which has been learned and found to be valid in the domain.  Which in turn allows for further specialization and compartmentalization.

Thursday 18th June, 2009 at 12:10 pm 3 comments

Configuration Management: Jump into the Kitchen

Configuration Management is an old horse that rarely gets any loving outside of the Microsoft environment.

Generally it’s a mechanism that allows you to control the configuration and software available on a machine, but it’s usually clunky, brutally inefficient on the network and generally requires total control of the target machines.

Then along comes Opscode and opens up their Configuration Management Kitchen with Chef.  Chef is a lightweight approach to Systems Integration & Configuration Management (SI & CM for the light-hearted) built on Ruby/Rails/Gems that allows you to quickly deploy and configure software and services without requiring total domination.

I’ve had my eye on it for a while, and with the Virtual Machine environments I’ve been working on for Symfony and Zend I decided to dig in and give it a spin.  I’m impressed – almost beyond words.

Chef depends on having Fully Qualified Domain Names up and running and can be a little quirky without them.

The installation instructions for the Chef-Server and Chef-Client are clear and concise and can be found here.

You start by installing the Chef-Server, which provides the core backbone to support your environment.  Once it’s up and running you have Chef running on Rails under Apache, providing a web and REST interface for clients (or nodes in the Chef parlance).  Here you can view and control the attributes of a node, examine your configuration scripts (Recipes) and authorise clients.  The GUI tools in the current (6.2) release are a little raw but functional, and the coming 6.4 release sharpens up the Web UI a lot (and brings with it a whole host of exciting features).  I set up the chef server on a stand-alone VirtualBox machine with 256 MB memory and a 3GB disk – which is working well for everything I’ve thrown at it so far.  You’ll need to log in to the Web UI using OpenID and ensure you use the appropriate domain appended to your login – full details of the OpenID providers and their naming schemes can be found on the OpenID site here.

It can take a few minutes for the registrations to appear in the Chef Web UI.

Once you have the server up and running you’ll need to install the chef-client on a host.  Once it’s running, the client will connect to the server and register itself.  You’ll need to fire up the Web UI on the server and authorise the client before you’ll be able to do anything more with it.

Once it’s been authorised just run the chef-client again with:

sudo chef-client

When it completes you’ll see the information about the client in the Web UI in the nodes and status panels.

If you don’t authorise a client on the server then you’ll see an HTTP 403 error when you run the chef-client.

Now that you have both the client and server up and running, you can get down to the real business of deploying something.

Open two SSH connections – one to the chef-server and another to the chef-client – and start simply by following their quick-start guide on the chef-server; in a couple of minutes you’ll have your first chef recipe complete.  Now just drop into the cookbooks folder and copy the quick_start cookbook to /srv/chef/site-cookbooks:

cd cookbooks
cp -R ./quick_start /srv/chef/site-cookbooks/
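
For reference, the heart of a cookbook like quick_start is a small Ruby recipe.  A minimal sketch of that kind of recipe looks something like the following – this is illustrative only; the file and template names here are assumptions rather than the exact contents of the cookbook:

# cookbooks/quick_start/recipes/default.rb (illustrative sketch)
# Render a template shipped with the cookbook out to /tmp on the node.
template "/tmp/deep_thought.txt" do
  source "deep_thought.txt.erb"   # ERB template bundled in the cookbook
  mode "0644"
end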

Now refresh the Web UI and open the Recipes Panel and you’ll see the quick_start recipe that you just created listed.

To apply the recipe to a node (your client) open up the nodes panel in the Web UI and double click on Recipes for it.  In Chef 6.2 you’ll get an awful textbox with the information for the node in JSON format.  Scroll down to the bottom and you’ll find the recipes entry – inside the [] put “quick_start” (include the “”) and hit save.

The end result should look something like:

"recipes": [
  "quick_start"
],

If you did it right you’ll see the page update.  Another minor issue in the 6.2 release is that if you didn’t update the JSON correctly you’ll see a save that never completes.

All that’s left is to switch to the chef-client SSH terminal and get the client to update itself now with:

sudo chef-client

A few seconds later the client will find that it has a new recipe and apply it.  On the client, go to the /tmp folder and you’ll see deep_thought.txt from the chef run 🙂

Now this seems like a lot of effort to get a text file to appear in a folder – but it’s just as simple to write a recipe that installs MySQL, PHP, Redmine, Symfony or Zend Server.  And it’s not just about installing packages – that’s already pretty simple using bash with apt or yum.  Using a recipe allows you to ensure that the installs are idempotent or transactional: if one part fails, you can ensure that the machine is left in a known, reliable state.  If you have a failure in a plain script you can be left with partial installs or, worse, a machine in an unreliable or unworkable state.
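
To give a concrete flavour of that, a package install recipe can be as small as the sketch below.  The cookbook path is hypothetical and the package/service names will vary by distro; the point is that each Chef resource is idempotent, so re-running the recipe on an already converged machine changes nothing:

# site-cookbooks/mysql_basic/recipes/default.rb (hypothetical example)
package "mysql-server" do
  action :install                # no-op if the package is already installed
end

service "mysql" do
  action [ :enable, :start ]     # enable at boot, start only if not running
end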

One of the exciting aspects to all of this is that it’s very easy to hook things together – not just on one machine but on all machines in your environment – regardless of what OS they’re running.  A recipe to install Zend Server, Symfony or MySQL – or all of them as a single package – will work on Ubuntu, Red Hat, CentOS or most other variants.

Hooking into the infrastructure allows very simple approaches to things like provisioning, deployment and configuration of environments – in my case this allows:

  • Automated creation of a virtual machine instance
  • Automatic provisioning of the instance
  • Dynamic allocation & changing of the resources available to the instance (Memory, Disk, Drives, etc) although with VirtualBox a reboot is needed for memory changes to take effect.
  • Dynamic package and configuration – allowing me (from within the VM instance) to switch its mode of operation and determine its role, so within minutes it changes from all-in-one (complete LAMP on the instance) to the DB Server role (sketched below)
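
As a rough sketch of how that mode switching hangs together, a recipe can simply branch on a node attribute.  The attribute name and package lists below are made up for illustration:

# hypothetical recipe fragment: choose packages based on a node attribute
case node[:server_mode]          # e.g. "all_in_one" or "db"
when "db"
  package "mysql-server"         # DB role: just the database
else
  %w{ apache2 php5 mysql-server }.each do |pkg|
    package pkg                  # all-in-one: complete LAMP on the instance
  end
end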

Friday 29th May, 2009 at 1:48 pm 1 comment

General direction for the Virtual Machine for Frameworks (Symfony & Zend)

One of the pros in the Symfony Users Google group had some comments on the Virtual Machine for Symfony at Sipx.ws, and I wanted to share my thinking about my plans.

Generally when developing you should have an environment that represents the one you’ll be deploying to – having something as close as possible will save you time, effort and much pain.  There are, however, several scenarios for developers:

Targeted deployment

Ideally your environment matches the one you’ll be deploying to. If you control the server infrastructure then this is less of a problem – you’ll build the server yourself (ideally via an automated deployment process) and building a VM from this is trivial.

If, however, you don’t control the server infrastructure then you have a more complex situation to deal with. If the gods are smiling then they’ve built their server completely from public distros and repos and used a package manager for all installs. If this is the case you can dump the package list and server build – and rebase an image yourself. Often, however, they have a custom OS build (tweaked for whatever reason), local repositories (hopefully mirrored, but sometimes not) and a few extras thrown in. This makes building an image that represents the environment you’re going to use, while not impossible, generally non-trivial.

ServerGrove (http://www.servergrove.com/), forward-thinking & proactive, are interested in providing an image to their customers that does just this – allows people to develop locally in an environment that represents where the application will be deployed.

Trends

A growing trend is hosting providers allowing you to upload your own image to the hosting environment, which lets you build your own OS (subject of course to licensing requirements). One of the aims of the VM was to provide a way for devs to start locally and then upload a copy of the image to the hosting environment. With a few caveats (mostly around networking) you’re assured of 100% success for the deployed project, as you’ve been able to put it through its paces before uploading.

Non-Targeted Deployment

In this scenario the developer is building applications for unspecified servers – either because they don’t have or haven’t selected the hosting environment yet, they don’t have complete information from the project sponsor, or some other reason (it’s weird and wacky out there). Another possible deployment is Open Source projects, where the deployed application may end up on any OS – and yet you’d like to have a common “known” environment for developers and end-users.

In this situation the VM helps both the developer and the project sponsor – as it allows the dev to share the VM with the sponsor for testing and signoff. Essentially it passes the monkey with respect to the hosting environment.

General Approach (now and 1.x)

The current approach I’ve taken is mainly aimed at providing a lean learning curve and a clean & repeatable environment to the community developing against Symfony and the Zend Framework (the Zend side is mostly a freebie, but it’s also aimed at helping people with Lucene search issues). With each build I test to ensure that all sf frameworks work by deploying a test application that covers ORMs, plugins, routing and the DB/httpd. With the build I try to ensure that it’s portable and therefore works against the major VM client vendors (currently VirtualBox, VMware and Xen). The aim of the 1.0 release is to have something built and packaged ready to run – much like the sf sandbox currently works.

While VMs have been around for a while – and while installing Linux has become more user friendly – there are still a lot of areas where you can trip up building images and installing OSes. One of the aims was to remove this as a blocker for devs wanting to just get down to developing applications.

With the release of 1.0 there should be the following images and deployments available:

  • Images
      • devSFCoreServer
      • devSFCoreIDE
  • Deployments
      • Stand-alone (everything in one box for simple dev projects)
      • Load balanced (built using devSFCore with configuration that puts the server into modes: lb [load balanced], web [web server, memcached & no db], db [db, svn, no httpd but an http management interface])
  • Project helpers
      • Helpers to aid start-up of projects and development. Things like building the root development folder, linking to the version of the framework you wish to use, creating and configuring the DB, configuring the application to use the DB and running tests on the initial setup. Think a2ensite for creating a symfony application and you’ll get the picture. The intention isn’t so much to dumb down – but to streamline and to facilitate adoption by those not that familiar with symfony. Included will be log creation of the actual steps involved to help devs understand what to do.

With Deployments the general idea is that you’ll be able to run multiple images in modes – to facilitate testing, architecture scenarios, etc. With this you run one image as a DB, several as web servers and drop in a load balancer – and hey-presto you have a way to test how your application performs when scaling out.

With the 1.x branch I’m intending to go with a much lighter approach – still with some base images for various distributions and deployments (there will be standard and live images along the same approach as the live-cd used with some distributions) but using some of the approaches you’ve outlined for providing the packages and for linking in with repositories. This approach however requires some infrastructure to support it – and infrastructure = time + resources and resources = money.

This approach essentially extends the current sf sandbox to a deployed-image mode. It’ll work out compatibilities, issues and fixes, deal with things like PEAR and PECL dependencies and PDO, and handle the deployments listed above.

With 1.x come features for both devs and hosters (and it allows for Targeted deployment). Hosters can build their base image, include the needed components in the image and share it with their customers (the devs). Devs can download and use the image – and it’ll pull all the needed parts down. When they are ready to deploy, then from within the VM they can provision and deploy the application, with the provisioning on the hosting provider’s side consisting of building the image locally, deploying it and then accepting the deployment of the application.

Should the dev decide to move to another hosting provider supporting this model – as it’ll be built using the same components (but probably a different base OS) – then it should be a simple process to download their base image, deploy from the current VM to the new VM, test and redeploy.


Thursday 21st May, 2009 at 4:34 pm Leave a comment

Virtual Machine for Frameworks (Symfony & Zend)

I’ve just launched a new website at Sipx.ws aimed at helping developers have a cleaner environment for developing & testing their Symfony applications. Thanks to the great guys at ServerGrove, I managed to get the site up and running in no time.

Continue Reading Wednesday 20th May, 2009 at 6:31 pm 1 comment
