Posts filed under ‘observations’

JazzTel – Theft, Deception or just technical blunders?

We’ve been with Jazztel for a couple of years and generally they’ve been pretty decent, but around six months ago we noticed that the download speed wasn’t quite what it should be.

Our home setup is a little odd – the office is at the back of the building, almost as far from the incoming ADSL connection as you can get. We have a Wireless N bridge between the two rooms – it’s a little temperamental but generally works. There’s a plethora of wifi networks around us and it’s sometimes hard to find a clear channel without too much interference – so I’d assumed the connection speed I was getting in the office was down to that.

More recently I’ve been able to do some proper testing of our setup and, to my surprise, I found that the local network was running fine. Not great, but easily managing transfers of 5-10 MB/s.

So I picked up the phone and called Jazztel’s usually helpful tech support to get the issue fixed.

ADSL speed depends a lot on how far you are from the exchange – so reaching the full 20Mb service we were paying for was always unlikely – but we were getting an average transfer speed of 110 KB/s, well under 1Mb, so something was up.

Jazztel tech support has changed a lot – often the case as a company grows – and it hasn’t changed for the better. The first person we managed to reach was polite, helpful and quickly found that our line speed had been capped at 1Mb, even though their system reports that we should be getting the 20Mb service. Ok – cool – so now we know it’s not an ADSL technical issue, it’s config related. Unfortunately he didn’t know how to get the cap removed – no worries, he says – he’s passed it up the management food chain and we’ll get a call back the next day.

Days come and go without a call, so we call back. The experience this time is worse – we reach an aggressive tech who tells us there is no problem and what we have is the best we’ll get. When I explain that we’ve been told otherwise and that he needs to look more closely, he hangs up the phone. We end up talking to Customer Service instead, and the usual tech-support/customer-service ping-pong begins.

Yes, there’s an issue, but we don’t know how to fix it – if you’d like it fixed you need to call tech support. Tech support then puts you straight back through to Customer Service – huh!?

Some of the tech support staff can see the issue; others can’t be bothered to look past the automated script their call centre gives them.

The cap on the line is plainly visible – and it’s 5% of the service being paid for. The actual throughput is another 10% or so below that – so although they’re happy to take your cash for a 20Mb service, they’re delivering roughly 4.5% of it (a 1Mb cap on a 20Mb line is 5%; the ~0.9Mb we actually see is about 4.5%).

Now it might just be a technical blunder – some config was written badly and it’s choked the line – but the concern is that they don’t know how to fix it. Their system apparently doesn’t let them change the cap that’s been applied. That’s worrying, as it suggests management has prevented their techs from making the required change.

And if it’s a restriction put in place by management, then in my mind it’s policy – which makes it deception and theft.

It’s a shame – they’re a young company that showed real potential, not only delivering a decent service but also bringing a great ethic to how they worked with customers. Originally they provided a decent service for a good price without skimping on customer care, tech support or the other customer-oriented touches that make a good service.

ISPs often hide behind the ADSL “distance from the exchange” line to explain connection speeds – but this is the first time I’ve encountered hidden, management-driven caps on a service beyond the usual Acceptable Usage Policies.


Friday 27th August, 2010 at 3:17 pm 1 comment

Cognitive Context

Eliza brings a couple of things to the table that other systems don’t – mostly because it provides a way to quickly load some structure into a system, which then allows test data to be run against those structures.  It’s often a way to short-circuit starting from zero knowledge (a newborn infant) and bootstrap yourself a three-year-old.  A simple example is extracting sentences from a paragraph.  It can be used as a pre-parser, a post-parser or as a way of rephrasing data.  Rephrasing is a handy tool for testing validity.
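
As a minimal sketch of that idea (the patterns and rephrase templates are invented for illustration, not taken from any particular Eliza implementation): split a paragraph into sentences, then run each sentence through a small set of pattern-to-rephrase rules.

```python
import re

# Invented pattern -> rephrase templates in the spirit of Eliza: each pattern
# captures a fragment which the template reuses to restate the input.
REPHRASE_RULES = [
    (re.compile(r"^i think (?:that )?(?P<x>.+)$", re.I), "Why do you think {x}?"),
    (re.compile(r"^(?P<x>.+) is (?P<y>.+)$", re.I),      "Is {x} always {y}?"),
    (re.compile(r"^(?P<x>.+) are (?P<y>.+)$", re.I),     "Are all {x} {y}?"),
]

def extract_sentences(paragraph):
    """Crude sentence extraction: split on ., ! or ? followed by whitespace."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", paragraph) if s.strip()]

def rephrase(sentence):
    """Return a rephrased form of the sentence if any rule matches, else None."""
    stripped = sentence.rstrip(".!?")
    for pattern, template in REPHRASE_RULES:
        match = pattern.match(stripped)
        if match:
            return template.format(**match.groupdict())
    return None

if __name__ == "__main__":
    text = "Planes are machines. I think that Zurich is a city."
    for sentence in extract_sentences(text):
        print(sentence, "->", rephrase(sentence))
```

Restating the input and checking that the rephrased form still makes sense is one cheap way of using rephrasing to test validity.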

It provides a vehicle for asking questions, but it also provides an approach to determining the relevance of information within the available context.  The term “available context” is deliberate, as it’s often interesting to limit the information available to cognitive processes.

You often ask questions about statements you encounter: Who, What, Where, When, Why

You’ll also have an operational mode that you’ll switch between: operational modes help to define how a cognitive process should approach the problem.

In the human model – think along the lines of how your state of mind changes based on the situational aspects of the encounter.  The context of the situation can be external, reflective or constructed.

External contexts are where we are expected to respond – maybe not to all input – but to some.  Often these situations are where an action or consensus is required.
Reflective contexts are where information is absorbed and processed – generally to bring out understanding or knowledge but also when a pattern is reverse fit – not proving a fact but re-assimilating input so that it correlates.
Constructed contexts are the “what if” situations and problem solving. Similar to the reflective context, but more about adjusting previous input to test its fitness against something new while attempting to maintain its validity against other knowledge.

You’ll often start in a reflective context as you assimilate information and then move into a constructed context to maximise knowledge domains.  Then you’ll often edge into the external context – while running reflective contexts in the background.  Periodically you’ll create constructed contexts to boot-strap knowledge domains and to learn from how knowledge domains are created (which in turn will tune how the reflective domains obtain information).

Essentially this is a lot of talk for saying that you don’t always need to provide an output.  🙂
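
A small sketch of that point (the context names follow the post; the trigger behaviour is my own invention): the same statement can be handled by each context, but only the external one is allowed to emit a response.

```python
from enum import Enum, auto

class Context(Enum):
    EXTERNAL = auto()     # expected to respond to (some) input
    REFLECTIVE = auto()   # absorb and correlate, no response required
    CONSTRUCTED = auto()  # "what if" exploration against adjusted knowledge

def process(statement, context, knowledge):
    """Route a statement through a context; only EXTERNAL yields an output."""
    if context is Context.REFLECTIVE:
        knowledge.append(statement)           # absorb for later correlation
        return None
    if context is Context.CONSTRUCTED:
        # explore "what if" against a copy so the original knowledge is
        # untouched; the result feeds later processing, not the caller
        _hypothesis = knowledge + [statement]
        return None
    # EXTERNAL: an action or consensus is required, so produce a response
    if statement in knowledge:
        return "Agreed."
    return f"What do you mean by '{statement}'?"

if __name__ == "__main__":
    knowledge = []
    print(process("Zurich is a city", Context.REFLECTIVE, knowledge))  # None
    print(process("Zurich is a city", Context.EXTERNAL, knowledge))    # Agreed.
    print(process("mountains fly", Context.EXTERNAL, knowledge))       # a question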

Now, I mentioned at the beginning that it’s often interesting to limit the information available to an available context – often it’s not only interesting but also important.  The available context is the set of prior knowledge, together with the rules (or the approach) for applying relationships between new information and that surrounding knowledge.

If all knowledge is available to an available context and the same approach is used for processing that information, then it’s hard for a system to determine the relevance or importance of the facts it should extract from data.  In essence, the system can’t see the wood for the trees.

Think about how you tackle a problem you encounter – you start with one approach based on your experience (so you’re selecting and limiting the tools you’re going to apply to deal with the situation) and based on how the interaction with the situation goes – you’ll adjust.  Sometimes you’ll find that you adjust to something very basic (keep it simple stupid or one step at a time) – at others you’ll employ more complex toolsets.

The Eliza approach can be used not just as a processing engine – but also as a way of allowing cognitive systems to switch or activate the contexts I mentioned earlier.  It’s also a handy pre-parser for input into SOAR.

One of the reasons for these recent posts is a visit to zbr’s site and reading about his interest in NLP and cognition.  I stumbled over his site when looking to understand more about POHMELFS, Elliptics and his DST implementation.  I’ve been looking for a parallel, distributed storage mechanism that is fast and supports a decent approach to versioning for an NLP & MT approach.  Distribution and parallelism are required because I implement a virtualised agent approach, which allows me to run modified instances of knowledge domains and/or rules to create dynamic contexts.  Versioning is important as it allows working with information from earlier time periods, replaying the formation of rules and assumptions, and greatly helps to roll back processing should the current decision tree appear fruitless.  In human cognitive terms these act as subconscious processing domains.
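
The versioning requirement is easy to show in miniature (a toy only, nothing to do with how POHMELFS or Elliptics actually work): a knowledge store that snapshots its state before exploring a decision branch and rolls back if the branch turns out to be fruitless.

```python
import copy

class VersionedStore:
    """Toy key/value store with snapshots, standing in for the versioning
    a real distributed backend would provide."""

    def __init__(self):
        self.data = {}
        self.versions = []  # list of (label, snapshot) tuples

    def put(self, key, value):
        self.data[key] = value

    def snapshot(self, label):
        """Record the current state before exploring a decision branch."""
        self.versions.append((label, copy.deepcopy(self.data)))

    def rollback(self, label):
        """Discard everything learned since `label` was taken."""
        for saved_label, saved_data in reversed(self.versions):
            if saved_label == label:
                self.data = copy.deepcopy(saved_data)
                return
        raise KeyError(f"no snapshot named {label!r}")

if __name__ == "__main__":
    store = VersionedStore()
    store.put("plane", "machine that flies")
    store.snapshot("before-hypothesis")
    store.put("mountains", "things that fly")   # explore a dubious branch
    store.rollback("before-hypothesis")         # branch was fruitless
    print(store.data)                           # {'plane': 'machine that flies'}
```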

Saturday 20th June, 2009 at 3:08 pm Leave a comment

Cognition (expanded)

There are several underlying problems with cognition which are different from what most expect.

The primary issue is one of perception: too much emphasis is attributed to the human senses (primarily sight and sound), which, as I’ve mentioned before, are just inputs.  As you’ll know from physics, you’ll often see simple patterns repeated across many different fields – it’s unlikely that cognitive processes are any different when dealing with sound, sight and thought.

The next issue is that many fall foul of attempting to describe the system in terms they can understand – a natural approach, but it essentially boils down to pushing grammar parsers and hand-built lexers with too much forward weighting to identify external grammar (essentially pre-weighting the lexers with formal grammar).  It’s an approach that can produce interesting results, but it isn’t cognition and fails as an end game for achieving it.  This is essentially the approach used in current machine translation processes in their various forms.

The key fundamental issue is much simpler and relates to pattern, reduction and relationship.  It’s an area that saw some activity a while ago in various forms (cellular networks, etc.) but fell by the wayside, generally due to poor conceptual reference frameworks and an over-emphasis on modelling approaches found in nature (neural networks).

Now comes the time of definitions – a vehicle to ensure we’re on the same page 🙂

Pattern:
Cognitive processes thrive on them – patterns are one of the main drivers behind how a cognitive process perceives, processes and responds to information.  There’s a constant search to find similarities between what is perceived and what is known.  It’s a fuzzy matching system that is rewarded – in the sense that it promotes change or adaptation – as much by differences as by similarities.  When thinking about similarities, a handy shorthand is to think of something being true or false.  Don’t confuse true/false with the general definitions of the terms – it’s more about a sense of confidence.  If something has a high confidence of being valid then it is true.  The threshold of confidence is something that evolves and adapts within the cognition over time (essentially as a result of experience).
The development of patterns is both external (due to an external perception or input) and internal.  To avoid turning this into something massive (and boring you 🙂 ), think along the lines of the human cognitive process and the subconscious or dreams.
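
A minimal sketch of the confidence idea (the similarity measure and threshold values are arbitrary choices, not a claim about how cognition scores a match): compare what is perceived against what is known, treat anything above an adaptable threshold as “true”, and let feedback nudge that threshold.

```python
from difflib import SequenceMatcher

class PatternMatcher:
    """Fuzzy matching: 'true' means confidence above an adaptable threshold,
    not logical truth."""

    def __init__(self, threshold=0.6):
        self.known = []
        self.threshold = threshold

    def learn(self, item):
        self.known.append(item)

    def judge(self, perceived):
        """Return (is_true, confidence, best_match) for a perception."""
        best, confidence = None, 0.0
        for item in self.known:
            score = SequenceMatcher(None, perceived, item).ratio()
            if score > confidence:
                best, confidence = item, score
        return confidence >= self.threshold, confidence, best

    def adapt(self, false_positive):
        """Experience tunes the threshold: a false positive tightens it,
        a missed match loosens it."""
        delta = 0.05 if false_positive else -0.05
        self.threshold = min(max(self.threshold + delta, 0.1), 0.95)

if __name__ == "__main__":
    matcher = PatternMatcher()
    matcher.learn("the plane flew over zurich")
    print(matcher.judge("a plane flying over zurich"))
    print(matcher.judge("the mountains flew over zurich"))
```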

Reduction:
Reduction happens at several key stages – essentially it’s what happens when a domain of experience breaches a threshold.  It’s a way of reducing the processing required to a more automatic response – think along the lines of short-circuit expressions.  It’s a fundamental part of the cognitive process.  From a human cognitive perspective you have probably seen it in your climbing and in your learning of the trumpet; we often express it as “having the knack” or “getting the hang” of something.
It’s important for two reasons: a) it means the process has gained knowledge about a domain; b) it allows the cognitive process to explore the domain further.  While reduction is a desirable end game, it is not The End from a cognitive-process perspective.  The meta information for this node of reduction combines again and again with pattern and relationship, allowing the process to reuse not only the knowledge itself but, more importantly, the lessons learned while achieving reduction.
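
A sketch of the short-circuit idea (the threshold and the “full processing” stand-in are illustrative only): while experience in a domain is low, every input goes through the slow, deliberate path; once the experience count breaches a threshold, the response becomes automatic.

```python
class Domain:
    """Once experience in a domain breaches a threshold, responses
    short-circuit to a learned fast path instead of full processing."""

    REDUCTION_THRESHOLD = 3  # illustrative: repetitions before "the knack"

    def __init__(self, name):
        self.name = name
        self.experience = {}   # input -> times it has been fully processed
        self.fast_path = {}    # input -> cached (reduced) response

    def full_processing(self, stimulus):
        # stand-in for the slow, deliberate path
        return f"worked out a response to {stimulus!r} from first principles"

    def respond(self, stimulus):
        if stimulus in self.fast_path:              # reduction achieved
            return self.fast_path[stimulus]
        response = self.full_processing(stimulus)
        count = self.experience.get(stimulus, 0) + 1
        self.experience[stimulus] = count
        if count >= self.REDUCTION_THRESHOLD:       # breached the threshold
            self.fast_path[stimulus] = response     # automatic from now on
        return response

if __name__ == "__main__":
    trumpet = Domain("trumpet")
    for _ in range(4):
        trumpet.respond("play middle C")
    print(trumpet.fast_path)   # the knack: one cached, automatic response
```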

Relationship:
Relationship is really a meta process for drawing together apparently unrelated information into something cohesive that is likely either to help with identifying patterns or to bring about reduction.  Relationship at first looks very similar to pattern, but differs in its ability to ask itself “what if” and in being able to adjust things (facts, perception, knowledge, pattern, reduction and versions of these – versions are actually quite important) to suit the avenue being explored.  In human cognitive terms, think of relationship as the subconscious, dreams or the unfolding of events in thought.  The unfolding of events is an example of versions.  Essentially, relationship is a simulation that allows the testing of something.
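
A tiny sketch of the “what if” aspect (the facts and the consistency rule are invented): adjust a version of the facts and test whether the result still holds against the rest of the knowledge, leaving the original untouched.

```python
def what_if(facts, adjustment, constraints):
    """Simulate: apply an adjustment to a version of the facts and test
    whether the result still satisfies the other knowledge (constraints)."""
    version = dict(facts)              # adjust a copy, never the original
    version.update(adjustment)
    violations = [name for name, rule in constraints.items() if not rule(version)]
    return version, violations

if __name__ == "__main__":
    facts = {"plane": {"can_fly": True}, "mountain": {"can_fly": False}}
    constraints = {
        # other knowledge the simulation must stay consistent with
        "only flying things pass over cities": lambda f: all(
            v["can_fly"] for v in f.values() if v.get("over_zurich")),
    }
    # what if it were the mountains flying over Zurich?
    _, violations = what_if(
        facts, {"mountain": {"can_fly": False, "over_zurich": True}}, constraints)
    print(violations)   # the hypothesis conflicts with existing knowledge
```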

Saturday 20th June, 2009 at 3:02 pm Leave a comment

NLP: thinking…

I stumbled over an interesting post on another site (http://www.ioremap.net/node/283) by zbr, a very bright guy, which prompted a long comment.  I wanted to repost it here to expand upon later.

NLP based on a grammatical rules engine, while an interesting toy, is essentially a dead end when it comes to developing an approach to cognition.  Language is a complex system that has evolved over time and continues to evolve every day.  Grammar is an artificial construct we have developed as a vehicle to describe language, but describing something doesn’t mean you understand it, or that it can be used to extract knowledge or understanding from what it attempts to describe.

Take the example from Cyc (http://www.cyc.com/cyc/technology/whatiscyc_dir/whatsincyc):
* Fred saw the plane flying over Zurich.
* Fred saw the mountains flying over Zurich.

Grammar itself will help develop a weighted tree of the sentences and you’ll be able to describe the scene – but the system will lack enough reference to be able to respond.  In such a situation what is the proper response?

To answer, we need a reference model – which luckily we have all around us every day: people.  What do people do when they encounter a phrase and don’t have enough information to process it?  They ask a question.  What question would they ask?  Who’s Fred?  What’s a plane?  What’s Zurich?  Or would they laugh out loud as they exclaim (and picture) the mountains flying? (in itself a valid hypothesis)
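
A sketch of that behaviour (the starting lexicon and the question heuristics are invented for illustration, not taken from Cyc): walk the phrase and, for every referent the system has no entry for, generate the question a person would ask.

```python
import re

# Invented starting knowledge: the system only knows what a plane is.
KNOWN = {"plane": "a machine that flies, like a car but in the air"}

def clarification_questions(phrase):
    """Generate the questions a person might ask about unknown referents."""
    questions = []
    words = re.findall(r"[A-Za-z]+", phrase)
    for i, word in enumerate(words):
        if word.lower() in KNOWN:
            continue
        if word[0].isupper() and word.lower() not in ("the", "a", "an"):
            # an unknown capitalised word is probably a name or a place
            questions.append(f"Who's {word}?" if i == 0 else f"What's {word}?")
        elif i > 0 and words[i - 1].lower() in ("the", "a", "an"):
            # an unknown noun following an article
            questions.append(f"What's a {word}?")
    return questions

print(clarification_questions("Fred saw the plane flying over Zurich"))
# -> ["Who's Fred?", "What's Zurich?"]  ("plane" is already known)
```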

Knowledge is obtained from the answer to the question, as it provides an addendum – a relationship between the phrase, the question and the answer.  Additionally, the question itself often gets corrected, providing a short-circuit feedback loop in the knowledge-acquisition process.  The description in the answer also provides information about the relationship of items in the phrase to other information stored within the system.

What’s Zurich?  Zurich is the name of a city in a country called Switzerland.

(assuming that there is some information about what a plane is, or that there is some relationship that interprets planes as machines, like a car)
What color is the planes? Planes are all shapes and colors but this plane is bright green.
(note in this example the question indicates the singular but uses the plural – which is corrected in the answer)

The question provides insight into the internal state of the system we are interacting with (be it a computer program, a child we’re reading a story to, or a colleague).  Inherent in any interaction is feedback, correction and the elucidation of terms and phrases to assist mutual understanding.  Often it happens subconsciously and tends to take the form of continuous correcting feedback (the same approach we use when we reach down to pick an object up off a surface).

A system needs to adapt and correct, to provide feedback (both to itself and to the other party it is interacting with) in a way that does more than just update state – it must also affect the very rules that make up the system itself.  This, however, is where many people start to go wrong.  A common pitfall is to consider the rules to be the weightings between nodes of information or their relationships.  That means the underlying reference system (often implemented as grammar rules) rarely changes – which in essence lobotomises the system.  It’s an indicator that you’ve put too much forward knowledge into the system.
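
As a sketch of the distinction (the “plural” rule and the exception mechanism are invented): feedback here replaces a rule outright rather than nudging a weight, so the reference system itself can change.

```python
class AdaptiveSystem:
    """Feedback can rewrite the rules themselves, not just re-weight the
    links between pieces of information."""

    def __init__(self):
        # rules are plain data the system can inspect and replace
        self.rules = {"plural": lambda noun: noun + "s"}

    def apply(self, rule_name, value):
        return self.rules[rule_name](value)

    def feedback(self, rule_name, example_in, expected_out):
        """If a correction contradicts a rule, replace the rule (here by
        wrapping the old rule in a crude learned exception table)."""
        old_rule = self.rules[rule_name]
        if old_rule(example_in) != expected_out:
            exceptions = {example_in: expected_out}
            self.rules[rule_name] = lambda v: exceptions.get(v, old_rule(v))

if __name__ == "__main__":
    system = AdaptiveSystem()
    print(system.apply("plural", "plane"))     # planes
    system.feedback("plural", "sheep", "sheep")
    print(system.apply("plural", "sheep"))     # sheep -- the rule itself changed
    print(system.apply("plural", "mountain"))  # mountains
```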

Take how children learn – not the mechanics but the approach, and not just for language or understanding (which is what we are trying to replicate when we implement the system) but for everything they do.  Nature, bless her cotton socks, is frugal with how she expends energy, so she reuses as much as possible (in essence cutting things down to their lowest common denominator).  You’ll see the same approach being used for walking, talking, breathing, looking at and following objects – in everything that we see, do or think.  Over time the system specialises domains of knowledge, further compartmentalising, but also reusing what has been learned and found to be valid in the domain.  Which in turn allows for further specialisation and compartmentalisation.

Thursday 18th June, 2009 at 12:10 pm 3 comments

General direction for the Virtual Machine for Frameworks (Symfony & Zend)

One of the pros in the Symfony Users Google group had some comments on the Virtual Machine for Symfony at Sipx.ws, and I wanted to share my thinking about my plans.

Generally when developing you should have an environment that represents the one you’ll be deploying to – having something as close as possible will save you time, effort and much pain.  There are, however, several scenarios for developers:

Targeted deployment

Ideally your environment matches the one you’ll be deploying to. If you control the server infrastructure then this is less of a problem – you’ll build the server yourself (ideally via an automated deployment process) and building a VM from it is trivial.

If you don’t control the server infrastructure, however, you have a more complex situation to deal with. If the gods are smiling, the hosts have built their server entirely from public distros and repos and used a package manager for all installs. If that’s the case you can dump the package list and server build – and rebase an image yourself. Often, however, they have a custom OS build (tweaked for whatever reason), local repositories (hopefully mirrored, but sometimes not) and a few extras thrown in. This makes building an image that represents the environment you’re going to use non-trivial, though not impossible.

ServerGrove (http://www.servergrove.com/), forward-thinking and proactive, are interested in providing an image to their customers that does just this – it allows people to develop locally in an environment that represents where the application will be deployed.

Trends

A growing trend among hosting providers is to let you upload your own image to the hosting environment, allowing you to build your own OS (subject, of course, to licensing requirements). One of the aims of the VM was to provide a way for devs to start locally and then upload a copy of the image to the hosting environment. With a few caveats (mostly around networking) you’re all but assured of a successful deployment, as you’ve been able to put the project through its paces before uploading.

Non-Targeted Deployment

In this scenario the developer is building applications for unspecified servers – either because they don’t have or haven’t yet selected the hosting environment, they don’t have complete information from the project sponsor, or for some other reason (it’s weird and wacky out there). Another case is Open Source projects, where the deployed application may end up on any OS – and yet you’d like a common, known environment for developers and end users.

In this situation the VM helps both the developer and the project sponsor, as it allows the dev to share the VM with the sponsor for testing and sign-off – essentially passing the monkey as far as the hosting environment is concerned.

General Approach (now and 1.x)

The current approach I’ve taken is mainly aimed at providing a shallow learning curve and a clean, repeatable environment for the community developing against Symfony and the Zend Framework (the Zend side is mostly a freebie, but is also aimed at helping people with Lucene search issues). With each build I test that all sf frameworks work by deploying a test application that covers ORMs, plugins, routing and the DB/httpd. I also try to ensure that each build is portable and therefore works with the major VM client vendors (currently VirtualBox, VMware and Xen). The aim of the 1.0 release is to have something built and packaged ready to run – much like the sf sandbox currently works.

While VMs have been around for a while – and while installing Linux has become more user-friendly – there are still a lot of areas where you can trip up when building images and installing OSs. One of the aims was to remove this as a blocker for devs who just want to get down to developing applications.

With the release of 1.0 there should be the following images and deployments available:

· Images

o devSFCoreServer

o devSFCoreIDE

· Deployments

o Standalone (everything in one box for simple dev projects)

o Load Balanced (built using devSFCore with configuration that puts the server into one of several modes: lb [load balancer], web [web server, memcached, no db], db [db, svn, no httpd but an http management interface])

· Project helpers

o Helpers to aid the start-up of projects and development: things like building the root development folder, linking to the version of the framework you wish to use, creating and configuring the DB, configuring the application to use the DB and running tests on the initial setup. Think a2ensite for creating a symfony application and you’ll get the picture. The intention isn’t so much to dumb things down as to streamline and to ease adoption by those not yet familiar with symfony. Each helper will also log the actual steps involved, to help devs understand what to do – a rough sketch of the idea follows below.
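
A rough sketch of the kind of helper meant here (the paths, folder layout and config format are hypothetical, not the actual scripts shipped with the VM): create the project skeleton, link the chosen framework version, create a database and log each step so the developer can see what was done.

```python
import os
import sqlite3

# Hypothetical values: a real helper would take these as arguments.
PROJECT_ROOT = "/tmp/myapp"
FRAMEWORK_PATH = "/opt/frameworks/symfony-1.4"   # the version the dev wants

def step(message):
    """Log each action so devs can see (and learn from) what was done."""
    print("[helper]", message)

def bootstrap():
    for sub in ("web", "config", "data", "lib/vendor"):
        os.makedirs(os.path.join(PROJECT_ROOT, sub), exist_ok=True)
    step(f"created project skeleton under {PROJECT_ROOT}")

    link = os.path.join(PROJECT_ROOT, "lib", "vendor", "symfony")
    if not os.path.islink(link):
        os.symlink(FRAMEWORK_PATH, link)
        step(f"linked framework {FRAMEWORK_PATH} -> {link}")

    db_path = os.path.join(PROJECT_ROOT, "data", "myapp.db")
    sqlite3.connect(db_path).close()
    step(f"created database {db_path}")

    # illustrative config only, not the exact symfony databases.yml format
    with open(os.path.join(PROJECT_ROOT, "config", "databases.yml"), "w") as cfg:
        cfg.write(f"all:\n  doctrine:\n    param:\n      dsn: sqlite:{db_path}\n")
    step("pointed the application configuration at the new database")

if __name__ == "__main__":
    bootstrap()
```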

With Deployments the general idea is that you’ll be able to run multiple images in different modes – to facilitate testing, architecture scenarios, etc. You can run one image as a DB, several as web servers and drop in a load balancer – and hey presto, you have a way to test how your application performs when scaling out.

With the 1.x branch I’m intending to take a much lighter approach – still with some base images for various distributions and deployments (there will be standard and live images, along the lines of the live CDs used by some distributions), but using some of the approaches you’ve outlined for providing the packages and for linking in with repositories. This approach, however, requires some infrastructure to support it – and infrastructure = time + resources, and resources = money.

This approach essentially extends the current sf sandbox into a deployed image mode. It’ll work out compatibilities, issues and fixes, deal with things like PEAR and PECL dependencies and PDO, and handle the deployments described above.

With 1.x come features for both devs and hosters (and it allows for targeted deployment). Hosters can build their base image, include the needed components and share it with their customers (the devs). Devs can download and use the image, and it will pull all the needed parts down. When they are ready to deploy, they can provision and deploy the application from within the VM – with the provisioning on the hosting-provider side building the image locally, deploying it and then accepting the deployment of the application.

Should the dev decide to move to another hosting provider supporting this model – as it’ll be built using the same components (though probably a different base OS) – it should be a simple process to download the new provider’s base image, deploy from the current VM to the new VM, test and redeploy.


Thursday 21st May, 2009 at 4:34 pm Leave a comment

Parcel Tracking

I constantly find myself returning to isnoop’s parcel geotracking site when I’m impatiently waiting for a delivery – the information is far better than anything you can find from the parcel delivery companies themselves.

I often wonder if user-contributions such as this end up making the companies lazier in their service offerings.

Tuesday 8th January, 2008 at 12:52 pm Leave a comment

Surface Computing

If you’ve not seen or heard about Microsoft Surface yet then you need to visit: http://www.microsoft.com/surface/

Prepare to be gob-smacked!

It’s slated to hit the high street at the end of this year (2007, for future readers) and the rumour mill puts the price somewhere in the region of 5,000-10,000 USD.

If it works anything like the demos suggest – and there’s no reason why it shouldn’t, given the functionality available in supporting devices (mobile phones, cameras, PDAs, etc.) – then it will change the way we interact, not only with computers but with each other.

Saturday 9th June, 2007 at 1:58 pm 1 comment
