We’ve been with Jazztel for a couple of years and generally they’ve been pretty decent but 6 months ago or so we noticed that the download speed wasn’t quite what it should be.
Our home setup is a little odd – the office is at the back of the building – almost as far away from the incoming ADSL connection as you can get. We have a Wireless N bridge between the 2 rooms – it’s a little temperamental but generally works. We have a plethora of wifi networks around us and it’s sometimes hard to find a clear channel without too much interference – so I’d assumed that the connection speed I was getting in the office was related.
More recently I’ve been able to do some proper testing of our setup and to my surprise I found that the local network was running fine. Not great, but easily getting transfers of 5-10MB/s.
So I picked up the phone and called the usually helpful tech support of Jazztel to get some help with getting the issue fixed.
ADSL depends a lot on the distance you are from the exchange – so reaching the 20MB service we were paying for was unlikely – but we were getting an average transfer speed of 110KB/s, consistent with a line well under 1MB, so something was up.
Jazztel tech support has changed a lot – often the case as a company grows – and it’s not changed for the better. The first person we managed to reach was polite, helpful and quickly found that our line speed had been capped at 1MB even though their system reports that we should be getting the 20MB service. Ok – cool – so now we know it’s not an ADSL technical issue – it’s config related. Unfortunately he didn’t know how to get the cap removed – no worries, he says – he’s passed it up the management food chain and we’ll get a call back the next day.
Days come & go without a call – so we call back. The experience this time however is worse – we reach an aggressive tech who tells us there is no problem and that what we have is the best we’ll get. When I explain that we’ve been told otherwise and that he needs to look more closely – he hangs up the phone. We end up talking to Customer Service and fall into the usual tech support/customer-service ping-pong.
Yes there’s an issue but we don’t know how to fix it – if you’d like it fixed then you need to call tech support. Tech Support then puts you through to Customer Service – Huh!?!?
Some of the tech support staff can see the issue; others can’t be bothered to look past the automated tele-script their call center gives them.
The cap on the line is plainly visible – and the cap is 5% of the service that’s being paid for. The actual bandwidth we’re getting is about 10% below even that – so even though they’re happy to take your cash for a 20MB service, they’re delivering under 5% of it.
Now it might just be a technical blunder – some config was written badly and it’s choked the line but the concern is that they don’t know how to fix it. Their system apparently doesn’t let them change the cap that’s been applied. This is worrying – as it means that management has prevented their techs from making the required change.
And if it’s a restriction put in by management – then in my mind it’s a management decision, it’s policy – which means deception & theft.
It’s a shame – they’re a young company that showed really great potential, delivering not only a decent service but also a great ethic in how they worked with customers. Originally they provided a decent service for a good price without shafting customer care, tech support or any of the other customer-oriented frills that make a good service.
ISPs often hide behind the ADSL “distance from exchange” statement to explain connection speeds – but this is the first time I’ve encountered masked, management-driven caps on a service beyond the usual Acceptable Usage Policies.
We’ve been beavering away working on our mobile client and where possible we’ve wrapped up parcels of code into modules for Titanium.
We’ve a few out and available now – most are ready for use – a few are still in beta.
The modules available for Android are:
Paypal (Mobile payments) – http://github.com/dasher/titanium_mobile/tree/integration-paypal
UrbanAirship (Push notifications) – http://github.com/dasher/titanium_mobile/tree/integration-airship
AdMob & Smaata (Mobile ads) – http://github.com/dasher/titanium_mobile/tree/integration-ads
Google Maps (overlays & Polygons) – http://github.com/dasher/titanium_mobile/tree/master-integration
A pull request has been submitted to Appcelerator for integration with Titanium – so with luck they’ll appear in mainline sometime soon.
Crucial Divide appoints Marina Zaliznyak
Need some direction about how to use technology to bring an idea to market?
Eliza brings a couple of things to the table that other systems don’t – mostly because it allows a way to quickly load some structure into a system, which then allows the running of test data against those structures. It’s often a way to short-circuit starting from zero knowledge (a new-born infant) and to boot-strap yourself to a 3 year old. A simple example is extracting sentences from a paragraph. It can be used as a pre-parser, a post-parser or as a way of rephrasing data. Rephrasing is a handy tool for testing validity.
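To make that concrete, here’s a minimal Eliza-style sketch in Python – the rules and patterns below are invented examples, not taken from any real Eliza implementation:

```python
import re

# A hypothetical, minimal Eliza-style rule set: each rule pairs a
# decomposition pattern with a reassembly template. Loading a handful
# of these boot-straps the system with structure it can test input
# against immediately.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
]

def split_sentences(paragraph):
    """Pre-parser step: extract sentences from a paragraph."""
    return [s.strip() for s in re.split(r"[.!?]+", paragraph) if s.strip()]

def rephrase(sentence):
    """Try each rule in turn; rephrasing the input doubles as a
    cheap validity test of the structure we loaded."""
    for pattern, template in RULES:
        match = pattern.match(sentence)
        if match:
            return template.format(*match.groups())
    return None  # no structure matched - stay silent

for sentence in split_sentences("I am tired. I need a holiday."):
    print(rephrase(sentence))
# prints:
# How long have you been tired?
# Why do you need a holiday?
```

The point isn’t the toy rules themselves but that a few of them give you structure to run test data against straight away.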
It provides a vehicle for asking questions, but also an approach to determining the relevance of information within the available context. I use the term available context because it’s often interesting to limit the information available to cognitive processes.
You often ask questions about statements you encounter: Who, What, Where, When, Why
You’ll also have an operational mode that you’ll switch between: operational modes help to define how a cognitive process should approach the problem.
In the human model – think along the lines of how your state of mind changes based on the situational aspects of the encounter. The context of the situation can be external, reflective or constructed.
External contexts are where we are expected to respond – maybe not to all input – but to some. Often these situations are where an action or consensus is required.
Reflective contexts are where information is absorbed and processed – generally to bring out understanding or knowledge but also when a pattern is reverse fit – not proving a fact but re-assimilating input so that it correlates.
Constructed contexts are the “what if” situations & problem solving. Similar to the reflective context but more about adjusting previous input to test its fitness to something new while attempting to maintain its validity against other knowledge.
You’ll often start in a reflective context as you assimilate information and then move into a constructed context to maximise knowledge domains. Then you’ll often edge into the external context – while running reflective contexts in the background. Periodically you’ll create constructed contexts to boot-strap knowledge domains and to learn from how knowledge domains are created (which in turn will tune how the reflective domains obtain information).
Essentially this is a lot of talk for saying that you don’t always need to provide an output.
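That “no output required” behaviour could be sketched like this – the mode names and mechanics are my own invention, a sketch rather than an implementation:

```python
from enum import Enum

class Mode(Enum):
    EXTERNAL = "a response (action or consensus) is expected"
    REFLECTIVE = "absorb and correlate input; no output required"
    CONSTRUCTED = "'what if' simulation and problem solving"

class CognitiveProcess:
    def __init__(self):
        self.mode = Mode.REFLECTIVE   # assimilation comes first
        self.background = []          # reflective work keeps running

    def enter(self, mode):
        # when edging into the external context, keep the current
        # reflective context running in the background
        if mode is Mode.EXTERNAL and self.mode is Mode.REFLECTIVE:
            self.background.append(self.mode)
        self.mode = mode

    def perceive(self, statement):
        # only the external mode is obliged to produce an output
        if self.mode is Mode.EXTERNAL:
            return "response to: " + statement
        return None   # reflective/constructed contexts stay silent

p = CognitiveProcess()
quiet = p.perceive("the sky is blue")      # None - reflective mode
p.enter(Mode.EXTERNAL)
answer = p.perceive("what colour is it?")  # now an output is produced
```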
Now I mentioned at the beginning that it’s often interesting to limit the information available to an available context – often it’s not only interesting but also important. The available context is the set of prior knowledge, together with the rules (or the approach) for applying relationships between that information and its surrounding knowledge.
If all knowledge is available to an available context and the same approach is used for processing that information – then it’s hard for a system to determine the relevance or importance of the facts to extract from data. In essence the system can’t see the wood for the trees.
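A toy illustration of why limiting the available context matters – the knowledge store and scoring below are invented for the example:

```python
# A hypothetical flat knowledge store: (fact, domain) pairs.
KNOWLEDGE = [
    ("planes fly", "transport"),
    ("birds fly", "nature"),
    ("time flies", "idiom"),
    ("Zurich is a city", "geography"),
]

def relevant(query_words, domains=None):
    """Score facts by word overlap, optionally limited to some domains.
    Limiting the available context prunes spurious matches."""
    hits = []
    for fact, domain in KNOWLEDGE:
        if domains and domain not in domains:
            continue
        overlap = len(set(fact.split()) & set(query_words))
        if overlap:
            hits.append((overlap, fact))
    return [fact for _, fact in sorted(hits, reverse=True)]

# unrestricted, the system matches across every domain at once;
# restricting the context narrows it to what actually matters
everything = relevant(["planes", "fly"])
narrowed = relevant(["planes", "fly"], domains={"transport"})
```

With the full store the query drags in facts from unrelated domains; restrict the context to one domain and only the relevant fact survives.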
Think about how you tackle a problem you encounter – you start with one approach based on your experience (so you’re selecting and limiting the tools you’re going to apply to deal with the situation) and based on how the interaction with the situation goes – you’ll adjust. Sometimes you’ll find that you adjust to something very basic (keep it simple stupid or one step at a time) – at others you’ll employ more complex toolsets.
The Eliza approach can be used not just as a processing engine – but also as a way of allowing cognitive systems to switch or activate the contexts I mentioned earlier. It’s also a handy pre-parser for input into SOAR.
One of the reasons for these recent posts is a visit to zbr’s site and reading about his interest in NLP and cognition. I stumbled over his site when looking to understand more about POHMELFS, Elliptics and his DST implementation. I’ve been looking for a parallel, distributed storage mechanism that is fast and supports a decent approach to versioning for an NLP & MT approach. Distribution and parallelism are required as I implement a virtualised agent approach which allows me to run modified instances of knowledge domains and/or rules to create dynamic contexts. Versioning is important as it allows working with information from earlier time periods, replaying the formation of rules and assumptions, and greatly helps to roll back processing should the current decision tree appear fruitless. In human cognitive terms these act as subconscious processing domains.
There are several underlying problems with cognition which are different from what most expect.
The primary issue is perception: too much emphasis is attributed to the human senses (primarily sight and sound) – which as I’ve mentioned before – are just inputs. As you’ll know from physics – you’ll often see simple patterns repeated in many different fields – it’s unlikely that cognitive processes will be any different when dealing with sound/sight and thought.
The next issue is that many fall foul of attempting to describe the system in terms they can understand – a natural approach, but essentially it boils down to pushing grammar parsers and hand lexers with too much forward weighting to identify external grammar (essentially pre-weighting the lexers with formal grammar). It’s an approach that can produce interesting results but isn’t cognition, and it fails as an end game for achieving it. Essentially this is the approach used in current machine translation processes in its various forms.
The key fundamental issue is much simpler and related to issues around: pattern, reduction & relationship. An area that had some activity a while ago in various forms (cellular networks, etc) but fell to the wayside generally due to poor conceptual reference frameworks and the over-emphasis on modelling approaches used in nature (neural networks).
Now comes the time for definitions – a vehicle to ensure we’re on the same page.
Cognitive processes thrive on patterns – they’re one of the main drivers behind how a process perceives, processes and responds to information. There’s a constant search to find similarities between what is perceived and what is known. It’s a fuzzy matching system that is rewarded – in the sense that it promotes change or adaptation – as much by differences as by finding similarities. When thinking about similarities, a handy frame is to think about something being true or false. Don’t confuse true/false with the general definitions of the terms – it’s more about a sense of confidence. If something has a high confidence of being valid then it is true. The threshold of confidence is something that evolves and adapts within the cognition over time (essentially as a result of experience).
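The confidence-as-truth idea might look something like this in code – the starting threshold and the adaptation rule are arbitrary placeholders, not anything principled:

```python
class Belief:
    """'True' here means confidence above an evolving threshold,
    not logical truth. All the numbers are arbitrary placeholders."""
    def __init__(self, threshold=0.7):
        self.threshold = threshold

    def is_true(self, confidence):
        return confidence >= self.threshold

    def adapt(self, surprise):
        # differences reward the system as much as similarities:
        # a big surprise (near 1.0) loosens the threshold so the
        # system explores; confirmation (near 0.0) tightens it
        self.threshold = min(0.99, max(0.01,
                             self.threshold + (0.5 - surprise) * 0.1))

b = Belief()
b.is_true(0.8)   # true against the starting threshold
b.adapt(1.0)     # a big surprise: the threshold loosens
```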
The development of patterns is both external (due to an external perception or input) and internal. To avoid turning this comment into something massive (and boring you) – think along the lines of the human cognitive process and the subconscious or dreams.
Reduction happens at several key stages – essentially it’s when a domain of experience breaches a threshold. It’s a way of reducing the processing required to a more automatic response. Think along the lines of short-circuit expressions. It’s a fundamental part of the cognitive process. From a human cognitive perspective you have probably seen it in your climbing and in your learning of the trumpet. We often express it as “having the knack” or “getting the hang” of something.
It’s important for 2 reasons: a) it means it has gained knowledge about a domain; b) it allows the cognitive process to further explore a domain. While Reduction is a desirable end-game – it is not The End from a cognitive process perspective. The meta information for this node of Reduction combines again and again with Pattern and Relationship allowing the process to reuse both the knowledge itself but more importantly the lessons learned when achieving reduction.
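The short-circuit flavour of Reduction could be sketched as a learned fast path that kicks in once a domain crosses an experience threshold – the threshold and the “expensive processing” stand-in are arbitrary:

```python
class Domain:
    """Tracks experience in one domain; past a threshold, responses
    are served from a cached fast path instead of full processing."""
    THRESHOLD = 3  # arbitrary: 'getting the hang' after 3 exposures

    def __init__(self):
        self.exposures = {}
        self.cache = {}

    def full_process(self, stimulus):
        # stand-in for expensive, deliberate processing
        return stimulus.upper()

    def respond(self, stimulus):
        seen = self.exposures.get(stimulus, 0) + 1
        self.exposures[stimulus] = seen
        if stimulus in self.cache:
            return self.cache[stimulus]    # reduced: automatic response
        result = self.full_process(stimulus)
        if seen >= self.THRESHOLD:         # the threshold is breached
            self.cache[stimulus] = result  # short-circuit from now on
        return result

d = Domain()
for _ in range(4):
    d.respond("grip")   # third exposure caches; fourth is automatic
```

The cache entry is the “knack” – the same answer, but without the deliberate work that produced it the first few times.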
Relationship is really a meta process for drawing together apparently unrelated information into something cohesive that’s likely either to help with identifying patterns or to bring about Reduction. Relationship at first looks very similar to Pattern, but differs in its ability to ask itself “what if” and in being able to adjust things (facts, perception, knowledge, Pattern, Reduction and versions of these [versions are actually quite important]) to suit the avenue that is being explored. When expressed in human cognitive terms, think of Relationship as the subconscious, dreams or the unfolding of events in thought. The unfolding of events is an example of versions. Essentially Relationship is a simulation that allows the testing of something.
I stumbled over an interesting post on another site (http://www.ioremap.net/node/283) by zbr, a very bright guy, which prompted a long comment from me. I wanted to repost it here to expand upon later.
NLP based on a grammatical rules engine, while an interesting toy, is essentially a dead-end when it comes to developing an approach to cognition. Language is a complex system that has evolved over time and continues to evolve each and every day. Grammar is an artificial construct that we have developed as a vehicle to describe language but describing something doesn’t mean you understand it or that it can be used to extract knowledge or understanding from what it attempts to describe.
Take the example from Cyc (http://www.cyc.com/cyc/technology/whatiscyc_dir/whatsincyc):
* Fred saw the plane flying over Zurich.
* Fred saw the mountains flying over Zurich.
Grammar itself will help develop a weighted tree of the sentences and you’ll be able to describe the scene – but the system will lack enough reference to be able to respond. In such a situation what is the proper response?
To answer we need a reference model – which luckily we have all around us every day – people. What do people do when they encounter a phrase and don’t have enough information to process it? They ask a question. What question would they ask? Who’s Fred? What’s a plane? What’s Zurich? Or would they laugh out loud as they exclaim (and picture) the mountains flying? (in itself a valid hypothesis)
Knowledge is obtained from the answer to the question – as it provides an addendum – a relationship between the phrase, the question and the answer. Additionally the question itself often gets corrected – providing a short-circuit feedback loop to the knowledge acquisition process. The description in the answer also provides information about the relationship of items in the phrase to other information stored within the system.
What’s Zurich? Zurich is the name of a city in a country called Switzerland.
(assuming that there is some information about what a plane is or that there is some relationship that interprets planes as machines, like a car)
What color is the planes? Planes are all shapes and colors but this plane is bright green.
(note in this example the question indicates the singular but uses the plural – which is corrected in the answer)
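The question/answer loop above might be sketched like this – the seed knowledge and helper names are hypothetical, chosen just to mirror the Zurich example:

```python
class KnowledgeBase:
    def __init__(self, seed):
        self.known = dict(seed)   # term -> what we believe it means
        self.relations = []       # (phrase, question, answer) triples

    def unknown_terms(self, phrase):
        """Terms we have no reference for - each one triggers a question."""
        return [w for w in phrase.lower().rstrip(".").split()
                if w not in self.known]

    def learn(self, phrase, term, answer):
        # the answer is an addendum: the (phrase, question, answer)
        # triple itself is stored as a relationship
        question = "What's " + term + "?"
        self.known[term] = answer
        self.relations.append((phrase, question, answer))

kb = KnowledgeBase({"fred": "a person", "saw": "past tense of 'see'",
                    "the": "article", "plane": "a flying machine",
                    "flying": "moving through the air", "over": "above"})

phrase = "Fred saw the plane flying over Zurich."
for term in kb.unknown_terms(phrase):   # only "zurich" is unknown
    kb.learn(phrase, term,
             "Zurich is the name of a city in Switzerland.")
```

After one pass the unknown term has become knowledge, and the question that produced it is kept as part of the relationship.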
The question provides insight into the internal state of the system we are interacting with (be it a computer program, a child we’re reading a story to or a colleague we are interacting with). Inherent in any interaction is feedback, correction, elucidation of terms and phrases to assist understanding with those we are interacting with. Often it happens in a subconscious way and tends to be in the style of continuous correcting feedback (the same approach we use when we reach down to pick up an object off of a surface).
A system needs to adapt & correct, to provide feedback (both to itself and with the other party it is interacting with) in a way that’s more than just updating state – but that also affects the very rules that make up the system itself. This, however, is where many people tend to start going wrong. A common pitfall is that the rules are considered to be the weightings between nodes of information or its relationships. This however means that the underlying reference system (often implemented as grammar rules) rarely changes – which in essence lobotomizes the system. It’s an indicator that you’ve put too much forward knowledge into the system.
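The contrast between weight-only updates and rules that can rewrite themselves might be caricatured like this – the classes and the grammar strings are entirely invented for illustration:

```python
class WeightOnlyLearner:
    """The pitfall: the grammar rules are frozen at build time and
    only the weights between them ever change."""
    def __init__(self, rules):
        self.rules = list(rules)           # never rewritten
        self.weights = {r: 1.0 for r in self.rules}

    def feedback(self, rule, error):
        self.weights[rule] -= 0.1 * error  # state updates, rules don't

class SelfModifyingLearner(WeightOnlyLearner):
    """Feedback can also rewrite a rule outright, not just reweight it."""
    def feedback(self, rule, error, replacement=None):
        super().feedback(rule, error)
        if error > 0.9 and replacement is not None:
            # the underlying reference system itself changes
            del self.weights[rule]
            self.rules[self.rules.index(rule)] = replacement
            self.weights[replacement] = 1.0

learner = SelfModifyingLearner(["S -> NP VP"])
learner.feedback("S -> NP VP", error=1.0, replacement="S -> NP VP PP")
```

In the first class the reference system is effectively lobotomized; in the second, strong feedback can replace a rule rather than merely nudge its weight.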
Take how children learn – not the mechanics but the approach that’s used – and not just for language or understanding (which is what we are trying to replicate when we implement the system) but with everything they do. Nature, bless her cotton socks, is frugal with how she expends energy – so she reuses as much as possible (in essence cutting things down to their most common denominator). You’ll see the same approach being used for walking, talking, breathing, looking and following objects – in everything that we see, do or think. Over time the system specializes domains of knowledge – further compartmentalizing – but also reusing that which has been learned and found to be valid in the domain. Which in turn allows for further specialization and compartmentalization.