Posts filed under ‘Internet’
We’ve been with Jazztel for a couple of years and generally they’ve been pretty decent but 6 months ago or so we noticed that the download speed wasn’t quite what it should be.
Our home setup is a little odd – the office is at the back of the building – almost as far away from the incoming ADSL connection as you can get. We have a Wireless N bridge between the 2 rooms – it’s a little temperamental but generally works. We have a plethora of wifi networks around us and it’s sometimes hard to find a clear channel without too much interference – so I’d assumed that the connection speed I was getting in the office was related.
More recently I’ve been able to do some proper testing of our setup and to my surprise I found that the local network was running fine. Not great, but easily managing transfers of 5–10 MB/s.
So I picked up the phone and called the usually helpful tech support of Jazztel to get some help with getting the issue fixed.
ADSL depends a lot on the distance you are from the exchange – so reaching the full 20Mb service we were paying for was unlikely – but we were getting an average transfer speed of 110 KB/s, well under 1Mb/s, so something was up.
Jazztel tech support has changed a lot – often the case as a company grows – and it’s not changed for the better. The first person we managed to reach was polite, helpful and quickly found that our line speed had been capped at 1Mb even though their system reports that we should be getting the 20Mb service. Ok – cool – so now we know it’s not an ADSL technical issue – it’s config related. Unfortunately he didn’t know how to get the cap removed – no worries, he says – he’s passed it up the management food chain and we’ll get a call back the next day.
Days come & go without a call – so we call back. The experience this time is worse – we reach an aggressive tech who tells us there is no problem and that what we have is the best we’ll get. When I explain that we’ve been told otherwise and that he needs to look more closely, he hangs up the phone. We end up talking to Customer Service instead, and the usual tech-support/customer-service ping-pong begins.
Yes, there’s an issue, but we don’t know how to fix it – if you’d like it fixed then you need to call tech support. Tech support then puts you through to Customer Service – huh!?!?
Some of the tech support can see the issue, others can’t be bothered enough to look past the automated tele-script their call center has to help.
The fact that there’s a cap on the line can be seen – and the cap is 5% of the service that’s being paid for. The actual bandwidth we’re getting is about 10% less than even that cap – so even though they’re happy to take your cash for a 20Mb service, they’re delivering under 5% of it.
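For anyone wanting to check the arithmetic, the figures quoted in this post work out as follows (a quick sketch; it assumes 1 byte = 8 bits and decimal units, as ISPs usually advertise):

```python
# Check the bandwidth figures quoted above (decimal units, 1 B = 8 bits).
advertised_mbps = 20.0   # the 20Mb service being paid for
cap_mbps = 1.0           # the cap tech support found on the line
measured_kBps = 110.0    # the average transfer speed we observed

measured_mbps = measured_kBps * 8 / 1000  # KB/s -> Mb/s

print(f"cap as % of advertised:      {cap_mbps / advertised_mbps:.1%}")
print(f"measured as % of advertised: {measured_mbps / advertised_mbps:.1%}")
print(f"measured as % of the cap:    {measured_mbps / cap_mbps:.1%}")
```

110 KB/s comes out at 0.88 Mb/s – under 5% of the advertised service, and roughly 10% short of even the 1Mb cap.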
Now it might just be a technical blunder – some config was written badly and it’s choked the line but the concern is that they don’t know how to fix it. Their system apparently doesn’t let them change the cap that’s been applied. This is worrying – as it means that management has prevented their techs from making the required change.
And if it’s a restriction put in by management – then in my mind it’s not a blunder, it’s policy – which means deception & theft.
It’s a shame – they’re a young company that showed real potential, delivering not only a decent service but also a great ethic in how they worked with customers. Originally they provided a decent service for a good price without skimping on customer care, tech support or all of the other customer-oriented frills that make a good service.
ISPs often hide behind the ADSL “distance from exchange” statement to explain connection speeds – but this is the first time I’ve encountered masked, management-driven caps on the service beyond the usual Acceptable Usage Policies.
One of the pros in the Symfony Users Google group had some comments on the Virtual Machine for Symfony at Sipx.ws, and I wanted to share my thinking about my plans.
Generally when developing you should have an environment that matches the one you’ll be deploying to – having something as close as possible will save you time, effort and much pain. There are, however, several scenarios for developers:
Ideally your environment matches the one you’ll be deploying to. If you control the server infrastructure then this is less of a problem – you’ll build the server yourself (ideally via an automated deployment process) and building a VM from it is trivial.
If, however, you don’t control the server infrastructure then you have a more complex situation to deal with. If the gods are smiling then they’ve built their server completely from public distros and repos and used a package manager for all installs. If this is the case you can dump the package list and server build – and rebase an image yourself. Often, however, they have a custom OS build (tweaked for whatever reason), local repositories (hopefully mirrored, but sometimes not) and a few extras thrown in. This makes building an image that represents the environment you’re going to use, while not impossible, generally non-trivial.
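The “dump the package list and rebase” step above can be sketched in a few lines. This is a pure-Python illustration of comparing a server’s dumped selection list (in the tab-separated style produced by Debian’s `dpkg --get-selections`) against a local one; the package names are made up for the example:

```python
# Sketch: diff a server's dumped package list against a local image's
# list, to see what a rebased image would still need. Package names
# below are invented for illustration.

def parse_selections(dump: str) -> set:
    """Parse 'name<TAB>state' lines, keeping packages marked 'install'."""
    packages = set()
    for line in dump.strip().splitlines():
        name, _, state = line.partition("\t")
        if state.strip() == "install":
            packages.add(name.strip())
    return packages

server_dump = "apache2\tinstall\nphp5\tinstall\nmysql-server\tinstall\n"
local_dump = "apache2\tinstall\nphp5\tinstall\n"

missing = parse_selections(server_dump) - parse_selections(local_dump)
print(sorted(missing))  # -> ['mysql-server']
```

In practice you’d feed this the real dump from the hosting provider’s box, then install the missing set with the distro’s package manager.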
ServerGrove (http://www.servergrove.com/), forward-thinking & proactive, are interested in providing an image to their customers that does just this – allows people to develop locally in an environment that represents where the application will be deployed.
A growing trend with hosting providers is to allow you to upload your own image to the hosting environment, letting you build your own OS (subject of course to licensing requirements). One of the aims of the VM was to provide a way for devs to start locally and then upload a copy of the image to the hosting environment. With a few caveats (mostly around networking) you’re assured of success for the deployed project, as you’ve been able to put it through its paces before uploading.
In this scenario the developer is building applications for unspecified servers – either because they don’t have or haven’t selected the hosting environment yet, they don’t have complete information from the project sponsor – or some other reason (it’s weird and wacky out there). Another possible deployment is Open Source projects, where the deployed application may run on any OS – and yet you’d like to have a common “known” environment for developers and end-users.
In this situation the VM helps both the developer and the project sponsor – as it’ll allow the dev to share the VM with the sponsor for testing and signoff. Essentially passing the monkey wrt the hosting environment.
General Approach (now and 1.x)
The current approach I’ve taken is mainly aimed at providing a lean learning curve and a clean & repeatable environment to the community developing against Symfony and the Zend Framework (the Zend side is mostly a freebie but is also aimed at helping people with Lucene search issues). With each build I test to ensure that all sf frameworks work by deploying a test application that covers ORMs, plugins, routing and the DB/httpd. With the build I try to ensure that it’s portable and therefore works against the major VM client vendors (VirtualBox, VMWare and Xen currently). The aim of the 1.0 release is to have something built and packaged ready to run – much like the sf sandbox currently works.
While VMs have been around for a while – and while installing Linux has become more user friendly – there are still a lot of areas where you can trip up building images and installing OSs. One of the aims was to remove this as a blocker for devs wanting to just get down to developing applications.
With the release of 1.0 there should be the following images and deployments available:
- Stand alone (everything in one box for simple dev projects)
- Load Balanced (built using devSFCore with configuration that puts the server into modes: lb [load balanced], web [web server, memcached & no db], db [db, svn, no httpd but a http management interface])
- Project helpers
  - Helpers to aid start-up of projects and development: things like building the root development folder, linking to the version of the framework you wish to use, creating and configuring the DB, configuring the application to use the DB and running tests on the initial setup. Think a2ensite for creating a symfony application and you’ll get the picture. The intention isn’t so much to dumb down – but to streamline and to facilitate adoption by those not that familiar with symfony. Also included will be logging of the actual steps involved, to help devs understand what to do.
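A project helper of the kind described might look like the following sketch (pure Python; the paths, file names and config layout are hypothetical, and it assumes a Unix-like system for the symlink): create the project root, link the chosen framework version, write a DB config, and log each step.

```python
# Sketch of a project start-up helper of the kind described above.
# Paths, names and config layout are hypothetical.
import os
import tempfile

def create_project(root, framework_dir, db_name, log):
    """Build the project skeleton, link the framework, write a DB
    config, and record each step so devs can see what was done."""
    os.makedirs(os.path.join(root, "lib"), exist_ok=True)
    link = os.path.join(root, "lib", "symfony")
    os.symlink(framework_dir, link)
    log.append(f"linked {framework_dir} -> {link}")

    config_dir = os.path.join(root, "config")
    os.makedirs(config_dir, exist_ok=True)
    with open(os.path.join(config_dir, "databases.yml"), "w") as f:
        f.write(f"all:\n  doctrine:\n    param:\n      dbname: {db_name}\n")
    log.append(f"wrote config/databases.yml for db '{db_name}'")

# Demo in a throwaway directory.
with tempfile.TemporaryDirectory() as tmp:
    framework = os.path.join(tmp, "symfony-1.4")
    os.makedirs(framework)
    steps = []
    create_project(os.path.join(tmp, "myproject"), framework, "myproject_db", steps)
    for step in steps:
        print(step)
```

The `steps` log is the “log creation of the actual steps involved” idea: the helper both does the work and teaches you what it did.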
With Deployments the general idea is that you’ll be able to run multiple images in modes – to facilitate testing, architecture scenarios, etc. With this you run one image as a DB, several as web servers and drop in a load balancer – and hey-presto you have a way to test how your application performs when scaling out.
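The “images in modes” idea above could be driven by a small topology description that gets expanded into per-instance mode assignments – a minimal sketch, with made-up names:

```python
# Sketch: expand a desired topology into per-instance mode assignments,
# mirroring the lb / web / db modes described above. Names are made up.
topology = {"lb": 1, "web": 3, "db": 1}

instances = [
    f"{mode}{n}"
    for mode, count in topology.items()
    for n in range(1, count + 1)
]
print(instances)  # -> ['lb1', 'web1', 'web2', 'web3', 'db1']
```

Scaling the web tier out for a test is then just a matter of bumping the `web` count and booting the extra images.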
With the 1.x branch I’m intending to go with a much lighter approach – still with some base images for various distributions and deployments (there will be standard and live images along the same approach as the live-cd used with some distributions) but using some of the approaches you’ve outlined for providing the packages and for linking in with repositories. This approach however requires some infrastructure to support it – and infrastructure = time + resources and resources = money.
This approach essentially extends the current sf sandbox to a deployed image mode. It’ll work out compatibilities, issues and fixes, deal with things like pear and pecl dependencies, PDO and handle the deployments you’ll see above.
With 1.x come features for both devs and hosters (and it allows for targeted deployment). Hosters can build their base image, include the needed components and share it with their customers (the devs). Devs can download and use the image – and it’ll pull all the needed parts down. When they are ready to deploy, they can provision and deploy the application from within the VM, with the provisioning on the hosting-provider side building the image locally, deploying it and then accepting the deployment of the application.
Should the dev decide to move to another hosting provider supporting this model – as it’ll be built using the same components (but probably a different base OS) – it should be a simple process to download the new provider’s base image, deploy from the current VM to the new VM, test and redeploy.
I constantly find myself returning to isnoop’s parcel geotracking site when I’m impatiently waiting for a delivery – the information is far better than anything you can find from the parcel delivery companies themselves.
I often wonder if user-contributions such as this end up making the companies lazier in their service offerings.
Part of a recent project for a client involved selecting and deploying a PHP/MySQL application to a hosting provider.
Each time this requirement comes up – I spend a few hours looking around to get a feel for what’s on offer, the current prices, etc. This time I stumbled across HostMonster.Com and they have quite an impressive package for PHP based applications.
Not to mention decent bandwidth allocations.
Setting up the account for the client was quick and easy – and so far I’ve not encountered any problems with them. Before signing up I dropped an email to their support team asking a couple of questions – to see if/how they responded. 20 mins later a reply turned up.
If you’re looking for a decent hosting provider that’s not gonna cost you an arm and a leg – I’d suggest looking them up.
Google’s webmaster tools have a handy facility to provide some detailed information about web sites and blogs that you run. To get it working you need to verify your site using either a HTML file or modifying the meta tags – not a simple thing to achieve with a hosted blog. However, tigredefogo has a handy and effective tip here!
Comment & blog spam is an annoying phenomenon that tries to exploit search engines to improve rankings, driving more traffic to the spammer’s website to generate more revenue – either directly through PPC ads – or indirectly by driving up the value of the domain name.
It’s not just blogs that are suffering – wiki pollution is a growing problem with poorly secured or badly implemented wikis. SpamHuntress wrote recently about a massive wiki spam issue on one of the sites she manages. It’s a tough nut to crack – there isn’t a clear definition or delineation of who is responsible for what. Is it the responsibility of the site owner to make sure their site is secure? Some would say so… but when you try to operate a large community effort (such as managing or maintaining a wiki where you want to promote community participation), implementing extra controls (such as user authentication & validation) dissuades people from participating.
SpamHuntress has a policy of contacting the network manager of the domain the spammers direct traffic to – which is a good policy – but is it their responsibility to do something about it?
I can’t help thinking that the true responsibility should come back to Arpa and DNS – and there should be some tie-in with the DNS environment.
The specific problems with managing spam for users of WordPress.com are exacerbated by the comment management interface – which doesn’t allow grouping of messages by the sender’s email, IP address or web address. Instead you have an endless list to step through when trying to pick out a comment or trackback.
It would be helpful in weeding out the good from the bad if you could select from a list of grouping options.
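The kind of grouping asked for above is straightforward to express – a small sketch in Python, with made-up comment data, grouping by any sender attribute:

```python
# Sketch: group blog comments by a sender attribute (email, IP, URL) so
# they can be reviewed or deleted in bulk. The data here is made up.
from collections import defaultdict

comments = [
    {"id": 1, "email": "spam@example.com", "ip": "10.0.0.1"},
    {"id": 2, "email": "real@example.org", "ip": "10.0.0.2"},
    {"id": 3, "email": "spam@example.com", "ip": "10.0.0.3"},
]

def group_by(comments, key):
    """Map each distinct value of `key` to the list of comment ids."""
    groups = defaultdict(list)
    for comment in comments:
        groups[comment[key]].append(comment["id"])
    return dict(groups)

print(group_by(comments, "email"))
# -> {'spam@example.com': [1, 3], 'real@example.org': [2]}
```

With something like this behind the interface, one click could select every comment sharing a spammer’s email or IP instead of stepping through the endless list.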