Guardian Anywhere

September 14, 2009

We’re launching a little side-project today. It’s an app for Android phones that lets you read The Guardian, anywhere. We’ve imaginatively called it “Guardian Anywhere”.

Guardian Anywhere: article view

It’s designed to meet a simple use case: you want the news on your phone to read on your commute into work. You can tell the app to download your newspaper whenever you like, perhaps early in the morning before you wake up. You can also tell it to only grab the news over Wi-fi; there’s a surprising amount of stuff in the Guardian each day, and we don’t want you running your battery down grabbing it all over 3G.

Once it’s downloaded, the content lives on your phone – perfect if you’re travelling on the tube or through some of the more radio-challenged parts of the countryside (like, coincidentally, the Brighton-to-London line). As well as news stories, you get a pile of photographic imagery which the Guardian publishes every day. We’ve also tried to retain a little bit of the Guardian look and feel throughout, though this isn’t an official Guardian product.

If you play with it a while, you’ll discover a few other nifty features tucked away, like:

  • Saving some of the fine photography to your phone as wallpapers;
  • Choosing which sections of the paper to subscribe to (or which to avoid);
  • Clipping articles or photos into a “Saved Articles” folder, to help you skim for interesting stuff and then peruse it at your leisure;
  • Browsing through articles by section, author or tag. This makes it easy to find, say, all the articles in today’s paper relating to executive pay and bonuses; and if you’re really interested in a tag (or author), you can subscribe to it.

Guardian Anywhere is the brainchild of James Hugman, who kicked the project off during his gold card time and has been demonstrating it to us ever since at our fortnightly reviews. We’ve been testing and revising the app internally, and launched it onto the Android Market after putting it in for the Android Developer Challenge.

All feedback is, as always, warmly welcome – please do try the app out and leave us some comments in the Market. The little 2D barcode you can see to the right of this paragraph is quite handy for getting hold of the app. Scan it with the barcode scanner on your Android phone to be taken to the right place in the Android Market, click the link if you’re reading this with an Android phone, or just search the Market for “Guardian”.

Two-team retrospective

March 25, 2009

Today was our first review and planning day with the new two-team Future Platforms, and to give ourselves a bit more space to stretch out, we tried holding the morning session at The Skiff – a coworking space run by the Inuda folks a couple of streets away from our offices.

Retrospective

The facilities there were excellent. As well as more space, we had walls to stick post-its onto (with lots of visibility, room to walk around, etc.), a projector, and beanbags galore. All of this – plus being in an unusual space away from the usual workplace – made a big difference.

The review went well, with our two teams (Anjuna and Tonberry) demonstrating projects they’ve been working on for Microsoft and the BBC respectively. Compared to the limited visibility you get by passing a couple of handsets around a room, having five-foot-tall on-screen demos felt like heaven… and the demonstrations themselves seemed to “flow” a little more than in previous reviews. If only I could talk a little more about their content; as usual, NDAs have my lips sewn shut. Unusually, our design team weren’t presenting their work this time around – much of it has involved supporting the development teams (and was therefore visible in the dev team demos), and South by Southwest had taken its toll on our design resource over the last fortnight…

I was really looking forward to the retrospective, and wasn’t disappointed. The near-unanimous view from both teams was that splitting the company into two had been a success. Each team felt more focussed on their respective projects, had been less affected by context switching, and found stand-ups both quicker and more useful. Even better, we could see a measurable and significant improvement in the company’s productivity now that it was split into two teams (versus everyone being in one team).

One issue raised by four different people was that of planning and testing time. We persistently find that testing effort bunches up towards the end of sprints – so that at the start of each fortnight there’s sometimes not enough for our dedicated testers to do, and at the end too much.

Sprint 34 burndown (Anjuna)

My own tendency is to attack this problem by trying to spread the load of testing more evenly throughout the sprint (encouraging units of work – user stories in our case – to be taken through development and testing to done before starting on others) and having other team members help out with some of the testing work when there’s too much (whilst some of it demands the eager eye and magic software-breaking fingers of a good QA specialist, I feel there are areas where the rest of us can usefully pitch in). We had a fairly lively debate on this one, and didn’t reach a useful conclusion. In particular I had hoped to bring in some of the limiting-in-phases techniques Karl Scotland covered in his recent Kanban presentation (also held at the Skiff), but we didn’t reach consensus on adopting this – so I’ll give it another go next time around.

Burndown charts also stimulated a bit of conversation. I’d experienced a worrying few days with Anjuna mid-sprint, when progress on getting stories to completion was minimal even as the team frantically worked on a core section of the product… and I’d found myself noting a problem but stuck on what to usefully do about it. In the end the team pulled through and delivered, but I’m left feeling slightly worried; we’ve had sprints in the (thankfully distant) past where a flat-lined burndown continued right to the end, thanks to my ignoring the signs. Putting a positive spin on it, the worst that we’ll ever do is experience a single unproductive fortnight, but even that would feel like a huge opportunity wasted. More thought required.

Finally, on design: last sprint we opted out of planning it formally; this time around we’re officially embedding the design team with Anjuna, where they can crack on with pushing that particular product forward together (and without distractions from other projects).

Dr Evil Retrospects

A solid day; we saw the juicy fruits of our work, had some productive discussions on how to get better, persuaded a sizable chunk of the company to come out for gyoza etc, planned in two teams’ worth of work for a fortnight, and still managed to finish dead on 6pm. Considering that a year ago, with a smaller team, planning days were regularly overrunning and leaving us all half-dead, it’s great to see how we’re improving. On that particular point, it’s amazing what benefit an hour or so of working up a sprint backlog for each team the day before planning can deliver…

Thanks as usual to Joh for her facilitation of the retrospective, and to the Skiffers for hosting us!

Splitting Teams

March 15, 2009

It’s a while since I last wrote about how we’re getting on with Scrum, and we’ve had an eventful few weeks… so here goes.

We’re doing This Thing, right, and I can’t tell you anything about it at all, except that 9 of the Future Platforms team flew into Hong Kong a few weeks back and spent a fortnight working on-site for Microsoft in China, to kick off the project we’re doing with them…

FP/MS Teams, End of Sprint Review

It’s been quite an experience in many ways: travelling abroad as a team, working across cultural and linguistic divisions, and fitting our processes in with those of the world’s largest software company. We wanted to colocate ourselves for all the classic reasons: tackle risky elements of the project together, put faces to names, and build relationships which can weather the inevitable highs and lows of a collaborative project. And of course, we’re running things in the way we always do – I hope I’ll be able to write more about this in time; I think we’re going to be learning a *lot*.

For those of us lucky enough to go, it was definitely an adventure: hard work, thanks in the main to the sheer dedication of the FP team who were heads down for a full working day and frequently carrying on back at the hotel… but we had a lot of fun, too – as our various Flickr feeds will attest – and our hosts took great pains to make us feel welcome.

We landed back at Heathrow at 4:30am on Monday, and have been collectively fighting jetlag ever since – at the same time reuniting with the crew who remained in the UK whilst we travelled abroad, and our newest hire, James Hugman – who I’m maxi-chuffed to have working with us.

One thing we’ve been conscious of in recent months is that our team size is starting to become a bit cumbersome. Stand-up meetings feel less relevant when there are 10 people working on a few projects, retrospectives become harder to run with so many opinions to bring out and discuss, and it’s easier to lose focus when planning. And personally I have always cherished the times when I worked in a small (3-5 person) team – it’s when I’ve had the most fun as a developer.

So before we flew to China, we’d held an end-of-sprint retrospective where we agreed that now might be a good time to split the company into two teams – and on Wednesday we did just that. It was quite involved, and in the course of the retrospective we covered:

  • Make-up of the two teams, ensuring they were balanced from a skills perspective and allowed for continuity. We wanted to keep folks who’d just built relationships with Microsoft carrying on the good work, for instance;
  • Office space, making sure the teams are colocated and have control over their own environment (they’ve already chosen radically different desk layouts), and that both have space to meet around a task board daily;
  • Running planning days; I was keen to bring the whole company together for reviews so that we all get to see what we’re collectively up to, and we eventually decided to do the same for retrospectives (with a view to learning as much as we can from our experiences even across teams), whilst separating planning out into two per-team 90-minute sessions in the afternoon of planning day, and running separate daily stand-ups;
  • Naming the teams 🙂 Anjuna and Tonberry it is!

Tamuji Throwing Shapes

The day went surprisingly smoothly and was unusually calm. Mary, our Product Owner, has been doing an excellent job of pre-planning preparation (grooming the product backlogs of our various clients into sprint backlogs efficiently), which pays dividends, and we were finished by 6:30pm tonight even after office rearrangement and a 90-minute lunch break, in which we introduced Mr Hugman to the delights of E-kagen.

The main problem we’ve had is that of accurately predicting a velocity for the two teams. In China our productivity was unusually high (greater than it’s ever been, in fact), which we ascribe to working long hours with an unusual level of focus on a single project. Some of this (the focus) we can try to carry forward – in fact, the two-team split should assist us in doing so. But with the company split, we’re not in a position to accurately estimate what one of the new teams can get through, until the end of this sprint.

So we now have 2 development teams, each comprising 3 developers and a full-time QA – plus a design team of 3 supporting them. We’re running design separately for the moment: we have a couple of large design projects, one of which involves no development at all, and will be bringing designers into daily standups for any sprint in which they’re doing any work with the dev team.

Next steps: we’ll bring in another Product Owner (we need to give our clients more love, and this will help – watch this space!); and see how well the team structure we’ve chosen works over the coming weeks.

It’s 2008; if you’d asked us a few years ago, we might have hoped fragmentation would be a thing of the past by now. It’s still a topic of much debate at industry events, and considered a bête noire for anyone launching a mobile product. But is it really such a problem?

Developing large J2ME applications like Trutap is typically a two-stage process. The first stage is to complete the functionality of the application, generally by targeting a couple of reliable handsets to create a couple of “reference versions”. Once the features are working well and to the satisfaction of the client across the reference handsets, the next phase is to port the product across a greater range of devices and fix any problems that arise.

The porting process is affected by fragmentation (differing availability of phone features), handset limitations (e.g. varying amounts of memory and storage available), and bugs (broken or badly implemented features on specific handsets).

The bad old days

The traditional way to tackle porting is to create separate builds for each device, or family of devices. Conditional compilation and preprocessor statements are used; effectively, portions of the application are written multiple times, each time working around the unique quirks of particular handsets. Font sizes, key mappings and the ability to run in the background are typical sources of variance between devices.

When building the application, the portions of code required by each handset are picked, according to a set of characteristics that we assign to each handset. For example, a handset that we know is able to use socket-based connections will run a different version of the application to one which is only able to connect via HTTP.
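
To make the build-time approach concrete, here’s a rough sketch of what preprocessor-driven porting typically looks like in J2ME, using //#ifdef-style directives of the kind supported by tools like Antenna or J2ME Polish. The symbol HAS_SOCKETS and the URLs are invented for illustration; every project defines its own set.

```java
import java.io.IOException;
import javax.microedition.io.Connection;
import javax.microedition.io.Connector;

// Illustrative only: a symbol like HAS_SOCKETS is defined per handset (or per
// handset family) in the build script, and the preprocessor resolves these
// directives before javac ever sees the file.
public class ConnectionFactory {

    public static Connection open(String host) throws IOException {
        //#ifdef HAS_SOCKETS
        // Handsets flagged as socket-capable talk directly to the server...
        return Connector.open("socket://" + host + ":5222");
        //#else
        // ...everything else falls back to polling over HTTP.
        //# return Connector.open("http://" + host + "/poll");
        //#endif
    }
}
```

Multiply this by font sizes, key mappings and backgrounding quirks, and the number of build permutations – and the bookkeeping needed to track them – grows very quickly, which is exactly the problem described below.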

All sorts of problems exist with this approach to porting. It relies upon prior knowledge of every target handset’s capabilities in order to work effectively. That is, all the issues that may affect an application running on any given handset need to be understood before a version can be built. You also end up with a slightly different version, uniquely tailored for every handset you support. If you are targeting even a moderately broad range of handsets that’s a whole lot of different builds, and keeping track of what code is going into each can be very difficult. The QA effort is also vastly increased: once ports are built and tested you can only be confident in delivering them to the handsets you have tested on. When new devices come onto the market or existing devices get software upgrades, *more* porting and testing is needed before these new handsets can be supported.

Things get even worse when your aim is to use restricted APIs or be featured on many operators’ application portals. In order to do this, your application builds need to be thoroughly tested by an official verification body and digitally signed. This is charged on a per-build basis; having many tens of builds of your application will increase these costs dramatically. And remember that every time you release a new feature or bug fix, you need to recertify.

A different approach

Having been troubled by many of the issues discussed above in the past, we decided to turn this approach on its head with the new version of Trutap. We apply a small amount of carefully chosen prior knowledge of the features common to the devices we’re targeting, and let the handsets tell us the rest when we run the application. We can reliably tell on the fly what a device’s screen size is, and whether it supports features such as camera, phonebook or Bluetooth. This way, all devices run essentially the same version of the application.
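
As a rough illustration of what “letting the handset tell us” can mean in practice – this isn’t the Trutap code, just a sketch using the standard JSR system properties and classes – a capability check at startup can be as simple as:

```java
// A minimal sketch of runtime capability detection in J2ME. Real-world checks
// usually cache their results and allow for handsets that misreport support.
public class Capabilities {

    /** Bluetooth (JSR-82) advertises itself via a system property. */
    public static boolean hasBluetooth() {
        return System.getProperty("bluetooth.api.version") != null;
    }

    /** PIM / phonebook access (JSR-75) does the same. */
    public static boolean hasPhonebook() {
        return System.getProperty("microedition.pim.version") != null;
    }

    /** Camera support: MMAPI handsets expose the snapshot encodings they offer. */
    public static boolean hasCamera() {
        return System.getProperty("video.snapshot.encodings") != null;
    }

    /** Some APIs are more reliably probed by asking the classloader directly. */
    public static boolean hasClass(String className) {
        try {
            Class.forName(className);
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }
}
```

In practice you often end up checking both the property and a representative class (javax.bluetooth.LocalDevice, say), since handsets are not always consistent about what they report.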

The consequence of this is that devices which lack certain features will carry pieces of code they will never use. However, the price of a few superfluous bytes of code on some handsets buys an application that has a much higher chance of *just working* on any phone it is loaded onto. And nowadays there are very few handsets on the market where, even for a large application, the additional code size is a problem.

The Trutap UI just falls into place, scaling itself on the fly to suit screen size, image assets and font height. As a result, we’ve spent the bulk of our porting effort solving a few real problems with our target devices, rather than nudging the UI into place on, say, a handset with obscenely large fonts. The end product: a single version of the application, with only three different packages for distribution (with small, medium and large icons respectively). The consequence: a clean, maintainable codebase, a relatively painless porting process, and a far simpler testing phase to boot.

So where’s the magic?

Most of the time, if UI components are implemented carefully and robustly, they can scale themselves to the resolution of the device they are running on, and there is no need for magic. Features that require special device functionality can be included or left out as needed, by detecting which APIs are accessible. A handset’s colour space can be accounted for by writing to and reading back from hidden test images. Accurate font height can be detected at runtime in similar ways.
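
By way of illustration – again a sketch rather than anything lifted from our codebase – the screen, font and colour checks described above come down to a handful of standard MIDP calls plus the hidden-test-image trick:

```java
import javax.microedition.lcdui.Canvas;
import javax.microedition.lcdui.Font;
import javax.microedition.lcdui.Graphics;
import javax.microedition.lcdui.Image;

// A sketch of runtime display probing: nothing here is handset-specific,
// which is the whole point.
public class DisplayProbe extends Canvas {

    protected void paint(Graphics g) {
        // Screen size and font metrics come for free at runtime.
        int width = getWidth();
        int height = getHeight();
        int fontHeight = Font.getDefaultFont().getHeight();

        // Colour space: draw a known colour into an off-screen image and read
        // back what the handset actually stored. A device that quantises the
        // colour reveals its true colour depth.
        Image test = Image.createImage(1, 1);
        Graphics tg = test.getGraphics();
        tg.setColor(0x123456);
        tg.fillRect(0, 0, 1, 1);

        int[] pixel = new int[1];
        test.getRGB(pixel, 0, 1, 0, 0, 1, 1);
        boolean fullColour = (pixel[0] & 0x00FFFFFF) == 0x123456;

        // Layout decisions (margins, icon set, rows per screen) would then be
        // driven from width, height, fontHeight and fullColour.
    }
}
```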

With a thorough understanding of handset behaviours, it’s nowadays possible to sidestep at runtime most of the problems normally associated with fragmentation.

Fragmentation hasn’t gone away – but it’s not such a problem any more.

Thom Hopper & Douglas Hoskins
Lead Developers, Future Platforms

Trutap retrospective

November 28, 2008

Yesterday afternoon we invited the Trutap team down to Brighton, for a post-project retrospective. It’s the first time we’ve done one of these with a client – normally we run them every fortnight with the whole FP team. On top of learning what we could from our largest Scrum project to date, we wanted to lift a bit more of the veil.

I won’t go into the full details of how we ran the day; aside from anything else, the festivities afterwards have blurred much of my memory, but I do have a good set of notes regarding our learnings. As is traditional for us, we split these into things that went well, things that didn’t go so well, and things we’d do next time. There was also an opportunity for us to call out individuals for particular appreciation, which I liked.

So, here’s some of what we got out of it:

What went well

  1. Porting has been extremely smooth this time around. Within 10 days of completing a reference version of the product, we’ve fully QA’d and released the new Trutap for around 150 handsets, with more following as I write. Most of the credit for this rests with our fine development team, who’ve been leading us away from the industry-wide nightmare of maintaining hundreds of different versions of the product, and towards a rosy future of fewer SKUs. There’s another post about this on its way, and I mentioned how we’ve done this in my Future of Mobile presentation. Suffice it to say TT version 1 had more than 30 SKUs, all built from a single code-base but targeting different devices. TT version 2 has a single binary, and comes in 4 flavours for different screen sizes.
  2. Shared documentation and tools, in particular the product backlog (all work remaining to do) and our bug tracking system, really helped. We were operating very transparently; I do wonder whether a less technically capable client would be as interested, but for those who want it, we’ll do this again.
  3. The relationship between the development teams at Trutap and FP was strong: we like each other and we got on well. ‘Nuff said.
  4. Weekly meetings were very useful, though we only started holding them a couple of months into the project.
  5. Change was handled smoothly; we were able to accept change, take on additional scope, and iterate aspects of the product as necessary throughout.
  6. We worked at a sustainable pace; the week before launch there was an eerie calm at both Trutap and FP. The product was there, the bug count was close to zero, and we hadn’t had to work evenings and weekends to get to this stage. Compared to the period before previous launches I’ve known (even for Trutap), this was incredibly calm. I don’t think our adoption of Scrum can take all the credit for this – the project involved a skilled team at FP and TT who’d worked together before, for one thing – but I certainly think it’s helped.

What could have gone better

  1. Wireframes were a poor means of specifying the product, requiring a lot of clerical maintenance and attention from both sides and offering an ambiguous level of detail: too much in some areas (e.g. screen layouts), too little in others (e.g. error states and flows).
  2. We started the project haphazardly on both sides, and had to act to bring it back on track a couple of months in.
  3. The design process had “too many cooks” and could have been more focused.
  4. We should have spent more time explaining our approach, from both a project management and a technical perspective. We never actually sat down with Trutap and said “this is how we’re going to work”, instead keeping the details of our process to ourselves. With a large successful Scrum project under our belts and a bit more self-confidence, we’ll do this next time. Equally, from a technical perspective we had a clear idea of how we broke down the work (into screens and UI components) which we could have shared earlier on.
  5. Using version 1 of the product as a catch-all default was a mistake; where not otherwise specified, the product was to “work like version 1” which led to some scope being missed off planning at the FP end, and some confusion where new features didn’t dovetail with old behaviour.
  6. We had many means of communicating: Google docs, Basecamp, Fogbugz, a Jabber chat-room, email, face-to-face meetings, etc. This sometimes led to a lack of clarity and nervousness: when a chat-room had been set up but our dev team weren’t available in it (because they were getting on with work!), the client worried. Next time around we should clarify communications methods: different tools seemed good for different jobs.
  7. Early in the project, we weren’t as good at making collective decisions as we were later on. A looming deadline always helps focus the mind 😉 but we’d try to get this focus earlier on future projects.

Next time around…

  1. We’ll prototype more and wireframe less. We may invest time into tools to support this. Wireframes don’t cut it.
  2. We’ll plan to iterate from the beginning, allowing contingency in timescales and commercials (in fact the latter was planned for, as it turned out ;))
  3. We’ll introduce any change at fortnightly sprint boundaries. A couple of times we had mid-sprint changes which led to the dev team at FP thrashing and progress slowing. Lesson learned: we should be more disciplined here.

  4. Towards the end of the project, we had a day we called the “UI Sweep”, where the TT product team sat with our developers and worked through final bits of polish. This made a difference to the product quality disproportionate to the amount of time spent. It’s quite gruelling for the developers, and relies on good tools and practices that let you make changes there-and-then, but was ultimately very worthwhile. The idea of an on-site customer is classic XP.
  5. We’ll have weekly meetings from the beginning, with everyone engaged and involved.
  6. We’ll get everyone to the pub more often 🙂

And the roll of honour: called out for particular thanks were Ali, Tobias, Luke and Rob at Trutap, and Thom, Doug and Serge at FP. Thanks again guys, we built something fantastic 🙂

None of this would’ve been possible without the able and entertaining facilitation provided, as ever, by Joh Hunt – cheers Joh 🙂

Looking back at Trutap

November 20, 2008

Trutap home screen

At the Future of Mobile this last Monday we launched a new version of Trutap, one of the projects that’s been keeping FP busy over the last 6 months. It’s been a large project, involving 11 of our folks at various points through its lifecycle and a similar number of people at Trutap.

The brief was to rework and adapt the user interface of the original product, reusing the existing storage and communications components which we had developed in 2007 as part of version 1. Trutap wanted the new UI to support searching of their customer-base, increased emphasis on user profiles, and the addition of large numbers of third-party service providers. We wanted to push J2ME and our Cactus framework a little bit further, and experiment with a new approach to addressing the problem of porting J2ME applications – but more on that another time.

The project started with a 1-month design effort, where we worked on some of the conceptual side of the UI (such as left-to-right navigation and a breadcrumb-trail-like use of tabs) and started thinking about what components would be required to support this.

The former gave us the “big picture” which was bounced back and forth between FP and TT, iterating gently in the breeze. The latter allowed us to start making visible and worthwhile progress early. We started with a completely separate harness for UI components which let us develop and exercise them outside of the main MIDlet, testing them fully (even with edge conditions, e.g. “how does this contact look when their name is too long to show on-screen?”), before plugging them into our automated tests. As a side-effect, this enforced some good design principles: components are properly encapsulated, and we ended up reusing quite a few of them across the app.
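
To give a flavour of the idea (the interfaces below are hypothetical, not Cactus itself): if a component only ever paints itself into a Graphics it is handed, it can be exercised against an off-screen image at any size, with awkward data, without ever starting the main MIDlet.

```java
import javax.microedition.lcdui.Font;
import javax.microedition.lcdui.Graphics;
import javax.microedition.lcdui.Image;

// Hypothetical sketch: a UI component that knows nothing about the MIDlet it
// will eventually live in, so a harness (or an automated test) can drive it.
interface Component {
    void paint(Graphics g, int width, int height);
}

class ContactRow implements Component {
    private final String name;

    ContactRow(String name) {
        this.name = name;
    }

    public void paint(Graphics g, int width, int height) {
        Font font = Font.getDefaultFont();
        String label = name;
        // The edge case under test: names too long for the row get truncated
        // rather than painting off the edge of the screen.
        while (font.stringWidth(label) > width && label.length() > 1) {
            label = label.substring(0, label.length() - 1);
        }
        g.drawString(label, 0, 0, Graphics.TOP | Graphics.LEFT);
    }
}

class ComponentHarness {
    /** Render a component into an off-screen buffer; an exception is a failure. */
    static void exercise(Component c, int width, int height) {
        Image buffer = Image.createImage(width, height);
        c.paint(buffer.getGraphics(), width, height);
    }
}
```

Edge conditions like the too-long name become ordinary test cases, and because nothing in the component touches the MIDlet lifecycle, the same classes drop straight into the real application afterwards.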

This was the first large project which has started, run and finished since we adopted Scrum for project management at FP a year ago. It’s been both a learning process, and confirmation (for us and our clients) that Scrum works “in the real world” on large complex projects… on the subject of which, if you were at my talk at the Werks a month or so back, you would have seen a diagram that looked something like this:

Release burndown

It’s the burndown chart that we used internally to track the progress of this project. The blue bars above the X axis show the initial scope measured in estimated hours of work, whilst the red bars beneath the axis show scope added mid-project – and the black line shows the rough path of the original project plan. Each bar represents work remaining at the start of a two-week sprint. So you can see the project was originally scheduled as 12 weeks effort (sprints 17-22), the initial scope was completed in just under 14 weeks, and scope added – particularly towards the end as we iterated over the messaging section – added 6-8 weeks onto the overall timescales.

You can see that some sprints clearly went better than others (resulting in better progress); internally we tracked specific activities against these sprints, and it’s no surprise that riskier activities (more technically complex ones, or those relying on third parties) tend to go slower than well-understood ones.

Only towards the end did we start sharing this chart with Trutap. I’d previously felt a bit self-conscious about filling status meetings with graphs and charts – it feels a bit geeky – but I learned a lesson here: I wish we’d done it earlier, as they found it a really helpful way to represent and understand the progress we were making.

Most of the project proceeded on an incremental basis, as we gradually migrated sets of features over from the version 1 interface to the new look and feel. This proceeded section-by-section: signup and login first, then contact management, messaging, profile management, search, services, and so on. At the end of each iteration (i.e. once a fortnight), we released a version of the product to their QA team for formal testing, though we were conducting QA as part of our development too; I was pleased to hear that Trutap felt quality of releases had noticeably improved since the first version of the product last year. In the final sprints we were releasing more frequently: their QA staff had direct access to our build system and could pull off new binaries as and when they were ready.

We iterated a little on messaging, spending a sprint returning to and reworking the interface once our customer had had a chance to see how the wireframes we’d all theorised about worked on a real phone; and in the final sprint of the project we had another run-through, with the Trutappers coming down to sit alongside our development team and get last-minute pre-launch changes made.

Design tended to be done “just-in-time”; sometimes we deliberately anticipated components that would be needed for the next sprint’s work and undertook design for them in the preceding sprint (in a classic “design-one-sprint-ahead” model), but sometimes we were able to work a story or two ahead and keep design and development tasks in the same sprint.

I’m deliberately not writing much about lessons learned in this post; we’re having a half-day retrospective with the FP and TT teams getting together next week. I’ll follow up this meeting with a post here summarising the day; and there’ll be another piece coming soon on our approach to porting, which I touched on at the Future of Mobile.

In the meantime: congratulations to everyone at Trutap and Future Platforms (past and present) who worked on this. I love launching 🙂

Enterprising Engineers

November 7, 2008

I’ve just gotten back from a wonderful little event at Sussex University, called Enterprising Engineers.

It’s organised by the luxuriantly bearded Jonathan Markwell of Inuda, who seems to have mastered the dual skills of fitting 36 hours into the working day and either attending or organising every digital media event in town. The format was good fun: nibbles and booze followed by a 3-person panel taking turns to talk for 10-20 minutes, and taking questions from the audience.

Tonight it was myself, Dan from Angry Amoeba, and Glenn from Madgex – all talking about products.

Dan held forth on his philosophies towards startups, how it’s affected his working life, and finished with a demonstration of his product, Tails (a beautifully simple-looking bug tracking system).

Glenn followed up with a roundhouse presentation to the face, giving the back-story to Backnetwork, a product Madgex built to connect folks at events which is nowadays languishing without further development, but nonetheless earning its keep as a tool for bartering in exchange for sponsorship.

And then I wibbled somewhat – for my Macbook decided to crash, preventing me from showing the Ghost Detector video that should have introduced my skit. I rambled on about our experiences launching Ghosty in the US, then Flirtomatic and Twitchr: the unifying theme being the complexity and engineering problems which can lurk behind even the inane, the lewd, or the playful.

Working with the W3C

November 2, 2008

Back in September, I mentioned that I’ve been invited to work with the W3C Mobile Web Best Practices Working Group, specifically to help with Content Transformation (CT).

It’s a really contentious topic. The event which I think provoked the whole discussion was Vodafone foolishly deploying a transcoder which prevented mobile sites from identifying the device used to access them: effectively breaking large chunks of the mobile web. A particularly nasty aspect of this was that the sites most badly affected were the ones which had been specifically written to deliver the best mobile experience.

The W3C CT group is creating a set of guidelines that deployers of transcoding proxies and developers can use to ensure end-users get the best possible experience of mobile content. Involved in this effort are parties from across the mobile value chain, though mostly from larger organisations which tend to participate in these sorts of things. I’m there to try and ensure that smaller parties – content owners and mobile developers – are better represented.

There have been other attempts to put together similar guidelines – the most prominent being Luca Passani’s Rules for Responsible Reformatting: A Developer Manifesto, which has quite a few signatures from the development community, as well as a number of transcoder vendors. There’s a great deal of overlap between the contents of the Manifesto and the CT document. I think this is because the two are concerned with a quite specific set of technologies, neither is trying to invent any new technology, and both have the same aim in mind: to ensure that the Vodafone/Novarra debacle, or something similar, isn’t repeated.

What I like most about the CT document is the responsibilities it places upon transcoder installations, if they’re to be compliant – and with Vodafone in the CT group, I think it’s reasonable for us to expect them to move their transcoders to compliance at some point. The document is still work-in-progress, but right now some of these (with references) include:

  • Leaving content alone when a Cache-Control: no-transform header is included in a request or response (4.1.2);
  • Never altering the User-Agent (or indeed other) headers, unless the user has specifically asked for a “restructured desktop experience” (4.1.5);
  • Always telling the user when there’s a mobile-specific version of content available – even if they’ve specifically asked for a transcoded version of the site (4.1.5.3). I think this is lovely: as long as made-for-mobile services are better than transcoded versions (and in my experience it’s not hard to make them so), users will be gently guided towards them wherever they exist;
  • Making testing interfaces available to developers, so that content providers can check how their sites behave when accessed via a transcoder (5)

There’s also a nice set of heuristics referred to, which gives a hint to content providers of what they can do to avoid transcoding.

The big bugbear for me (since joining the group) has been the prospect of transcoders rewriting HTTPS links, which I believe many do today. I’ve been told that in practice Vodafone maintain a list of financial institutions whose sites they will not transcode, presumably to avoid security-related problems and subsequent lawsuits – which would seem to support the notion that this is a legal minefield.

The argument for transcoding HTTPS is that it opens up access to a larger pool of content, including not only financial institutions like banks which absolutely need security, but also any site that uses HTTPS for login forms. Some HTTPS-accessible resources do have less stringent requirements than others (I care more about my bank account than my Twitter login, say), but it’s not a transcoder’s place to decide when and what security is required, overriding the decisions a content provider may have made.

The CT group has agreed that the current document needs to be strengthened. Right now it is explicit that if a proxy does break end-to-end security, end-users need to be alerted to this fact and given the option of a fully secure experience. Educating the mass market about this sort of security issue is likely to be difficult at best; I take small comfort from the fact that users will at least be given a choice rather than being forced into an insecure experience, but this still feels iffy to me.

And security isn’t just for end-users: content providers need to be sure they’re secure, and beyond prohibiting transformation of their content using a no-transform directive there’s not much they can currently do. So I suspect we have more work cut out for us on this topic – and the amount of feedback around HTTPS would seem to confirm it.
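
For content providers wondering what “prohibiting transformation” looks like in practice: the no-transform directive is just an HTTP header on the response (and ideally on requests too). A minimal servlet-flavoured sketch, assuming your made-for-mobile site happens to be served from Java:

```java
import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// A sketch of the one defence a content provider currently has: asking
// transcoding proxies to leave the response alone.
public class MobileContentServlet extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // Cache-Control: no-transform asks compliant proxies not to alter
        // either the body or the headers of this response.
        response.setHeader("Cache-Control", "no-transform");
        response.setContentType("text/html; charset=UTF-8");
        response.getWriter().write("<html><body>Made-for-mobile content</body></html>");
    }
}
```

Whether a given operator’s proxy actually honours it is, of course, exactly what the CT document is trying to pin down.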

The fact that we need to have either the CT document or the Manifesto is a problem in itself, of course: infrastructure providers shouldn’t be messing with the plumbing of the mobile web in the way that they have been. But given where we are right now, what are we to do? Luca’s already done an excellent job of representing the anger this has caused in the mobile development community; I hope the CT work can complement his approach.

I’m also going to write separately about the process of participating in the group; I’ve found the tools and approach quite interesting and it’s my first experience of such a thing.

A Year of Scrum

October 16, 2008

So, Joh and I did our talk last night at The Werks – attempting to summarise experiences from the last year at Future Platforms, since we formally adopted Scrum for all our development processes.

I got delayed slightly in London, and arrived at Hove station a mere 3 minutes before we were due to start: oh, the irony of arriving late when you’re giving a talk about delivering products on time. But fortunately things were slightly behind schedule already and no-one seemed to notice.

The talk itself went OK; it was a rerun of, and expansion upon, a half-hour piece I pulled together for Barcamp Brighton a month or so back… and the Barcamp talk itself was a follow-on from one Joh and I gave at SkillSwap last year, summarising our experiences a week into Scrum. So I was pretty familiar with the material, although I made a point of adding in lots of (sanitised and anonymised) data from real projects we’ve been running.

My Adoring Public

There’s far too much to go into here, and if you’ve been reading this site you’ll have read quite a bit of it already. Plus I have to save *something* to say to people I meet in real life. But if you attended, you might find these annotated slides handy – I’ve taken the ones we showed on the night and added in a few notes here and there which might jog your memory on the points I was making as I went through them. Beware, for as a wise man says: if the slides make sense without my being there, there’s no point in my presenting them; and if they don’t make sense without me, you won’t get much value from them. Caveat emptor!

I was given an hour to do this skit, and with questions we ended up using just shy of two hours, which is the longest I think I’ve ever spoken publicly. With that warning in mind, if you’d like to chat about this sort of thing or find out more, do give me a shout: tom dot hume at future platforms dot com

Thanks to James McCarthy and Rosie Sherry for organising everything, to Joh for gamely keeping me on the straight and narrow once more, and everyone who came along for making it so much fun 🙂