November 23, 2008
I often check Twitter on my iPhone. But instead of using one of the numerous Twitter clients from the app store, I’ll just load up mobile Twitter in Safari. Why don’t I use an app?
Every option available right now has problems. In this post I’m going to comment on three iPhone Twitter experiences: mobile Twitter (m.twitter.com) and two popular apps, Tweetsville and Twitterrific.
Mobile Twitter (m.twitter.com)
Mobile Twitter, located at m.twitter.com, is the official mobile version of Twitter.com. It works fairly well as a mobile website, but it’s not iPhone optimized. The design doesn’t follow the iPhone human interface guidelines published by Apple. A few changes would improve things for iPhone users:
- Tappable areas should be bigger. The “Older” and “Newer” links should be at least 44 x 44 pixels, the minimum Apple recommends.
- The text entry box should be bigger. This falls under the previous suggestion, but it’s so important it deserves its own mention. A bigger entry box would benefit all mobile users (the box should be a certain percentage of the screen). Right now it looks ridiculously small on iPhone, it’s awkward to type an update into, and editing what you’ve written is frustrating. The experience is so poor that I use m.twitter.com to read updates, but to actually send an update to Twitter, I use SMS.
- It’d be nice to have a character counter. It’s essential, really: Twitter users need to know how much of their 140-character budget they’ve used and how much they’ve got left.
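The counter itself is trivial to implement. A minimal sketch of the logic in Python (the function names are mine, not Twitter’s):

```python
TWEET_LIMIT = 140  # Twitter's per-update character budget

def chars_remaining(draft: str) -> int:
    """Characters left in the budget; negative means over the limit."""
    return TWEET_LIMIT - len(draft)

def counter_label(draft: str) -> str:
    """The label a mobile page could render beside the entry box."""
    remaining = chars_remaining(draft)
    if remaining >= 0:
        return str(remaining)
    return f"{-remaining} over"
```

On a real page this would run in JavaScript on each keystroke, but the arithmetic is the same.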
Third-party Twitter applications for iPhone: Twitterrific and Tweetsville
I’m going to compare Tweetsville and Twitterrific, which isn’t really fair, as I’m comparing the free edition of Twitterrific with a premium app, Tweetsville. But as far as I know, the user experience is the same, except that the premium version has no ads and adds an option to toggle a light background. Twitterrific is by Iconfactory and has free and premium versions. Tweetsville is by Ed Voas, who sold the application to Tapulous; it’s a premium app with no free version.
Appearance. I’m not a fan of Twitterrific’s default appearance. The gradient background behind every single update is just something extra the app has to load, along with the text content. I don’t think it looks nice, either. Which, of course, is the real issue here. ; )
Seriously speaking, one of the things I like about Twitter is its simplicity, both in concept and visual design. Any extra graphic embellishment takes away from the simplicity and transparency. It’s worth noting that the desktop version of Twitter doesn’t even allow users to customise a background colour (the default is white). Any Twitter app should aim to load as quickly as possible, so being spare in appearance is a good thing.
Tweetsville’s appearance is simpler. It offers two display options (bubbles or no bubbles). I think it fits better with the appearance of core iPhone apps, in both its visual design and interaction design.
Content concentration. How many updates can you cram into a single screen, and is cramming content into the screen a good thing to aim for? Content in context is something designers should definitely take into consideration. Twitterrific appears to fit more updates into a single screen than Tweetsville, which has the benefit of less scrolling. Given how little work scrolling involves, and how much you need to scroll anyway, perhaps it doesn’t make much of a difference. It also depends on how many people you’re following, and how much content you need to catch up on.
Tweetsville: (1) plain, (2) speech bubbles.
User experience. Tweetsville looks better than Twitterrific, and its user experience is better too: it’s more user-friendly and more compliant with Apple’s human interface guidelines for iPhone, which shows best on the settings screens below.
Tweetsville’s settings vs. Twitterrific’s settings.
Tweetsville’s settings fit on a single screen; Twitterrific’s fill roughly three. The latter offers too many options, and not all are necessary. Is the ‘Light Background’ button totally necessary in the free edition? It mainly serves as an ad for the premium edition. How come Tweetsville gets away with so few settings options?
User control: tab bar. Another good thing about the design of Tweetsville is the presence of the tab bar. The tab bar on the bottom of the screen acts like a useful frame, giving the user more freedom over where they can move within the application.
The tab bar is a great asset. Even better is the ability to edit the tabs (which you can do, surprise surprise, by hitting “Edit” on the “More” screen). This works like the tabs in the iPhone’s iPod app, drawing on an established affordance (good).
Tweetsville’s custom tab bar
Progress/status bars vs spinners. When I refresh the app I want to know how quickly I’ll be able to read new updates. So I want to see a visual indication of progress.
The spinner (circled in red) doesn’t indicate its progress visually. It just tells me it’s working. Great, but how soon will I get to see my updates?!
The browser bar (also circled in red) fills up as it downloads data. It tells me that not only is something happening, it’s completing a task, and is at least a percentage through completing it.
I really like this and would love to see an app that could show this, even if it’s not accurate. Psychologically, it eases my pain by giving me the impression that something’s getting done!
While none of these experiences are perfect, the good thing about multiple options is that the designers behind them will learn from each other’s merits and mistakes and improve iteratively. Twitterrific was one of the first clients out there for iPhone and iPod touch, and Tweetsville is a fairly recent release, so the latter had more time to learn from existing apps on the market.
I hope that Twitter will make an iPhone optimized site according to Apple’s human interface guidelines, because I’d be happy to use the website. Twitter itself is extremely lightweight, so does it really need an app? Any app should reflect the lightweight nature of Twitter, and aim to keep loading time as low as possible.
November 20, 2008
At the Future of Mobile this last Monday we launched a new version of Trutap, one of the projects that’s been keeping FP busy over the last 6 months. It’s been a large project, involving 11 of our folks at various points through its lifecycle and a similar number of people at Trutap.
The brief was to rework and adapt the user interface of the original product, reusing the existing storage and communications components which we had developed in 2007 as part of version 1. Trutap wanted the new UI to support searching of their customer-base, increased emphasis on user profiles, and the addition of large numbers of third-party service providers. We wanted to push J2ME and our Cactus framework a little bit further, and experiment with a new approach to addressing the problem of porting J2ME applications – but more on that another time.
The project started with a 1-month design effort, where we worked on some of the conceptual side of the UI (such as left-to-right navigation and a breadcrumb-trail like use of tabs) and started thinking about what components would be required to support this.
The former gave us the “big picture” which was bounced back and forth between FP and TT, iterating gently in the breeze. The latter allowed us to start making visible and worthwhile progress early. We started with a completely separate harness for UI components which let us develop and exercise them outside of the main MIDlet, testing them fully (even with edge conditions, e.g. “how does this contact look when their name is too long to show on-screen?”), before plugging them into our automated tests. As a side-effect, this enforced some good design principles: components are properly encapsulated, and we ended up reusing quite a few of them across the app.
This was the first large project which has started, run and finished since we adopted Scrum for project management at FP a year ago. It’s been both a learning process, and confirmation (for us and our clients) that Scrum works “in the real world” on large complex projects… on the subject of which, if you were at my talk at the Werks a month or so back, you would have seen a diagram that looked something like this:
It’s the burndown chart that we used internally to track the progress of this project. The blue bars above the X axis show the initial scope measured in estimated hours of work, whilst the red bars beneath the axis show scope added mid-project – and the black line shows the rough path of the original project plan. Each bar represents work remaining at the start of a two-week sprint. So you can see the project was originally scheduled as 12 weeks effort (sprints 17-22), the initial scope was completed in just under 14 weeks, and scope added – particularly towards the end as we iterated over the messaging section – added 6-8 weeks onto the overall timescales.
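The arithmetic behind a chart like this is simple. A sketch in Python with made-up figures (the real hour counts aren’t in this post):

```python
def burndown(initial_scope, added, completed):
    """Hours of work remaining at the start of each sprint.

    initial_scope: estimated hours at project start.
    added[i] / completed[i]: hours of scope added and burned during
    sprint i (two weeks each). All figures here are illustrative.
    """
    remaining = [initial_scope]
    for extra, done in zip(added, completed):
        remaining.append(remaining[-1] + extra - done)
    return remaining

# Hypothetical project: 360h initial scope, scope added mid-project.
print(burndown(360, [0, 0, 40, 60, 0], [80, 60, 90, 70, 100]))
# → [360, 280, 220, 170, 160, 60]
```

Plot each element as a bar (with the added scope drawn beneath the axis, as above) and you have the chart.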
You can see that clearly some sprints went better than others (resulting in better progress); internally we tracked specific activities against these sprints, and it’s no surprise that riskier activities (more technically complex ones, or those relying on third parties) tend to go slower than well-understood ones.
Only towards the end did we start sharing this chart with Trutap. I’d previously felt a bit self-conscious about filling status meetings with graphs and charts – it feels a bit geeky – but I learned a lesson here: I wish we’d done it earlier, as they found it a really helpful way to represent and understand the progress we were making.
Most of the project proceeded on an incremental basis, as we gradually migrated sets of features over from the version 1 interface to the new look and feel. This proceeded section-by-section: signup and login first, then contact management, messaging, profile management, search, services, and so on. At the end of each iteration (i.e. once a fortnight), we released a version of the product to their QA team for formal testing, though we were conducting QA as part of our development too; I was pleased to hear that Trutap felt quality of releases had noticeably improved since the first version of the product last year. In the final sprints we were releasing more frequently: their QA staff had direct access to our build system and could pull off new binaries as and when they were ready.
We iterated a little on messaging, spending a sprint returning and reworking the interface once our customer had had a chance to see how the wireframes we’d all theorised about worked on a real phone; and in the final sprint of the project we had another run-through, with the Trutappers coming down to sit alongside our development team and get last-minute pre-launch changes made.
Design tended to be done “just-in-time”; sometimes we deliberately anticipated components that would be needed for the next sprint’s work and undertook design for them in the preceding sprint (in a classic “design-one-sprint-ahead” model), but sometimes we were able to work a story or two ahead and keep design and development tasks in the same sprint.
I’m deliberately not writing much about lessons learned in this post; we’re having a half-day retrospective with the FP and TT teams getting together next week. I’ll follow up this meeting with a post here summarising the day; and there’ll be another piece coming soon on our approach to porting, which I touched on at the Future of Mobile.
In the meantime: congratulations to everyone at Trutap and Future Platforms (past and present) who worked on this. I love launching 🙂
November 18, 2008
I really enjoyed Future of Mobile yesterday.
The day started a little sluggishly with a well-qualified panel discussing the future of mobile operating systems. Revenues of the panelists’ businesses weren’t particularly exciting, and aside from an interesting conversation around runtimes I didn’t feel I learned a great deal.
For me, things really started to take off with the presentation from Doug Richard of Trutap (disclosure: they’re a client of ours). Doug was talking about the rise of a middle class in the developing world that shares aspirations with the middle classes everywhere, and quietly pointed out our arrogance in assuming that it could be otherwise. I particularly liked his notion that Western operators would adopt defensive positions and hence take fewer risks (and be less innovative) than those coming out of India.
I didn’t devote much attention to Matt Millar from Adobe, I’m afraid – sorry Matt, but I was doing last-minute panicking about my own presentation. I’ve not watched the video yet, but whilst I’d spent more time preparing than I ever have in the past (and felt the slides were reasonably polished), I made the mistake of over-planning what I was going to say. Normally I work from bullet points and just chat around them (something I’m comfortable doing) but after my hour-long overrun at the Werks talk a month or so back I tried to restrain myself by planning what I’d say in great depth. The upshot was I felt like I was working from a script, and had to keep checking where I was, staring at a screen instead of talking to the audience. Lesson learned there, but at least I managed to get my macaroons-as-analogy-for-porting slide out.
The bloggers panel was a really good format: 6 bloggers, 6 minutes each, mirroring blogging itself. Really nice to hear Vero Pepperell evangelise a human approach to communication – as an industry we ought to know that stuff, but I can’t help feeling we need someone to gently beat it into us on occasion. Helen was righteous – nuff said.
A lunch, or non-lunch, followed. If there was a weak point to the day overall, I’d say it was the facilities. I heard plenty of people complain about a lack of wi-fi (though as a 3 USB dongle owner I managed OK), there was no lunch provided, and no coffee in one of the coffee breaks. Fortunately Kensington is full of restaurants and cafes, but it would’ve been nice to hang around in a throng during all these breaks. The auditorium itself was excellent – a lovely space, good sound, and power to most seats.
Rich Miner gave a great talk in the afternoon, filling in a bit more detail around Google and their plans, and drawing on his own history launching the Windows SPV Smartphone when he was at Orange. He gave a good if negative insight into the world of operators when he talked about product managers feeling threats from new product developments and derailing them.
Interesting also to hear about his take on mobile web apps – that they fail for reasons of network latency, lack of local storage, and access to device capabilities. Whilst you can see efforts in Android, PhoneGap and OMTP Bondi to address some of these, it’s a little way from the “web apps as future of mobile” angle which I’d heard Google were adopting.
And similarly it was good to hear Rich quizzed on the topic of Android and fragmentation by David Wood (who’s more qualified to talk about this than he?). Rather than espousing the rather bland “we don’t think fragmentation is in the interests of the industry” line I’ve heard from Google before, Rich talked about the value of having a reference implementation by which to judge others; a conformance test being introduced for OEMs; and the use of challenging and popular reference apps to provide a “Lotus 1-2-3” style evaluation of an Android implementation.
Tomi Ahonen was hilarious and upbeat as usual – full of detailed and slightly threatening stats on the hold that mobile has on us, and case studies of fantastic things launched elsewhere (usually Asia). The Tohato “world’s worst war” was my favourite: purchasers of snack products fighting one another in vast virtual armies, wonderful.
And the day finished with another panel discussion: lots of disagreement from qualified folks who’ve been doing this stuff for years, including two of our clients. We had some kind words said about us by Carl from Trutap and Alfie of Moblog fame – thanks guys! – and it was particularly interesting to hear the pendulum of fashion swing back towards applications, away from the mobile web. I wonder how permanent this effect, which is surely down to the iPhone App Store, will be?
The evening party followed, carrying on the upbeat atmosphere 🙂
My slides from the day are online here. The lens-tastic Mr Ribot took video footage of the talk which you can see here, and I heard a rumour that the official footage from the event may go online some time too.
Thanks to Dominic and all the team at Carsonified for the hard work they put into the event – I know all too well from Sophie how much this takes, and they did a cracking job. And a particular yay to Mr Whatley, who stepped in as compere at the last minute and did an excellent job of keeping the audience engaged, even in those sleepy after-lunch slots 😉
November 7, 2008
I’ve just gotten back from a wonderful little event at Sussex University, called Enterprising Engineers.
It’s organised by the luxuriantly bearded Jonathan Markwell of Inuda, who seems to have mastered the dual skills of fitting 36 hours into the working day and either attending or organising every digital media event in town. The format was good fun: nibbles and booze followed by a 3-person panel taking turns to talk for 10-20 minutes, and taking questions from the audience.
Dan held forth on his philosophies towards startups, how it’s affected his working life, and finished with a demonstration of his product, Tails (a beautifully simple-looking bug tracking system).
Glenn followed up with a roundhouse presentation to the face, giving the back-story to Backnetwork, a product Madgex built to connect folks at events which is nowadays languishing without further development, but nonetheless earning its keep as a tool for bartering in exchange for sponsorship.
And then I wibbled somewhat – for my Macbook decided to crash, preventing me from showing the Ghost Detector video that should have introduced my skit. I rambled on about our experiences launching Ghosty in the US, then Flirtomatic and Twitchr: the unifying theme being the complexity and engineering problems which can lurk behind even the inane, the lewd, or the playful.
November 6, 2008
Hmm, so yesterday I experimented with something new in one of our fortnightly retrospectives. It didn’t quite work out as hoped.
The idea was that by reading fortune cookies you could draw parallels with your experience of the iteration, and use them to generate suggestions and advice for the coming one. It would have the benefit of being a bit of fun, something to get people talking and sharing.
Sadly, it seems that the quality of fortune cookies in Brighton is slightly different from what I would have preferred. While the more traditional ‘Confucius says’ style might have worked (if only because the clichés would have generated humour), I had not counted on the slightly modernised ones that I purchased.
They ranged from the bizarre, to the slightly insulting, and then on to the strongly suggestive and slightly risqué. Certainly not suitable topics for group discussion, although the humour involved almost made up for it.
Intriguingly, the ‘not so business-like’ style of discussion that this method engendered persisted strongly during the following group lunch. I hadn’t previously considered this effect, and it has certainly given me some intriguing threads of thought to follow.
Advice: Check your expectations before you hand out fortune cookies.
* My sample two fortunes said “The colour blue will be lucky for you” (semi-appropriate for me, but not so useful for reflection) and “You are not a complete idiot, some parts of you are missing” (which was just charming really).
November 5, 2008
I spent last Monday at Channel 4’s offices, taking part in the Mobile Game Pitch organised by Channel4, EA and Nokia. For once, I wasn’t the one doing the pitching: I went along as a mentor, helping 4Talent’s young producers prepare for their presentations.
It ended up being a long and intense day, but there were plenty of positives.
One was meeting Scott Foe – considering that he is recognised as the highest-profile game producer in the mobile industry, I should have known of him before. Still, he turned out to be a great character, had some nice American cigarettes, and gave an insightful presentation about the mobile games industry:
- The importance of creating trans-media characters
- The significant difference in the marketing of console games (shock & awe) vs. mobile games (sustained trickle)
- The importance of word of mouth
- The significant value associated with game concept creation – i.e. pre-production
- The 2.5 years he and Nokia have invested in the development of Reset Generation (I now have to play it)

It was also great to listen to the 8 game concepts selected for the final, mostly because they came from people without a mobile background. They were ideas in their infancy, a few of them with some potential, but they showed a growing understanding of and interest in mobile and its potential.
November 2, 2008
It’s a really contentious topic. The event which I think provoked the whole discussion was Vodafone foolishly deploying a transcoder which prevented mobile sites from identifying the device used to access them: effectively breaking large chunks of the mobile web. A particularly nasty aspect of this was that the sites most badly affected were the ones which had been specifically written to deliver the best mobile experience.
The W3C CT group is creating a set of guidelines that deployers of transcoding proxies and developers can use to ensure end-users get the best possible experience of mobile content. Involved in this effort are parties from across the mobile value chain, though mostly from larger organisations which tend to participate in these sorts of things. I’m there to try and ensure that smaller parties – content owners and mobile developers – are better represented.
There have been other attempts to put together similar guidelines – the most prominent being Luca Passani’s Rules for Responsible Reformatting: A Developer Manifesto, which has quite a few signatures from the development community, as well as a number of transcoder vendors. There’s a great deal of overlap between the contents of Manifesto and the CT document. I think this is because the two are concerned with a quite specific set of technologies, neither are trying to invent any new technology, and both have the same aim in mind: to ensure that a repeat of the Vodafone/Novarra debacle, or similar, doesn’t recur.
What I like most about the CT document is the responsibilities it places upon transcoder installations, if they’re to be compliant – and with Vodafone in the CT group, I think it’s reasonable for us to expect them to move their transcoders to compliance at some point. The document is still work-in-progress, but right now some of these (with references) include:
- Leaving content alone when a Cache-control: no-transform header is included in a request or response (4.1.2);
- Never altering the User-Agent (or indeed other) headers, unless the user has specifically asked for a “restructured desktop experience” (4.1.5);
- Always telling the user when there’s a mobile-specific version of content available – even if they’ve specifically asked for a transcoded version of the site (184.108.40.206). I think this is lovely: as long as made-for-mobile services are better than transcoded versions (and in my experience it’s not hard to make them so), users will be gently guided towards them wherever they exist;
- Making testing interfaces available to developers, so that content providers can check how their sites behave when accessed via a transcoder (5)
There’s also a nice set of heuristics referred to, which gives a hint to content providers of what they can do to avoid transcoding.
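Two of those responsibilities are easy enough to express in code. A minimal sketch in Python of how a compliant proxy might apply them (names and structure are mine, not the CT document’s):

```python
def no_transform(headers: dict) -> bool:
    """True if Cache-Control includes the no-transform directive."""
    directives = headers.get("Cache-Control", "").split(",")
    return "no-transform" in [d.strip().lower() for d in directives]

def may_transform(request_headers: dict, response_headers: dict) -> bool:
    """Leave content alone if either end sent no-transform (4.1.2)."""
    return not (no_transform(request_headers) or no_transform(response_headers))

def forwarded_user_agent(device_ua: str, desktop_ua: str,
                         user_chose_desktop: bool) -> str:
    """Preserve the device's User-Agent unless the user explicitly
    asked for a restructured desktop experience (4.1.5)."""
    return desktop_ua if user_chose_desktop else device_ua
```

The upshot for content providers: sending `Cache-Control: no-transform` on your responses is the one lever you can pull today to keep a compliant transcoder off your content.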
The big bugbear for me (since joining the group) has been the prospect of transcoders rewriting HTTPS links, which I believe many do today. I’ve been told that in practice Vodafone maintain a list of financial institutions whose sites they will not transcode, presumably to avoid security-related problems and subsequent lawsuits – which would seem to support the notion that this is a legal minefield.
The argument for transcoding HTTPS is that it opens up access to a larger pool of content: not only financial institutions like banks, which absolutely need security, but also any site that uses HTTPS for login forms. Some HTTPS-accessible resources do have less stringent requirements than others (I care more about my bank account than my Twitter login, say), but it’s not a transcoder’s place to decide when and what security is required, overriding the decisions a content provider may have made.
The CT group has agreed that the current document needs to be strengthened. Right now it is explicit that if a proxy does break end-to-end security, end-users need to be alerted to this fact and given the option of a fully secure experience. Educating the mass market about these sorts of security issues is likely to be difficult at best; I take small comfort from the fact that users will at least be offered a choice rather than forced into an insecure experience, but this still feels iffy to me.
And security isn’t just for end-users: content providers need to be sure they’re secure, and beyond prohibiting transformation of their content using a no-transform directive there’s not much they can currently do. So I suspect there’s more work cut out for us on the topic – and the amount of feedback around HTTPS would seem to confirm this.
The fact that we need to have either the CT document or the Manifesto is a problem in itself, of course: infrastructure providers shouldn’t be messing with the plumbing of the mobile web in the way that they have been. But given where we are right now, what are we to do? Luca’s already done an excellent job of representing the anger this has caused in the mobile development community; I hope the CT work can complement his approach.
I’m also going to write separately about the process of participating in the group; I’ve found the tools and approach quite interesting and it’s my first experience of such a thing.
October 28, 2008
There’s so much news right now about mobile applications and stores, it feels like time to take stock.
When iPhone launched and I got my greedy mitts on a jailbroken Shiny from the US, one of the things I liked most about it was the dodgy “installer” app which the kind man who jailbroke it for me put there. At the time it definitely felt like the best experience I’d had of downloading and installing third-party mobile applications, and Apple have gone on to improve on it with their official Application Store.
Conventional wisdom had been that users don’t download mobile apps, a generalisation which flies in the face of our experience; we know we’ve had well in excess of 750,000 downloads of apps incorporating our Cactus UI library to date, plus the installations we’re unable to track ourselves. And our experience isn’t unique. But there have definitely been some problems taking owners of conventional smartphones through the process of downloading and installing an application:
- Text into a shortcode
- Receive a WAP Push or text message
- Find it, open it, click on the link
- Ignore the warning that you might go online
- Pray your mobile phone has correct connection settings
- Go online, wondering how much this is costing you
- Find out your phone isn’t supported (whatever that means)
- Wonder what all the fuss is about
So… application downloads to date have been by customers who are educated enough, driven enough or persistent enough to deal with this infernal procedure.
Just one more thing…
I love the mobile web. It’s getting better every year as devices and networks improve, it’s still got a long way to go, and it’s the most cost-effective means of getting a mobile service launched.
But isn’t it strange that Apple are getting massive success selling applications via an application itself – that they’re not selling and distributing iPhone apps via the web, either on the device or through iTunes? And it looks like Google are taking the same tack.
Isn’t this a pretty strong endorsement of application as a route to online content, rather than the web? And isn’t the success Apple has enjoyed with their application store testament to the fact that even in situations where the web might provide a perfectly serviceable experience (such as e-commerce), applications are a better route to take? Not that I’m suggesting we don’t need to wait and see on this one, or that there won’t be problems down the line as the quantity of content available via these stores increases.
October 18, 2008
Two main things I got out of this talk: (1) carrot: good games should reward people for contributing more, with points, levels, collectable items. (2) goals: good games should have an end goal.
Casting my mind back to various games I’ve played, I’ve never been so hooked on a game as Ultima Online. I played this game compulsively for about a year – quitting only when I really needed to focus on schoolwork (and when I finally switched to a Mac).
Rewards and goals are everywhere in Ultima Online. The gameplay is rich on hundreds of different levels, from the macrocosm of the game down to the tiny little details. I loved that spell ingredients, known as ‘reagents’, spawned across the land – they could be picked up and used, or sold. I could also harvest cotton from cotton fields and sell it to tailors in the town. If you’re like me, you’ll find both of these ideas hopelessly novel.
Another thing I liked about the game: killing monsters gives a character an amount of “fame”. And with enough “fame”, you gain a title. This brings a compelling social aspect to the game: you have something to show other players for your participation in the game. Your title also reflects your skill level, ranging from “novice” to “legendary” (they might have introduced more levels since).
Building a character within a system, within a world, is satisfying, compelling, and addictive. A character can take one of hundreds of possibilities. The game is not just about “killing stuff” within a contrived “level”. In addition to being a mage or a warrior, you can make a living as a tailor, a chef, a bard, a thief, and even a beggar. Skill increases as you practise it: so, to build your bard character, you’d need to first raise enough gold to buy an instrument, then walk around playing said instrument.
This does get a little dull. In some professions, raising skill is much too mechanical and technical for sustained interest, so you could macro or automate it to rise. (If you get caught, though, you put your account at risk.) On the whole, I think the game manages this well: although it can get boring, it is more likely that you will invest the time to build your character than it is for you to give up or quit the game.
How does Ultima Online manage this? You can clearly see the structure and process for raising a skill. In other words, you can see the journey ahead, and know what you need to do in order to reach the end. You can see the rewards in the future: e.g., a tailor with 100.00% skill can make better quality leather armour for your mage. An animal tamer with 100.00% skill has a much better chance at successfully taming a dragon. Getting to 100.00% skill is difficult, but fun: the rewards are both in the journey and in the destination.
I think Ultima Online is the perfect game. Sadly, its membership is dwindling: possibly because World of Warcraft is the new MMORPG vogue, and possibly because gamers aren’t known for their lengthy attention spans.
So, some basic principles which are useful to interaction and experience designers, or anyone planning a social website:
- Reward your users for participation.
- Allow them to build something, and allow them to see the end-goal.
- Give your users a structure: give them limitations.
From ‘Rules of Play’:
The idea that players subordinate their behaviors to the restrictions of rules in order to experience play – and its pleasures – is a fundamental aspect of games. The restrictions of rules facilitate play, and in doing so, generate pleasure for players.
From L. S. Vygotsky:
To observe the rules of the play structure promises much greater pleasure from the game than the gratification of an immediate impulse.
Now: how to bring these principles to social websites?
October 16, 2008
I got delayed slightly in London and arrived at Hove station a mere 3 minutes before we were due to start: oh, the irony of arriving late to give a talk about delivering products on time. But fortunately things were slightly behind schedule already and no-one seemed to notice.
The talk itself went OK; it was a rerun of, and expansion upon, a half-hour piece I pulled together for Barcamp Brighton a month or so back… and the Barcamp talk itself was a follow-on from one Joh and I gave at SkillSwap last year, summarising our experiences a week into Scrum. So I was pretty familiar with the material, although I made a point of adding in lots of (sanitised and anonymised) data from real projects we’ve been running.
There’s far too much to go into here, and if you’ve been reading this site you’ll have read quite a bit of it already. Plus I have to save *something* to say to people I meet in real life. But if you attended, you might find these annotated slides handy – I’ve taken the ones we showed on the night and added in a few notes here and there which might jog your memory on the points I was making as I went through them. Beware, for as a wise man says: if the slides make sense without my being there, there’s no point in my presenting them; and if they don’t make sense without me, you won’t get much value from them. Caveat emptor!
I was given an hour to do this skit, and with questions we ended up using just shy of two hours, which is the longest I think I’ve ever spoken publicly. With that warning in mind, if you’d like to chat about this sort of thing or find out more, do give me a shout: tom dot hume at future platforms dot com