Sunday, October 14, 2012
Apart from reminding me of reports that Microsoft is planning to spend over $1.5bn on Windows 8 marketing, it also made me think of those horribly contrived YouTube videos in which some smart arse presents his Dad with a Windows 8 installation, offers no coaching or explanation, then records him struggling (examples here and here).
The purpose of such videos is to illustrate the unintuitive nature of the Windows 8 user interface, making the point, either directly or indirectly, that Microsoft has got it totally wrong.
When I first saw this stuff emerging, I remember saying to a few people how these were pure attention grabbing stunts that bore no relation to how things would pan out in the real world. The truth is that users generally aren’t sat down in front of a new installation of a new operating system that someone has set up for them and just told to get on with it – that almost never happens in either a domestic or a business context. And even if users find themselves rocking up to an unfamiliar environment cold and asking someone for help, they are not just going to be told “You’re on your own; you work it out”.
Apart from the obvious clues on new features and navigation presented during the initial setup, Microsoft and OEMs invariably provide some basic documentation in the box to get users going with each new release of Windows (e.g. ‘quick start’ guides). Beyond this bundled help, it has then always been pretty certain that for a few months leading up to and following the Windows 8 launch, we would see every self-respecting PC magazine publish extensive reviews and guides of their own, covering basic education along with tips and tricks to get the most from your new system. The one I referred to above is the first I have seen on the shelves, but you aren’t going to be able to move for this Windows 8 awareness and educational stuff pretty soon.
As I have already reported, my own experience of Windows 8 is very positive on desktop configurations driven by a mouse and a keyboard, and my teenage daughter has had a good experience of a dockable slate running the new OS.
So is Windows 8 perfect? No. Is there room for improvement on the initial release? Definitely.
But it really isn’t the disaster area that a lot of people would have you believe, so it’s important to take a lot of the more extreme criticism you come across with a pinch of salt.
Thursday, February 02, 2012
As the Wikipedia entry kindly reminded me:
“In 1998, Logic Works was acquired by Platinum Technology for $174.8 million in stock, which was in turn acquired by Computer Associates the next year.”
By then, I had moved on into the world of ERP, but a few days ago, a group of us from Freeform Dynamics had the pleasure of speaking with the guys at CA Technologies now responsible for the latest incarnation of ERwin – now known as CA ERwin.
Apart from nostalgia, I did have a good reason for wanting to catch up with my old friend, and that’s because of the recent announcement of CA ERwin support for SQL Azure, Microsoft’s cloud based RDBMS.
We hear so much about rapid development in the cloud, but all too often the implication is that it’s legitimate for traditional rigour to be sacrificed on the altar of time to deployment. Indeed, some cloud environments, and those that advocate them, almost encourage a quick and dirty approach to development, with minimal analysis and design. The truth is that this will always come back to bite for any significant system, whether in the form of poor performance, lack of flexibility, or high operational overheads.
Returning to CA ERwin, one of the traditional strengths of the tool throughout its evolution has been the ability to define a logical data model then map it onto multiple physical models. This is not a unique capability, but the ERwin line has always provided functionality that is both comprehensive and easy to use – ideal for professional analysts and designers who want to do things properly, but still want to move quickly in an unencumbered manner. It’s a philosophy that would seem to be particularly relevant to those wishing to take advantage of rapid cloud development and deployment, but without sacrificing structure and discipline.
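The principle of defining one logical model and then mapping it onto multiple physical models can be illustrated with a small sketch. This is plain Python and has nothing to do with ERwin’s internal workings; the entity, the dialect names and the type mappings are all simplified assumptions, purely to show the idea:

```python
# Sketch: one logical entity definition, multiple physical DDL targets.
# Logical types are abstract; each target dialect maps them differently.

LOGICAL_CUSTOMER = {
    "name": "Customer",
    "attributes": [("customer_id", "integer", "pk"),
                   ("full_name", "text", ""),
                   ("created", "datetime", "")],
}

# Simplified, illustrative type mappings for two physical targets.
TYPE_MAPS = {
    "sqlserver": {"integer": "INT", "text": "NVARCHAR(255)",
                  "datetime": "DATETIME2"},
    "oracle":    {"integer": "NUMBER(10)", "text": "VARCHAR2(255)",
                  "datetime": "TIMESTAMP"},
}

def to_ddl(entity, dialect):
    """Generate a CREATE TABLE statement for the given physical dialect."""
    type_map = TYPE_MAPS[dialect]
    cols = []
    for name, logical_type, flag in entity["attributes"]:
        col = f"{name} {type_map[logical_type]}"
        if flag == "pk":
            col += " PRIMARY KEY"
        cols.append(col)
    return f"CREATE TABLE {entity['name']} (" + ", ".join(cols) + ")"

print(to_ddl(LOGICAL_CUSTOMER, "sqlserver"))
print(to_ddl(LOGICAL_CUSTOMER, "oracle"))
```

The point is that the logical definition is written once; each physical schema (on-premise database, SQL Azure, or whatever) is derived from it, so analysis and design effort is not duplicated per platform.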
To existing CA ERwin users, SQL Azure becomes just another supported database environment, but an Azure specific version has been announced by CA Technologies for those wishing to adopt a more structured approach to data modelling in a pure cloud environment. In related announcements, CA Technologies has also highlighted a portal facility to allow models to be presented and visualised by different types of user, and increased collaboration capability to encourage more reuse and sharing of models and components. Going hand in hand with the latter is a new concurrent licensing model.
Standing back from these announcements, however, my overriding view is that anything encouraging and/or facilitating developers to apply structure and discipline to cloud based projects is to be welcomed. Adoption of cloud should not be an excuse for throwing analysis and design principles out of the window.
Saturday, January 14, 2012
The journalist asking for input was Rosalie Marshall at Incisive, and the article she produced can be seen here – worth a read if you want a quick summary.
The comment I provided, which is quoted in the article, was as follows:
“Relying on the public internet for core application connectivity introduces a degree of variability and uncertainty around bandwidth, speed and latency that is unacceptable to many large organisations, which are increasingly putting the emphasis on end-to-end quality of service management. Utilising dedicated links to cloud providers overcomes this and hooking up via incumbent communications service providers can also have benefits in terms of costs, monitoring, troubleshooting and support."
“While security, per se, should not be an issue when sending traffic over the public internet, provided it is appropriately encrypted, directly connecting to the cloud provider does take away a commonly perceived risk, which may make it easier to get sign off from non-technical stakeholders when making cloud-related decisions.”
These comments were based on various conversations with senior IT decision makers, along with, of course, insights from the extensive primary research we have carried out to explore the practicalities of cloud adoption. If you are interested in seeing some of this, a particularly relevant report is one that Andy put together a few months ago, entitled: “Cloud Connectivity; Carefully does it”, which can be downloaded from here.
You can check out that report at your leisure, but suffice it to say that one of Andy’s main conclusions from the research was that connecting to cloud services is a whole different ball game to enabling remote access. Just because you have the comms in place to handle the latter, doesn’t mean they will be up to dealing with the former.
Back to the AWS announcement, Andy later followed up with Amazon and arranged for the team here to speak with Robin Meehan, Chief Technology Officer at Smart421, Amazon’s launch partner for Direct Connect in the UK. Robin pretty much reiterated the points outlined above in my initial take, but we also covered some of the practicalities.
Robin highlighted the importance of a one-stop shop for the entire service end-to-end (connectivity and AWS infrastructure services), pointing out that most enterprise customers want to use a specialist to outsource these kinds of activities as they are not core business.
This makes absolute sense. Picking up on the trend towards end to end service management in the enterprise space, one of the frequent snags is how to deal with parts of the chain for which you may not have the specialist skills in house – particularly for elements that are physically outside of the datacentre. More and more, there is a need for trusted partners to whom responsibility can be delegated, and that often means working with suppliers that offer a broader scope and more coherent service.
As Robin says:
“We have deep connectivity skills and reach, as well as the application layer/IaaS skills, so when the customer says 'I can’t reach my Amazon EC2 instance', we are able to triage the problem effectively as we understand the entire architecture. For example, if it turns out to be an EC2 security group issue (aka firewall at the AWS end), we won’t blame the network.”
Of course none of this precludes Amazon customers piecing together the solution themselves, using their own expertise and general comms service providers, but as our research has highlighted, setting up the comms for business critical cloud services is not necessarily as easy as many make it out to be, particularly when more demanding applications and/or larger user bases are involved.
Anyway, the bottom line is that this recent announcement is welcome as it provides UK AWS users with choice that’s been available to US customers for a while now.
Wednesday, January 11, 2012
Unlike cloud, however, which started out largely as a re-hashing of familiar ideas around hosting, SOA, data centre automation and business service management, the whole big data movement is introducing net new capability to the business mainstream from the outset, which was confirmed in a recent Freeform Dynamics research study (122 IT pro respondents, November 2011):
That’s not to say that everything talked about in terms of big data technology is new in absolute terms, but until recently, there weren’t that many offerings in some key big data areas that you would call genuinely ‘enterprise ready’. This has been especially true in the areas of distributed indexing and search, and large scale distributed analytics, where it has often been a case of hand-crafting solutions based on a combination of open source and commercial components to get the desired result; fine if you are Yahoo!, Facebook or a big bank with lots of resource to throw at it, but not really tenable in a busy and resource-constrained mainstream IT department.
With this in mind, vendors like IBM and EMC have been playing the game of bringing open solutions together with their own proprietary technology for a while to form coherent offerings, or at least out of the box integration between the pieces required. This has been necessary because of the shortcomings of environments such as Apache Hadoop in the areas of resilience, security, management and development tooling.
In an announcement this week, however, the daddy of the high end database world, Oracle, has declared its hand. Having already been dabbling in the area of distributed indexing and search (with the Oracle NoSQL Database), it is now getting into bed with Cloudera, arguably the most established independent specialist provider in the Hadoop world.
The end result is the Oracle Big Data Appliance, a Hadoop stack underpinned by Sun/Linux servers and other platform components from Oracle, and augmented with Cloudera’s enhanced Hadoop management environment. Oracle has also announced a portfolio of what it calls ‘Big Data Connectors’, which provide ease of integration between the Hadoop Distributed File System (HDFS) or Oracle NoSQL Database, and a traditional relational database environment.
These announcements are especially interesting given Oracle’s existing strong presence in the high end data management and analytics space. The Cloudera guys are extremely capable and have been doing some good stuff, but the Hadoop distribution at the centre of their activities is strengthened by the Oracle platform pieces. Furthermore, rightly or wrongly, enterprise IT departments often prefer to work with an established incumbent when introducing new ideas and capability into the mix.
Oracle’s broader database management pedigree is also important when we consider that big data technology will, on the whole, complement rather than replace traditional database and storage capability. Indeed there are many scenarios in which it makes sense to exploit both together, e.g. with preliminary exploration and analysis on large data sets with a poor signal to noise ratio taking place in Hadoop, then a more compact and structured derived data set being extracted into a traditional warehouse or BI environment. This is one of the reasons why the connectors Oracle is providing make absolute sense.
The co-existence of big data with traditional database and storage technologies was confirmed during the aforementioned research, which shows quite clearly that with the exception of legacy systems, IT professionals anticipate growth across all of the technology categories explored:
And if you ask the question explicitly, most people confirm that they don’t anticipate big data solutions replacing traditional options in any significant way:
However, turning to hard practicality, we also see a couple of calls to action for vendors on this chart. IT professionals are not convinced that suppliers can back up all of the big data hype with tangible support and services at the moment to help customers realise the potential, and they also have concerns about licensing and commercial arrangements as data related needs become more demanding.
So, despite the technology advances, there is still some work to be done, and it will be interesting to see how Oracle deals with these issues as its big data activities continue to develop.
Sunday, October 09, 2011
Level and type of social media use within the IT industry
Firstly, as part of the introduction to how we at Freeform Dynamics use social media, I shared some snippets of research from a recent study we had conducted. The key chart here is as follows:
What this picture shows is that those working for suppliers within the tech industry are far more likely to be using social networking than the general population of IT professionals in customer/user organisations. This is something we have known for a while, and it’s why the team here at Freeform tend to think of social media primarily as a way of interacting with industry insiders (which Dean Bubley referred to as the ‘Fourth Estate’). The truth is that we have much more effective ways, not least traditional online media, of reaching the users and buyers that are the main consumers of our advisory output.
The above chart is also useful to help us keep the whole social media discussion in perspective. The data presented was gathered via a web survey so those more inclined to interact online will be over-represented because of self-selection. The 30% penetration of social media is therefore almost certainly inflated, underlining the fact that it hasn’t yet pervaded the work place.
We must also remember that social media is not just one thing. Some people use Facebook professionally, others use it purely for personal reasons or not at all. The same goes for Linkedin, Twitter, Google+, blogs and so on. Furthermore, we cannot assume that all work related use is associated with decision making. Many people simply use Linkedin or Facebook to keep up with the movement of colleagues and peers as their careers progress. So, not only is activity fragmented across media/networks, it varies significantly by type and intensity.
Social media should not replace traditional AR comms
Coming to the second point I want to pick up on, the fragmented nature of social media activity is one of the reasons why AR pros should rethink what they are doing if their use of social mechanisms means they start to cut out more traditional modes of interaction with analysts.
IT vendors and service providers communicate with analysts in a variety of ways, from email and telephone at one end, to face-to-face briefings and conferences at the other. Against the background of cost pressure on AR programmes, a few analysts I spoke with while preparing for the IIAR event expressed concern about traditional communication mechanisms potentially being replaced by social media alternatives - Facebook/Linkedin discussions, spokesperson blogs, Twitter events, and so on.
While there is nothing wrong with using social media like this, indeed some vendors already do some of this stuff, social mechanisms should be exploited in addition to rather than instead of traditional ways of interacting.
That's not to say that AR professionals can't spread their time and attention a little differently to embrace social - e.g. if a particular analyst that's important to you is clearly running their professional life on social media, then by all means use the same media to interact with them - just don't assume that such a switch will work for the analyst community as a whole, because it won’t. Assuming an analyst will respond to a Twitter direct message, or pointing them to an exec’s blog as a substitute for a proper conversation, represents a significant degradation of interaction.
The old principle of working out the preferred (or most effective) communication mechanism for each individual remains the same when looking at how social media is worked into analyst relations activity, as does that other fundamental principle of remembering the importance of the 'R' in 'AR'.
The whole debate around how and how much social media should be used by both analysts and AR professionals will no doubt continue, and it will be interesting to see if anyone feels any differently in a year’s time.
Here’s looking forward to the next IIAR debate on the topic.
Wednesday, July 06, 2011
Whenever we put the word ‘service’ into the title of an article to do with IT delivery or management, we can almost guarantee a lower than average click rate. Phrases such as ‘service management’ and ‘service assurance’ are just not grabbers.
Some of this has to do with the pervasiveness of the word ‘service’, which is used and misused in IT-speak to refer to many different things, so is often associated with industry noise. But when used in the context of IT operations, it really is important to take notice of it. As you’ll appreciate by the end of this article, all IT departments are judged on the basis of service delivery, whether they work that way explicitly or not.
But embracing the concept of services proactively when it comes to IT operations has many advantages. Here, for example, is just one of many proof-points illustrating a direct correlation between the adoption of a service-centric approach to IT delivery, and the degree to which IT activities are viewed to be aligned with business priorities.
(The full report from which this chart was extracted can be downloaded here)
If you browse www.freeformdynamics.com, you’ll find reference to this services view of the world in many of our reports. Indeed we now consider it one of our standard segmentation criteria when analysing data, as service-centric IT delivery is generally a good proxy for progressive behaviour and better performance in many areas.
So why is this?
Well some of it has to do with the services view enabling better performance as a result of encouraging an end-to-end approach to operations. Rather than focusing exclusively on monitoring and managing individual components, the idea is that you spend at least as much time and effort on ensuring that everything works together to provide something valuable and appropriate to the end-user. By ‘everything’ here, we mean all relevant parts of the IT and communications chain, including both internal and external components and resources.
As an example, a traditional IT operations approach might include looking after the resilience and uptime of an ‘email system’, and separately managing the uptime and performance of the network. Those taking a service-centric view, however, would be considering the availability and performance of the ‘email service’, as experienced by users at the point of consumption. In our simple example, this obviously needs to take both the email system and the network into account, as the service is dependent on both working acceptably.
It’s at this point that some IT people start to get a bit defensive.
The objection we often hear is that it’s just too onerous to deviate from the component or system based view. The performance of an email service, for example, is in reality dependent on a lot of things if you really pull it apart – the PC on the desk or mobile in the hand, the email client software being used, the network (or networks) that transport messages back and forth, the email server environment itself, and the storage devices underpinning it. Pick any other application or ‘service’ and it’s likely to be equally if not more complicated in terms of underlying components and dependencies.
The fear is that it is a short step from adopting an end-to-end services approach to business people starting to judge IT simply on what happens at their screen and keyboard, without taking into account how complicated things are behind the scenes. IT then gets lumbered with a whole bunch of service level commitments and/or expectations that, it is perceived, are a lot harder to manage. It will no longer be possible to make the case that you were mostly doing your job well because 99.9 per cent of the infrastructure was working fine, and that major outage was caused by a single component failure that was beyond your control.
But let’s be honest with ourselves here. Users have never really bought into that kind of defence when things have not worked as they should. They have always been pretty much exclusively concerned with what they are able to do (or not do) at the end point of the IT delivery chain. If you ask any user or stakeholder how well they think IT is performing or supporting their area of the business, the language they use is inherently service-centric.
When they talk about email, they focus on the number of times it has been down recently or has been running really slowly. Even if it has been explained to them that the issues have been caused by a comms provider not meeting their obligations, they are not really that interested. They just want you, i.e. the IT department, to make the problem go away.
And it works the other way around too. Business people might well acknowledge how well the call centre system has been running recently, but do they care when you tell them about that major switch failure and the heroic and creative efforts of the network team to re-route traffic and avoid a major outage? The chances are, they’ll probably just shrug, on the basis that it’s your job to keep things running properly, so what’s the big deal?
Adopting a more explicit service-centric approach to IT delivery means you accept these things, and once you do this, you can start to take control. You realise that there is no point in trying to define how well IT is doing in terms of how many green lights are lit within the infrastructure. However good the internal IT view looks, you’ll still ultimately be judged on the basis of what’s delivered to users. If you define and manage expectations and commitments based on this, life actually becomes easier, not harder, as you avoid all of the problems that stem from users defining what is ‘acceptable’ unilaterally, and often in a very subjective manner.
The bottom line is that IT from a business user perspective is all about service consumption whether anyone defines it formally or explicitly in this way or not, so thinking in terms of service delivery within the IT department should really be a no-brainer.
Thursday, June 30, 2011
A couple of weeks ago, I had occasion to spend some time on a 4-5 year old MacBook Pro that my daughter had been using, and immediately noticed how sluggish and clunky it felt compared to my Windows 7 notebook that has an i5 processor, loads of memory and an SSD.
So what? That’s pretty much what you’d expect, isn’t it? The hardware running the Windows machine is so much more capable, so the experience is bound to be better.
The penny then dropped on something.
I have been trying to figure out for about four years now (ever since I got the aforementioned MacBook Pro) why Mac users seem so convinced that OS X and the whole Mac experience is so much better than Windows. It’s something that has totally eluded me. Compatibility to one side, Windows and OS X have always seemed pretty much equivalent to me, and nothing any Mac user has said when trying to support their claim of superiority has ever stood up to cross-examination.
But then I realised that I am the kind of person, because of the job I do, that is pretty much always using the latest high spec machines, so when I have been comparing Windows and OS X, it’s generally been on equivalent kit.
I would imagine, however, that most people experience the Mac for the first time when moving from their aging Windows machine that has reached the end of its life - otherwise why would they be investing in something new? They therefore end up comparing an old PC running Windows XP with limited memory, a two generation old processor, and a cluttered and clogged hard disk, to a shiny new high spec Mac running a nice clean install of OS X. They then assume the difference is down to the fact that they have switched from Windows to Mac.
And, of course, having just spent a huge amount of money on a premium machine with a premium brand, they obviously need to justify their decision to themselves, their spouse and to the world in general, hence the “Mac is so much better than Windows” line.
Firing up the old MacBook Pro and noting the (relatively) poor experience it delivered compared to my current Windows notebook made me think of the above explanation. Apologies if this is obvious to a lot of people, and sorry if you genuinely believe that OS X is better, but at least it’s a mystery solved as far as I am concerned.
Having said this, I am still interested in hearing further justifications for claims made of Mac superiority from a user experience and productivity point of view. A few months ago I spent two months using one of the latest i7 MacBook Pros (again with an SSD and loads of RAM) as my main business machine, and while I thought the hardware was great, and I became pretty comfortable with OS X, I still couldn’t see what all the fuss was about; and life was still easier and my productivity better when I returned to Windows.
Anyway, feel free to ping me with your thoughts, or flame me if you are that way inclined :-)
Thursday, June 16, 2011
When I finished spewing coffee all over my monitor, I had a think about why my instincts were telling me this was a bit silly. I then went back to the person concerned and asked them to think about what would happen if some evil wizard came along and with a wave of his wand made everything enabled or delivered by one of the older companies disappear instantly.
The obvious example was IBM. Wave that wand in Big Blue’s direction and immediately our entire financial services infrastructure, telecom infrastructure and a lot of our utilities would collapse. Most large organisations and many public services would also be severely crippled.
Now try the same trick with Facebook – yes people would miss it, but the world would go on, and some might even argue that it would be a better place. Same with Twitter and a lot of the other internet based companies. I have to admit that I hesitated over Google, but when it really gets down to it, while it would hurt to lose internet search, and the immediate access to the information it represents, I am not sure it would bring the planet crashing to a halt in the same way.
And anyway, when you consider that it was the R&D investment of entities like IBM over the previous decades that enabled a lot of what internet companies, and the rest of us for that matter, now take for granted, it puts things into perspective.
From early calculating machines, through DRAM, RISC processors, magnetic disk drives, the relational database, and the PC, right up to the Watson supercomputer that recently won America’s Jeopardy! game show against the best human contestants, IBM has consistently been, and continues to be, one of the most prolific sources of world-changing innovation on the planet.
So we at Freeform Dynamics would like to say happy birthday IBM, as it turns a century old today. Here’s to the next 100 years of innovation.
Saturday, June 11, 2011
This issue is important. If you are a vested interest looking to drive subscriptions in the SMB space, you need representation by the suppliers that smaller businesses turn to for advice and solutions. If you are a customer trying to make sense of whether, where and how this cloud stuff can benefit you, then you need your trusted (often local) supplier to guide and support you.
Some reading this might disagree, and make the argument that the cloud renders the channel redundant. Well “good luck with that”, as my teenage kids would say. So far, no one that I am aware of has found a way of selling cloud services around the channel in any volume to SMBs without an army of out-bound telesales or field sales people of their own. It’s hard to achieve serious scale that way, and without scale, the cloud model doesn’t work well economically.
One of the reasons for the lukewarm reception of cloud within the channel is because the switch from a product-based business model to a services/annuity-based one is not trivial. Accounting practices, remuneration mechanisms and the way propositions are sold all change considerably, which is a lot of upheaval. Hard enough if the demand and margins are there, but I am hearing that neither of these is at the moment, so lack of major movement is unsurprising.
Against this background, it really doesn’t help when vested interests preach at the channel and effectively accuse them of being backward and protectionist. Most players within the channel are SMBs themselves, and it is unrealistic to expect them to divert significant resources to help the big guys create a market when the returns are far from obvious. And let’s be clear, switching models is not something you can play at – unless you approach it seriously and commit, you will not be successful.
This whole thing makes me wonder whether service providers have collectively fallen into the huge trap of assuming that the cloud delivery model would lead to general ‘disintermediation’. This is a clever sounding term popularised in the dot-com era that basically refers to the principle of cutting out the middle man by establishing a direct electronic relationship with the customer. It was a flawed and misguided notion then that everything would shift to a direct model, and it remains so now, but the relative lack of thought being given to what’s in it for the channel is consistent with this way of thinking.
So too is a lot of the cloud pricing we see. At one end we have unnecessarily low and highly publicised prices set by direct service provider sales activity around email, content management and other horizontal application and communication services. These often leave nothing in the equation to cover the cost of indirect marketing, sales, account management and support activity, and close the door to mark up because the PR machine has proudly declared to the world how little customers should expect to pay.
At the other end of the spectrum, we have highly priced services that make a mockery of the frequently heard claim that cloud options are cheaper than the on-premise or co-location equivalent. When the canny SMB customer does the sums on lifetime TCO versus cumulative cloud subscription fees, it’s no wonder many still stick with traditional delivery options. Sure, there are lots of reasons other than cost to look at cloud computing, but it becomes a hard-sell when you have to work around raised expectations on savings that cannot possibly be met.
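The sums the canny SMB customer does here are easy to reproduce. A sketch of the shape of the comparison, with all figures invented purely for illustration:

```python
# Compare lifetime cost of an on-premise system against cumulative
# cloud subscription fees over the same period. All figures invented.
def on_premise_tco(purchase, annual_running, years):
    """Upfront purchase plus running costs over the system's lifetime."""
    return purchase + annual_running * years

def cloud_total(monthly_fee, users, years):
    """Cumulative per-user subscription fees over the same period."""
    return monthly_fee * users * 12 * years

years = 5
tco = on_premise_tco(purchase=10_000, annual_running=2_000, years=years)
cloud = cloud_total(monthly_fee=25, users=20, years=years)

print(f"On-premise over {years} years: {tco}")   # 20000
print(f"Cloud over {years} years:      {cloud}")  # 30000
```

With numbers like these, the subscription model costs more over the lifetime of the system, which is exactly why a blanket ‘cloud is cheaper’ pitch falls apart under scrutiny; the real case for cloud usually rests on flexibility, not savings.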
Put these price related challenges together with limited demand, small margins, deferred profit and cash-flow disruption (if you divert existing product sales to cloud) and that’s a pretty big ask of channel partners, especially the smaller ones that serve the local and regional needs of the mainstream SMB market.
The reality is that while traditional product pricing generally reflects the costs involved and the need for profit across distribution tiers, cloud related pricing and channel discounts today often don’t. It doesn’t matter whether this is because of land grab attempts by providers, the idealistic notion of cheap, direct consumption models defining the future of the market, or the mistaken view that customers are so ‘excited’ about cloud that they will pay through the nose for it. All this is secondary to the fact that what’s in it for the channel is often very unclear.
With this in mind, I don’t blame those resellers who are cautiously biding their time. The commercial models around indirect cloud based delivery are currently a mess, and the onus is on the provider community to get its act together, not on the channel to change its attitude.
Sunday, May 22, 2011
Some believe such conflict is a consequence of the traditional ‘waterfall’ approach to software development, in which the process moves sequentially through requirements gathering, business analysis, systems design, coding and testing. Just as in a relay race, where a team cannot go back once the baton has been passed without destroying its performance, revisiting earlier phases in a waterfall development can be costly and disruptive.
So you plan and document everything, strictly manage activity, and enforce rigorous change control along the way. The unwritten mantra is that change is the enemy and should be challenged hard wherever it is requested.
The snag, of course, is that things often do change legitimately over the course of a project, especially if the elapsed time is measured in months or years. This applies to requirements, constraints, technology, surrounding systems, and so on. Furthermore, once you show a stakeholder or user something running, it often sparks new ideas and requirements that they hadn’t previously thought of. If the first time they see working software is in the lead up to ‘go live’, with all the remaining time allocated to testing and remediation, this is far from ideal.
Those challenging the traditional waterfall approach assert that it should not be necessary to design, build and test everything before delivering anything. This in turn leads to the notion of ‘agile development’, characterised by rapid and frequent delivery based on a more collaborative approach. Advocates claim that this reduces time to value, copes better with change, and increases the chances of business needs being met.
In 2001, a number of these advocates came together to define the ‘Agile Manifesto’, within which it was stated that the group valued:
Individuals and interactions over processes and tools
Working software over comprehensive documentation
Customer collaboration over contract negotiation
Responding to change over following a plan
Many seasoned IT professionals might consider a project set up on this basis an accident waiting to happen. The group, however, was careful to point out that “while there is value in the items on the right, we value the items on the left more”. The manifesto should therefore not be interpreted as advocating an ill-disciplined ‘make it all up as you go along’ approach.
In practice, the term ‘agile development’ actually refers to a collection of well-defined methodologies such as DSDM, Scrum, Adaptive Software Development, Extreme Programming, and others. Agile methods are grouped together because they are all based on a more incremental and iterative approach to designing and building software. In an agile development project, the overall objectives are still well-defined, but the way in which they will ultimately be met is deliberately kept fluid in case something changes over time. Work is conducted in discrete units of activity, each leading to the delivery of a set of fully working and tested features and functionality that can be reviewed, accepted and actually used by the business.
In contrast to traditional development organisations that group specialists such as analysts, architects, programmers and testers into separate functional units, agile development teams are generally small (fewer than 10 members), multi-functional, and self-contained. The principle of ‘self-organisation’ with flat team structures and continuous communication is an important ingredient in the mix. The idea is that in a close-knit group with a discrete common goal (i.e. meeting the objectives of the next software release), people will naturally figure out who needs to do what between themselves and then collaborate to achieve the result.
Those who have been around the block a few times might be sceptical of this romantic notion. For many (if not most) of the developers working in mainstream IT departments, the work they do is a job, not a vocation. As in any profession, you have a spectrum of capability and attitude, with highly talented, motivated and naturally collaborative people at one end of the scale, and ‘nine to fivers’ with mediocre skills and an uncooperative mind-set at the other. While agile advocates claim the approach brings out the best in people, the success rate will be heavily influenced in reality by the make-up of teams and the environment in which they operate.
We must also be clear that adoption of the agile approach is not a licence to dispense with core skills and disciplines. Project management, coding standards, code documentation, configuration management and comprehensive testing are all still important. So too are horizontal functions that cut across software development projects such as business analysis, data modelling, technical architecture definition, and overall IT governance.
The reality is that agile methods can be useful for handling small to medium scope development projects (or smaller discrete parts of larger projects), where requirements are particularly dynamic or hard to pin down, and the right mix of people can be brought together. But before jumping to agile, it is important to recognise that many of the problems that arise during waterfall projects are not actually to do with the methodology per se, but a lack of discipline, control and effective communication. It may therefore be better to focus on fixing this first.
Considering agile development in the broader context, it can be a useful complement to traditional methods and a potential way of working around stakeholder reluctance to fund long-running monolithic developments. It is not, however, a magic bullet to neutralise all development woes.
(Article originally written for Computing Magazine)