To Release or Not to Release? The Risks of Launching Software Too Early

The most-hyped video game of the holiday season should’ve been an unquestioned financial success, but the company’s financial objectives forced the premature release of an under-tested, glitch-laden final product. What happens when quality is ignored to ensure an optimal release?

Photo courtesy of Zehta (Creative Commons)

Whether you have children who play or are a gamer yourself, odds are you’re aware of the Assassin’s Creed video game series, French company Ubisoft’s flagship franchise.  You might’ve even heard of its latest installment, Assassin’s Creed: Unity.  It hit shelves just last month, and although it had all the makings of a holiday blockbuster—a massive marketing campaign, spectacular new features, gorgeous graphics, and a beautifully realized, living-and-breathing recreation of 18th-century Paris to play in—its reception has been less than enthusiastic. 

Scratch that, for a game that spent four years and what’s rumored to be over $100 million in development, it’s been an utter disaster.

So what went wrong?  In short, for all the hype the heralded game promised, the actual gameplay was laden with glitches and frame-rate drops that severely undermined the game’s overall playability.  For all its technical prowess, the buggy final product just wasn’t that fun to play.  Many reviews echoed a similar theme: it was pushed out too soon.

And, unfortunately for Ubisoft, that was only the tip of the iceberg. 

Due to a contractual obligation, reviewers—who received copies in advance—were unable to publish reviews until 12 hours after the game had been released, preventing some consumers from reading reviews before buying.  It was a rare and controversial move, and one that suggested the company might have had prior knowledge of the glitches the game shipped with.  As soon as the reviews started coming in, Ubisoft had a whale of a problem on its hands.  The company was left scrambling to mend the defects with patches (something it’s still working on) while dealing with a stock price that plummeted 9.12 percent within a day of the game’s launch.

So what happened?  How does a reputable company with a history of delivering well-received, high-quality products suddenly drop the ball?  For starters, we can look at when the game was released—just in time for the holiday season.  While it’s plausible that Ubisoft’s software testing/QA team merely failed to discover a wide swath of defects (Unity was the franchise’s first release on next-generation consoles), it’s much more likely that the company was aware of the glitches (especially considering the move to delay reviews of the game) and simply chose to release it anyway.

But why on Earth would a company make such a potentially incendiary decision?  It’s simple: the business had financial goals that could only be met during the sales rush of the holiday season.  The last game in the series, which released just before the holidays last year, sold over 10 million units in just two months—a figure undoubtedly aided by hordes of holiday shoppers.  Had Ubisoft delayed Assassin’s Creed: Unity, they almost certainly could have corrected most of the defects, but at the cost of missing the lucrative holiday period to recoup their investment.  To be fair, the company has not admitted that this was indeed the case, but the hypothesis is far from unique.

Ubisoft’s story speaks to a challenge that almost all companies face: how do you ensure that your product comes out on schedule with the expected quality?  For the vast majority of organizations out there, release dates are set based purely on a project schedule—once the expected release date is chosen, teams work their way back from it, using unreliable anecdotal data or arbitrary guesses to allocate time to each step of the process.  Unfortunately, without quantifiable exit criteria in testing, this method is nothing more than an educated guess—one that will almost always lead to one of two outcomes: project delays or poor-quality releases.

In the end, it all comes back to knowing how much you need to test.  When you can accurately analyze how many defects you expect to find, your project schedule becomes significantly more predictable.  Instead of facing tough dilemmas on delaying to ensure quality or releasing a low-quality product at the optimal time, you’ll be the envy of businesses everywhere: releasing high-quality output exactly when you expected. 
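To make the idea of “knowing how much you need to test” concrete, here is a minimal, purely illustrative sketch of one common approach: if the number of defects found per test week is declining at a roughly steady rate, you can extrapolate the tail of that curve to estimate how many defects remain undiscovered, and use that estimate as a quantifiable exit criterion.  The numbers, the function name, and the geometric-decay assumption are all hypothetical and are not drawn from the Ubisoft story or from any Lighthouse engagement.

```python
# Hypothetical sketch: estimating remaining latent defects from weekly
# defect-discovery counts, assuming each week's yield is roughly a
# constant fraction of the previous week's (geometric decay).
# All numbers are illustrative, not real project data.

def estimate_defects(weekly_counts):
    """Return (defects found so far, estimated defects still latent)."""
    found = sum(weekly_counts)
    # Average week-over-week decay ratio of the discovery rate.
    ratios = [b / a for a, b in zip(weekly_counts, weekly_counts[1:]) if a > 0]
    r = sum(ratios) / len(ratios)
    if r >= 1.0:
        # Discovery isn't slowing down yet -- too early to extrapolate.
        return found, None
    # Sum the geometric tail: last*r + last*r^2 + ... = last*r/(1-r)
    remaining = weekly_counts[-1] * r / (1.0 - r)
    return found, remaining

counts = [120, 90, 68, 51, 38]  # defects found per test week (illustrative)
found, remaining = estimate_defects(counts)
print(f"Found so far: {found}, estimated still latent: ~{remaining:.0f}")
```

A team using a model like this might set an exit criterion such as “release only when the estimated latent-defect count drops below an agreed threshold,” turning the release decision into a measurement rather than a guess.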

In a lot of ways, this is reminiscent of a situation Lighthouse experienced while doing test consultation for a client’s quarterly releases.  Despite warnings that their code wasn’t ready to go live, the client pushed their first two releases out, and both suffered quality issues.  After the second release, Lighthouse quantified the high cost of fixing those defects after the release went live, showing that delaying the next release to test it properly would cost significantly less than fixing defects afterward.  The client heeded that advice and delayed their next release—to an overwhelmingly positive response.  In fact, they would later call it the best release they’d ever had. 

So what’s the lesson in all of this?  What does this say about our expectations for software quality?  Perhaps what the situation best underscores is the public’s demand for exceedingly high-quality software.  It’s no longer enough to innovate; your product has to be near flawless to ensure a universally positive reaction.  Your users have no time for a frustrating experience, and your competitors are lying in wait for a chance to capture their loyalty. 

The bottom line here is that when testing is ignored for the sake of the supposed “greater good”, companies are risking a lot more than a few frustrated users.  In this day and age, testing shouldn’t be seen as an impediment to the fulfillment of business objectives; high quality should be seen as a business objective in and of itself. 

Sure, no one likes a delay—not your users, not your managers, and most certainly not your executives—but, with so much riding on your software, isn’t it worth it? 

Ask Ubisoft; they can tell you all about it.

Cheers,

Mike Hodge
Lighthouse Technologies, Inc.
Software Testing | Quality Assurance Consulting | Oracle EBS Consulting
