Welcome Session
09:00 am
Introduction & ice-breaking activity

Session 1: Keynote
09:30 am
Keynote by Greg Grossmeier (Release Manager at the Wikimedia Foundation)

10:30 am
Coffee Break

Session 2: Integration & Release Processes
11:00 am
Analysis of Marketed versus Non-marketed Mobile App Releases by Maleknaz Nayebi, Homayoon Farrahi, and Guenther Ruhe (University of Calgary) [research talk]
Abstract:
Market and user characteristics of mobile apps make their release management different from that of proprietary software products and web services. Despite the wealth of information regarding users' feedback on an app, an in-depth analysis of app releases is difficult due to the inconsistency and uncertainty of the information. To better understand and potentially improve app release processes, we analyzed major, minor, and patch releases for apps following semantic versioning. In particular, we were interested in finding out the differences between marketed and non-marketed releases. Our results show that, in general, major, minor, and patch releases differ significantly in release cycle duration, nature, and change velocity. We also observed significant differences between marketed and non-marketed mobile app releases in terms of cycle duration, the nature and extent of changes, and the number of opened and closed issues.
Adopting Continuous Delivery in AAA Console Games by Jafar Soltani (Microsoft) [practitioner talk]
Abstract:
Games are traditionally developed as a boxed product. There is a development phase, followed by a bug-fixing phase. Once the level of quality is acceptable, the game is released and the development team moves on to a new project. They rarely need to maintain the product or release updates after the first few months.
Games are architected as monolithic applications, developed in C++. The game package contains the executable and all the art content, which makes up most of the package.
During the development phase, the level of quality is generally low and the game crashes a lot. Developers mainly care about implementing their own features and do not think too much about the stability and quality of the game as a whole. Developers spend very little time writing automated tests and rely on manual testers to verify features. It's common practice to develop features on feature branches. The perceived benefit is that developers are productive because they can submit their work to feature branches. All the features come together in the bug-fixing phase, when the different parts are integrated. At this stage, many things are broken. This is a clear example of local optimisation, as a feature submitted on a feature branch does not provide any value until it's integrated with the rest of the game and can be released. The number of bugs can reach several thousand. Everyone crunches whilst getting the game to an acceptable level of quality.
At Rare, we decided to change our approach and adopt Continuous Delivery. The main advantages compared to the traditional approach are:
- Sustainably delivering new features that are useful to players over a long period of time.
- Minimising crunch and having happier, more productive developers.
- Applying a hypothesis-driven development mindset and getting rapid feedback on whether a feature is achieving the intended outcome. This allows us to listen to user feedback and deliver a better-quality game that's more fun and enjoyable for players.
System for Meta-data Analysis using Prediction-based Constraints for Detecting Inconsistencies in Release Process with Auto-Correction by Anant Bhushan and Pradeep R Revankar (Adobe) [research talk]
Abstract:
The software product release build process usually involves posting many artifacts that are shipped or used as part of Quality Assurance or Quality Engineering. All the artifacts that are shared or posted together constitute a successful build that can be shipped. Occasionally, a few of the artifacts fail to be posted to the shared location and need immediate attention, requiring manual intervention to repost them.
A system and process are proposed for analyzing metadata generated by an automated build process to detect inconsistencies in the generation of build artifacts. The system analyzes data retrieved from metadata streams: once the start of an expected metadata stream is detected, the system generates a list of artifacts that the build is expected to produce, based on the prediction model. Information attributes of the metadata stream are used to decide on the anticipated behavior of the build. Events are generated based on whether the build data is consistent with the predictions made by the model. The system can enable error detection and recovery in an automated build process, and it can adapt to a changing build environment by analyzing the data stream for historically relevant data properties.
Discussion
12:30 pm
Lunch

Session 3: Build & Release Tooling
02:00 pm
The SpudFarm: Converting Test Environments from Pets into Cattle by Benjamin A. Lau (TIBCO Software) [research talk]
Abstract:
About a year ago, I was trying to improve our automated deployment and testing processes, but found that reliable access to a functioning environment just wasn't possible. At the time, our test environments were pets. Each was built partially by script and then finished by hand, with a great expenditure of time and effort, and much frustration for everyone involved. After some period of use, which varied depending on what you tested on the environment, it would break again, and you'd have to make a decision, frequently the wrong one, about whether to start fresh (which could take up to a week) or to debug the environment instead (which could take even longer, and often did).
Here's how we went about automating the creation and management of our test environments to increase developer productivity, reduce costs, and expand our ability to experiment with infrastructure configuration at reduced risk.
Escaping AutoHell: A Vision For Automated Analysis and Migration of Autotools Build Systems by Jafar Al-Kofahi, Tien Nguyen, and Christian Kästner (Iowa State University, Carnegie Mellon University) [research talk]
Abstract:
GNU Autotools is a widely used build tool in the open source community. As open source projects grow more complex, maintaining their build systems becomes more challenging due to the lack of tool support. We propose a platform to mitigate this problem and aid developers by providing the infrastructure to build support tools for GNU Autotools build systems. The platform provides an abstract approximation of the build system for use in different analysis techniques.
Building a Deploy System that Works at 40000 feet by Kat Drobnjakovic (Shopify) [practitioner talk]
Abstract:
Shopify is one of the largest Rails apps in the world, yet it remains massively scalable and reliable. The platform manages the large spikes in traffic that accompany events such as new product releases, holiday shopping seasons, and flash sales, and it has been benchmarked to process over 25,000 requests per second, all while powering more than 275,000 businesses. Even at such a large scale, all our developers still get to push to master and deploy Shopify in 3 minutes. My talk will break down everything that can happen when deploying Shopify, or any other really big application.
GitWaterFlow: A Successful Branching Model and Tooling for Achieving Continuous Delivery with Multiple Version Branches by Rayene Ben Rayana, Silvain Killian, Nicolas Trangez, and Arnaud Calmettes (Scality) [practitioner talk]
Abstract:
Collaborative software development presents organizations with a near-constant flow of day-to-day challenges, and there is no available off-the-shelf solution that covers all needs. This paper provides insight into the hurdles that Scality's Engineering team faced in developing and extending a sophisticated storage solution, while coping with ever-growing development teams, challenging - and regularly shifting - business requirements, and non-trivial new feature development.
The authors present a novel combination of a Git-based Version Control and Branching model with a set of innovative tools dubbed GitWaterFlow to cope with the issues encountered, including the need to both support old product versions (in some cases, going back years) and to provide time-critical delivery of bug fixes.
In the spirit of Continuous Delivery (a methodology that is partially incompatible with customer requirements regarding internal validation), Scality Release Engineering aims to ensure high quality and stability, to present short and predictable release cycles, and to minimize development disruption. The team's experience with the GitWaterFlow model suggests that the approach has been effective in meeting these goals in the given setting, with room for continued fine-tuning and improvement of processes and tools.
Discussion
03:30 pm
Coffee Break

Session 5: Lightning Talks & Poster Session
04:00 pm
Get Out of Git Hell by David Lippa (Amazon.com)
Your build data is precious, don't waste it! Leverage it to deliver great releases by Rishika Karira and Vinay Awasthi (Adobe)
A Model Driven Method to Deploy Auto-scaling Configuration for Cloud Services by Hanieh Alipour and Yan Liu (Concordia University)

Session 6: Break-out Discussion Groups
04:45 pm
Break-out discussion groups on topics raised during the workshop, or topics that participants want to discuss with the community.

Wrap-Up Session
05:30 pm
Closing discussions