Agile2017 has ended


Testing & Quality
Monday, August 7
 

10:45am

How Machine Learning Will Affect Agile Testing (Paul Merrill)


Abstract:
Machine Learning is all the rage. Companies like Google, Amazon, and Microsoft are investing extreme sums of money into their ML budgets. But what is it, and more importantly, how will it affect me, as an Agile tester? As a Scrummaster? As a developer on an Agile Team?
Last year, I was at a testing conference where a group of 5 executives decreed adamantly that ML would replace testers within the next few years. Anytime 5 executives agree on anything I question it! So I wanted to learn if they were right. Over the last few months, I’ve researched and learned about ML. I’ve talked with industry experts in the field and testers with expertise in ML. I wanted to know what they had to say about this decree. I wanted to know for myself, "is testing in danger of being automated by ML?"
Join me to learn what Machine Learning is and how it is affecting the software we build, the products we use, and our ability to test our applications. Learn what I’ve found in my research, get an introduction to ML, and decide for yourself if the future of testing will be in the hands of ML algorithms.

Learning Outcomes:
  • Gain knowledge of what experts in ML are saying about how it will affect Agile Testing
  • Take home an introductory understanding of ML
  • Acquire enough knowledge to decide for yourself if the future of agile testing will be in the hands of ML algorithms


Speakers

Paul Merrill

Principal Software Engineer in Test, Beaufort Fairmont
Paul Merrill is Principal Software Engineer in Test and Founder of Beaufort Fairmont Automated Testing Services. Nearly two decades into his career spanning roles such as software engineer, tester, manager, consultant and project manager, his views on testing are unique. Paul works...


Monday August 7, 2017 10:45am - 12:00pm
F4

2:00pm

7 Sources of Waste in Automated Testing and How To Avoid Them (Jonathan Rasmusson)


Abstract:
Thousands of hours are wasted every year maintaining poorly written suites of automated tests. Not only do these tests slow teams down; they sap morale and are a huge time sink. By learning what these seven wastes are, teams can avoid much of the dysfunction and waste that comes with most early automation efforts, and instead get to adding value faster by applying a few simple techniques.

Learning Outcomes:
  • How to get your team on the same page when it comes to automated testing
  • How to get testers and developers seeing each other's points-of-view when it comes to writing automated tests
  • How to establish the necessary baseline, culture, language, and rules of thumb around where and when to write different kinds of automated tests
  • How to avoid much of the waste and dysfunction that comes with early automation testing efforts


Speakers

Jonathan Rasmusson

Engineer, Spotify
Agile, testing, programming, automation, culture. Author of: The Agile Samurai and The Way of the Web Tester


Monday August 7, 2017 2:00pm - 3:15pm
I3

2:00pm

Evolving Your Testing Strategy: Mapping Now, Next, and Later (David Laribee)


Abstract:
Pyramids? Quadrants? Cupcakes?! There are a wide array of models that describe approaches to test automation strategy in terms of best practice or anti-pattern. While these models are useful for visually communicating how your team currently manages (or should manage) software quality, no single model represents a complete strategy in and of itself.
In this talk, we’ll begin by framing the universe of Agile testing models: models that range from technical to product to cultural mindsets. I’ll add detail and nuance to each of these models in the form of professional experience, challenges with introduction, and case study. We'll look at the strengths and weaknesses of each model in terms of the constraints it adopts (and ignores). We'll also learn about the social costs of incorporating or abandoning each approach.
With a new lens, focused on testing strategy as an act of curation, I'll share an approach to mapping, evolving, and iterating a testing strategy appropriate for your product development team's specific context.

Learning Outcomes:
  • Understand common test strategy models in terms of their constraints, strengths, and weaknesses.
  • Learn how each may create confrontation or limitations in certain organizational contexts with well-defined tester/developer roles.
  • Combine constraints and classic models to describe your current testing strategy visually with a map.
  • Create a model that describes a desired future state of testing strategy.
  • Identify decisions--and their inherent challenges--necessary to change strategy for a large organization with a complex system.


Speakers

David Laribee

Principal, Nerd/Noir
David Laribee is a product development coach with deep roots in Lean, Agile, XP and Scrum. He believes in the power of collaboration, simplicity and feedback. Over the last 20 years, David has built teams and products for companies at every scale. He’s founded startups and consulted...



Monday August 7, 2017 2:00pm - 3:15pm
H1

3:45pm

Pairing: The Secret Sauce of Agile Testing (Jess Lancaster)


Abstract:
Finding time to learn test techniques, mentor other testers, grow application knowledge, and cross-train your team members is a daunting task with a complicated recipe. What if you could do these things while testing and finding bugs? Enter Pair Testing. What’s that? Two people testing together: one operates the keyboard to exercise the software while the other suggests, analyzes, and notates the testing outcomes. And it’s the secret sauce of agile testing because it makes your routine, bland testing so much more fun and productive!
Testers on Jess Lancaster’s team use pair testing not only to make better software but also to foster better team relationships along the way. Jess explores why pairing works, how to run an effective pairing session, how to pair with others on the team (such as project managers, designers, and developers), and just how easy it is to get started with pairing.
Armed with Jess’ easy-to-use Pair Testing recipe card, plan your first pairing encounter so you are ready to roll when you get back to the office. This sounds easy enough, but you know there will be mistakes when you try it. Jess has you covered there, too: learn his team’s pairing mistakes and the things they did to improve their pairing sessions.

Learning Outcomes:
  • Why pairing works, and reasons why you as a tester or agile team member need to be pairing
  • How to get started with pairing using a step-by-step process that leads to successful sessions
  • Learn my team’s pair testing mistakes and what we did to improve so you don't make the same mistakes
  • Pairing with other team members in differing roles, in addition to different ideas for pairing, such as test design, user stories, and bug reports
  • How to use the pairing recipe for making this secret sauce back at your workplace
  • Hands-on exercise with planning pairing sessions so that you can take it back to the office and pair with a co-worker!


Speakers

Jess Lancaster

QA Practice Manager, TechSmith
Jess Lancaster is the QA practice manager at TechSmith, the makers of Snagit, Camtasia, and other visual communication software applications. He coaches and equips testers with the skills to be quality champions on agile teams. With more than twenty years of information systems and...


Monday August 7, 2017 3:45pm - 5:00pm
F4
 
Tuesday, August 8
 

2:00pm

Acceptance Criteria for Data-Focused User Stories (Lynn Winterboer)


Abstract:
Learn how to write acceptance criteria for DW/BI user stories that align your team to deliver valuable results to your project stakeholders.
Writing user stories for business intelligence projects already feels to many product owners like pushing a large rock up a big hill ... and needing to add solid acceptance criteria to each story feels a bit like the big hill has a false summit: once you get to the top (user story written), you discover there's a small flat spot and then the hill continues up further, requiring additional detail in the form of acceptance criteria. As one BI PO recently put it, "I write the user story and feel like I've made excellent progress; then the team is all over me with 'That's great, but what are the acceptance criteria?', forcing me to yet again go deep. If I had a better understanding of 'sufficient' acceptance criteria, I would have shared it with my team and stopped the beatings!"

Learning Outcomes:
  • How do acceptance criteria differ from the team's definition of "done"?
  • How detailed should acceptance criteria be?
  • What is included in acceptance criteria?
  • What is an example of acceptance criteria for a BI user story?
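One common style of acceptance criterion for a DW/BI story is a reconciliation check: every row in the source extract made it into the warehouse table. The sketch below is a hypothetical Python illustration using an in-memory SQLite database; the table names (`stg_orders`, `dw_orders`) and data are invented for the example, not taken from the session.

```python
import sqlite3

def reconcile_row_counts(conn, source_table, target_table):
    """Acceptance check: the load moved every source row into the target.
    Table names are trusted constants here (illustration only)."""
    cur = conn.cursor()
    src = cur.execute(f"SELECT COUNT(*) FROM {source_table}").fetchone()[0]
    tgt = cur.execute(f"SELECT COUNT(*) FROM {target_table}").fetchone()[0]
    return src == tgt

# Invented staging/warehouse tables standing in for a real DW load.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE stg_orders (id INTEGER, amount REAL);
    CREATE TABLE dw_orders  (id INTEGER, amount REAL);
    INSERT INTO stg_orders VALUES (1, 10.0), (2, 20.0);
    INSERT INTO dw_orders  VALUES (1, 10.0), (2, 20.0);
""")
print(reconcile_row_counts(conn, "stg_orders", "dw_orders"))  # True
```

A criterion written this concretely ("source and target row counts match after the nightly load") is usually "sufficient" in the sense the quoted PO was after: the team can turn it into a test without coming back for more detail.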


Speakers

Lynn Winterboer

Agile Analytics Educator & Coach, Winterboer Agile Analytics
I teach and coach Analytics and Business Intelligence teams on how to effectively apply agile principles and practices to their work. I also enjoy practicing what I teach by participating as an active agile team member for clients. My career has focused on Agile and data/BI, serving...



Tuesday August 8, 2017 2:00pm - 3:15pm
Wekiwa 6

3:45pm

Writing better BDD scenarios (Seb Rose, Gaspar Nagy)


Abstract:
Behaviour Driven Development is an agile development technique that improves collaboration between technical and non-technical members of the team by exploring the problem using examples. These examples then get turned into executable specifications, often called ‘scenarios’. The scenarios should be easy to read by all team members, but writing them expressively is harder than it looks!
In this 75-minute workshop you will learn how to write expressive BDD scenarios. We’ll start by giving you a very brief introduction to BDD/ATDD. You’ll then be introduced to different writing styles by reviewing pre-prepared scenarios. Finally, you’ll get a chance to write your own scenarios based on examples that we’ll bring along.
We’ll be using Gherkin, the syntax used by Cucumber and SpecFlow, but you won’t need a computer. And you'll leave with a checklist of tips that you can use the next time you sit down to write a scenario.

Learning Outcomes:
  • Identify common Gherkin pitfalls
  • Write compact, readable living documentation
  • Enumerate 5 tips/hints for writing good scenarios
  • Explain the difference between essential and incidental details
  • Describe how precise, concrete examples illustrate concise, abstract rules/requirements/acceptance criteria
  • Use the Test Automation Pyramid and Iceberg to convince colleagues to mention the UI less in scenarios
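One way to make the "essential vs incidental details" distinction concrete is a before/after pair. The Gherkin below is invented for illustration (not taken from the workshop materials); both scenarios express the same rule, but the second keeps only the essential details:

```gherkin
# Incidental detail: UI mechanics bury the rule being illustrated
Scenario: Withdraw cash (UI-heavy)
  Given I open the browser and navigate to "/login"
  And I type "jane@example.com" into the "email" field
  And I click the "Log in" button
  And I click "Accounts", then "Checking", then "Withdraw"
  When I enter "100" and press "Confirm"
  Then I should see "Balance: $150"

# Essential detail only: the business rule is visible at a glance
Scenario: Withdraw cash (declarative)
  Given Jane's checking account has a balance of $250
  When Jane withdraws $100
  Then her balance should be $150
```

The declarative form survives UI redesigns untouched, which is one reason to mention the UI less in scenarios.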


Speakers

Gaspar Nagy

coach, trainer and BDD addict, Spec Solutions
I am the creator and main contributor of SpecFlow, regular conference speaker, blogger (http://gasparnagy.com), editor of the BDD Addict monthly newsletter (http://bddaddict.com), and co-author of the book "BDD Books: Discovery - Explore behaviour using Examples" (http://bddbooks.com...

Seb Rose

Director, Cucumber Limited
I have been involved in the full development lifecycle with experience that ranges from Architecture to Support, from BASIC to Ruby. I'm a partner in Cucumber Limited, who help teams adopt and refine their agile practices, with a particular focus on collaboration, BDD and automated...


Tuesday August 8, 2017 3:45pm - 5:00pm
F3
 
Wednesday, August 9
 

10:45am

Building Agility into regulated mobile software testing projects (JeanAnn Harrison)


Abstract:
Working on a regulated product requires meeting certain goals to satisfy auditors while balancing test coverage to release a high-quality mobile software product. Testing mobile apps can be a complex task, and adding the goal of meeting regulations can be overwhelming.
Team members must work together to blend meeting regulations with understanding user-experience tests, based on priorities and severity levels, to allow for iterative sprints. Testers and developers need to communicate the inter-dependencies and include prioritized user stories based on severity levels, which will help achieve a high level of test coverage and avoid high risks.
How a tester works with their project team will be key to achieving agility in these software projects. JeanAnn will present techniques to inspire project teams to develop what will work best for their company culture.
This session will cover:
1. Mobile App project teams must establish risk management and actionable mitigation tasks prior to each project release. Teams work to establish priorities and severities based on User stories. 5 min
2. Testers work with project team members to help develop test ideas based on the user stories and assigning those stories considering severity and priority. 5 min
3. Group exercise: Create test ideas of a mobile app based on a provided user story for a medical device. Think about severity and priority for users, for project team, for regulated auditors. 15 min
4. Testers & developers are tasked to build quality not only in the mobile app itself but also the inter-dependencies of a full system approach. 15 min
5. Group Exercise: Create a test where an inter-dependent condition could affect software behavior. 15 min
6. Testers provide responsive feedback on the user stories, the testing conducted while the mobile app is being developed through iterations and meeting regulations. 10 min
7. Questions 10 min

Learning Outcomes:
  • Testers will work with the project team to incorporate quality and regulatory compliance early when planning tests for mobile projects in a regulated environment.
  • Testers will help project team members create user stories with priorities and severities assigned, giving testers specific goals to focus on in each sprint.
  • Testers will understand how to work closely with development on which inter-dependencies can affect users of the mobile app.


Speakers

Wednesday August 9, 2017 10:45am - 12:00pm
H2

2:00pm

Three Practices for Paying Ongoing Attention to System Qualities (Rebecca Wirfs-Brock)


Abstract:
Does your team have trouble focusing on anything other than implementing features? System qualities such as performance, maintainability or reliability don’t happen by magic. They need explicit attention and focus. What can we as system quality advocates—whether testers, developers, product owners, architects or project managers—do to raise awareness of the qualities of our systems? You’ve probably heard the mantra: make it work, make it right, make it fast. But it can be difficult to retrofit certain qualities into an existing implementation. Making it right means more than verifying the functionality meets stakeholder needs; it also means delivering on the qualities we want in our system. In this session you will be introduced to three simple techniques for specifying system qualities and paying attention to them: landing zones, quality scenarios, and quality checklists. You will also have an opportunity to briefly practice each technique. Yes, you can introduce simple practices that allow you and your team to pay ongoing attention to system quality.

Learning Outcomes:
  • Learn how to use and define a landing zone for key quality attributes
  • Understand the mechanics of writing a quality scenario for "normal" and failure/recovery actions
  • Understand how to co-create two kinds of quality-related checklists: do-confirm, and read-review
  • Learn how to identify natural pause points in your work where checklists can be useful
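A landing zone can be captured as simple data: for each quality attribute, a minimum, target, and outstanding level against which measurements are checked. The Python sketch below is a minimal illustration of that idea (not the session's own materials); the latency numbers are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class LandingZone:
    """Acceptable range for one quality attribute. higher_is_better=False
    means smaller measured values are better (e.g. latency)."""
    attribute: str
    minimum: float      # worst value still acceptable
    target: float       # what we aim for
    outstanding: float  # better than we need
    higher_is_better: bool = True

    def rate(self, measured: float) -> str:
        ok = measured >= self.minimum if self.higher_is_better else measured <= self.minimum
        if not ok:
            return "unacceptable"
        if (measured >= self.outstanding if self.higher_is_better
                else measured <= self.outstanding):
            return "outstanding"
        if (measured >= self.target if self.higher_is_better
                else measured <= self.target):
            return "on target"
        return "acceptable"

# Invented numbers: p95 page-load latency in milliseconds.
latency = LandingZone("p95 latency (ms)", minimum=800, target=400,
                      outstanding=200, higher_is_better=False)
print(latency.rate(350))   # on target
print(latency.rate(1000))  # unacceptable
```

Checking each sprint's measurements against the zone is one way to give system qualities the same routine visibility that features get.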


Speakers

Rebecca Wirfs-Brock

Wirfs-Brock Associates
I'm best known as the "design geek" who invented Responsibility-Driven Design and the xDriven meme (think TDD, BDD, DDD..). I'm keen about team effectiveness, communicating complex requirements, software quality, agile QA, pragmatic TDD, and patterns and practices for architecting...


Wednesday August 9, 2017 2:00pm - 3:15pm
Wekiwa 6

3:45pm

Use Tables to Drive out Ambiguity/Redundancy, Discover Scenarios, and Solve World Hunger (Ken Pugh)
Limited Capacity seats available


Abstract:
Ambiguous or missing requirements cause waste, slipped schedules, and mistrust within an organization. Implementing a set of misunderstood requirements produces developer and customer frustration. Creating acceptance tests prior to implementation helps create a common understanding between business and development.
Acceptance tests start with communication between the members of the triad: business, developer, and tester. In this session, we specifically examine how to use tables as an effective means of communication. Employing tables as an analysis matrix helps a team discover missing scenarios. Redundant tests increase test load, so we show how performing an analogue of Karnaugh mapping on tables can help reduce redundant scenarios. We demonstrate that examining tables from various aspects, such as column headers, can reduce ambiguity and help form a domain-specific language (DSL). A consistent DSL decreases frustration in discussing future requirements.
We briefly show how to turn the tables into tests in Fit and Gherkin syntax.

Learning Outcomes:
  • How to elicit details of a requirement using tabular format
  • How to use tables to search for missing scenarios in acceptance tests
  • How to discover ambiguity and redundancy in acceptance tests
  • A way to logically connect tables to classes and modules
  • How to break complicated requirements represented by tables into smaller ones
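The table-to-test step can be mechanical: each row of the analysis table becomes one executable case. Here is a hedged Python sketch of that idea (the discount rules, names, and numbers are invented for illustration, not drawn from the session):

```python
# Each row of the requirements table becomes one executable case:
# (customer_type, order_total, expected_discount_pct)
DISCOUNT_TABLE = [
    ("regular",  50.0,  0),
    ("regular", 150.0,  5),
    ("member",   50.0,  5),
    ("member",  150.0, 10),
]

def discount_pct(customer_type: str, order_total: float) -> int:
    """Implementation under test (invented rules for the example):
    members get 5%, and any order of $100 or more gets another 5%."""
    pct = 5 if customer_type == "member" else 0
    if order_total >= 100.0:
        pct += 5
    return pct

# Run every row of the table against the implementation.
for customer_type, total, expected in DISCOUNT_TABLE:
    actual = discount_pct(customer_type, total)
    assert actual == expected, (customer_type, total, actual, expected)
print("all table rows pass")
```

Laying the cases out as a full type-by-threshold grid like this also makes a missing scenario (an absent row) or a redundant one (two rows differing only in incidental values) visible at a glance, which is the analysis step the session describes.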


Speakers

Ken Pugh

Chief Consultant, Ken Pugh, Inc.
Ken Pugh helps companies evolve into lean-agile organizations through training and coaching. His special interests are in collaborating on requirements, delivering business value, and using lean principles to deliver high quality quickly. Ken trains, mentors, and testifies on technology...


Wednesday August 9, 2017 3:45pm - 5:00pm
F2
 
Thursday, August 10
 

9:00am

How ATDD Fixed Your Agile Flow (John Riley)


Abstract:
Customers demand quality products. Automated regression testing is a method to deliver a quality product. However, several considerations need to be made before committing to implementing automated testing. For example, will automation introduce unnecessary work in the development process? Will team headcount need to be increased? How much will the tools cost? What additional value, if any, will be gained in the end?
This presentation will demonstrate how the test-first mindset of the ATDD (Acceptance Test Driven Development) process naturally opens the doors for automated testing. We will see how ATDD can actually simplify the development process and allow teams to continuously improve to become truly agile. A demonstration of an automated regression test suite in action will illustrate just one of many added benefits to products implemented using ATDD.

Learning Outcomes:
  • Learn how simplifying development processes can produce quick value
  • Develop a common language that developers, testers, and Product Owners can all understand
  • See how early user acceptance testing can continually produce value
  • How to choose the correct test automation tools to implement your automated regression test suite
  • How the refined process benefits the team
  • How the organization benefits from the value of a test-first mindset
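The test-first flow ATDD describes can be sketched in a few lines: the acceptance test is written from the story's criteria before any implementation exists, fails, and then drives the code. The Python below is a hypothetical illustration (the coupon story, names, and rule are invented, not from the talk):

```python
# Acceptance criterion, agreed with the Product Owner BEFORE coding:
# "Given a cart totalling $40, when the customer applies the coupon
#  SAVE10, then the total is $36."

def apply_coupon(total: float, code: str) -> float:
    """Implementation written only after the tests below were agreed on."""
    if code == "SAVE10":
        return round(total * 0.9, 2)
    return total  # unknown codes leave the total unchanged

def test_save10_coupon_takes_ten_percent_off():
    assert apply_coupon(40.0, "SAVE10") == 36.0

def test_unknown_coupon_changes_nothing():
    assert apply_coupon(40.0, "BOGUS") == 40.0

test_save10_coupon_takes_ten_percent_off()
test_unknown_coupon_changes_nothing()
print("acceptance tests pass")
```

Because the tests restate the criteria in the team's shared language, they double as the common vocabulary the learning outcomes mention, and they join the automated regression suite for free.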


Speakers

John Riley

Principal Agile Coach and Trainer, Ready Set Agile, LLC
John is the Principal Agile Coach and Trainer at Ready Set Agile in Columbus, OH. He holds certifications for all scrum roles, and his career has also focused on applying techniques in Lean Manufacturing and Application Lifecycle Management for process improvement as an Enterprise...


Thursday August 10, 2017 9:00am - 10:15am
F2

10:45am

API Testing FUNdamentals (JoEllen Carter, Dan Gilkerson)


Abstract:
Applications increasingly talk to each other behind the scenes via APIs. Google’s recent acquisition of Apigee, an API management company, is a strong indicator of the continued importance of APIs in software development. APIs are like building blocks, providing services and data that can be connected with other APIs to build powerful customized apps. However, testing an API can be challenging for these reasons:
  • There is no built-in interface
  • Breaking changes can cause widespread outages
  • Sensitive data may be exposed or accessed
  • Accepted testing paradigms can be difficult to adapt to APIs
In this workshop, you will learn how to fearlessly approach testing an API even if you've never heard of HTTP or cURL. In particular, you will learn:
  • Current business trends that are driving API development
  • Components of HTTP requests and responses, including authentication models, and how to inspect HTTP traffic
  • How to ‘explore’ an API and get some hands-on practice using popular tools
  • Tips on how to design tests for security, performance and backwards-compatibility risks
  • How to incorporate juggling, magic, and standup comedy into your tests (courtesy of Dan)

Learning Outcomes:
  • APIs and application architecture - Why APIs are important
  • HTTP basic info
  • How to turn an HTTP request and response into a 'test'
  • Exploratory testing heuristics particularly valuable when testing an API
  • Recommended tests for any API
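To illustrate the "turn an HTTP request and response into a 'test'" outcome, here is a hedged Python sketch. It checks a canned response (so it runs without network access) the same way you would check a real one; the endpoint, fields, and values are invented for the example, and in a live test the three inputs would come from an HTTP client.

```python
import json

# Canned response for a hypothetical GET /api/v1/users/42 — in a real
# test these three values would come from the HTTP client call.
status_code = 200
headers = {"Content-Type": "application/json"}
body = '{"id": 42, "name": "Ada", "email": "ada@example.com"}'

def check_user_response(status_code, headers, body):
    """Three typical API assertions: status, content type, and shape."""
    assert status_code == 200
    assert headers.get("Content-Type", "").startswith("application/json")
    payload = json.loads(body)
    for field in ("id", "name", "email"):   # the contract's required fields
        assert field in payload, f"missing field: {field}"
    return payload

user = check_user_response(status_code, headers, body)
print(user["name"])  # Ada
```

Checking status, content type, and payload shape is a reasonable minimum for any endpoint; the backwards-compatibility tests mentioned above amount to re-running the same shape check against the new version of the API.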


Speakers

JoEllen Carter

QA Manager, Olo, Inc.
I work at Olo! At Olo, we develop an online food ordering platform used by many of the country’s largest restaurant chains, reaching millions of consumers. Our highly skilled team of testers supports weekly releases and a constantly changing architecture as we transition to a...



Thursday August 10, 2017 10:45am - 12:00pm
Wekiwa 5

10:45am

The Build That Cried Broken: Building Trust in Your Continuous Integration Tests (Angie Jones)


Abstract:
There’s a famous Aesop Fable titled “The Boy Who Cried Wolf”. As the story goes, a young shepherd-boy would declare that a wolf was coming in an effort to alarm the villagers who were concerned for their sheep. The boy got a reaction from the villagers the first three or four times he did this, but the villagers eventually became hip to his game and disregarded his future alarms. One day, a wolf really was coming and when the boy tried to alert the villagers, none of them paid him any attention. The sheep, of course, perished.
For many teams, their continuous integration builds have become just like this young shepherd-boy. They are crying “Broken! Broken!” and, in a state of panic, team members assess the build. Yet, time and time again, they find that the application is working but that the tests are faulty and giving false alarms. Eventually, no one pays attention to the alerts anymore, having lost faith in what was supposed to be a very important indicator.
Let me help you save the sheep...or in your case, the quality of your application.

Learning Outcomes:
  • How to build stability within your continuous integration tests
  • Tips for managing tests that are failing with just cause
  • How to influence the perception of the credibility of the tests among stakeholders
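One common stabilizing tactic (offered here as a general illustration, not necessarily the one this talk teaches) is to rerun a failing test and quarantine it as flaky if it passes on retry, so only consistent failures cry "Broken!". A minimal Python sketch, using deterministic stand-ins for real tests:

```python
def classify(test_fn, retries=2):
    """Run a test up to `retries` extra times after a failure: a later
    pass marks it flaky (quarantine it); never passing means broken."""
    failed_before = False
    for _ in range(1 + retries):
        try:
            test_fn()
            return "flaky" if failed_before else "pass"
        except AssertionError:
            failed_before = True
    return "broken"

# Deterministic stand-ins for real tests:
def always_green():  assert True
def always_red():    assert False

calls = {"n": 0}
def fails_once():    # passes from the second call onward
    calls["n"] += 1
    assert calls["n"] > 1

print(classify(always_green))  # pass
print(classify(always_red))    # broken
print(classify(fails_once))    # flaky
```

Routing "flaky" results to a quarantine report instead of failing the build keeps the red/green signal honest; the quarantined tests still need fixing, but they no longer train the team to ignore the alarm.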


Speakers

Angie Jones

Automation Architect, Senior Developer Advocate at Applitools (USA)
Angie Jones is a Senior Developer Advocate who specializes in test automation strategies and techniques. She shares her wealth of knowledge by speaking and teaching at software conferences all over the world, writing tutorials and technical articles on angiejones.tech, and leading...


Thursday August 10, 2017 10:45am - 12:00pm
F1