Agile2017
Testing & Quality
Monday, August 7
 

10:45am EDT

How Machine Learning Will Affect Agile Testing (Paul Merrill)


Abstract:
Machine Learning is all the rage. Companies like Google, Amazon, and Microsoft are pouring enormous sums of money into ML. But what is it, and more importantly, how will it affect me as an Agile tester? As a Scrum Master? As a developer on an Agile team?
Last year, I was at a testing conference where a group of five executives adamantly decreed that ML would replace testers within the next few years. Anytime five executives agree on anything, I question it! So I wanted to learn if they were right. Over the last few months, I’ve researched and learned about ML. I’ve talked with industry experts in the field and testers with expertise in ML. I wanted to know what they had to say about this decree. I wanted to know for myself, "Is testing in danger of being automated by ML?"
Join me to learn what Machine Learning is and how it is affecting the software we build, the products we use, and our ability to test our applications. Hear what I’ve found in my research, get an introduction to ML, and decide for yourself whether the future of testing will be in the hands of ML algorithms.

Learning Outcomes:
  • Learn what experts in ML are saying about how it will affect Agile testing
  • Take home an introductory understanding of ML
  • Gain enough knowledge to decide for yourself whether the future of Agile testing will be in the hands of ML algorithms

Speakers

Paul Merrill

Principal Software Engineer in Test, Beaufort Fairmont
Paul Merrill is Principal Software Engineer in Test and Founder of Beaufort Fairmont Automated Testing Services. Nearly two decades into a career spanning roles such as software engineer, tester, manager, consultant, and project manager, he brings a unique view of testing. Paul works...


Monday August 7, 2017 10:45am - 12:00pm EDT
F4

2:00pm EDT

7 Sources of Waste in Automated Testing and How To Avoid Them (Jonathan Rasmusson)


Abstract:
Thousands of hours are wasted every year maintaining poorly written suites of automated tests. Not only do these tests slow teams down; they sap morale and become a huge time sink. By learning what these seven wastes are, teams can avoid much of the dysfunction and waste that comes with most early automation efforts, and instead get to adding value faster by applying a few simple techniques.

Learning Outcomes:
  • How to get your team on the same page when it comes to automated testing
  • How to get testers and developers to see each other's points of view when it comes to writing automated tests
  • How to establish the necessary baseline, culture, language, and rules of thumb around where and when to write different kinds of automated tests
  • How to avoid much of the waste and dysfunction that comes with early automation testing efforts

Speakers

Jonathan Rasmusson

Engineer, Spotify
Agile, testing, programming, automation, culture. Author of The Agile Samurai and The Way of the Web Tester.


Monday August 7, 2017 2:00pm - 3:15pm EDT
I3

2:00pm EDT

Evolving Your Testing Strategy: Mapping Now, Next, and Later (David Laribee)


Abstract:
Pyramids? Quadrants? Cupcakes?! A wide array of models describes approaches to test automation strategy in terms of best practices or anti-patterns. While these models are useful for visually communicating how your team currently manages (or should manage) software quality, no single model represents a complete strategy in and of itself.
In this talk, we’ll begin by framing the universe of Agile testing models: models that range from technical to product to cultural mindsets. I’ll add detail and nuance to each of these models in the form of professional experience, challenges with introduction, and case studies. We’ll look at the strengths and weaknesses of each model in terms of the constraints it adopts (and ignores). We’ll also learn about the social costs of incorporating or abandoning each approach.
With a new lens, focused on testing strategy as an act of curation, I'll share an approach to mapping, evolving, and iterating a testing strategy appropriate for your product development team's specific context.

Learning Outcomes:
  • Understand common test strategy models in terms of their constraints, strengths, and weaknesses.
  • Learn how each model may create friction or limitations in certain organizational contexts with well-defined tester/developer roles.
  • Combine constraints and classic models to describe your current testing strategy visually with a map.
  • Create a model that describes a desired future state of testing strategy.
  • Identify decisions (and their inherent challenges) necessary to change strategy for a large organization with a complex system.

Speakers

David Laribee

Principal, Nerd/Noir
David Laribee is a product development coach with deep roots in Lean, Agile, XP and Scrum. He believes in the power of collaboration, simplicity and feedback. Over the last 20 years, David has built teams and products for companies at every scale. He’s founded startups and consulted...



Monday August 7, 2017 2:00pm - 3:15pm EDT
H1

3:45pm EDT

Pairing: The Secret Sauce of Agile Testing (Jess Lancaster)


Abstract:
Finding time to learn test techniques, mentor other testers, grow application knowledge, and cross-train your team members is a daunting task with a complicated recipe. What if you could do these things while testing and finding bugs? Enter pair testing. What’s that? Two people testing together, where one operates the keyboard to exercise the software while the other suggests, analyzes, and notes the testing outcomes. And it’s the secret sauce of agile testing because it makes your routine, bland testing so much more fun and productive!
Testers on Jess Lancaster’s team use pair testing not only to make better software but also to foster better team relationships along the way. Jess explores why pairing works, how to run an effective pairing session, how to pair with others on the team (such as project managers, designers, and developers), and just how easy it is to get started.
Armed with Jess’s easy-to-use pair testing recipe card, you can plan your first pairing encounter so you are ready to roll when you get back to the office. This sounds easy enough, but you know there will be mistakes when you try it. Jess has you covered there, too: learn his team’s pairing mistakes and the things they did to improve their pairing sessions.

Learning Outcomes:
  • Why pairing works, and reasons why you as a tester or agile team member need to be pairing
  • How to get started with pairing using a step-by-step process that leads to successful sessions
  • Learn my team’s pair testing mistakes and what we did to improve, so you don’t repeat them
  • How to pair with team members in different roles, plus ideas for what to pair on, such as test design, user stories, and bug reports
  • How to use the pairing recipe for making this secret sauce back at your workplace
  • Hands-on exercise with planning pairing sessions so that you can take it back to the office and pair with a co-worker!

Speakers

Jess Lancaster

QA Practice Manager, TechSmith
Jess Lancaster is the QA practice manager at TechSmith, the makers of Snagit, Camtasia, and other visual communication software applications. He coaches and equips testers with the skills to be quality champions on agile teams. With more than twenty years of information systems and...


Monday August 7, 2017 3:45pm - 5:00pm EDT
F4
 
Tuesday, August 8
 

2:00pm EDT

Acceptance Criteria for Data-Focused User Stories (Lynn Winterboer)


Abstract:
Learn how to write acceptance criteria for DW/BI user stories that align your team to deliver valuable results to your project stakeholders.
Writing user stories for business intelligence projects already feels to many product owners like pushing a large rock up a big hill ... and needing to add solid acceptance criteria to each story feels a bit like discovering the big hill has a false summit: once you get to the top (user story written), you find a small flat spot, and then the hill continues upward, requiring additional detail in the form of acceptance criteria. As one BI product owner recently put it, "I write the user story and feel like I’ve made excellent progress; then the team is all over me with 'That’s great, but what are the acceptance criteria?', forcing me to yet again go deep. If I had a better understanding of 'sufficient' acceptance criteria, I would have shared it with my team and stopped the beatings!"

Learning Outcomes:
  • How do acceptance criteria differ from the team's definition of "done"?
  • How detailed should acceptance criteria be?
  • What is included in acceptance criteria?
  • What does an example of acceptance criteria for a BI user story look like? (See the sketch below.)
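
By way of illustration only (not material from the session), here is a minimal sketch of acceptance criteria for a hypothetical DW/BI story, written as executable pytest checks; the story, fixture data, and tolerance are all invented:

```python
import pytest

# Hypothetical story: "As a sales analyst, I want daily revenue by
# region in the data mart so I can report on regional performance."
# Each test below makes one acceptance criterion executable.

@pytest.fixture
def source_totals():
    # Stand-in for revenue totals queried from the source system.
    return {"east": 1200.00, "west": 950.50}

@pytest.fixture
def mart_totals():
    # Stand-in for revenue totals queried from the data mart.
    return {"east": 1200.00, "west": 950.50}

def test_every_source_region_is_loaded(source_totals, mart_totals):
    # Criterion 1: no region is dropped or invented during the load.
    assert set(mart_totals) == set(source_totals)

def test_revenue_reconciles_to_source(source_totals, mart_totals):
    # Criterion 2: revenue per region matches the source within a cent.
    for region, total in source_totals.items():
        assert mart_totals[region] == pytest.approx(total, abs=0.01)
```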

Speakers

Lynn Winterboer

Agile Analytics Educator & Coach, Winterboer Agile Analytics
I teach and coach Analytics and Business Intelligence teams on how to effectively apply agile principles and practices to their work. I also enjoy practicing what I teach by participating as an active agile team member for clients. My career has focused on Agile and data/BI, serving...



Tuesday August 8, 2017 2:00pm - 3:15pm EDT
Wekiwa 6
 
Wednesday, August 9
 

3:45pm EDT

Use Tables to Drive out Ambiguity/Redundancy, Discover Scenarios, and Solve World Hunger (Ken Pugh)
Limited Capacity seats available


Abstract:
Ambiguous or missing requirements cause waste, slipped schedules, and mistrust within an organization. Implementing a set of misunderstood requirements produces developer and customer frustration. Creating acceptance tests prior to implementation helps create a common understanding between business and development.
Acceptance tests start with communication among the members of the triad: business, developer, and tester. In this session, we specifically examine how to use tables as an effective means of communication. Employing tables as an analysis matrix helps a team discover missing scenarios. Redundant tests increase test load, so we show how applying a Karnaugh-map-style analysis to tables can help reduce redundant scenarios. We demonstrate how examining tables from various aspects, such as column headers, can reduce ambiguity and help form a domain-specific language (DSL). A consistent DSL decreases frustration in discussing future requirements.
We briefly show how to turn the tables into tests in Fit and Gherkin syntax.
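
As a rough sketch of the table-to-test idea (the session itself targets Fit and Gherkin; plain pytest stands in here for compactness, and the shipping rules are invented for illustration):

```python
import pytest

def shipping_cost(member: bool, order_total: float) -> float:
    # Illustrative implementation of the hypothetical rules tabled below.
    return 0.00 if member or order_total > 100 else 5.00

# The decision table from the requirements conversation, expressed
# directly as test data: one row per scenario, one column per condition.
@pytest.mark.parametrize(
    "member, order_total, expected_shipping",
    [
        (True,  50.00,  0.00),  # members always ship free
        (False, 50.00,  5.00),  # non-members under $100 pay a flat rate
        (False, 150.00, 0.00),  # orders over $100 ship free for everyone
    ],
)
def test_shipping_rules(member, order_total, expected_shipping):
    assert shipping_cost(member, order_total) == expected_shipping
```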

Learning Outcomes:
  • How to elicit details of a requirement using a tabular format
  • How to use tables to search for missing scenarios in acceptance tests
  • How to discover ambiguity and redundancy in acceptance tests
  • A way to logically connect tables to classes and modules
  • How to break complicated requirements represented by tables into smaller ones

Speakers

Ken Pugh

Principal Consultant, Ken Pugh, Inc.
Ken Pugh helps companies evolve into lean-agile organizations through training and coaching. His special interests are in collaborating on requirements, delivering business value, and using lean principles to deliver high quality quickly. He has written several programming books...


Wednesday August 9, 2017 3:45pm - 5:00pm EDT
F2
 
Thursday, August 10
 

9:00am EDT

How ATDD Fixed Your Agile Flow (John Riley)


Abstract:
Customers demand quality products. Automated regression testing is one method of delivering a quality product. However, several considerations must be weighed before committing to automated testing. For example, will automation introduce unnecessary work in the development process? Will team headcount need to increase? How much will the tools cost? What additional value, if any, will be gained in the end?
This presentation will demonstrate how the test-first mindset of the ATDD (Acceptance Test Driven Development) process naturally opens the doors for automated testing. We will see how ATDD can actually simplify the development process and allow teams to continuously improve to become truly agile. A demonstration of an automated regression test suite in action will illustrate just one of many added benefits to products implemented using ATDD.
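
As a minimal sketch of the test-first flow described above (the session’s actual tooling and examples aren’t specified here; the discount rule is invented for illustration):

```python
import pytest

# Step 1: the team captures an acceptance criterion as a failing test
# before any implementation exists.
def test_ten_percent_discount_on_orders_over_100():
    assert price_with_discount(120.00) == pytest.approx(108.00)
    assert price_with_discount(80.00) == pytest.approx(80.00)  # no discount

# Step 2: developers write just enough code to make the test pass; the
# now-passing test joins the automated regression suite going forward.
def price_with_discount(order_total: float) -> float:
    # Hypothetical business rule: 10% off orders over $100.
    return order_total * 0.9 if order_total > 100 else order_total
```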

Learning Outcomes:
  • Learn how simplifying development processes can produce quick value
  • Develop a common language that developers, testers, and product owners can all understand
  • See how early user acceptance testing can continually produce value
  • How to choose the right test automation tools for your automated regression test suite
  • How the refined process benefits the team
  • How the organization benefits from the value of a test-first mindset

Speakers

John Riley

Principal Agile Coach and Trainer, Ready Set Agile, LLC
John is the Principal Agile Coach and Trainer at Ready Set Agile in Columbus, OH. He holds certifications for all scrum roles, and his career has also focused on applying techniques in Lean Manufacturing and Application Lifecycle Management for process improvement as an Enterprise...


Thursday August 10, 2017 9:00am - 10:15am EDT
F2

10:45am EDT

The Build That Cried Broken: Building Trust in Your Continuous Integration Tests (Angie Jones)


Abstract:
There’s a famous Aesop fable titled "The Boy Who Cried Wolf." As the story goes, a young shepherd boy would declare that a wolf was coming in an effort to alarm the villagers, who were concerned for their sheep. The boy got a reaction from the villagers the first three or four times he did this, but the villagers eventually became hip to his game and disregarded his future alarms. One day a wolf really was coming, and when the boy tried to alert the villagers, none of them paid him any attention. The sheep, of course, perished.
For many teams, the continuous integration build has become just like this young shepherd boy. It cries "Broken! Broken!" and, in a state of panic, team members assess the build. Yet time and time again they find that the application is working and the tests themselves are faulty, giving false alarms. Eventually no one pays attention to the alerts anymore, and the team loses faith in what was supposed to be a very important indicator.
Let me help you save the sheep...or in your case, the quality of your application.
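
One common stabilizing technique (offered here only as an assumed illustration, not as the session’s prescribed fix) is to replace hard-coded sleeps, a frequent source of false alarms, with explicit polling:

```python
import time

def wait_until(condition, timeout=10.0, interval=0.25):
    """Poll `condition` until it returns a truthy value or time runs out.

    Tests that sleep for a fixed interval fail whenever the application
    is momentarily slower than the guess; polling up to a generous
    timeout keeps the test honest without crying "Broken!" needlessly.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout} seconds")

# Usage sketch with a hypothetical page object:
#     wait_until(lambda: page.order_status() == "shipped")
```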

Learning Outcomes:
  • How to build stability within your continuous integration tests
  • Tips for managing tests that are failing with just cause
  • How to influence the perception of the credibility of the tests among stakeholders

Speakers

Angie Jones

Global Head of Developer Relations, TBD (a business unit of Block)
Angie leads developer relations for TBD, Block’s decentralized finance division that builds open source software for apps where users own their data and identity. She is an award-winning teacher and international keynote speaker who shares her wealth of knowledge at software companies...


Thursday August 10, 2017 10:45am - 12:00pm EDT
F1
 