
Final Report

Aug 21, 2017

Project Title: ROCS Test Automation and Framework Development


Author: Sarah Cooper
Mentors: Jeng Yen (347K) and Trevor Reed (347K)

Abstract:
Robot Operational and Computational Services (ROCS) is a tool that provides services
such as rover arm and mast kinematics and time conversions via Amazon Web
Services (AWS) to various teams in both MSL operations and the upcoming Mars2020
mission.

ROCS is still under active development and requires a rigorous testing framework that
ensures accuracy and validates performance. Using Robot Framework and Jenkins, an
open source test automation framework and automation server respectively, we created
the ROCS testing framework and got ROCS running on AWS. We then populated this
framework with extensive suites of unit tests for each ROCS service currently offered.
The goal is for the testing framework to merge seamlessly with the existing workflow of
the ROCS developers and to remain useful through all later stages of development.
Additionally, this automated testing framework has the potential to serve as a model on
which other development teams can base their own frameworks, extending its influence
to various Mars2020 projects.

Background:
Robot Operational and Computational Services (ROCS) is a stateless, cloud-based,
RESTful API. It provides computational services such as inverse kinematics of the rover
arm and mast, time conversions, sun positioning, and frame transformations for Rover
Operations. To do this, ROCS provides endpoints to computational routines already
deployed in the Robot Sequencing and Visualization Program (RSVP), the program that
rover planners use to write command sequences for the Mars rovers. ROCS is currently
used both by MSL Operations and by ASTTRO for back-room computations, and it is
expected to continue to undergo development and be heavily used for the Mars 2020
mission. As one of only a few cloud-based services at JPL, ROCS is also helping to pave
the way for a lab-wide shift toward housing information in the cloud, which will be very
useful in enhancing communication between teams across the lab.
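
To make the idea of a computational endpoint concrete, the hedged sketch below shows what a request to such a service could look like from Python; the host, path, and parameter names are assumptions for illustration only, not the actual ROCS API.

```python
# Hypothetical example of querying a ROCS-style REST endpoint for a time
# conversion. The host, path, and parameter names are illustrative only and
# are not the actual ROCS API.
import requests

ROCS_URL = "http://localhost:8080"  # placeholder; the real service runs on AWS

response = requests.get(
    f"{ROCS_URL}/spice/time/conversions",
    params={"from": "LMST", "to": "UTC", "value": "Sol-1800M14:30:00"},
)
response.raise_for_status()
print(response.json())  # the converted time would come back as JSON
```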

Rationale:
Another intern, Johan Michalove, and I were tasked with creating a testing
framework for ROCS and getting ROCS running on the cloud through Amazon Web
Services (AWS). Testing frameworks are valuable for a number of reasons. Automated
tests validate the integrity of the system and the accuracy of the responses. They
inspire more confidence for both developers and users that the software is functioning
properly. Additionally, having a method of automation in place supports continuous
delivery practices on the developer side, making it more manageable to keep tests
written and up to date as software development progresses. Accompanying test reports
can also document bug fixes and assist in automating the pipeline for higher levels of
software quality clearance.

Requirements:
While designing and building this testing framework, there were a number of
considerations to keep in mind, including the precise requirements and standards that
we wanted to hold ourselves to for this project. Early in our internship, we created a list
of these requirements for our framework and have let this guide our work this summer:

• Validate the accuracy of ROCS responses.
• Check error reporting and resilience.
• Functional tests should be fully automated.
• Integrate with Jenkins, GitHub, and Docker.
• Writing tests should be straightforward and fast.
• The framework should be flexible enough to easily support future services.

Intended Users:
Another consideration to keep in mind throughout this project has been who exactly the
intended users of our framework would be. At the most fundamental level, the ROCS
Framework is for ROCS developers. It supports continuous delivery practices and
makes testing more accessible, facilitates debugging through its detailed reports, and
enhances developer confidence in modifications made to the codebase. In order for our
project to have longevity, it was also important that we integrate the testing into
the existing workflow of the ROCS development team, making it easy for the developers
to use and ensuring that it is adaptable to future services that will be added to ROCS.
There are currently four active services available on ROCS, namely SPICE (time
conversions and sun positions), MSL_IK, RSM_IK, and Frame Transformations. More
services, such as Settling, are in the works and will be developed and added over time,
so it was very important that our framework could adapt to these future demands.

Beyond just the ROCS developers, our framework also has the potential to be used by
other development teams across the lab. While the framework is designed for ROCS, it is
adaptable and can be extended and modified to fit other teams’ testing needs. Though
this was not its originally intended use, the ROCS Framework could conceivably grow to
serve as a model for automated testing design for various teams within the Mars2020
mission and into the future.

In addition to software developers, our testing framework also has an audience within
the Verification and Validation (V&V) Teams and the System Operations Staff (CS3).
Our framework can improve communication between developers and V&V as well as
improve turnaround time for validation. The automated test reports document each
service's capabilities, so the validation teams can immediately understand both the test
cases and the services themselves. Automated build and acceptance testing is also
standard practice for large web-based systems, and our framework helps move ROCS
toward that standard. All of this culminates in
improved communication between all the various teams that are involved in taking
software to launch as well as a speedier return of content and quality clearance from
V&V.

Robot Framework:
Our testing framework utilizes Robot Framework, which its website describes as an open
source “generic test automation framework for acceptance testing and acceptance
test-driven development”. First released in 2008, it is a well-established framework and
the one we ultimately chose after researching ten distinct testing frameworks and tools
and narrowing the search to the best candidates. While other frameworks had their own
appealing qualities, with Gauge in particular being a strong contender, we ultimately
chose Robot Framework because it is keyword-driven, supports reusable test data, is a
long-standing open source project, is highly extensible, offers many libraries to draw
from, and is written in Python – the same language used in ROCS.

System Overview:
Our system is broken up into two separate containers, which we call the “Service” and
“Client” containers. The Service Container houses the ROCS service, while the Client
Container encompasses all of the test data, the Robot Framework, and the various
libraries and tools it relies on. The Client operates over the internet and simulates an
actual human client interacting with the ROCS service. We use Docker to create both
containers, have them interact with each other, and, from that interaction, generate
meaningful test results.
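
As a rough sketch of how the two containers could be wired together, the snippet below uses the Docker SDK for Python; the image names, network name, and port are placeholders rather than the actual ROCS build scripts.

```python
# Illustrative sketch of the two-container layout using the Docker SDK for
# Python (the `docker` package). Image names, network name, and port are
# placeholders; the real ROCS setup is driven by its own Docker scripts.
import docker

client = docker.from_env()

# Private network so the Client container can reach the Service container by name.
client.networks.create("rocs-test-net", driver="bridge")

# "Service" container: houses the ROCS service itself.
client.containers.run(
    "rocs-service:latest", name="rocs-service",
    network="rocs-test-net", detach=True,
)

# "Client" container: test data, Robot Framework, and supporting libraries.
# With detach=False this call blocks until the tests finish and returns the
# container's output; a non-zero exit code raises ContainerError.
logs = client.containers.run(
    "rocs-client:latest", name="rocs-client",
    network="rocs-test-net",
    environment={"ROCS_URL": "http://rocs-service:8080"},  # assumed port
    detach=False,
)
print(logs.decode())
```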

The slide shown to the left provides a visualization of the test automation flow. Locally,
developers can follow the method detailed above to test their own services by creating a
“Client” container that runs the Robot executable and a “Service” container that it can
test against. Once developers are ready to commit their code and changes, they push to
their remote GitHub repository. This in turn triggers a Jenkins build, which runs on a
Jenkins agent on AWS. The agent clones the Git repository, builds the sources, and
creates new running Docker images of the client and service containers. Currently the
scripts build a single Docker image, which runs all of the test suites against one Docker
container per ROCS service. It then
creates the Robot Framework reports and logs (log.html, report.html, and output.xml),
which are then displayed alongside the Jenkins build status.
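
For illustration, the Client container's entry point could be as simple as invoking Robot Framework's Python API against the Service container's URL and letting it write the standard reports; the directory layout, variable name, and URL below are assumptions, not the actual ROCS scripts.

```python
# Hypothetical Client-container entry point: run the ROCS test suites against
# a Service container and write the standard Robot Framework artifacts
# (log.html, report.html, output.xml). Paths and names are illustrative.
import sys
from robot import run

def main() -> int:
    # The Service container's address would normally come from Docker
    # networking or an environment variable; this default is only a guess.
    service_url = sys.argv[1] if len(sys.argv) > 1 else "http://rocs-service:8080"

    # robot.run accepts the same options as the `robot` command line tool.
    return run(
        "tests",                               # directory of .robot test suites
        variable=[f"ROCS_URL:{service_url}"],  # exposed to suites as ${ROCS_URL}
        outputdir="results",                   # where log/report/output land
    )

if __name__ == "__main__":
    sys.exit(main())
```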

Framework:
Put broadly, our work this summer within the Robot Operations group can be placed
into three categories: creating the ROCS Testing Framework, writing ROCS functional
and acceptance tests, and creating the test automation system. In creating the ROCS
testing framework, we used the Swagger document for each ROCS service to generate
keywords programmatically. Additionally, the Robot Framework allowed us to provide
detailed HTML reports and logs that are generated every time the test suites run locally
or a new build is triggered on Jenkins. These reports
(see example below) provide a detailed summary of the passing and failing test cases,
which can then be sorted by test suite, test case file, or various specified tags. We also
wrote accompanying documentation for each test case, which can be useful for both
collaboration between developers and communication with external validation teams.
The details on how to run the ROCS Testing Framework are also available through a
QuickStart tutorial that my partner created.
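
As an illustration of what keyword generation from a Swagger document could look like, the sketch below builds a Robot Framework dynamic library that exposes one keyword per documented operation; the actual ROCS keyword generation is more involved, and the field handling here is simplified.

```python
# Simplified sketch of programmatic keyword generation from a Swagger
# (OpenAPI 2.0) document: a Robot Framework dynamic library that exposes one
# keyword per documented operation and simply issues the HTTP request.
# Field names follow the Swagger 2.0 spec; everything else is illustrative.
import json
import requests

class SwaggerKeywords:
    """Dynamic library: keyword names come from the service's Swagger file."""

    def __init__(self, swagger_path, base_url):
        with open(swagger_path) as f:
            spec = json.load(f)
        self._base_url = base_url.rstrip("/")
        self._ops = {}
        for path, methods in spec.get("paths", {}).items():
            for method, op in methods.items():
                # Use the operationId (or method + path) as the keyword name.
                name = op.get("operationId", f"{method} {path}")
                self._ops[name] = (method.upper(), path)

    def get_keyword_names(self):
        return list(self._ops)

    def run_keyword(self, name, args, kwargs):
        method, path = self._ops[name]
        resp = requests.request(method, self._base_url + path, params=kwargs)
        resp.raise_for_status()
        return resp.json()
```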

Test Creation:
Our ROCS Testing Framework is organized into suites, with one suite for each ROCS
service. Within each suite are a number of test case files that each contain up to 15 test
cases. Each test case file is organized based on a broader classification of the types of
tests it houses, such as “Out of Bounds,” “Valid Responses,” or “Error.” To be readable
by Robot, the test case files and individual test cases must follow a specific format, as
illustrated in the accompanying graphic, which shows the ROCS/Spice/Sun/Positions test
case file. Each file is organized into two tables, the Settings table and the Test Cases
table. The Settings table houses the global settings for the file, including any resource
files it references and “Force Tags,” which specifies tags applied to every test case in the
file and which can be used to sort test statistics in the report. In the Test Cases table,
each test case is named and can be given accompanying documentation that will also
appear in the report. This documentation can be incredibly helpful for a wide audience,
whether it be fellow developers who are searching for bugs or editing code or V&V who
are seeking to understand the software and how it is currently being tested. Below the
documentation, tags specific to each test case can be stated. In the line below, a
keyword will determine the action that the test performs, such as “Get Sun from LMST.”
The implementation of these keywords is kept separate in a resource file, which defines
the specific endpoint path each keyword requires. Thus, the lower-level code behind the
keywords can largely stay hidden unless it is needed, streamlining both the appearance
and the process of test writing. The example test cases use keywords that take the given
input and check the expected response against the response returned by the ROCS
server.
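
For illustration, the sketch below shows the kind of logic such a keyword encapsulates, written here as a Python keyword library; in the actual framework the keywords live in Robot resource files, and the endpoint path, parameter names, and response fields below are assumptions.

```python
# Hypothetical Python keyword library showing the logic behind a keyword such
# as "Get Sun From LMST": send the input to the ROCS server and compare the
# response against expected values. Endpoint path, parameter names, and
# response fields are illustrative assumptions, not the real ROCS API.
import requests

ROCS_URL = "http://localhost:8080"  # assumed base URL of the Service container

def get_sun_from_lmst(lmst, expected_azimuth, expected_elevation, tolerance=1e-6):
    """Query the sun-position service for an LMST and verify the response."""
    resp = requests.get(f"{ROCS_URL}/spice/sun/positions", params={"lmst": lmst})
    assert resp.status_code == 200, f"Unexpected status: {resp.status_code}"
    body = resp.json()
    # Fail the test case if either angle differs from the expected value.
    assert abs(body["azimuth"] - float(expected_azimuth)) <= tolerance
    assert abs(body["elevation"] - float(expected_elevation)) <= tolerance
```

Imported as a library, a function like this would surface as the keyword Get Sun From LMST, so a test case only needs to supply the LMST string and the expected angles.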

While the interface and formatting of test case writing were fairly straightforward,
conceiving the test cases themselves came with its own challenges. Writing
comprehensive suites of tests for each ROCS service required a fairly in-depth
understanding of the services in question and the contexts in which they are used,
knowledge that was cultivated slowly over the course of the internship. At the same time,
it occasionally proved useful to have only a cursory knowledge of the services, especially
when attempting to break them. As a newcomer, it was easier to concoct user scenarios,
some of them edge cases that the developers had not previously considered. In this way,
the role of the tester is a curious combination of anticipating the thoughts of both the
developers and the users as a means of predicting all the possible ways a service can be
queried.

In total, we created 300 test cases, complete with accompanying documentation and
tagging, to thoroughly test the four currently active ROCS services (SPICE time
conversions and sun positions, arm inverse kinematics, mast inverse kinematics, and
frame transformations). These test cases proved to be invaluable resources during our
internship, exposing various bugs within the existing ROCS code, which we were then
able to address in collaboration with the ROCS developers. In particular, the SPICE time
conversion and sun position service showed marked improvement through our efforts,
going from passing a mere 22 test cases out of 78 to passing all but 4. We were also able
to extend our testing framework beyond ROCS and integrate it with the MTTT service,
helping other developers get set up with our framework and begin writing their own test
cases for their service.

Automation:
The final component of our work this summer was the creation of the test automation
system. This fully automated, cloud-based system is currently up and running for ROCS,
using AWS and Docker. Any time a developer pushes ROCS work to GitHub, a fresh build
is triggered on Jenkins, which runs all of the test suites and generates a results summary
as well as links to the reports and logs. This process is fully automated between Jenkins
and GitHub, and automated GitHub commit status reporting shows the progress of the
live build. Additionally, build history tracking is available, complete with trend graphs (as
shown in the graphic) and build status notifications.
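
To illustrate what commit status reporting involves under the hood, the sketch below posts a status to GitHub's commit status API; in our setup Jenkins handles this automatically, and the repository, token, and context names here are placeholders.

```python
# Illustrative sketch of the GitHub commit-status API call that backs the kind
# of status reporting described above. In practice Jenkins performs this step;
# the repository, token, and URLs here are placeholders.
import requests

def report_status(owner, repo, sha, state, target_url, token):
    """Set a commit status: 'pending', 'success', 'failure', or 'error'."""
    resp = requests.post(
        f"https://api.github.com/repos/{owner}/{repo}/statuses/{sha}",
        headers={"Authorization": f"token {token}"},
        json={
            "state": state,
            "target_url": target_url,          # e.g. a link to the Jenkins build
            "description": "Robot Framework test run",
            "context": "ci/rocs-tests",
        },
    )
    resp.raise_for_status()
```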

Conclusion and Acknowledgements:

It is my sincerest hope that our work this summer on the ROCS Testing Framework has
been and will continue to be useful – both to the ROCS development team and to the
greater Mars2020 community.
This internship has been an incredibly valuable experience for me, and I could not be
more grateful for this amazing opportunity that has allowed me to foster many new
skills. My deepest gratitude to Jeng Yen, my mentor, who made this all possible when
he took a chance on a neurobiology student who wanted to try her hand at something
completely different. Thank you to my other mentor, Trevor Reed, for being so
accessible and ready to provide wisdom and guidance whenever needed. Thank you to
Nick Wiltsie and the entire ROCS development team for all their support, and thank you
to the greater RSVP team for creating such a welcoming atmosphere every day around
the office. Thank you to JPLSIP for this excellent opportunity to experience JPL and
grow as an individual. Finally, thank you to my main collaborator, Johan Michalove, for
enriching my time here in every way and inspiring me to pursue new interests and dare
mighty things.
