Geoffrey Sage
Senior Technical Consultant
29th May 2019
Safe harbor notice for forward-looking statements
This presentation contains “forward‐looking” statements that are based on our management’s beliefs and assumptions and on information currently available to
management. We intend for such forward‐looking statements to be covered by the safe harbor provisions for forward‐looking statements contained in the U.S. Private
Securities Litigation Reform Act of 1995. Forward‐looking statements include information concerning our possible or assumed strategy, future operations, financing plans,
operating model, financial position, future revenues, projected costs, competitive position, industry environment, potential growth opportunities, potential market
opportunities, plans and objectives of management and the effects of competition.
Forward‐looking statements include all statements that are not historical facts and can be identified by terms such as “anticipates,” “believes,” “could,” “seeks,”
“estimates,” “expects,” “intends,” “may,” “plans,” “potential,” “predicts,” “prospects”, “projects,” “should,” “will,” “would” or similar expressions and the negatives of those
terms, although not all forward‐looking statements contain these identifying words. Forward‐looking statements involve known and unknown risks, uncertainties and other
factors that may cause our actual results, performance or achievements to be materially different from any future results, performance or achievements expressed or
implied by the forward‐looking statements. We cannot guarantee that we actually will achieve the plans, intentions, or expectations disclosed in our forward‐looking
statements and you should not place undue reliance on our forward‐looking statements.
Forward-looking statements represent our management’s beliefs and assumptions only as of the date of this presentation. We undertake no obligation, and do not intend
to update these forward‐looking statements, to review or confirm analysts’ expectations, or to provide interim reports or updates on the progress of the current financial
quarter. Further information on these and other factors that could affect our financial results is included in the filings we make with the Securities and Exchange
Commission, including those discussed in our most recent Annual Report on Form 10-K.
This presentation includes certain non‐GAAP financial measures as defined by SEC rules. We have provided a reconciliation of those measures to the most directly
comparable GAAP measures in the Appendix. Terms such as “Annual Contract Value” and “G2K Customer” shall have the meanings set forth in our filings with the SEC.
This presentation includes estimates of the size of the target addressable market for our products and services. We obtain industry and market data from our own internal
estimates, from industry and general publications, and from research, surveys and studies conducted by third parties. The data on which we rely, and our assumptions,
involve approximations, judgments about how to define and group product segments and markets, estimates, and risks and uncertainties, including those discussed in our
most recent annual report on Form 10-K and other risks which we do not foresee that may materially and negatively impact or fundamentally change the markets in which
we compete. Therefore, our estimates of the size of the target addressable markets for our products and services could be overstated. Further, in a number of product
segments and markets our product offerings have only recently been introduced, and we do not have an operating history establishing that our products will successfully
compete in these product and market segments or successfully address the breadth and size of the market opportunity stated or implied by the industry and market data
in this presentation. The information in this presentation on new products, features, or functionalities is intended to outline ServiceNow’s general product direction and
should not be included in making a purchasing decision. The information on new products, features, functionalities is for informational purposes only and may not be
incorporated into any contract. The information on new products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. The
development, release, and timing of any features or functionality described for our products remains at ServiceNow’s sole discretion.
How it works
Best Practice
Development in ATF
Demo
60%
* Quoted from Knowledge17 session collateral written by the Product Manager for ATF.
As a Technical Consultant I don’t even know what a licence is. Please direct any questions to your Account Manager.
• ATF is not intended for testing every single UI component of your ServiceNow instance
– For example: no customer needs to test the Magnifying Glass icon on a Reference Field; that is platform functionality.
– Think about what is unique in your instance, and what is common across every instance.
• ATF is not ideal for Test Driven Development where you build the test first
– In ATF, you cannot build a Test to load a form and populate its fields if the form and fields do not exist yet
• It takes time and effort to build your Tests in ATF. If you change the functionality you are
testing, then your Test stops working and you need to update it.
• If you develop new functionality and ATF Tests for it in parallel, then you need to be
mindful that any changes during normal development iterations will result in increased
development costs in ATF.
• In practice, UAT is a good time to start ATF since the development workload is
dropping off and changes to the new features should (hopefully) be minimal.
• The real benefit in ATF is in its repeatability. Once you build your Tests, you can test your
instance as often as you like.
* Maybe/Hopefully…
2. Create a Test
– Represents a Test Case
REST (Server):
• Send REST Request - Inbound
• Send REST Request - Inbound - REST API Explorer
• Assert Status Code
• Assert Status Code Name
– Note: Inbound REST Steps are for inbound calls into the instance. Use custom Step Configs for outbound REST calls.
• Assert Response Time
• Assert Response Header
• Assert Response Payload
• Assert Response JSON Payload Is Valid
• Assert JSON Response Payload Element
• Assert Response XML Payload Is Well-Formed
• Assert XML Response Payload Element
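To make the assertion steps above concrete, here is a plain-JavaScript sketch of what a few of them check. The `response` object and helper function names are illustrative only; in ATF these checks are configured as Test Steps, not hand-written code.

```javascript
// Stand-in for a real HTTP response captured by a Send REST Request step.
const response = {
  status: 200,
  timeMs: 340,
  headers: { 'content-type': 'application/json' },
  body: '{"result":{"number":"INC0010001","state":"2"}}'
};

// "Assert Status Code": fail the Step if the status differs.
function assertStatusCode(res, expected) {
  if (res.status !== expected) throw new Error('Expected ' + expected + ', got ' + res.status);
}

// "Assert Response Time": fail if the call took longer than the limit.
function assertResponseTime(res, maxMs) {
  if (res.timeMs > maxMs) throw new Error('Took ' + res.timeMs + 'ms (limit ' + maxMs + 'ms)');
}

// "Assert JSON Response Payload Element": walk a dotted path (e.g.
// "result.number") through the parsed body and compare.
function assertJsonPayloadElement(res, path, expected) {
  const actual = path.split('.').reduce(function (o, k) { return o && o[k]; }, JSON.parse(res.body));
  if (actual !== expected) throw new Error(path + ' was ' + actual + ', expected ' + expected);
}

assertStatusCode(response, 200);
assertResponseTime(response, 1000);
assertJsonPayloadElement(response, 'result.number', 'INC0010001');
console.log('all assertions passed');
```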
• Allows you to re-run only the Tests that failed, and ignore the Tests that passed
• Assumes that you have addressed the cause of the failure
Example:
• Impersonate an ITIL User to create a Change Request, then submit for approval
• Impersonate each Approver to update the Approvals and progress through the
workflow
• Tables can be excluded from rollback by adding an Attribute to the Collection record
in the Dictionary: excludeFromRollback=true
– Example: Exclude Emails from rollback to check their content and styling
• Appears to be asynchronous
– Not always completed before the next Test starts
– Do not rely on rollback being completed before the next Test in the Suite is run (e.g. a Record Query)
• Suite Schedule
– When to Run
– List of Test Suites
– Extends Scheduled Script Execution
• Many UI Test failures are the result of errors logged in the browser console
• These are often very difficult to diagnose and fix
• They are the source of many Known Errors (e.g. the Approval form)
• From London onwards you can whitelist console log errors
• Choose whether they are ignored entirely, or whether they show a warning
Timed events (such as SLA breaches) can be challenging, but are possible by simulating the passing of time. You can shorten the durations for the Test, or back-date the
sys_updated_on field after calling gr.setWorkflow(false) and gr.autoSysFields(false), so that the platform does not overwrite your change.
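The back-dating idea can be sketched in plain JavaScript. This is a stand-in, not actual instance code: the record object and `backdate` helper here are hypothetical, and in ServiceNow you would make the equivalent update via GlideRecord after the `setWorkflow(false)` / `autoSysFields(false)` calls mentioned above.

```javascript
// Simulate "time passing" for an SLA test by shifting a record's
// sys_updated_on back by a given number of hours. Plain-JS sketch of
// the GlideRecord update described above (field name per ServiceNow).
function backdate(record, hours) {
  const shifted = new Date(record.sys_updated_on.getTime() - hours * 3600 * 1000);
  // In the instance this would be roughly:
  //   gr.setWorkflow(false);      // don't trigger business rules/workflow
  //   gr.autoSysFields(false);    // don't let the platform reset sys fields
  //   gr.setValue('sys_updated_on', shifted); gr.update();
  return Object.assign({}, record, { sys_updated_on: shifted });
}

const incident = { number: 'INC0010001', sys_updated_on: new Date('2019-05-29T10:00:00Z') };
const aged = backdate(incident, 8); // pretend 8 hours have already elapsed
console.log(aged.sys_updated_on.toISOString()); // 2019-05-29T02:00:00.000Z
```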
Some customers encounter "limitations" when they fail to understand the "Automated" in Automated Test
Framework.
For example, they try to replicate the functionality of a 'Wait for Condition' Workflow Activity and build Tests
that require manual interaction.
• “Click a UI Action” does not work for UI Actions that are only in the Context Menu
– PRB706524 - Fixed in Jakarta
• Limitations to List field inputs (can only select 1 value, and only if the List references a table)
– PRB1114486 - Won’t be fixed
– Workaround: Populate dynamically from a custom Step Configuration
• Cannot submit Catalogue Item if it does not use the ‘Use Cart Layout’ option
– PRB1192151 – Fixed in Jakarta Patch 9, Kingston Patch 3
• There are a few questions that need to be considered before you start (detailed next):
– How much detail should we test? More detailed testing provides more comprehensive testing, but creates
large maintenance overheads.
– Should we change existing functionality to make a Test work?
– Do we need to test every possible variation in data-driven functionality? For example, a Priority Matrix.
– Can we start building Tests before Production Go-Live?
– Should we re-test common functionality across different processes?
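On the Priority Matrix question above, it helps to see how few data-driven variations there really are. This sketch enumerates every Impact x Urgency combination; the matrix values shown are a common default, not necessarily what your instance uses.

```javascript
// Enumerate every Impact x Urgency combination and the Priority a
// typical default matrix would assign. Each entry could become one
// ATF Test, or one Step in a data-driven Test.
const matrix = {            // priority = matrix[impact][urgency]
  1: { 1: 1, 2: 2, 3: 3 },  // values mirror a common out-of-box setup;
  2: { 1: 2, 2: 3, 3: 4 },  // check the Priority Lookup rules in your
  3: { 1: 3, 2: 4, 3: 5 }   // own instance before relying on these
};

const cases = [];
for (const impact of [1, 2, 3]) {
  for (const urgency of [1, 2, 3]) {
    cases.push({ impact: impact, urgency: urgency, expectedPriority: matrix[impact][urgency] });
  }
}
console.log(cases.length + ' test cases'); // 9 test cases
```

Nine cases is small enough to test exhaustively; for larger lookup tables the same trade-off between coverage and maintenance applies.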
• The issue with highly detailed testing is that when you make a change to the field, you also
need to update any and all ATF Tests that look at that field.
– Do you want to have to find and update ATF Tests each time you change a UI Policy?
• You need to find the right balance between detailed testing and ATF maintenance for
your organisation
• Consider whether this is worthwhile (the above examples create a better user experience)
* Seriously, no promises
• Or, should you create 1 Test for the common Variable Set, then 1 Test for the unique
functionality on each Item?
– Requires understanding of Variable Sets (grey box testing)
– A change to the Variable Set causes 1 Test to fail
• After you submit each Catalogue Item, should you (re)test the RITM State life-cycle?
• Answer depends on your organisation’s Test Strategy/Philosophy
• How are you going to maintain your Test Results that are being generated in a non-Prod
instance?
– Import them into Prod
– Create Data Preserves
– Do you even need/want to store them long-term?
• Use a consistent set of Test Users and Groups to avoid migration issues
• Avoid creating Tests with too many Test Steps
– When a Test fails, all following Test Steps are skipped. If your Test has multiple errors, it will fail at the first error
and you won’t find the second error until the first error is rectified and you rerun the Test.
– More granular (shorter) Tests will allow you to identify more issues in a shorter amount of time.
• Make sure the Test Designer and impersonated User are using the system default date
and time formats
• Use a Record Query to create a Variable when you need to populate a value into
multiple Steps
– Allows you to update a value in one place instead of many
• When populating fields on the client-side, make sure that field updates which trigger
AJAX calls are in separate Steps
– Combining them in a single Step creates issues when running in a Suite
• Test Suite
– A set of Tests
– Used to run many Tests with one click
• Test
– Represents a Test Case
– A set of Test Steps
– Used to test end-to-end functionality of a process
• Test Step
– A specific action to perform during a Test
– Action is defined by the Test Step Config
• Suite Schedule
– Extends Scheduled Script Execution
– Scheduled job for 1 or more Test Suites
• Test Runner
– A browser window running the Client Test Runner, used to execute client-side (UI) Test Steps
• Test Result
– Result from running a Test (Case)
– Successful only if all child Test Steps are successful
• Step Results
– Result from performing a single Test Step (action)
– Successful if the Test Step was completed
– Outputs contain specific details
• Step Transaction
– m2m link to Transaction Log
• Can be used to generate dynamic values to be used as Inputs for other Test Steps
– Example: Start and End dates for a Change window
– Doesn’t actually test anything (always passes)
– Input a lead time. Output 2 dates for Start and End.
• Email Notification
– Input a Record ID and an Email Notification. Check that an Email was created.
• Generate Change Window
– Input a Lead time and Duration. Output Start and End dates.
• Get Approval
– Input a Record ID. Output an Approval (in Requested State), and the Approver.
– Use this to dynamically impersonate an Approver and update an Approval record.
• Approve all Approvals
– Approve all existing Approvals on a given record. Use this to quickly push a Change Request through its workflow.
• Get Random User
– Define some inputs (like Company, Group, Role) and use these to create dynamic queries to create a list of Users
that meet the criteria. Then select one at random. Great for picking a valid User for a scenario if you don’t want
to use test accounts.
• Get Current Date/Time
• Wait
– Input a number of seconds to wait. Use this for asynchronous updates or for debugging.
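The "Generate Change Window" idea above can be sketched in plain JavaScript. The function name and return shape are illustrative; in ATF this would be a custom Step Configuration whose script exposes the two dates as Step outputs for later Steps to consume.

```javascript
// Given a lead time and a duration (both in hours), produce Start and
// End dates for a Change window relative to "now". Always "passes":
// it generates data rather than asserting anything.
function generateChangeWindow(leadHours, durationHours, now) {
  const base = now || new Date();
  const start = new Date(base.getTime() + leadHours * 3600 * 1000);
  const end = new Date(start.getTime() + durationHours * 3600 * 1000);
  return { start: start, end: end };
}

// 48 hours of lead time, 4-hour window, from a fixed "now" for the demo:
const win = generateChangeWindow(48, 4, new Date('2019-05-29T00:00:00Z'));
console.log(win.start.toISOString()); // 2019-05-31T00:00:00.000Z
console.log(win.end.toISOString());   // 2019-05-31T04:00:00.000Z
```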
• Sometimes a Test will work when you execute it on its own, but will fail when executed
as part of a Suite
• This is likely due to increased CPU utilisation, network traffic, and/or server workload
• Asynchronous updates take longer (Workflows, Events, Email Notifications)
– Use asynchronous query methods, or a Wait
• Always Unit Test your ATF Tests in a Suite with other Tests
• Solving these problems requires excellent understanding of the Now Platform
• Do you have a few Test Cases covering a lot of different functionality (high uniqueness),
or a lot of Test Cases covering a small amount of functionality (low uniqueness)?
• Any Test Designer building Tests also needs to set their Date and Time formats to system
defaults, otherwise it can affect the Test Inputs
• Remove the product dev’s life story from the Step execution script field
– First, take a moment to appreciate and respect that something rare has happened…
OOTB code was commented in a useful way.
– Now move it to Contextual Help or a Knowledge Article, or use a Syntax Editor Macro – somewhere you won’t
have to continually select and delete all those lines.
– Fix the syntax error at the end of the closure too.
• Please take a look at the many and varied Automated Test Framework use case
examples on the Docs site
• Don’t forget to Send Feedback :D
• Core setup
– LDAP connection
– Check critical System Properties (test email, etc)
– Integration Heartbeat Tests