
CHAPTER 7: COMPUTER-ASSISTED AUDIT TECHNIQUES (CAATs)

INTRODUCTION TO INPUT CONTROLS

Input Controls – designed to ensure that the transactions that bring data into the system are valid,
accurate, and complete. Data input procedures can be either source document-triggered (batch)
or direct input (real-time).

Source document input requires human involvement and is prone to clerical errors.
Direct input employs real-time editing techniques to identify and correct errors immediately.

CLASSES OF INPUT CONTROLS


1.) Source document controls
2.) Data coding controls
3.) Batch controls
4.) Validation controls
5.) Input error correction
6.) Generalized data input systems

1.) Source Document Controls – in systems that use physical source documents to initiate
transactions, careful control must be exercised over these instruments. Source document fraud can
be used to remove assets from the organization. To control against this type of exposure, the
organization should implement control procedures over source documents to account for each document.
 Use Pre-numbered Source Documents – source documents should come pre-numbered
from the printer with a unique sequential number on each document. This provides an
audit trail for tracing transactions through accounting records.
 Use Source Documents in Sequence – source documents should be distributed to the
users and used in sequence, which requires that adequate physical security be maintained over
the source document inventory at the user site. Access to source documents should be
limited to authorized persons.
 Periodically Audit Source Documents – the auditor should compare the numbers of
documents used to date with those remaining in inventory plus those voided due to errors.

2.) Data Coding Controls – coding controls are checks on the integrity of data codes used in
processing. Three types of errors can corrupt data codes and cause processing errors: Transcription
errors, Single Transposition errors, and Multiple Transposition errors.

Transcription errors fall into three classes:


 Addition errors – occur when an extra digit or character is added to the code.
 Truncation errors – occur when a digit or character is removed from the end of a code.
 Substitution errors – the replacement of one digit in a code with another.

Two types of Transposition Errors:


 Single transposition errors – occur when two adjacent digits are reversed.
 Multiple transposition errors – occur when nonadjacent digits are transposed.

Check Digits – a control digit (or digits) added to the code when it is originally assigned that allows
the integrity of the code to be established during subsequent processing. The digit can be located
anywhere in the code: as a suffix, prefix, or embedded within it. A simple check digit will detect only
transcription errors; it will not catch transposition errors.
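To make the idea concrete, here is a minimal Python sketch (not from the text) of a sum-based check digit appended as a suffix. The codes shown are made up; because the digit sum is order-independent, this simple scheme flags substitutions but misses transpositions, consistent with the limitation noted above.

```python
def add_check_digit(code: str) -> str:
    """Append a simple check digit: the sum of the digits, modulo 10."""
    check = sum(int(d) for d in code) % 10
    return code + str(check)

def verify_check_digit(coded: str) -> bool:
    """Recompute the check digit from the body and compare with the stored suffix."""
    return add_check_digit(coded[:-1]) == coded

print(add_check_digit("94537"))        # "945378"  (9+4+5+3+7 = 28, 28 mod 10 = 8)
print(verify_check_digit("945378"))    # True
print(verify_check_digit("946378"))    # False -- single substitution (5 -> 6) is caught
print(verify_check_digit("945738"))    # True  -- transposition (37 -> 73) is NOT caught
```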
3.) Batch Controls – an effective method of managing high volumes of transaction data through
a system. Batch controls reconcile the output produced by the system with the input originally entered
into the system. Controlling the batch continues throughout all phases of the system and assures that:
 All records in the batch are processed.
 No records are processed more than once.
 An audit trail of transactions is created from input through processing to the output.
It requires the grouping of similar types of input transactions together in batches and then controlling
the batches throughout data processing.

Two documents are used to accomplish this task: a batch transmittal sheet and a batch control
log. The transmittal sheet becomes the batch control record and is used to assess the integrity of the
batch during processing. The batch transmittal sheet captures relevant information such as:
 Unique batch number
 A batch date
 A transaction code
 Number of records in the batch
 Total dollar value of a financial field
 Sum of a unique non-financial field
 Hash Totals – a simple control technique that uses non-financial data to keep track of the
records in a batch. Any key field may be used to calculate a hash total.
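As a rough illustration of the control figures listed above, the following Python sketch computes a record count, a financial control total, and a hash total for a small batch; the field names and values are hypothetical.

```python
# Hypothetical transaction batch; field names are illustrative only.
batch = [
    {"account_no": 10452, "amount": 150.00},
    {"account_no": 20981, "amount": 75.25},
    {"account_no": 10452, "amount": 300.10},
]

control_record = {
    "record_count": len(batch),                                  # number of records in the batch
    "total_amount": round(sum(t["amount"] for t in batch), 2),   # dollar value of a financial field
    "hash_total": sum(t["account_no"] for t in batch),           # sum of a non-financial key field
}
print(control_record)  # {'record_count': 3, 'total_amount': 525.35, 'hash_total': 41885}
```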

4.) Validation Controls – intended to detect errors in transaction data before the data are processed.
They are most effective when performed as close to the source of the transaction as possible. Some
validation procedures require referencing the current master file. There are three
levels of input validation controls (a short sketch of several of these checks follows the list):
1. Field Interrogation – involves programmed procedures that examine the characteristics of
the data in the field.
 Missing Data Checks – used to examine the contents of a field for the presence of blank
spaces.
 Numeric-Alphabetic Data Checks – determine whether the correct form of data is in a
field.
 Zero-Value Checks – used to verify that certain fields are filled with zeros.
 Limit Checks – determine if the value in the field exceeds an authorized limit.
 Range Checks – assign upper and lower limits to acceptable data values.
 Validity Checks – compare actual values in a field against known acceptable values.
 Check Digit – identify keystroke errors in key fields by testing the internal validity of the
code.
2. Record Interrogation – procedures validate the entire record by examining the
interrelationship of its field values.
 Reasonableness Checks – determine if a value in one field, which has already passed a limit check
and a range check, is reasonable when considered along with other data fields in the record.
 Sign Checks – test to see if the sign of a field is correct for the type of record being processed.
 Sequence Checks – determine if a record is out of order.
3. File Interrogation – purpose is to ensure that the correct file is being processed by the
system.
 Internal Label Checks – verify that the file processed is the one the program is actually calling
for. The system matches the file name and serial number in the header label with the
program's file requirements.
 Version Checks – verify that the version of the file processed is correct. The version check
compares the version number of the files being processed with the program’s requirements.
 Expiration Date Check – prevents a file from being deleted before it expires.
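The following Python sketch illustrates several of the field-interrogation checks described above, applied to a single hypothetical record; the field names, the quantity limit, the price range, and the valid department codes are all assumptions made for illustration.

```python
# Illustrative field-interrogation checks; limits and valid codes are assumed.
VALID_DEPT_CODES = {"A10", "B20", "C30"}

def validate_record(rec: dict) -> list:
    errors = []
    # Missing data check: required field must not be blank.
    if not str(rec.get("customer_id", "")).strip():
        errors.append("missing customer_id")
    # Numeric-alphabetic data check: quantity must be numeric.
    if not str(rec.get("quantity", "")).isdigit():
        errors.append("quantity is not numeric")
    # Limit check: quantity must not exceed an authorized maximum.
    elif int(rec["quantity"]) > 1000:
        errors.append("quantity exceeds limit of 1000")
    # Range check: unit price must fall between acceptable bounds.
    if not (0.01 <= rec.get("unit_price", 0) <= 500.00):
        errors.append("unit_price out of range")
    # Validity check: department code must be a known acceptable value.
    if rec.get("dept_code") not in VALID_DEPT_CODES:
        errors.append("invalid dept_code")
    return errors

print(validate_record({"customer_id": "C-88", "quantity": "20",
                       "unit_price": 19.99, "dept_code": "B20"}))   # []
print(validate_record({"customer_id": "", "quantity": "20x",
                       "unit_price": 900.0, "dept_code": "Z99"}))   # four errors listed
```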

5.) Input Error Correction – when errors are detected in a batch, they must be corrected and the
records resubmitted for reprocessing. This must be a controlled process to ensure that errors are
dealt with completely and correctly. Three common error handling techniques are:
1. Immediate Correction – when a keystroke error or an illogical relationship is detected, the
system should halt the data entry procedure until the user corrects the error.
2. Create an Error File – individual errors should be flagged to prevent them from being
processed. At each validation point, the system automatically adjusts the batch control totals to
reflect the removal of the error records from the batch (a sketch of this adjustment follows this list).
There are two methods for dealing with this complexity. The first is to reverse the effects of the
partially processed transactions and resubmit the corrected records to the data input stage. The
second is to reinsert corrected records into the processing stage in which the error was detected.
3. Reject the Entire Batch – some forms of errors are associated with the entire batch and
are not clearly attributable to individual records. Batch errors are one reason for keeping the size of
the batch to a manageable number.
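A minimal sketch of the batch-total adjustment mentioned under "Create an Error File": invalid records are moved to an error file and the control totals are reduced accordingly. It reuses the hypothetical account_no/amount layout from the earlier batch-control sketch.

```python
def remove_errors(batch, control, validate):
    """Move invalid records to an error file and adjust the batch control totals."""
    clean, error_file = [], []
    for rec in batch:
        problems = validate(rec)
        if problems:
            error_file.append({**rec, "errors": problems})
            # Adjust control totals so the remaining batch still reconciles downstream.
            control["record_count"] -= 1
            control["total_amount"] = round(control["total_amount"] - rec["amount"], 2)
            control["hash_total"] -= rec["account_no"]
        else:
            clean.append(rec)
    return clean, error_file, control

batch = [{"account_no": 10452, "amount": 150.00},
         {"account_no": 20981, "amount": -75.25}]
control = {"record_count": 2, "total_amount": 74.75, "hash_total": 31433}
ok, errs, control = remove_errors(batch, control,
                                  lambda r: ["negative amount"] if r["amount"] < 0 else [])
print(control)  # {'record_count': 1, 'total_amount': 150.0, 'hash_total': 10452}
```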
6.) Generalized Data Input Systems – to achieve a high degree of control and standardization over
input validation procedures, some organizations employ a generalized data input system (GDIS)
which includes centralized procedures to manage the data input for all of the organization’s
transaction processing systems. A GDIS eliminates the need to recreate redundant routines for each
new application. A GDIS has three advantages:

 Improves control by having one common system perform all data validation.
 Ensures that each AIS application applies a consistent standard for data validation.
 Improves systems development efficiency.

A GDIS has 5 major components:


1. Generalized Validation Module (GVM) – performs standard validation routines that are common
to many different applications. These routines are customized to an individual application's needs
through parameters that specify the program's specific requirements (see the sketch after this list).
2. Validated Data File – the input data that are validated by the GVM are stored on a validated data
file. This is a temporary holding file through which validated transactions flow to their respective
applications.
3. Error File – error records detected during validation are stored in this file, corrected, and then
resubmitted to the GVM.
4. Error Reports – standardized error reports are distributed to users to facilitate error correction.
5. Transaction Log – a permanent record of all validated transactions. It is an important element
in the audit trail. However, only successful transactions (those completely processed) should be
entered in the journal.
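The parameter-driven idea behind the generalized validation module can be sketched as follows; the check names, the payroll parameter table, and the field names are assumptions for illustration, not part of any particular GDIS.

```python
# One shared set of standard checks, configured per application through parameters.
STANDARD_CHECKS = {
    "missing": lambda v, p: ["missing"] if not str(v).strip() else [],
    "numeric": lambda v, p: [] if str(v).isdigit() else ["not numeric"],
    "limit":   lambda v, p: ["over limit"] if float(v) > p else [],
}

# Each application supplies only its parameters, not its own validation code.
PAYROLL_PARAMS = {"employee_id": [("missing", None)],
                  "hours": [("numeric", None), ("limit", 80)]}

def gvm_validate(record, params):
    errors = []
    for field, checks in params.items():
        for name, arg in checks:
            errors += [f"{field}: {e}" for e in STANDARD_CHECKS[name](record.get(field, ""), arg)]
    return errors

print(gvm_validate({"employee_id": "E-17", "hours": "95"}, PAYROLL_PARAMS))
# ['hours: over limit']
```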

CLASSES OF PROCESSING CONTROLS


1) Run-to-Run Controls – use batch figures to monitor the batch as it moves from one
programmed procedure (run) to another. It ensures that each run in the system processes the
batch correctly and completely. Specific run-to-run control types are listed below:

 Recalculate Control Totals – after each major operation in the process and after each run, dollar
amount fields, hash totals, and record counts are accumulated and compared to the corresponding
values stored in the control record (a combined sketch of these run-to-run checks appears at the end of this section).
 Transaction Codes – the transaction code of each record in the batch is compared to the
transaction code contained in the control record, ensuring only the correct type of transaction
is being processed.
 Sequence Checks – the order of the transaction records in the batch is critical to correct and
complete processing. The sequence check control compares the sequence of each record in
the batch with the previous record to ensure that proper sorting took place.
2) Operator Intervention Control - Operator intervention increases the potential for human
error. Systems that limit operator intervention through operator intervention controls are thus
less prone to processing errors. Parameter values and program start points should, to the
extent possible, be derived logically or provided to the system through look-up tables.
3) Audit Trail Controls – the preservation of an audit trail is an important objective of process
control. Every transaction must be traceable through each stage of processing. Each major
operation applied to a transaction should be thoroughly documented. The following are
examples of techniques used to preserve audit trails:

 Transaction Logs – every transaction successfully processed by the system should be
recorded on a transaction log. There are two reasons for creating a transaction log: (1) it is a
permanent record of transactions, and (2) not all of the records in the validated transaction file may
be successfully processed, as some of these records fail tests in the subsequent processing
stages. A transaction log should contain only successful transactions.
 Log of Automatic Transactions – all internally generated transactions must be placed in a
transaction log.
 Listing of Automatic Transactions – the responsible end user should receive a detailed list of
all internally generated transactions.
 Unique Transaction Identifiers – each transaction processed by the system must be uniquely
identified with a transaction number.
 Error Listing – a listing of all error records should go to the appropriate user to support error
correction and resubmission.
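As noted under run-to-run controls above, this sketch combines the control-total recalculation and the sequence check for a single run; the record layout again reuses the hypothetical account_no/amount fields from the earlier examples.

```python
def run_to_run_check(batch, control):
    """Recalculate control totals for a run and verify the batch is in sequence."""
    issues = []
    # Recalculate control totals and compare with the values in the control record.
    if len(batch) != control["record_count"]:
        issues.append("record count mismatch")
    if round(sum(r["amount"] for r in batch), 2) != control["total_amount"]:
        issues.append("financial control total mismatch")
    if sum(r["account_no"] for r in batch) != control["hash_total"]:
        issues.append("hash total mismatch")
    # Sequence check: each record's key must not be lower than the previous one.
    keys = [r["account_no"] for r in batch]
    if keys != sorted(keys):
        issues.append("batch is out of sequence")
    return issues

batch = [{"account_no": 10452, "amount": 150.00},
         {"account_no": 20981, "amount": 75.25}]
control = {"record_count": 2, "total_amount": 225.25, "hash_total": 31433}
print(run_to_run_check(batch, control))  # []
```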
OUTPUT CONTROLS
Ensure that system output is not lost, misdirected, or corrupted and that privacy is not violated.
The type of processing method in use influences the choice of controls employed to protect system
output.

Controlling Batch Systems Output – Batch systems usually produce output in the form of hard
copy, which typically requires the involvement of intermediaries. The output is removed from the
printer by the computer operator, separated into sheets and separated from other reports, reviewed
for correctness by the data control clerk, and then sent through interoffice mail to the end user.
 Output Spooling – applications are often designed to direct their output to a magnetic disk file
rather than to the printer directly. A computer criminal may use this opportunity to perform any
of the following unauthorized acts:

 Access the output file and change critical data values.


 Access the file and change the number of copies to be printed.
 Make a copy of the output file to produce illegal output reports.
 Destroy the output file before printing takes place.

 Print Programs – the print run program produces hard copy output from the output file. Print
programs are often complex systems that require operator intervention. 4 common types of
intervention actions are:
1. Pausing the print program to load the correct type of output documents.
2. Entering parameters needed by the print run.
3. Restarting the print run at a prescribed checkpoint after a printer malfunction.
4. Removing printed output from the printer for review and distribution.

Print program controls – are designed to deal with two types of exposures: production of
unauthorized copies of output and employee browsing of sensitive data. One way to control
this is to employ output document controls similar to source document controls. The number of
copies specified by the output file can be reconciled with the actual number of output documents
used. To prevent operators from viewing sensitive output, special multi-part paper can be used, with
the top copy colored black to prevent the print from being read.

 Bursting – when output reports are removed from the printer, they go to the bursting stage to
have their pages separated and collated. The clerk may make an unauthorized copy of the
report, remove a page from the report, or read sensitive information. The primary control for
this exposure is supervision.

 Waste – computer output waste represents a potential exposure. Dispose properly of aborted
reports and the carbon copies from the multipart paper removed during bursting.

 Data Control – the data control group is responsible for verifying the accuracy of computer
output before it is distributed to the user. The clerk will review the batch control figures for
balance, examine the report body for garbled, illegible, and missing data, and record the
receipt of the report in data control’s batch control log.

 Report Distribution – the primary risks associated with report distribution include reports
being lost, stolen, or misdirected in transit to the user. To minimize these risks, the name and
address of the user should be printed on the report, an address file of authorized users should
be consulted to identify each recipient of the report, and adequate access control should be
maintained over these files. In addition:
 The reports may be placed in a secure mailbox to which only the user has the key.
 The user may be required to appear in person at the distribution center and sign for the report.
 A security officer or special courier may deliver the report to the user.

 End User Controls – output reports should be re-examined for any errors that may have
evaded the data control clerk’s review. Errors detected by the user should be reported to the
appropriate computer services management. A report should be stored in a secure location
until its retention period has expired. Factors influencing the length of time a hard copy report
is retained include:
 Statutory requirements specified by government agencies.
 The number of copies of the report in existence.
 The existence of magnetic or optical images of reports that can act as permanent backup.
 Reports should be destroyed in a manner consistent with the sensitivity of their contents.
Controlling Real-time Systems Output – real-time systems direct their output to the user’s
computer screen, terminal, or printer. This method of distribution eliminates various intermediaries
and thus reduces many of the exposures. The primary threat to real-time output is the
interception, disruption, destruction, or corruption of the output message as it passes along the
communications link. Two types of exposure exist:
1. Exposures from equipment failure.
2. Exposures from subversive acts, where the output message is intercepted in transit between
the sender and the receiver.

TESTING COMPUTER APPLICATION CONTROLS


Control-testing techniques provide information about the accuracy and completeness of an
application's processes. These tests follow two general approaches:

 Black Box: Testing around the computer


 White Box: Testing through the computer

 Black Box (Around the Computer) Technique – auditors performing black box testing do not
rely on a detailed knowledge of the application’s internal logic. They seek to understand the
functional characteristics of the application by analyzing flowcharts and interviewing
knowledgeable personnel in the client’s organization. The auditor tests the application by
reconciling production input transactions processed by the application with output results. The
advantage of the black box approach is that the application need not be removed from service
and tested directly. This approach is feasible for testing applications that are relatively simple.
Complex applications require a more focused testing approach to provide the auditor with
evidence of application integrity.

 White Box (Through the Computer) Technique – relies on an in-depth understanding of the
internal logic of the application being tested. Several techniques for testing application logic
directly are included. This approach uses small numbers of specially created test transactions
to verify specific aspects of an application’s logic and controls. Auditors are able to conduct
precise tests, with known variables, and obtain results that they can compare against
objectively calculated results.

 Authenticity Tests – verify that an individual, a programmed procedure, or a message
attempting to access a system is authentic.
 Accuracy Tests – ensure that the system processes only data values that conform to
specified tolerances.
 Completeness Tests – identify missing data within a single record and entire records
missing from a batch.
 Redundancy Tests – determine that an application processes each record only once.
 Access Tests – ensure that the application prevents authorized users from unauthorized
access to data.
 Audit Trail Tests – ensure that the application creates an adequate audit trail, produces
complete transaction listings, and generates error files and reports for all exceptions.
 Rounding Error Tests – verify the correctness of rounding procedures. Failure to
properly account for rounding differences can result in an imbalance between the total
(control) interest amount and the sum of the individual interest calculations for each
account. Rounding procedures are particularly susceptible to so-called salami frauds, which
tend to affect a large number of victims, but the harm to each is immaterial. Each victim
absorbs one of the small pieces and is unaware of being defrauded. Operating system
audit trails and audit software can detect excessive file activity; in the case of a salami
fraud, there would be thousands of entries posted to the computer criminal's personal account
that may be detected in this way.
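A small numeric illustration (made-up balances and rate, not from the text) of what a rounding error test compares: the interest computed on the batch total versus the sum of the individually rounded postings.

```python
balances = [10.09, 10.09, 10.09]
rate = 0.05  # assumed interest rate

per_account = [round(b * rate, 2) for b in balances]   # each 10.09 * 0.05 posts as 0.50
control_total = round(sum(balances) * rate, 2)         # 30.27 * 0.05 rounds to 1.51

print(sum(per_account))   # 1.5  -- sum of the individual interest postings
print(control_total)      # 1.51 -- total (control) interest amount
# The one-cent remainder must be accounted for; in a salami fraud the fractions
# discarded by rounding each account would be skimmed into the perpetrator's account.
```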

COMPUTER AIDED AUDIT TOOLS AND TECHNIQUES FOR TESTING CONTROLS

5 CAATT approaches:
1.) Test Data Method – used to establish application integrity by processing specially prepared
sets of input data through production applications that are under review. The results of each
test are compared to predetermined expectations to obtain an objective evaluation of
application logic and control effectiveness.
 Creating Test Data – when creating test data, auditors must prepare a complete set of both
valid and invalid transactions. If test data are incomplete, auditors might fail to examine critical
branches of application logic and error-checking routines. Test transactions should test every
possible input error, logical process, and irregularity.
2.) Base Case System Evaluation – there are several variants of the test data technique. When the
set of test data in use is comprehensive, the technique is called a base case system evaluation
(BCSE). BCSE tests are conducted with a set of test transactions containing all possible transaction
types. These results are the base case. When subsequent changes to the application occur during
maintenance, their effects are evaluated by comparing current results with base case results.
3.) Tracing – performs an electronic walk-through of the application’s internal logic. Implementing
tracing requires a detailed understanding of the application’s internal logic.
Tracing involves three steps:
1. The application under review must undergo a special compilation to activate the trace option.
2. Specific transactions or types of transactions are created as test data.
3. The test data transactions are traced through all processing stages of the program, and a listing is
produced of all programmed instructions that were executed during the test.
Advantages of Test Data Techniques
 They employ through-the-computer testing, thus providing the auditor with explicit evidence
concerning application functions.
 Test data runs can be employed with only minimal disruption to the organization’s operations.
 They require only minimal computer expertise on the part of auditors.
Disadvantages of Test Data Techniques
 Auditors must rely on computer services personnel to obtain a copy of the application for test
purposes.
 Audit evidence collected by independent means is more reliable than evidence supplied by the
client.
 They provide a static picture of application integrity at a single point in time and do not provide a
convenient means of gathering evidence about ongoing application functionality.
 They have a relatively high cost of implementation, resulting in audit inefficiency.
4.) Integrated Test Facility – an automated technique that enables the auditor to test an application’s
logic and controls during its normal operation. ITF databases contain ‘dummy’ or test master file
records integrated with legitimate records. ITF audit modules are designed to discriminate between
ITF transactions and routine production data. The auditor analyzes ITF results against expected
results.
5.) Parallel Simulation – the auditor writes or obtains a program that simulates key features
or processes of the application to be reviewed or tested (a minimal sketch follows these steps):
 Auditor gains a thorough understanding of the application under review
 Auditor identifies those processes and controls critical to the application
 Auditor creates the simulation using program or Generalized Audit Software (GAS)
 Auditor runs the simulated program using selected data and files
 Auditor evaluates results and reconciles differences.
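A minimal sketch of the parallel simulation steps above: the auditor's simplified re-implementation of the pricing logic is run against a selected production extract and the differences are reconciled. The field names, the sample data, and the volume-discount rule are assumptions made for illustration.

```python
import csv, io

def simulate_invoice_total(qty: int, unit_price: float) -> float:
    """Auditor's simplified re-implementation of the pricing logic under review."""
    gross = qty * unit_price
    discount = 0.02 * gross if qty >= 100 else 0.0   # assumed volume-discount rule
    return round(gross - discount, 2)

# Stand-in for a production data extract (invoice_no, qty, unit_price, invoice_total).
production_extract = io.StringIO(
    "invoice_no,qty,unit_price,invoice_total\n"
    "A-1001,50,4.00,200.00\n"
    "A-1002,120,4.00,470.40\n"
    "A-1003,200,2.50,500.00\n"   # discount apparently not applied by the application
)

differences = []
for row in csv.DictReader(production_extract):
    expected = simulate_invoice_total(int(row["qty"]), float(row["unit_price"]))
    if abs(expected - float(row["invoice_total"])) > 0.005:
        differences.append((row["invoice_no"], expected, float(row["invoice_total"])))

print(differences)   # [('A-1003', 490.0, 500.0)] -- exceptions to evaluate and reconcile
```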
