
The Wasting Game

Back office IT operations have numerous priorities to juggle, and this means that all too often administrators have to scramble to fix problems manually. This in itself is not new. But better-quality software is starting to make an impact in the drive to reduce the wasted time and effort caused by operational inefficiencies. What's more, companies can actually get more out of their existing assets by doing so.

Wasted money is one of those problems in the management of back office operations that never seems to go away. Direct your efforts at one problem, and another appears. Nowhere is this issue more acute than in the batch processing of vast quantities of data. Now, though, better integrated and more powerful software suites are making it simpler to tackle these challenges.

First, let's state the obvious. Information systems are the cornerstone of the modern enterprise, and modern businesses conduct their affairs far more openly than ever before. As a result of doing business with more and more partners and customers, both the volume and the complexity of data are increasing exponentially.

The responsibility for processing all of this data for sales, accounting, reporting and other managerial purposes rests with the IT operations department. The better the department manages and processes all of those transactions, the more the organisation is free to grow revenues and profits. And the better it processes that data, the more the IT operations team helps to optimise the cost of doing so.

Again, this is not news. The same pressures to deliver services better and at lower cost face every department in today's businesses as they cope with fierce competition. For IT operations, this means delivering higher-quality services that satisfy both internal requirements and those of partners and customers; lowering operational costs; working within the reduced time windows created by Internet-driven commerce; and meeting the ever-changing organisational requirements of dynamic businesses. The latter point alone can be enough to make an IT operations team quake with fear.

ORSYP-2009 The Wasting Game


Pressure on IT Operations
The point is that IT operations management, and principally the back office management of systems, has not been able to make the same strides in performance enhancement and cost savings that many other corporate departments have achieved. On one hand, this is something of an impossible task, as IT environments become increasingly heterogeneous, fast-paced business shifts the goalposts and the amount of data to be handled continues to soar.

But on the other hand, the most important of back office operations, the batch processing of data (strategically critical, extremely time-consuming and a drain on costs), has to date been entrusted to a combination of largely unsophisticated software packages that manage tasks across these dispersed IT environments and a home-made approach to ensuring that the batch processing gets done, which varies considerably from one organization to another depending on the nature of its IT systems. The problem with this approach is that it is prone to failure, it introduces the possibility of human error and it does not allow tasks to be prioritized. Processing jobs are simply undertaken in large batches, so that while the human resources manager's daily attendance check might be delayed by a glitch with a server, so might the chairman's daily briefing notes.
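To make the prioritization point concrete, here is a minimal Python sketch contrasting a plain first-come-first-served batch queue with one that honors priorities. The job names and priority values are invented for illustration and are not taken from any real scheduler:

```python
import heapq

# FIFO vs. priority-based batch processing.
# Lower priority number = more urgent (invented convention for this sketch).

def run_fifo(jobs):
    """Process jobs strictly in arrival order."""
    return [name for name, _priority in jobs]

def run_prioritized(jobs):
    """Process jobs by priority, urgent work first; arrival order breaks ties."""
    heap = [(priority, arrival, name)
            for arrival, (name, priority) in enumerate(jobs)]
    heapq.heapify(heap)
    order = []
    while heap:
        _priority, _arrival, name = heapq.heappop(heap)
        order.append(name)
    return order

batch = [
    ("attendance-check", 5),
    ("chairman-briefing", 1),
    ("log-archive", 9),
]
print(run_fifo(batch))         # → ['attendance-check', 'chairman-briefing', 'log-archive']
print(run_prioritized(batch))  # → ['chairman-briefing', 'attendance-check', 'log-archive']
```

With prioritization, the urgent briefing is no longer stuck behind whatever happened to arrive first.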

A History Lesson
Let's step back in time briefly. In the 1970s we lived in a mainframe-centric world in which all data processing jobs were manually controlled. In the 1980s many companies moved to open systems, great for their computing power and flexibility, but with them came the need for job scheduling software to manage the way batch processing was conducted. Open systems also posed some tough job scheduling challenges, and most organizations adopted a combination of manual, home-made and industrial controls, the latter being software packages driven by the operating system.

Then came the 1990s, with distributed IT systems, integration of critical applications, and the need to process huge volumes of data faster and more reliably. With this came a requirement for job scheduling processes that were sufficiently powerful and flexible to manage batch processing across point-to-point architectures and extremely dispersed operations, in an ever-changing business environment. Job scheduling needed to


become real-time, offer high availability and be able to cope with high volumes, but unfortunately many of the available software packages fell short of this requirement. It was in this environment that ORSYP evolved as a leader in innovative job scheduling technologies, and today the company remains a world leader in this market, with customers worldwide.

Today, IT operations departments face the task of reducing the time and effort wasted on manual batch processing techniques. Yet to do so they must overcome the deficiencies of many of the software suites designed for the purpose.

The best job scheduling software tools today are able to bring a great deal of automation to the task. But businesses also need to have the right combination of processes and people to make job scheduling effective and truly ensure that data processing improvement requirements are met.

The first step is often to define new positions and tasks within the IT operations department so that current production constraints can best be overcome. For instance, people dedicated to supervising service level agreement fulfillment can focus on all reporting aspects of day-to-day activity and interface with internal and external clients. Once those clients' specific needs are understood, it can be ascertained whether they are satisfied with the current service delivery. From there, new performance targets, and an operational team structure for delivering on them, can be drawn up.

Dynamic Job Scheduling

While job scheduling has historically been carried out either by piecemeal tools or in the context of broader IT systems management software suites, the importance of batch processing management today means that specific and highly specialist job schedulers are increasingly being favored.

Some job schedulers promise to automate day-to-day batch processing activity, but in truth not all of them are able to do so. Automation involves the intricate and intelligent management of millions of tasks. Its effect, though, is to free up IT operations personnel to focus on making performance improvements to their systems.

Given the heterogeneous nature of many of the IT environments into which job scheduling software is introduced, it is also increasingly critical that scheduling processes are set up with no single point of failure. This comes down to the architecture of the software and solutions: in traditional


scheduling environments in which a master initiates, prioritizes and controls the jobs, the failure of the master means the failure of the entire procedure. The Dollar Universe Automation Power Grid architecture overcomes this fundamental weakness and so offers complete fault tolerance.
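The architectural point can be pictured with a toy sketch. This is not the Dollar Universe implementation, only an illustration of why removing the single master removes the single point of failure; all names here are invented:

```python
# Toy contrast: a single-master scheduler vs. a set of cooperating peers.
# Invented names; illustrates the single-point-of-failure argument only.

def master_run(jobs, master_up):
    """With a lone master: if the master is down, no job runs at all."""
    return list(jobs) if master_up else []

def peer_run(jobs, nodes_up):
    """With cooperating peers: any surviving node can drive the batch."""
    return list(jobs) if any(nodes_up.values()) else []

jobs = ["payroll", "stock-update", "reporting"]
print(master_run(jobs, master_up=False))                  # → [] (total failure)
print(peer_run(jobs, {"node-a": False, "node-b": True}))  # all jobs still run
```

In the master model one outage halts everything; in the peer model the batch completes as long as any node survives.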

ORSYP's Dollar Universe solution has the ability to schedule, sequence and control all batch processing, and to do so automatically. A batch job is a program that is requested to run without further user interaction, but all too often this does not turn out to be the case. Batch jobs need to be closely managed because of their data volume, complexity, frequency, dependencies and performance requirements. Few companies have the confidence to simply push the button and then wait for the end result.
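As an illustration of the dependency management mentioned above, jobs can be ordered so that each runs only after everything it depends on. The job names and dependencies below are invented, and Python's standard-library `graphlib` stands in for a real scheduler:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Invented example: stock can only be updated after sales are loaded,
# and the nightly report needs both. A scheduler must honor such chains.
dependencies = {
    "load-sales": set(),
    "update-stock": {"load-sales"},
    "nightly-report": {"load-sales", "update-stock"},
}

run_order = list(TopologicalSorter(dependencies).static_order())
print(run_order)  # → ['load-sales', 'update-stock', 'nightly-report']
```

A real job scheduler layers calendars, priorities and failure handling on top of this basic ordering, but the dependency graph is the core of "schedule, sequence and control".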

Dollar Universe also allows further changes to the physical network to be made without disruption to enterprise batch processes, an increasing need as organizations change their operations more frequently. Moreover, it works across numerous platforms and so can be easily integrated into all of the common IT environments in use today. It is also the only job scheduler to offer dynamic scheduling and sequencing, so that job processing remains reliable even when unplanned events occur, such as network failures, late or incorrect processing, or the incorporation of ad hoc job requests into the batch process.
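One simplified way to picture dynamic behaviour in the face of unplanned events is a retry loop: a transient failure delays a job rather than aborting the whole batch. This is only a sketch under invented assumptions, not how Dollar Universe works internally:

```python
def run_with_retries(jobs, transient_failures, max_retries=2):
    """Run jobs in order, retrying each up to max_retries times on failure.

    `transient_failures` maps a job name to how many times it fails before
    succeeding -- a stand-in for network failures or late processing.
    """
    completed = []
    for job in jobs:
        failures_left = transient_failures.get(job, 0)
        for _attempt in range(1 + max_retries):
            if failures_left == 0:
                completed.append(job)
                break
            failures_left -= 1          # this attempt failed; try again
        else:
            raise RuntimeError(f"job {job!r} exhausted its retries")
    return completed

print(run_with_retries(["extract", "transform", "load"], {"transform": 1}))
# → ['extract', 'transform', 'load']  (the batch survives the transient failure)
```

The point is the contrast with a naive batch run, where the first failure would stall everything behind it until an operator intervened.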

These, then, are the key requirements of job scheduling software, and the capabilities that reduce wasted costs: automation of all batch operations; the ability to monitor the entire production environment through a single window; quick configuration; and, once the system is up and running, continuous processing of the operations flow. All of these features save time, and that saves money. It is that simple.

Maximizing Availability
The related cost-reduction issue is that efficient job scheduling allows organizations to reduce the risks associated with data processing failures, and the cost burdens these situations can impose. By maximizing the availability of IT resources, and most critically of key business applications, companies can quantify how much money they save through uninterrupted IT performance over a given period.

Take stock control, for example. If stock data fails to be updated on time because certain required jobs are still awaited by the stock management application, the cost implications can be considerable. If the


reason happens to be a server failure, the IT operations department is usually held responsible for the cost of the unavailability.

Good job schedulers offer fault tolerance, so that if an individual server goes down during a batch processing operation, the remaining servers continue to run. The same goes for continued operations in the event of a network failure, meaning that overall production is not affected.

Equipped with an effective job scheduling solution, organizations can start to reduce the time taken for batch data processing. In fact, the improvements can be virtually immediate: ORSYP data shows that the average mid-sized European company can save up to €300,000 annually by moving to Dollar Universe. The reasons are clear: it can eliminate the risk of faulty implementations, remove the risk of human error, secure the production environment, minimize the impact on data processing tasks and, most importantly, allow staff to be redeployed to other assignments.

Processing large amounts of data is an ever-more critical requirement, yet IT back office teams have few assurances that everything will go smoothly. One thing, however, is certain: the pressure to reduce wastage is increasing, and IT operations teams need job scheduling tools they can rely on.

- ends -
