
Throttle requests before they are sent to back-end systems

Caching might not be feasible since the requested data is dynamic. Even when you apply caching (A1), there might still be too many request messages for the Product Database to handle. For example, this could happen when many different products that have not been ordered before (and are therefore not yet in the result cache) are ordered at the same time. To avoid crashing the back-end system, you can limit the number of simultaneous requests sent to back-end systems by buffering them.

You can use Oracle Service Bus to control, or throttle, the message flow between Proxy and Business Services using
a Throttling Queue. This is basically an in-memory queue that holds requests during a peak load and effectively limits
the number of concurrent requests sent to the back-end system.

Figure 5: Configuring a Throttling Queue for a Business Service

Throttling is an operational task and can therefore only be added through the OSB Console. It's not available in the
development environment when working with the Oracle Enterprise Pack for Eclipse (OEPE). Figure 6 illustrates the
configuration of Throttling using the console.

Figure 6: Configuring a Throttling Queue for a Business Service

In this case the throttling queue is configured to hold a maximum of 100 messages. The concurrency is limited to 5, meaning that at most 5 messages may be sent to the Product Database at the same time. Additional messages are buffered in the queue and submitted once the number of concurrent requests drops below the maximum. The message expiration attribute configures the maximum time a message may be held in the queue; when a buffered message expires, a fault is returned to the service consumer. A message expiration of 0 means that messages in the throttling queue never expire.
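The queueing behavior described above can be modeled in a few lines. The sketch below is not OSB code; it is a hypothetical, single-threaded Java model of the same semantics: up to a concurrency limit of requests go straight to the back end, up to a queue capacity more are buffered, and anything beyond that is rejected with a fault. Message expiration is omitted for brevity.

```java
import java.util.ArrayDeque;
import java.util.Deque;

/** Single-threaded model of a throttling queue (illustrative only, not OSB internals). */
class ThrottlingQueueModel {
    public enum Outcome { SENT, QUEUED, REJECTED }

    private final int maxConcurrency;         // e.g. 5 in Figure 6
    private final int capacity;               // e.g. 100 in Figure 6
    private int inFlight = 0;                 // requests currently at the back end
    private final Deque<String> queue = new ArrayDeque<>();

    public ThrottlingQueueModel(int maxConcurrency, int capacity) {
        this.maxConcurrency = maxConcurrency;
        this.capacity = capacity;
    }

    /** A new request arrives from the proxy service. */
    public Outcome submit(String requestId) {
        if (inFlight < maxConcurrency) {
            inFlight++;                       // sent straight to the back end
            return Outcome.SENT;
        }
        if (queue.size() < capacity) {
            queue.addLast(requestId);         // buffered during the peak load
            return Outcome.QUEUED;
        }
        return Outcome.REJECTED;              // queue full: fault to the consumer
    }

    /** The back end finished one request; promote a buffered one, if any. */
    public String complete() {
        inFlight--;
        if (!queue.isEmpty()) {
            inFlight++;                       // next buffered request is sent now
            return queue.pollFirst();
        }
        return null;
    }

    public int inFlight() { return inFlight; }
    public int queued()   { return queue.size(); }
}
```

With the Figure 6 values (concurrency 5, capacity 100), the sixth simultaneous request would be queued and the 106th rejected.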
Fault action type:

• Throttling (fault prevention)

Application and considerations:

• Works best if an asynchronous request-response or a one-way message exchange pattern is used. In a synchronous request-response pattern, a timeout occurs on the consumer side if request throttling takes longer than the synchronous timeout value.
• The throttling queue is not persistent. This means that requests in the throttling queue are lost if the OSB server goes down.

Impact:

• In the event that messages are throttled, it might take longer for a synchronous request to return its response. This could violate some of the service-level agreements (SLAs) defined for the service or, in the worst case, cause a timeout.

Alternative implementations:

• Use Oracle WebLogic JMS server to implement custom throttling based on persistent queues. In this scenario you need to control the number of concurrent threads consuming messages from the queue. For OSB this can be done by specifying a Dispatch Policy on a Proxy Service. The downside of this approach is that it requires custom coding instead of pure configuration.
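To illustrate the consumer-side half of this alternative: the sketch below (plain Java with hypothetical names; the JMS plumbing is omitted) uses a fixed-size thread pool to cap how many dequeued messages are processed concurrently, which is the effect a WebLogic work manager referenced by a Dispatch Policy would provide.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

/** Caps concurrent message processing with a fixed pool (illustrative sketch). */
class BoundedConsumerSketch {

    /**
     * Processes {@code tasks} messages with at most {@code poolSize} running
     * at once, and reports the highest concurrency actually observed.
     */
    public static int maxObservedConcurrency(int poolSize, int tasks)
            throws InterruptedException {
        ExecutorService consumers = Executors.newFixedThreadPool(poolSize);
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger maxSeen  = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(tasks);

        for (int i = 0; i < tasks; i++) {
            consumers.submit(() -> {
                int now = inFlight.incrementAndGet();     // entering "processing"
                maxSeen.accumulateAndGet(now, Math::max); // record peak concurrency
                try {
                    Thread.sleep(20);                     // stand-in for back-end work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
                inFlight.decrementAndGet();
                done.countDown();
            });
        }
        done.await();
        consumers.shutdown();
        return maxSeen.get();
    }
}
```

The pool size plays the role of the max-threads constraint a work manager would enforce; in the real alternative, each task body would instead receive and acknowledge a JMS message from the persistent queue.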
