Package org.jboss.netty.handler.traffic


Implementation of a Traffic Shaping Handler and Dynamic Statistics.


The main goal of this package is to shape traffic (bandwidth limitation), but also to gather statistics on how many bytes are read or written. Each function (traffic shaping or statistics) can be activated or deactivated independently.

Two classes implement this behavior:

  • TrafficCounter: this class implements the counters needed by the handlers. It can be queried for extra information such as the number of bytes read or written since the last check, or the read and write bandwidth measured since the last check.


  • AbstractTrafficShapingHandler: this abstract class implements the core of the traffic shaping. It can be extended to fit your needs. Two default implementations are provided: ChannelTrafficShapingHandler and GlobalTrafficShapingHandler, for per-Channel traffic shaping and Global traffic shaping respectively.


  • One of these handlers can be inserted anywhere in the pipeline, but it must be placed before any MemoryAwareThreadPoolExecutor in your pipeline.
    It is strongly recommended to have such an executor (either a plain MemoryAwareThreadPoolExecutor or an OrderedMemoryAwareThreadPoolExecutor) in your pipeline when you use this feature for real traffic shaping, since it relieves the NioWorkers so that they can do other jobs when necessary.
    Without it, the following situation can occur: if more clients are connected and transferring data (read or write) than there are NioWorkers, the overall performance can fall below your specification, or the pipeline can even block for a while, which may surface as "timeout" errors. For instance, say you have 2 NioWorkers and 10 clients sending data to your server: with a bandwidth limitation of 100 KB/s per channel (client), you could end up with an effective limit of about 60 KB/s per channel, because the NioWorkers are being stopped by this handler.
    When used as a read traffic shaper, the handler sets the channel as not readable, so as to relieve the NioWorkers.

    An ObjectSizeEstimator can be passed at construction to specify the size of each object that is read or written, according to its type. If none is specified, the DefaultObjectSizeEstimator implementation is used.
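
For illustration only, a minimal sketch of passing a custom estimator at construction time, assuming the ObjectSizeEstimator interface from org.jboss.netty.util and a ChannelTrafficShapingHandler constructor taking an estimator, a timer, the write and read limits in bytes per second, and the check interval in milliseconds (verify the exact constructors against your Netty version; the limit values are made up for the example):

    import org.jboss.netty.buffer.ChannelBuffer;
    import org.jboss.netty.util.HashedWheelTimer;
    import org.jboss.netty.util.ObjectSizeEstimator;
    import org.jboss.netty.util.Timer;

    // Hypothetical estimator: count only the readable bytes of ChannelBuffers,
    // and assume a fixed 512-byte cost for any other message type.
    ObjectSizeEstimator estimator = new ObjectSizeEstimator() {
        public int estimateSize(Object o) {
            if (o instanceof ChannelBuffer) {
                return ((ChannelBuffer) o).readableBytes();
            }
            return 512; // assumed overhead for non-buffer messages
        }
    };

    Timer timer = new HashedWheelTimer();
    // 100 KB/s write limit, 100 KB/s read limit, statistics checked every second.
    ChannelTrafficShapingHandler myHandler =
            new ChannelTrafficShapingHandler(estimator, timer, 102400, 102400, 1000);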

Standard use could be as follows:

  • To activate or deactivate the traffic shaping, set the corresponding [Global or per-Channel] [Write or Read] limitation in bytes/s.

  • A value of 0 stands for no limitation, so traffic shaping is deactivated (for the limitation you specified).
    You can change those values at any time with the configure method of AbstractTrafficShapingHandler (see the sketch after this list).

  • To activate or deactivate the statistics, adjust the check interval: a low value (not less than 200 ms is suggested, for efficiency reasons) enables frequent accounting, a very high value (say, 24 hours in milliseconds) effectively disables it, and 0 means no computation will be done at all.

  • If you want to do anything with these statistics, just override the doAccounting method.
    This check interval can be changed either with the configure method of AbstractTrafficShapingHandler or directly with the configure method of TrafficCounter.
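
For illustration, a minimal sketch of both points, assuming the configure(writeLimit, readLimit) method and the protected doAccounting(TrafficCounter) callback described above (check the exact signatures against your Netty version; the subclass name and limit values are made up for the example):

    import org.jboss.netty.util.HashedWheelTimer;
    import org.jboss.netty.util.Timer;

    // Hypothetical subclass that reports statistics at every check interval.
    public class MonitoredGlobalTrafficShapingHandler extends GlobalTrafficShapingHandler {

        public MonitoredGlobalTrafficShapingHandler(Timer timer, long writeLimit,
                long readLimit, long checkInterval) {
            super(timer, writeLimit, readLimit, checkInterval);
        }

        @Override
        protected void doAccounting(TrafficCounter counter) {
            // Called once per check interval; the counter carries the latest
            // read/write byte counts and bandwidth figures.
            System.out.println("Traffic at last check: " + counter);
        }
    }

    // Usage: 100 KB/s in both directions, statistics computed every second (1000 ms).
    Timer timer = new HashedWheelTimer();
    MonitoredGlobalTrafficShapingHandler shaper =
            new MonitoredGlobalTrafficShapingHandler(timer, 102400, 102400, 1000);
    // Later, raise the write limit and remove the read limit (0 = no limitation).
    shaper.configure(204800, 0);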



So in your application you will create your own TrafficShapingHandler and set the values to fit your needs.

timer could be created using HashedWheelTimer, and XXXXX could be either Global or Channel:

    XXXXXTrafficShapingHandler myHandler = new XXXXXTrafficShapingHandler(timer);
    pipeline.addLast("XXXXX_TRAFFIC_SHAPING", myHandler);
    ...
    pipeline.addLast("MemoryExecutor", new ExecutionHandler(memoryAwareThreadPoolExecutor));

Note that a new ChannelTrafficShapingHandler must be created for each new channel, whereas a single GlobalTrafficShapingHandler must be created and shared by all channels.

Note also that you can create several GlobalTrafficShapingHandler instances if you want to separate classes of channels (for instance, from a business point of view or per bind address).
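
Putting the two notes together, here is an illustrative sketch (class name, pool sizes and limits are assumptions for the example, not part of the API) of a ChannelPipelineFactory that shares one GlobalTrafficShapingHandler across all channels while creating a fresh ChannelTrafficShapingHandler for every new channel, with the ExecutionHandler placed after the shaping handlers as recommended above:

    import org.jboss.netty.channel.ChannelPipeline;
    import org.jboss.netty.channel.ChannelPipelineFactory;
    import org.jboss.netty.channel.Channels;
    import org.jboss.netty.handler.execution.ExecutionHandler;
    import org.jboss.netty.handler.execution.OrderedMemoryAwareThreadPoolExecutor;
    import org.jboss.netty.util.HashedWheelTimer;
    import org.jboss.netty.util.Timer;

    public class ShapedPipelineFactory implements ChannelPipelineFactory {

        private final Timer timer = new HashedWheelTimer();

        // One instance shared by every channel: the 1 MB/s limits apply to the
        // server as a whole. A second GlobalTrafficShapingHandler could be
        // created for a separate class of channels (e.g. another bind address).
        private final GlobalTrafficShapingHandler globalShaper =
                new GlobalTrafficShapingHandler(timer, 1048576, 1048576, 1000);

        // Shared executor, placed after the shaping handlers as recommended above.
        private final ExecutionHandler executionHandler = new ExecutionHandler(
                new OrderedMemoryAwareThreadPoolExecutor(16, 1048576, 1048576));

        public ChannelPipeline getPipeline() throws Exception {
            ChannelPipeline pipeline = Channels.pipeline();
            pipeline.addLast("GLOBAL_TRAFFIC_SHAPING", globalShaper);
            // A new per-channel handler for every channel: 100 KB/s per client.
            pipeline.addLast("CHANNEL_TRAFFIC_SHAPING",
                    new ChannelTrafficShapingHandler(timer, 102400, 102400, 1000));
            pipeline.addLast("MemoryExecutor", executionHandler);
            // ... business handlers go after the executor ...
            return pipeline;
        }
    }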